diff --git a/sdk/face/azure-ai-vision-face/CHANGELOG.md b/sdk/face/azure-ai-vision-face/CHANGELOG.md
index d4229322e309..b8dfe785e159 100644
--- a/sdk/face/azure-ai-vision-face/CHANGELOG.md
+++ b/sdk/face/azure-ai-vision-face/CHANGELOG.md
@@ -1,15 +1,27 @@
 # Release History
 
-## 1.0.0b2 (Unreleased)
+## 1.0.0b2 (2024-10-23)
 
 ### Features Added
 
+- Added support for the Large Face List and Large Person Group:
+  - Added operation groups `LargeFaceListOperations` and `LargePersonGroupOperations` to `FaceAdministrationClient`.
+  - Added operations `find_similar_from_large_face_list`, `identify_from_large_person_group` and `verify_from_large_person_group` to `FaceClient`.
+  - Added models for supporting Large Face List and Large Person Group.
+- Added support for the latest Detect Liveness Session API:
+  - Added operations `get_session_image` and `detect_from_session_image` to `FaceSessionClient`.
+  - Added properties `enable_session_image` and `liveness_single_modal_model` to model `CreateLivenessSessionContent`.
+  - Added model `CreateLivenessWithVerifySessionContent`.
+
 ### Breaking Changes
 
-### Bugs Fixed
+- Changed the parameter of `create_liveness_with_verify_session` from model `CreateLivenessSessionContent` to `CreateLivenessWithVerifySessionContent`.
+- Changed the enum values of `FaceDetectionModel`, `FaceRecognitionModel`, `LivenessModel` and `Versions`.
 
 ### Other Changes
 
+- Changed the default service API version to `v1.2-preview.1`.
+
 ## 1.0.0b1 (2024-05-28)
 
 This is the first preview of the `azure-ai-vision-face` client library that follows the [Azure Python SDK Design Guidelines](https://azure.github.io/azure-sdk/python_design.html).
diff --git a/sdk/face/azure-ai-vision-face/README.md b/sdk/face/azure-ai-vision-face/README.md
index 0b7221337a41..40a476f4f6b7 100644
--- a/sdk/face/azure-ai-vision-face/README.md
+++ b/sdk/face/azure-ai-vision-face/README.md
@@ -6,6 +6,7 @@ The Azure AI Face service provides AI algorithms that detect, recognize, and ana
 - Liveness detection
 - Face recognition
   - Face verification ("one-to-one" matching)
+  - Face identification ("one-to-many" matching)
 - Find similar faces
 - Group faces
 
@@ -130,6 +131,18 @@ face_client = FaceClient(endpoint, credential)
 - Finding similar faces from a smaller set of faces that look similar to the target face.
 - Grouping faces into several smaller groups based on similarity.
 
+### FaceAdministrationClient
+
+`FaceAdministrationClient` is provided to interact with the following data structures that hold data on faces and
+persons for Face recognition:
+
+ - `large_face_list`: A list of faces that is used by [find similar faces][find_similar].
+   - It can hold up to 1,000,000 faces.
+   - Training (`begin_train()`) is required before calling `find_similar_from_large_face_list()`.
+ - `large_person_group`: A container that holds person objects and is used by face recognition.
+   - It can hold up to 1,000,000 person objects, with each person capable of holding up to 248 faces. The total number of person objects across all `large_person_group`s should not exceed 1,000,000,000.
+   - For [face verification][face_verification], call `verify_from_large_person_group()`.
+   - For [face identification][face_identification], training (`begin_train()`) is required before calling `identify_from_large_person_group()`.
 
 ### FaceSessionClient
 
@@ -139,12 +152,23 @@ face_client = FaceClient(endpoint, credential)
   - Query the liveness and verification result.
   - Query the audit result.
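+
+As a minimal sketch, creating and later deleting a liveness detection session looks like the following. The
+endpoint, key and device correlation id are placeholder values you must supply yourself.
+
+```python
+import uuid
+
+from azure.core.credentials import AzureKeyCredential
+from azure.ai.vision.face import FaceSessionClient
+from azure.ai.vision.face.models import CreateLivenessSessionContent, LivenessOperationMode
+
+endpoint = ""
+key = ""
+
+with FaceSessionClient(endpoint=endpoint, credential=AzureKeyCredential(key)) as face_session_client:
+    # Create a session; the auth token in the result is handed off to the client
+    # device that performs the liveness check.
+    created_session = face_session_client.create_liveness_session(
+        CreateLivenessSessionContent(
+            liveness_operation_mode=LivenessOperationMode.PASSIVE,
+            device_correlation_id=str(uuid.uuid4()),
+        )
+    )
+    print(f"Session created, session id: {created_session.session_id}")
+
+    # Delete the session once you are done with it.
+    face_session_client.delete_liveness_session(created_session.session_id)
+```
+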
+### Long-running operations
+
+Long-running operations are operations which consist of an initial request sent to the service to start an operation,
+followed by polling the service at intervals to determine whether the operation has completed or failed, and if it has
+succeeded, to get the result.
+
+Methods that train a group (LargeFaceList or LargePersonGroup) are modeled as long-running operations.
+The client exposes a `begin_` method that returns an `LROPoller` or `AsyncLROPoller`. Callers should wait
+for the operation to complete by calling `result()` on the poller object returned from the `begin_` method.
+Sample code snippets are provided to illustrate using long-running operations [below](#examples "Examples").
 
 ## Examples
 
 The following section provides several code snippets covering some of the most common Face tasks, including:
 
 * [Detecting faces in an image](#face-detection "Face Detection")
+* [Identifying a specific face from a LargePersonGroup](#face-recognition-from-largepersongroup "Face Recognition from LargePersonGroup")
 * [Determining if a face in a video is real (live) or fake (spoof)](#liveness-detection "Liveness Detection")
 
 ### Face Detection
 
@@ -173,8 +197,8 @@ with FaceClient(endpoint=endpoint, credential=AzureKeyCredential(key)) as face_c
     result = face_client.detect(
         file_content,
-        detection_model=FaceDetectionModel.DETECTION_03,  # The latest detection model.
-        recognition_model=FaceRecognitionModel.RECOGNITION_04,  # The latest recognition model.
+        detection_model=FaceDetectionModel.DETECTION03,  # The latest detection model.
+        recognition_model=FaceRecognitionModel.RECOGNITION04,  # The latest recognition model.
         return_face_id=True,
         return_face_attributes=[
             FaceAttributeTypeDetection03.HEAD_POSE,
@@ -192,6 +216,103 @@ with FaceClient(endpoint=endpoint, credential=AzureKeyCredential(key)) as face_c
         print(f"Face: {face.as_dict()}")
 ```
 
+### Face Recognition from LargePersonGroup
+
+Identify a face against a defined LargePersonGroup.
+
+First, we have to use `FaceAdministrationClient` to create a `LargePersonGroup`, add a few `Person` objects to it,
+and then register faces with these `Person` objects.
+
+```python
+from azure.core.credentials import AzureKeyCredential
+from azure.ai.vision.face import FaceAdministrationClient, FaceClient
+from azure.ai.vision.face.models import FaceDetectionModel, FaceRecognitionModel
+
+
+def read_file_content(file_path: str):
+    with open(file_path, "rb") as fd:
+        file_content = fd.read()
+
+    return file_content
+
+
+endpoint = ""
+key = ""
+
+large_person_group_id = "lpg_family"
+
+with FaceAdministrationClient(endpoint=endpoint, credential=AzureKeyCredential(key)) as face_admin_client:
+    print(f"Create a large person group with id: {large_person_group_id}")
+    face_admin_client.large_person_group.create(
+        large_person_group_id, name="My Family", recognition_model=FaceRecognitionModel.RECOGNITION04
+    )
+
+    print("Create a Person Bill and add a face to him.")
+    bill_person_id = face_admin_client.large_person_group.create_person(
+        large_person_group_id, name="Bill", user_data="Dad"
+    ).person_id
+    bill_image_file_path = "./samples/images/Family1-Dad1.jpg"
+    face_admin_client.large_person_group.add_face(
+        large_person_group_id,
+        bill_person_id,
+        read_file_content(bill_image_file_path),
+        detection_model=FaceDetectionModel.DETECTION03,
+        user_data="Dad-0001",
+    )
+
+    print("Create a Person Clare and add a face to her.")
+    clare_person_id = face_admin_client.large_person_group.create_person(
+        large_person_group_id, name="Clare", user_data="Mom"
+    ).person_id
+    clare_image_file_path = "./samples/images/Family1-Mom1.jpg"
+    face_admin_client.large_person_group.add_face(
+        large_person_group_id,
+        clare_person_id,
+        read_file_content(clare_image_file_path),
+        detection_model=FaceDetectionModel.DETECTION03,
+        user_data="Mom-0001",
+    )
+```
+
+Before doing the identification, we need to train the LargePersonGroup.
+```python
+    print(f"Start to train the large person group: {large_person_group_id}.")
+    poller = face_admin_client.large_person_group.begin_train(large_person_group_id)
+
+    # Wait for the training operation to complete.
+    # If the training does not succeed, calling result() raises an exception from the poller.
+    training_result = poller.result()
+```
+
+When the training operation completes successfully, we can identify the faces in this LargePersonGroup through
+`FaceClient`.
+```python
+with FaceClient(endpoint=endpoint, credential=AzureKeyCredential(key)) as face_client:
+    # Detect the face from the target image.
+    target_image_file_path = "./samples/images/identification1.jpg"
+    detect_result = face_client.detect(
+        read_file_content(target_image_file_path),
+        detection_model=FaceDetectionModel.DETECTION03,
+        recognition_model=FaceRecognitionModel.RECOGNITION04,
+        return_face_id=True,
+    )
+    target_face_ids = list(f.face_id for f in detect_result)
+
+    # Identify the faces in the large person group.
+    result = face_client.identify_from_large_person_group(
+        face_ids=target_face_ids, large_person_group_id=large_person_group_id
+    )
+    for idx, r in enumerate(result):
+        print(f"----- Identification result: #{idx+1} -----")
+        print(f"{r.as_dict()}")
+```
+
+Finally, use `FaceAdministrationClient` to remove the large person group if you don't need it anymore.
+```python +with FaceAdministrationClient(endpoint=endpoint, credential=AzureKeyCredential(key)) as face_admin_client: + print(f"Delete the large person group: {large_person_group_id}") + face_admin_client.large_person_group.delete(large_person_group_id) +``` + ### Liveness detection Face Liveness detection can be used to determine if a face in an input video stream is real (live) or fake (spoof). The goal of liveness detection is to ensure that the system is interacting with a physically present live person at diff --git a/sdk/face/azure-ai-vision-face/assets.json b/sdk/face/azure-ai-vision-face/assets.json index a150f1de8649..30044025e7bc 100644 --- a/sdk/face/azure-ai-vision-face/assets.json +++ b/sdk/face/azure-ai-vision-face/assets.json @@ -2,5 +2,5 @@ "AssetsRepo": "Azure/azure-sdk-assets", "AssetsRepoPrefixPath": "python", "TagPrefix": "python/face/azure-ai-vision-face", - "Tag": "python/face/azure-ai-vision-face_f787b7aa30" + "Tag": "python/face/azure-ai-vision-face_0b4013000f" } diff --git a/sdk/face/azure-ai-vision-face/azure/ai/vision/face/__init__.py b/sdk/face/azure-ai-vision-face/azure/ai/vision/face/__init__.py index 3b5d75d6c9f2..fb2a4f6ec76e 100644 --- a/sdk/face/azure-ai-vision-face/azure/ai/vision/face/__init__.py +++ b/sdk/face/azure-ai-vision-face/azure/ai/vision/face/__init__.py @@ -6,6 +6,7 @@ # Changes may cause incorrect behavior and will be lost if the code is regenerated. # -------------------------------------------------------------------------- +from ._client import FaceAdministrationClient from ._patch import FaceClient from ._patch import FaceSessionClient from ._version import VERSION @@ -16,6 +17,7 @@ from ._patch import patch_sdk as _patch_sdk __all__ = [ + "FaceAdministrationClient", "FaceClient", "FaceSessionClient", ] diff --git a/sdk/face/azure-ai-vision-face/azure/ai/vision/face/_client.py b/sdk/face/azure-ai-vision-face/azure/ai/vision/face/_client.py index 321b3f0a8b79..b2f532920bb5 100644 --- a/sdk/face/azure-ai-vision-face/azure/ai/vision/face/_client.py +++ b/sdk/face/azure-ai-vision-face/azure/ai/vision/face/_client.py @@ -8,22 +8,120 @@ from copy import deepcopy from typing import Any, TYPE_CHECKING, Union +from typing_extensions import Self from azure.core import PipelineClient from azure.core.credentials import AzureKeyCredential from azure.core.pipeline import policies from azure.core.rest import HttpRequest, HttpResponse -from ._configuration import FaceClientConfiguration, FaceSessionClientConfiguration -from ._operations import FaceClientOperationsMixin, FaceSessionClientOperationsMixin +from ._configuration import ( + FaceAdministrationClientConfiguration, + FaceClientConfiguration, + FaceSessionClientConfiguration, +) from ._serialization import Deserializer, Serializer +from .operations import ( + FaceClientOperationsMixin, + FaceSessionClientOperationsMixin, + LargeFaceListOperations, + LargePersonGroupOperations, +) if TYPE_CHECKING: - # pylint: disable=unused-import,ungrouped-imports from azure.core.credentials import TokenCredential -class FaceClient(FaceClientOperationsMixin): # pylint: disable=client-accepts-api-version-keyword +class FaceAdministrationClient: + """FaceAdministrationClient. 
+ + :ivar large_face_list: LargeFaceListOperations operations + :vartype large_face_list: azure.ai.vision.face.operations.LargeFaceListOperations + :ivar large_person_group: LargePersonGroupOperations operations + :vartype large_person_group: azure.ai.vision.face.operations.LargePersonGroupOperations + :param endpoint: Supported Cognitive Services endpoints (protocol and hostname, for example: + https://{resource-name}.cognitiveservices.azure.com). Required. + :type endpoint: str + :param credential: Credential used to authenticate requests to the service. Is either a + AzureKeyCredential type or a TokenCredential type. Required. + :type credential: ~azure.core.credentials.AzureKeyCredential or + ~azure.core.credentials.TokenCredential + :keyword api_version: API Version. Known values are "v1.2-preview.1" and None. Default value is + "v1.2-preview.1". Note that overriding this default value may result in unsupported behavior. + :paramtype api_version: str or ~azure.ai.vision.face.models.Versions + :keyword int polling_interval: Default waiting time between two polls for LRO operations if no + Retry-After header is present. + """ + + def __init__(self, endpoint: str, credential: Union[AzureKeyCredential, "TokenCredential"], **kwargs: Any) -> None: + _endpoint = "{endpoint}/face/{apiVersion}" + self._config = FaceAdministrationClientConfiguration(endpoint=endpoint, credential=credential, **kwargs) + _policies = kwargs.pop("policies", None) + if _policies is None: + _policies = [ + policies.RequestIdPolicy(**kwargs), + self._config.headers_policy, + self._config.user_agent_policy, + self._config.proxy_policy, + policies.ContentDecodePolicy(**kwargs), + self._config.redirect_policy, + self._config.retry_policy, + self._config.authentication_policy, + self._config.custom_hook_policy, + self._config.logging_policy, + policies.DistributedTracingPolicy(**kwargs), + policies.SensitiveHeaderCleanupPolicy(**kwargs) if self._config.redirect_policy else None, + self._config.http_logging_policy, + ] + self._client: PipelineClient = PipelineClient(base_url=_endpoint, policies=_policies, **kwargs) + + self._serialize = Serializer() + self._deserialize = Deserializer() + self._serialize.client_side_validation = False + self.large_face_list = LargeFaceListOperations(self._client, self._config, self._serialize, self._deserialize) + self.large_person_group = LargePersonGroupOperations( + self._client, self._config, self._serialize, self._deserialize + ) + + def send_request(self, request: HttpRequest, *, stream: bool = False, **kwargs: Any) -> HttpResponse: + """Runs the network request through the client's chained policies. + + >>> from azure.core.rest import HttpRequest + >>> request = HttpRequest("GET", "https://www.example.org/") + + >>> response = client.send_request(request) + + + For more information on this code flow, see https://aka.ms/azsdk/dpcodegen/python/send_request + + :param request: The network request you want to make. Required. + :type request: ~azure.core.rest.HttpRequest + :keyword bool stream: Whether the response payload will be streamed. Defaults to False. + :return: The response of your network call. Does not do error handling on your response. 
+ :rtype: ~azure.core.rest.HttpResponse + """ + + request_copy = deepcopy(request) + path_format_arguments = { + "endpoint": self._serialize.url("self._config.endpoint", self._config.endpoint, "str", skip_quote=True), + "apiVersion": self._serialize.url("self._config.api_version", self._config.api_version, "str"), + } + + request_copy.url = self._client.format_url(request_copy.url, **path_format_arguments) + return self._client.send_request(request_copy, stream=stream, **kwargs) # type: ignore + + def close(self) -> None: + self._client.close() + + def __enter__(self) -> Self: + self._client.__enter__() + return self + + def __exit__(self, *exc_details: Any) -> None: + self._client.__exit__(*exc_details) + + +class FaceClient(FaceClientOperationsMixin): """FaceClient. :param endpoint: Supported Cognitive Services endpoints (protocol and hostname, for example: @@ -33,8 +131,8 @@ class FaceClient(FaceClientOperationsMixin): # pylint: disable=client-accepts-a AzureKeyCredential type or a TokenCredential type. Required. :type credential: ~azure.core.credentials.AzureKeyCredential or ~azure.core.credentials.TokenCredential - :keyword api_version: API Version. Default value is "v1.1-preview.1". Note that overriding this - default value may result in unsupported behavior. + :keyword api_version: API Version. Known values are "v1.2-preview.1" and None. Default value is + "v1.2-preview.1". Note that overriding this default value may result in unsupported behavior. :paramtype api_version: str or ~azure.ai.vision.face.models.Versions """ @@ -94,7 +192,7 @@ def send_request(self, request: HttpRequest, *, stream: bool = False, **kwargs: def close(self) -> None: self._client.close() - def __enter__(self) -> "FaceClient": + def __enter__(self) -> Self: self._client.__enter__() return self @@ -102,7 +200,7 @@ def __exit__(self, *exc_details: Any) -> None: self._client.__exit__(*exc_details) -class FaceSessionClient(FaceSessionClientOperationsMixin): # pylint: disable=client-accepts-api-version-keyword +class FaceSessionClient(FaceSessionClientOperationsMixin): """FaceSessionClient. :param endpoint: Supported Cognitive Services endpoints (protocol and hostname, for example: @@ -112,8 +210,8 @@ class FaceSessionClient(FaceSessionClientOperationsMixin): # pylint: disable=cl AzureKeyCredential type or a TokenCredential type. Required. :type credential: ~azure.core.credentials.AzureKeyCredential or ~azure.core.credentials.TokenCredential - :keyword api_version: API Version. Default value is "v1.1-preview.1". Note that overriding this - default value may result in unsupported behavior. + :keyword api_version: API Version. Known values are "v1.2-preview.1" and None. Default value is + "v1.2-preview.1". Note that overriding this default value may result in unsupported behavior. 
:paramtype api_version: str or ~azure.ai.vision.face.models.Versions """ @@ -173,7 +271,7 @@ def send_request(self, request: HttpRequest, *, stream: bool = False, **kwargs: def close(self) -> None: self._client.close() - def __enter__(self) -> "FaceSessionClient": + def __enter__(self) -> Self: self._client.__enter__() return self diff --git a/sdk/face/azure-ai-vision-face/azure/ai/vision/face/_configuration.py b/sdk/face/azure-ai-vision-face/azure/ai/vision/face/_configuration.py index 02c3cdb902d8..4fe969da162c 100644 --- a/sdk/face/azure-ai-vision-face/azure/ai/vision/face/_configuration.py +++ b/sdk/face/azure-ai-vision-face/azure/ai/vision/face/_configuration.py @@ -14,10 +14,64 @@ from ._version import VERSION if TYPE_CHECKING: - # pylint: disable=unused-import,ungrouped-imports from azure.core.credentials import TokenCredential +class FaceAdministrationClientConfiguration: # pylint: disable=too-many-instance-attributes + """Configuration for FaceAdministrationClient. + + Note that all parameters used to create this instance are saved as instance + attributes. + + :param endpoint: Supported Cognitive Services endpoints (protocol and hostname, for example: + https://{resource-name}.cognitiveservices.azure.com). Required. + :type endpoint: str + :param credential: Credential used to authenticate requests to the service. Is either a + AzureKeyCredential type or a TokenCredential type. Required. + :type credential: ~azure.core.credentials.AzureKeyCredential or + ~azure.core.credentials.TokenCredential + :keyword api_version: API Version. Known values are "v1.2-preview.1" and None. Default value is + "v1.2-preview.1". Note that overriding this default value may result in unsupported behavior. + :paramtype api_version: str or ~azure.ai.vision.face.models.Versions + """ + + def __init__(self, endpoint: str, credential: Union[AzureKeyCredential, "TokenCredential"], **kwargs: Any) -> None: + api_version: str = kwargs.pop("api_version", "v1.2-preview.1") + + if endpoint is None: + raise ValueError("Parameter 'endpoint' must not be None.") + if credential is None: + raise ValueError("Parameter 'credential' must not be None.") + + self.endpoint = endpoint + self.credential = credential + self.api_version = api_version + self.credential_scopes = kwargs.pop("credential_scopes", ["https://cognitiveservices.azure.com/.default"]) + kwargs.setdefault("sdk_moniker", "ai-vision-face/{}".format(VERSION)) + self.polling_interval = kwargs.get("polling_interval", 30) + self._configure(**kwargs) + + def _infer_policy(self, **kwargs): + if isinstance(self.credential, AzureKeyCredential): + return policies.AzureKeyCredentialPolicy(self.credential, "Ocp-Apim-Subscription-Key", **kwargs) + if hasattr(self.credential, "get_token"): + return policies.BearerTokenCredentialPolicy(self.credential, *self.credential_scopes, **kwargs) + raise TypeError(f"Unsupported credential: {self.credential}") + + def _configure(self, **kwargs: Any) -> None: + self.user_agent_policy = kwargs.get("user_agent_policy") or policies.UserAgentPolicy(**kwargs) + self.headers_policy = kwargs.get("headers_policy") or policies.HeadersPolicy(**kwargs) + self.proxy_policy = kwargs.get("proxy_policy") or policies.ProxyPolicy(**kwargs) + self.logging_policy = kwargs.get("logging_policy") or policies.NetworkTraceLoggingPolicy(**kwargs) + self.http_logging_policy = kwargs.get("http_logging_policy") or policies.HttpLoggingPolicy(**kwargs) + self.custom_hook_policy = kwargs.get("custom_hook_policy") or policies.CustomHookPolicy(**kwargs) + 
self.redirect_policy = kwargs.get("redirect_policy") or policies.RedirectPolicy(**kwargs) + self.retry_policy = kwargs.get("retry_policy") or policies.RetryPolicy(**kwargs) + self.authentication_policy = kwargs.get("authentication_policy") + if self.credential and not self.authentication_policy: + self.authentication_policy = self._infer_policy(**kwargs) + + class FaceClientConfiguration: # pylint: disable=too-many-instance-attributes """Configuration for FaceClient. @@ -31,13 +85,13 @@ class FaceClientConfiguration: # pylint: disable=too-many-instance-attributes AzureKeyCredential type or a TokenCredential type. Required. :type credential: ~azure.core.credentials.AzureKeyCredential or ~azure.core.credentials.TokenCredential - :keyword api_version: API Version. Default value is "v1.1-preview.1". Note that overriding this - default value may result in unsupported behavior. + :keyword api_version: API Version. Known values are "v1.2-preview.1" and None. Default value is + "v1.2-preview.1". Note that overriding this default value may result in unsupported behavior. :paramtype api_version: str or ~azure.ai.vision.face.models.Versions """ def __init__(self, endpoint: str, credential: Union[AzureKeyCredential, "TokenCredential"], **kwargs: Any) -> None: - api_version: str = kwargs.pop("api_version", "v1.1-preview.1") + api_version: str = kwargs.pop("api_version", "v1.2-preview.1") if endpoint is None: raise ValueError("Parameter 'endpoint' must not be None.") @@ -73,7 +127,7 @@ def _configure(self, **kwargs: Any) -> None: self.authentication_policy = self._infer_policy(**kwargs) -class FaceSessionClientConfiguration: # pylint: disable=too-many-instance-attributes,name-too-long +class FaceSessionClientConfiguration: # pylint: disable=too-many-instance-attributes """Configuration for FaceSessionClient. Note that all parameters used to create this instance are saved as instance @@ -86,13 +140,13 @@ class FaceSessionClientConfiguration: # pylint: disable=too-many-instance-attri AzureKeyCredential type or a TokenCredential type. Required. :type credential: ~azure.core.credentials.AzureKeyCredential or ~azure.core.credentials.TokenCredential - :keyword api_version: API Version. Default value is "v1.1-preview.1". Note that overriding this - default value may result in unsupported behavior. + :keyword api_version: API Version. Known values are "v1.2-preview.1" and None. Default value is + "v1.2-preview.1". Note that overriding this default value may result in unsupported behavior. :paramtype api_version: str or ~azure.ai.vision.face.models.Versions """ def __init__(self, endpoint: str, credential: Union[AzureKeyCredential, "TokenCredential"], **kwargs: Any) -> None: - api_version: str = kwargs.pop("api_version", "v1.1-preview.1") + api_version: str = kwargs.pop("api_version", "v1.2-preview.1") if endpoint is None: raise ValueError("Parameter 'endpoint' must not be None.") diff --git a/sdk/face/azure-ai-vision-face/azure/ai/vision/face/_model_base.py b/sdk/face/azure-ai-vision-face/azure/ai/vision/face/_model_base.py index 5cf70733404d..9d401b0cf012 100644 --- a/sdk/face/azure-ai-vision-face/azure/ai/vision/face/_model_base.py +++ b/sdk/face/azure-ai-vision-face/azure/ai/vision/face/_model_base.py @@ -1,10 +1,11 @@ +# pylint: disable=too-many-lines # coding=utf-8 # -------------------------------------------------------------------------- # Copyright (c) Microsoft Corporation. All rights reserved. # Licensed under the MIT License. See License.txt in the project root for # license information. 
# -------------------------------------------------------------------------- -# pylint: disable=protected-access, arguments-differ, signature-differs, broad-except +# pylint: disable=protected-access, arguments-differ, signature-differs, broad-except, too-many-lines import copy import calendar @@ -19,6 +20,7 @@ import email.utils from datetime import datetime, date, time, timedelta, timezone from json import JSONEncoder +import xml.etree.ElementTree as ET from typing_extensions import Self import isodate from azure.core.exceptions import DeserializationError @@ -123,7 +125,7 @@ def _serialize_datetime(o, format: typing.Optional[str] = None): def _is_readonly(p): try: - return p._visibility == ["read"] # pylint: disable=protected-access + return p._visibility == ["read"] except AttributeError: return False @@ -286,6 +288,12 @@ def _deserialize_decimal(attr): return decimal.Decimal(str(attr)) +def _deserialize_int_as_str(attr): + if isinstance(attr, int): + return attr + return int(attr) + + _DESERIALIZE_MAPPING = { datetime: _deserialize_datetime, date: _deserialize_date, @@ -307,9 +315,11 @@ def _deserialize_decimal(attr): def get_deserializer(annotation: typing.Any, rf: typing.Optional["_RestField"] = None): + if annotation is int and rf and rf._format == "str": + return _deserialize_int_as_str if rf and rf._format: return _DESERIALIZE_MAPPING_WITHFORMAT.get(rf._format) - return _DESERIALIZE_MAPPING.get(annotation) + return _DESERIALIZE_MAPPING.get(annotation) # pyright: ignore def _get_type_alias_type(module_name: str, alias_name: str): @@ -441,6 +451,10 @@ def _serialize(o, format: typing.Optional[str] = None): # pylint: disable=too-m return float(o) if isinstance(o, enum.Enum): return o.value + if isinstance(o, int): + if format == "str": + return str(o) + return o try: # First try datetime.datetime return _serialize_datetime(o, format) @@ -471,11 +485,16 @@ def _create_value(rf: typing.Optional["_RestField"], value: typing.Any) -> typin return value if rf._is_model: return _deserialize(rf._type, value) + if isinstance(value, ET.Element): + value = _deserialize(rf._type, value) return _serialize(value, rf._format) class Model(_MyMutableMapping): _is_model = True + # label whether current class's _attr_to_rest_field has been calculated + # could not see _attr_to_rest_field directly because subclass inherits it from parent class + _calculated: typing.Set[str] = set() def __init__(self, *args: typing.Any, **kwargs: typing.Any) -> None: class_name = self.__class__.__name__ @@ -486,10 +505,58 @@ def __init__(self, *args: typing.Any, **kwargs: typing.Any) -> None: for rest_field in self._attr_to_rest_field.values() if rest_field._default is not _UNSET } - if args: - dict_to_pass.update( - {k: _create_value(_get_rest_field(self._attr_to_rest_field, k), v) for k, v in args[0].items()} - ) + if args: # pylint: disable=too-many-nested-blocks + if isinstance(args[0], ET.Element): + existed_attr_keys = [] + model_meta = getattr(self, "_xml", {}) + + for rf in self._attr_to_rest_field.values(): + prop_meta = getattr(rf, "_xml", {}) + xml_name = prop_meta.get("name", rf._rest_name) + xml_ns = prop_meta.get("ns", model_meta.get("ns", None)) + if xml_ns: + xml_name = "{" + xml_ns + "}" + xml_name + + # attribute + if prop_meta.get("attribute", False) and args[0].get(xml_name) is not None: + existed_attr_keys.append(xml_name) + dict_to_pass[rf._rest_name] = _deserialize(rf._type, args[0].get(xml_name)) + continue + + # unwrapped element is array + if prop_meta.get("unwrapped", False): + # unwrapped 
array could either use prop items meta/prop meta + if prop_meta.get("itemsName"): + xml_name = prop_meta.get("itemsName") + xml_ns = prop_meta.get("itemNs") + if xml_ns: + xml_name = "{" + xml_ns + "}" + xml_name + items = args[0].findall(xml_name) # pyright: ignore + if len(items) > 0: + existed_attr_keys.append(xml_name) + dict_to_pass[rf._rest_name] = _deserialize(rf._type, items) + continue + + # text element is primitive type + if prop_meta.get("text", False): + if args[0].text is not None: + dict_to_pass[rf._rest_name] = _deserialize(rf._type, args[0].text) + continue + + # wrapped element could be normal property or array, it should only have one element + item = args[0].find(xml_name) + if item is not None: + existed_attr_keys.append(xml_name) + dict_to_pass[rf._rest_name] = _deserialize(rf._type, item) + + # rest thing is additional properties + for e in args[0]: + if e.tag not in existed_attr_keys: + dict_to_pass[e.tag] = _convert_element(e) + else: + dict_to_pass.update( + {k: _create_value(_get_rest_field(self._attr_to_rest_field, k), v) for k, v in args[0].items()} + ) else: non_attr_kwargs = [k for k in kwargs if k not in self._attr_to_rest_field] if non_attr_kwargs: @@ -508,24 +575,27 @@ def copy(self) -> "Model": return Model(self.__dict__) def __new__(cls, *args: typing.Any, **kwargs: typing.Any) -> Self: # pylint: disable=unused-argument - # we know the last three classes in mro are going to be 'Model', 'dict', and 'object' - mros = cls.__mro__[:-3][::-1] # ignore model, dict, and object parents, and reverse the mro order - attr_to_rest_field: typing.Dict[str, _RestField] = { # map attribute name to rest_field property - k: v for mro_class in mros for k, v in mro_class.__dict__.items() if k[0] != "_" and hasattr(v, "_type") - } - annotations = { - k: v - for mro_class in mros - if hasattr(mro_class, "__annotations__") # pylint: disable=no-member - for k, v in mro_class.__annotations__.items() # pylint: disable=no-member - } - for attr, rf in attr_to_rest_field.items(): - rf._module = cls.__module__ - if not rf._type: - rf._type = rf._get_deserialize_callable_from_annotation(annotations.get(attr, None)) - if not rf._rest_name_input: - rf._rest_name_input = attr - cls._attr_to_rest_field: typing.Dict[str, _RestField] = dict(attr_to_rest_field.items()) + if f"{cls.__module__}.{cls.__qualname__}" not in cls._calculated: + # we know the last nine classes in mro are going to be 'Model', '_MyMutableMapping', 'MutableMapping', + # 'Mapping', 'Collection', 'Sized', 'Iterable', 'Container' and 'object' + mros = cls.__mro__[:-9][::-1] # ignore parents, and reverse the mro order + attr_to_rest_field: typing.Dict[str, _RestField] = { # map attribute name to rest_field property + k: v for mro_class in mros for k, v in mro_class.__dict__.items() if k[0] != "_" and hasattr(v, "_type") + } + annotations = { + k: v + for mro_class in mros + if hasattr(mro_class, "__annotations__") # pylint: disable=no-member + for k, v in mro_class.__annotations__.items() # pylint: disable=no-member + } + for attr, rf in attr_to_rest_field.items(): + rf._module = cls.__module__ + if not rf._type: + rf._type = rf._get_deserialize_callable_from_annotation(annotations.get(attr, None)) + if not rf._rest_name_input: + rf._rest_name_input = attr + cls._attr_to_rest_field: typing.Dict[str, _RestField] = dict(attr_to_rest_field.items()) + cls._calculated.add(f"{cls.__module__}.{cls.__qualname__}") return super().__new__(cls) # pylint: disable=no-value-for-parameter @@ -535,12 +605,10 @@ def __init_subclass__(cls, 
discriminator: typing.Optional[str] = None) -> None: base.__mapping__[discriminator or cls.__name__] = cls # type: ignore # pylint: disable=no-member @classmethod - def _get_discriminator(cls, exist_discriminators) -> typing.Optional[str]: + def _get_discriminator(cls, exist_discriminators) -> typing.Optional["_RestField"]: for v in cls.__dict__.values(): - if ( - isinstance(v, _RestField) and v._is_discriminator and v._rest_name not in exist_discriminators - ): # pylint: disable=protected-access - return v._rest_name # pylint: disable=protected-access + if isinstance(v, _RestField) and v._is_discriminator and v._rest_name not in exist_discriminators: + return v return None @classmethod @@ -548,14 +616,28 @@ def _deserialize(cls, data, exist_discriminators): if not hasattr(cls, "__mapping__"): # pylint: disable=no-member return cls(data) discriminator = cls._get_discriminator(exist_discriminators) - exist_discriminators.append(discriminator) - mapped_cls = cls.__mapping__.get(data.get(discriminator), cls) # pyright: ignore # pylint: disable=no-member - if mapped_cls == cls: + if discriminator is None: return cls(data) - return mapped_cls._deserialize(data, exist_discriminators) # pylint: disable=protected-access + exist_discriminators.append(discriminator._rest_name) + if isinstance(data, ET.Element): + model_meta = getattr(cls, "_xml", {}) + prop_meta = getattr(discriminator, "_xml", {}) + xml_name = prop_meta.get("name", discriminator._rest_name) + xml_ns = prop_meta.get("ns", model_meta.get("ns", None)) + if xml_ns: + xml_name = "{" + xml_ns + "}" + xml_name + + if data.get(xml_name) is not None: + discriminator_value = data.get(xml_name) + else: + discriminator_value = data.find(xml_name).text # pyright: ignore + else: + discriminator_value = data.get(discriminator._rest_name) + mapped_cls = cls.__mapping__.get(discriminator_value, cls) # pyright: ignore # pylint: disable=no-member + return mapped_cls._deserialize(data, exist_discriminators) def as_dict(self, *, exclude_readonly: bool = False) -> typing.Dict[str, typing.Any]: - """Return a dict that can be JSONify using json.dump. + """Return a dict that can be turned into json using json.dump. :keyword bool exclude_readonly: Whether to remove the readonly properties. :returns: A dict JSON compatible object @@ -563,6 +645,7 @@ def as_dict(self, *, exclude_readonly: bool = False) -> typing.Dict[str, typing. 
""" result = {} + readonly_props = [] if exclude_readonly: readonly_props = [p._rest_name for p in self._attr_to_rest_field.values() if _is_readonly(p)] for k, v in self.items(): @@ -617,6 +700,8 @@ def _deserialize_dict( ): if obj is None: return obj + if isinstance(obj, ET.Element): + obj = {child.tag: child for child in obj} return {k: _deserialize(value_deserializer, v, module) for k, v in obj.items()} @@ -637,6 +722,8 @@ def _deserialize_sequence( ): if obj is None: return obj + if isinstance(obj, ET.Element): + obj = list(obj) return type(obj)(_deserialize(deserializer, entry, module) for entry in obj) @@ -647,12 +734,12 @@ def _sorted_annotations(types: typing.List[typing.Any]) -> typing.List[typing.An ) -def _get_deserialize_callable_from_annotation( # pylint: disable=R0911, R0915, R0912 +def _get_deserialize_callable_from_annotation( # pylint: disable=too-many-return-statements, too-many-branches annotation: typing.Any, module: typing.Optional[str], rf: typing.Optional["_RestField"] = None, ) -> typing.Optional[typing.Callable[[typing.Any], typing.Any]]: - if not annotation or annotation in [int, float]: + if not annotation: return None # is it a type alias? @@ -727,7 +814,6 @@ def _get_deserialize_callable_from_annotation( # pylint: disable=R0911, R0915, try: if annotation._name in ["List", "Set", "Tuple", "Sequence"]: # pyright: ignore if len(annotation.__args__) > 1: # pyright: ignore - entry_deserializers = [ _get_deserialize_callable_from_annotation(dt, module, rf) for dt in annotation.__args__ # pyright: ignore @@ -762,12 +848,23 @@ def _deserialize_default( def _deserialize_with_callable( deserializer: typing.Optional[typing.Callable[[typing.Any], typing.Any]], value: typing.Any, -): +): # pylint: disable=too-many-return-statements try: if value is None or isinstance(value, _Null): return None + if isinstance(value, ET.Element): + if deserializer is str: + return value.text or "" + if deserializer is int: + return int(value.text) if value.text else None + if deserializer is float: + return float(value.text) if value.text else None + if deserializer is bool: + return value.text == "true" if value.text else None if deserializer is None: return value + if deserializer in [int, float, bool]: + return deserializer(value) if isinstance(deserializer, CaseInsensitiveEnumMeta): try: return deserializer(value) @@ -808,6 +905,7 @@ def __init__( default: typing.Any = _UNSET, format: typing.Optional[str] = None, is_multipart_file_input: bool = False, + xml: typing.Optional[typing.Dict[str, typing.Any]] = None, ): self._type = type self._rest_name_input = name @@ -818,6 +916,7 @@ def __init__( self._default = default self._format = format self._is_multipart_file_input = is_multipart_file_input + self._xml = xml if xml is not None else {} @property def _class_type(self) -> typing.Any: @@ -868,6 +967,7 @@ def rest_field( default: typing.Any = _UNSET, format: typing.Optional[str] = None, is_multipart_file_input: bool = False, + xml: typing.Optional[typing.Dict[str, typing.Any]] = None, ) -> typing.Any: return _RestField( name=name, @@ -876,6 +976,7 @@ def rest_field( default=default, format=format, is_multipart_file_input=is_multipart_file_input, + xml=xml, ) @@ -883,5 +984,176 @@ def rest_discriminator( *, name: typing.Optional[str] = None, type: typing.Optional[typing.Callable] = None, # pylint: disable=redefined-builtin + visibility: typing.Optional[typing.List[str]] = None, + xml: typing.Optional[typing.Dict[str, typing.Any]] = None, +) -> typing.Any: + return _RestField(name=name, 
type=type, is_discriminator=True, visibility=visibility, xml=xml) + + +def serialize_xml(model: Model, exclude_readonly: bool = False) -> str: + """Serialize a model to XML. + + :param Model model: The model to serialize. + :param bool exclude_readonly: Whether to exclude readonly properties. + :returns: The XML representation of the model. + :rtype: str + """ + return ET.tostring(_get_element(model, exclude_readonly), encoding="unicode") # type: ignore + + +def _get_element( + o: typing.Any, + exclude_readonly: bool = False, + parent_meta: typing.Optional[typing.Dict[str, typing.Any]] = None, + wrapped_element: typing.Optional[ET.Element] = None, +) -> typing.Union[ET.Element, typing.List[ET.Element]]: + if _is_model(o): + model_meta = getattr(o, "_xml", {}) + + # if prop is a model, then use the prop element directly, else generate a wrapper of model + if wrapped_element is None: + wrapped_element = _create_xml_element( + model_meta.get("name", o.__class__.__name__), + model_meta.get("prefix"), + model_meta.get("ns"), + ) + + readonly_props = [] + if exclude_readonly: + readonly_props = [p._rest_name for p in o._attr_to_rest_field.values() if _is_readonly(p)] + + for k, v in o.items(): + # do not serialize readonly properties + if exclude_readonly and k in readonly_props: + continue + + prop_rest_field = _get_rest_field(o._attr_to_rest_field, k) + if prop_rest_field: + prop_meta = getattr(prop_rest_field, "_xml").copy() + # use the wire name as xml name if no specific name is set + if prop_meta.get("name") is None: + prop_meta["name"] = k + else: + # additional properties will not have rest field, use the wire name as xml name + prop_meta = {"name": k} + + # if no ns for prop, use model's + if prop_meta.get("ns") is None and model_meta.get("ns"): + prop_meta["ns"] = model_meta.get("ns") + prop_meta["prefix"] = model_meta.get("prefix") + + if prop_meta.get("unwrapped", False): + # unwrapped could only set on array + wrapped_element.extend(_get_element(v, exclude_readonly, prop_meta)) + elif prop_meta.get("text", False): + # text could only set on primitive type + wrapped_element.text = _get_primitive_type_value(v) + elif prop_meta.get("attribute", False): + xml_name = prop_meta.get("name", k) + if prop_meta.get("ns"): + ET.register_namespace(prop_meta.get("prefix"), prop_meta.get("ns")) # pyright: ignore + xml_name = "{" + prop_meta.get("ns") + "}" + xml_name # pyright: ignore + # attribute should be primitive type + wrapped_element.set(xml_name, _get_primitive_type_value(v)) + else: + # other wrapped prop element + wrapped_element.append(_get_wrapped_element(v, exclude_readonly, prop_meta)) + return wrapped_element + if isinstance(o, list): + return [_get_element(x, exclude_readonly, parent_meta) for x in o] # type: ignore + if isinstance(o, dict): + result = [] + for k, v in o.items(): + result.append( + _get_wrapped_element( + v, + exclude_readonly, + { + "name": k, + "ns": parent_meta.get("ns") if parent_meta else None, + "prefix": parent_meta.get("prefix") if parent_meta else None, + }, + ) + ) + return result + + # primitive case need to create element based on parent_meta + if parent_meta: + return _get_wrapped_element( + o, + exclude_readonly, + { + "name": parent_meta.get("itemsName", parent_meta.get("name")), + "prefix": parent_meta.get("itemsPrefix", parent_meta.get("prefix")), + "ns": parent_meta.get("itemsNs", parent_meta.get("ns")), + }, + ) + + raise ValueError("Could not serialize value into xml: " + o) + + +def _get_wrapped_element( + v: typing.Any, + exclude_readonly: 
bool, + meta: typing.Optional[typing.Dict[str, typing.Any]], +) -> ET.Element: + wrapped_element = _create_xml_element( + meta.get("name") if meta else None, meta.get("prefix") if meta else None, meta.get("ns") if meta else None + ) + if isinstance(v, (dict, list)): + wrapped_element.extend(_get_element(v, exclude_readonly, meta)) + elif _is_model(v): + _get_element(v, exclude_readonly, meta, wrapped_element) + else: + wrapped_element.text = _get_primitive_type_value(v) + return wrapped_element + + +def _get_primitive_type_value(v) -> str: + if v is True: + return "true" + if v is False: + return "false" + if isinstance(v, _Null): + return "" + return str(v) + + +def _create_xml_element(tag, prefix=None, ns=None): + if prefix and ns: + ET.register_namespace(prefix, ns) + if ns: + return ET.Element("{" + ns + "}" + tag) + return ET.Element(tag) + + +def _deserialize_xml( + deserializer: typing.Any, + value: str, ) -> typing.Any: - return _RestField(name=name, type=type, is_discriminator=True) + element = ET.fromstring(value) # nosec + return _deserialize(deserializer, element) + + +def _convert_element(e: ET.Element): + # dict case + if len(e.attrib) > 0 or len({child.tag for child in e}) > 1: + dict_result: typing.Dict[str, typing.Any] = {} + for child in e: + if dict_result.get(child.tag) is not None: + if isinstance(dict_result[child.tag], list): + dict_result[child.tag].append(_convert_element(child)) + else: + dict_result[child.tag] = [dict_result[child.tag], _convert_element(child)] + else: + dict_result[child.tag] = _convert_element(child) + dict_result.update(e.attrib) + return dict_result + # array case + if len(e) > 0: + array_result: typing.List[typing.Any] = [] + for child in e: + array_result.append(_convert_element(child)) + return array_result + # primitive case + return e.text diff --git a/sdk/face/azure-ai-vision-face/azure/ai/vision/face/_operations/_operations.py b/sdk/face/azure-ai-vision-face/azure/ai/vision/face/_operations/_operations.py deleted file mode 100644 index 2744deda0070..000000000000 --- a/sdk/face/azure-ai-vision-face/azure/ai/vision/face/_operations/_operations.py +++ /dev/null @@ -1,3860 +0,0 @@ -# pylint: disable=too-many-lines,too-many-statements -# coding=utf-8 -# -------------------------------------------------------------------------- -# Copyright (c) Microsoft Corporation. All rights reserved. -# Licensed under the MIT License. See License.txt in the project root for license information. -# Code generated by Microsoft (R) Python Code Generator. -# Changes may cause incorrect behavior and will be lost if the code is regenerated. -# -------------------------------------------------------------------------- -from io import IOBase -import json -import sys -from typing import Any, Callable, Dict, IO, List, Optional, Type, TypeVar, Union, overload - -from azure.core.exceptions import ( - ClientAuthenticationError, - HttpResponseError, - ResourceExistsError, - ResourceNotFoundError, - ResourceNotModifiedError, - map_error, -) -from azure.core.pipeline import PipelineResponse -from azure.core.rest import HttpRequest, HttpResponse -from azure.core.tracing.decorator import distributed_trace -from azure.core.utils import case_insensitive_dict - -from .. 
import _model_base, models as _models -from .._model_base import SdkJSONEncoder, _deserialize -from .._serialization import Serializer -from .._vendor import FaceClientMixinABC, FaceSessionClientMixinABC, prepare_multipart_form_data - -if sys.version_info >= (3, 9): - from collections.abc import MutableMapping -else: - from typing import MutableMapping # type: ignore # pylint: disable=ungrouped-imports -JSON = MutableMapping[str, Any] # pylint: disable=unsubscriptable-object -_Unset: Any = object() -T = TypeVar("T") -ClsType = Optional[Callable[[PipelineResponse[HttpRequest, HttpResponse], T, Dict[str, Any]], Any]] - -_SERIALIZER = Serializer() -_SERIALIZER.client_side_validation = False - - -def build_face_detect_from_url_request( - *, - detection_model: Optional[Union[str, _models.FaceDetectionModel]] = None, - recognition_model: Optional[Union[str, _models.FaceRecognitionModel]] = None, - return_face_id: Optional[bool] = None, - return_face_attributes: Optional[List[Union[str, _models.FaceAttributeType]]] = None, - return_face_landmarks: Optional[bool] = None, - return_recognition_model: Optional[bool] = None, - face_id_time_to_live: Optional[int] = None, - **kwargs: Any, -) -> HttpRequest: - _headers = case_insensitive_dict(kwargs.pop("headers", {}) or {}) - _params = case_insensitive_dict(kwargs.pop("params", {}) or {}) - - content_type: Optional[str] = kwargs.pop("content_type", _headers.pop("content-type", None)) - accept = _headers.pop("Accept", "application/json") - - # Construct URL - _url = "/detect" - - # Construct parameters - if detection_model is not None: - _params["detectionModel"] = _SERIALIZER.query("detection_model", detection_model, "str") - if recognition_model is not None: - _params["recognitionModel"] = _SERIALIZER.query("recognition_model", recognition_model, "str") - if return_face_id is not None: - _params["returnFaceId"] = _SERIALIZER.query("return_face_id", return_face_id, "bool") - if return_face_attributes is not None: - _params["returnFaceAttributes"] = _SERIALIZER.query( - "return_face_attributes", return_face_attributes, "[str]", div="," - ) - if return_face_landmarks is not None: - _params["returnFaceLandmarks"] = _SERIALIZER.query("return_face_landmarks", return_face_landmarks, "bool") - if return_recognition_model is not None: - _params["returnRecognitionModel"] = _SERIALIZER.query( - "return_recognition_model", return_recognition_model, "bool" - ) - if face_id_time_to_live is not None: - _params["faceIdTimeToLive"] = _SERIALIZER.query("face_id_time_to_live", face_id_time_to_live, "int") - - # Construct headers - if content_type is not None: - _headers["content-type"] = _SERIALIZER.header("content_type", content_type, "str") - _headers["Accept"] = _SERIALIZER.header("accept", accept, "str") - - return HttpRequest(method="POST", url=_url, params=_params, headers=_headers, **kwargs) - - -def build_face_detect_request( - *, - detection_model: Optional[Union[str, _models.FaceDetectionModel]] = None, - recognition_model: Optional[Union[str, _models.FaceRecognitionModel]] = None, - return_face_id: Optional[bool] = None, - return_face_attributes: Optional[List[Union[str, _models.FaceAttributeType]]] = None, - return_face_landmarks: Optional[bool] = None, - return_recognition_model: Optional[bool] = None, - face_id_time_to_live: Optional[int] = None, - **kwargs: Any, -) -> HttpRequest: - _headers = case_insensitive_dict(kwargs.pop("headers", {}) or {}) - _params = case_insensitive_dict(kwargs.pop("params", {}) or {}) - - content_type: str = 
kwargs.pop("content_type") - accept = _headers.pop("Accept", "application/json") - - # Construct URL - _url = "/detect" - - # Construct parameters - if detection_model is not None: - _params["detectionModel"] = _SERIALIZER.query("detection_model", detection_model, "str") - if recognition_model is not None: - _params["recognitionModel"] = _SERIALIZER.query("recognition_model", recognition_model, "str") - if return_face_id is not None: - _params["returnFaceId"] = _SERIALIZER.query("return_face_id", return_face_id, "bool") - if return_face_attributes is not None: - _params["returnFaceAttributes"] = _SERIALIZER.query( - "return_face_attributes", return_face_attributes, "[str]", div="," - ) - if return_face_landmarks is not None: - _params["returnFaceLandmarks"] = _SERIALIZER.query("return_face_landmarks", return_face_landmarks, "bool") - if return_recognition_model is not None: - _params["returnRecognitionModel"] = _SERIALIZER.query( - "return_recognition_model", return_recognition_model, "bool" - ) - if face_id_time_to_live is not None: - _params["faceIdTimeToLive"] = _SERIALIZER.query("face_id_time_to_live", face_id_time_to_live, "int") - - # Construct headers - _headers["content-type"] = _SERIALIZER.header("content_type", content_type, "str") - _headers["Accept"] = _SERIALIZER.header("accept", accept, "str") - - return HttpRequest(method="POST", url=_url, params=_params, headers=_headers, **kwargs) - - -def build_face_find_similar_request(**kwargs: Any) -> HttpRequest: - _headers = case_insensitive_dict(kwargs.pop("headers", {}) or {}) - - content_type: Optional[str] = kwargs.pop("content_type", _headers.pop("Content-Type", None)) - accept = _headers.pop("Accept", "application/json") - - # Construct URL - _url = "/findsimilars" - - # Construct headers - if content_type is not None: - _headers["Content-Type"] = _SERIALIZER.header("content_type", content_type, "str") - _headers["Accept"] = _SERIALIZER.header("accept", accept, "str") - - return HttpRequest(method="POST", url=_url, headers=_headers, **kwargs) - - -def build_face_verify_face_to_face_request(**kwargs: Any) -> HttpRequest: - _headers = case_insensitive_dict(kwargs.pop("headers", {}) or {}) - - content_type: Optional[str] = kwargs.pop("content_type", _headers.pop("Content-Type", None)) - accept = _headers.pop("Accept", "application/json") - - # Construct URL - _url = "/verify" - - # Construct headers - if content_type is not None: - _headers["Content-Type"] = _SERIALIZER.header("content_type", content_type, "str") - _headers["Accept"] = _SERIALIZER.header("accept", accept, "str") - - return HttpRequest(method="POST", url=_url, headers=_headers, **kwargs) - - -def build_face_group_request(**kwargs: Any) -> HttpRequest: - _headers = case_insensitive_dict(kwargs.pop("headers", {}) or {}) - - content_type: Optional[str] = kwargs.pop("content_type", _headers.pop("Content-Type", None)) - accept = _headers.pop("Accept", "application/json") - - # Construct URL - _url = "/group" - - # Construct headers - if content_type is not None: - _headers["Content-Type"] = _SERIALIZER.header("content_type", content_type, "str") - _headers["Accept"] = _SERIALIZER.header("accept", accept, "str") - - return HttpRequest(method="POST", url=_url, headers=_headers, **kwargs) - - -def build_face_session_create_liveness_session_request(**kwargs: Any) -> HttpRequest: # pylint: disable=name-too-long - _headers = case_insensitive_dict(kwargs.pop("headers", {}) or {}) - - content_type: Optional[str] = kwargs.pop("content_type", _headers.pop("Content-Type", None)) - 
accept = _headers.pop("Accept", "application/json") - - # Construct URL - _url = "/detectLiveness/singleModal/sessions" - - # Construct headers - if content_type is not None: - _headers["Content-Type"] = _SERIALIZER.header("content_type", content_type, "str") - _headers["Accept"] = _SERIALIZER.header("accept", accept, "str") - - return HttpRequest(method="POST", url=_url, headers=_headers, **kwargs) - - -def build_face_session_delete_liveness_session_request( # pylint: disable=name-too-long - session_id: str, **kwargs: Any -) -> HttpRequest: - _headers = case_insensitive_dict(kwargs.pop("headers", {}) or {}) - - accept = _headers.pop("Accept", "application/json") - - # Construct URL - _url = "/detectLiveness/singleModal/sessions/{sessionId}" - path_format_arguments = { - "sessionId": _SERIALIZER.url("session_id", session_id, "str"), - } - - _url: str = _url.format(**path_format_arguments) # type: ignore - - # Construct headers - _headers["Accept"] = _SERIALIZER.header("accept", accept, "str") - - return HttpRequest(method="DELETE", url=_url, headers=_headers, **kwargs) - - -def build_face_session_get_liveness_session_result_request( # pylint: disable=name-too-long - session_id: str, **kwargs: Any -) -> HttpRequest: - _headers = case_insensitive_dict(kwargs.pop("headers", {}) or {}) - - accept = _headers.pop("Accept", "application/json") - - # Construct URL - _url = "/detectLiveness/singleModal/sessions/{sessionId}" - path_format_arguments = { - "sessionId": _SERIALIZER.url("session_id", session_id, "str"), - } - - _url: str = _url.format(**path_format_arguments) # type: ignore - - # Construct headers - _headers["Accept"] = _SERIALIZER.header("accept", accept, "str") - - return HttpRequest(method="GET", url=_url, headers=_headers, **kwargs) - - -def build_face_session_get_liveness_sessions_request( # pylint: disable=name-too-long - *, start: Optional[str] = None, top: Optional[int] = None, **kwargs: Any -) -> HttpRequest: - _headers = case_insensitive_dict(kwargs.pop("headers", {}) or {}) - _params = case_insensitive_dict(kwargs.pop("params", {}) or {}) - - accept = _headers.pop("Accept", "application/json") - - # Construct URL - _url = "/detectLiveness/singleModal/sessions" - - # Construct parameters - if start is not None: - _params["start"] = _SERIALIZER.query("start", start, "str") - if top is not None: - _params["top"] = _SERIALIZER.query("top", top, "int") - - # Construct headers - _headers["Accept"] = _SERIALIZER.header("accept", accept, "str") - - return HttpRequest(method="GET", url=_url, params=_params, headers=_headers, **kwargs) - - -def build_face_session_get_liveness_session_audit_entries_request( # pylint: disable=name-too-long - session_id: str, *, start: Optional[str] = None, top: Optional[int] = None, **kwargs: Any -) -> HttpRequest: - _headers = case_insensitive_dict(kwargs.pop("headers", {}) or {}) - _params = case_insensitive_dict(kwargs.pop("params", {}) or {}) - - accept = _headers.pop("Accept", "application/json") - - # Construct URL - _url = "/detectLiveness/singleModal/sessions/{sessionId}/audit" - path_format_arguments = { - "sessionId": _SERIALIZER.url("session_id", session_id, "str"), - } - - _url: str = _url.format(**path_format_arguments) # type: ignore - - # Construct parameters - if start is not None: - _params["start"] = _SERIALIZER.query("start", start, "str") - if top is not None: - _params["top"] = _SERIALIZER.query("top", top, "int") - - # Construct headers - _headers["Accept"] = _SERIALIZER.header("accept", accept, "str") - - return 
HttpRequest(method="GET", url=_url, params=_params, headers=_headers, **kwargs) - - -def build_face_session_create_liveness_with_verify_session_request( # pylint: disable=name-too-long - **kwargs: Any, -) -> HttpRequest: - _headers = case_insensitive_dict(kwargs.pop("headers", {}) or {}) - - content_type: Optional[str] = kwargs.pop("content_type", _headers.pop("Content-Type", None)) - accept = _headers.pop("Accept", "application/json") - - # Construct URL - _url = "/detectLivenessWithVerify/singleModal/sessions" - - # Construct headers - if content_type is not None: - _headers["Content-Type"] = _SERIALIZER.header("content_type", content_type, "str") - _headers["Accept"] = _SERIALIZER.header("accept", accept, "str") - - return HttpRequest(method="POST", url=_url, headers=_headers, **kwargs) - - -def build_face_session_create_liveness_with_verify_session_with_verify_image_request( # pylint: disable=name-too-long - **kwargs: Any, -) -> HttpRequest: - _headers = case_insensitive_dict(kwargs.pop("headers", {}) or {}) - - accept = _headers.pop("Accept", "application/json") - - # Construct URL - _url = "/detectLivenessWithVerify/singleModal/sessions" - - # Construct headers - _headers["Accept"] = _SERIALIZER.header("accept", accept, "str") - - return HttpRequest(method="POST", url=_url, headers=_headers, **kwargs) - - -def build_face_session_delete_liveness_with_verify_session_request( # pylint: disable=name-too-long - session_id: str, **kwargs: Any -) -> HttpRequest: - _headers = case_insensitive_dict(kwargs.pop("headers", {}) or {}) - - accept = _headers.pop("Accept", "application/json") - - # Construct URL - _url = "/detectLivenessWithVerify/singleModal/sessions/{sessionId}" - path_format_arguments = { - "sessionId": _SERIALIZER.url("session_id", session_id, "str"), - } - - _url: str = _url.format(**path_format_arguments) # type: ignore - - # Construct headers - _headers["Accept"] = _SERIALIZER.header("accept", accept, "str") - - return HttpRequest(method="DELETE", url=_url, headers=_headers, **kwargs) - - -def build_face_session_get_liveness_with_verify_session_result_request( # pylint: disable=name-too-long - session_id: str, **kwargs: Any -) -> HttpRequest: - _headers = case_insensitive_dict(kwargs.pop("headers", {}) or {}) - - accept = _headers.pop("Accept", "application/json") - - # Construct URL - _url = "/detectLivenessWithVerify/singleModal/sessions/{sessionId}" - path_format_arguments = { - "sessionId": _SERIALIZER.url("session_id", session_id, "str"), - } - - _url: str = _url.format(**path_format_arguments) # type: ignore - - # Construct headers - _headers["Accept"] = _SERIALIZER.header("accept", accept, "str") - - return HttpRequest(method="GET", url=_url, headers=_headers, **kwargs) - - -def build_face_session_get_liveness_with_verify_sessions_request( # pylint: disable=name-too-long - *, start: Optional[str] = None, top: Optional[int] = None, **kwargs: Any -) -> HttpRequest: - _headers = case_insensitive_dict(kwargs.pop("headers", {}) or {}) - _params = case_insensitive_dict(kwargs.pop("params", {}) or {}) - - accept = _headers.pop("Accept", "application/json") - - # Construct URL - _url = "/detectLivenessWithVerify/singleModal/sessions" - - # Construct parameters - if start is not None: - _params["start"] = _SERIALIZER.query("start", start, "str") - if top is not None: - _params["top"] = _SERIALIZER.query("top", top, "int") - - # Construct headers - _headers["Accept"] = _SERIALIZER.header("accept", accept, "str") - - return HttpRequest(method="GET", url=_url, params=_params, 
headers=_headers, **kwargs) - - -def build_face_session_get_liveness_with_verify_session_audit_entries_request( # pylint: disable=name-too-long - session_id: str, *, start: Optional[str] = None, top: Optional[int] = None, **kwargs: Any -) -> HttpRequest: - _headers = case_insensitive_dict(kwargs.pop("headers", {}) or {}) - _params = case_insensitive_dict(kwargs.pop("params", {}) or {}) - - accept = _headers.pop("Accept", "application/json") - - # Construct URL - _url = "/detectLivenessWithVerify/singleModal/sessions/{sessionId}/audit" - path_format_arguments = { - "sessionId": _SERIALIZER.url("session_id", session_id, "str"), - } - - _url: str = _url.format(**path_format_arguments) # type: ignore - - # Construct parameters - if start is not None: - _params["start"] = _SERIALIZER.query("start", start, "str") - if top is not None: - _params["top"] = _SERIALIZER.query("top", top, "int") - - # Construct headers - _headers["Accept"] = _SERIALIZER.header("accept", accept, "str") - - return HttpRequest(method="GET", url=_url, params=_params, headers=_headers, **kwargs) - - -class FaceClientOperationsMixin(FaceClientMixinABC): - - @overload - def _detect_from_url( - self, - body: JSON, - *, - content_type: str = "application/json", - detection_model: Optional[Union[str, _models.FaceDetectionModel]] = None, - recognition_model: Optional[Union[str, _models.FaceRecognitionModel]] = None, - return_face_id: Optional[bool] = None, - return_face_attributes: Optional[List[Union[str, _models.FaceAttributeType]]] = None, - return_face_landmarks: Optional[bool] = None, - return_recognition_model: Optional[bool] = None, - face_id_time_to_live: Optional[int] = None, - **kwargs: Any, - ) -> List[_models.FaceDetectionResult]: ... - @overload - def _detect_from_url( - self, - *, - url: str, - content_type: str = "application/json", - detection_model: Optional[Union[str, _models.FaceDetectionModel]] = None, - recognition_model: Optional[Union[str, _models.FaceRecognitionModel]] = None, - return_face_id: Optional[bool] = None, - return_face_attributes: Optional[List[Union[str, _models.FaceAttributeType]]] = None, - return_face_landmarks: Optional[bool] = None, - return_recognition_model: Optional[bool] = None, - face_id_time_to_live: Optional[int] = None, - **kwargs: Any, - ) -> List[_models.FaceDetectionResult]: ... - @overload - def _detect_from_url( - self, - body: IO[bytes], - *, - content_type: str = "application/json", - detection_model: Optional[Union[str, _models.FaceDetectionModel]] = None, - recognition_model: Optional[Union[str, _models.FaceRecognitionModel]] = None, - return_face_id: Optional[bool] = None, - return_face_attributes: Optional[List[Union[str, _models.FaceAttributeType]]] = None, - return_face_landmarks: Optional[bool] = None, - return_recognition_model: Optional[bool] = None, - face_id_time_to_live: Optional[int] = None, - **kwargs: Any, - ) -> List[_models.FaceDetectionResult]: ... 
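The three `_detect_from_url` overloads above exist only for type checking; the `@distributed_trace` implementation that follows accepts either a prepared JSON/bytes body or the individual keyword arguments and serializes `{"url": ...}` itself. As a minimal usage sketch of how this surface is typically reached, assuming the public `detect_from_url` convenience method on `FaceClient` and placeholder endpoint, key, and image URL (none of these values come from this diff):

```python
from azure.ai.vision.face import FaceClient
from azure.ai.vision.face.models import FaceDetectionModel, FaceRecognitionModel
from azure.core.credentials import AzureKeyCredential

# Placeholder resource values, for illustration only.
endpoint = "https://<your-face-resource>.cognitiveservices.azure.com"
key = "<your-api-key>"

with FaceClient(endpoint=endpoint, credential=AzureKeyCredential(key)) as face_client:
    # The client sends {"url": ...} as the JSON body, mirroring the overloads
    # above that accept either a body mapping or a url keyword.
    results = face_client.detect_from_url(
        url="https://example.com/portrait.jpg",  # placeholder image URL
        detection_model=FaceDetectionModel.DETECTION03,
        recognition_model=FaceRecognitionModel.RECOGNITION04,
        return_face_id=True,
    )
    for face in results:
        print(face.face_rectangle)
```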
- - @distributed_trace - def _detect_from_url( - self, - body: Union[JSON, IO[bytes]] = _Unset, - *, - url: str = _Unset, - detection_model: Optional[Union[str, _models.FaceDetectionModel]] = None, - recognition_model: Optional[Union[str, _models.FaceRecognitionModel]] = None, - return_face_id: Optional[bool] = None, - return_face_attributes: Optional[List[Union[str, _models.FaceAttributeType]]] = None, - return_face_landmarks: Optional[bool] = None, - return_recognition_model: Optional[bool] = None, - face_id_time_to_live: Optional[int] = None, - **kwargs: Any, - ) -> List[_models.FaceDetectionResult]: - # pylint: disable=line-too-long - """Detect human faces in an image, return face rectangles, and optionally with faceIds, landmarks, - and attributes. - - .. - - [!IMPORTANT] - To mitigate potential misuse that can subject people to stereotyping, discrimination, or - unfair denial of services, we are retiring Face API attributes that predict emotion, gender, - age, smile, facial hair, hair, and makeup. Read more about this decision - https://azure.microsoft.com/blog/responsible-ai-investments-and-safeguards-for-facial-recognition/. - - - * - - - * No image will be stored. Only the extracted face feature(s) will be stored on server. The - faceId is an identifier of the face feature and will be used in "Identify", "Verify", and "Find - Similar". The stored face features will expire and be deleted at the time specified by - faceIdTimeToLive after the original detection call. - * Optional parameters include faceId, landmarks, and attributes. Attributes include headPose, - glasses, occlusion, accessories, blur, exposure, noise, mask, and qualityForRecognition. Some - of the results returned for specific attributes may not be highly accurate. - * JPEG, PNG, GIF (the first frame), and BMP format are supported. The allowed image file size - is from 1KB to 6MB. - * The minimum detectable face size is 36x36 pixels in an image no larger than 1920x1080 pixels. - Images with dimensions higher than 1920x1080 pixels will need a proportionally larger minimum - face size. - * Up to 100 faces can be returned for an image. Faces are ranked by face rectangle size from - large to small. - * For optimal results when querying "Identify", "Verify", and "Find Similar" ('returnFaceId' is - true), please use faces that are: frontal, clear, and with a minimum size of 200x200 pixels - (100 pixels between eyes). - * Different 'detectionModel' values can be provided. To use and compare different detection - models, please refer to - https://learn.microsoft.com/azure/ai-services/computer-vision/how-to/specify-detection-model - - * 'detection_02': Face attributes and landmarks are disabled if you choose this detection - model. - * 'detection_03': Face attributes (mask, blur, and headPose) and landmarks are supported if - you choose this detection model. - - * Different 'recognitionModel' values are provided. If follow-up operations like "Verify", - "Identify", "Find Similar" are needed, please specify the recognition model with - 'recognitionModel' parameter. The default value for 'recognitionModel' is 'recognition_01', if - latest model needed, please explicitly specify the model you need in this parameter. Once - specified, the detected faceIds will be associated with the specified recognition model. More - details, please refer to - https://learn.microsoft.com/azure/ai-services/computer-vision/how-to/specify-recognition-model. - - :param body: Is either a JSON type or a IO[bytes] type. Required. 
- :type body: JSON or IO[bytes] - :keyword url: URL of input image. Required. - :paramtype url: str - :keyword detection_model: The 'detectionModel' associated with the detected faceIds. Supported - 'detectionModel' values include 'detection_01', 'detection_02' and 'detection_03'. The default - value is 'detection_01'. Known values are: "detection_01", "detection_02", and "detection_03". - Default value is None. - :paramtype detection_model: str or ~azure.ai.vision.face.models.FaceDetectionModel - :keyword recognition_model: The 'recognitionModel' associated with the detected faceIds. - Supported 'recognitionModel' values include 'recognition_01', 'recognition_02', - 'recognition_03' or 'recognition_04'. The default value is 'recognition_01'. 'recognition_04' - is recommended since its accuracy is improved on faces wearing masks compared with - 'recognition_03', and its overall accuracy is improved compared with 'recognition_01' and - 'recognition_02'. Known values are: "recognition_01", "recognition_02", "recognition_03", and - "recognition_04". Default value is None. - :paramtype recognition_model: str or ~azure.ai.vision.face.models.FaceRecognitionModel - :keyword return_face_id: Return faceIds of the detected faces or not. The default value is - true. Default value is None. - :paramtype return_face_id: bool - :keyword return_face_attributes: Analyze and return the one or more specified face attributes - in the comma-separated string like 'returnFaceAttributes=headPose,glasses'. Face attribute - analysis has additional computational and time cost. Default value is None. - :paramtype return_face_attributes: list[str or ~azure.ai.vision.face.models.FaceAttributeType] - :keyword return_face_landmarks: Return face landmarks of the detected faces or not. The default - value is false. Default value is None. - :paramtype return_face_landmarks: bool - :keyword return_recognition_model: Return 'recognitionModel' or not. The default value is - false. This is only applicable when returnFaceId = true. Default value is None. - :paramtype return_recognition_model: bool - :keyword face_id_time_to_live: The number of seconds for the face ID being cached. Supported - range from 60 seconds up to 86400 seconds. The default value is 86400 (24 hours). Default value - is None. - :paramtype face_id_time_to_live: int - :return: list of FaceDetectionResult - :rtype: list[~azure.ai.vision.face.models.FaceDetectionResult] - :raises ~azure.core.exceptions.HttpResponseError: - - Example: - .. code-block:: python - - # JSON input template you can fill out and use as your body input. - body = { - "url": "str" # URL of input image. Required. - } - - # response body for status code(s): 200 - response == [ - { - "faceRectangle": { - "height": 0, # The height of the rectangle, in pixels. - Required. - "left": 0, # The distance from the left edge of the image to - the left edge of the rectangle, in pixels. Required. - "top": 0, # The distance from the top edge of the image to - the top edge of the rectangle, in pixels. Required. - "width": 0 # The width of the rectangle, in pixels. - Required. - }, - "faceAttributes": { - "accessories": [ - { - "confidence": 0.0, # Confidence level of the - accessory type. Range between [0,1]. Required. - "type": "str" # Type of the accessory. - Required. Known values are: "headwear", "glasses", and "mask". - } - ], - "age": 0.0, # Optional. Age in years. - "blur": { - "blurLevel": "str", # An enum value indicating level - of blurriness. Required.
Known values are: "low", "medium", and - "high". - "value": 0.0 # A number indicating level of - blurriness ranging from 0 to 1. Required. - }, - "exposure": { - "exposureLevel": "str", # An enum value indicating - level of exposure. Required. Known values are: "underExposure", - "goodExposure", and "overExposure". - "value": 0.0 # A number indicating level of exposure - ranging from 0 to 1. [0, 0.25) is under exposure. [0.25, 0.75) - is good exposure. [0.75, 1] is over exposure. Required. - }, - "facialHair": { - "beard": 0.0, # A number ranging from 0 to 1 - indicating a level of confidence associated with a property. - Required. - "moustache": 0.0, # A number ranging from 0 to 1 - indicating a level of confidence associated with a property. - Required. - "sideburns": 0.0 # A number ranging from 0 to 1 - indicating a level of confidence associated with a property. - Required. - }, - "glasses": "str", # Optional. Glasses type if any of the - face. Known values are: "noGlasses", "readingGlasses", "sunglasses", and - "swimmingGoggles". - "hair": { - "bald": 0.0, # A number describing confidence level - of whether the person is bald. Required. - "hairColor": [ - { - "color": "str", # Name of the hair - color. Required. Known values are: "unknown", "white", - "gray", "blond", "brown", "red", "black", and "other". - "confidence": 0.0 # Confidence level - of the color. Range between [0,1]. Required. - } - ], - "invisible": bool # A boolean value describing - whether the hair is visible in the image. Required. - }, - "headPose": { - "pitch": 0.0, # Value of angles. Required. - "roll": 0.0, # Value of angles. Required. - "yaw": 0.0 # Value of angles. Required. - }, - "mask": { - "noseAndMouthCovered": bool, # A boolean value - indicating whether nose and mouth are covered. Required. - "type": "str" # Type of the mask. Required. Known - values are: "faceMask", "noMask", "otherMaskOrOcclusion", and - "uncertain". - }, - "noise": { - "noiseLevel": "str", # An enum value indicating - level of noise. Required. Known values are: "low", "medium", and - "high". - "value": 0.0 # A number indicating level of noise - ranging from 0 to 1. [0, 0.3) is low noise level. [0.3, 0.7) is - medium noise level. [0.7, 1] is high noise level. Required. - }, - "occlusion": { - "eyeOccluded": bool, # A boolean value indicating - whether eyes are occluded. Required. - "foreheadOccluded": bool, # A boolean value - indicating whether forehead is occluded. Required. - "mouthOccluded": bool # A boolean value indicating - whether the mouth is occluded. Required. - }, - "qualityForRecognition": "str", # Optional. Properties - describing the overall image quality regarding whether the image being - used in the detection is of sufficient quality to attempt face - recognition on. Known values are: "low", "medium", and "high". - "smile": 0.0 # Optional. Smile intensity, a number between - [0,1]. - }, - "faceId": "str", # Optional. Unique faceId of the detected face, - created by detection API and it will expire 24 hours after the detection - call. To return this, it requires 'returnFaceId' parameter to be true. - "faceLandmarks": { - "eyeLeftBottom": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "eyeLeftInner": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required.
- }, - "eyeLeftOuter": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "eyeLeftTop": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "eyeRightBottom": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "eyeRightInner": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "eyeRightOuter": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "eyeRightTop": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "eyebrowLeftInner": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "eyebrowLeftOuter": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "eyebrowRightInner": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "eyebrowRightOuter": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "mouthLeft": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "mouthRight": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "noseLeftAlarOutTip": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "noseLeftAlarTop": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "noseRightAlarOutTip": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "noseRightAlarTop": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "noseRootLeft": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "noseRootRight": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "noseTip": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "pupilLeft": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "pupilRight": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "underLipBottom": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "underLipTop": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "upperLipBottom": { - "x": 0.0, # The horizontal component, in pixels. - Required. 
- "y": 0.0 # The vertical component, in pixels. - Required. - }, - "upperLipTop": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - } - }, - "recognitionModel": "str" # Optional. The 'recognitionModel' - associated with this faceId. This is only returned when - 'returnRecognitionModel' is explicitly set as true. Known values are: - "recognition_01", "recognition_02", "recognition_03", and "recognition_04". - } - ] - """ - error_map: MutableMapping[int, Type[HttpResponseError]] = { - 401: ClientAuthenticationError, - 404: ResourceNotFoundError, - 409: ResourceExistsError, - 304: ResourceNotModifiedError, - } - error_map.update(kwargs.pop("error_map", {}) or {}) - - _headers = case_insensitive_dict(kwargs.pop("headers", {}) or {}) - _params = kwargs.pop("params", {}) or {} - - content_type: Optional[str] = kwargs.pop("content_type", _headers.pop("content-type", None)) - cls: ClsType[List[_models.FaceDetectionResult]] = kwargs.pop("cls", None) - - if body is _Unset: - if url is _Unset: - raise TypeError("missing required argument: url") - body = {"url": url} - body = {k: v for k, v in body.items() if v is not None} - content_type = content_type or "application/json" - _content = None - if isinstance(body, (IOBase, bytes)): - _content = body - else: - _content = json.dumps(body, cls=SdkJSONEncoder, exclude_readonly=True) # type: ignore - - _request = build_face_detect_from_url_request( - detection_model=detection_model, - recognition_model=recognition_model, - return_face_id=return_face_id, - return_face_attributes=return_face_attributes, - return_face_landmarks=return_face_landmarks, - return_recognition_model=return_recognition_model, - face_id_time_to_live=face_id_time_to_live, - content_type=content_type, - content=_content, - headers=_headers, - params=_params, - ) - path_format_arguments = { - "endpoint": self._serialize.url("self._config.endpoint", self._config.endpoint, "str", skip_quote=True), - "apiVersion": self._serialize.url("self._config.api_version", self._config.api_version, "str"), - } - _request.url = self._client.format_url(_request.url, **path_format_arguments) - - _stream = kwargs.pop("stream", False) - pipeline_response: PipelineResponse = self._client._pipeline.run( # pylint: disable=protected-access - _request, stream=_stream, **kwargs - ) - - response = pipeline_response.http_response - - if response.status_code not in [200]: - if _stream: - response.read() # Load the body in memory and close the socket - map_error(status_code=response.status_code, response=response, error_map=error_map) - error = _deserialize(_models.FaceErrorResponse, response.json()) - raise HttpResponseError(response=response, model=error) - - if _stream: - deserialized = response.iter_bytes() - else: - deserialized = _deserialize(List[_models.FaceDetectionResult], response.json()) - - if cls: - return cls(pipeline_response, deserialized, {}) # type: ignore - - return deserialized # type: ignore - - @distributed_trace - def _detect( - self, - image_content: bytes, - *, - detection_model: Optional[Union[str, _models.FaceDetectionModel]] = None, - recognition_model: Optional[Union[str, _models.FaceRecognitionModel]] = None, - return_face_id: Optional[bool] = None, - return_face_attributes: Optional[List[Union[str, _models.FaceAttributeType]]] = None, - return_face_landmarks: Optional[bool] = None, - return_recognition_model: Optional[bool] = None, - face_id_time_to_live: Optional[int] = None, - **kwargs: Any, - ) -> 
List[_models.FaceDetectionResult]: - # pylint: disable=line-too-long - """Detect human faces in an image, return face rectangles, and optionally with faceIds, landmarks, - and attributes. - - .. - - [!IMPORTANT] - To mitigate potential misuse that can subject people to stereotyping, discrimination, or - unfair denial of services, we are retiring Face API attributes that predict emotion, gender, - age, smile, facial hair, hair, and makeup. Read more about this decision - https://azure.microsoft.com/blog/responsible-ai-investments-and-safeguards-for-facial-recognition/. - - - * - - - * No image will be stored. Only the extracted face feature(s) will be stored on server. The - faceId is an identifier of the face feature and will be used in "Identify", "Verify", and "Find - Similar". The stored face features will expire and be deleted at the time specified by - faceIdTimeToLive after the original detection call. - * Optional parameters include faceId, landmarks, and attributes. Attributes include headPose, - glasses, occlusion, accessories, blur, exposure, noise, mask, and qualityForRecognition. Some - of the results returned for specific attributes may not be highly accurate. - * JPEG, PNG, GIF (the first frame), and BMP format are supported. The allowed image file size - is from 1KB to 6MB. - * The minimum detectable face size is 36x36 pixels in an image no larger than 1920x1080 pixels. - Images with dimensions higher than 1920x1080 pixels will need a proportionally larger minimum - face size. - * Up to 100 faces can be returned for an image. Faces are ranked by face rectangle size from - large to small. - * For optimal results when querying "Identify", "Verify", and "Find Similar" ('returnFaceId' is - true), please use faces that are: frontal, clear, and with a minimum size of 200x200 pixels - (100 pixels between eyes). - * Different 'detectionModel' values can be provided. To use and compare different detection - models, please refer to - https://learn.microsoft.com/azure/ai-services/computer-vision/how-to/specify-detection-model - - * 'detection_02': Face attributes and landmarks are disabled if you choose this detection - model. - * 'detection_03': Face attributes (mask, blur, and headPose) and landmarks are supported if - you choose this detection model. - - * Different 'recognitionModel' values are provided. If follow-up operations like "Verify", - "Identify", "Find Similar" are needed, please specify the recognition model with - 'recognitionModel' parameter. The default value for 'recognitionModel' is 'recognition_01', if - latest model needed, please explicitly specify the model you need in this parameter. Once - specified, the detected faceIds will be associated with the specified recognition model. More - details, please refer to - https://learn.microsoft.com/azure/ai-services/computer-vision/how-to/specify-recognition-model. - - :param image_content: The input image binary. Required. - :type image_content: bytes - :keyword detection_model: The 'detectionModel' associated with the detected faceIds. Supported - 'detectionModel' values include 'detection_01', 'detection_02' and 'detection_03'. The default - value is 'detection_01'. Known values are: "detection_01", "detection_02", and "detection_03". - Default value is None. - :paramtype detection_model: str or ~azure.ai.vision.face.models.FaceDetectionModel - :keyword recognition_model: The 'recognitionModel' associated with the detected faceIds. 
- Supported 'recognitionModel' values include 'recognition_01', 'recognition_02', - 'recognition_03' or 'recognition_04'. The default value is 'recognition_01'. 'recognition_04' - is recommended since its accuracy is improved on faces wearing masks compared with - 'recognition_03', and its overall accuracy is improved compared with 'recognition_01' and - 'recognition_02'. Known values are: "recognition_01", "recognition_02", "recognition_03", and - "recognition_04". Default value is None. - :paramtype recognition_model: str or ~azure.ai.vision.face.models.FaceRecognitionModel - :keyword return_face_id: Return faceIds of the detected faces or not. The default value is - true. Default value is None. - :paramtype return_face_id: bool - :keyword return_face_attributes: Analyze and return the one or more specified face attributes - in the comma-separated string like 'returnFaceAttributes=headPose,glasses'. Face attribute - analysis has additional computational and time cost. Default value is None. - :paramtype return_face_attributes: list[str or ~azure.ai.vision.face.models.FaceAttributeType] - :keyword return_face_landmarks: Return face landmarks of the detected faces or not. The default - value is false. Default value is None. - :paramtype return_face_landmarks: bool - :keyword return_recognition_model: Return 'recognitionModel' or not. The default value is - false. This is only applicable when returnFaceId = true. Default value is None. - :paramtype return_recognition_model: bool - :keyword face_id_time_to_live: The number of seconds for the face ID being cached. Supported - range from 60 seconds up to 86400 seconds. The default value is 86400 (24 hours). Default value - is None. - :paramtype face_id_time_to_live: int - :return: list of FaceDetectionResult - :rtype: list[~azure.ai.vision.face.models.FaceDetectionResult] - :raises ~azure.core.exceptions.HttpResponseError: - - Example: - .. code-block:: python - - # response body for status code(s): 200 - response == [ - { - "faceRectangle": { - "height": 0, # The height of the rectangle, in pixels. - Required. - "left": 0, # The distance from the left edge of the image to - the left edge of the rectangle, in pixels. Required. - "top": 0, # The distance from the top edge of the image to - the top edge of the rectangle, in pixels. Required. - "width": 0 # The width of the rectangle, in pixels. - Required. - }, - "faceAttributes": { - "accessories": [ - { - "confidence": 0.0, # Confidence level of the - accessory type. Range between [0,1]. Required. - "type": "str" # Type of the accessory. - Required. Known values are: "headwear", "glasses", and "mask". - } - ], - "age": 0.0, # Optional. Age in years. - "blur": { - "blurLevel": "str", # An enum value indicating level - of blurriness. Required. Known values are: "low", "medium", and - "high". - "value": 0.0 # A number indicating level of - blurriness ranging from 0 to 1. Required. - }, - "exposure": { - "exposureLevel": "str", # An enum value indicating - level of exposure. Required. Known values are: "underExposure", - "goodExposure", and "overExposure". - "value": 0.0 # A number indicating level of exposure - ranging from 0 to 1. [0, 0.25) is under exposure. [0.25, 0.75) - is good exposure. [0.75, 1] is over exposure. Required. - }, - "facialHair": { - "beard": 0.0, # A number ranging from 0 to 1 - indicating a level of confidence associated with a property. - Required. - "moustache": 0.0, # A number ranging from 0 to 1 - indicating a level of confidence associated with a property.
- Required. - "sideburns": 0.0 # A number ranging from 0 to 1 - indicating a level of confidence associated with a property. - Required. - }, - "glasses": "str", # Optional. Glasses type if any of the - face. Known values are: "noGlasses", "readingGlasses", "sunglasses", and - "swimmingGoggles". - "hair": { - "bald": 0.0, # A number describing confidence level - of whether the person is bald. Required. - "hairColor": [ - { - "color": "str", # Name of the hair - color. Required. Known values are: "unknown", "white", - "gray", "blond", "brown", "red", "black", and "other". - "confidence": 0.0 # Confidence level - of the color. Range between [0,1]. Required. - } - ], - "invisible": bool # A boolean value describing - whether the hair is visible in the image. Required. - }, - "headPose": { - "pitch": 0.0, # Value of angles. Required. - "roll": 0.0, # Value of angles. Required. - "yaw": 0.0 # Value of angles. Required. - }, - "mask": { - "noseAndMouthCovered": bool, # A boolean value - indicating whether nose and mouth are covered. Required. - "type": "str" # Type of the mask. Required. Known - values are: "faceMask", "noMask", "otherMaskOrOcclusion", and - "uncertain". - }, - "noise": { - "noiseLevel": "str", # An enum value indicating - level of noise. Required. Known values are: "low", "medium", and - "high". - "value": 0.0 # A number indicating level of noise - ranging from 0 to 1. [0, 0.3) is low noise level. [0.3, 0.7) is - medium noise level. [0.7, 1] is high noise level. Required. - }, - "occlusion": { - "eyeOccluded": bool, # A boolean value indicating - whether eyes are occluded. Required. - "foreheadOccluded": bool, # A boolean value - indicating whether forehead is occluded. Required. - "mouthOccluded": bool # A boolean value indicating - whether the mouth is occluded. Required. - }, - "qualityForRecognition": "str", # Optional. Properties - describing the overall image quality regarding whether the image being - used in the detection is of sufficient quality to attempt face - recognition on. Known values are: "low", "medium", and "high". - "smile": 0.0 # Optional. Smile intensity, a number between - [0,1]. - }, - "faceId": "str", # Optional. Unique faceId of the detected face, - created by detection API and it will expire 24 hours after the detection - call. To return this, it requires 'returnFaceId' parameter to be true. - "faceLandmarks": { - "eyeLeftBottom": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "eyeLeftInner": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "eyeLeftOuter": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "eyeLeftTop": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "eyeRightBottom": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "eyeRightInner": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "eyeRightOuter": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required.
- }, - "eyeRightTop": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "eyebrowLeftInner": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "eyebrowLeftOuter": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "eyebrowRightInner": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "eyebrowRightOuter": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "mouthLeft": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "mouthRight": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "noseLeftAlarOutTip": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "noseLeftAlarTop": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "noseRightAlarOutTip": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "noseRightAlarTop": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "noseRootLeft": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "noseRootRight": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "noseTip": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "pupilLeft": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "pupilRight": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "underLipBottom": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "underLipTop": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "upperLipBottom": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "upperLipTop": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - } - }, - "recognitionModel": "str" # Optional. The 'recognitionModel' - associated with this faceId. This is only returned when - 'returnRecognitionModel' is explicitly set as true. Known values are: - "recognition_01", "recognition_02", "recognition_03", and "recognition_04". 
- } - ] - """ - error_map: MutableMapping[int, Type[HttpResponseError]] = { - 401: ClientAuthenticationError, - 404: ResourceNotFoundError, - 409: ResourceExistsError, - 304: ResourceNotModifiedError, - } - error_map.update(kwargs.pop("error_map", {}) or {}) - - _headers = case_insensitive_dict(kwargs.pop("headers", {}) or {}) - _params = kwargs.pop("params", {}) or {} - - content_type: str = kwargs.pop("content_type", _headers.pop("content-type", "application/octet-stream")) - cls: ClsType[List[_models.FaceDetectionResult]] = kwargs.pop("cls", None) - - _content = image_content - - _request = build_face_detect_request( - detection_model=detection_model, - recognition_model=recognition_model, - return_face_id=return_face_id, - return_face_attributes=return_face_attributes, - return_face_landmarks=return_face_landmarks, - return_recognition_model=return_recognition_model, - face_id_time_to_live=face_id_time_to_live, - content_type=content_type, - content=_content, - headers=_headers, - params=_params, - ) - path_format_arguments = { - "endpoint": self._serialize.url("self._config.endpoint", self._config.endpoint, "str", skip_quote=True), - "apiVersion": self._serialize.url("self._config.api_version", self._config.api_version, "str"), - } - _request.url = self._client.format_url(_request.url, **path_format_arguments) - - _stream = kwargs.pop("stream", False) - pipeline_response: PipelineResponse = self._client._pipeline.run( # pylint: disable=protected-access - _request, stream=_stream, **kwargs - ) - - response = pipeline_response.http_response - - if response.status_code not in [200]: - if _stream: - response.read() # Load the body in memory and close the socket - map_error(status_code=response.status_code, response=response, error_map=error_map) - error = _deserialize(_models.FaceErrorResponse, response.json()) - raise HttpResponseError(response=response, model=error) - - if _stream: - deserialized = response.iter_bytes() - else: - deserialized = _deserialize(List[_models.FaceDetectionResult], response.json()) - - if cls: - return cls(pipeline_response, deserialized, {}) # type: ignore - - return deserialized # type: ignore - - @overload - def find_similar( - self, body: JSON, *, content_type: str = "application/json", **kwargs: Any - ) -> List[_models.FaceFindSimilarResult]: - # pylint: disable=line-too-long - """Given query face's faceId, to search the similar-looking faces from a faceId array. A faceId - array contains the faces created by Detect. - - Depending on the input the returned similar faces list contains faceIds or persistedFaceIds - ranked by similarity. - - Find similar has two working modes, "matchPerson" and "matchFace". "matchPerson" is the default - mode that it tries to find faces of the same person as possible by using internal same-person - thresholds. It is useful to find a known person's other photos. Note that an empty list will be - returned if no faces pass the internal thresholds. "matchFace" mode ignores same-person - thresholds and returns ranked similar faces anyway, even the similarity is low. It can be used - in the cases like searching celebrity-looking faces. - - The 'recognitionModel' associated with the query faceId should be the same as the - 'recognitionModel' used by the target faceId array. - - :param body: Required. - :type body: JSON - :keyword content_type: Body Parameter content-type. Content type parameter for JSON body. - Default value is "application/json". 
- :paramtype content_type: str - :return: list of FaceFindSimilarResult - :rtype: list[~azure.ai.vision.face.models.FaceFindSimilarResult] - :raises ~azure.core.exceptions.HttpResponseError: - - Example: - .. code-block:: python - - # JSON input template you can fill out and use as your body input. - body = { - "faceId": "str", # faceId of the query face. User needs to call "Detect" - first to get a valid faceId. Note that this faceId is not persisted and will - expire 24 hours after the detection call. Required. - "faceIds": [ - "str" # An array of candidate faceIds. All of them are created by - "Detect" and the faceIds will expire 24 hours after the detection call. The - number of faceIds is limited to 1000. Required. - ], - "maxNumOfCandidatesReturned": 0, # Optional. The number of top similar faces - returned. The valid range is [1, 1000]. Default value is 20. - "mode": "str" # Optional. Similar face searching mode. It can be - 'matchPerson' or 'matchFace'. Default value is 'matchPerson'. Known values are: - "matchPerson" and "matchFace". - } - - # response body for status code(s): 200 - response == [ - { - "confidence": 0.0, # Confidence value of the candidate. The higher - confidence, the more similar. Range between [0,1]. Required. - "faceId": "str", # Optional. faceId of candidate face when find by - faceIds. faceId is created by "Detect" and will expire 24 hours after the - detection call. - "persistedFaceId": "str" # Optional. persistedFaceId of candidate - face when find by faceListId or largeFaceListId. persistedFaceId in face - list/large face list is persisted and will not expire. - } - ] - """ - - @overload - def find_similar( - self, - *, - face_id: str, - face_ids: List[str], - content_type: str = "application/json", - max_num_of_candidates_returned: Optional[int] = None, - mode: Optional[Union[str, _models.FindSimilarMatchMode]] = None, - **kwargs: Any, - ) -> List[_models.FaceFindSimilarResult]: - # pylint: disable=line-too-long - """Given query face's faceId, to search the similar-looking faces from a faceId array. A faceId - array contains the faces created by Detect. - - Depending on the input the returned similar faces list contains faceIds or persistedFaceIds - ranked by similarity. - - Find similar has two working modes, "matchPerson" and "matchFace". "matchPerson" is the default - mode that it tries to find faces of the same person as possible by using internal same-person - thresholds. It is useful to find a known person's other photos. Note that an empty list will be - returned if no faces pass the internal thresholds. "matchFace" mode ignores same-person - thresholds and returns ranked similar faces anyway, even the similarity is low. It can be used - in the cases like searching celebrity-looking faces. - - The 'recognitionModel' associated with the query faceId should be the same as the - 'recognitionModel' used by the target faceId array. - - :keyword face_id: faceId of the query face. User needs to call "Detect" first to get a valid - faceId. Note that this faceId is not persisted and will expire 24 hours after the detection - call. Required. - :paramtype face_id: str - :keyword face_ids: An array of candidate faceIds. All of them are created by "Detect" and the - faceIds will expire 24 hours after the detection call. The number of faceIds is limited to - 1000. Required. - :paramtype face_ids: list[str] - :keyword content_type: Body Parameter content-type. Content type parameter for JSON body. - Default value is "application/json". 
- :paramtype content_type: str - :keyword max_num_of_candidates_returned: The number of top similar faces returned. The valid - range is [1, 1000]. Default value is 20. Default value is None. - :paramtype max_num_of_candidates_returned: int - :keyword mode: Similar face searching mode. It can be 'matchPerson' or 'matchFace'. Default - value is 'matchPerson'. Known values are: "matchPerson" and "matchFace". Default value is None. - :paramtype mode: str or ~azure.ai.vision.face.models.FindSimilarMatchMode - :return: list of FaceFindSimilarResult - :rtype: list[~azure.ai.vision.face.models.FaceFindSimilarResult] - :raises ~azure.core.exceptions.HttpResponseError: - - Example: - .. code-block:: python - - # response body for status code(s): 200 - response == [ - { - "confidence": 0.0, # Confidence value of the candidate. The higher - confidence, the more similar. Range between [0,1]. Required. - "faceId": "str", # Optional. faceId of candidate face when find by - faceIds. faceId is created by "Detect" and will expire 24 hours after the - detection call. - "persistedFaceId": "str" # Optional. persistedFaceId of candidate - face when find by faceListId or largeFaceListId. persistedFaceId in face - list/large face list is persisted and will not expire. - } - ] - """ - - @overload - def find_similar( - self, body: IO[bytes], *, content_type: str = "application/json", **kwargs: Any - ) -> List[_models.FaceFindSimilarResult]: - # pylint: disable=line-too-long - """Given query face's faceId, to search the similar-looking faces from a faceId array. A faceId - array contains the faces created by Detect. - - Depending on the input the returned similar faces list contains faceIds or persistedFaceIds - ranked by similarity. - - Find similar has two working modes, "matchPerson" and "matchFace". "matchPerson" is the default - mode that it tries to find faces of the same person as possible by using internal same-person - thresholds. It is useful to find a known person's other photos. Note that an empty list will be - returned if no faces pass the internal thresholds. "matchFace" mode ignores same-person - thresholds and returns ranked similar faces anyway, even the similarity is low. It can be used - in the cases like searching celebrity-looking faces. - - The 'recognitionModel' associated with the query faceId should be the same as the - 'recognitionModel' used by the target faceId array. - - :param body: Required. - :type body: IO[bytes] - :keyword content_type: Body Parameter content-type. Content type parameter for binary body. - Default value is "application/json". - :paramtype content_type: str - :return: list of FaceFindSimilarResult - :rtype: list[~azure.ai.vision.face.models.FaceFindSimilarResult] - :raises ~azure.core.exceptions.HttpResponseError: - - Example: - .. code-block:: python - - # response body for status code(s): 200 - response == [ - { - "confidence": 0.0, # Confidence value of the candidate. The higher - confidence, the more similar. Range between [0,1]. Required. - "faceId": "str", # Optional. faceId of candidate face when find by - faceIds. faceId is created by "Detect" and will expire 24 hours after the - detection call. - "persistedFaceId": "str" # Optional. persistedFaceId of candidate - face when find by faceListId or largeFaceListId. persistedFaceId in face - list/large face list is persisted and will not expire. 
- } - ] - """ - - @distributed_trace - def find_similar( - self, - body: Union[JSON, IO[bytes]] = _Unset, - *, - face_id: str = _Unset, - face_ids: List[str] = _Unset, - max_num_of_candidates_returned: Optional[int] = None, - mode: Optional[Union[str, _models.FindSimilarMatchMode]] = None, - **kwargs: Any, - ) -> List[_models.FaceFindSimilarResult]: - # pylint: disable=line-too-long - """Given query face's faceId, to search the similar-looking faces from a faceId array. A faceId - array contains the faces created by Detect. - - Depending on the input the returned similar faces list contains faceIds or persistedFaceIds - ranked by similarity. - - Find similar has two working modes, "matchPerson" and "matchFace". "matchPerson" is the default - mode that it tries to find faces of the same person as possible by using internal same-person - thresholds. It is useful to find a known person's other photos. Note that an empty list will be - returned if no faces pass the internal thresholds. "matchFace" mode ignores same-person - thresholds and returns ranked similar faces anyway, even the similarity is low. It can be used - in the cases like searching celebrity-looking faces. - - The 'recognitionModel' associated with the query faceId should be the same as the - 'recognitionModel' used by the target faceId array. - - :param body: Is either a JSON type or a IO[bytes] type. Required. - :type body: JSON or IO[bytes] - :keyword face_id: faceId of the query face. User needs to call "Detect" first to get a valid - faceId. Note that this faceId is not persisted and will expire 24 hours after the detection - call. Required. - :paramtype face_id: str - :keyword face_ids: An array of candidate faceIds. All of them are created by "Detect" and the - faceIds will expire 24 hours after the detection call. The number of faceIds is limited to - 1000. Required. - :paramtype face_ids: list[str] - :keyword max_num_of_candidates_returned: The number of top similar faces returned. The valid - range is [1, 1000]. Default value is 20. Default value is None. - :paramtype max_num_of_candidates_returned: int - :keyword mode: Similar face searching mode. It can be 'matchPerson' or 'matchFace'. Default - value is 'matchPerson'. Known values are: "matchPerson" and "matchFace". Default value is None. - :paramtype mode: str or ~azure.ai.vision.face.models.FindSimilarMatchMode - :return: list of FaceFindSimilarResult - :rtype: list[~azure.ai.vision.face.models.FaceFindSimilarResult] - :raises ~azure.core.exceptions.HttpResponseError: - - Example: - .. code-block:: python - - # JSON input template you can fill out and use as your body input. - body = { - "faceId": "str", # faceId of the query face. User needs to call "Detect" - first to get a valid faceId. Note that this faceId is not persisted and will - expire 24 hours after the detection call. Required. - "faceIds": [ - "str" # An array of candidate faceIds. All of them are created by - "Detect" and the faceIds will expire 24 hours after the detection call. The - number of faceIds is limited to 1000. Required. - ], - "maxNumOfCandidatesReturned": 0, # Optional. The number of top similar faces - returned. The valid range is [1, 1000]. Default value is 20. - "mode": "str" # Optional. Similar face searching mode. It can be - 'matchPerson' or 'matchFace'. Default value is 'matchPerson'. Known values are: - "matchPerson" and "matchFace". - } - - # response body for status code(s): 200 - response == [ - { - "confidence": 0.0, # Confidence value of the candidate. 
The higher - confidence, the more similar. Range between [0,1]. Required. - "faceId": "str", # Optional. faceId of candidate face when find by - faceIds. faceId is created by "Detect" and will expire 24 hours after the - detection call. - "persistedFaceId": "str" # Optional. persistedFaceId of candidate - face when find by faceListId or largeFaceListId. persistedFaceId in face - list/large face list is persisted and will not expire. - } - ] - """ - error_map: MutableMapping[int, Type[HttpResponseError]] = { - 401: ClientAuthenticationError, - 404: ResourceNotFoundError, - 409: ResourceExistsError, - 304: ResourceNotModifiedError, - } - error_map.update(kwargs.pop("error_map", {}) or {}) - - _headers = case_insensitive_dict(kwargs.pop("headers", {}) or {}) - _params = kwargs.pop("params", {}) or {} - - content_type: Optional[str] = kwargs.pop("content_type", _headers.pop("Content-Type", None)) - cls: ClsType[List[_models.FaceFindSimilarResult]] = kwargs.pop("cls", None) - - if body is _Unset: - if face_id is _Unset: - raise TypeError("missing required argument: face_id") - if face_ids is _Unset: - raise TypeError("missing required argument: face_ids") - body = { - "faceId": face_id, - "faceIds": face_ids, - "maxNumOfCandidatesReturned": max_num_of_candidates_returned, - "mode": mode, - } - body = {k: v for k, v in body.items() if v is not None} - content_type = content_type or "application/json" - _content = None - if isinstance(body, (IOBase, bytes)): - _content = body - else: - _content = json.dumps(body, cls=SdkJSONEncoder, exclude_readonly=True) # type: ignore - - _request = build_face_find_similar_request( - content_type=content_type, - content=_content, - headers=_headers, - params=_params, - ) - path_format_arguments = { - "endpoint": self._serialize.url("self._config.endpoint", self._config.endpoint, "str", skip_quote=True), - "apiVersion": self._serialize.url("self._config.api_version", self._config.api_version, "str"), - } - _request.url = self._client.format_url(_request.url, **path_format_arguments) - - _stream = kwargs.pop("stream", False) - pipeline_response: PipelineResponse = self._client._pipeline.run( # pylint: disable=protected-access - _request, stream=_stream, **kwargs - ) - - response = pipeline_response.http_response - - if response.status_code not in [200]: - if _stream: - response.read() # Load the body in memory and close the socket - map_error(status_code=response.status_code, response=response, error_map=error_map) - error = _deserialize(_models.FaceErrorResponse, response.json()) - raise HttpResponseError(response=response, model=error) - - if _stream: - deserialized = response.iter_bytes() - else: - deserialized = _deserialize(List[_models.FaceFindSimilarResult], response.json()) - - if cls: - return cls(pipeline_response, deserialized, {}) # type: ignore - - return deserialized # type: ignore - - @overload - def verify_face_to_face( - self, body: JSON, *, content_type: str = "application/json", **kwargs: Any - ) -> _models.FaceVerificationResult: - # pylint: disable=line-too-long - """Verify whether two faces belong to a same person. - - .. - - [!NOTE] - - * - - - * Higher face image quality means better identification precision. Please consider - high-quality faces: frontal, clear, and face size is 200x200 pixels (100 pixels between eyes) - or bigger. - * For the scenarios that are sensitive to accuracy please make your own judgment. - * The 'recognitionModel' associated with the both faces should be the same. - - :param body: Required.
- :type body: JSON - :keyword content_type: Body Parameter content-type. Content type parameter for JSON body. - Default value is "application/json". - :paramtype content_type: str - :return: FaceVerificationResult. The FaceVerificationResult is compatible with MutableMapping - :rtype: ~azure.ai.vision.face.models.FaceVerificationResult - :raises ~azure.core.exceptions.HttpResponseError: - - Example: - .. code-block:: python - - # JSON input template you can fill out and use as your body input. - body = { - "faceId1": "str", # The faceId of one face, come from "Detect". Required. - "faceId2": "str" # The faceId of another face, come from "Detect". Required. - } - - # response body for status code(s): 200 - response == { - "confidence": 0.0, # A number indicates the similarity confidence of whether - two faces belong to the same person, or whether the face belongs to the person. - By default, isIdentical is set to True if similarity confidence is greater than - or equal to 0.5. This is useful for advanced users to override 'isIdentical' and - fine-tune the result on their own data. Required. - "isIdentical": bool # True if the two faces belong to the same person or the - face belongs to the person, otherwise false. Required. - } - """ - - @overload - def verify_face_to_face( - self, *, face_id1: str, face_id2: str, content_type: str = "application/json", **kwargs: Any - ) -> _models.FaceVerificationResult: - # pylint: disable=line-too-long - """Verify whether two faces belong to a same person. - - .. - - [!NOTE] - - * - - - * Higher face image quality means better identification precision. Please consider - high-quality faces: frontal, clear, and face size is 200x200 pixels (100 pixels between eyes) - or bigger. - * For the scenarios that are sensitive to accuracy please make your own judgment. - * The 'recognitionModel' associated with the both faces should be the same. - - :keyword face_id1: The faceId of one face, come from "Detect". Required. - :paramtype face_id1: str - :keyword face_id2: The faceId of another face, come from "Detect". Required. - :paramtype face_id2: str - :keyword content_type: Body Parameter content-type. Content type parameter for JSON body. - Default value is "application/json". - :paramtype content_type: str - :return: FaceVerificationResult. The FaceVerificationResult is compatible with MutableMapping - :rtype: ~azure.ai.vision.face.models.FaceVerificationResult - :raises ~azure.core.exceptions.HttpResponseError: - - Example: - .. code-block:: python - - # response body for status code(s): 200 - response == { - "confidence": 0.0, # A number indicates the similarity confidence of whether - two faces belong to the same person, or whether the face belongs to the person. - By default, isIdentical is set to True if similarity confidence is greater than - or equal to 0.5. This is useful for advanced users to override 'isIdentical' and - fine-tune the result on their own data. Required. - "isIdentical": bool # True if the two faces belong to the same person or the - face belongs to the person, otherwise false. Required. - } - """ - - @overload - def verify_face_to_face( - self, body: IO[bytes], *, content_type: str = "application/json", **kwargs: Any - ) -> _models.FaceVerificationResult: - # pylint: disable=line-too-long - """Verify whether two faces belong to a same person. - - .. - - [!NOTE] - - * - - - * Higher face image quality means better identification precision. 
Please consider - high-quality faces: frontal, clear, and face size is 200x200 pixels (100 pixels between eyes) - or bigger. - * For the scenarios that are sensitive to accuracy please make your own judgment. - * The 'recognitionModel' associated with the both faces should be the same. - - :param body: Required. - :type body: IO[bytes] - :keyword content_type: Body Parameter content-type. Content type parameter for binary body. - Default value is "application/json". - :paramtype content_type: str - :return: FaceVerificationResult. The FaceVerificationResult is compatible with MutableMapping - :rtype: ~azure.ai.vision.face.models.FaceVerificationResult - :raises ~azure.core.exceptions.HttpResponseError: - - Example: - .. code-block:: python - - # response body for status code(s): 200 - response == { - "confidence": 0.0, # A number indicates the similarity confidence of whether - two faces belong to the same person, or whether the face belongs to the person. - By default, isIdentical is set to True if similarity confidence is greater than - or equal to 0.5. This is useful for advanced users to override 'isIdentical' and - fine-tune the result on their own data. Required. - "isIdentical": bool # True if the two faces belong to the same person or the - face belongs to the person, otherwise false. Required. - } - """ - - @distributed_trace - def verify_face_to_face( - self, body: Union[JSON, IO[bytes]] = _Unset, *, face_id1: str = _Unset, face_id2: str = _Unset, **kwargs: Any - ) -> _models.FaceVerificationResult: - # pylint: disable=line-too-long - """Verify whether two faces belong to a same person. - - .. - - [!NOTE] - - * - - - * Higher face image quality means better identification precision. Please consider - high-quality faces: frontal, clear, and face size is 200x200 pixels (100 pixels between eyes) - or bigger. - * For the scenarios that are sensitive to accuracy please make your own judgment. - * The 'recognitionModel' associated with the both faces should be the same. - - :param body: Is either a JSON type or a IO[bytes] type. Required. - :type body: JSON or IO[bytes] - :keyword face_id1: The faceId of one face, come from "Detect". Required. - :paramtype face_id1: str - :keyword face_id2: The faceId of another face, come from "Detect". Required. - :paramtype face_id2: str - :return: FaceVerificationResult. The FaceVerificationResult is compatible with MutableMapping - :rtype: ~azure.ai.vision.face.models.FaceVerificationResult - :raises ~azure.core.exceptions.HttpResponseError: - - Example: - .. code-block:: python - - # JSON input template you can fill out and use as your body input. - body = { - "faceId1": "str", # The faceId of one face, come from "Detect". Required. - "faceId2": "str" # The faceId of another face, come from "Detect". Required. - } - - # response body for status code(s): 200 - response == { - "confidence": 0.0, # A number indicates the similarity confidence of whether - two faces belong to the same person, or whether the face belongs to the person. - By default, isIdentical is set to True if similarity confidence is greater than - or equal to 0.5. This is useful for advanced users to override 'isIdentical' and - fine-tune the result on their own data. Required. - "isIdentical": bool # True if the two faces belong to the same person or the - face belongs to the person, otherwise false. Required. 
- } - """ - error_map: MutableMapping[int, Type[HttpResponseError]] = { - 401: ClientAuthenticationError, - 404: ResourceNotFoundError, - 409: ResourceExistsError, - 304: ResourceNotModifiedError, - } - error_map.update(kwargs.pop("error_map", {}) or {}) - - _headers = case_insensitive_dict(kwargs.pop("headers", {}) or {}) - _params = kwargs.pop("params", {}) or {} - - content_type: Optional[str] = kwargs.pop("content_type", _headers.pop("Content-Type", None)) - cls: ClsType[_models.FaceVerificationResult] = kwargs.pop("cls", None) - - if body is _Unset: - if face_id1 is _Unset: - raise TypeError("missing required argument: face_id1") - if face_id2 is _Unset: - raise TypeError("missing required argument: face_id2") - body = {"faceid1": face_id1, "faceid2": face_id2} - body = {k: v for k, v in body.items() if v is not None} - content_type = content_type or "application/json" - _content = None - if isinstance(body, (IOBase, bytes)): - _content = body - else: - _content = json.dumps(body, cls=SdkJSONEncoder, exclude_readonly=True) # type: ignore - - _request = build_face_verify_face_to_face_request( - content_type=content_type, - content=_content, - headers=_headers, - params=_params, - ) - path_format_arguments = { - "endpoint": self._serialize.url("self._config.endpoint", self._config.endpoint, "str", skip_quote=True), - "apiVersion": self._serialize.url("self._config.api_version", self._config.api_version, "str"), - } - _request.url = self._client.format_url(_request.url, **path_format_arguments) - - _stream = kwargs.pop("stream", False) - pipeline_response: PipelineResponse = self._client._pipeline.run( # pylint: disable=protected-access - _request, stream=_stream, **kwargs - ) - - response = pipeline_response.http_response - - if response.status_code not in [200]: - if _stream: - response.read() # Load the body in memory and close the socket - map_error(status_code=response.status_code, response=response, error_map=error_map) - error = _deserialize(_models.FaceErrorResponse, response.json()) - raise HttpResponseError(response=response, model=error) - - if _stream: - deserialized = response.iter_bytes() - else: - deserialized = _deserialize(_models.FaceVerificationResult, response.json()) - - if cls: - return cls(pipeline_response, deserialized, {}) # type: ignore - - return deserialized # type: ignore - - @overload - def group(self, body: JSON, *, content_type: str = "application/json", **kwargs: Any) -> _models.FaceGroupingResult: - # pylint: disable=line-too-long - """Divide candidate faces into groups based on face similarity. - - > - * - - - * The output is one or more disjointed face groups and a messyGroup. A face group contains - faces that have similar looking, often of the same person. Face groups are ranked by group - size, i.e. number of faces. Notice that faces belonging to a same person might be split into - several groups in the result. - * MessyGroup is a special face group containing faces that cannot find any similar counterpart - face from original faces. The messyGroup will not appear in the result if all faces found their - counterparts. - * Group API needs at least 2 candidate faces and 1000 at most. We suggest to try "Verify Face - To Face" when you only have 2 candidate faces. - * The 'recognitionModel' associated with the query faces' faceIds should be the same. - - :param body: Required. - :type body: JSON - :keyword content_type: Body Parameter content-type. Content type parameter for JSON body. - Default value is "application/json". 
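A minimal sketch of calling the keyword overload of `verify_face_to_face` shown above; the endpoint, key, and both face IDs (which must come from prior `detect` calls made with the same `recognitionModel`) are placeholders.

```python
from azure.core.credentials import AzureKeyCredential
from azure.ai.vision.face import FaceClient

with FaceClient(endpoint="<endpoint>", credential=AzureKeyCredential("<key>")) as face_client:
    result = face_client.verify_face_to_face(
        face_id1="<face-id-1>",  # placeholder: faceId from a prior detect() call
        face_id2="<face-id-2>",  # placeholder: faceId from a prior detect() call
    )
    # isIdentical defaults to True when confidence >= 0.5; apply your own
    # threshold if your data calls for it.
    print(f"isIdentical: {result.is_identical}, confidence: {result.confidence}")
```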
- :paramtype content_type: str - :return: FaceGroupingResult. The FaceGroupingResult is compatible with MutableMapping - :rtype: ~azure.ai.vision.face.models.FaceGroupingResult - :raises ~azure.core.exceptions.HttpResponseError: - - Example: - .. code-block:: python - - # JSON input template you can fill out and use as your body input. - body = { - "faceIds": [ - "str" # Array of candidate faceIds created by "Detect". The maximum - is 1000 faces. Required. - ] - } - - # response body for status code(s): 200 - response == { - "groups": [ - [ - "str" # A partition of the original faces based on face - similarity. Groups are ranked by number of faces. Required. - ] - ], - "messyGroup": [ - "str" # Face ids array of faces that cannot find any similar faces - from original faces. Required. - ] - } - """ - - @overload - def group( - self, *, face_ids: List[str], content_type: str = "application/json", **kwargs: Any - ) -> _models.FaceGroupingResult: - # pylint: disable=line-too-long - """Divide candidate faces into groups based on face similarity. - - > - * - - - * The output is one or more disjointed face groups and a messyGroup. A face group contains - faces that have similar looking, often of the same person. Face groups are ranked by group - size, i.e. number of faces. Notice that faces belonging to a same person might be split into - several groups in the result. - * MessyGroup is a special face group containing faces that cannot find any similar counterpart - face from original faces. The messyGroup will not appear in the result if all faces found their - counterparts. - * Group API needs at least 2 candidate faces and 1000 at most. We suggest to try "Verify Face - To Face" when you only have 2 candidate faces. - * The 'recognitionModel' associated with the query faces' faceIds should be the same. - - :keyword face_ids: Array of candidate faceIds created by "Detect". The maximum is 1000 faces. - Required. - :paramtype face_ids: list[str] - :keyword content_type: Body Parameter content-type. Content type parameter for JSON body. - Default value is "application/json". - :paramtype content_type: str - :return: FaceGroupingResult. The FaceGroupingResult is compatible with MutableMapping - :rtype: ~azure.ai.vision.face.models.FaceGroupingResult - :raises ~azure.core.exceptions.HttpResponseError: - - Example: - .. code-block:: python - - # response body for status code(s): 200 - response == { - "groups": [ - [ - "str" # A partition of the original faces based on face - similarity. Groups are ranked by number of faces. Required. - ] - ], - "messyGroup": [ - "str" # Face ids array of faces that cannot find any similar faces - from original faces. Required. - ] - } - """ - - @overload - def group( - self, body: IO[bytes], *, content_type: str = "application/json", **kwargs: Any - ) -> _models.FaceGroupingResult: - # pylint: disable=line-too-long - """Divide candidate faces into groups based on face similarity. - - > - * - - - * The output is one or more disjointed face groups and a messyGroup. A face group contains - faces that have similar looking, often of the same person. Face groups are ranked by group - size, i.e. number of faces. Notice that faces belonging to a same person might be split into - several groups in the result. - * MessyGroup is a special face group containing faces that cannot find any similar counterpart - face from original faces. The messyGroup will not appear in the result if all faces found their - counterparts. 
- * Group API needs at least 2 candidate faces and 1000 at most. We suggest to try "Verify Face - To Face" when you only have 2 candidate faces. - * The 'recognitionModel' associated with the query faces' faceIds should be the same. - - :param body: Required. - :type body: IO[bytes] - :keyword content_type: Body Parameter content-type. Content type parameter for binary body. - Default value is "application/json". - :paramtype content_type: str - :return: FaceGroupingResult. The FaceGroupingResult is compatible with MutableMapping - :rtype: ~azure.ai.vision.face.models.FaceGroupingResult - :raises ~azure.core.exceptions.HttpResponseError: - - Example: - .. code-block:: python - - # response body for status code(s): 200 - response == { - "groups": [ - [ - "str" # A partition of the original faces based on face - similarity. Groups are ranked by number of faces. Required. - ] - ], - "messyGroup": [ - "str" # Face ids array of faces that cannot find any similar faces - from original faces. Required. - ] - } - """ - - @distributed_trace - def group( - self, body: Union[JSON, IO[bytes]] = _Unset, *, face_ids: List[str] = _Unset, **kwargs: Any - ) -> _models.FaceGroupingResult: - # pylint: disable=line-too-long - """Divide candidate faces into groups based on face similarity. - - > - * - - - * The output is one or more disjointed face groups and a messyGroup. A face group contains - faces that have similar looking, often of the same person. Face groups are ranked by group - size, i.e. number of faces. Notice that faces belonging to a same person might be split into - several groups in the result. - * MessyGroup is a special face group containing faces that cannot find any similar counterpart - face from original faces. The messyGroup will not appear in the result if all faces found their - counterparts. - * Group API needs at least 2 candidate faces and 1000 at most. We suggest to try "Verify Face - To Face" when you only have 2 candidate faces. - * The 'recognitionModel' associated with the query faces' faceIds should be the same. - - :param body: Is either a JSON type or a IO[bytes] type. Required. - :type body: JSON or IO[bytes] - :keyword face_ids: Array of candidate faceIds created by "Detect". The maximum is 1000 faces. - Required. - :paramtype face_ids: list[str] - :return: FaceGroupingResult. The FaceGroupingResult is compatible with MutableMapping - :rtype: ~azure.ai.vision.face.models.FaceGroupingResult - :raises ~azure.core.exceptions.HttpResponseError: - - Example: - .. code-block:: python - - # JSON input template you can fill out and use as your body input. - body = { - "faceIds": [ - "str" # Array of candidate faceIds created by "Detect". The maximum - is 1000 faces. Required. - ] - } - - # response body for status code(s): 200 - response == { - "groups": [ - [ - "str" # A partition of the original faces based on face - similarity. Groups are ranked by number of faces. Required. - ] - ], - "messyGroup": [ - "str" # Face ids array of faces that cannot find any similar faces - from original faces. Required. 
- ] - } - """ - error_map: MutableMapping[int, Type[HttpResponseError]] = { - 401: ClientAuthenticationError, - 404: ResourceNotFoundError, - 409: ResourceExistsError, - 304: ResourceNotModifiedError, - } - error_map.update(kwargs.pop("error_map", {}) or {}) - - _headers = case_insensitive_dict(kwargs.pop("headers", {}) or {}) - _params = kwargs.pop("params", {}) or {} - - content_type: Optional[str] = kwargs.pop("content_type", _headers.pop("Content-Type", None)) - cls: ClsType[_models.FaceGroupingResult] = kwargs.pop("cls", None) - - if body is _Unset: - if face_ids is _Unset: - raise TypeError("missing required argument: face_ids") - body = {"faceids": face_ids} - body = {k: v for k, v in body.items() if v is not None} - content_type = content_type or "application/json" - _content = None - if isinstance(body, (IOBase, bytes)): - _content = body - else: - _content = json.dumps(body, cls=SdkJSONEncoder, exclude_readonly=True) # type: ignore - - _request = build_face_group_request( - content_type=content_type, - content=_content, - headers=_headers, - params=_params, - ) - path_format_arguments = { - "endpoint": self._serialize.url("self._config.endpoint", self._config.endpoint, "str", skip_quote=True), - "apiVersion": self._serialize.url("self._config.api_version", self._config.api_version, "str"), - } - _request.url = self._client.format_url(_request.url, **path_format_arguments) - - _stream = kwargs.pop("stream", False) - pipeline_response: PipelineResponse = self._client._pipeline.run( # pylint: disable=protected-access - _request, stream=_stream, **kwargs - ) - - response = pipeline_response.http_response - - if response.status_code not in [200]: - if _stream: - response.read() # Load the body in memory and close the socket - map_error(status_code=response.status_code, response=response, error_map=error_map) - error = _deserialize(_models.FaceErrorResponse, response.json()) - raise HttpResponseError(response=response, model=error) - - if _stream: - deserialized = response.iter_bytes() - else: - deserialized = _deserialize(_models.FaceGroupingResult, response.json()) - - if cls: - return cls(pipeline_response, deserialized, {}) # type: ignore - - return deserialized # type: ignore - - -class FaceSessionClientOperationsMixin(FaceSessionClientMixinABC): - - @overload - def create_liveness_session( - self, body: _models.CreateLivenessSessionContent, *, content_type: str = "application/json", **kwargs: Any - ) -> _models.CreateLivenessSessionResult: - # pylint: disable=line-too-long - """Create a new detect liveness session. - - A session is best for client device scenarios where developers want to authorize a client - device to perform only a liveness detection without granting full access to their resource. - Created sessions have a limited life span and only authorize clients to perform the desired - action before access is expired. - - Permissions includes... - > - * - - - * Ability to call /detectLiveness/singleModal for up to 3 retries. - * A token lifetime of 10 minutes. - - .. - - [!NOTE] - Client access can be revoked by deleting the session using the Delete Liveness Session - operation. To retrieve a result, use the Get Liveness Session. To audit the individual requests - that a client has made to your resource, use the List Liveness Session Audit Entries. - - :param body: Required. - :type body: ~azure.ai.vision.face.models.CreateLivenessSessionContent - :keyword content_type: Body Parameter content-type. Content type parameter for JSON body. 
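A minimal sketch of `group`; the face IDs (2 to 1,000 of them, all detected with the same `recognitionModel`) are placeholders.

```python
from azure.core.credentials import AzureKeyCredential
from azure.ai.vision.face import FaceClient

with FaceClient(endpoint="<endpoint>", credential=AzureKeyCredential("<key>")) as face_client:
    result = face_client.group(face_ids=["<face-id-1>", "<face-id-2>", "<face-id-3>"])
    for index, group in enumerate(result.groups):
        print(f"group {index}: {group}")
    # Faces with no similar counterpart end up in the messy group.
    print(f"messy group: {result.messy_group}")
```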
- Default value is "application/json". - :paramtype content_type: str - :return: CreateLivenessSessionResult. The CreateLivenessSessionResult is compatible with - MutableMapping - :rtype: ~azure.ai.vision.face.models.CreateLivenessSessionResult - :raises ~azure.core.exceptions.HttpResponseError: - - Example: - .. code-block:: python - - # JSON input template you can fill out and use as your body input. - body = { - "livenessOperationMode": "str", # Type of liveness mode the client should - follow. Required. Known values are: "Passive" and "PassiveActive". - "authTokenTimeToLiveInSeconds": 0, # Optional. Seconds the session should - last for. Range is 60 to 86400 seconds. Default value is 600. - "deviceCorrelationId": "str", # Optional. Unique Guid per each end-user - device. This is to provide rate limiting and anti-hammering. If - 'deviceCorrelationIdSetInClient' is true in this request, this - 'deviceCorrelationId' must be null. - "deviceCorrelationIdSetInClient": bool, # Optional. Whether or not to allow - client to set their own 'deviceCorrelationId' via the Vision SDK. Default is - false, and 'deviceCorrelationId' must be set in this request body. - "sendResultsToClient": bool # Optional. Whether or not to allow a '200 - - Success' response body to be sent to the client, which may be undesirable for - security reasons. Default is false, clients will receive a '204 - NoContent' - empty body response. Regardless of selection, calling Session GetResult will - always contain a response body enabling business logic to be implemented. - } - - # response body for status code(s): 200 - response == { - "authToken": "str", # Bearer token to provide authentication for the Vision - SDK running on a client application. This Bearer token has limited permissions to - perform only the required action and expires after the TTL time. It is also - auditable. Required. - "sessionId": "str" # The unique session ID of the created session. It will - expire 48 hours after it was created or may be deleted sooner using the - corresponding Session DELETE operation. Required. - } - """ - - @overload - def create_liveness_session( - self, body: JSON, *, content_type: str = "application/json", **kwargs: Any - ) -> _models.CreateLivenessSessionResult: - # pylint: disable=line-too-long - """Create a new detect liveness session. - - A session is best for client device scenarios where developers want to authorize a client - device to perform only a liveness detection without granting full access to their resource. - Created sessions have a limited life span and only authorize clients to perform the desired - action before access is expired. - - Permissions includes... - > - * - - - * Ability to call /detectLiveness/singleModal for up to 3 retries. - * A token lifetime of 10 minutes. - - .. - - [!NOTE] - Client access can be revoked by deleting the session using the Delete Liveness Session - operation. To retrieve a result, use the Get Liveness Session. To audit the individual requests - that a client has made to your resource, use the List Liveness Session Audit Entries. - - :param body: Required. - :type body: JSON - :keyword content_type: Body Parameter content-type. Content type parameter for JSON body. - Default value is "application/json". - :paramtype content_type: str - :return: CreateLivenessSessionResult. The CreateLivenessSessionResult is compatible with - MutableMapping - :rtype: ~azure.ai.vision.face.models.CreateLivenessSessionResult - :raises ~azure.core.exceptions.HttpResponseError: - - Example: - .. 
code-block:: python - - # response body for status code(s): 200 - response == { - "authToken": "str", # Bearer token to provide authentication for the Vision - SDK running on a client application. This Bearer token has limited permissions to - perform only the required action and expires after the TTL time. It is also - auditable. Required. - "sessionId": "str" # The unique session ID of the created session. It will - expire 48 hours after it was created or may be deleted sooner using the - corresponding Session DELETE operation. Required. - } - """ - - @overload - def create_liveness_session( - self, body: IO[bytes], *, content_type: str = "application/json", **kwargs: Any - ) -> _models.CreateLivenessSessionResult: - # pylint: disable=line-too-long - """Create a new detect liveness session. - - A session is best for client device scenarios where developers want to authorize a client - device to perform only a liveness detection without granting full access to their resource. - Created sessions have a limited life span and only authorize clients to perform the desired - action before access is expired. - - Permissions includes... - > - * - - - * Ability to call /detectLiveness/singleModal for up to 3 retries. - * A token lifetime of 10 minutes. - - .. - - [!NOTE] - Client access can be revoked by deleting the session using the Delete Liveness Session - operation. To retrieve a result, use the Get Liveness Session. To audit the individual requests - that a client has made to your resource, use the List Liveness Session Audit Entries. - - :param body: Required. - :type body: IO[bytes] - :keyword content_type: Body Parameter content-type. Content type parameter for binary body. - Default value is "application/json". - :paramtype content_type: str - :return: CreateLivenessSessionResult. The CreateLivenessSessionResult is compatible with - MutableMapping - :rtype: ~azure.ai.vision.face.models.CreateLivenessSessionResult - :raises ~azure.core.exceptions.HttpResponseError: - - Example: - .. code-block:: python - - # response body for status code(s): 200 - response == { - "authToken": "str", # Bearer token to provide authentication for the Vision - SDK running on a client application. This Bearer token has limited permissions to - perform only the required action and expires after the TTL time. It is also - auditable. Required. - "sessionId": "str" # The unique session ID of the created session. It will - expire 48 hours after it was created or may be deleted sooner using the - corresponding Session DELETE operation. Required. - } - """ - - @distributed_trace - def create_liveness_session( - self, body: Union[_models.CreateLivenessSessionContent, JSON, IO[bytes]], **kwargs: Any - ) -> _models.CreateLivenessSessionResult: - # pylint: disable=line-too-long - """Create a new detect liveness session. - - A session is best for client device scenarios where developers want to authorize a client - device to perform only a liveness detection without granting full access to their resource. - Created sessions have a limited life span and only authorize clients to perform the desired - action before access is expired. - - Permissions includes... - > - * - - - * Ability to call /detectLiveness/singleModal for up to 3 retries. - * A token lifetime of 10 minutes. - - .. - - [!NOTE] - Client access can be revoked by deleting the session using the Delete Liveness Session - operation. To retrieve a result, use the Get Liveness Session. 
To audit the individual requests - that a client has made to your resource, use the List Liveness Session Audit Entries. - - :param body: Is one of the following types: CreateLivenessSessionContent, JSON, IO[bytes] - Required. - :type body: ~azure.ai.vision.face.models.CreateLivenessSessionContent or JSON or IO[bytes] - :return: CreateLivenessSessionResult. The CreateLivenessSessionResult is compatible with - MutableMapping - :rtype: ~azure.ai.vision.face.models.CreateLivenessSessionResult - :raises ~azure.core.exceptions.HttpResponseError: - - Example: - .. code-block:: python - - # JSON input template you can fill out and use as your body input. - body = { - "livenessOperationMode": "str", # Type of liveness mode the client should - follow. Required. Known values are: "Passive" and "PassiveActive". - "authTokenTimeToLiveInSeconds": 0, # Optional. Seconds the session should - last for. Range is 60 to 86400 seconds. Default value is 600. - "deviceCorrelationId": "str", # Optional. Unique Guid per each end-user - device. This is to provide rate limiting and anti-hammering. If - 'deviceCorrelationIdSetInClient' is true in this request, this - 'deviceCorrelationId' must be null. - "deviceCorrelationIdSetInClient": bool, # Optional. Whether or not to allow - client to set their own 'deviceCorrelationId' via the Vision SDK. Default is - false, and 'deviceCorrelationId' must be set in this request body. - "sendResultsToClient": bool # Optional. Whether or not to allow a '200 - - Success' response body to be sent to the client, which may be undesirable for - security reasons. Default is false, clients will receive a '204 - NoContent' - empty body response. Regardless of selection, calling Session GetResult will - always contain a response body enabling business logic to be implemented. - } - - # response body for status code(s): 200 - response == { - "authToken": "str", # Bearer token to provide authentication for the Vision - SDK running on a client application. This Bearer token has limited permissions to - perform only the required action and expires after the TTL time. It is also - auditable. Required. - "sessionId": "str" # The unique session ID of the created session. It will - expire 48 hours after it was created or may be deleted sooner using the - corresponding Session DELETE operation. Required. 
- } - """ - error_map: MutableMapping[int, Type[HttpResponseError]] = { - 401: ClientAuthenticationError, - 404: ResourceNotFoundError, - 409: ResourceExistsError, - 304: ResourceNotModifiedError, - } - error_map.update(kwargs.pop("error_map", {}) or {}) - - _headers = case_insensitive_dict(kwargs.pop("headers", {}) or {}) - _params = kwargs.pop("params", {}) or {} - - content_type: Optional[str] = kwargs.pop("content_type", _headers.pop("Content-Type", None)) - cls: ClsType[_models.CreateLivenessSessionResult] = kwargs.pop("cls", None) - - content_type = content_type or "application/json" - _content = None - if isinstance(body, (IOBase, bytes)): - _content = body - else: - _content = json.dumps(body, cls=SdkJSONEncoder, exclude_readonly=True) # type: ignore - - _request = build_face_session_create_liveness_session_request( - content_type=content_type, - content=_content, - headers=_headers, - params=_params, - ) - path_format_arguments = { - "endpoint": self._serialize.url("self._config.endpoint", self._config.endpoint, "str", skip_quote=True), - "apiVersion": self._serialize.url("self._config.api_version", self._config.api_version, "str"), - } - _request.url = self._client.format_url(_request.url, **path_format_arguments) - - _stream = kwargs.pop("stream", False) - pipeline_response: PipelineResponse = self._client._pipeline.run( # pylint: disable=protected-access - _request, stream=_stream, **kwargs - ) - - response = pipeline_response.http_response - - if response.status_code not in [200]: - if _stream: - response.read() # Load the body in memory and close the socket - map_error(status_code=response.status_code, response=response, error_map=error_map) - error = _deserialize(_models.FaceErrorResponse, response.json()) - raise HttpResponseError(response=response, model=error) - - if _stream: - deserialized = response.iter_bytes() - else: - deserialized = _deserialize(_models.CreateLivenessSessionResult, response.json()) - - if cls: - return cls(pipeline_response, deserialized, {}) # type: ignore - - return deserialized # type: ignore - - @distributed_trace - def delete_liveness_session( # pylint: disable=inconsistent-return-statements - self, session_id: str, **kwargs: Any - ) -> None: - """Delete all session related information for matching the specified session id. - - .. - - [!NOTE] - Deleting a session deactivates the Session Auth Token by blocking future API calls made with - that Auth Token. While this can be used to remove any access for that token, those requests - will still count towards overall resource rate limits. It's best to leverage TokenTTL to limit - length of tokens in the case that it is misused. - - :param session_id: The unique ID to reference this session. Required. 
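A minimal sketch of creating a liveness session, assuming placeholder endpoint and key; `device_correlation_id` must be supplied here because `deviceCorrelationIdSetInClient` defaults to false.

```python
import uuid

from azure.core.credentials import AzureKeyCredential
from azure.ai.vision.face import FaceSessionClient
from azure.ai.vision.face.models import CreateLivenessSessionContent

with FaceSessionClient(endpoint="<endpoint>", credential=AzureKeyCredential("<key>")) as session_client:
    created = session_client.create_liveness_session(
        CreateLivenessSessionContent(
            liveness_operation_mode="Passive",  # known values: "Passive", "PassiveActive"
            device_correlation_id=str(uuid.uuid4()),
        )
    )
    print(f"sessionId: {created.session_id}")
    # Hand the short-lived auth token to the client device; it only authorizes
    # the /detectLiveness/singleModal calls for this session.
    print(f"authToken: {created.auth_token}")
```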
- :type session_id: str - :return: None - :rtype: None - :raises ~azure.core.exceptions.HttpResponseError: - """ - error_map: MutableMapping[int, Type[HttpResponseError]] = { - 401: ClientAuthenticationError, - 404: ResourceNotFoundError, - 409: ResourceExistsError, - 304: ResourceNotModifiedError, - } - error_map.update(kwargs.pop("error_map", {}) or {}) - - _headers = kwargs.pop("headers", {}) or {} - _params = kwargs.pop("params", {}) or {} - - cls: ClsType[None] = kwargs.pop("cls", None) - - _request = build_face_session_delete_liveness_session_request( - session_id=session_id, - headers=_headers, - params=_params, - ) - path_format_arguments = { - "endpoint": self._serialize.url("self._config.endpoint", self._config.endpoint, "str", skip_quote=True), - "apiVersion": self._serialize.url("self._config.api_version", self._config.api_version, "str"), - } - _request.url = self._client.format_url(_request.url, **path_format_arguments) - - _stream = False - pipeline_response: PipelineResponse = self._client._pipeline.run( # pylint: disable=protected-access - _request, stream=_stream, **kwargs - ) - - response = pipeline_response.http_response - - if response.status_code not in [200]: - if _stream: - response.read() # Load the body in memory and close the socket - map_error(status_code=response.status_code, response=response, error_map=error_map) - error = _deserialize(_models.FaceErrorResponse, response.json()) - raise HttpResponseError(response=response, model=error) - - if cls: - return cls(pipeline_response, None, {}) # type: ignore - - @distributed_trace - def get_liveness_session_result(self, session_id: str, **kwargs: Any) -> _models.LivenessSession: - # pylint: disable=line-too-long - """Get session result of detectLiveness/singleModal call. - - :param session_id: The unique ID to reference this session. Required. - :type session_id: str - :return: LivenessSession. The LivenessSession is compatible with MutableMapping - :rtype: ~azure.ai.vision.face.models.LivenessSession - :raises ~azure.core.exceptions.HttpResponseError: - - Example: - .. code-block:: python - - # response body for status code(s): 200 - response == { - "createdDateTime": "2020-02-20 00:00:00", # DateTime when this session was - created. Required. - "id": "str", # The unique ID to reference this session. Required. - "sessionExpired": bool, # Whether or not the session is expired. Required. - "status": "str", # The current status of the session. Required. Known values - are: "NotStarted", "Started", and "ResultAvailable". - "authTokenTimeToLiveInSeconds": 0, # Optional. Seconds the session should - last for. Range is 60 to 86400 seconds. Default value is 600. - "deviceCorrelationId": "str", # Optional. Unique Guid per each end-user - device. This is to provide rate limiting and anti-hammering. If - 'deviceCorrelationIdSetInClient' is true in this request, this - 'deviceCorrelationId' must be null. - "result": { - "clientRequestId": "str", # The unique clientRequestId that is sent - by the client in the 'client-request-id' header. Required. - "digest": "str", # The server calculated digest for this request. If - the client reported digest differs from the server calculated digest, then - the message integrity between the client and service has been compromised and - the result should not be trusted. For more information, see how to guides on - how to leverage this value to secure your end-to-end solution. Required. - "id": 0, # The unique id to refer to this audit request. 
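Deleting a session is a single call; the session ID below is a placeholder.

```python
from azure.core.credentials import AzureKeyCredential
from azure.ai.vision.face import FaceSessionClient

with FaceSessionClient(endpoint="<endpoint>", credential=AzureKeyCredential("<key>")) as session_client:
    # Revokes the session auth token; revoked calls still count toward resource
    # rate limits, so prefer a short authTokenTimeToLiveInSeconds where possible.
    session_client.delete_liveness_session("<session-id>")
```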
Use this id - with the 'start' query parameter to continue on to the next page of audit - results. Required. - "receivedDateTime": "2020-02-20 00:00:00", # The UTC DateTime that - the request was received. Required. - "request": { - "contentType": "str", # The content type of the request. - Required. - "method": "str", # The HTTP method of the request (i.e., - GET, POST, DELETE). Required. - "url": "str", # The relative URL and query of the liveness - request. Required. - "contentLength": 0, # Optional. The length of the request - body in bytes. - "userAgent": "str" # Optional. The user agent used to submit - the request. - }, - "requestId": "str", # The unique requestId that is returned by the - service to the client in the 'apim-request-id' header. Required. - "response": { - "body": { - "livenessDecision": "str", # Optional. The liveness - classification for the target face. Known values are: "uncertain", - "realface", and "spoofface". - "modelVersionUsed": "str", # Optional. The model - version used for liveness classification. Known values are: - "2020-02-15-preview.01", "2021-11-12-preview.03", - "2022-10-15-preview.04", and "2023-03-02-preview.05". - "target": { - "faceRectangle": { - "height": 0, # The height of the - rectangle, in pixels. Required. - "left": 0, # The distance from the - left edge if the image to the left edge of the rectangle, in - pixels. Required. - "top": 0, # The distance from the - top edge if the image to the top edge of the rectangle, in - pixels. Required. - "width": 0 # The width of the - rectangle, in pixels. Required. - }, - "fileName": "str", # The file name which - contains the face rectangle where the liveness classification was - made on. Required. - "imageType": "str", # The image type which - contains the face rectangle where the liveness classification was - made on. Required. Known values are: "Color", "Infrared", and - "Depth". - "timeOffsetWithinFile": 0 # The time offset - within the file of the frame which contains the face rectangle - where the liveness classification was made on. Required. - }, - "verifyResult": { - "isIdentical": bool, # Whether the target - liveness face and comparison image face match. Required. - "matchConfidence": 0.0, # The target face - liveness face and comparison image face verification confidence. - Required. - "verifyImage": { - "faceRectangle": { - "height": 0, # The height of - the rectangle, in pixels. Required. - "left": 0, # The distance - from the left edge if the image to the left edge of the - rectangle, in pixels. Required. - "top": 0, # The distance - from the top edge if the image to the top edge of the - rectangle, in pixels. Required. - "width": 0 # The width of - the rectangle, in pixels. Required. - }, - "qualityForRecognition": "str" # - Quality of face image for recognition. Required. Known values - are: "low", "medium", and "high". - } - } - }, - "latencyInMilliseconds": 0, # The server measured latency - for this request in milliseconds. Required. - "statusCode": 0 # The HTTP status code returned to the - client. Required. - }, - "sessionId": "str" # The unique sessionId of the created session. It - will expire 48 hours after it was created or may be deleted sooner using the - corresponding session DELETE operation. Required. - }, - "sessionStartDateTime": "2020-02-20 00:00:00" # Optional. DateTime when this - session was started by the client. 
- } - """ - error_map: MutableMapping[int, Type[HttpResponseError]] = { - 401: ClientAuthenticationError, - 404: ResourceNotFoundError, - 409: ResourceExistsError, - 304: ResourceNotModifiedError, - } - error_map.update(kwargs.pop("error_map", {}) or {}) - - _headers = kwargs.pop("headers", {}) or {} - _params = kwargs.pop("params", {}) or {} - - cls: ClsType[_models.LivenessSession] = kwargs.pop("cls", None) - - _request = build_face_session_get_liveness_session_result_request( - session_id=session_id, - headers=_headers, - params=_params, - ) - path_format_arguments = { - "endpoint": self._serialize.url("self._config.endpoint", self._config.endpoint, "str", skip_quote=True), - "apiVersion": self._serialize.url("self._config.api_version", self._config.api_version, "str"), - } - _request.url = self._client.format_url(_request.url, **path_format_arguments) - - _stream = kwargs.pop("stream", False) - pipeline_response: PipelineResponse = self._client._pipeline.run( # pylint: disable=protected-access - _request, stream=_stream, **kwargs - ) - - response = pipeline_response.http_response - - if response.status_code not in [200]: - if _stream: - response.read() # Load the body in memory and close the socket - map_error(status_code=response.status_code, response=response, error_map=error_map) - error = _deserialize(_models.FaceErrorResponse, response.json()) - raise HttpResponseError(response=response, model=error) - - if _stream: - deserialized = response.iter_bytes() - else: - deserialized = _deserialize(_models.LivenessSession, response.json()) - - if cls: - return cls(pipeline_response, deserialized, {}) # type: ignore - - return deserialized # type: ignore - - @distributed_trace - def get_liveness_sessions( - self, *, start: Optional[str] = None, top: Optional[int] = None, **kwargs: Any - ) -> List[_models.LivenessSessionItem]: - # pylint: disable=line-too-long - """Lists sessions for /detectLiveness/SingleModal. - - List sessions from the last sessionId greater than the 'start'. - - The result should be ordered by sessionId in ascending order. - - :keyword start: List resources greater than the "start". It contains no more than 64 - characters. Default is empty. Default value is None. - :paramtype start: str - :keyword top: The number of items to list, ranging in [1, 1000]. Default is 1000. Default value - is None. - :paramtype top: int - :return: list of LivenessSessionItem - :rtype: list[~azure.ai.vision.face.models.LivenessSessionItem] - :raises ~azure.core.exceptions.HttpResponseError: - - Example: - .. code-block:: python - - # response body for status code(s): 200 - response == [ - { - "createdDateTime": "2020-02-20 00:00:00", # DateTime when this - session was created. Required. - "id": "str", # The unique ID to reference this session. Required. - "sessionExpired": bool, # Whether or not the session is expired. - Required. - "authTokenTimeToLiveInSeconds": 0, # Optional. Seconds the session - should last for. Range is 60 to 86400 seconds. Default value is 600. - "deviceCorrelationId": "str", # Optional. Unique Guid per each - end-user device. This is to provide rate limiting and anti-hammering. If - 'deviceCorrelationIdSetInClient' is true in this request, this - 'deviceCorrelationId' must be null. - "sessionStartDateTime": "2020-02-20 00:00:00" # Optional. DateTime - when this session was started by the client. 
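A sketch of polling the session result, with placeholder identifiers; `result` stays `None` until the client device has completed the liveness call.

```python
from azure.core.credentials import AzureKeyCredential
from azure.ai.vision.face import FaceSessionClient

with FaceSessionClient(endpoint="<endpoint>", credential=AzureKeyCredential("<key>")) as session_client:
    session = session_client.get_liveness_session_result("<session-id>")
    print(f"status: {session.status}")  # "NotStarted", "Started", or "ResultAvailable"
    if session.result is not None:
        body = session.result.response.body
        print(f"liveness decision: {body.liveness_decision}")  # "uncertain", "realface", "spoofface"
```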
- } - ] - """ - error_map: MutableMapping[int, Type[HttpResponseError]] = { - 401: ClientAuthenticationError, - 404: ResourceNotFoundError, - 409: ResourceExistsError, - 304: ResourceNotModifiedError, - } - error_map.update(kwargs.pop("error_map", {}) or {}) - - _headers = kwargs.pop("headers", {}) or {} - _params = kwargs.pop("params", {}) or {} - - cls: ClsType[List[_models.LivenessSessionItem]] = kwargs.pop("cls", None) - - _request = build_face_session_get_liveness_sessions_request( - start=start, - top=top, - headers=_headers, - params=_params, - ) - path_format_arguments = { - "endpoint": self._serialize.url("self._config.endpoint", self._config.endpoint, "str", skip_quote=True), - "apiVersion": self._serialize.url("self._config.api_version", self._config.api_version, "str"), - } - _request.url = self._client.format_url(_request.url, **path_format_arguments) - - _stream = kwargs.pop("stream", False) - pipeline_response: PipelineResponse = self._client._pipeline.run( # pylint: disable=protected-access - _request, stream=_stream, **kwargs - ) - - response = pipeline_response.http_response - - if response.status_code not in [200]: - if _stream: - response.read() # Load the body in memory and close the socket - map_error(status_code=response.status_code, response=response, error_map=error_map) - error = _deserialize(_models.FaceErrorResponse, response.json()) - raise HttpResponseError(response=response, model=error) - - if _stream: - deserialized = response.iter_bytes() - else: - deserialized = _deserialize(List[_models.LivenessSessionItem], response.json()) - - if cls: - return cls(pipeline_response, deserialized, {}) # type: ignore - - return deserialized # type: ignore - - @distributed_trace - def get_liveness_session_audit_entries( - self, session_id: str, *, start: Optional[str] = None, top: Optional[int] = None, **kwargs: Any - ) -> List[_models.LivenessSessionAuditEntry]: - # pylint: disable=line-too-long - """Gets session requests and response body for the session. - - :param session_id: The unique ID to reference this session. Required. - :type session_id: str - :keyword start: List resources greater than the "start". It contains no more than 64 - characters. Default is empty. Default value is None. - :paramtype start: str - :keyword top: The number of items to list, ranging in [1, 1000]. Default is 1000. Default value - is None. - :paramtype top: int - :return: list of LivenessSessionAuditEntry - :rtype: list[~azure.ai.vision.face.models.LivenessSessionAuditEntry] - :raises ~azure.core.exceptions.HttpResponseError: - - Example: - .. code-block:: python - - # response body for status code(s): 200 - response == [ - { - "clientRequestId": "str", # The unique clientRequestId that is sent - by the client in the 'client-request-id' header. Required. - "digest": "str", # The server calculated digest for this request. If - the client reported digest differs from the server calculated digest, then - the message integrity between the client and service has been compromised and - the result should not be trusted. For more information, see how to guides on - how to leverage this value to secure your end-to-end solution. Required. - "id": 0, # The unique id to refer to this audit request. Use this id - with the 'start' query parameter to continue on to the next page of audit - results. Required. - "receivedDateTime": "2020-02-20 00:00:00", # The UTC DateTime that - the request was received. Required. - "request": { - "contentType": "str", # The content type of the request. - Required. 
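Listing is cursor-based: pass the last `id` of the previous page as `start` to fetch the next page. A sketch with placeholder credentials:

```python
from azure.core.credentials import AzureKeyCredential
from azure.ai.vision.face import FaceSessionClient

with FaceSessionClient(endpoint="<endpoint>", credential=AzureKeyCredential("<key>")) as session_client:
    page = session_client.get_liveness_sessions(top=100)
    for item in page:
        print(f"{item.id}: expired={item.session_expired}")
    if page:
        next_page = session_client.get_liveness_sessions(start=page[-1].id, top=100)
```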
- "method": "str", # The HTTP method of the request (i.e., - GET, POST, DELETE). Required. - "url": "str", # The relative URL and query of the liveness - request. Required. - "contentLength": 0, # Optional. The length of the request - body in bytes. - "userAgent": "str" # Optional. The user agent used to submit - the request. - }, - "requestId": "str", # The unique requestId that is returned by the - service to the client in the 'apim-request-id' header. Required. - "response": { - "body": { - "livenessDecision": "str", # Optional. The liveness - classification for the target face. Known values are: "uncertain", - "realface", and "spoofface". - "modelVersionUsed": "str", # Optional. The model - version used for liveness classification. Known values are: - "2020-02-15-preview.01", "2021-11-12-preview.03", - "2022-10-15-preview.04", and "2023-03-02-preview.05". - "target": { - "faceRectangle": { - "height": 0, # The height of the - rectangle, in pixels. Required. - "left": 0, # The distance from the - left edge if the image to the left edge of the rectangle, in - pixels. Required. - "top": 0, # The distance from the - top edge if the image to the top edge of the rectangle, in - pixels. Required. - "width": 0 # The width of the - rectangle, in pixels. Required. - }, - "fileName": "str", # The file name which - contains the face rectangle where the liveness classification was - made on. Required. - "imageType": "str", # The image type which - contains the face rectangle where the liveness classification was - made on. Required. Known values are: "Color", "Infrared", and - "Depth". - "timeOffsetWithinFile": 0 # The time offset - within the file of the frame which contains the face rectangle - where the liveness classification was made on. Required. - }, - "verifyResult": { - "isIdentical": bool, # Whether the target - liveness face and comparison image face match. Required. - "matchConfidence": 0.0, # The target face - liveness face and comparison image face verification confidence. - Required. - "verifyImage": { - "faceRectangle": { - "height": 0, # The height of - the rectangle, in pixels. Required. - "left": 0, # The distance - from the left edge if the image to the left edge of the - rectangle, in pixels. Required. - "top": 0, # The distance - from the top edge if the image to the top edge of the - rectangle, in pixels. Required. - "width": 0 # The width of - the rectangle, in pixels. Required. - }, - "qualityForRecognition": "str" # - Quality of face image for recognition. Required. Known values - are: "low", "medium", and "high". - } - } - }, - "latencyInMilliseconds": 0, # The server measured latency - for this request in milliseconds. Required. - "statusCode": 0 # The HTTP status code returned to the - client. Required. - }, - "sessionId": "str" # The unique sessionId of the created session. It - will expire 48 hours after it was created or may be deleted sooner using the - corresponding session DELETE operation. Required. 
- } - ] - """ - error_map: MutableMapping[int, Type[HttpResponseError]] = { - 401: ClientAuthenticationError, - 404: ResourceNotFoundError, - 409: ResourceExistsError, - 304: ResourceNotModifiedError, - } - error_map.update(kwargs.pop("error_map", {}) or {}) - - _headers = kwargs.pop("headers", {}) or {} - _params = kwargs.pop("params", {}) or {} - - cls: ClsType[List[_models.LivenessSessionAuditEntry]] = kwargs.pop("cls", None) - - _request = build_face_session_get_liveness_session_audit_entries_request( - session_id=session_id, - start=start, - top=top, - headers=_headers, - params=_params, - ) - path_format_arguments = { - "endpoint": self._serialize.url("self._config.endpoint", self._config.endpoint, "str", skip_quote=True), - "apiVersion": self._serialize.url("self._config.api_version", self._config.api_version, "str"), - } - _request.url = self._client.format_url(_request.url, **path_format_arguments) - - _stream = kwargs.pop("stream", False) - pipeline_response: PipelineResponse = self._client._pipeline.run( # pylint: disable=protected-access - _request, stream=_stream, **kwargs - ) - - response = pipeline_response.http_response - - if response.status_code not in [200]: - if _stream: - response.read() # Load the body in memory and close the socket - map_error(status_code=response.status_code, response=response, error_map=error_map) - error = _deserialize(_models.FaceErrorResponse, response.json()) - raise HttpResponseError(response=response, model=error) - - if _stream: - deserialized = response.iter_bytes() - else: - deserialized = _deserialize(List[_models.LivenessSessionAuditEntry], response.json()) - - if cls: - return cls(pipeline_response, deserialized, {}) # type: ignore - - return deserialized # type: ignore - - @overload - def _create_liveness_with_verify_session( - self, body: _models.CreateLivenessSessionContent, *, content_type: str = "application/json", **kwargs: Any - ) -> _models.CreateLivenessWithVerifySessionResult: ... - @overload - def _create_liveness_with_verify_session( - self, body: JSON, *, content_type: str = "application/json", **kwargs: Any - ) -> _models.CreateLivenessWithVerifySessionResult: ... - @overload - def _create_liveness_with_verify_session( - self, body: IO[bytes], *, content_type: str = "application/json", **kwargs: Any - ) -> _models.CreateLivenessWithVerifySessionResult: ... - - @distributed_trace - def _create_liveness_with_verify_session( - self, body: Union[_models.CreateLivenessSessionContent, JSON, IO[bytes]], **kwargs: Any - ) -> _models.CreateLivenessWithVerifySessionResult: - # pylint: disable=line-too-long - """Create a new liveness session with verify. Client device submits VerifyImage during the - /detectLivenessWithVerify/singleModal call. - - A session is best for client device scenarios where developers want to authorize a client - device to perform only a liveness detection without granting full access to their resource. - Created sessions have a limited life span and only authorize clients to perform the desired - action before access is expired. - - Permissions includes... - > - * - - - * Ability to call /detectLivenessWithVerify/singleModal for up to 3 retries. - * A token lifetime of 10 minutes. - - .. - - [!NOTE] - - * - - - * Client access can be revoked by deleting the session using the Delete Liveness With Verify - Session operation. - * To retrieve a result, use the Get Liveness With Verify Session. 
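A sketch of auditing a session, with placeholder IDs; the same `start`/`top` cursor applies, keyed on the audit entry `id`.

```python
from azure.core.credentials import AzureKeyCredential
from azure.ai.vision.face import FaceSessionClient

with FaceSessionClient(endpoint="<endpoint>", credential=AzureKeyCredential("<key>")) as session_client:
    entries = session_client.get_liveness_session_audit_entries("<session-id>", top=100)
    for entry in entries:
        # Compare entry.digest with the digest the client computed to check
        # end-to-end message integrity.
        print(f"{entry.received_date_time} {entry.request.method} {entry.request.url} "
              f"-> {entry.response.status_code}")
```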
- * To audit the individual requests that a client has made to your resource, use the List - Liveness With Verify Session Audit Entries. - - - Alternative Option: Client device submits VerifyImage during the - /detectLivenessWithVerify/singleModal call. - - .. - - [!NOTE] - Extra measures should be taken to validate that the client is sending the expected - VerifyImage. - - :param body: Is one of the following types: CreateLivenessSessionContent, JSON, IO[bytes] - Required. - :type body: ~azure.ai.vision.face.models.CreateLivenessSessionContent or JSON or IO[bytes] - :return: CreateLivenessWithVerifySessionResult. The CreateLivenessWithVerifySessionResult is - compatible with MutableMapping - :rtype: ~azure.ai.vision.face.models.CreateLivenessWithVerifySessionResult - :raises ~azure.core.exceptions.HttpResponseError: - - Example: - .. code-block:: python - - # JSON input template you can fill out and use as your body input. - body = { - "livenessOperationMode": "str", # Type of liveness mode the client should - follow. Required. Known values are: "Passive" and "PassiveActive". - "authTokenTimeToLiveInSeconds": 0, # Optional. Seconds the session should - last for. Range is 60 to 86400 seconds. Default value is 600. - "deviceCorrelationId": "str", # Optional. Unique Guid per each end-user - device. This is to provide rate limiting and anti-hammering. If - 'deviceCorrelationIdSetInClient' is true in this request, this - 'deviceCorrelationId' must be null. - "deviceCorrelationIdSetInClient": bool, # Optional. Whether or not to allow - client to set their own 'deviceCorrelationId' via the Vision SDK. Default is - false, and 'deviceCorrelationId' must be set in this request body. - "sendResultsToClient": bool # Optional. Whether or not to allow a '200 - - Success' response body to be sent to the client, which may be undesirable for - security reasons. Default is false, clients will receive a '204 - NoContent' - empty body response. Regardless of selection, calling Session GetResult will - always contain a response body enabling business logic to be implemented. - } - - # response body for status code(s): 200 - response == { - "authToken": "str", # Bearer token to provide authentication for the Vision - SDK running on a client application. This Bearer token has limited permissions to - perform only the required action and expires after the TTL time. It is also - auditable. Required. - "sessionId": "str", # The unique session ID of the created session. It will - expire 48 hours after it was created or may be deleted sooner using the - corresponding Session DELETE operation. Required. - "verifyImage": { - "faceRectangle": { - "height": 0, # The height of the rectangle, in pixels. - Required. - "left": 0, # The distance from the left edge if the image to - the left edge of the rectangle, in pixels. Required. - "top": 0, # The distance from the top edge if the image to - the top edge of the rectangle, in pixels. Required. - "width": 0 # The width of the rectangle, in pixels. - Required. - }, - "qualityForRecognition": "str" # Quality of face image for - recognition. Required. Known values are: "low", "medium", and "high". 
- } - } - """ - error_map: MutableMapping[int, Type[HttpResponseError]] = { - 401: ClientAuthenticationError, - 404: ResourceNotFoundError, - 409: ResourceExistsError, - 304: ResourceNotModifiedError, - } - error_map.update(kwargs.pop("error_map", {}) or {}) - - _headers = case_insensitive_dict(kwargs.pop("headers", {}) or {}) - _params = kwargs.pop("params", {}) or {} - - content_type: Optional[str] = kwargs.pop("content_type", _headers.pop("Content-Type", None)) - cls: ClsType[_models.CreateLivenessWithVerifySessionResult] = kwargs.pop("cls", None) - - content_type = content_type or "application/json" - _content = None - if isinstance(body, (IOBase, bytes)): - _content = body - else: - _content = json.dumps(body, cls=SdkJSONEncoder, exclude_readonly=True) # type: ignore - - _request = build_face_session_create_liveness_with_verify_session_request( - content_type=content_type, - content=_content, - headers=_headers, - params=_params, - ) - path_format_arguments = { - "endpoint": self._serialize.url("self._config.endpoint", self._config.endpoint, "str", skip_quote=True), - "apiVersion": self._serialize.url("self._config.api_version", self._config.api_version, "str"), - } - _request.url = self._client.format_url(_request.url, **path_format_arguments) - - _stream = kwargs.pop("stream", False) - pipeline_response: PipelineResponse = self._client._pipeline.run( # pylint: disable=protected-access - _request, stream=_stream, **kwargs - ) - - response = pipeline_response.http_response - - if response.status_code not in [200]: - if _stream: - response.read() # Load the body in memory and close the socket - map_error(status_code=response.status_code, response=response, error_map=error_map) - error = _deserialize(_models.FaceErrorResponse, response.json()) - raise HttpResponseError(response=response, model=error) - - if _stream: - deserialized = response.iter_bytes() - else: - deserialized = _deserialize(_models.CreateLivenessWithVerifySessionResult, response.json()) - - if cls: - return cls(pipeline_response, deserialized, {}) # type: ignore - - return deserialized # type: ignore - - @overload - def _create_liveness_with_verify_session_with_verify_image( # pylint: disable=protected-access,name-too-long - self, body: _models._models.CreateLivenessWithVerifySessionContent, **kwargs: Any - ) -> _models.CreateLivenessWithVerifySessionResult: ... - @overload - def _create_liveness_with_verify_session_with_verify_image( # pylint: disable=name-too-long - self, body: JSON, **kwargs: Any - ) -> _models.CreateLivenessWithVerifySessionResult: ... - - @distributed_trace - def _create_liveness_with_verify_session_with_verify_image( # pylint: disable=name-too-long - self, body: Union[_models._models.CreateLivenessWithVerifySessionContent, JSON], **kwargs: Any - ) -> _models.CreateLivenessWithVerifySessionResult: - # pylint: disable=line-too-long - """Create a new liveness session with verify. Provide the verify image during session creation. - - A session is best for client device scenarios where developers want to authorize a client - device to perform only a liveness detection without granting full access to their resource. - Created sessions have a limited life span and only authorize clients to perform the desired - action before access is expired. - - Permissions includes... - > - * - - - * Ability to call /detectLivenessWithVerify/singleModal for up to 3 retries. - * A token lifetime of 10 minutes. - - .. 
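The overloads above are private; callers go through the public `create_liveness_with_verify_session`, which in 1.0.0b1 accepts a `CreateLivenessSessionContent` body plus a `verify_image` keyword (1.0.0b2 changes the body type to `CreateLivenessWithVerifySessionContent`, per the changelog). A sketch against the 1.0.0b1 surface, with placeholder inputs:

```python
import uuid

from azure.core.credentials import AzureKeyCredential
from azure.ai.vision.face import FaceSessionClient
from azure.ai.vision.face.models import CreateLivenessSessionContent

with FaceSessionClient(endpoint="<endpoint>", credential=AzureKeyCredential("<key>")) as session_client:
    with open("verify-image.jpg", "rb") as fd:  # placeholder reference image
        verify_image = fd.read()
    created = session_client.create_liveness_with_verify_session(
        CreateLivenessSessionContent(
            liveness_operation_mode="Passive",
            device_correlation_id=str(uuid.uuid4()),
        ),
        verify_image=verify_image,  # providing the image at creation is the recommended option
    )
    print(f"sessionId: {created.session_id}")
```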
- - [!NOTE] - - * - - - * Client access can be revoked by deleting the session using the Delete Liveness With Verify - Session operation. - * To retrieve a result, use the Get Liveness With Verify Session. - * To audit the individual requests that a client has made to your resource, use the List - Liveness With Verify Session Audit Entries. - - - Recommended Option: VerifyImage is provided during session creation. - - :param body: Is either a CreateLivenessWithVerifySessionContent type or a JSON type. Required. - :type body: ~azure.ai.vision.face.models._models.CreateLivenessWithVerifySessionContent or JSON - :return: CreateLivenessWithVerifySessionResult. The CreateLivenessWithVerifySessionResult is - compatible with MutableMapping - :rtype: ~azure.ai.vision.face.models.CreateLivenessWithVerifySessionResult - :raises ~azure.core.exceptions.HttpResponseError: - - Example: - .. code-block:: python - - # JSON input template you can fill out and use as your body input. - body = { - "Parameters": { - "livenessOperationMode": "str", # Type of liveness mode the client - should follow. Required. Known values are: "Passive" and "PassiveActive". - "authTokenTimeToLiveInSeconds": 0, # Optional. Seconds the session - should last for. Range is 60 to 86400 seconds. Default value is 600. - "deviceCorrelationId": "str", # Optional. Unique Guid per each - end-user device. This is to provide rate limiting and anti-hammering. If - 'deviceCorrelationIdSetInClient' is true in this request, this - 'deviceCorrelationId' must be null. - "deviceCorrelationIdSetInClient": bool, # Optional. Whether or not - to allow client to set their own 'deviceCorrelationId' via the Vision SDK. - Default is false, and 'deviceCorrelationId' must be set in this request body. - "sendResultsToClient": bool # Optional. Whether or not to allow a - '200 - Success' response body to be sent to the client, which may be - undesirable for security reasons. Default is false, clients will receive a - '204 - NoContent' empty body response. Regardless of selection, calling - Session GetResult will always contain a response body enabling business logic - to be implemented. - }, - "VerifyImage": filetype - } - - # response body for status code(s): 200 - response == { - "authToken": "str", # Bearer token to provide authentication for the Vision - SDK running on a client application. This Bearer token has limited permissions to - perform only the required action and expires after the TTL time. It is also - auditable. Required. - "sessionId": "str", # The unique session ID of the created session. It will - expire 48 hours after it was created or may be deleted sooner using the - corresponding Session DELETE operation. Required. - "verifyImage": { - "faceRectangle": { - "height": 0, # The height of the rectangle, in pixels. - Required. - "left": 0, # The distance from the left edge if the image to - the left edge of the rectangle, in pixels. Required. - "top": 0, # The distance from the top edge if the image to - the top edge of the rectangle, in pixels. Required. - "width": 0 # The width of the rectangle, in pixels. - Required. - }, - "qualityForRecognition": "str" # Quality of face image for - recognition. Required. Known values are: "low", "medium", and "high". 
- } - } - """ - error_map: MutableMapping[int, Type[HttpResponseError]] = { - 401: ClientAuthenticationError, - 404: ResourceNotFoundError, - 409: ResourceExistsError, - 304: ResourceNotModifiedError, - } - error_map.update(kwargs.pop("error_map", {}) or {}) - - _headers = kwargs.pop("headers", {}) or {} - _params = kwargs.pop("params", {}) or {} - - cls: ClsType[_models.CreateLivenessWithVerifySessionResult] = kwargs.pop("cls", None) - - _body = body.as_dict() if isinstance(body, _model_base.Model) else body - _file_fields: List[str] = ["VerifyImage"] - _data_fields: List[str] = ["Parameters"] - _files, _data = prepare_multipart_form_data(_body, _file_fields, _data_fields) - - _request = build_face_session_create_liveness_with_verify_session_with_verify_image_request( - files=_files, - data=_data, - headers=_headers, - params=_params, - ) - path_format_arguments = { - "endpoint": self._serialize.url("self._config.endpoint", self._config.endpoint, "str", skip_quote=True), - "apiVersion": self._serialize.url("self._config.api_version", self._config.api_version, "str"), - } - _request.url = self._client.format_url(_request.url, **path_format_arguments) - - _stream = kwargs.pop("stream", False) - pipeline_response: PipelineResponse = self._client._pipeline.run( # pylint: disable=protected-access - _request, stream=_stream, **kwargs - ) - - response = pipeline_response.http_response - - if response.status_code not in [200]: - if _stream: - response.read() # Load the body in memory and close the socket - map_error(status_code=response.status_code, response=response, error_map=error_map) - error = _deserialize(_models.FaceErrorResponse, response.json()) - raise HttpResponseError(response=response, model=error) - - if _stream: - deserialized = response.iter_bytes() - else: - deserialized = _deserialize(_models.CreateLivenessWithVerifySessionResult, response.json()) - - if cls: - return cls(pipeline_response, deserialized, {}) # type: ignore - - return deserialized # type: ignore - - @distributed_trace - def delete_liveness_with_verify_session( # pylint: disable=inconsistent-return-statements - self, session_id: str, **kwargs: Any - ) -> None: - """Delete all session related information for matching the specified session id. - - .. - - [!NOTE] - Deleting a session deactivates the Session Auth Token by blocking future API calls made with - that Auth Token. While this can be used to remove any access for that token, those requests - will still count towards overall resource rate limits. It's best to leverage TokenTTL to limit - length of tokens in the case that it is misused. - - :param session_id: The unique ID to reference this session. Required. 
- :type session_id: str - :return: None - :rtype: None - :raises ~azure.core.exceptions.HttpResponseError: - """ - error_map: MutableMapping[int, Type[HttpResponseError]] = { - 401: ClientAuthenticationError, - 404: ResourceNotFoundError, - 409: ResourceExistsError, - 304: ResourceNotModifiedError, - } - error_map.update(kwargs.pop("error_map", {}) or {}) - - _headers = kwargs.pop("headers", {}) or {} - _params = kwargs.pop("params", {}) or {} - - cls: ClsType[None] = kwargs.pop("cls", None) - - _request = build_face_session_delete_liveness_with_verify_session_request( - session_id=session_id, - headers=_headers, - params=_params, - ) - path_format_arguments = { - "endpoint": self._serialize.url("self._config.endpoint", self._config.endpoint, "str", skip_quote=True), - "apiVersion": self._serialize.url("self._config.api_version", self._config.api_version, "str"), - } - _request.url = self._client.format_url(_request.url, **path_format_arguments) - - _stream = False - pipeline_response: PipelineResponse = self._client._pipeline.run( # pylint: disable=protected-access - _request, stream=_stream, **kwargs - ) - - response = pipeline_response.http_response - - if response.status_code not in [200]: - if _stream: - response.read() # Load the body in memory and close the socket - map_error(status_code=response.status_code, response=response, error_map=error_map) - error = _deserialize(_models.FaceErrorResponse, response.json()) - raise HttpResponseError(response=response, model=error) - - if cls: - return cls(pipeline_response, None, {}) # type: ignore - - @distributed_trace - def get_liveness_with_verify_session_result( - self, session_id: str, **kwargs: Any - ) -> _models.LivenessWithVerifySession: - # pylint: disable=line-too-long - """Get session result of detectLivenessWithVerify/singleModal call. - - :param session_id: The unique ID to reference this session. Required. - :type session_id: str - :return: LivenessWithVerifySession. The LivenessWithVerifySession is compatible with - MutableMapping - :rtype: ~azure.ai.vision.face.models.LivenessWithVerifySession - :raises ~azure.core.exceptions.HttpResponseError: - - Example: - .. code-block:: python - - # response body for status code(s): 200 - response == { - "createdDateTime": "2020-02-20 00:00:00", # DateTime when this session was - created. Required. - "id": "str", # The unique ID to reference this session. Required. - "sessionExpired": bool, # Whether or not the session is expired. Required. - "status": "str", # The current status of the session. Required. Known values - are: "NotStarted", "Started", and "ResultAvailable". - "authTokenTimeToLiveInSeconds": 0, # Optional. Seconds the session should - last for. Range is 60 to 86400 seconds. Default value is 600. - "deviceCorrelationId": "str", # Optional. Unique Guid per each end-user - device. This is to provide rate limiting and anti-hammering. If - 'deviceCorrelationIdSetInClient' is true in this request, this - 'deviceCorrelationId' must be null. - "result": { - "clientRequestId": "str", # The unique clientRequestId that is sent - by the client in the 'client-request-id' header. Required. - "digest": "str", # The server calculated digest for this request. If - the client reported digest differs from the server calculated digest, then - the message integrity between the client and service has been compromised and - the result should not be trusted. For more information, see how to guides on - how to leverage this value to secure your end-to-end solution. Required. 
- "id": 0, # The unique id to refer to this audit request. Use this id - with the 'start' query parameter to continue on to the next page of audit - results. Required. - "receivedDateTime": "2020-02-20 00:00:00", # The UTC DateTime that - the request was received. Required. - "request": { - "contentType": "str", # The content type of the request. - Required. - "method": "str", # The HTTP method of the request (i.e., - GET, POST, DELETE). Required. - "url": "str", # The relative URL and query of the liveness - request. Required. - "contentLength": 0, # Optional. The length of the request - body in bytes. - "userAgent": "str" # Optional. The user agent used to submit - the request. - }, - "requestId": "str", # The unique requestId that is returned by the - service to the client in the 'apim-request-id' header. Required. - "response": { - "body": { - "livenessDecision": "str", # Optional. The liveness - classification for the target face. Known values are: "uncertain", - "realface", and "spoofface". - "modelVersionUsed": "str", # Optional. The model - version used for liveness classification. Known values are: - "2020-02-15-preview.01", "2021-11-12-preview.03", - "2022-10-15-preview.04", and "2023-03-02-preview.05". - "target": { - "faceRectangle": { - "height": 0, # The height of the - rectangle, in pixels. Required. - "left": 0, # The distance from the - left edge if the image to the left edge of the rectangle, in - pixels. Required. - "top": 0, # The distance from the - top edge if the image to the top edge of the rectangle, in - pixels. Required. - "width": 0 # The width of the - rectangle, in pixels. Required. - }, - "fileName": "str", # The file name which - contains the face rectangle where the liveness classification was - made on. Required. - "imageType": "str", # The image type which - contains the face rectangle where the liveness classification was - made on. Required. Known values are: "Color", "Infrared", and - "Depth". - "timeOffsetWithinFile": 0 # The time offset - within the file of the frame which contains the face rectangle - where the liveness classification was made on. Required. - }, - "verifyResult": { - "isIdentical": bool, # Whether the target - liveness face and comparison image face match. Required. - "matchConfidence": 0.0, # The target face - liveness face and comparison image face verification confidence. - Required. - "verifyImage": { - "faceRectangle": { - "height": 0, # The height of - the rectangle, in pixels. Required. - "left": 0, # The distance - from the left edge if the image to the left edge of the - rectangle, in pixels. Required. - "top": 0, # The distance - from the top edge if the image to the top edge of the - rectangle, in pixels. Required. - "width": 0 # The width of - the rectangle, in pixels. Required. - }, - "qualityForRecognition": "str" # - Quality of face image for recognition. Required. Known values - are: "low", "medium", and "high". - } - } - }, - "latencyInMilliseconds": 0, # The server measured latency - for this request in milliseconds. Required. - "statusCode": 0 # The HTTP status code returned to the - client. Required. - }, - "sessionId": "str" # The unique sessionId of the created session. It - will expire 48 hours after it was created or may be deleted sooner using the - corresponding session DELETE operation. Required. - }, - "sessionStartDateTime": "2020-02-20 00:00:00" # Optional. DateTime when this - session was started by the client. 
- } - """ - error_map: MutableMapping[int, Type[HttpResponseError]] = { - 401: ClientAuthenticationError, - 404: ResourceNotFoundError, - 409: ResourceExistsError, - 304: ResourceNotModifiedError, - } - error_map.update(kwargs.pop("error_map", {}) or {}) - - _headers = kwargs.pop("headers", {}) or {} - _params = kwargs.pop("params", {}) or {} - - cls: ClsType[_models.LivenessWithVerifySession] = kwargs.pop("cls", None) - - _request = build_face_session_get_liveness_with_verify_session_result_request( - session_id=session_id, - headers=_headers, - params=_params, - ) - path_format_arguments = { - "endpoint": self._serialize.url("self._config.endpoint", self._config.endpoint, "str", skip_quote=True), - "apiVersion": self._serialize.url("self._config.api_version", self._config.api_version, "str"), - } - _request.url = self._client.format_url(_request.url, **path_format_arguments) - - _stream = kwargs.pop("stream", False) - pipeline_response: PipelineResponse = self._client._pipeline.run( # pylint: disable=protected-access - _request, stream=_stream, **kwargs - ) - - response = pipeline_response.http_response - - if response.status_code not in [200]: - if _stream: - response.read() # Load the body in memory and close the socket - map_error(status_code=response.status_code, response=response, error_map=error_map) - error = _deserialize(_models.FaceErrorResponse, response.json()) - raise HttpResponseError(response=response, model=error) - - if _stream: - deserialized = response.iter_bytes() - else: - deserialized = _deserialize(_models.LivenessWithVerifySession, response.json()) - - if cls: - return cls(pipeline_response, deserialized, {}) # type: ignore - - return deserialized # type: ignore - - @distributed_trace - def get_liveness_with_verify_sessions( - self, *, start: Optional[str] = None, top: Optional[int] = None, **kwargs: Any - ) -> List[_models.LivenessSessionItem]: - # pylint: disable=line-too-long - """Lists sessions for /detectLivenessWithVerify/SingleModal. - - List sessions from the last sessionId greater than the "start". - - The result should be ordered by sessionId in ascending order. - - :keyword start: List resources greater than the "start". It contains no more than 64 - characters. Default is empty. Default value is None. - :paramtype start: str - :keyword top: The number of items to list, ranging in [1, 1000]. Default is 1000. Default value - is None. - :paramtype top: int - :return: list of LivenessSessionItem - :rtype: list[~azure.ai.vision.face.models.LivenessSessionItem] - :raises ~azure.core.exceptions.HttpResponseError: - - Example: - .. code-block:: python - - # response body for status code(s): 200 - response == [ - { - "createdDateTime": "2020-02-20 00:00:00", # DateTime when this - session was created. Required. - "id": "str", # The unique ID to reference this session. Required. - "sessionExpired": bool, # Whether or not the session is expired. - Required. - "authTokenTimeToLiveInSeconds": 0, # Optional. Seconds the session - should last for. Range is 60 to 86400 seconds. Default value is 600. - "deviceCorrelationId": "str", # Optional. Unique Guid per each - end-user device. This is to provide rate limiting and anti-hammering. If - 'deviceCorrelationIdSetInClient' is true in this request, this - 'deviceCorrelationId' must be null. - "sessionStartDateTime": "2020-02-20 00:00:00" # Optional. DateTime - when this session was started by the client. 
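Once the device has run the liveness calls, the result can be polled with `get_liveness_with_verify_session_result` and the session revoked with `delete_liveness_with_verify_session`, both shown above. A rough sketch, reusing `session_client` and `session_id` from the creation step; the snake_case attribute names are assumed from the REST schema in the docstrings:

```python
import time

# Poll until the service reports a result; documented statuses are
# "NotStarted", "Started", and "ResultAvailable".
while True:
    session = session_client.get_liveness_with_verify_session_result(session_id)
    if session.status == "ResultAvailable" or session.session_expired:
        break
    time.sleep(2)  # arbitrary polling interval, illustration only

print(session.as_dict())  # full result, including the verify outcome

# Deleting the session revokes the session auth token; per the note above,
# blocked requests still count toward resource rate limits.
session_client.delete_liveness_with_verify_session(session_id)
```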
- } - ] - """ - error_map: MutableMapping[int, Type[HttpResponseError]] = { - 401: ClientAuthenticationError, - 404: ResourceNotFoundError, - 409: ResourceExistsError, - 304: ResourceNotModifiedError, - } - error_map.update(kwargs.pop("error_map", {}) or {}) - - _headers = kwargs.pop("headers", {}) or {} - _params = kwargs.pop("params", {}) or {} - - cls: ClsType[List[_models.LivenessSessionItem]] = kwargs.pop("cls", None) - - _request = build_face_session_get_liveness_with_verify_sessions_request( - start=start, - top=top, - headers=_headers, - params=_params, - ) - path_format_arguments = { - "endpoint": self._serialize.url("self._config.endpoint", self._config.endpoint, "str", skip_quote=True), - "apiVersion": self._serialize.url("self._config.api_version", self._config.api_version, "str"), - } - _request.url = self._client.format_url(_request.url, **path_format_arguments) - - _stream = kwargs.pop("stream", False) - pipeline_response: PipelineResponse = self._client._pipeline.run( # pylint: disable=protected-access - _request, stream=_stream, **kwargs - ) - - response = pipeline_response.http_response - - if response.status_code not in [200]: - if _stream: - response.read() # Load the body in memory and close the socket - map_error(status_code=response.status_code, response=response, error_map=error_map) - error = _deserialize(_models.FaceErrorResponse, response.json()) - raise HttpResponseError(response=response, model=error) - - if _stream: - deserialized = response.iter_bytes() - else: - deserialized = _deserialize(List[_models.LivenessSessionItem], response.json()) - - if cls: - return cls(pipeline_response, deserialized, {}) # type: ignore - - return deserialized # type: ignore - - @distributed_trace - def get_liveness_with_verify_session_audit_entries( # pylint: disable=name-too-long - self, session_id: str, *, start: Optional[str] = None, top: Optional[int] = None, **kwargs: Any - ) -> List[_models.LivenessSessionAuditEntry]: - # pylint: disable=line-too-long - """Gets session requests and response body for the session. - - :param session_id: The unique ID to reference this session. Required. - :type session_id: str - :keyword start: List resources greater than the "start". It contains no more than 64 - characters. Default is empty. Default value is None. - :paramtype start: str - :keyword top: The number of items to list, ranging in [1, 1000]. Default is 1000. Default value - is None. - :paramtype top: int - :return: list of LivenessSessionAuditEntry - :rtype: list[~azure.ai.vision.face.models.LivenessSessionAuditEntry] - :raises ~azure.core.exceptions.HttpResponseError: - - Example: - .. code-block:: python - - # response body for status code(s): 200 - response == [ - { - "clientRequestId": "str", # The unique clientRequestId that is sent - by the client in the 'client-request-id' header. Required. - "digest": "str", # The server calculated digest for this request. If - the client reported digest differs from the server calculated digest, then - the message integrity between the client and service has been compromised and - the result should not be trusted. For more information, see how to guides on - how to leverage this value to secure your end-to-end solution. Required. - "id": 0, # The unique id to refer to this audit request. Use this id - with the 'start' query parameter to continue on to the next page of audit - results. Required. - "receivedDateTime": "2020-02-20 00:00:00", # The UTC DateTime that - the request was received. Required. 
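Session listing is keyset-paginated: each call returns at most `top` items in ascending `sessionId` order, and the last id of a page seeds the `start` of the next. A sketch under the same assumptions as the earlier snippets:

```python
# Walk all liveness-with-verify sessions page by page.
start = None
while True:
    page = session_client.get_liveness_with_verify_sessions(start=start, top=100)
    for item in page:
        print(item.id, item.created_date_time, item.session_expired)
    if len(page) < 100:
        break  # a short page means we have reached the end
    start = page[-1].id  # last sessionId seeds the next page
```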
- "request": { - "contentType": "str", # The content type of the request. - Required. - "method": "str", # The HTTP method of the request (i.e., - GET, POST, DELETE). Required. - "url": "str", # The relative URL and query of the liveness - request. Required. - "contentLength": 0, # Optional. The length of the request - body in bytes. - "userAgent": "str" # Optional. The user agent used to submit - the request. - }, - "requestId": "str", # The unique requestId that is returned by the - service to the client in the 'apim-request-id' header. Required. - "response": { - "body": { - "livenessDecision": "str", # Optional. The liveness - classification for the target face. Known values are: "uncertain", - "realface", and "spoofface". - "modelVersionUsed": "str", # Optional. The model - version used for liveness classification. Known values are: - "2020-02-15-preview.01", "2021-11-12-preview.03", - "2022-10-15-preview.04", and "2023-03-02-preview.05". - "target": { - "faceRectangle": { - "height": 0, # The height of the - rectangle, in pixels. Required. - "left": 0, # The distance from the - left edge if the image to the left edge of the rectangle, in - pixels. Required. - "top": 0, # The distance from the - top edge if the image to the top edge of the rectangle, in - pixels. Required. - "width": 0 # The width of the - rectangle, in pixels. Required. - }, - "fileName": "str", # The file name which - contains the face rectangle where the liveness classification was - made on. Required. - "imageType": "str", # The image type which - contains the face rectangle where the liveness classification was - made on. Required. Known values are: "Color", "Infrared", and - "Depth". - "timeOffsetWithinFile": 0 # The time offset - within the file of the frame which contains the face rectangle - where the liveness classification was made on. Required. - }, - "verifyResult": { - "isIdentical": bool, # Whether the target - liveness face and comparison image face match. Required. - "matchConfidence": 0.0, # The target face - liveness face and comparison image face verification confidence. - Required. - "verifyImage": { - "faceRectangle": { - "height": 0, # The height of - the rectangle, in pixels. Required. - "left": 0, # The distance - from the left edge if the image to the left edge of the - rectangle, in pixels. Required. - "top": 0, # The distance - from the top edge if the image to the top edge of the - rectangle, in pixels. Required. - "width": 0 # The width of - the rectangle, in pixels. Required. - }, - "qualityForRecognition": "str" # - Quality of face image for recognition. Required. Known values - are: "low", "medium", and "high". - } - } - }, - "latencyInMilliseconds": 0, # The server measured latency - for this request in milliseconds. Required. - "statusCode": 0 # The HTTP status code returned to the - client. Required. - }, - "sessionId": "str" # The unique sessionId of the created session. It - will expire 48 hours after it was created or may be deleted sooner using the - corresponding session DELETE operation. Required. 
- } - ] - """ - error_map: MutableMapping[int, Type[HttpResponseError]] = { - 401: ClientAuthenticationError, - 404: ResourceNotFoundError, - 409: ResourceExistsError, - 304: ResourceNotModifiedError, - } - error_map.update(kwargs.pop("error_map", {}) or {}) - - _headers = kwargs.pop("headers", {}) or {} - _params = kwargs.pop("params", {}) or {} - - cls: ClsType[List[_models.LivenessSessionAuditEntry]] = kwargs.pop("cls", None) - - _request = build_face_session_get_liveness_with_verify_session_audit_entries_request( - session_id=session_id, - start=start, - top=top, - headers=_headers, - params=_params, - ) - path_format_arguments = { - "endpoint": self._serialize.url("self._config.endpoint", self._config.endpoint, "str", skip_quote=True), - "apiVersion": self._serialize.url("self._config.api_version", self._config.api_version, "str"), - } - _request.url = self._client.format_url(_request.url, **path_format_arguments) - - _stream = kwargs.pop("stream", False) - pipeline_response: PipelineResponse = self._client._pipeline.run( # pylint: disable=protected-access - _request, stream=_stream, **kwargs - ) - - response = pipeline_response.http_response - - if response.status_code not in [200]: - if _stream: - response.read() # Load the body in memory and close the socket - map_error(status_code=response.status_code, response=response, error_map=error_map) - error = _deserialize(_models.FaceErrorResponse, response.json()) - raise HttpResponseError(response=response, model=error) - - if _stream: - deserialized = response.iter_bytes() - else: - deserialized = _deserialize(List[_models.LivenessSessionAuditEntry], response.json()) - - if cls: - return cls(pipeline_response, deserialized, {}) # type: ignore - - return deserialized # type: ignore diff --git a/sdk/face/azure-ai-vision-face/azure/ai/vision/face/_patch.py b/sdk/face/azure-ai-vision-face/azure/ai/vision/face/_patch.py index 54c9b916747b..98b897fa68e2 100644 --- a/sdk/face/azure-ai-vision-face/azure/ai/vision/face/_patch.py +++ b/sdk/face/azure-ai-vision-face/azure/ai/vision/face/_patch.py @@ -14,7 +14,7 @@ from . import models as _models from ._client import FaceClient as FaceClientGenerated from ._client import FaceSessionClient as FaceSessionClientGenerated -from ._operations._operations import JSON, _Unset +from .operations._operations import JSON, _Unset class FaceClient(FaceClientGenerated): @@ -27,7 +27,7 @@ class FaceClient(FaceClientGenerated): AzureKeyCredential type or a TokenCredential type. Required. :type credential: ~azure.core.credentials.AzureKeyCredential or ~azure.core.credentials.TokenCredential - :keyword api_version: API Version. Default value is "v1.1-preview.1". Note that overriding this + :keyword api_version: API Version. Default value is "v1.2-preview.1". Note that overriding this default value may result in unsupported behavior. :paramtype api_version: str or ~azure.ai.vision.face.models.Versions """ @@ -95,64 +95,22 @@ def detect_from_url( face_id_time_to_live: Optional[int] = None, **kwargs: Any, ) -> List[_models.FaceDetectionResult]: - # pylint: disable=line-too-long """Detect human faces in an image, return face rectangles, and optionally with faceIds, landmarks, and attributes. - .. - - [!IMPORTANT] - To mitigate potential misuse that can subject people to stereotyping, discrimination, or - unfair denial of services, we are retiring Face API attributes that predict emotion, gender, - age, smile, facial hair, hair, and makeup. 
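The audit listing implemented just above pages the same way, except the cursor is the numeric audit `id` rather than the session id. A sketch, again assuming `session_client` and `session_id` from earlier:

```python
# Dump the audit trail for one session, following the numeric id cursor.
start = None
while True:
    entries = session_client.get_liveness_with_verify_session_audit_entries(
        session_id, start=start, top=200
    )
    for entry in entries:
        # Per the docstring, a mismatch between the client-reported digest and
        # this server-calculated digest means the result should not be trusted.
        print(entry.id, entry.received_date_time, entry.digest)
    if len(entries) < 200:
        break
    start = str(entries[-1].id)  # 'start' is a string query parameter
```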
Read more about this decision - https://azure.microsoft.com/blog/responsible-ai-investments-and-safeguards-for-facial-recognition/. - - - * - - - * No image will be stored. Only the extracted face feature(s) will be stored on server. The - faceId is an identifier of the face feature and will be used in Face - Identify, Face - Verify, - and Face - Find Similar. The stored face features will expire and be deleted at the time - specified by faceIdTimeToLive after the original detection call. - * Optional parameters include faceId, landmarks, and attributes. Attributes include headPose, - glasses, occlusion, accessories, blur, exposure, noise, mask, and qualityForRecognition. Some - of the results returned for specific attributes may not be highly accurate. - * JPEG, PNG, GIF (the first frame), and BMP format are supported. The allowed image file size - is from 1KB to 6MB. - * The minimum detectable face size is 36x36 pixels in an image no larger than 1920x1080 pixels. - Images with dimensions higher than 1920x1080 pixels will need a proportionally larger minimum - face size. - * Up to 100 faces can be returned for an image. Faces are ranked by face rectangle size from - large to small. - * For optimal results when querying Face - Identify, Face - Verify, and Face - Find Similar - ('returnFaceId' is true), please use faces that are: frontal, clear, and with a minimum size of - 200x200 pixels (100 pixels between eyes). - * Different 'detectionModel' values can be provided. To use and compare different detection - models, please refer to - https://learn.microsoft.com/azure/ai-services/computer-vision/how-to/specify-detection-model - - * 'detection_02': Face attributes and landmarks are disabled if you choose this detection - model. - * 'detection_03': Face attributes (mask and headPose only) and landmarks are supported if you - choose this detection model. - - * Different 'recognitionModel' values are provided. If follow-up operations like Verify, - Identify, Find Similar are needed, please specify the recognition model with 'recognitionModel' - parameter. The default value for 'recognitionModel' is 'recognition_01', if latest model - needed, please explicitly specify the model you need in this parameter. Once specified, the - detected faceIds will be associated with the specified recognition model. More details, please - refer to - https://learn.microsoft.com/azure/ai-services/computer-vision/how-to/specify-recognition-model. + Please refer to + https://learn.microsoft.com/rest/api/face/face-detection-operations/detect-from-url for more + details. :param body: Is either a JSON type or a IO[bytes] type. Required. :type body: JSON or IO[bytes] - :keyword url: URL of input image. Required when body is not set. + :keyword url: URL of input image. Required. :paramtype url: str :keyword detection_model: The 'detectionModel' associated with the detected faceIds. Supported 'detectionModel' values include 'detection_01', 'detection_02' and 'detection_03'. The default - value is 'detection_01'. Known values are: "detection_01", "detection_02", and "detection_03". - Required. + value is 'detection_01'. 'detection_03' is recommended since its accuracy is improved on + smaller faces (64x64 pixels) and rotated face orientations. Known values are: "detection_01", + "detection_02", and "detection_03". Default value is None. :paramtype detection_model: str or ~azure.ai.vision.face.models.FaceDetectionModel :keyword recognition_model: The 'recognitionModel' associated with the detected faceIds. 
Supported 'recognitionModel' values include 'recognition_01', 'recognition_02', @@ -160,9 +118,10 @@ def detect_from_url( is recommended since its accuracy is improved on faces wearing masks compared with 'recognition_03', and its overall accuracy is improved compared with 'recognition_01' and 'recognition_02'. Known values are: "recognition_01", "recognition_02", "recognition_03", and - "recognition_04". Required. + "recognition_04". Default value is None. :paramtype recognition_model: str or ~azure.ai.vision.face.models.FaceRecognitionModel - :keyword return_face_id: Return faceIds of the detected faces or not. Required. + :keyword return_face_id: Return faceIds of the detected faces or not. The default value is + true. Default value is None. :paramtype return_face_id: bool :keyword return_face_attributes: Analyze and return the one or more specified face attributes in the comma-separated string like 'returnFaceAttributes=headPose,glasses'. Face attribute @@ -172,7 +131,7 @@ def detect_from_url( value is false. Default value is None. :paramtype return_face_landmarks: bool :keyword return_recognition_model: Return 'recognitionModel' or not. The default value is - false. Default value is None. + false. This is only applicable when returnFaceId = true. Default value is None. :paramtype return_recognition_model: bool :keyword face_id_time_to_live: The number of seconds for the face ID being cached. Supported range from 60 seconds up to 86400 seconds. The default value is 86400 (24 hours). Default value @@ -181,292 +140,6 @@ def detect_from_url( :return: list of FaceDetectionResult :rtype: list[~azure.ai.vision.face.models.FaceDetectionResult] :raises ~azure.core.exceptions.HttpResponseError: - - Example: - .. code-block:: python - - # JSON input template you can fill out and use as your body input. - body = { - "url": "str" # URL of input image. Required. - } - - # response body for status code(s): 200 - response == [ - { - "faceRectangle": { - "height": 0, # The height of the rectangle, in pixels. - Required. - "left": 0, # The distance from the left edge if the image to - the left edge of the rectangle, in pixels. Required. - "top": 0, # The distance from the top edge if the image to - the top edge of the rectangle, in pixels. Required. - "width": 0 # The width of the rectangle, in pixels. - Required. - }, - "faceAttributes": { - "accessories": [ - { - "confidence": 0.0, # Confidence level of the - accessory type. Range between [0,1]. Required. - "type": "str" # Type of the accessory. - Required. Known values are: "headwear", "glasses", and "mask". - } - ], - "age": 0.0, # Optional. Age in years. - "blur": { - "blurLevel": "str", # An enum value indicating level - of blurriness. Required. Known values are: "low", "medium", and - "high". - "value": 0.0 # A number indicating level of - blurriness ranging from 0 to 1. Required. - }, - "exposure": { - "exposureLevel": "str", # An enum value indicating - level of exposure. Required. Known values are: "underExposure", - "goodExposure", and "overExposure". - "value": 0.0 # A number indicating level of exposure - level ranging from 0 to 1. [0, 0.25) is under exposure. [0.25, 0.75) - is good exposure. [0.75, 1] is over exposure. Required. - }, - "facialHair": { - "beard": 0.0, # A number ranging from 0 to 1 - indicating a level of confidence associated with a property. - Required. - "moustache": 0.0, # A number ranging from 0 to 1 - indicating a level of confidence associated with a property. - Required. 
- "sideburns": 0.0 # A number ranging from 0 to 1 - indicating a level of confidence associated with a property. - Required. - }, - "glasses": "str", # Optional. Glasses type if any of the - face. Known values are: "noGlasses", "readingGlasses", "sunglasses", and - "swimmingGoggles". - "hair": { - "bald": 0.0, # A number describing confidence level - of whether the person is bald. Required. - "hairColor": [ - { - "color": "str", # Name of the hair - color. Required. Known values are: "unknown", "white", - "gray", "blond", "brown", "red", "black", and "other". - "confidence": 0.0 # Confidence level - of the color. Range between [0,1]. Required. - } - ], - "invisible": bool # A boolean value describing - whether the hair is visible in the image. Required. - }, - "headPose": { - "pitch": 0.0, # Value of angles. Required. - "roll": 0.0, # Value of angles. Required. - "yaw": 0.0 # Value of angles. Required. - }, - "mask": { - "noseAndMouthCovered": bool, # A boolean value - indicating whether nose and mouth are covered. Required. - "type": "str" # Type of the mask. Required. Known - values are: "faceMask", "noMask", "otherMaskOrOcclusion", and - "uncertain". - }, - "noise": { - "noiseLevel": "str", # An enum value indicating - level of noise. Required. Known values are: "low", "medium", and - "high". - "value": 0.0 # A number indicating level of noise - level ranging from 0 to 1. [0, 0.25) is under exposure. [0.25, 0.75) - is good exposure. [0.75, 1] is over exposure. [0, 0.3) is low noise - level. [0.3, 0.7) is medium noise level. [0.7, 1] is high noise - level. Required. - }, - "occlusion": { - "eyeOccluded": bool, # A boolean value indicating - whether eyes are occluded. Required. - "foreheadOccluded": bool, # A boolean value - indicating whether forehead is occluded. Required. - "mouthOccluded": bool # A boolean value indicating - whether the mouth is occluded. Required. - }, - "qualityForRecognition": "str", # Optional. Properties - describing the overall image quality regarding whether the image being - used in the detection is of sufficient quality to attempt face - recognition on. Known values are: "low", "medium", and "high". - "smile": 0.0 # Optional. Smile intensity, a number between - [0,1]. - }, - "faceId": "str", # Optional. Unique faceId of the detected face, - created by detection API and it will expire 24 hours after the detection - call. To return this, it requires 'returnFaceId' parameter to be true. - "faceLandmarks": { - "eyeLeftBottom": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "eyeLeftInner": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "eyeLeftOuter": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "eyeLeftTop": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "eyeRightBottom": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "eyeRightInner": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "eyeRightOuter": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. 
- }, - "eyeRightTop": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "eyebrowLeftInner": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "eyebrowLeftOuter": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "eyebrowRightInner": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "eyebrowRightOuter": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "mouthLeft": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "mouthRight": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "noseLeftAlarOutTip": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "noseLeftAlarTop": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "noseRightAlarOutTip": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "noseRightAlarTop": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "noseRootLeft": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "noseRootRight": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "noseTip": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "pupilLeft": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "pupilRight": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "underLipBottom": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "underLipTop": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "upperLipBottom": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "upperLipTop": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - } - }, - "recognitionModel": "str" # Optional. The 'recognitionModel' - associated with this faceId. This is only returned when - 'returnRecognitionModel' is explicitly set as true. Known values are: - "recognition_01", "recognition_02", "recognition_03", and "recognition_04". 
- } - ] """ return super()._detect_from_url( body, @@ -495,62 +168,19 @@ def detect( face_id_time_to_live: Optional[int] = None, **kwargs: Any, ) -> List[_models.FaceDetectionResult]: - # pylint: disable=line-too-long """Detect human faces in an image, return face rectangles, and optionally with faceIds, landmarks, and attributes. - .. - - [!IMPORTANT] - To mitigate potential misuse that can subject people to stereotyping, discrimination, or - unfair denial of services, we are retiring Face API attributes that predict emotion, gender, - age, smile, facial hair, hair, and makeup. Read more about this decision - https://azure.microsoft.com/blog/responsible-ai-investments-and-safeguards-for-facial-recognition/. - - - * - - - * No image will be stored. Only the extracted face feature(s) will be stored on server. The - faceId is an identifier of the face feature and will be used in Face - Identify, Face - Verify, - and Face - Find Similar. The stored face features will expire and be deleted at the time - specified by faceIdTimeToLive after the original detection call. - * Optional parameters include faceId, landmarks, and attributes. Attributes include headPose, - glasses, occlusion, accessories, blur, exposure, noise, mask, and qualityForRecognition. Some - of the results returned for specific attributes may not be highly accurate. - * JPEG, PNG, GIF (the first frame), and BMP format are supported. The allowed image file size - is from 1KB to 6MB. - * The minimum detectable face size is 36x36 pixels in an image no larger than 1920x1080 pixels. - Images with dimensions higher than 1920x1080 pixels will need a proportionally larger minimum - face size. - * Up to 100 faces can be returned for an image. Faces are ranked by face rectangle size from - large to small. - * For optimal results when querying Face - Identify, Face - Verify, and Face - Find Similar - ('returnFaceId' is true), please use faces that are: frontal, clear, and with a minimum size of - 200x200 pixels (100 pixels between eyes). - * Different 'detectionModel' values can be provided. To use and compare different detection - models, please refer to - https://learn.microsoft.com/azure/ai-services/computer-vision/how-to/specify-detection-model - - * 'detection_02': Face attributes and landmarks are disabled if you choose this detection - model. - * 'detection_03': Face attributes (mask and headPose only) and landmarks are supported if you - choose this detection model. - - * Different 'recognitionModel' values are provided. If follow-up operations like Verify, - Identify, Find Similar are needed, please specify the recognition model with 'recognitionModel' - parameter. The default value for 'recognitionModel' is 'recognition_01', if latest model - needed, please explicitly specify the model you need in this parameter. Once specified, the - detected faceIds will be associated with the specified recognition model. More details, please - refer to - https://learn.microsoft.com/azure/ai-services/computer-vision/how-to/specify-recognition-model. + Please refer to https://learn.microsoft.com/rest/api/face/face-detection-operations/detect for + more details. :param image_content: The input image binary. Required. :type image_content: bytes :keyword detection_model: The 'detectionModel' associated with the detected faceIds. Supported 'detectionModel' values include 'detection_01', 'detection_02' and 'detection_03'. The default - value is 'detection_01'. Known values are: "detection_01", "detection_02", and "detection_03". - Required. 
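`detect_from_url` above and the byte-oriented `detect` below take the same model switches, now documented as optional with service-side defaults. A short sketch against the patched client, using the current enum member spellings (`DETECTION03`, `RECOGNITION04`) and a placeholder image URL:

```python
from azure.core.credentials import AzureKeyCredential
from azure.ai.vision.face import FaceClient
from azure.ai.vision.face.models import FaceDetectionModel, FaceRecognitionModel

endpoint = "https://<your-resource>.cognitiveservices.azure.com"  # placeholder
key = "<your-key>"  # placeholder

with FaceClient(endpoint=endpoint, credential=AzureKeyCredential(key)) as face_client:
    faces = face_client.detect_from_url(
        url="https://example.com/face.jpg",  # placeholder image URL
        detection_model=FaceDetectionModel.DETECTION03,  # recommended for small or rotated faces
        recognition_model=FaceRecognitionModel.RECOGNITION04,
        return_face_id=False,  # skip faceId caching when no follow-up operation is planned
    )
    for face in faces:
        rect = face.face_rectangle
        print(f"Face at ({rect.left}, {rect.top}), size {rect.width}x{rect.height}")
```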
+ value is 'detection_01'. 'detection_03' is recommended since its accuracy is improved on + smaller faces (64x64 pixels) and rotated face orientations. Known values are: "detection_01", + "detection_02", and "detection_03". Default value is None. :paramtype detection_model: str or ~azure.ai.vision.face.models.FaceDetectionModel :keyword recognition_model: The 'recognitionModel' associated with the detected faceIds. Supported 'recognitionModel' values include 'recognition_01', 'recognition_02', @@ -558,9 +188,10 @@ def detect( is recommended since its accuracy is improved on faces wearing masks compared with 'recognition_03', and its overall accuracy is improved compared with 'recognition_01' and 'recognition_02'. Known values are: "recognition_01", "recognition_02", "recognition_03", and - "recognition_04". Required. + "recognition_04". Default value is None. :paramtype recognition_model: str or ~azure.ai.vision.face.models.FaceRecognitionModel - :keyword return_face_id: Return faceIds of the detected faces or not. Required. + :keyword return_face_id: Return faceIds of the detected faces or not. The default value is + true. Default value is None. :paramtype return_face_id: bool :keyword return_face_attributes: Analyze and return the one or more specified face attributes in the comma-separated string like 'returnFaceAttributes=headPose,glasses'. Face attribute @@ -570,7 +201,7 @@ def detect( value is false. Default value is None. :paramtype return_face_landmarks: bool :keyword return_recognition_model: Return 'recognitionModel' or not. The default value is - false. Default value is None. + false. This is only applicable when returnFaceId = true. Default value is None. :paramtype return_recognition_model: bool :keyword face_id_time_to_live: The number of seconds for the face ID being cached. Supported range from 60 seconds up to 86400 seconds. The default value is 86400 (24 hours). Default value @@ -579,287 +210,6 @@ def detect( :return: list of FaceDetectionResult :rtype: list[~azure.ai.vision.face.models.FaceDetectionResult] :raises ~azure.core.exceptions.HttpResponseError: - - Example: - .. code-block:: python - - # response body for status code(s): 200 - response == [ - { - "faceRectangle": { - "height": 0, # The height of the rectangle, in pixels. - Required. - "left": 0, # The distance from the left edge if the image to - the left edge of the rectangle, in pixels. Required. - "top": 0, # The distance from the top edge if the image to - the top edge of the rectangle, in pixels. Required. - "width": 0 # The width of the rectangle, in pixels. - Required. - }, - "faceAttributes": { - "accessories": [ - { - "confidence": 0.0, # Confidence level of the - accessory type. Range between [0,1]. Required. - "type": "str" # Type of the accessory. - Required. Known values are: "headwear", "glasses", and "mask". - } - ], - "age": 0.0, # Optional. Age in years. - "blur": { - "blurLevel": "str", # An enum value indicating level - of blurriness. Required. Known values are: "low", "medium", and - "high". - "value": 0.0 # A number indicating level of - blurriness ranging from 0 to 1. Required. - }, - "exposure": { - "exposureLevel": "str", # An enum value indicating - level of exposure. Required. Known values are: "underExposure", - "goodExposure", and "overExposure". - "value": 0.0 # A number indicating level of exposure - level ranging from 0 to 1. [0, 0.25) is under exposure. [0.25, 0.75) - is good exposure. [0.75, 1] is over exposure. Required. 
- }, - "facialHair": { - "beard": 0.0, # A number ranging from 0 to 1 - indicating a level of confidence associated with a property. - Required. - "moustache": 0.0, # A number ranging from 0 to 1 - indicating a level of confidence associated with a property. - Required. - "sideburns": 0.0 # A number ranging from 0 to 1 - indicating a level of confidence associated with a property. - Required. - }, - "glasses": "str", # Optional. Glasses type if any of the - face. Known values are: "noGlasses", "readingGlasses", "sunglasses", and - "swimmingGoggles". - "hair": { - "bald": 0.0, # A number describing confidence level - of whether the person is bald. Required. - "hairColor": [ - { - "color": "str", # Name of the hair - color. Required. Known values are: "unknown", "white", - "gray", "blond", "brown", "red", "black", and "other". - "confidence": 0.0 # Confidence level - of the color. Range between [0,1]. Required. - } - ], - "invisible": bool # A boolean value describing - whether the hair is visible in the image. Required. - }, - "headPose": { - "pitch": 0.0, # Value of angles. Required. - "roll": 0.0, # Value of angles. Required. - "yaw": 0.0 # Value of angles. Required. - }, - "mask": { - "noseAndMouthCovered": bool, # A boolean value - indicating whether nose and mouth are covered. Required. - "type": "str" # Type of the mask. Required. Known - values are: "faceMask", "noMask", "otherMaskOrOcclusion", and - "uncertain". - }, - "noise": { - "noiseLevel": "str", # An enum value indicating - level of noise. Required. Known values are: "low", "medium", and - "high". - "value": 0.0 # A number indicating level of noise - level ranging from 0 to 1. [0, 0.25) is under exposure. [0.25, 0.75) - is good exposure. [0.75, 1] is over exposure. [0, 0.3) is low noise - level. [0.3, 0.7) is medium noise level. [0.7, 1] is high noise - level. Required. - }, - "occlusion": { - "eyeOccluded": bool, # A boolean value indicating - whether eyes are occluded. Required. - "foreheadOccluded": bool, # A boolean value - indicating whether forehead is occluded. Required. - "mouthOccluded": bool # A boolean value indicating - whether the mouth is occluded. Required. - }, - "qualityForRecognition": "str", # Optional. Properties - describing the overall image quality regarding whether the image being - used in the detection is of sufficient quality to attempt face - recognition on. Known values are: "low", "medium", and "high". - "smile": 0.0 # Optional. Smile intensity, a number between - [0,1]. - }, - "faceId": "str", # Optional. Unique faceId of the detected face, - created by detection API and it will expire 24 hours after the detection - call. To return this, it requires 'returnFaceId' parameter to be true. - "faceLandmarks": { - "eyeLeftBottom": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "eyeLeftInner": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "eyeLeftOuter": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "eyeLeftTop": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "eyeRightBottom": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. 
- }, - "eyeRightInner": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "eyeRightOuter": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "eyeRightTop": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "eyebrowLeftInner": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "eyebrowLeftOuter": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "eyebrowRightInner": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "eyebrowRightOuter": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "mouthLeft": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "mouthRight": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "noseLeftAlarOutTip": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "noseLeftAlarTop": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "noseRightAlarOutTip": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "noseRightAlarTop": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "noseRootLeft": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "noseRootRight": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "noseTip": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "pupilLeft": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "pupilRight": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "underLipBottom": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "underLipTop": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "upperLipBottom": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "upperLipTop": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - } - }, - "recognitionModel": "str" # Optional. The 'recognitionModel' - associated with this faceId. This is only returned when - 'returnRecognitionModel' is explicitly set as true. 
Known values are: - "recognition_01", "recognition_02", "recognition_03", and "recognition_04". - } - ] """ return super()._detect( image_content, @@ -888,7 +238,7 @@ class FaceSessionClient(FaceSessionClientGenerated): AzureKeyCredential type or a TokenCredential type. Required. :type credential: ~azure.core.credentials.AzureKeyCredential or ~azure.core.credentials.TokenCredential - :keyword api_version: API Version. Default value is "v1.1-preview.1". Note that overriding this + :keyword api_version: API Version. Default value is "v1.2-preview.1". Note that overriding this default value may result in unsupported behavior. :paramtype api_version: str or ~azure.ai.vision.face.models.Versions """ @@ -896,7 +246,7 @@ class FaceSessionClient(FaceSessionClientGenerated): @overload def create_liveness_with_verify_session( self, - body: _models.CreateLivenessSessionContent, + body: _models.CreateLivenessWithVerifySessionContent, *, verify_image: Union[bytes, None], content_type: str = "application/json", @@ -913,113 +263,39 @@ def create_liveness_with_verify_session( **kwargs: Any, ) -> _models.CreateLivenessWithVerifySessionResult: ... - @overload - def create_liveness_with_verify_session( - self, - body: IO[bytes], - *, - verify_image: Union[bytes, None], - content_type: str = "application/json", - **kwargs: Any, - ) -> _models.CreateLivenessWithVerifySessionResult: ... - @distributed_trace def create_liveness_with_verify_session( self, - body: Union[_models.CreateLivenessSessionContent, JSON, IO[bytes]], + body: Union[_models.CreateLivenessWithVerifySessionContent, JSON], *, verify_image: Union[bytes, None], **kwargs: Any, ) -> _models.CreateLivenessWithVerifySessionResult: - # pylint: disable=line-too-long """Create a new liveness session with verify. Client device submits VerifyImage during the /detectLivenessWithVerify/singleModal call. - A session is best for client device scenarios where developers want to authorize a client - device to perform only a liveness detection without granting full access to their resource. - Created sessions have a limited life span and only authorize clients to perform the desired - action before access is expired. - - Permissions includes... - > - * - - - * Ability to call /detectLivenessWithVerify/singleModal for up to 3 retries. - * A token lifetime of 10 minutes. - - .. - - [!NOTE] - - * - - - * Client access can be revoked by deleting the session using the Delete Liveness With Verify - Session operation. - * To retrieve a result, use the Get Liveness With Verify Session. - * To audit the individual requests that a client has made to your resource, use the List - Liveness With Verify Session Audit Entries. - - - Alternative Option: Client device submits VerifyImage during the - /detectLivenessWithVerify/singleModal call. - - .. - - [!NOTE] - Extra measures should be taken to validate that the client is sending the expected - VerifyImage. + Please refer to + https://learn.microsoft.com/rest/api/face/liveness-session-operations/create-liveness-with-verify-session + for more details. - :param body: Is one of the following types: CreateLivenessSessionContent, JSON, IO[bytes] - Required. - :type body: ~azure.ai.vision.face.models.CreateLivenessSessionContent or JSON or IO[bytes] - :keyword verify_image: The image for verify. If you don't have any images to use for verification, - set it to None. Required. - :paramtype verify_image: bytes or None + :param body: Body parameter. 
Is one of the following types:
+     CreateLivenessWithVerifySessionContent, JSON. Required.
+    :type body: ~azure.ai.vision.face.models.CreateLivenessWithVerifySessionContent or JSON
+    :keyword verify_image: The image used for verification. If you don't have any images to use
+     for verification, set it to None. Required.
+    :paramtype verify_image: bytes or None
     :return: CreateLivenessWithVerifySessionResult. The CreateLivenessWithVerifySessionResult is
      compatible with MutableMapping
     :rtype: ~azure.ai.vision.face.models.CreateLivenessWithVerifySessionResult
     :raises ~azure.core.exceptions.HttpResponseError:
-
-        Example:
-            .. code-block:: python
-
-                # JSON input template you can fill out and use as your body input.
-                body = {
-                    "livenessOperationMode": "str",  # Type of liveness mode the client should
-                      follow. Required. "Passive"
-                    "authTokenTimeToLiveInSeconds": 0,  # Optional. Seconds the session should
-                      last for. Range is 60 to 86400 seconds. Default value is 600.
-                    "deviceCorrelationId": "str",  # Optional. Unique Guid per each end-user
-                      device. This is to provide rate limiting and anti-hammering. If
-                      'deviceCorrelationIdSetInClient' is true in this request, this
-                      'deviceCorrelationId' must be null.
-                    "deviceCorrelationIdSetInClient": bool,  # Optional. Whether or not to allow
-                      client to set their own 'deviceCorrelationId' via the Vision SDK. Default is
-                      false, and 'deviceCorrelationId' must be set in this request body.
-                    "sendResultsToClient": bool  # Optional. Whether or not to allow a '200 -
-                      Success' response body to be sent to the client, which may be undesirable for
-                      security reasons. Default is false, clients will receive a '204 - NoContent'
-                      empty body response. Regardless of selection, calling Session GetResult will
-                      always contain a response body enabling business logic to be implemented.
-                }
-
-                # response body for status code(s): 200
-                response == {
-                    "authToken": "str",  # Bearer token to provide authentication for the Vision
-                      SDK running on a client application. This Bearer token has limited permissions to
-                      perform only the required action and expires after the TTL time. It is also
-                      auditable. Required.
-                    "sessionId": "str"  # The unique session ID of the created session. It will
-                      expire 48 hours after it was created or may be deleted sooner using the
-                      corresponding Session DELETE operation. Required.
- } """ if verify_image is not None: - request_body = _models._models.CreateLivenessWithVerifySessionContent( # pylint: disable=protected-access - parameters=body, - verify_image=("verify-image", verify_image), + if not isinstance(body, _models.CreateLivenessWithVerifySessionContent): + # Convert body to CreateLivenessWithVerifySessionContent if necessary + body = _models.CreateLivenessWithVerifySessionContent(**body) + request_body = ( + _models._models.CreateLivenessWithVerifySessionMultipartContent( # pylint: disable=protected-access + parameters=body, + verify_image=("verify-image", verify_image), + ) ) return super()._create_liveness_with_verify_session_with_verify_image(request_body, **kwargs) diff --git a/sdk/face/azure-ai-vision-face/azure/ai/vision/face/_serialization.py b/sdk/face/azure-ai-vision-face/azure/ai/vision/face/_serialization.py index 2f781d740827..7b3074215a30 100644 --- a/sdk/face/azure-ai-vision-face/azure/ai/vision/face/_serialization.py +++ b/sdk/face/azure-ai-vision-face/azure/ai/vision/face/_serialization.py @@ -24,7 +24,6 @@ # # -------------------------------------------------------------------------- -# pylint: skip-file # pyright: reportUnnecessaryTypeIgnoreComment=false from base64 import b64decode, b64encode @@ -52,7 +51,6 @@ MutableMapping, Type, List, - Mapping, ) try: @@ -91,6 +89,8 @@ def deserialize_from_text(cls, data: Optional[Union[AnyStr, IO]], content_type: :param data: Input, could be bytes or stream (will be decoded with UTF8) or text :type data: str or bytes or IO :param str content_type: The content type. + :return: The deserialized data. + :rtype: object """ if hasattr(data, "read"): # Assume a stream @@ -112,7 +112,7 @@ def deserialize_from_text(cls, data: Optional[Union[AnyStr, IO]], content_type: try: return json.loads(data_as_str) except ValueError as err: - raise DeserializationError("JSON is invalid: {}".format(err), err) + raise DeserializationError("JSON is invalid: {}".format(err), err) from err elif "xml" in (content_type or []): try: @@ -144,6 +144,8 @@ def _json_attemp(data): # context otherwise. _LOGGER.critical("Wasn't XML not JSON, failing") raise DeserializationError("XML is invalid") from err + elif content_type.startswith("text/"): + return data_as_str raise DeserializationError("Cannot deserialize content-type: {}".format(content_type)) @classmethod @@ -153,6 +155,11 @@ def deserialize_from_http_generics(cls, body_bytes: Optional[Union[AnyStr, IO]], Use bytes and headers to NOT use any requests/aiohttp or whatever specific implementation. Headers will tested for "content-type" + + :param bytes body_bytes: The body of the response. + :param dict headers: The headers of the response. + :returns: The deserialized data. + :rtype: object """ # Try to use content-type from headers if available content_type = None @@ -182,15 +189,30 @@ class UTC(datetime.tzinfo): """Time Zone info for handling UTC""" def utcoffset(self, dt): - """UTF offset for UTC is 0.""" + """UTF offset for UTC is 0. + + :param datetime.datetime dt: The datetime + :returns: The offset + :rtype: datetime.timedelta + """ return datetime.timedelta(0) def tzname(self, dt): - """Timestamp representation.""" + """Timestamp representation. + + :param datetime.datetime dt: The datetime + :returns: The timestamp representation + :rtype: str + """ return "Z" def dst(self, dt): - """No daylight saving for UTC.""" + """No daylight saving for UTC. 
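The `isinstance` branch above means a mapping body is now coerced into `CreateLivenessWithVerifySessionContent` (via `**body`) before being wrapped into the multipart payload, so dict-style callers keep working alongside typed-model callers. A sketch of the two equivalent call shapes, with the same placeholder values as earlier:

```python
from azure.ai.vision.face.models import (
    CreateLivenessWithVerifySessionContent,
    LivenessOperationMode,
)

with open("verify.jpg", "rb") as fd:  # hypothetical verification image
    verify_image = fd.read()

# 1) Typed model body.
result = session_client.create_liveness_with_verify_session(
    CreateLivenessWithVerifySessionContent(
        liveness_operation_mode=LivenessOperationMode.PASSIVE,
        device_correlation_id="my-device-guid",  # placeholder
    ),
    verify_image=verify_image,
)

# 2) Mapping body. Because the wrapper expands it with **body, the keys must
#    match the model's keyword names, not the wire names.
result = session_client.create_liveness_with_verify_session(
    {
        "liveness_operation_mode": "Passive",
        "device_correlation_id": "my-device-guid",
    },
    verify_image=verify_image,
)
```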
+ + :param datetime.datetime dt: The datetime + :returns: The daylight saving time + :rtype: datetime.timedelta + """ return datetime.timedelta(hours=1) @@ -233,24 +255,26 @@ def __getinitargs__(self): _FLATTEN = re.compile(r"(?<!\\)\.") [...] def __init__(self, **kwargs: Any) -> None: self.additional_properties: Optional[Dict[str, Any]] = {} - for k in kwargs: + for k in kwargs: # pylint: disable=consider-using-dict-items if k not in self._attribute_map: _LOGGER.warning("%s is not a known attribute of class %s and will be ignored", k, self.__class__) elif k in self._validation and self._validation[k].get("readonly", False): @@ -298,13 +329,23 @@ def __init__(self, **kwargs: Any) -> None: setattr(self, k, kwargs[k]) def __eq__(self, other: Any) -> bool: - """Compare objects by comparing all attributes.""" + """Compare objects by comparing all attributes. + + :param object other: The object to compare + :returns: True if objects are equal + :rtype: bool + """ if isinstance(other, self.__class__): return self.__dict__ == other.__dict__ return False def __ne__(self, other: Any) -> bool: - """Compare objects by comparing all attributes.""" + """Compare objects by comparing all attributes. + + :param object other: The object to compare + :returns: True if objects are not equal + :rtype: bool + """ return not self.__eq__(other) def __str__(self) -> str: @@ -324,7 +365,11 @@ def is_xml_model(cls) -> bool: @classmethod def _create_xml_node(cls): - """Create XML node.""" + """Create XML node. + + :returns: The XML node + :rtype: xml.etree.ElementTree.Element + """ try: xml_map = cls._xml_map # type: ignore except AttributeError: @@ -344,7 +389,9 @@ def serialize(self, keep_readonly: bool = False, **kwargs: Any) -> JSON: :rtype: dict """ serializer = Serializer(self._infer_class_models()) - return serializer._serialize(self, keep_readonly=keep_readonly, **kwargs) # type: ignore + return serializer._serialize( # type: ignore # pylint: disable=protected-access + self, keep_readonly=keep_readonly, **kwargs + ) def as_dict( self, @@ -378,12 +425,15 @@ def my_key_transformer(key, attr_desc, value): If you want XML serialization, you can pass the kwargs is_xml=True. + :param bool keep_readonly: If you want to serialize the readonly attributes :param function key_transformer: A key transformer function. :returns: A dict JSON compatible object :rtype: dict """ serializer = Serializer(self._infer_class_models()) - return serializer._serialize(self, key_transformer=key_transformer, keep_readonly=keep_readonly, **kwargs) # type: ignore + return serializer._serialize( # type: ignore # pylint: disable=protected-access + self, key_transformer=key_transformer, keep_readonly=keep_readonly, **kwargs + ) @classmethod def _infer_class_models(cls): @@ -393,7 +443,7 @@ def _infer_class_models(cls): client_models = {k: v for k, v in models.__dict__.items() if isinstance(v, type)} if cls.__name__ not in client_models: raise ValueError("Not Autorest generated code") - except Exception: + except Exception: # pylint: disable=broad-exception-caught # Assume it's not Autorest generated (tests?). Add ourselves as dependencies. client_models = {cls.__name__: cls} return client_models @@ -406,6 +456,7 @@ def deserialize(cls: Type[ModelType], data: Any, content_type: Optional[str] = N :param str content_type: JSON by default, set application/xml if XML.
:returns: An instance of this model :raises: DeserializationError if something went wrong + :rtype: ModelType """ deserializer = Deserializer(cls._infer_class_models()) return deserializer(cls.__name__, data, content_type=content_type) # type: ignore @@ -424,9 +475,11 @@ def from_dict( and last_rest_key_case_insensitive_extractor) :param dict data: A dict using RestAPI structure + :param function key_extractors: A key extractor function. :param str content_type: JSON by default, set application/xml if XML. :returns: An instance of this model :raises: DeserializationError if something went wrong + :rtype: ModelType """ deserializer = Deserializer(cls._infer_class_models()) deserializer.key_extractors = ( # type: ignore @@ -446,7 +499,7 @@ def _flatten_subtype(cls, key, objects): return {} result = dict(cls._subtype_map[key]) for valuetype in cls._subtype_map[key].values(): - result.update(objects[valuetype]._flatten_subtype(key, objects)) + result.update(objects[valuetype]._flatten_subtype(key, objects)) # pylint: disable=protected-access return result @classmethod @@ -454,6 +507,11 @@ def _classify(cls, response, objects): """Check the class _subtype_map for any child classes. We want to ignore any inherited _subtype_maps. Remove the polymorphic key from the initial data. + + :param dict response: The initial data + :param dict objects: The class objects + :returns: The class to be used + :rtype: class """ for subtype_key in cls.__dict__.get("_subtype_map", {}).keys(): subtype_value = None @@ -499,11 +557,13 @@ def _decode_attribute_map_key(key): inside the received data. :param str key: A key string from the generated code + :returns: The decoded key + :rtype: str """ return key.replace("\\.", ".") -class Serializer(object): +class Serializer(object): # pylint: disable=too-many-public-methods """Request object model serializer.""" basic_types = {str: "str", int: "int", bool: "bool", float: "float"} @@ -558,13 +618,16 @@ def __init__(self, classes: Optional[Mapping[str, type]] = None): self.key_transformer = full_restapi_key_transformer self.client_side_validation = True - def _serialize(self, target_obj, data_type=None, **kwargs): + def _serialize( # pylint: disable=too-many-nested-blocks, too-many-branches, too-many-statements, too-many-locals + self, target_obj, data_type=None, **kwargs + ): """Serialize data into a string according to type. - :param target_obj: The data to be serialized. + :param object target_obj: The data to be serialized. :param str data_type: The type to be serialized from. :rtype: str, dict :raises: SerializationError if serialization fails. + :returns: The serialized data. 
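Taken together, the `Model` hunks above wire `serialize`, `as_dict`, and `from_dict` through the vendored `Serializer`/`Deserializer`. A rough, self-contained sketch of that round trip, using a hypothetical msrest-style model and an import from the private `_serialization` module (illustration only, not a public API; the `(key, value)` transformer contract is assumed from the msrest conventions referenced in the `as_dict` docstring):

```python
from azure.ai.vision.face._serialization import Model


class Thing(Model):
    # Hypothetical model: one attribute mapped to the RestAPI name "displayName".
    _attribute_map = {"display_name": {"key": "displayName", "type": "str"}}

    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        self.display_name = kwargs.get("display_name")


def my_key_transformer(key, attr_desc, value):
    # A key transformer returns a (key, value) pair; this one keeps the
    # Python attribute name instead of the RestAPI-encoded key.
    return (key, value)


thing = Thing(display_name="sample")
print(thing.serialize())                                  # {'displayName': 'sample'}
print(thing.as_dict(key_transformer=my_key_transformer))  # {'display_name': 'sample'}

restored = Thing.from_dict({"displayName": "sample"})
print(restored == thing)  # True: __eq__ compares all attributes, per the hunk above
```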
""" key_transformer = kwargs.get("key_transformer", self.key_transformer) keep_readonly = kwargs.get("keep_readonly", False) @@ -590,12 +653,14 @@ def _serialize(self, target_obj, data_type=None, **kwargs): serialized = {} if is_xml_model_serialization: - serialized = target_obj._create_xml_node() + serialized = target_obj._create_xml_node() # pylint: disable=protected-access try: - attributes = target_obj._attribute_map + attributes = target_obj._attribute_map # pylint: disable=protected-access for attr, attr_desc in attributes.items(): attr_name = attr - if not keep_readonly and target_obj._validation.get(attr_name, {}).get("readonly", False): + if not keep_readonly and target_obj._validation.get( # pylint: disable=protected-access + attr_name, {} + ).get("readonly", False): continue if attr_name == "additional_properties" and attr_desc["key"] == "": @@ -631,7 +696,8 @@ def _serialize(self, target_obj, data_type=None, **kwargs): if isinstance(new_attr, list): serialized.extend(new_attr) # type: ignore elif isinstance(new_attr, ET.Element): - # If the down XML has no XML/Name, we MUST replace the tag with the local tag. But keeping the namespaces. + # If the down XML has no XML/Name, + # we MUST replace the tag with the local tag. But keeping the namespaces. if "name" not in getattr(orig_attr, "_xml_map", {}): splitted_tag = new_attr.tag.split("}") if len(splitted_tag) == 2: # Namespace @@ -662,17 +728,17 @@ def _serialize(self, target_obj, data_type=None, **kwargs): except (AttributeError, KeyError, TypeError) as err: msg = "Attribute {} in object {} cannot be serialized.\n{}".format(attr_name, class_name, str(target_obj)) raise SerializationError(msg) from err - else: - return serialized + return serialized def body(self, data, data_type, **kwargs): """Serialize data intended for a request body. - :param data: The data to be serialized. + :param object data: The data to be serialized. :param str data_type: The type to be serialized from. :rtype: dict :raises: SerializationError if serialization fails. :raises: ValueError if data is None + :returns: The serialized request body """ # Just in case this is a dict @@ -701,7 +767,7 @@ def body(self, data, data_type, **kwargs): attribute_key_case_insensitive_extractor, last_rest_key_case_insensitive_extractor, ] - data = deserializer._deserialize(data_type, data) + data = deserializer._deserialize(data_type, data) # pylint: disable=protected-access except DeserializationError as err: raise SerializationError("Unable to build a model: " + str(err)) from err @@ -710,9 +776,11 @@ def body(self, data, data_type, **kwargs): def url(self, name, data, data_type, **kwargs): """Serialize data intended for a URL path. - :param data: The data to be serialized. + :param str name: The name of the URL path parameter. + :param object data: The data to be serialized. :param str data_type: The type to be serialized from. :rtype: str + :returns: The serialized URL path :raises: TypeError if serialization fails. :raises: ValueError if data is None """ @@ -726,21 +794,20 @@ def url(self, name, data, data_type, **kwargs): output = output.replace("{", quote("{")).replace("}", quote("}")) else: output = quote(str(output), safe="") - except SerializationError: - raise TypeError("{} must be type {}.".format(name, data_type)) - else: - return output + except SerializationError as exc: + raise TypeError("{} must be type {}.".format(name, data_type)) from exc + return output def query(self, name, data, data_type, **kwargs): """Serialize data intended for a URL query. 
- :param data: The data to be serialized. + :param str name: The name of the query parameter. + :param object data: The data to be serialized. :param str data_type: The type to be serialized from. - :keyword bool skip_quote: Whether to skip quote the serialized result. - Defaults to False. :rtype: str, list :raises: TypeError if serialization fails. :raises: ValueError if data is None + :returns: The serialized query parameter """ try: # Treat the list aside, since we don't want to encode the div separator @@ -757,19 +824,20 @@ def query(self, name, data, data_type, **kwargs): output = str(output) else: output = quote(str(output), safe="") - except SerializationError: - raise TypeError("{} must be type {}.".format(name, data_type)) - else: - return str(output) + except SerializationError as exc: + raise TypeError("{} must be type {}.".format(name, data_type)) from exc + return str(output) def header(self, name, data, data_type, **kwargs): """Serialize data intended for a request header. - :param data: The data to be serialized. + :param str name: The name of the header. + :param object data: The data to be serialized. :param str data_type: The type to be serialized from. :rtype: str :raises: TypeError if serialization fails. :raises: ValueError if data is None + :returns: The serialized header """ try: if data_type in ["[str]"]: @@ -778,21 +846,20 @@ def header(self, name, data, data_type, **kwargs): output = self.serialize_data(data, data_type, **kwargs) if data_type == "bool": output = json.dumps(output) - except SerializationError: - raise TypeError("{} must be type {}.".format(name, data_type)) - else: - return str(output) + except SerializationError as exc: + raise TypeError("{} must be type {}.".format(name, data_type)) from exc + return str(output) def serialize_data(self, data, data_type, **kwargs): """Serialize generic data according to supplied data type. - :param data: The data to be serialized. + :param object data: The data to be serialized. :param str data_type: The type to be serialized from. - :param bool required: Whether it's essential that the data not be - empty or None :raises: AttributeError if required data is None. :raises: ValueError if data is None :raises: SerializationError if serialization fails. + :returns: The serialized data. + :rtype: str, int, float, bool, dict, list """ if data is None: raise ValueError("No value for given attribute") @@ -803,7 +870,7 @@ def serialize_data(self, data, data_type, **kwargs): if data_type in self.basic_types.values(): return self.serialize_basic(data, data_type, **kwargs) - elif data_type in self.serialize_type: + if data_type in self.serialize_type: return self.serialize_type[data_type](data, **kwargs) # If dependencies is empty, try with current data class @@ -819,11 +886,10 @@ def serialize_data(self, data, data_type, **kwargs): except (ValueError, TypeError) as err: msg = "Unable to serialize value: {!r} as type: {!r}." 
raise SerializationError(msg.format(data, data_type)) from err - else: - return self._serialize(data, **kwargs) + return self._serialize(data, **kwargs) @classmethod - def _get_custom_serializers(cls, data_type, **kwargs): + def _get_custom_serializers(cls, data_type, **kwargs): # pylint: disable=inconsistent-return-statements custom_serializer = kwargs.get("basic_types_serializers", {}).get(data_type) if custom_serializer: return custom_serializer @@ -839,23 +905,26 @@ def serialize_basic(cls, data, data_type, **kwargs): - basic_types_serializers dict[str, callable] : If set, use the callable as serializer - is_xml bool : If set, use xml_basic_types_serializers - :param data: Object to be serialized. + :param obj data: Object to be serialized. :param str data_type: Type of object in the iterable. + :rtype: str, int, float, bool + :return: serialized object """ custom_serializer = cls._get_custom_serializers(data_type, **kwargs) if custom_serializer: return custom_serializer(data) if data_type == "str": return cls.serialize_unicode(data) - return eval(data_type)(data) # nosec + return eval(data_type)(data) # nosec # pylint: disable=eval-used @classmethod def serialize_unicode(cls, data): """Special handling for serializing unicode strings in Py2. Encode to UTF-8 if unicode, otherwise handle as a str. - :param data: Object to be serialized. + :param str data: Object to be serialized. :rtype: str + :return: serialized object """ try: # If I received an enum, return its value return data.value @@ -869,8 +938,7 @@ def serialize_unicode(cls, data): return data except NameError: return str(data) - else: - return str(data) + return str(data) def serialize_iter(self, data, iter_type, div=None, **kwargs): """Serialize iterable. @@ -880,15 +948,13 @@ def serialize_iter(self, data, iter_type, div=None, **kwargs): serialization_ctxt['type'] should be same as data_type. - is_xml bool : If set, serialize as XML - :param list attr: Object to be serialized. + :param list data: Object to be serialized. :param str iter_type: Type of object in the iterable. - :param bool required: Whether the objects in the iterable must - not be None or empty. :param str div: If set, this str will be used to combine the elements in the iterable into a combined string. Default is 'None'. - :keyword bool do_quote: Whether to quote the serialized result of each iterable element. Defaults to False. :rtype: list, str + :return: serialized iterable """ if isinstance(data, str): raise SerializationError("Refuse str type as a valid iter type.") @@ -943,9 +1009,8 @@ def serialize_dict(self, attr, dict_type, **kwargs): :param dict attr: Object to be serialized. :param str dict_type: Type of object in the dictionary. - :param bool required: Whether the objects in the dictionary must - not be None or empty. :rtype: dict + :return: serialized dictionary """ serialization_ctxt = kwargs.get("serialization_ctxt", {}) serialized = {} @@ -969,7 +1034,7 @@ def serialize_dict(self, attr, dict_type, **kwargs): return serialized - def serialize_object(self, attr, **kwargs): + def serialize_object(self, attr, **kwargs): # pylint: disable=too-many-return-statements """Serialize a generic object. This will be handled as a dictionary. If object passed in is not a basic type (str, int, float, dict, list) it will simply be @@ -977,6 +1042,7 @@ def serialize_object(self, attr, **kwargs): :param dict attr: Object to be serialized. 
:rtype: dict or str + :return: serialized object """ if attr is None: return None @@ -1001,7 +1067,7 @@ def serialize_object(self, attr, **kwargs): return self.serialize_decimal(attr) # If it's a model or I know this dependency, serialize as a Model - elif obj_type in self.dependencies.values() or isinstance(attr, Model): + if obj_type in self.dependencies.values() or isinstance(attr, Model): return self._serialize(attr) if obj_type == dict: @@ -1032,56 +1098,61 @@ def serialize_enum(attr, enum_obj=None): try: enum_obj(result) # type: ignore return result - except ValueError: + except ValueError as exc: for enum_value in enum_obj: # type: ignore if enum_value.value.lower() == str(attr).lower(): return enum_value.value error = "{!r} is not valid value for enum {!r}" - raise SerializationError(error.format(attr, enum_obj)) + raise SerializationError(error.format(attr, enum_obj)) from exc @staticmethod - def serialize_bytearray(attr, **kwargs): + def serialize_bytearray(attr, **kwargs): # pylint: disable=unused-argument """Serialize bytearray into base-64 string. - :param attr: Object to be serialized. + :param str attr: Object to be serialized. :rtype: str + :return: serialized base64 """ return b64encode(attr).decode() @staticmethod - def serialize_base64(attr, **kwargs): + def serialize_base64(attr, **kwargs): # pylint: disable=unused-argument """Serialize str into base-64 string. - :param attr: Object to be serialized. + :param str attr: Object to be serialized. :rtype: str + :return: serialized base64 """ encoded = b64encode(attr).decode("ascii") return encoded.strip("=").replace("+", "-").replace("/", "_") @staticmethod - def serialize_decimal(attr, **kwargs): + def serialize_decimal(attr, **kwargs): # pylint: disable=unused-argument """Serialize Decimal object to float. - :param attr: Object to be serialized. + :param decimal attr: Object to be serialized. :rtype: float + :return: serialized decimal """ return float(attr) @staticmethod - def serialize_long(attr, **kwargs): + def serialize_long(attr, **kwargs): # pylint: disable=unused-argument """Serialize long (Py2) or int (Py3). - :param attr: Object to be serialized. + :param int attr: Object to be serialized. :rtype: int/long + :return: serialized long """ return _long_type(attr) @staticmethod - def serialize_date(attr, **kwargs): + def serialize_date(attr, **kwargs): # pylint: disable=unused-argument """Serialize Date object into ISO-8601 formatted string. :param Date attr: Object to be serialized. :rtype: str + :return: serialized date """ if isinstance(attr, str): attr = isodate.parse_date(attr) @@ -1089,11 +1160,12 @@ def serialize_date(attr, **kwargs): return t @staticmethod - def serialize_time(attr, **kwargs): + def serialize_time(attr, **kwargs): # pylint: disable=unused-argument """Serialize Time object into ISO-8601 formatted string. :param datetime.time attr: Object to be serialized. :rtype: str + :return: serialized time """ if isinstance(attr, str): attr = isodate.parse_time(attr) @@ -1103,30 +1175,32 @@ def serialize_time(attr, **kwargs): return t @staticmethod - def serialize_duration(attr, **kwargs): + def serialize_duration(attr, **kwargs): # pylint: disable=unused-argument """Serialize TimeDelta object into ISO-8601 formatted string. :param TimeDelta attr: Object to be serialized. 
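`serialize_enum` above falls back to a case-insensitive scan of the enum's values before raising `SerializationError`. A quick sketch:

```python
from enum import Enum

from azure.ai.vision.face._serialization import Serializer


class Color(str, Enum):
    RED = "Red"


# "red" is not a valid Color value, but the case-insensitive fallback in
# serialize_enum resolves it to the canonical "Red" instead of raising.
print(Serializer.serialize_enum("red", enum_obj=Color))  # Red
```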
:rtype: str + :return: serialized duration """ if isinstance(attr, str): attr = isodate.parse_duration(attr) return isodate.duration_isoformat(attr) @staticmethod - def serialize_rfc(attr, **kwargs): + def serialize_rfc(attr, **kwargs): # pylint: disable=unused-argument """Serialize Datetime object into RFC-1123 formatted string. :param Datetime attr: Object to be serialized. :rtype: str :raises: TypeError if format invalid. + :return: serialized rfc """ try: if not attr.tzinfo: _LOGGER.warning("Datetime with no tzinfo will be considered UTC.") utc = attr.utctimetuple() - except AttributeError: - raise TypeError("RFC1123 object must be valid Datetime object.") + except AttributeError as exc: + raise TypeError("RFC1123 object must be valid Datetime object.") from exc return "{}, {:02} {} {:04} {:02}:{:02}:{:02} GMT".format( Serializer.days[utc.tm_wday], @@ -1139,12 +1213,13 @@ def serialize_rfc(attr, **kwargs): ) @staticmethod - def serialize_iso(attr, **kwargs): + def serialize_iso(attr, **kwargs): # pylint: disable=unused-argument """Serialize Datetime object into ISO-8601 formatted string. :param Datetime attr: Object to be serialized. :rtype: str :raises: SerializationError if format invalid. + :return: serialized iso """ if isinstance(attr, str): attr = isodate.parse_datetime(attr) @@ -1170,13 +1245,14 @@ def serialize_iso(attr, **kwargs): raise TypeError(msg) from err @staticmethod - def serialize_unix(attr, **kwargs): + def serialize_unix(attr, **kwargs): # pylint: disable=unused-argument """Serialize Datetime object into IntTime format. This is represented as seconds. :param Datetime attr: Object to be serialized. :rtype: int :raises: SerializationError if format invalid + :return: serialized unix """ if isinstance(attr, int): return attr @@ -1184,11 +1260,11 @@ def serialize_unix(attr, **kwargs): if not attr.tzinfo: _LOGGER.warning("Datetime with no tzinfo will be considered UTC.") return int(calendar.timegm(attr.utctimetuple())) - except AttributeError: - raise TypeError("Unix time object must be valid Datetime object.") + except AttributeError as exc: + raise TypeError("Unix time object must be valid Datetime object.") from exc -def rest_key_extractor(attr, attr_desc, data): +def rest_key_extractor(attr, attr_desc, data): # pylint: disable=unused-argument key = attr_desc["key"] working_data = data @@ -1209,7 +1285,9 @@ def rest_key_extractor(attr, attr_desc, data): return working_data.get(key) -def rest_key_case_insensitive_extractor(attr, attr_desc, data): +def rest_key_case_insensitive_extractor( # pylint: disable=unused-argument, inconsistent-return-statements + attr, attr_desc, data +): key = attr_desc["key"] working_data = data @@ -1230,17 +1308,29 @@ def rest_key_case_insensitive_extractor(attr, attr_desc, data): return attribute_key_case_insensitive_extractor(key, None, working_data) -def last_rest_key_extractor(attr, attr_desc, data): - """Extract the attribute in "data" based on the last part of the JSON path key.""" +def last_rest_key_extractor(attr, attr_desc, data): # pylint: disable=unused-argument + """Extract the attribute in "data" based on the last part of the JSON path key.
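The datetime-family statics patched above (`serialize_date`, `serialize_duration`, `serialize_rfc`, `serialize_iso`, `serialize_unix`) each emit one wire format. A sketch of the expected outputs, assuming the usual msrest formatting rules (`isodate` is already a dependency of the vendored serializer):

```python
import datetime

from azure.ai.vision.face._serialization import Serializer

dt = datetime.datetime(2024, 10, 23, 12, 0, 0, tzinfo=datetime.timezone.utc)

print(Serializer.serialize_date(dt))   # 2024-10-23
print(Serializer.serialize_iso(dt))    # 2024-10-23T12:00:00.000Z
print(Serializer.serialize_rfc(dt))    # Wed, 23 Oct 2024 12:00:00 GMT
print(Serializer.serialize_unix(dt))   # 1729684800
print(Serializer.serialize_duration(datetime.timedelta(minutes=10)))  # PT10M
```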
+ + :param str attr: The attribute to extract + :param dict attr_desc: The attribute description + :param dict data: The data to extract from + :rtype: object + :returns: The extracted attribute + """ key = attr_desc["key"] dict_keys = _FLATTEN.split(key) return attribute_key_extractor(dict_keys[-1], None, data) -def last_rest_key_case_insensitive_extractor(attr, attr_desc, data): +def last_rest_key_case_insensitive_extractor(attr, attr_desc, data): # pylint: disable=unused-argument """Extract the attribute in "data" based on the last part of the JSON path key. This is the case insensitive version of "last_rest_key_extractor" + :param str attr: The attribute to extract + :param dict attr_desc: The attribute description + :param dict data: The data to extract from + :rtype: object + :returns: The extracted attribute """ key = attr_desc["key"] dict_keys = _FLATTEN.split(key) @@ -1277,7 +1367,7 @@ def _extract_name_from_internal_type(internal_type): return xml_name -def xml_key_extractor(attr, attr_desc, data): +def xml_key_extractor(attr, attr_desc, data): # pylint: disable=unused-argument,too-many-return-statements if isinstance(data, dict): return None @@ -1329,22 +1419,21 @@ def xml_key_extractor(attr, attr_desc, data): if is_iter_type: if is_wrapped: return None # is_wrapped no node, we want None - else: - return [] # not wrapped, assume empty list + return [] # not wrapped, assume empty list return None # Assume it's not there, maybe an optional node. # If is_iter_type and not wrapped, return all found children if is_iter_type: if not is_wrapped: return children - else: # Iter and wrapped, should have found one node only (the wrap one) - if len(children) != 1: - raise DeserializationError( - "Tried to deserialize an array not wrapped, and found several nodes '{}'. Maybe you should declare this array as wrapped?".format( - xml_name - ) + # Iter and wrapped, should have found one node only (the wrap one) + if len(children) != 1: + raise DeserializationError( + "Tried to deserialize an array not wrapped, and found several nodes '{}'. Maybe you should declare this array as wrapped?".format( # pylint: disable=line-too-long + xml_name ) - return list(children[0]) # Might be empty list and that's ok. + ) + return list(children[0]) # Might be empty list and that's ok. # Here it's not a itertype, we should have found one element only or empty if len(children) > 1: @@ -1361,7 +1450,7 @@ class Deserializer(object): basic_types = {str: "str", int: "int", bool: "bool", float: "float"} - valid_date = re.compile(r"\d{4}[-]\d{2}[-]\d{2}T\d{2}:\d{2}:\d{2}" r"\.?\d*Z?[-+]?[\d{2}]?:?[\d{2}]?") + valid_date = re.compile(r"\d{4}[-]\d{2}[-]\d{2}T\d{2}:\d{2}:\d{2}\.?\d*Z?[-+]?[\d{2}]?:?[\d{2}]?") def __init__(self, classes: Optional[Mapping[str, type]] = None): self.deserialize_type = { @@ -1401,11 +1490,12 @@ def __call__(self, target_obj, response_data, content_type=None): :param str content_type: Swagger "produces" if available. :raises: DeserializationError if deserialization fails. :return: Deserialized object. + :rtype: object """ data = self._unpack_content(response_data, content_type) return self._deserialize(target_obj, data) - def _deserialize(self, target_obj, data): + def _deserialize(self, target_obj, data): # pylint: disable=inconsistent-return-statements """Call the deserializer on a model. Data needs to be already deserialized as JSON or XML ElementTree @@ -1414,12 +1504,13 @@ def _deserialize(self, target_obj, data): :param object data: Object to deserialize. 
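The extractor family documented above shares the `(attr, attr_desc, data)` signature, with `attr` unused. The difference between the full-path and last-segment variants, assuming the standard msrest path walk inside `rest_key_extractor`:

```python
from azure.ai.vision.face._serialization import (
    last_rest_key_extractor,
    rest_key_extractor,
)

data = {"error": {"code": "InvalidRequest"}, "code": "top-level"}
attr_desc = {"key": "error.code", "type": "str"}

# rest_key_extractor walks the full JSON path "error.code"...
print(rest_key_extractor("code", attr_desc, data))       # InvalidRequest
# ...while last_rest_key_extractor only looks up the last path segment.
print(last_rest_key_extractor("code", attr_desc, data))  # top-level
```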
:raises: DeserializationError if deserialization fails. :return: Deserialized object. + :rtype: object """ # This is already a model, go recursive just in case if hasattr(data, "_attribute_map"): constants = [name for name, config in getattr(data, "_validation", {}).items() if config.get("constant")] try: - for attr, mapconfig in data._attribute_map.items(): + for attr, mapconfig in data._attribute_map.items(): # pylint: disable=protected-access if attr in constants: continue value = getattr(data, attr) @@ -1438,13 +1529,13 @@ def _deserialize(self, target_obj, data): if isinstance(response, str): return self.deserialize_data(data, response) - elif isinstance(response, type) and issubclass(response, Enum): + if isinstance(response, type) and issubclass(response, Enum): return self.deserialize_enum(data, response) - if data is None: + if data is None or data is CoreNull: return data try: - attributes = response._attribute_map # type: ignore + attributes = response._attribute_map # type: ignore # pylint: disable=protected-access d_attrs = {} for attr, attr_desc in attributes.items(): # Check empty string. If it's not empty, someone has a real "additionalProperties"... @@ -1474,9 +1565,8 @@ def _deserialize(self, target_obj, data): except (AttributeError, TypeError, KeyError) as err: msg = "Unable to deserialize to object: " + class_name # type: ignore raise DeserializationError(msg) from err - else: - additional_properties = self._build_additional_properties(attributes, data) - return self._instantiate_model(response, d_attrs, additional_properties) + additional_properties = self._build_additional_properties(attributes, data) + return self._instantiate_model(response, d_attrs, additional_properties) def _build_additional_properties(self, attribute_map, data): if not self.additional_properties_detection: @@ -1503,6 +1593,8 @@ def _classify_target(self, target, data): :param str target: The target object type to deserialize to. :param str/dict data: The response data to deserialize. + :return: The classified target object and its class name. + :rtype: tuple """ if target is None: return None, None @@ -1514,7 +1606,7 @@ def _classify_target(self, target, data): return target, target try: - target = target._classify(data, self.dependencies) # type: ignore + target = target._classify(data, self.dependencies) # type: ignore # pylint: disable=protected-access except AttributeError: pass # Target is not a Model, no classify return target, target.__class__.__name__ # type: ignore @@ -1529,10 +1621,12 @@ def failsafe_deserialize(self, target_obj, data, content_type=None): :param str target_obj: The target object type to deserialize to. :param str/dict data: The response data to deserialize. :param str content_type: Swagger "produces" if available. + :return: Deserialized object. + :rtype: object """ try: return self(target_obj, data, content_type=content_type) - except: + except: # pylint: disable=bare-except _LOGGER.debug( "Ran into a deserialization error. Ignoring since this is failsafe deserialization", exc_info=True ) @@ -1550,10 +1644,12 @@ def _unpack_content(raw_data, content_type=None): If raw_data is something else, bypass all logic and return it directly. - :param raw_data: Data to be processed. - :param content_type: How to parse if raw_data is a string/bytes. + :param obj raw_data: Data to be processed. + :param str content_type: How to parse if raw_data is a string/bytes. :raises JSONDecodeError: If JSON is requested and parsing is impossible. 
:raises UnicodeDecodeError: If bytes is not UTF8 + :rtype: object + :return: Unpacked content. """ # Assume this is enough to detect a Pipeline Response without importing it context = getattr(raw_data, "context", {}) @@ -1577,14 +1673,21 @@ def _unpack_content(raw_data, content_type=None): def _instantiate_model(self, response, attrs, additional_properties=None): """Instantiate a response model passing in deserialized args. - :param response: The response model class. - :param d_attrs: The deserialized response attributes. + :param Response response: The response model class. + :param dict attrs: The deserialized response attributes. + :param dict additional_properties: Additional properties to be set. + :rtype: Response + :return: The instantiated response model. """ if callable(response): subtype = getattr(response, "_subtype_map", {}) try: - readonly = [k for k, v in response._validation.items() if v.get("readonly")] - const = [k for k, v in response._validation.items() if v.get("constant")] + readonly = [ + k for k, v in response._validation.items() if v.get("readonly") # pylint: disable=protected-access + ] + const = [ + k for k, v in response._validation.items() if v.get("constant") # pylint: disable=protected-access + ] kwargs = {k: v for k, v in attrs.items() if k not in subtype and k not in readonly + const} response_obj = response(**kwargs) for attr in readonly: @@ -1594,7 +1697,7 @@ def _instantiate_model(self, response, attrs, additional_properties=None): return response_obj except TypeError as err: msg = "Unable to deserialize {} into model {}. ".format(kwargs, response) # type: ignore - raise DeserializationError(msg + str(err)) + raise DeserializationError(msg + str(err)) from err else: try: for attr, value in attrs.items(): @@ -1603,15 +1706,16 @@ def _instantiate_model(self, response, attrs, additional_properties=None): except Exception as exp: msg = "Unable to populate response model. " msg += "Type: {}, Error: {}".format(type(response), exp) - raise DeserializationError(msg) + raise DeserializationError(msg) from exp - def deserialize_data(self, data, data_type): + def deserialize_data(self, data, data_type): # pylint: disable=too-many-return-statements """Process data for deserialization according to data type. :param str data: The response string to be deserialized. :param str data_type: The type to deserialize to. :raises: DeserializationError if deserialization fails. :return: Deserialized object. + :rtype: object """ if data is None: return data @@ -1625,7 +1729,11 @@ def deserialize_data(self, data, data_type): if isinstance(data, self.deserialize_expected_types.get(data_type, tuple())): return data - is_a_text_parsing_type = lambda x: x not in ["object", "[]", r"{}"] + is_a_text_parsing_type = lambda x: x not in [ # pylint: disable=unnecessary-lambda-assignment + "object", + "[]", + r"{}", + ] if isinstance(data, ET.Element) and is_a_text_parsing_type(data_type) and not data.text: return None data_val = self.deserialize_type[data_type](data) @@ -1645,14 +1753,14 @@ def deserialize_data(self, data, data_type): msg = "Unable to deserialize response data." msg += " Data: {}, {}".format(data, data_type) raise DeserializationError(msg) from err - else: - return self._deserialize(obj_type, data) + return self._deserialize(obj_type, data) def deserialize_iter(self, attr, iter_type): """Deserialize an iterable. :param list attr: Iterable to be deserialized. :param str iter_type: The type of object in the iterable. + :return: Deserialized iterable. 
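Driving the vendored `Deserializer` directly shows how `__call__` combines `_unpack_content` and `_deserialize`, and how `failsafe_deserialize` swallows failures instead of raising. `Fish` is a hypothetical msrest-style model defined inline for illustration:

```python
from azure.ai.vision.face._serialization import Deserializer, Model


class Fish(Model):
    _attribute_map = {
        "name": {"key": "name", "type": "str"},
        "length": {"key": "length", "type": "float"},
    }

    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        self.name = kwargs.get("name")
        self.length = kwargs.get("length")


deserializer = Deserializer({"Fish": Fish})

# _unpack_content parses the JSON text, _deserialize builds the model.
fish = deserializer("Fish", '{"name": "salmon", "length": 1.2}', content_type="application/json")
print(fish.name, fish.length)  # salmon 1.2

# failsafe_deserialize logs the error and returns None rather than raising.
print(deserializer.failsafe_deserialize("Fish", "not json", content_type="application/json"))  # None
```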
:rtype: list """ if attr is None: @@ -1669,6 +1777,7 @@ def deserialize_dict(self, attr, dict_type): :param dict/list attr: Dictionary to be deserialized. Also accepts a list of key, value pairs. :param str dict_type: The object type of the items in the dictionary. + :return: Deserialized dictionary. :rtype: dict """ if isinstance(attr, list): @@ -1679,11 +1788,12 @@ def deserialize_dict(self, attr, dict_type): attr = {el.tag: el.text for el in attr} return {k: self.deserialize_data(v, dict_type) for k, v in attr.items()} - def deserialize_object(self, attr, **kwargs): + def deserialize_object(self, attr, **kwargs): # pylint: disable=too-many-return-statements """Deserialize a generic object. This will be handled as a dictionary. :param dict attr: Dictionary to be deserialized. + :return: Deserialized object. :rtype: dict :raises: TypeError if non-builtin datatype encountered. """ @@ -1718,11 +1828,10 @@ def deserialize_object(self, attr, **kwargs): pass return deserialized - else: - error = "Cannot deserialize generic object with type: " - raise TypeError(error + str(obj_type)) + error = "Cannot deserialize generic object with type: " + raise TypeError(error + str(obj_type)) - def deserialize_basic(self, attr, data_type): + def deserialize_basic(self, attr, data_type): # pylint: disable=too-many-return-statements """Deserialize basic builtin data type from string. Will attempt to convert to str, int, float and bool. This function will also accept '1', '0', 'true' and 'false' as @@ -1730,6 +1839,7 @@ def deserialize_basic(self, attr, data_type): :param str attr: response string to be deserialized. :param str data_type: deserialization data type. + :return: Deserialized basic type. :rtype: str, int, float or bool :raises: TypeError if string format is not valid. """ @@ -1741,24 +1851,23 @@ def deserialize_basic(self, attr, data_type): if data_type == "str": # None or '', node is empty string. return "" - else: - # None or '', node with a strong type is None. - # Don't try to model "empty bool" or "empty int" - return None + # None or '', node with a strong type is None. + # Don't try to model "empty bool" or "empty int" + return None if data_type == "bool": if attr in [True, False, 1, 0]: return bool(attr) - elif isinstance(attr, str): + if isinstance(attr, str): if attr.lower() in ["true", "1"]: return True - elif attr.lower() in ["false", "0"]: + if attr.lower() in ["false", "0"]: return False raise TypeError("Invalid boolean value: {}".format(attr)) if data_type == "str": return self.deserialize_unicode(attr) - return eval(data_type)(attr) # nosec + return eval(data_type)(attr) # nosec # pylint: disable=eval-used @staticmethod def deserialize_unicode(data): @@ -1766,6 +1875,7 @@ def deserialize_unicode(data): as a string. :param str data: response string to be deserialized. + :return: Deserialized string. :rtype: str or unicode """ # We might be here because we have an enum modeled as string, @@ -1779,8 +1889,7 @@ def deserialize_unicode(data): return data except NameError: return str(data) - else: - return str(data) + return str(data) @staticmethod def deserialize_enum(data, enum_obj): @@ -1792,6 +1901,7 @@ def deserialize_enum(data, enum_obj): :param str data: Response string to be deserialized. If this value is None or invalid it will be returned as-is. :param Enum enum_obj: Enum object to deserialize to. + :return: Deserialized enum object. :rtype: Enum """ if isinstance(data, enum_obj) or data is None: @@ -1802,9 +1912,9 @@ def deserialize_enum(data, enum_obj): # Workaround. 
We might consider remove it in the future. try: return list(enum_obj.__members__.values())[data] - except IndexError: + except IndexError as exc: error = "{!r} is not a valid index for enum {!r}" - raise DeserializationError(error.format(data, enum_obj)) + raise DeserializationError(error.format(data, enum_obj)) from exc try: return enum_obj(str(data)) except ValueError: @@ -1820,6 +1930,7 @@ def deserialize_bytearray(attr): """Deserialize string into bytearray. :param str attr: response string to be deserialized. + :return: Deserialized bytearray :rtype: bytearray :raises: TypeError if string format invalid. """ @@ -1832,6 +1943,7 @@ def deserialize_base64(attr): """Deserialize base64 encoded string into string. :param str attr: response string to be deserialized. + :return: Deserialized base64 string :rtype: bytearray :raises: TypeError if string format invalid. """ @@ -1847,8 +1959,9 @@ def deserialize_decimal(attr): """Deserialize string into Decimal object. :param str attr: response string to be deserialized. - :rtype: Decimal + :return: Deserialized decimal :raises: DeserializationError if string format invalid. + :rtype: decimal """ if isinstance(attr, ET.Element): attr = attr.text @@ -1863,6 +1976,7 @@ def deserialize_long(attr): """Deserialize string into long (Py2) or int (Py3). :param str attr: response string to be deserialized. + :return: Deserialized int :rtype: long or int :raises: ValueError if string format invalid. """ @@ -1875,6 +1989,7 @@ def deserialize_duration(attr): """Deserialize ISO-8601 formatted string into TimeDelta object. :param str attr: response string to be deserialized. + :return: Deserialized duration :rtype: TimeDelta :raises: DeserializationError if string format invalid. """ @@ -1885,14 +2000,14 @@ def deserialize_duration(attr): except (ValueError, OverflowError, AttributeError) as err: msg = "Cannot deserialize duration object." raise DeserializationError(msg) from err - else: - return duration + return duration @staticmethod def deserialize_date(attr): """Deserialize ISO-8601 formatted string into Date object. :param str attr: response string to be deserialized. + :return: Deserialized date :rtype: Date :raises: DeserializationError if string format invalid. """ @@ -1908,6 +2023,7 @@ def deserialize_time(attr): """Deserialize ISO-8601 formatted string into time object. :param str attr: response string to be deserialized. + :return: Deserialized time :rtype: datetime.time :raises: DeserializationError if string format invalid. """ @@ -1922,6 +2038,7 @@ def deserialize_rfc(attr): """Deserialize RFC-1123 formatted string into Datetime object. :param str attr: response string to be deserialized. + :return: Deserialized RFC datetime :rtype: Datetime :raises: DeserializationError if string format invalid. """ @@ -1937,14 +2054,14 @@ def deserialize_rfc(attr): except ValueError as err: msg = "Cannot deserialize to rfc datetime object." raise DeserializationError(msg) from err - else: - return date_obj + return date_obj @staticmethod def deserialize_iso(attr): """Deserialize ISO-8601 formatted string into Datetime object. :param str attr: response string to be deserialized. + :return: Deserialized ISO datetime :rtype: Datetime :raises: DeserializationError if string format invalid. """ @@ -1974,8 +2091,7 @@ def deserialize_iso(attr): except (ValueError, OverflowError, AttributeError) as err: msg = "Cannot deserialize datetime object." 
raise DeserializationError(msg) from err - else: - return date_obj + return date_obj @staticmethod def deserialize_unix(attr): @@ -1983,6 +2099,7 @@ def deserialize_unix(attr): This is represented as seconds. :param int attr: Object to be serialized. + :return: Deserialized datetime :rtype: Datetime :raises: DeserializationError if format invalid """ @@ -1994,5 +2111,4 @@ def deserialize_unix(attr): except ValueError as err: msg = "Cannot deserialize to unix datetime object." raise DeserializationError(msg) from err - else: - return date_obj + return date_obj diff --git a/sdk/face/azure-ai-vision-face/azure/ai/vision/face/_validation.py b/sdk/face/azure-ai-vision-face/azure/ai/vision/face/_validation.py new file mode 100644 index 000000000000..752b2822f9d3 --- /dev/null +++ b/sdk/face/azure-ai-vision-face/azure/ai/vision/face/_validation.py @@ -0,0 +1,50 @@ +# -------------------------------------------------------------------------- +# Copyright (c) Microsoft Corporation. All rights reserved. +# Licensed under the MIT License. See License.txt in the project root for license information. +# Code generated by Microsoft (R) Python Code Generator. +# Changes may cause incorrect behavior and will be lost if the code is regenerated. +# -------------------------------------------------------------------------- +import functools + + +def api_version_validation(**kwargs): + params_added_on = kwargs.pop("params_added_on", {}) + method_added_on = kwargs.pop("method_added_on", "") + + def decorator(func): + @functools.wraps(func) + def wrapper(*args, **kwargs): + try: + # this assumes the client has an _api_version attribute + client = args[0] + client_api_version = client._config.api_version # pylint: disable=protected-access + except AttributeError: + return func(*args, **kwargs) + + if method_added_on > client_api_version: + raise ValueError( + f"'{func.__name__}' is not available in API version " + f"{client_api_version}. Pass service API version {method_added_on} or newer to your client." + ) + + unsupported = { + parameter: api_version + for api_version, parameters in params_added_on.items() + for parameter in parameters + if parameter in kwargs and api_version > client_api_version + } + if unsupported: + raise ValueError( + "".join( + [ + f"'{param}' is not available in API version {client_api_version}. 
" + f"Use service API version {version} or newer.\n" + for param, version in unsupported.items() + ] + ) + ) + return func(*args, **kwargs) + + return wrapper + + return decorator diff --git a/sdk/face/azure-ai-vision-face/azure/ai/vision/face/_vendor.py b/sdk/face/azure-ai-vision-face/azure/ai/vision/face/_vendor.py index 13617a4e266b..76a375ebbef9 100644 --- a/sdk/face/azure-ai-vision-face/azure/ai/vision/face/_vendor.py +++ b/sdk/face/azure-ai-vision-face/azure/ai/vision/face/_vendor.py @@ -7,13 +7,12 @@ from abc import ABC import json -from typing import Any, Dict, IO, List, Mapping, Optional, Sequence, TYPE_CHECKING, Tuple, Union +from typing import Any, Dict, IO, List, Mapping, Optional, TYPE_CHECKING, Tuple, Union from ._configuration import FaceClientConfiguration, FaceSessionClientConfiguration from ._model_base import Model, SdkJSONEncoder if TYPE_CHECKING: - # pylint: disable=unused-import,ungrouped-imports from azure.core import PipelineClient from ._serialization import Deserializer, Serializer @@ -49,8 +48,6 @@ class FaceSessionClientMixinABC(ABC): Tuple[Optional[str], FileContent, Optional[str]], ] -FilesType = Union[Mapping[str, FileType], Sequence[Tuple[str, FileType]]] - def serialize_multipart_data_entry(data_entry: Any) -> Any: if isinstance(data_entry, (list, tuple, dict, Model)): diff --git a/sdk/face/azure-ai-vision-face/azure/ai/vision/face/aio/__init__.py b/sdk/face/azure-ai-vision-face/azure/ai/vision/face/aio/__init__.py index 25daed5ab3d4..5bd65820f8fe 100644 --- a/sdk/face/azure-ai-vision-face/azure/ai/vision/face/aio/__init__.py +++ b/sdk/face/azure-ai-vision-face/azure/ai/vision/face/aio/__init__.py @@ -6,6 +6,7 @@ # Changes may cause incorrect behavior and will be lost if the code is regenerated. # -------------------------------------------------------------------------- +from ._client import FaceAdministrationClient from ._patch import FaceClient from ._patch import FaceSessionClient @@ -13,6 +14,7 @@ from ._patch import patch_sdk as _patch_sdk __all__ = [ + "FaceAdministrationClient", "FaceClient", "FaceSessionClient", ] diff --git a/sdk/face/azure-ai-vision-face/azure/ai/vision/face/aio/_client.py b/sdk/face/azure-ai-vision-face/azure/ai/vision/face/aio/_client.py index 12cc8657e74a..5ea825943687 100644 --- a/sdk/face/azure-ai-vision-face/azure/ai/vision/face/aio/_client.py +++ b/sdk/face/azure-ai-vision-face/azure/ai/vision/face/aio/_client.py @@ -8,6 +8,7 @@ from copy import deepcopy from typing import Any, Awaitable, TYPE_CHECKING, Union +from typing_extensions import Self from azure.core import AsyncPipelineClient from azure.core.credentials import AzureKeyCredential @@ -15,15 +16,116 @@ from azure.core.rest import AsyncHttpResponse, HttpRequest from .._serialization import Deserializer, Serializer -from ._configuration import FaceClientConfiguration, FaceSessionClientConfiguration -from ._operations import FaceClientOperationsMixin, FaceSessionClientOperationsMixin +from ._configuration import ( + FaceAdministrationClientConfiguration, + FaceClientConfiguration, + FaceSessionClientConfiguration, +) +from .operations import ( + FaceClientOperationsMixin, + FaceSessionClientOperationsMixin, + LargeFaceListOperations, + LargePersonGroupOperations, +) if TYPE_CHECKING: - # pylint: disable=unused-import,ungrouped-imports from azure.core.credentials_async import AsyncTokenCredential -class FaceClient(FaceClientOperationsMixin): # pylint: disable=client-accepts-api-version-keyword +class FaceAdministrationClient: + """FaceAdministrationClient. 
+ + :ivar large_face_list: LargeFaceListOperations operations + :vartype large_face_list: azure.ai.vision.face.aio.operations.LargeFaceListOperations + :ivar large_person_group: LargePersonGroupOperations operations + :vartype large_person_group: azure.ai.vision.face.aio.operations.LargePersonGroupOperations + :param endpoint: Supported Cognitive Services endpoints (protocol and hostname, for example: + https://{resource-name}.cognitiveservices.azure.com). Required. + :type endpoint: str + :param credential: Credential used to authenticate requests to the service. Is either a + AzureKeyCredential type or a TokenCredential type. Required. + :type credential: ~azure.core.credentials.AzureKeyCredential or + ~azure.core.credentials_async.AsyncTokenCredential + :keyword api_version: API Version. Known values are "v1.2-preview.1" and None. Default value is + "v1.2-preview.1". Note that overriding this default value may result in unsupported behavior. + :paramtype api_version: str or ~azure.ai.vision.face.models.Versions + :keyword int polling_interval: Default waiting time between two polls for LRO operations if no + Retry-After header is present. + """ + + def __init__( + self, endpoint: str, credential: Union[AzureKeyCredential, "AsyncTokenCredential"], **kwargs: Any + ) -> None: + _endpoint = "{endpoint}/face/{apiVersion}" + self._config = FaceAdministrationClientConfiguration(endpoint=endpoint, credential=credential, **kwargs) + _policies = kwargs.pop("policies", None) + if _policies is None: + _policies = [ + policies.RequestIdPolicy(**kwargs), + self._config.headers_policy, + self._config.user_agent_policy, + self._config.proxy_policy, + policies.ContentDecodePolicy(**kwargs), + self._config.redirect_policy, + self._config.retry_policy, + self._config.authentication_policy, + self._config.custom_hook_policy, + self._config.logging_policy, + policies.DistributedTracingPolicy(**kwargs), + policies.SensitiveHeaderCleanupPolicy(**kwargs) if self._config.redirect_policy else None, + self._config.http_logging_policy, + ] + self._client: AsyncPipelineClient = AsyncPipelineClient(base_url=_endpoint, policies=_policies, **kwargs) + + self._serialize = Serializer() + self._deserialize = Deserializer() + self._serialize.client_side_validation = False + self.large_face_list = LargeFaceListOperations(self._client, self._config, self._serialize, self._deserialize) + self.large_person_group = LargePersonGroupOperations( + self._client, self._config, self._serialize, self._deserialize + ) + + def send_request( + self, request: HttpRequest, *, stream: bool = False, **kwargs: Any + ) -> Awaitable[AsyncHttpResponse]: + """Runs the network request through the client's chained policies. + + >>> from azure.core.rest import HttpRequest + >>> request = HttpRequest("GET", "https://www.example.org/") + + >>> response = await client.send_request(request) + + + For more information on this code flow, see https://aka.ms/azsdk/dpcodegen/python/send_request + + :param request: The network request you want to make. Required. + :type request: ~azure.core.rest.HttpRequest + :keyword bool stream: Whether the response payload will be streamed. Defaults to False. + :return: The response of your network call. Does not do error handling on your response. 
+ :rtype: ~azure.core.rest.AsyncHttpResponse + """ + + request_copy = deepcopy(request) + path_format_arguments = { + "endpoint": self._serialize.url("self._config.endpoint", self._config.endpoint, "str", skip_quote=True), + "apiVersion": self._serialize.url("self._config.api_version", self._config.api_version, "str"), + } + + request_copy.url = self._client.format_url(request_copy.url, **path_format_arguments) + return self._client.send_request(request_copy, stream=stream, **kwargs) # type: ignore + + async def close(self) -> None: + await self._client.close() + + async def __aenter__(self) -> Self: + await self._client.__aenter__() + return self + + async def __aexit__(self, *exc_details: Any) -> None: + await self._client.__aexit__(*exc_details) + + +class FaceClient(FaceClientOperationsMixin): """FaceClient. :param endpoint: Supported Cognitive Services endpoints (protocol and hostname, for example: @@ -33,8 +135,8 @@ class FaceClient(FaceClientOperationsMixin): # pylint: disable=client-accepts-a AzureKeyCredential type or a TokenCredential type. Required. :type credential: ~azure.core.credentials.AzureKeyCredential or ~azure.core.credentials_async.AsyncTokenCredential - :keyword api_version: API Version. Default value is "v1.1-preview.1". Note that overriding this - default value may result in unsupported behavior. + :keyword api_version: API Version. Known values are "v1.2-preview.1" and None. Default value is + "v1.2-preview.1". Note that overriding this default value may result in unsupported behavior. :paramtype api_version: str or ~azure.ai.vision.face.models.Versions """ @@ -98,7 +200,7 @@ def send_request( async def close(self) -> None: await self._client.close() - async def __aenter__(self) -> "FaceClient": + async def __aenter__(self) -> Self: await self._client.__aenter__() return self @@ -106,7 +208,7 @@ async def __aexit__(self, *exc_details: Any) -> None: await self._client.__aexit__(*exc_details) -class FaceSessionClient(FaceSessionClientOperationsMixin): # pylint: disable=client-accepts-api-version-keyword +class FaceSessionClient(FaceSessionClientOperationsMixin): """FaceSessionClient. :param endpoint: Supported Cognitive Services endpoints (protocol and hostname, for example: @@ -116,8 +218,8 @@ class FaceSessionClient(FaceSessionClientOperationsMixin): # pylint: disable=cl AzureKeyCredential type or a TokenCredential type. Required. :type credential: ~azure.core.credentials.AzureKeyCredential or ~azure.core.credentials_async.AsyncTokenCredential - :keyword api_version: API Version. Default value is "v1.1-preview.1". Note that overriding this - default value may result in unsupported behavior. + :keyword api_version: API Version. Known values are "v1.2-preview.1" and None. Default value is + "v1.2-preview.1". Note that overriding this default value may result in unsupported behavior. 
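Putting the new async `FaceAdministrationClient` together: it is constructed like the other clients and exposes the `large_face_list`/`large_person_group` operation groups declared above, whose training calls return async pollers (note the `polling_interval` keyword in the docstring). A sketch with placeholder values; the exact `create`/`begin_train` signatures are assumed from the broader SDK, not from this diff:

```python
import asyncio

from azure.core.credentials import AzureKeyCredential
from azure.ai.vision.face.aio import FaceAdministrationClient


async def main():
    async with FaceAdministrationClient(
        endpoint="https://<resource-name>.cognitiveservices.azure.com",
        credential=AzureKeyCredential("<key>"),
    ) as client:
        # Hypothetical group id; create() is assumed to take the display
        # name as a keyword argument.
        await client.large_person_group.create("my-group", name="My Group")

        poller = await client.large_person_group.begin_train("my-group")
        await poller.result()  # resolves once training has completed


asyncio.run(main())
```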
:paramtype api_version: str or ~azure.ai.vision.face.models.Versions """ @@ -181,7 +283,7 @@ def send_request( async def close(self) -> None: await self._client.close() - async def __aenter__(self) -> "FaceSessionClient": + async def __aenter__(self) -> Self: await self._client.__aenter__() return self diff --git a/sdk/face/azure-ai-vision-face/azure/ai/vision/face/aio/_configuration.py b/sdk/face/azure-ai-vision-face/azure/ai/vision/face/aio/_configuration.py index 6acdf306c03f..fafe5ab8feff 100644 --- a/sdk/face/azure-ai-vision-face/azure/ai/vision/face/aio/_configuration.py +++ b/sdk/face/azure-ai-vision-face/azure/ai/vision/face/aio/_configuration.py @@ -14,10 +14,66 @@ from .._version import VERSION if TYPE_CHECKING: - # pylint: disable=unused-import,ungrouped-imports from azure.core.credentials_async import AsyncTokenCredential +class FaceAdministrationClientConfiguration: # pylint: disable=too-many-instance-attributes + """Configuration for FaceAdministrationClient. + + Note that all parameters used to create this instance are saved as instance + attributes. + + :param endpoint: Supported Cognitive Services endpoints (protocol and hostname, for example: + https://{resource-name}.cognitiveservices.azure.com). Required. + :type endpoint: str + :param credential: Credential used to authenticate requests to the service. Is either a + AzureKeyCredential type or a TokenCredential type. Required. + :type credential: ~azure.core.credentials.AzureKeyCredential or + ~azure.core.credentials_async.AsyncTokenCredential + :keyword api_version: API Version. Known values are "v1.2-preview.1" and None. Default value is + "v1.2-preview.1". Note that overriding this default value may result in unsupported behavior. + :paramtype api_version: str or ~azure.ai.vision.face.models.Versions + """ + + def __init__( + self, endpoint: str, credential: Union[AzureKeyCredential, "AsyncTokenCredential"], **kwargs: Any + ) -> None: + api_version: str = kwargs.pop("api_version", "v1.2-preview.1") + + if endpoint is None: + raise ValueError("Parameter 'endpoint' must not be None.") + if credential is None: + raise ValueError("Parameter 'credential' must not be None.") + + self.endpoint = endpoint + self.credential = credential + self.api_version = api_version + self.credential_scopes = kwargs.pop("credential_scopes", ["https://cognitiveservices.azure.com/.default"]) + kwargs.setdefault("sdk_moniker", "ai-vision-face/{}".format(VERSION)) + self.polling_interval = kwargs.get("polling_interval", 30) + self._configure(**kwargs) + + def _infer_policy(self, **kwargs): + if isinstance(self.credential, AzureKeyCredential): + return policies.AzureKeyCredentialPolicy(self.credential, "Ocp-Apim-Subscription-Key", **kwargs) + if hasattr(self.credential, "get_token"): + return policies.AsyncBearerTokenCredentialPolicy(self.credential, *self.credential_scopes, **kwargs) + raise TypeError(f"Unsupported credential: {self.credential}") + + def _configure(self, **kwargs: Any) -> None: + self.user_agent_policy = kwargs.get("user_agent_policy") or policies.UserAgentPolicy(**kwargs) + self.headers_policy = kwargs.get("headers_policy") or policies.HeadersPolicy(**kwargs) + self.proxy_policy = kwargs.get("proxy_policy") or policies.ProxyPolicy(**kwargs) + self.logging_policy = kwargs.get("logging_policy") or policies.NetworkTraceLoggingPolicy(**kwargs) + self.http_logging_policy = kwargs.get("http_logging_policy") or policies.HttpLoggingPolicy(**kwargs) + self.custom_hook_policy = kwargs.get("custom_hook_policy") or 
policies.CustomHookPolicy(**kwargs) + self.redirect_policy = kwargs.get("redirect_policy") or policies.AsyncRedirectPolicy(**kwargs) + self.retry_policy = kwargs.get("retry_policy") or policies.AsyncRetryPolicy(**kwargs) + self.authentication_policy = kwargs.get("authentication_policy") + if self.credential and not self.authentication_policy: + self.authentication_policy = self._infer_policy(**kwargs) + + class FaceClientConfiguration: # pylint: disable=too-many-instance-attributes """Configuration for FaceClient. @@ -31,15 +87,15 @@ class FaceClientConfiguration: # pylint: disable=too-many-instance-attributes AzureKeyCredential type or a TokenCredential type. Required. :type credential: ~azure.core.credentials.AzureKeyCredential or ~azure.core.credentials_async.AsyncTokenCredential - :keyword api_version: API Version. Default value is "v1.1-preview.1". Note that overriding this - default value may result in unsupported behavior. + :keyword api_version: API Version. Known values are "v1.2-preview.1" and None. Default value is + "v1.2-preview.1". Note that overriding this default value may result in unsupported behavior. :paramtype api_version: str or ~azure.ai.vision.face.models.Versions """ def __init__( self, endpoint: str, credential: Union[AzureKeyCredential, "AsyncTokenCredential"], **kwargs: Any ) -> None: - api_version: str = kwargs.pop("api_version", "v1.1-preview.1") + api_version: str = kwargs.pop("api_version", "v1.2-preview.1") if endpoint is None: raise ValueError("Parameter 'endpoint' must not be None.") @@ -75,7 +131,7 @@ def _configure(self, **kwargs: Any) -> None: self.authentication_policy = self._infer_policy(**kwargs) -class FaceSessionClientConfiguration: # pylint: disable=too-many-instance-attributes,name-too-long +class FaceSessionClientConfiguration: # pylint: disable=too-many-instance-attributes """Configuration for FaceSessionClient. Note that all parameters used to create this instance are saved as instance @@ -88,15 +144,15 @@ class FaceSessionClientConfiguration: # pylint: disable=too-many-instance-attri AzureKeyCredential type or a TokenCredential type. Required. :type credential: ~azure.core.credentials.AzureKeyCredential or ~azure.core.credentials_async.AsyncTokenCredential - :keyword api_version: API Version. Default value is "v1.1-preview.1". Note that overriding this - default value may result in unsupported behavior. + :keyword api_version: API Version. Known values are "v1.2-preview.1" and None. Default value is + "v1.2-preview.1". Note that overriding this default value may result in unsupported behavior. :paramtype api_version: str or ~azure.ai.vision.face.models.Versions """ def __init__( self, endpoint: str, credential: Union[AzureKeyCredential, "AsyncTokenCredential"], **kwargs: Any ) -> None: - api_version: str = kwargs.pop("api_version", "v1.1-preview.1") + api_version: str = kwargs.pop("api_version", "v1.2-preview.1") if endpoint is None: raise ValueError("Parameter 'endpoint' must not be None.") diff --git a/sdk/face/azure-ai-vision-face/azure/ai/vision/face/aio/_operations/_operations.py b/sdk/face/azure-ai-vision-face/azure/ai/vision/face/aio/_operations/_operations.py deleted file mode 100644 index 30436b1ef06a..000000000000 --- a/sdk/face/azure-ai-vision-face/azure/ai/vision/face/aio/_operations/_operations.py +++ /dev/null @@ -1,3493 +0,0 @@ -# pylint: disable=too-many-lines,too-many-statements -# coding=utf-8 -# -------------------------------------------------------------------------- -# Copyright (c) Microsoft Corporation. 
All rights reserved. -# Licensed under the MIT License. See License.txt in the project root for license information. -# Code generated by Microsoft (R) Python Code Generator. -# Changes may cause incorrect behavior and will be lost if the code is regenerated. -# -------------------------------------------------------------------------- -from io import IOBase -import json -import sys -from typing import Any, Callable, Dict, IO, List, Optional, Type, TypeVar, Union, overload - -from azure.core.exceptions import ( - ClientAuthenticationError, - HttpResponseError, - ResourceExistsError, - ResourceNotFoundError, - ResourceNotModifiedError, - map_error, -) -from azure.core.pipeline import PipelineResponse -from azure.core.rest import AsyncHttpResponse, HttpRequest -from azure.core.tracing.decorator_async import distributed_trace_async -from azure.core.utils import case_insensitive_dict - -from ... import _model_base, models as _models -from ..._model_base import SdkJSONEncoder, _deserialize -from ..._operations._operations import ( - build_face_detect_from_url_request, - build_face_detect_request, - build_face_find_similar_request, - build_face_group_request, - build_face_session_create_liveness_session_request, - build_face_session_create_liveness_with_verify_session_request, - build_face_session_create_liveness_with_verify_session_with_verify_image_request, - build_face_session_delete_liveness_session_request, - build_face_session_delete_liveness_with_verify_session_request, - build_face_session_get_liveness_session_audit_entries_request, - build_face_session_get_liveness_session_result_request, - build_face_session_get_liveness_sessions_request, - build_face_session_get_liveness_with_verify_session_audit_entries_request, - build_face_session_get_liveness_with_verify_session_result_request, - build_face_session_get_liveness_with_verify_sessions_request, - build_face_verify_face_to_face_request, -) -from ..._vendor import prepare_multipart_form_data -from .._vendor import FaceClientMixinABC, FaceSessionClientMixinABC - -if sys.version_info >= (3, 9): - from collections.abc import MutableMapping -else: - from typing import MutableMapping # type: ignore # pylint: disable=ungrouped-imports -JSON = MutableMapping[str, Any] # pylint: disable=unsubscriptable-object -_Unset: Any = object() -T = TypeVar("T") -ClsType = Optional[Callable[[PipelineResponse[HttpRequest, AsyncHttpResponse], T, Dict[str, Any]], Any]] - - -class FaceClientOperationsMixin(FaceClientMixinABC): - - @overload - async def _detect_from_url( - self, - body: JSON, - *, - content_type: str = "application/json", - detection_model: Optional[Union[str, _models.FaceDetectionModel]] = None, - recognition_model: Optional[Union[str, _models.FaceRecognitionModel]] = None, - return_face_id: Optional[bool] = None, - return_face_attributes: Optional[List[Union[str, _models.FaceAttributeType]]] = None, - return_face_landmarks: Optional[bool] = None, - return_recognition_model: Optional[bool] = None, - face_id_time_to_live: Optional[int] = None, - **kwargs: Any - ) -> List[_models.FaceDetectionResult]: ... 
- @overload - async def _detect_from_url( - self, - *, - url: str, - content_type: str = "application/json", - detection_model: Optional[Union[str, _models.FaceDetectionModel]] = None, - recognition_model: Optional[Union[str, _models.FaceRecognitionModel]] = None, - return_face_id: Optional[bool] = None, - return_face_attributes: Optional[List[Union[str, _models.FaceAttributeType]]] = None, - return_face_landmarks: Optional[bool] = None, - return_recognition_model: Optional[bool] = None, - face_id_time_to_live: Optional[int] = None, - **kwargs: Any - ) -> List[_models.FaceDetectionResult]: ... - @overload - async def _detect_from_url( - self, - body: IO[bytes], - *, - content_type: str = "application/json", - detection_model: Optional[Union[str, _models.FaceDetectionModel]] = None, - recognition_model: Optional[Union[str, _models.FaceRecognitionModel]] = None, - return_face_id: Optional[bool] = None, - return_face_attributes: Optional[List[Union[str, _models.FaceAttributeType]]] = None, - return_face_landmarks: Optional[bool] = None, - return_recognition_model: Optional[bool] = None, - face_id_time_to_live: Optional[int] = None, - **kwargs: Any - ) -> List[_models.FaceDetectionResult]: ... - - @distributed_trace_async - async def _detect_from_url( - self, - body: Union[JSON, IO[bytes]] = _Unset, - *, - url: str = _Unset, - detection_model: Optional[Union[str, _models.FaceDetectionModel]] = None, - recognition_model: Optional[Union[str, _models.FaceRecognitionModel]] = None, - return_face_id: Optional[bool] = None, - return_face_attributes: Optional[List[Union[str, _models.FaceAttributeType]]] = None, - return_face_landmarks: Optional[bool] = None, - return_recognition_model: Optional[bool] = None, - face_id_time_to_live: Optional[int] = None, - **kwargs: Any - ) -> List[_models.FaceDetectionResult]: - # pylint: disable=line-too-long - """Detect human faces in an image, return face rectangles, and optionally with faceIds, landmarks, - and attributes. - - .. - - [!IMPORTANT] - To mitigate potential misuse that can subject people to stereotyping, discrimination, or - unfair denial of services, we are retiring Face API attributes that predict emotion, gender, - age, smile, facial hair, hair, and makeup. Read more about this decision - https://azure.microsoft.com/blog/responsible-ai-investments-and-safeguards-for-facial-recognition/. - - - * - - - * No image will be stored. Only the extracted face feature(s) will be stored on server. The - faceId is an identifier of the face feature and will be used in "Identify", "Verify", and "Find - Similar". The stored face features will expire and be deleted at the time specified by - faceIdTimeToLive after the original detection call. - * Optional parameters include faceId, landmarks, and attributes. Attributes include headPose, - glasses, occlusion, accessories, blur, exposure, noise, mask, and qualityForRecognition. Some - of the results returned for specific attributes may not be highly accurate. - * JPEG, PNG, GIF (the first frame), and BMP format are supported. The allowed image file size - is from 1KB to 6MB. - * The minimum detectable face size is 36x36 pixels in an image no larger than 1920x1080 pixels. - Images with dimensions higher than 1920x1080 pixels will need a proportionally larger minimum - face size. - * Up to 100 faces can be returned for an image. Faces are ranked by face rectangle size from - large to small. 
- * For optimal results when querying "Identify", "Verify", and "Find Similar" ('returnFaceId' is - true), please use faces that are: frontal, clear, and with a minimum size of 200x200 pixels - (100 pixels between eyes). - * Different 'detectionModel' values can be provided. To use and compare different detection - models, please refer to - https://learn.microsoft.com/azure/ai-services/computer-vision/how-to/specify-detection-model - - * 'detection_02': Face attributes and landmarks are disabled if you choose this detection - model. - * 'detection_03': Face attributes (mask, blur, and headPose) and landmarks are supported if - you choose this detection model. - - * Different 'recognitionModel' values are provided. If follow-up operations like "Verify", - "Identify", "Find Similar" are needed, please specify the recognition model with - 'recognitionModel' parameter. The default value for 'recognitionModel' is 'recognition_01', if - latest model needed, please explicitly specify the model you need in this parameter. Once - specified, the detected faceIds will be associated with the specified recognition model. More - details, please refer to - https://learn.microsoft.com/azure/ai-services/computer-vision/how-to/specify-recognition-model. - - :param body: Is either a JSON type or a IO[bytes] type. Required. - :type body: JSON or IO[bytes] - :keyword url: URL of input image. Required. - :paramtype url: str - :keyword detection_model: The 'detectionModel' associated with the detected faceIds. Supported - 'detectionModel' values include 'detection_01', 'detection_02' and 'detection_03'. The default - value is 'detection_01'. Known values are: "detection_01", "detection_02", and "detection_03". - Default value is None. - :paramtype detection_model: str or ~azure.ai.vision.face.models.FaceDetectionModel - :keyword recognition_model: The 'recognitionModel' associated with the detected faceIds. - Supported 'recognitionModel' values include 'recognition_01', 'recognition_02', - 'recognition_03' or 'recognition_04'. The default value is 'recognition_01'. 'recognition_04' - is recommended since its accuracy is improved on faces wearing masks compared with - 'recognition_03', and its overall accuracy is improved compared with 'recognition_01' and - 'recognition_02'. Known values are: "recognition_01", "recognition_02", "recognition_03", and - "recognition_04". Default value is None. - :paramtype recognition_model: str or ~azure.ai.vision.face.models.FaceRecognitionModel - :keyword return_face_id: Return faceIds of the detected faces or not. The default value is - true. Default value is None. - :paramtype return_face_id: bool - :keyword return_face_attributes: Analyze and return the one or more specified face attributes - in the comma-separated string like 'returnFaceAttributes=headPose,glasses'. Face attribute - analysis has additional computational and time cost. Default value is None. - :paramtype return_face_attributes: list[str or ~azure.ai.vision.face.models.FaceAttributeType] - :keyword return_face_landmarks: Return face landmarks of the detected faces or not. The default - value is false. Default value is None. - :paramtype return_face_landmarks: bool - :keyword return_recognition_model: Return 'recognitionModel' or not. The default value is - false. This is only applicable when returnFaceId = true. Default value is None. - :paramtype return_recognition_model: bool - :keyword face_id_time_to_live: The number of seconds for the face ID being cached. 
Supported - range from 60 seconds up to 86400 seconds. The default value is 86400 (24 hours). Default value - is None. - :paramtype face_id_time_to_live: int - :return: list of FaceDetectionResult - :rtype: list[~azure.ai.vision.face.models.FaceDetectionResult] - :raises ~azure.core.exceptions.HttpResponseError: - - Example: - .. code-block:: python - - # JSON input template you can fill out and use as your body input. - body = { - "url": "str" # URL of input image. Required. - } - - # response body for status code(s): 200 - response == [ - { - "faceRectangle": { - "height": 0, # The height of the rectangle, in pixels. - Required. - "left": 0, # The distance from the left edge if the image to - the left edge of the rectangle, in pixels. Required. - "top": 0, # The distance from the top edge if the image to - the top edge of the rectangle, in pixels. Required. - "width": 0 # The width of the rectangle, in pixels. - Required. - }, - "faceAttributes": { - "accessories": [ - { - "confidence": 0.0, # Confidence level of the - accessory type. Range between [0,1]. Required. - "type": "str" # Type of the accessory. - Required. Known values are: "headwear", "glasses", and "mask". - } - ], - "age": 0.0, # Optional. Age in years. - "blur": { - "blurLevel": "str", # An enum value indicating level - of blurriness. Required. Known values are: "low", "medium", and - "high". - "value": 0.0 # A number indicating level of - blurriness ranging from 0 to 1. Required. - }, - "exposure": { - "exposureLevel": "str", # An enum value indicating - level of exposure. Required. Known values are: "underExposure", - "goodExposure", and "overExposure". - "value": 0.0 # A number indicating level of exposure - level ranging from 0 to 1. [0, 0.25) is under exposure. [0.25, 0.75) - is good exposure. [0.75, 1] is over exposure. Required. - }, - "facialHair": { - "beard": 0.0, # A number ranging from 0 to 1 - indicating a level of confidence associated with a property. - Required. - "moustache": 0.0, # A number ranging from 0 to 1 - indicating a level of confidence associated with a property. - Required. - "sideburns": 0.0 # A number ranging from 0 to 1 - indicating a level of confidence associated with a property. - Required. - }, - "glasses": "str", # Optional. Glasses type if any of the - face. Known values are: "noGlasses", "readingGlasses", "sunglasses", and - "swimmingGoggles". - "hair": { - "bald": 0.0, # A number describing confidence level - of whether the person is bald. Required. - "hairColor": [ - { - "color": "str", # Name of the hair - color. Required. Known values are: "unknown", "white", - "gray", "blond", "brown", "red", "black", and "other". - "confidence": 0.0 # Confidence level - of the color. Range between [0,1]. Required. - } - ], - "invisible": bool # A boolean value describing - whether the hair is visible in the image. Required. - }, - "headPose": { - "pitch": 0.0, # Value of angles. Required. - "roll": 0.0, # Value of angles. Required. - "yaw": 0.0 # Value of angles. Required. - }, - "mask": { - "noseAndMouthCovered": bool, # A boolean value - indicating whether nose and mouth are covered. Required. - "type": "str" # Type of the mask. Required. Known - values are: "faceMask", "noMask", "otherMaskOrOcclusion", and - "uncertain". - }, - "noise": { - "noiseLevel": "str", # An enum value indicating - level of noise. Required. Known values are: "low", "medium", and - "high". - "value": 0.0 # A number indicating level of noise - level ranging from 0 to 1. [0, 0.25) is under exposure. 
[0.25, 0.75) - is good exposure. [0.75, 1] is over exposure. [0, 0.3) is low noise - level. [0.3, 0.7) is medium noise level. [0.7, 1] is high noise - level. Required. - }, - "occlusion": { - "eyeOccluded": bool, # A boolean value indicating - whether eyes are occluded. Required. - "foreheadOccluded": bool, # A boolean value - indicating whether forehead is occluded. Required. - "mouthOccluded": bool # A boolean value indicating - whether the mouth is occluded. Required. - }, - "qualityForRecognition": "str", # Optional. Properties - describing the overall image quality regarding whether the image being - used in the detection is of sufficient quality to attempt face - recognition on. Known values are: "low", "medium", and "high". - "smile": 0.0 # Optional. Smile intensity, a number between - [0,1]. - }, - "faceId": "str", # Optional. Unique faceId of the detected face, - created by detection API and it will expire 24 hours after the detection - call. To return this, it requires 'returnFaceId' parameter to be true. - "faceLandmarks": { - "eyeLeftBottom": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "eyeLeftInner": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "eyeLeftOuter": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "eyeLeftTop": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "eyeRightBottom": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "eyeRightInner": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "eyeRightOuter": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "eyeRightTop": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "eyebrowLeftInner": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "eyebrowLeftOuter": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "eyebrowRightInner": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "eyebrowRightOuter": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "mouthLeft": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "mouthRight": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "noseLeftAlarOutTip": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "noseLeftAlarTop": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "noseRightAlarOutTip": { - "x": 0.0, # The horizontal component, in pixels. - Required. 
- "y": 0.0 # The vertical component, in pixels. - Required. - }, - "noseRightAlarTop": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "noseRootLeft": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "noseRootRight": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "noseTip": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "pupilLeft": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "pupilRight": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "underLipBottom": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "underLipTop": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "upperLipBottom": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "upperLipTop": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - } - }, - "recognitionModel": "str" # Optional. The 'recognitionModel' - associated with this faceId. This is only returned when - 'returnRecognitionModel' is explicitly set as true. Known values are: - "recognition_01", "recognition_02", "recognition_03", and "recognition_04". 
- } - ] - """ - error_map: MutableMapping[int, Type[HttpResponseError]] = { - 401: ClientAuthenticationError, - 404: ResourceNotFoundError, - 409: ResourceExistsError, - 304: ResourceNotModifiedError, - } - error_map.update(kwargs.pop("error_map", {}) or {}) - - _headers = case_insensitive_dict(kwargs.pop("headers", {}) or {}) - _params = kwargs.pop("params", {}) or {} - - content_type: Optional[str] = kwargs.pop("content_type", _headers.pop("content-type", None)) - cls: ClsType[List[_models.FaceDetectionResult]] = kwargs.pop("cls", None) - - if body is _Unset: - if url is _Unset: - raise TypeError("missing required argument: url") - body = {"url": url} - body = {k: v for k, v in body.items() if v is not None} - content_type = content_type or "application/json" - _content = None - if isinstance(body, (IOBase, bytes)): - _content = body - else: - _content = json.dumps(body, cls=SdkJSONEncoder, exclude_readonly=True) # type: ignore - - _request = build_face_detect_from_url_request( - detection_model=detection_model, - recognition_model=recognition_model, - return_face_id=return_face_id, - return_face_attributes=return_face_attributes, - return_face_landmarks=return_face_landmarks, - return_recognition_model=return_recognition_model, - face_id_time_to_live=face_id_time_to_live, - content_type=content_type, - content=_content, - headers=_headers, - params=_params, - ) - path_format_arguments = { - "endpoint": self._serialize.url("self._config.endpoint", self._config.endpoint, "str", skip_quote=True), - "apiVersion": self._serialize.url("self._config.api_version", self._config.api_version, "str"), - } - _request.url = self._client.format_url(_request.url, **path_format_arguments) - - _stream = kwargs.pop("stream", False) - pipeline_response: PipelineResponse = await self._client._pipeline.run( # type: ignore # pylint: disable=protected-access - _request, stream=_stream, **kwargs - ) - - response = pipeline_response.http_response - - if response.status_code not in [200]: - if _stream: - await response.read() # Load the body in memory and close the socket - map_error(status_code=response.status_code, response=response, error_map=error_map) - error = _deserialize(_models.FaceErrorResponse, response.json()) - raise HttpResponseError(response=response, model=error) - - if _stream: - deserialized = response.iter_bytes() - else: - deserialized = _deserialize(List[_models.FaceDetectionResult], response.json()) - - if cls: - return cls(pipeline_response, deserialized, {}) # type: ignore - - return deserialized # type: ignore - - @distributed_trace_async - async def _detect( - self, - image_content: bytes, - *, - detection_model: Optional[Union[str, _models.FaceDetectionModel]] = None, - recognition_model: Optional[Union[str, _models.FaceRecognitionModel]] = None, - return_face_id: Optional[bool] = None, - return_face_attributes: Optional[List[Union[str, _models.FaceAttributeType]]] = None, - return_face_landmarks: Optional[bool] = None, - return_recognition_model: Optional[bool] = None, - face_id_time_to_live: Optional[int] = None, - **kwargs: Any - ) -> List[_models.FaceDetectionResult]: - # pylint: disable=line-too-long - """Detect human faces in an image, return face rectangles, and optionally with faceIds, landmarks, - and attributes. - - .. - - [!IMPORTANT] - To mitigate potential misuse that can subject people to stereotyping, discrimination, or - unfair denial of services, we are retiring Face API attributes that predict emotion, gender, - age, smile, facial hair, hair, and makeup. 
Read more about this decision - https://azure.microsoft.com/blog/responsible-ai-investments-and-safeguards-for-facial-recognition/. - - - * - - - * No image will be stored. Only the extracted face feature(s) will be stored on server. The - faceId is an identifier of the face feature and will be used in "Identify", "Verify", and "Find - Similar". The stored face features will expire and be deleted at the time specified by - faceIdTimeToLive after the original detection call. - * Optional parameters include faceId, landmarks, and attributes. Attributes include headPose, - glasses, occlusion, accessories, blur, exposure, noise, mask, and qualityForRecognition. Some - of the results returned for specific attributes may not be highly accurate. - * JPEG, PNG, GIF (the first frame), and BMP format are supported. The allowed image file size - is from 1KB to 6MB. - * The minimum detectable face size is 36x36 pixels in an image no larger than 1920x1080 pixels. - Images with dimensions higher than 1920x1080 pixels will need a proportionally larger minimum - face size. - * Up to 100 faces can be returned for an image. Faces are ranked by face rectangle size from - large to small. - * For optimal results when querying "Identify", "Verify", and "Find Similar" ('returnFaceId' is - true), please use faces that are: frontal, clear, and with a minimum size of 200x200 pixels - (100 pixels between eyes). - * Different 'detectionModel' values can be provided. To use and compare different detection - models, please refer to - https://learn.microsoft.com/azure/ai-services/computer-vision/how-to/specify-detection-model - - * 'detection_02': Face attributes and landmarks are disabled if you choose this detection - model. - * 'detection_03': Face attributes (mask, blur, and headPose) and landmarks are supported if - you choose this detection model. - - * Different 'recognitionModel' values are provided. If follow-up operations like "Verify", - "Identify", "Find Similar" are needed, please specify the recognition model with - 'recognitionModel' parameter. The default value for 'recognitionModel' is 'recognition_01', if - latest model needed, please explicitly specify the model you need in this parameter. Once - specified, the detected faceIds will be associated with the specified recognition model. More - details, please refer to - https://learn.microsoft.com/azure/ai-services/computer-vision/how-to/specify-recognition-model. - - :param image_content: The input image binary. Required. - :type image_content: bytes - :keyword detection_model: The 'detectionModel' associated with the detected faceIds. Supported - 'detectionModel' values include 'detection_01', 'detection_02' and 'detection_03'. The default - value is 'detection_01'. Known values are: "detection_01", "detection_02", and "detection_03". - Default value is None. - :paramtype detection_model: str or ~azure.ai.vision.face.models.FaceDetectionModel - :keyword recognition_model: The 'recognitionModel' associated with the detected faceIds. - Supported 'recognitionModel' values include 'recognition_01', 'recognition_02', - 'recognition_03' or 'recognition_04'. The default value is 'recognition_01'. 'recognition_04' - is recommended since its accuracy is improved on faces wearing masks compared with - 'recognition_03', and its overall accuracy is improved compared with 'recognition_01' and - 'recognition_02'. Known values are: "recognition_01", "recognition_02", "recognition_03", and - "recognition_04". Default value is None. 
- :paramtype recognition_model: str or ~azure.ai.vision.face.models.FaceRecognitionModel - :keyword return_face_id: Return faceIds of the detected faces or not. The default value is - true. Default value is None. - :paramtype return_face_id: bool - :keyword return_face_attributes: Analyze and return the one or more specified face attributes - in the comma-separated string like 'returnFaceAttributes=headPose,glasses'. Face attribute - analysis has additional computational and time cost. Default value is None. - :paramtype return_face_attributes: list[str or ~azure.ai.vision.face.models.FaceAttributeType] - :keyword return_face_landmarks: Return face landmarks of the detected faces or not. The default - value is false. Default value is None. - :paramtype return_face_landmarks: bool - :keyword return_recognition_model: Return 'recognitionModel' or not. The default value is - false. This is only applicable when returnFaceId = true. Default value is None. - :paramtype return_recognition_model: bool - :keyword face_id_time_to_live: The number of seconds for the face ID being cached. Supported - range from 60 seconds up to 86400 seconds. The default value is 86400 (24 hours). Default value - is None. - :paramtype face_id_time_to_live: int - :return: list of FaceDetectionResult - :rtype: list[~azure.ai.vision.face.models.FaceDetectionResult] - :raises ~azure.core.exceptions.HttpResponseError: - - Example: - .. code-block:: python - - # response body for status code(s): 200 - response == [ - { - "faceRectangle": { - "height": 0, # The height of the rectangle, in pixels. - Required. - "left": 0, # The distance from the left edge if the image to - the left edge of the rectangle, in pixels. Required. - "top": 0, # The distance from the top edge if the image to - the top edge of the rectangle, in pixels. Required. - "width": 0 # The width of the rectangle, in pixels. - Required. - }, - "faceAttributes": { - "accessories": [ - { - "confidence": 0.0, # Confidence level of the - accessory type. Range between [0,1]. Required. - "type": "str" # Type of the accessory. - Required. Known values are: "headwear", "glasses", and "mask". - } - ], - "age": 0.0, # Optional. Age in years. - "blur": { - "blurLevel": "str", # An enum value indicating level - of blurriness. Required. Known values are: "low", "medium", and - "high". - "value": 0.0 # A number indicating level of - blurriness ranging from 0 to 1. Required. - }, - "exposure": { - "exposureLevel": "str", # An enum value indicating - level of exposure. Required. Known values are: "underExposure", - "goodExposure", and "overExposure". - "value": 0.0 # A number indicating level of exposure - level ranging from 0 to 1. [0, 0.25) is under exposure. [0.25, 0.75) - is good exposure. [0.75, 1] is over exposure. Required. - }, - "facialHair": { - "beard": 0.0, # A number ranging from 0 to 1 - indicating a level of confidence associated with a property. - Required. - "moustache": 0.0, # A number ranging from 0 to 1 - indicating a level of confidence associated with a property. - Required. - "sideburns": 0.0 # A number ranging from 0 to 1 - indicating a level of confidence associated with a property. - Required. - }, - "glasses": "str", # Optional. Glasses type if any of the - face. Known values are: "noGlasses", "readingGlasses", "sunglasses", and - "swimmingGoggles". - "hair": { - "bald": 0.0, # A number describing confidence level - of whether the person is bald. Required. - "hairColor": [ - { - "color": "str", # Name of the hair - color. Required. 
Known values are: "unknown", "white", - "gray", "blond", "brown", "red", "black", and "other". - "confidence": 0.0 # Confidence level - of the color. Range between [0,1]. Required. - } - ], - "invisible": bool # A boolean value describing - whether the hair is visible in the image. Required. - }, - "headPose": { - "pitch": 0.0, # Value of angles. Required. - "roll": 0.0, # Value of angles. Required. - "yaw": 0.0 # Value of angles. Required. - }, - "mask": { - "noseAndMouthCovered": bool, # A boolean value - indicating whether nose and mouth are covered. Required. - "type": "str" # Type of the mask. Required. Known - values are: "faceMask", "noMask", "otherMaskOrOcclusion", and - "uncertain". - }, - "noise": { - "noiseLevel": "str", # An enum value indicating - level of noise. Required. Known values are: "low", "medium", and - "high". - "value": 0.0 # A number indicating level of noise - level ranging from 0 to 1. [0, 0.25) is under exposure. [0.25, 0.75) - is good exposure. [0.75, 1] is over exposure. [0, 0.3) is low noise - level. [0.3, 0.7) is medium noise level. [0.7, 1] is high noise - level. Required. - }, - "occlusion": { - "eyeOccluded": bool, # A boolean value indicating - whether eyes are occluded. Required. - "foreheadOccluded": bool, # A boolean value - indicating whether forehead is occluded. Required. - "mouthOccluded": bool # A boolean value indicating - whether the mouth is occluded. Required. - }, - "qualityForRecognition": "str", # Optional. Properties - describing the overall image quality regarding whether the image being - used in the detection is of sufficient quality to attempt face - recognition on. Known values are: "low", "medium", and "high". - "smile": 0.0 # Optional. Smile intensity, a number between - [0,1]. - }, - "faceId": "str", # Optional. Unique faceId of the detected face, - created by detection API and it will expire 24 hours after the detection - call. To return this, it requires 'returnFaceId' parameter to be true. - "faceLandmarks": { - "eyeLeftBottom": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "eyeLeftInner": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "eyeLeftOuter": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "eyeLeftTop": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "eyeRightBottom": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "eyeRightInner": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "eyeRightOuter": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "eyeRightTop": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "eyebrowLeftInner": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "eyebrowLeftOuter": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. 
- }, - "eyebrowRightInner": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "eyebrowRightOuter": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "mouthLeft": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "mouthRight": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "noseLeftAlarOutTip": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "noseLeftAlarTop": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "noseRightAlarOutTip": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "noseRightAlarTop": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "noseRootLeft": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "noseRootRight": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "noseTip": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "pupilLeft": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "pupilRight": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "underLipBottom": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "underLipTop": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "upperLipBottom": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "upperLipTop": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - } - }, - "recognitionModel": "str" # Optional. The 'recognitionModel' - associated with this faceId. This is only returned when - 'returnRecognitionModel' is explicitly set as true. Known values are: - "recognition_01", "recognition_02", "recognition_03", and "recognition_04". 
- } - ] - """ - error_map: MutableMapping[int, Type[HttpResponseError]] = { - 401: ClientAuthenticationError, - 404: ResourceNotFoundError, - 409: ResourceExistsError, - 304: ResourceNotModifiedError, - } - error_map.update(kwargs.pop("error_map", {}) or {}) - - _headers = case_insensitive_dict(kwargs.pop("headers", {}) or {}) - _params = kwargs.pop("params", {}) or {} - - content_type: str = kwargs.pop("content_type", _headers.pop("content-type", "application/octet-stream")) - cls: ClsType[List[_models.FaceDetectionResult]] = kwargs.pop("cls", None) - - _content = image_content - - _request = build_face_detect_request( - detection_model=detection_model, - recognition_model=recognition_model, - return_face_id=return_face_id, - return_face_attributes=return_face_attributes, - return_face_landmarks=return_face_landmarks, - return_recognition_model=return_recognition_model, - face_id_time_to_live=face_id_time_to_live, - content_type=content_type, - content=_content, - headers=_headers, - params=_params, - ) - path_format_arguments = { - "endpoint": self._serialize.url("self._config.endpoint", self._config.endpoint, "str", skip_quote=True), - "apiVersion": self._serialize.url("self._config.api_version", self._config.api_version, "str"), - } - _request.url = self._client.format_url(_request.url, **path_format_arguments) - - _stream = kwargs.pop("stream", False) - pipeline_response: PipelineResponse = await self._client._pipeline.run( # type: ignore # pylint: disable=protected-access - _request, stream=_stream, **kwargs - ) - - response = pipeline_response.http_response - - if response.status_code not in [200]: - if _stream: - await response.read() # Load the body in memory and close the socket - map_error(status_code=response.status_code, response=response, error_map=error_map) - error = _deserialize(_models.FaceErrorResponse, response.json()) - raise HttpResponseError(response=response, model=error) - - if _stream: - deserialized = response.iter_bytes() - else: - deserialized = _deserialize(List[_models.FaceDetectionResult], response.json()) - - if cls: - return cls(pipeline_response, deserialized, {}) # type: ignore - - return deserialized # type: ignore - - @overload - async def find_similar( - self, body: JSON, *, content_type: str = "application/json", **kwargs: Any - ) -> List[_models.FaceFindSimilarResult]: - # pylint: disable=line-too-long - """Given query face's faceId, to search the similar-looking faces from a faceId array. A faceId - array contains the faces created by Detect. - - Depending on the input the returned similar faces list contains faceIds or persistedFaceIds - ranked by similarity. - - Find similar has two working modes, "matchPerson" and "matchFace". "matchPerson" is the default - mode that it tries to find faces of the same person as possible by using internal same-person - thresholds. It is useful to find a known person's other photos. Note that an empty list will be - returned if no faces pass the internal thresholds. "matchFace" mode ignores same-person - thresholds and returns ranked similar faces anyway, even the similarity is low. It can be used - in the cases like searching celebrity-looking faces. - - The 'recognitionModel' associated with the query faceId should be the same as the - 'recognitionModel' used by the target faceId array. - - :param body: Required. - :type body: JSON - :keyword content_type: Body Parameter content-type. Content type parameter for JSON body. - Default value is "application/json". 
- :paramtype content_type: str - :return: list of FaceFindSimilarResult - :rtype: list[~azure.ai.vision.face.models.FaceFindSimilarResult] - :raises ~azure.core.exceptions.HttpResponseError: - - Example: - .. code-block:: python - - # JSON input template you can fill out and use as your body input. - body = { - "faceId": "str", # faceId of the query face. User needs to call "Detect" - first to get a valid faceId. Note that this faceId is not persisted and will - expire 24 hours after the detection call. Required. - "faceIds": [ - "str" # An array of candidate faceIds. All of them are created by - "Detect" and the faceIds will expire 24 hours after the detection call. The - number of faceIds is limited to 1000. Required. - ], - "maxNumOfCandidatesReturned": 0, # Optional. The number of top similar faces - returned. The valid range is [1, 1000]. Default value is 20. - "mode": "str" # Optional. Similar face searching mode. It can be - 'matchPerson' or 'matchFace'. Default value is 'matchPerson'. Known values are: - "matchPerson" and "matchFace". - } - - # response body for status code(s): 200 - response == [ - { - "confidence": 0.0, # Confidence value of the candidate. The higher - confidence, the more similar. Range between [0,1]. Required. - "faceId": "str", # Optional. faceId of candidate face when find by - faceIds. faceId is created by "Detect" and will expire 24 hours after the - detection call. - "persistedFaceId": "str" # Optional. persistedFaceId of candidate - face when find by faceListId or largeFaceListId. persistedFaceId in face - list/large face list is persisted and will not expire. - } - ] - """ - - @overload - async def find_similar( - self, - *, - face_id: str, - face_ids: List[str], - content_type: str = "application/json", - max_num_of_candidates_returned: Optional[int] = None, - mode: Optional[Union[str, _models.FindSimilarMatchMode]] = None, - **kwargs: Any - ) -> List[_models.FaceFindSimilarResult]: - # pylint: disable=line-too-long - """Given query face's faceId, to search the similar-looking faces from a faceId array. A faceId - array contains the faces created by Detect. - - Depending on the input the returned similar faces list contains faceIds or persistedFaceIds - ranked by similarity. - - Find similar has two working modes, "matchPerson" and "matchFace". "matchPerson" is the default - mode that it tries to find faces of the same person as possible by using internal same-person - thresholds. It is useful to find a known person's other photos. Note that an empty list will be - returned if no faces pass the internal thresholds. "matchFace" mode ignores same-person - thresholds and returns ranked similar faces anyway, even the similarity is low. It can be used - in the cases like searching celebrity-looking faces. - - The 'recognitionModel' associated with the query faceId should be the same as the - 'recognitionModel' used by the target faceId array. - - :keyword face_id: faceId of the query face. User needs to call "Detect" first to get a valid - faceId. Note that this faceId is not persisted and will expire 24 hours after the detection - call. Required. - :paramtype face_id: str - :keyword face_ids: An array of candidate faceIds. All of them are created by "Detect" and the - faceIds will expire 24 hours after the detection call. The number of faceIds is limited to - 1000. Required. - :paramtype face_ids: list[str] - :keyword content_type: Body Parameter content-type. Content type parameter for JSON body. - Default value is "application/json". 
- :paramtype content_type: str - :keyword max_num_of_candidates_returned: The number of top similar faces returned. The valid - range is [1, 1000]. Default value is 20. Default value is None. - :paramtype max_num_of_candidates_returned: int - :keyword mode: Similar face searching mode. It can be 'matchPerson' or 'matchFace'. Default - value is 'matchPerson'. Known values are: "matchPerson" and "matchFace". Default value is None. - :paramtype mode: str or ~azure.ai.vision.face.models.FindSimilarMatchMode - :return: list of FaceFindSimilarResult - :rtype: list[~azure.ai.vision.face.models.FaceFindSimilarResult] - :raises ~azure.core.exceptions.HttpResponseError: - - Example: - .. code-block:: python - - # response body for status code(s): 200 - response == [ - { - "confidence": 0.0, # Confidence value of the candidate. The higher - confidence, the more similar. Range between [0,1]. Required. - "faceId": "str", # Optional. faceId of candidate face when find by - faceIds. faceId is created by "Detect" and will expire 24 hours after the - detection call. - "persistedFaceId": "str" # Optional. persistedFaceId of candidate - face when find by faceListId or largeFaceListId. persistedFaceId in face - list/large face list is persisted and will not expire. - } - ] - """ - - @overload - async def find_similar( - self, body: IO[bytes], *, content_type: str = "application/json", **kwargs: Any - ) -> List[_models.FaceFindSimilarResult]: - # pylint: disable=line-too-long - """Given query face's faceId, to search the similar-looking faces from a faceId array. A faceId - array contains the faces created by Detect. - - Depending on the input the returned similar faces list contains faceIds or persistedFaceIds - ranked by similarity. - - Find similar has two working modes, "matchPerson" and "matchFace". "matchPerson" is the default - mode that it tries to find faces of the same person as possible by using internal same-person - thresholds. It is useful to find a known person's other photos. Note that an empty list will be - returned if no faces pass the internal thresholds. "matchFace" mode ignores same-person - thresholds and returns ranked similar faces anyway, even the similarity is low. It can be used - in the cases like searching celebrity-looking faces. - - The 'recognitionModel' associated with the query faceId should be the same as the - 'recognitionModel' used by the target faceId array. - - :param body: Required. - :type body: IO[bytes] - :keyword content_type: Body Parameter content-type. Content type parameter for binary body. - Default value is "application/json". - :paramtype content_type: str - :return: list of FaceFindSimilarResult - :rtype: list[~azure.ai.vision.face.models.FaceFindSimilarResult] - :raises ~azure.core.exceptions.HttpResponseError: - - Example: - .. code-block:: python - - # response body for status code(s): 200 - response == [ - { - "confidence": 0.0, # Confidence value of the candidate. The higher - confidence, the more similar. Range between [0,1]. Required. - "faceId": "str", # Optional. faceId of candidate face when find by - faceIds. faceId is created by "Detect" and will expire 24 hours after the - detection call. - "persistedFaceId": "str" # Optional. persistedFaceId of candidate - face when find by faceListId or largeFaceListId. persistedFaceId in face - list/large face list is persisted and will not expire. 
- } - ] - """ - - @distributed_trace_async - async def find_similar( - self, - body: Union[JSON, IO[bytes]] = _Unset, - *, - face_id: str = _Unset, - face_ids: List[str] = _Unset, - max_num_of_candidates_returned: Optional[int] = None, - mode: Optional[Union[str, _models.FindSimilarMatchMode]] = None, - **kwargs: Any - ) -> List[_models.FaceFindSimilarResult]: - # pylint: disable=line-too-long - """Given query face's faceId, to search the similar-looking faces from a faceId array. A faceId - array contains the faces created by Detect. - - Depending on the input the returned similar faces list contains faceIds or persistedFaceIds - ranked by similarity. - - Find similar has two working modes, "matchPerson" and "matchFace". "matchPerson" is the default - mode that it tries to find faces of the same person as possible by using internal same-person - thresholds. It is useful to find a known person's other photos. Note that an empty list will be - returned if no faces pass the internal thresholds. "matchFace" mode ignores same-person - thresholds and returns ranked similar faces anyway, even the similarity is low. It can be used - in the cases like searching celebrity-looking faces. - - The 'recognitionModel' associated with the query faceId should be the same as the - 'recognitionModel' used by the target faceId array. - - :param body: Is either a JSON type or a IO[bytes] type. Required. - :type body: JSON or IO[bytes] - :keyword face_id: faceId of the query face. User needs to call "Detect" first to get a valid - faceId. Note that this faceId is not persisted and will expire 24 hours after the detection - call. Required. - :paramtype face_id: str - :keyword face_ids: An array of candidate faceIds. All of them are created by "Detect" and the - faceIds will expire 24 hours after the detection call. The number of faceIds is limited to - 1000. Required. - :paramtype face_ids: list[str] - :keyword max_num_of_candidates_returned: The number of top similar faces returned. The valid - range is [1, 1000]. Default value is 20. Default value is None. - :paramtype max_num_of_candidates_returned: int - :keyword mode: Similar face searching mode. It can be 'matchPerson' or 'matchFace'. Default - value is 'matchPerson'. Known values are: "matchPerson" and "matchFace". Default value is None. - :paramtype mode: str or ~azure.ai.vision.face.models.FindSimilarMatchMode - :return: list of FaceFindSimilarResult - :rtype: list[~azure.ai.vision.face.models.FaceFindSimilarResult] - :raises ~azure.core.exceptions.HttpResponseError: - - Example: - .. code-block:: python - - # JSON input template you can fill out and use as your body input. - body = { - "faceId": "str", # faceId of the query face. User needs to call "Detect" - first to get a valid faceId. Note that this faceId is not persisted and will - expire 24 hours after the detection call. Required. - "faceIds": [ - "str" # An array of candidate faceIds. All of them are created by - "Detect" and the faceIds will expire 24 hours after the detection call. The - number of faceIds is limited to 1000. Required. - ], - "maxNumOfCandidatesReturned": 0, # Optional. The number of top similar faces - returned. The valid range is [1, 1000]. Default value is 20. - "mode": "str" # Optional. Similar face searching mode. It can be - 'matchPerson' or 'matchFace'. Default value is 'matchPerson'. Known values are: - "matchPerson" and "matchFace". - } - - # response body for status code(s): 200 - response == [ - { - "confidence": 0.0, # Confidence value of the candidate. 
The higher - confidence, the more similar. Range between [0,1]. Required. - "faceId": "str", # Optional. faceId of candidate face when find by - faceIds. faceId is created by "Detect" and will expire 24 hours after the - detection call. - "persistedFaceId": "str" # Optional. persistedFaceId of candidate - face when find by faceListId or largeFaceListId. persistedFaceId in face - list/large face list is persisted and will not expire. - } - ] - """ - error_map: MutableMapping[int, Type[HttpResponseError]] = { - 401: ClientAuthenticationError, - 404: ResourceNotFoundError, - 409: ResourceExistsError, - 304: ResourceNotModifiedError, - } - error_map.update(kwargs.pop("error_map", {}) or {}) - - _headers = case_insensitive_dict(kwargs.pop("headers", {}) or {}) - _params = kwargs.pop("params", {}) or {} - - content_type: Optional[str] = kwargs.pop("content_type", _headers.pop("Content-Type", None)) - cls: ClsType[List[_models.FaceFindSimilarResult]] = kwargs.pop("cls", None) - - if body is _Unset: - if face_id is _Unset: - raise TypeError("missing required argument: face_id") - if face_ids is _Unset: - raise TypeError("missing required argument: face_ids") - body = { - "faceid": face_id, - "faceids": face_ids, - "maxnumofcandidatesreturned": max_num_of_candidates_returned, - "mode": mode, - } - body = {k: v for k, v in body.items() if v is not None} - content_type = content_type or "application/json" - _content = None - if isinstance(body, (IOBase, bytes)): - _content = body - else: - _content = json.dumps(body, cls=SdkJSONEncoder, exclude_readonly=True) # type: ignore - - _request = build_face_find_similar_request( - content_type=content_type, - content=_content, - headers=_headers, - params=_params, - ) - path_format_arguments = { - "endpoint": self._serialize.url("self._config.endpoint", self._config.endpoint, "str", skip_quote=True), - "apiVersion": self._serialize.url("self._config.api_version", self._config.api_version, "str"), - } - _request.url = self._client.format_url(_request.url, **path_format_arguments) - - _stream = kwargs.pop("stream", False) - pipeline_response: PipelineResponse = await self._client._pipeline.run( # type: ignore # pylint: disable=protected-access - _request, stream=_stream, **kwargs - ) - - response = pipeline_response.http_response - - if response.status_code not in [200]: - if _stream: - await response.read() # Load the body in memory and close the socket - map_error(status_code=response.status_code, response=response, error_map=error_map) - error = _deserialize(_models.FaceErrorResponse, response.json()) - raise HttpResponseError(response=response, model=error) - - if _stream: - deserialized = response.iter_bytes() - else: - deserialized = _deserialize(List[_models.FaceFindSimilarResult], response.json()) - - if cls: - return cls(pipeline_response, deserialized, {}) # type: ignore - - return deserialized # type: ignore - - @overload - async def verify_face_to_face( - self, body: JSON, *, content_type: str = "application/json", **kwargs: Any - ) -> _models.FaceVerificationResult: - # pylint: disable=line-too-long - """Verify whether two faces belong to a same person. - - .. - - [!NOTE] - - * - - - * Higher face image quality means better identification precision. Please consider - high-quality faces: frontal, clear, and face size is 200x200 pixels (100 pixels between eyes) - or bigger. - * For the scenarios that are sensitive to accuracy please make your own judgment. - * The 'recognitionModel' associated with the both faces should be the same. 
- - :param body: Required. - :type body: JSON - :keyword content_type: Body Parameter content-type. Content type parameter for JSON body. - Default value is "application/json". - :paramtype content_type: str - :return: FaceVerificationResult. The FaceVerificationResult is compatible with MutableMapping - :rtype: ~azure.ai.vision.face.models.FaceVerificationResult - :raises ~azure.core.exceptions.HttpResponseError: - - Example: - .. code-block:: python - - # JSON input template you can fill out and use as your body input. - body = { - "faceId1": "str", # The faceId of one face, come from "Detect". Required. - "faceId2": "str" # The faceId of another face, come from "Detect". Required. - } - - # response body for status code(s): 200 - response == { - "confidence": 0.0, # A number indicates the similarity confidence of whether - two faces belong to the same person, or whether the face belongs to the person. - By default, isIdentical is set to True if similarity confidence is greater than - or equal to 0.5. This is useful for advanced users to override 'isIdentical' and - fine-tune the result on their own data. Required. - "isIdentical": bool # True if the two faces belong to the same person or the - face belongs to the person, otherwise false. Required. - } - """ - - @overload - async def verify_face_to_face( - self, *, face_id1: str, face_id2: str, content_type: str = "application/json", **kwargs: Any - ) -> _models.FaceVerificationResult: - # pylint: disable=line-too-long - """Verify whether two faces belong to a same person. - - .. - - [!NOTE] - - * - - - * Higher face image quality means better identification precision. Please consider - high-quality faces: frontal, clear, and face size is 200x200 pixels (100 pixels between eyes) - or bigger. - * For the scenarios that are sensitive to accuracy please make your own judgment. - * The 'recognitionModel' associated with the both faces should be the same. - - :keyword face_id1: The faceId of one face, come from "Detect". Required. - :paramtype face_id1: str - :keyword face_id2: The faceId of another face, come from "Detect". Required. - :paramtype face_id2: str - :keyword content_type: Body Parameter content-type. Content type parameter for JSON body. - Default value is "application/json". - :paramtype content_type: str - :return: FaceVerificationResult. The FaceVerificationResult is compatible with MutableMapping - :rtype: ~azure.ai.vision.face.models.FaceVerificationResult - :raises ~azure.core.exceptions.HttpResponseError: - - Example: - .. code-block:: python - - # response body for status code(s): 200 - response == { - "confidence": 0.0, # A number indicates the similarity confidence of whether - two faces belong to the same person, or whether the face belongs to the person. - By default, isIdentical is set to True if similarity confidence is greater than - or equal to 0.5. This is useful for advanced users to override 'isIdentical' and - fine-tune the result on their own data. Required. - "isIdentical": bool # True if the two faces belong to the same person or the - face belongs to the person, otherwise false. Required. - } - """ - - @overload - async def verify_face_to_face( - self, body: IO[bytes], *, content_type: str = "application/json", **kwargs: Any - ) -> _models.FaceVerificationResult: - # pylint: disable=line-too-long - """Verify whether two faces belong to a same person. - - .. - - [!NOTE] - - * - - - * Higher face image quality means better identification precision. 
Please consider - high-quality faces: frontal, clear, and face size is 200x200 pixels (100 pixels between eyes) - or bigger. - * For the scenarios that are sensitive to accuracy please make your own judgment. - * The 'recognitionModel' associated with the both faces should be the same. - - :param body: Required. - :type body: IO[bytes] - :keyword content_type: Body Parameter content-type. Content type parameter for binary body. - Default value is "application/json". - :paramtype content_type: str - :return: FaceVerificationResult. The FaceVerificationResult is compatible with MutableMapping - :rtype: ~azure.ai.vision.face.models.FaceVerificationResult - :raises ~azure.core.exceptions.HttpResponseError: - - Example: - .. code-block:: python - - # response body for status code(s): 200 - response == { - "confidence": 0.0, # A number indicates the similarity confidence of whether - two faces belong to the same person, or whether the face belongs to the person. - By default, isIdentical is set to True if similarity confidence is greater than - or equal to 0.5. This is useful for advanced users to override 'isIdentical' and - fine-tune the result on their own data. Required. - "isIdentical": bool # True if the two faces belong to the same person or the - face belongs to the person, otherwise false. Required. - } - """ - - @distributed_trace_async - async def verify_face_to_face( - self, body: Union[JSON, IO[bytes]] = _Unset, *, face_id1: str = _Unset, face_id2: str = _Unset, **kwargs: Any - ) -> _models.FaceVerificationResult: - # pylint: disable=line-too-long - """Verify whether two faces belong to a same person. - - .. - - [!NOTE] - - * - - - * Higher face image quality means better identification precision. Please consider - high-quality faces: frontal, clear, and face size is 200x200 pixels (100 pixels between eyes) - or bigger. - * For the scenarios that are sensitive to accuracy please make your own judgment. - * The 'recognitionModel' associated with the both faces should be the same. - - :param body: Is either a JSON type or a IO[bytes] type. Required. - :type body: JSON or IO[bytes] - :keyword face_id1: The faceId of one face, come from "Detect". Required. - :paramtype face_id1: str - :keyword face_id2: The faceId of another face, come from "Detect". Required. - :paramtype face_id2: str - :return: FaceVerificationResult. The FaceVerificationResult is compatible with MutableMapping - :rtype: ~azure.ai.vision.face.models.FaceVerificationResult - :raises ~azure.core.exceptions.HttpResponseError: - - Example: - .. code-block:: python - - # JSON input template you can fill out and use as your body input. - body = { - "faceId1": "str", # The faceId of one face, come from "Detect". Required. - "faceId2": "str" # The faceId of another face, come from "Detect". Required. - } - - # response body for status code(s): 200 - response == { - "confidence": 0.0, # A number indicates the similarity confidence of whether - two faces belong to the same person, or whether the face belongs to the person. - By default, isIdentical is set to True if similarity confidence is greater than - or equal to 0.5. This is useful for advanced users to override 'isIdentical' and - fine-tune the result on their own data. Required. - "isIdentical": bool # True if the two faces belong to the same person or the - face belongs to the person, otherwise false. Required. 
- } - """ - error_map: MutableMapping[int, Type[HttpResponseError]] = { - 401: ClientAuthenticationError, - 404: ResourceNotFoundError, - 409: ResourceExistsError, - 304: ResourceNotModifiedError, - } - error_map.update(kwargs.pop("error_map", {}) or {}) - - _headers = case_insensitive_dict(kwargs.pop("headers", {}) or {}) - _params = kwargs.pop("params", {}) or {} - - content_type: Optional[str] = kwargs.pop("content_type", _headers.pop("Content-Type", None)) - cls: ClsType[_models.FaceVerificationResult] = kwargs.pop("cls", None) - - if body is _Unset: - if face_id1 is _Unset: - raise TypeError("missing required argument: face_id1") - if face_id2 is _Unset: - raise TypeError("missing required argument: face_id2") - body = {"faceid1": face_id1, "faceid2": face_id2} - body = {k: v for k, v in body.items() if v is not None} - content_type = content_type or "application/json" - _content = None - if isinstance(body, (IOBase, bytes)): - _content = body - else: - _content = json.dumps(body, cls=SdkJSONEncoder, exclude_readonly=True) # type: ignore - - _request = build_face_verify_face_to_face_request( - content_type=content_type, - content=_content, - headers=_headers, - params=_params, - ) - path_format_arguments = { - "endpoint": self._serialize.url("self._config.endpoint", self._config.endpoint, "str", skip_quote=True), - "apiVersion": self._serialize.url("self._config.api_version", self._config.api_version, "str"), - } - _request.url = self._client.format_url(_request.url, **path_format_arguments) - - _stream = kwargs.pop("stream", False) - pipeline_response: PipelineResponse = await self._client._pipeline.run( # type: ignore # pylint: disable=protected-access - _request, stream=_stream, **kwargs - ) - - response = pipeline_response.http_response - - if response.status_code not in [200]: - if _stream: - await response.read() # Load the body in memory and close the socket - map_error(status_code=response.status_code, response=response, error_map=error_map) - error = _deserialize(_models.FaceErrorResponse, response.json()) - raise HttpResponseError(response=response, model=error) - - if _stream: - deserialized = response.iter_bytes() - else: - deserialized = _deserialize(_models.FaceVerificationResult, response.json()) - - if cls: - return cls(pipeline_response, deserialized, {}) # type: ignore - - return deserialized # type: ignore - - @overload - async def group( - self, body: JSON, *, content_type: str = "application/json", **kwargs: Any - ) -> _models.FaceGroupingResult: - # pylint: disable=line-too-long - """Divide candidate faces into groups based on face similarity. - - > - * - - - * The output is one or more disjointed face groups and a messyGroup. A face group contains - faces that have similar looking, often of the same person. Face groups are ranked by group - size, i.e. number of faces. Notice that faces belonging to a same person might be split into - several groups in the result. - * MessyGroup is a special face group containing faces that cannot find any similar counterpart - face from original faces. The messyGroup will not appear in the result if all faces found their - counterparts. - * Group API needs at least 2 candidate faces and 1000 at most. We suggest to try "Verify Face - To Face" when you only have 2 candidate faces. - * The 'recognitionModel' associated with the query faces' faceIds should be the same. - - :param body: Required. - :type body: JSON - :keyword content_type: Body Parameter content-type. Content type parameter for JSON body. 
- Default value is "application/json". - :paramtype content_type: str - :return: FaceGroupingResult. The FaceGroupingResult is compatible with MutableMapping - :rtype: ~azure.ai.vision.face.models.FaceGroupingResult - :raises ~azure.core.exceptions.HttpResponseError: - - Example: - .. code-block:: python - - # JSON input template you can fill out and use as your body input. - body = { - "faceIds": [ - "str" # Array of candidate faceIds created by "Detect". The maximum - is 1000 faces. Required. - ] - } - - # response body for status code(s): 200 - response == { - "groups": [ - [ - "str" # A partition of the original faces based on face - similarity. Groups are ranked by number of faces. Required. - ] - ], - "messyGroup": [ - "str" # Face ids array of faces that cannot find any similar faces - from original faces. Required. - ] - } - """ - - @overload - async def group( - self, *, face_ids: List[str], content_type: str = "application/json", **kwargs: Any - ) -> _models.FaceGroupingResult: - # pylint: disable=line-too-long - """Divide candidate faces into groups based on face similarity. - - > - * - - - * The output is one or more disjointed face groups and a messyGroup. A face group contains - faces that have similar looking, often of the same person. Face groups are ranked by group - size, i.e. number of faces. Notice that faces belonging to a same person might be split into - several groups in the result. - * MessyGroup is a special face group containing faces that cannot find any similar counterpart - face from original faces. The messyGroup will not appear in the result if all faces found their - counterparts. - * Group API needs at least 2 candidate faces and 1000 at most. We suggest to try "Verify Face - To Face" when you only have 2 candidate faces. - * The 'recognitionModel' associated with the query faces' faceIds should be the same. - - :keyword face_ids: Array of candidate faceIds created by "Detect". The maximum is 1000 faces. - Required. - :paramtype face_ids: list[str] - :keyword content_type: Body Parameter content-type. Content type parameter for JSON body. - Default value is "application/json". - :paramtype content_type: str - :return: FaceGroupingResult. The FaceGroupingResult is compatible with MutableMapping - :rtype: ~azure.ai.vision.face.models.FaceGroupingResult - :raises ~azure.core.exceptions.HttpResponseError: - - Example: - .. code-block:: python - - # response body for status code(s): 200 - response == { - "groups": [ - [ - "str" # A partition of the original faces based on face - similarity. Groups are ranked by number of faces. Required. - ] - ], - "messyGroup": [ - "str" # Face ids array of faces that cannot find any similar faces - from original faces. Required. - ] - } - """ - - @overload - async def group( - self, body: IO[bytes], *, content_type: str = "application/json", **kwargs: Any - ) -> _models.FaceGroupingResult: - # pylint: disable=line-too-long - """Divide candidate faces into groups based on face similarity. - - > - * - - - * The output is one or more disjointed face groups and a messyGroup. A face group contains - faces that have similar looking, often of the same person. Face groups are ranked by group - size, i.e. number of faces. Notice that faces belonging to a same person might be split into - several groups in the result. - * MessyGroup is a special face group containing faces that cannot find any similar counterpart - face from original faces. The messyGroup will not appear in the result if all faces found their - counterparts. 
- * Group API needs at least 2 candidate faces and 1000 at most. We suggest to try "Verify Face - To Face" when you only have 2 candidate faces. - * The 'recognitionModel' associated with the query faces' faceIds should be the same. - - :param body: Required. - :type body: IO[bytes] - :keyword content_type: Body Parameter content-type. Content type parameter for binary body. - Default value is "application/json". - :paramtype content_type: str - :return: FaceGroupingResult. The FaceGroupingResult is compatible with MutableMapping - :rtype: ~azure.ai.vision.face.models.FaceGroupingResult - :raises ~azure.core.exceptions.HttpResponseError: - - Example: - .. code-block:: python - - # response body for status code(s): 200 - response == { - "groups": [ - [ - "str" # A partition of the original faces based on face - similarity. Groups are ranked by number of faces. Required. - ] - ], - "messyGroup": [ - "str" # Face ids array of faces that cannot find any similar faces - from original faces. Required. - ] - } - """ - - @distributed_trace_async - async def group( - self, body: Union[JSON, IO[bytes]] = _Unset, *, face_ids: List[str] = _Unset, **kwargs: Any - ) -> _models.FaceGroupingResult: - # pylint: disable=line-too-long - """Divide candidate faces into groups based on face similarity. - - > - * - - - * The output is one or more disjointed face groups and a messyGroup. A face group contains - faces that have similar looking, often of the same person. Face groups are ranked by group - size, i.e. number of faces. Notice that faces belonging to a same person might be split into - several groups in the result. - * MessyGroup is a special face group containing faces that cannot find any similar counterpart - face from original faces. The messyGroup will not appear in the result if all faces found their - counterparts. - * Group API needs at least 2 candidate faces and 1000 at most. We suggest to try "Verify Face - To Face" when you only have 2 candidate faces. - * The 'recognitionModel' associated with the query faces' faceIds should be the same. - - :param body: Is either a JSON type or a IO[bytes] type. Required. - :type body: JSON or IO[bytes] - :keyword face_ids: Array of candidate faceIds created by "Detect". The maximum is 1000 faces. - Required. - :paramtype face_ids: list[str] - :return: FaceGroupingResult. The FaceGroupingResult is compatible with MutableMapping - :rtype: ~azure.ai.vision.face.models.FaceGroupingResult - :raises ~azure.core.exceptions.HttpResponseError: - - Example: - .. code-block:: python - - # JSON input template you can fill out and use as your body input. - body = { - "faceIds": [ - "str" # Array of candidate faceIds created by "Detect". The maximum - is 1000 faces. Required. - ] - } - - # response body for status code(s): 200 - response == { - "groups": [ - [ - "str" # A partition of the original faces based on face - similarity. Groups are ranked by number of faces. Required. - ] - ], - "messyGroup": [ - "str" # Face ids array of faces that cannot find any similar faces - from original faces. Required. 
- ] - } - """ - error_map: MutableMapping[int, Type[HttpResponseError]] = { - 401: ClientAuthenticationError, - 404: ResourceNotFoundError, - 409: ResourceExistsError, - 304: ResourceNotModifiedError, - } - error_map.update(kwargs.pop("error_map", {}) or {}) - - _headers = case_insensitive_dict(kwargs.pop("headers", {}) or {}) - _params = kwargs.pop("params", {}) or {} - - content_type: Optional[str] = kwargs.pop("content_type", _headers.pop("Content-Type", None)) - cls: ClsType[_models.FaceGroupingResult] = kwargs.pop("cls", None) - - if body is _Unset: - if face_ids is _Unset: - raise TypeError("missing required argument: face_ids") - body = {"faceids": face_ids} - body = {k: v for k, v in body.items() if v is not None} - content_type = content_type or "application/json" - _content = None - if isinstance(body, (IOBase, bytes)): - _content = body - else: - _content = json.dumps(body, cls=SdkJSONEncoder, exclude_readonly=True) # type: ignore - - _request = build_face_group_request( - content_type=content_type, - content=_content, - headers=_headers, - params=_params, - ) - path_format_arguments = { - "endpoint": self._serialize.url("self._config.endpoint", self._config.endpoint, "str", skip_quote=True), - "apiVersion": self._serialize.url("self._config.api_version", self._config.api_version, "str"), - } - _request.url = self._client.format_url(_request.url, **path_format_arguments) - - _stream = kwargs.pop("stream", False) - pipeline_response: PipelineResponse = await self._client._pipeline.run( # type: ignore # pylint: disable=protected-access - _request, stream=_stream, **kwargs - ) - - response = pipeline_response.http_response - - if response.status_code not in [200]: - if _stream: - await response.read() # Load the body in memory and close the socket - map_error(status_code=response.status_code, response=response, error_map=error_map) - error = _deserialize(_models.FaceErrorResponse, response.json()) - raise HttpResponseError(response=response, model=error) - - if _stream: - deserialized = response.iter_bytes() - else: - deserialized = _deserialize(_models.FaceGroupingResult, response.json()) - - if cls: - return cls(pipeline_response, deserialized, {}) # type: ignore - - return deserialized # type: ignore - - -class FaceSessionClientOperationsMixin(FaceSessionClientMixinABC): - - @overload - async def create_liveness_session( - self, body: _models.CreateLivenessSessionContent, *, content_type: str = "application/json", **kwargs: Any - ) -> _models.CreateLivenessSessionResult: - # pylint: disable=line-too-long - """Create a new detect liveness session. - - A session is best for client device scenarios where developers want to authorize a client - device to perform only a liveness detection without granting full access to their resource. - Created sessions have a limited life span and only authorize clients to perform the desired - action before access is expired. - - Permissions includes... - > - * - - - * Ability to call /detectLiveness/singleModal for up to 3 retries. - * A token lifetime of 10 minutes. - - .. - - [!NOTE] - Client access can be revoked by deleting the session using the Delete Liveness Session - operation. To retrieve a result, use the Get Liveness Session. To audit the individual requests - that a client has made to your resource, use the List Liveness Session Audit Entries. - - :param body: Required. - :type body: ~azure.ai.vision.face.models.CreateLivenessSessionContent - :keyword content_type: Body Parameter content-type. Content type parameter for JSON body. 
- Default value is "application/json". - :paramtype content_type: str - :return: CreateLivenessSessionResult. The CreateLivenessSessionResult is compatible with - MutableMapping - :rtype: ~azure.ai.vision.face.models.CreateLivenessSessionResult - :raises ~azure.core.exceptions.HttpResponseError: - - Example: - .. code-block:: python - - # JSON input template you can fill out and use as your body input. - body = { - "livenessOperationMode": "str", # Type of liveness mode the client should - follow. Required. Known values are: "Passive" and "PassiveActive". - "authTokenTimeToLiveInSeconds": 0, # Optional. Seconds the session should - last for. Range is 60 to 86400 seconds. Default value is 600. - "deviceCorrelationId": "str", # Optional. Unique Guid per each end-user - device. This is to provide rate limiting and anti-hammering. If - 'deviceCorrelationIdSetInClient' is true in this request, this - 'deviceCorrelationId' must be null. - "deviceCorrelationIdSetInClient": bool, # Optional. Whether or not to allow - client to set their own 'deviceCorrelationId' via the Vision SDK. Default is - false, and 'deviceCorrelationId' must be set in this request body. - "sendResultsToClient": bool # Optional. Whether or not to allow a '200 - - Success' response body to be sent to the client, which may be undesirable for - security reasons. Default is false, clients will receive a '204 - NoContent' - empty body response. Regardless of selection, calling Session GetResult will - always contain a response body enabling business logic to be implemented. - } - - # response body for status code(s): 200 - response == { - "authToken": "str", # Bearer token to provide authentication for the Vision - SDK running on a client application. This Bearer token has limited permissions to - perform only the required action and expires after the TTL time. It is also - auditable. Required. - "sessionId": "str" # The unique session ID of the created session. It will - expire 48 hours after it was created or may be deleted sooner using the - corresponding Session DELETE operation. Required. - } - """ - - @overload - async def create_liveness_session( - self, body: JSON, *, content_type: str = "application/json", **kwargs: Any - ) -> _models.CreateLivenessSessionResult: - # pylint: disable=line-too-long - """Create a new detect liveness session. - - A session is best for client device scenarios where developers want to authorize a client - device to perform only a liveness detection without granting full access to their resource. - Created sessions have a limited life span and only authorize clients to perform the desired - action before access is expired. - - Permissions includes... - > - * - - - * Ability to call /detectLiveness/singleModal for up to 3 retries. - * A token lifetime of 10 minutes. - - .. - - [!NOTE] - Client access can be revoked by deleting the session using the Delete Liveness Session - operation. To retrieve a result, use the Get Liveness Session. To audit the individual requests - that a client has made to your resource, use the List Liveness Session Audit Entries. - - :param body: Required. - :type body: JSON - :keyword content_type: Body Parameter content-type. Content type parameter for JSON body. - Default value is "application/json". - :paramtype content_type: str - :return: CreateLivenessSessionResult. 
The CreateLivenessSessionResult is compatible with - MutableMapping - :rtype: ~azure.ai.vision.face.models.CreateLivenessSessionResult - :raises ~azure.core.exceptions.HttpResponseError: - - Example: - .. code-block:: python - - # response body for status code(s): 200 - response == { - "authToken": "str", # Bearer token to provide authentication for the Vision - SDK running on a client application. This Bearer token has limited permissions to - perform only the required action and expires after the TTL time. It is also - auditable. Required. - "sessionId": "str" # The unique session ID of the created session. It will - expire 48 hours after it was created or may be deleted sooner using the - corresponding Session DELETE operation. Required. - } - """ - - @overload - async def create_liveness_session( - self, body: IO[bytes], *, content_type: str = "application/json", **kwargs: Any - ) -> _models.CreateLivenessSessionResult: - # pylint: disable=line-too-long - """Create a new detect liveness session. - - A session is best for client device scenarios where developers want to authorize a client - device to perform only a liveness detection without granting full access to their resource. - Created sessions have a limited life span and only authorize clients to perform the desired - action before access is expired. - - Permissions includes... - > - * - - - * Ability to call /detectLiveness/singleModal for up to 3 retries. - * A token lifetime of 10 minutes. - - .. - - [!NOTE] - Client access can be revoked by deleting the session using the Delete Liveness Session - operation. To retrieve a result, use the Get Liveness Session. To audit the individual requests - that a client has made to your resource, use the List Liveness Session Audit Entries. - - :param body: Required. - :type body: IO[bytes] - :keyword content_type: Body Parameter content-type. Content type parameter for binary body. - Default value is "application/json". - :paramtype content_type: str - :return: CreateLivenessSessionResult. The CreateLivenessSessionResult is compatible with - MutableMapping - :rtype: ~azure.ai.vision.face.models.CreateLivenessSessionResult - :raises ~azure.core.exceptions.HttpResponseError: - - Example: - .. code-block:: python - - # response body for status code(s): 200 - response == { - "authToken": "str", # Bearer token to provide authentication for the Vision - SDK running on a client application. This Bearer token has limited permissions to - perform only the required action and expires after the TTL time. It is also - auditable. Required. - "sessionId": "str" # The unique session ID of the created session. It will - expire 48 hours after it was created or may be deleted sooner using the - corresponding Session DELETE operation. Required. - } - """ - - @distributed_trace_async - async def create_liveness_session( - self, body: Union[_models.CreateLivenessSessionContent, JSON, IO[bytes]], **kwargs: Any - ) -> _models.CreateLivenessSessionResult: - # pylint: disable=line-too-long - """Create a new detect liveness session. - - A session is best for client device scenarios where developers want to authorize a client - device to perform only a liveness detection without granting full access to their resource. - Created sessions have a limited life span and only authorize clients to perform the desired - action before access is expired. - - Permissions includes... - > - * - - - * Ability to call /detectLiveness/singleModal for up to 3 retries. - * A token lifetime of 10 minutes. - - .. 
- - [!NOTE] - Client access can be revoked by deleting the session using the Delete Liveness Session - operation. To retrieve a result, use the Get Liveness Session. To audit the individual requests - that a client has made to your resource, use the List Liveness Session Audit Entries. - - :param body: Is one of the following types: CreateLivenessSessionContent, JSON, IO[bytes] - Required. - :type body: ~azure.ai.vision.face.models.CreateLivenessSessionContent or JSON or IO[bytes] - :return: CreateLivenessSessionResult. The CreateLivenessSessionResult is compatible with - MutableMapping - :rtype: ~azure.ai.vision.face.models.CreateLivenessSessionResult - :raises ~azure.core.exceptions.HttpResponseError: - - Example: - .. code-block:: python - - # JSON input template you can fill out and use as your body input. - body = { - "livenessOperationMode": "str", # Type of liveness mode the client should - follow. Required. Known values are: "Passive" and "PassiveActive". - "authTokenTimeToLiveInSeconds": 0, # Optional. Seconds the session should - last for. Range is 60 to 86400 seconds. Default value is 600. - "deviceCorrelationId": "str", # Optional. Unique Guid per each end-user - device. This is to provide rate limiting and anti-hammering. If - 'deviceCorrelationIdSetInClient' is true in this request, this - 'deviceCorrelationId' must be null. - "deviceCorrelationIdSetInClient": bool, # Optional. Whether or not to allow - client to set their own 'deviceCorrelationId' via the Vision SDK. Default is - false, and 'deviceCorrelationId' must be set in this request body. - "sendResultsToClient": bool # Optional. Whether or not to allow a '200 - - Success' response body to be sent to the client, which may be undesirable for - security reasons. Default is false, clients will receive a '204 - NoContent' - empty body response. Regardless of selection, calling Session GetResult will - always contain a response body enabling business logic to be implemented. - } - - # response body for status code(s): 200 - response == { - "authToken": "str", # Bearer token to provide authentication for the Vision - SDK running on a client application. This Bearer token has limited permissions to - perform only the required action and expires after the TTL time. It is also - auditable. Required. - "sessionId": "str" # The unique session ID of the created session. It will - expire 48 hours after it was created or may be deleted sooner using the - corresponding Session DELETE operation. Required. 
- } - """ - error_map: MutableMapping[int, Type[HttpResponseError]] = { - 401: ClientAuthenticationError, - 404: ResourceNotFoundError, - 409: ResourceExistsError, - 304: ResourceNotModifiedError, - } - error_map.update(kwargs.pop("error_map", {}) or {}) - - _headers = case_insensitive_dict(kwargs.pop("headers", {}) or {}) - _params = kwargs.pop("params", {}) or {} - - content_type: Optional[str] = kwargs.pop("content_type", _headers.pop("Content-Type", None)) - cls: ClsType[_models.CreateLivenessSessionResult] = kwargs.pop("cls", None) - - content_type = content_type or "application/json" - _content = None - if isinstance(body, (IOBase, bytes)): - _content = body - else: - _content = json.dumps(body, cls=SdkJSONEncoder, exclude_readonly=True) # type: ignore - - _request = build_face_session_create_liveness_session_request( - content_type=content_type, - content=_content, - headers=_headers, - params=_params, - ) - path_format_arguments = { - "endpoint": self._serialize.url("self._config.endpoint", self._config.endpoint, "str", skip_quote=True), - "apiVersion": self._serialize.url("self._config.api_version", self._config.api_version, "str"), - } - _request.url = self._client.format_url(_request.url, **path_format_arguments) - - _stream = kwargs.pop("stream", False) - pipeline_response: PipelineResponse = await self._client._pipeline.run( # type: ignore # pylint: disable=protected-access - _request, stream=_stream, **kwargs - ) - - response = pipeline_response.http_response - - if response.status_code not in [200]: - if _stream: - await response.read() # Load the body in memory and close the socket - map_error(status_code=response.status_code, response=response, error_map=error_map) - error = _deserialize(_models.FaceErrorResponse, response.json()) - raise HttpResponseError(response=response, model=error) - - if _stream: - deserialized = response.iter_bytes() - else: - deserialized = _deserialize(_models.CreateLivenessSessionResult, response.json()) - - if cls: - return cls(pipeline_response, deserialized, {}) # type: ignore - - return deserialized # type: ignore - - @distributed_trace_async - async def delete_liveness_session( # pylint: disable=inconsistent-return-statements - self, session_id: str, **kwargs: Any - ) -> None: - """Delete all session related information for matching the specified session id. - - .. - - [!NOTE] - Deleting a session deactivates the Session Auth Token by blocking future API calls made with - that Auth Token. While this can be used to remove any access for that token, those requests - will still count towards overall resource rate limits. It's best to leverage TokenTTL to limit - length of tokens in the case that it is misused. - - :param session_id: The unique ID to reference this session. Required. 
- :type session_id: str - :return: None - :rtype: None - :raises ~azure.core.exceptions.HttpResponseError: - """ - error_map: MutableMapping[int, Type[HttpResponseError]] = { - 401: ClientAuthenticationError, - 404: ResourceNotFoundError, - 409: ResourceExistsError, - 304: ResourceNotModifiedError, - } - error_map.update(kwargs.pop("error_map", {}) or {}) - - _headers = kwargs.pop("headers", {}) or {} - _params = kwargs.pop("params", {}) or {} - - cls: ClsType[None] = kwargs.pop("cls", None) - - _request = build_face_session_delete_liveness_session_request( - session_id=session_id, - headers=_headers, - params=_params, - ) - path_format_arguments = { - "endpoint": self._serialize.url("self._config.endpoint", self._config.endpoint, "str", skip_quote=True), - "apiVersion": self._serialize.url("self._config.api_version", self._config.api_version, "str"), - } - _request.url = self._client.format_url(_request.url, **path_format_arguments) - - _stream = False - pipeline_response: PipelineResponse = await self._client._pipeline.run( # type: ignore # pylint: disable=protected-access - _request, stream=_stream, **kwargs - ) - - response = pipeline_response.http_response - - if response.status_code not in [200]: - if _stream: - await response.read() # Load the body in memory and close the socket - map_error(status_code=response.status_code, response=response, error_map=error_map) - error = _deserialize(_models.FaceErrorResponse, response.json()) - raise HttpResponseError(response=response, model=error) - - if cls: - return cls(pipeline_response, None, {}) # type: ignore - - @distributed_trace_async - async def get_liveness_session_result(self, session_id: str, **kwargs: Any) -> _models.LivenessSession: - # pylint: disable=line-too-long - """Get session result of detectLiveness/singleModal call. - - :param session_id: The unique ID to reference this session. Required. - :type session_id: str - :return: LivenessSession. The LivenessSession is compatible with MutableMapping - :rtype: ~azure.ai.vision.face.models.LivenessSession - :raises ~azure.core.exceptions.HttpResponseError: - - Example: - .. code-block:: python - - # response body for status code(s): 200 - response == { - "createdDateTime": "2020-02-20 00:00:00", # DateTime when this session was - created. Required. - "id": "str", # The unique ID to reference this session. Required. - "sessionExpired": bool, # Whether or not the session is expired. Required. - "status": "str", # The current status of the session. Required. Known values - are: "NotStarted", "Started", and "ResultAvailable". - "authTokenTimeToLiveInSeconds": 0, # Optional. Seconds the session should - last for. Range is 60 to 86400 seconds. Default value is 600. - "deviceCorrelationId": "str", # Optional. Unique Guid per each end-user - device. This is to provide rate limiting and anti-hammering. If - 'deviceCorrelationIdSetInClient' is true in this request, this - 'deviceCorrelationId' must be null. - "result": { - "clientRequestId": "str", # The unique clientRequestId that is sent - by the client in the 'client-request-id' header. Required. - "digest": "str", # The server calculated digest for this request. If - the client reported digest differs from the server calculated digest, then - the message integrity between the client and service has been compromised and - the result should not be trusted. For more information, see how to guides on - how to leverage this value to secure your end-to-end solution. Required. - "id": 0, # The unique id to refer to this audit request. 
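# Usage sketch for `delete_liveness_session` above (illustrative only); `session_client`
# stands for an authenticated FaceSessionClient as in the earlier sketch. Per the note
# above, deletion deactivates the session's auth token, though calls made with a revoked
# token still count toward resource rate limits.
async def delete_session_example(session_client, session_id: str) -> None:
    await session_client.delete_liveness_session(session_id)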
Use this id - with the 'start' query parameter to continue on to the next page of audit - results. Required. - "receivedDateTime": "2020-02-20 00:00:00", # The UTC DateTime that - the request was received. Required. - "request": { - "contentType": "str", # The content type of the request. - Required. - "method": "str", # The HTTP method of the request (i.e., - GET, POST, DELETE). Required. - "url": "str", # The relative URL and query of the liveness - request. Required. - "contentLength": 0, # Optional. The length of the request - body in bytes. - "userAgent": "str" # Optional. The user agent used to submit - the request. - }, - "requestId": "str", # The unique requestId that is returned by the - service to the client in the 'apim-request-id' header. Required. - "response": { - "body": { - "livenessDecision": "str", # Optional. The liveness - classification for the target face. Known values are: "uncertain", - "realface", and "spoofface". - "modelVersionUsed": "str", # Optional. The model - version used for liveness classification. Known values are: - "2020-02-15-preview.01", "2021-11-12-preview.03", - "2022-10-15-preview.04", and "2023-03-02-preview.05". - "target": { - "faceRectangle": { - "height": 0, # The height of the - rectangle, in pixels. Required. - "left": 0, # The distance from the - left edge if the image to the left edge of the rectangle, in - pixels. Required. - "top": 0, # The distance from the - top edge if the image to the top edge of the rectangle, in - pixels. Required. - "width": 0 # The width of the - rectangle, in pixels. Required. - }, - "fileName": "str", # The file name which - contains the face rectangle where the liveness classification was - made on. Required. - "imageType": "str", # The image type which - contains the face rectangle where the liveness classification was - made on. Required. Known values are: "Color", "Infrared", and - "Depth". - "timeOffsetWithinFile": 0 # The time offset - within the file of the frame which contains the face rectangle - where the liveness classification was made on. Required. - }, - "verifyResult": { - "isIdentical": bool, # Whether the target - liveness face and comparison image face match. Required. - "matchConfidence": 0.0, # The target face - liveness face and comparison image face verification confidence. - Required. - "verifyImage": { - "faceRectangle": { - "height": 0, # The height of - the rectangle, in pixels. Required. - "left": 0, # The distance - from the left edge if the image to the left edge of the - rectangle, in pixels. Required. - "top": 0, # The distance - from the top edge if the image to the top edge of the - rectangle, in pixels. Required. - "width": 0 # The width of - the rectangle, in pixels. Required. - }, - "qualityForRecognition": "str" # - Quality of face image for recognition. Required. Known values - are: "low", "medium", and "high". - } - } - }, - "latencyInMilliseconds": 0, # The server measured latency - for this request in milliseconds. Required. - "statusCode": 0 # The HTTP status code returned to the - client. Required. - }, - "sessionId": "str" # The unique sessionId of the created session. It - will expire 48 hours after it was created or may be deleted sooner using the - corresponding session DELETE operation. Required. - }, - "sessionStartDateTime": "2020-02-20 00:00:00" # Optional. DateTime when this - session was started by the client. 
- } - """ - error_map: MutableMapping[int, Type[HttpResponseError]] = { - 401: ClientAuthenticationError, - 404: ResourceNotFoundError, - 409: ResourceExistsError, - 304: ResourceNotModifiedError, - } - error_map.update(kwargs.pop("error_map", {}) or {}) - - _headers = kwargs.pop("headers", {}) or {} - _params = kwargs.pop("params", {}) or {} - - cls: ClsType[_models.LivenessSession] = kwargs.pop("cls", None) - - _request = build_face_session_get_liveness_session_result_request( - session_id=session_id, - headers=_headers, - params=_params, - ) - path_format_arguments = { - "endpoint": self._serialize.url("self._config.endpoint", self._config.endpoint, "str", skip_quote=True), - "apiVersion": self._serialize.url("self._config.api_version", self._config.api_version, "str"), - } - _request.url = self._client.format_url(_request.url, **path_format_arguments) - - _stream = kwargs.pop("stream", False) - pipeline_response: PipelineResponse = await self._client._pipeline.run( # type: ignore # pylint: disable=protected-access - _request, stream=_stream, **kwargs - ) - - response = pipeline_response.http_response - - if response.status_code not in [200]: - if _stream: - await response.read() # Load the body in memory and close the socket - map_error(status_code=response.status_code, response=response, error_map=error_map) - error = _deserialize(_models.FaceErrorResponse, response.json()) - raise HttpResponseError(response=response, model=error) - - if _stream: - deserialized = response.iter_bytes() - else: - deserialized = _deserialize(_models.LivenessSession, response.json()) - - if cls: - return cls(pipeline_response, deserialized, {}) # type: ignore - - return deserialized # type: ignore - - @distributed_trace_async - async def get_liveness_sessions( - self, *, start: Optional[str] = None, top: Optional[int] = None, **kwargs: Any - ) -> List[_models.LivenessSessionItem]: - # pylint: disable=line-too-long - """Lists sessions for /detectLiveness/SingleModal. - - List sessions from the last sessionId greater than the 'start'. - - The result should be ordered by sessionId in ascending order. - - :keyword start: List resources greater than the "start". It contains no more than 64 - characters. Default is empty. Default value is None. - :paramtype start: str - :keyword top: The number of items to list, ranging in [1, 1000]. Default is 1000. Default value - is None. - :paramtype top: int - :return: list of LivenessSessionItem - :rtype: list[~azure.ai.vision.face.models.LivenessSessionItem] - :raises ~azure.core.exceptions.HttpResponseError: - - Example: - .. code-block:: python - - # response body for status code(s): 200 - response == [ - { - "createdDateTime": "2020-02-20 00:00:00", # DateTime when this - session was created. Required. - "id": "str", # The unique ID to reference this session. Required. - "sessionExpired": bool, # Whether or not the session is expired. - Required. - "authTokenTimeToLiveInSeconds": 0, # Optional. Seconds the session - should last for. Range is 60 to 86400 seconds. Default value is 600. - "deviceCorrelationId": "str", # Optional. Unique Guid per each - end-user device. This is to provide rate limiting and anti-hammering. If - 'deviceCorrelationIdSetInClient' is true in this request, this - 'deviceCorrelationId' must be null. - "sessionStartDateTime": "2020-02-20 00:00:00" # Optional. DateTime - when this session was started by the client. 
- } - ] - """ - error_map: MutableMapping[int, Type[HttpResponseError]] = { - 401: ClientAuthenticationError, - 404: ResourceNotFoundError, - 409: ResourceExistsError, - 304: ResourceNotModifiedError, - } - error_map.update(kwargs.pop("error_map", {}) or {}) - - _headers = kwargs.pop("headers", {}) or {} - _params = kwargs.pop("params", {}) or {} - - cls: ClsType[List[_models.LivenessSessionItem]] = kwargs.pop("cls", None) - - _request = build_face_session_get_liveness_sessions_request( - start=start, - top=top, - headers=_headers, - params=_params, - ) - path_format_arguments = { - "endpoint": self._serialize.url("self._config.endpoint", self._config.endpoint, "str", skip_quote=True), - "apiVersion": self._serialize.url("self._config.api_version", self._config.api_version, "str"), - } - _request.url = self._client.format_url(_request.url, **path_format_arguments) - - _stream = kwargs.pop("stream", False) - pipeline_response: PipelineResponse = await self._client._pipeline.run( # type: ignore # pylint: disable=protected-access - _request, stream=_stream, **kwargs - ) - - response = pipeline_response.http_response - - if response.status_code not in [200]: - if _stream: - await response.read() # Load the body in memory and close the socket - map_error(status_code=response.status_code, response=response, error_map=error_map) - error = _deserialize(_models.FaceErrorResponse, response.json()) - raise HttpResponseError(response=response, model=error) - - if _stream: - deserialized = response.iter_bytes() - else: - deserialized = _deserialize(List[_models.LivenessSessionItem], response.json()) - - if cls: - return cls(pipeline_response, deserialized, {}) # type: ignore - - return deserialized # type: ignore - - @distributed_trace_async - async def get_liveness_session_audit_entries( - self, session_id: str, *, start: Optional[str] = None, top: Optional[int] = None, **kwargs: Any - ) -> List[_models.LivenessSessionAuditEntry]: - # pylint: disable=line-too-long - """Gets session requests and response body for the session. - - :param session_id: The unique ID to reference this session. Required. - :type session_id: str - :keyword start: List resources greater than the "start". It contains no more than 64 - characters. Default is empty. Default value is None. - :paramtype start: str - :keyword top: The number of items to list, ranging in [1, 1000]. Default is 1000. Default value - is None. - :paramtype top: int - :return: list of LivenessSessionAuditEntry - :rtype: list[~azure.ai.vision.face.models.LivenessSessionAuditEntry] - :raises ~azure.core.exceptions.HttpResponseError: - - Example: - .. code-block:: python - - # response body for status code(s): 200 - response == [ - { - "clientRequestId": "str", # The unique clientRequestId that is sent - by the client in the 'client-request-id' header. Required. - "digest": "str", # The server calculated digest for this request. If - the client reported digest differs from the server calculated digest, then - the message integrity between the client and service has been compromised and - the result should not be trusted. For more information, see how to guides on - how to leverage this value to secure your end-to-end solution. Required. - "id": 0, # The unique id to refer to this audit request. Use this id - with the 'start' query parameter to continue on to the next page of audit - results. Required. - "receivedDateTime": "2020-02-20 00:00:00", # The UTC DateTime that - the request was received. Required. 
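# Paging sketch for `get_liveness_sessions` above (illustrative only). Because results
# are ordered by sessionId in ascending order, the last id of one page can seed the
# `start` parameter for the next page.
async def list_sessions_example(session_client) -> None:
    start = None
    while True:
        page = await session_client.get_liveness_sessions(start=start, top=100)
        for item in page:
            print(item.id, item.created_date_time)
        if len(page) < 100:
            break  # a short page means there is nothing left to list
        start = page[-1].id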
- "request": { - "contentType": "str", # The content type of the request. - Required. - "method": "str", # The HTTP method of the request (i.e., - GET, POST, DELETE). Required. - "url": "str", # The relative URL and query of the liveness - request. Required. - "contentLength": 0, # Optional. The length of the request - body in bytes. - "userAgent": "str" # Optional. The user agent used to submit - the request. - }, - "requestId": "str", # The unique requestId that is returned by the - service to the client in the 'apim-request-id' header. Required. - "response": { - "body": { - "livenessDecision": "str", # Optional. The liveness - classification for the target face. Known values are: "uncertain", - "realface", and "spoofface". - "modelVersionUsed": "str", # Optional. The model - version used for liveness classification. Known values are: - "2020-02-15-preview.01", "2021-11-12-preview.03", - "2022-10-15-preview.04", and "2023-03-02-preview.05". - "target": { - "faceRectangle": { - "height": 0, # The height of the - rectangle, in pixels. Required. - "left": 0, # The distance from the - left edge if the image to the left edge of the rectangle, in - pixels. Required. - "top": 0, # The distance from the - top edge if the image to the top edge of the rectangle, in - pixels. Required. - "width": 0 # The width of the - rectangle, in pixels. Required. - }, - "fileName": "str", # The file name which - contains the face rectangle where the liveness classification was - made on. Required. - "imageType": "str", # The image type which - contains the face rectangle where the liveness classification was - made on. Required. Known values are: "Color", "Infrared", and - "Depth". - "timeOffsetWithinFile": 0 # The time offset - within the file of the frame which contains the face rectangle - where the liveness classification was made on. Required. - }, - "verifyResult": { - "isIdentical": bool, # Whether the target - liveness face and comparison image face match. Required. - "matchConfidence": 0.0, # The target face - liveness face and comparison image face verification confidence. - Required. - "verifyImage": { - "faceRectangle": { - "height": 0, # The height of - the rectangle, in pixels. Required. - "left": 0, # The distance - from the left edge if the image to the left edge of the - rectangle, in pixels. Required. - "top": 0, # The distance - from the top edge if the image to the top edge of the - rectangle, in pixels. Required. - "width": 0 # The width of - the rectangle, in pixels. Required. - }, - "qualityForRecognition": "str" # - Quality of face image for recognition. Required. Known values - are: "low", "medium", and "high". - } - } - }, - "latencyInMilliseconds": 0, # The server measured latency - for this request in milliseconds. Required. - "statusCode": 0 # The HTTP status code returned to the - client. Required. - }, - "sessionId": "str" # The unique sessionId of the created session. It - will expire 48 hours after it was created or may be deleted sooner using the - corresponding session DELETE operation. Required. 
- } - ] - """ - error_map: MutableMapping[int, Type[HttpResponseError]] = { - 401: ClientAuthenticationError, - 404: ResourceNotFoundError, - 409: ResourceExistsError, - 304: ResourceNotModifiedError, - } - error_map.update(kwargs.pop("error_map", {}) or {}) - - _headers = kwargs.pop("headers", {}) or {} - _params = kwargs.pop("params", {}) or {} - - cls: ClsType[List[_models.LivenessSessionAuditEntry]] = kwargs.pop("cls", None) - - _request = build_face_session_get_liveness_session_audit_entries_request( - session_id=session_id, - start=start, - top=top, - headers=_headers, - params=_params, - ) - path_format_arguments = { - "endpoint": self._serialize.url("self._config.endpoint", self._config.endpoint, "str", skip_quote=True), - "apiVersion": self._serialize.url("self._config.api_version", self._config.api_version, "str"), - } - _request.url = self._client.format_url(_request.url, **path_format_arguments) - - _stream = kwargs.pop("stream", False) - pipeline_response: PipelineResponse = await self._client._pipeline.run( # type: ignore # pylint: disable=protected-access - _request, stream=_stream, **kwargs - ) - - response = pipeline_response.http_response - - if response.status_code not in [200]: - if _stream: - await response.read() # Load the body in memory and close the socket - map_error(status_code=response.status_code, response=response, error_map=error_map) - error = _deserialize(_models.FaceErrorResponse, response.json()) - raise HttpResponseError(response=response, model=error) - - if _stream: - deserialized = response.iter_bytes() - else: - deserialized = _deserialize(List[_models.LivenessSessionAuditEntry], response.json()) - - if cls: - return cls(pipeline_response, deserialized, {}) # type: ignore - - return deserialized # type: ignore - - @overload - async def _create_liveness_with_verify_session( - self, body: _models.CreateLivenessSessionContent, *, content_type: str = "application/json", **kwargs: Any - ) -> _models.CreateLivenessWithVerifySessionResult: ... - @overload - async def _create_liveness_with_verify_session( - self, body: JSON, *, content_type: str = "application/json", **kwargs: Any - ) -> _models.CreateLivenessWithVerifySessionResult: ... - @overload - async def _create_liveness_with_verify_session( - self, body: IO[bytes], *, content_type: str = "application/json", **kwargs: Any - ) -> _models.CreateLivenessWithVerifySessionResult: ... - - @distributed_trace_async - async def _create_liveness_with_verify_session( - self, body: Union[_models.CreateLivenessSessionContent, JSON, IO[bytes]], **kwargs: Any - ) -> _models.CreateLivenessWithVerifySessionResult: - # pylint: disable=line-too-long - """Create a new liveness session with verify. Client device submits VerifyImage during the - /detectLivenessWithVerify/singleModal call. - - A session is best for client device scenarios where developers want to authorize a client - device to perform only a liveness detection without granting full access to their resource. - Created sessions have a limited life span and only authorize clients to perform the desired - action before access is expired. - - Permissions includes... - > - * - - - * Ability to call /detectLivenessWithVerify/singleModal for up to 3 retries. - * A token lifetime of 10 minutes. - - .. - - [!NOTE] - - * - - - * Client access can be revoked by deleting the session using the Delete Liveness With Verify - Session operation. - * To retrieve a result, use the Get Liveness With Verify Session. 
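# Usage sketch for `get_liveness_session_audit_entries` above (illustrative only); as
# the docstring notes, the `id` of the last entry can be passed as `start` to continue
# to the next page of audit results.
async def audit_entries_example(session_client, session_id: str) -> None:
    entries = await session_client.get_liveness_session_audit_entries(session_id, top=20)
    for entry in entries:
        print(entry.id, entry.request.url, entry.response.status_code)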
- * To audit the individual requests that a client has made to your resource, use the List - Liveness With Verify Session Audit Entries. - - - Alternative Option: Client device submits VerifyImage during the - /detectLivenessWithVerify/singleModal call. - - .. - - [!NOTE] - Extra measures should be taken to validate that the client is sending the expected - VerifyImage. - - :param body: Is one of the following types: CreateLivenessSessionContent, JSON, IO[bytes] - Required. - :type body: ~azure.ai.vision.face.models.CreateLivenessSessionContent or JSON or IO[bytes] - :return: CreateLivenessWithVerifySessionResult. The CreateLivenessWithVerifySessionResult is - compatible with MutableMapping - :rtype: ~azure.ai.vision.face.models.CreateLivenessWithVerifySessionResult - :raises ~azure.core.exceptions.HttpResponseError: - - Example: - .. code-block:: python - - # JSON input template you can fill out and use as your body input. - body = { - "livenessOperationMode": "str", # Type of liveness mode the client should - follow. Required. Known values are: "Passive" and "PassiveActive". - "authTokenTimeToLiveInSeconds": 0, # Optional. Seconds the session should - last for. Range is 60 to 86400 seconds. Default value is 600. - "deviceCorrelationId": "str", # Optional. Unique Guid per each end-user - device. This is to provide rate limiting and anti-hammering. If - 'deviceCorrelationIdSetInClient' is true in this request, this - 'deviceCorrelationId' must be null. - "deviceCorrelationIdSetInClient": bool, # Optional. Whether or not to allow - client to set their own 'deviceCorrelationId' via the Vision SDK. Default is - false, and 'deviceCorrelationId' must be set in this request body. - "sendResultsToClient": bool # Optional. Whether or not to allow a '200 - - Success' response body to be sent to the client, which may be undesirable for - security reasons. Default is false, clients will receive a '204 - NoContent' - empty body response. Regardless of selection, calling Session GetResult will - always contain a response body enabling business logic to be implemented. - } - - # response body for status code(s): 200 - response == { - "authToken": "str", # Bearer token to provide authentication for the Vision - SDK running on a client application. This Bearer token has limited permissions to - perform only the required action and expires after the TTL time. It is also - auditable. Required. - "sessionId": "str", # The unique session ID of the created session. It will - expire 48 hours after it was created or may be deleted sooner using the - corresponding Session DELETE operation. Required. - "verifyImage": { - "faceRectangle": { - "height": 0, # The height of the rectangle, in pixels. - Required. - "left": 0, # The distance from the left edge if the image to - the left edge of the rectangle, in pixels. Required. - "top": 0, # The distance from the top edge if the image to - the top edge of the rectangle, in pixels. Required. - "width": 0 # The width of the rectangle, in pixels. - Required. - }, - "qualityForRecognition": "str" # Quality of face image for - recognition. Required. Known values are: "low", "medium", and "high". 
- } - } - """ - error_map: MutableMapping[int, Type[HttpResponseError]] = { - 401: ClientAuthenticationError, - 404: ResourceNotFoundError, - 409: ResourceExistsError, - 304: ResourceNotModifiedError, - } - error_map.update(kwargs.pop("error_map", {}) or {}) - - _headers = case_insensitive_dict(kwargs.pop("headers", {}) or {}) - _params = kwargs.pop("params", {}) or {} - - content_type: Optional[str] = kwargs.pop("content_type", _headers.pop("Content-Type", None)) - cls: ClsType[_models.CreateLivenessWithVerifySessionResult] = kwargs.pop("cls", None) - - content_type = content_type or "application/json" - _content = None - if isinstance(body, (IOBase, bytes)): - _content = body - else: - _content = json.dumps(body, cls=SdkJSONEncoder, exclude_readonly=True) # type: ignore - - _request = build_face_session_create_liveness_with_verify_session_request( - content_type=content_type, - content=_content, - headers=_headers, - params=_params, - ) - path_format_arguments = { - "endpoint": self._serialize.url("self._config.endpoint", self._config.endpoint, "str", skip_quote=True), - "apiVersion": self._serialize.url("self._config.api_version", self._config.api_version, "str"), - } - _request.url = self._client.format_url(_request.url, **path_format_arguments) - - _stream = kwargs.pop("stream", False) - pipeline_response: PipelineResponse = await self._client._pipeline.run( # type: ignore # pylint: disable=protected-access - _request, stream=_stream, **kwargs - ) - - response = pipeline_response.http_response - - if response.status_code not in [200]: - if _stream: - await response.read() # Load the body in memory and close the socket - map_error(status_code=response.status_code, response=response, error_map=error_map) - error = _deserialize(_models.FaceErrorResponse, response.json()) - raise HttpResponseError(response=response, model=error) - - if _stream: - deserialized = response.iter_bytes() - else: - deserialized = _deserialize(_models.CreateLivenessWithVerifySessionResult, response.json()) - - if cls: - return cls(pipeline_response, deserialized, {}) # type: ignore - - return deserialized # type: ignore - - @overload - async def _create_liveness_with_verify_session_with_verify_image( # pylint: disable=protected-access,name-too-long - self, body: _models._models.CreateLivenessWithVerifySessionContent, **kwargs: Any - ) -> _models.CreateLivenessWithVerifySessionResult: ... - @overload - async def _create_liveness_with_verify_session_with_verify_image( # pylint: disable=name-too-long - self, body: JSON, **kwargs: Any - ) -> _models.CreateLivenessWithVerifySessionResult: ... - - @distributed_trace_async - async def _create_liveness_with_verify_session_with_verify_image( # pylint: disable=name-too-long - self, body: Union[_models._models.CreateLivenessWithVerifySessionContent, JSON], **kwargs: Any - ) -> _models.CreateLivenessWithVerifySessionResult: - # pylint: disable=line-too-long - """Create a new liveness session with verify. Provide the verify image during session creation. - - A session is best for client device scenarios where developers want to authorize a client - device to perform only a liveness detection without granting full access to their resource. - Created sessions have a limited life span and only authorize clients to perform the desired - action before access is expired. - - Permissions includes... - > - * - - - * Ability to call /detectLivenessWithVerify/singleModal for up to 3 retries. - * A token lifetime of 10 minutes. - - .. 
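# Usage sketch (illustrative only): this private helper backs the hand-written public
# wrapper `create_liveness_with_verify_session` in the package's _patch.py; the wrapper's
# b1 signature with a `verify_image` keyword is an assumption here. Passing
# `verify_image=None` corresponds to the alternative option above, where the client
# device submits VerifyImage during the /detectLivenessWithVerify/singleModal call.
from azure.ai.vision.face.models import CreateLivenessSessionContent, LivenessOperationMode


async def create_verify_session_example(session_client) -> None:
    created = await session_client.create_liveness_with_verify_session(
        CreateLivenessSessionContent(liveness_operation_mode=LivenessOperationMode.PASSIVE_ACTIVE),
        verify_image=None,
    )
    print(created.session_id, created.auth_token)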
- - [!NOTE] - - * - - - * Client access can be revoked by deleting the session using the Delete Liveness With Verify - Session operation. - * To retrieve a result, use the Get Liveness With Verify Session. - * To audit the individual requests that a client has made to your resource, use the List - Liveness With Verify Session Audit Entries. - - - Recommended Option: VerifyImage is provided during session creation. - - :param body: Is either a CreateLivenessWithVerifySessionContent type or a JSON type. Required. - :type body: ~azure.ai.vision.face.models._models.CreateLivenessWithVerifySessionContent or JSON - :return: CreateLivenessWithVerifySessionResult. The CreateLivenessWithVerifySessionResult is - compatible with MutableMapping - :rtype: ~azure.ai.vision.face.models.CreateLivenessWithVerifySessionResult - :raises ~azure.core.exceptions.HttpResponseError: - - Example: - .. code-block:: python - - # JSON input template you can fill out and use as your body input. - body = { - "Parameters": { - "livenessOperationMode": "str", # Type of liveness mode the client - should follow. Required. Known values are: "Passive" and "PassiveActive". - "authTokenTimeToLiveInSeconds": 0, # Optional. Seconds the session - should last for. Range is 60 to 86400 seconds. Default value is 600. - "deviceCorrelationId": "str", # Optional. Unique Guid per each - end-user device. This is to provide rate limiting and anti-hammering. If - 'deviceCorrelationIdSetInClient' is true in this request, this - 'deviceCorrelationId' must be null. - "deviceCorrelationIdSetInClient": bool, # Optional. Whether or not - to allow client to set their own 'deviceCorrelationId' via the Vision SDK. - Default is false, and 'deviceCorrelationId' must be set in this request body. - "sendResultsToClient": bool # Optional. Whether or not to allow a - '200 - Success' response body to be sent to the client, which may be - undesirable for security reasons. Default is false, clients will receive a - '204 - NoContent' empty body response. Regardless of selection, calling - Session GetResult will always contain a response body enabling business logic - to be implemented. - }, - "VerifyImage": filetype - } - - # response body for status code(s): 200 - response == { - "authToken": "str", # Bearer token to provide authentication for the Vision - SDK running on a client application. This Bearer token has limited permissions to - perform only the required action and expires after the TTL time. It is also - auditable. Required. - "sessionId": "str", # The unique session ID of the created session. It will - expire 48 hours after it was created or may be deleted sooner using the - corresponding Session DELETE operation. Required. - "verifyImage": { - "faceRectangle": { - "height": 0, # The height of the rectangle, in pixels. - Required. - "left": 0, # The distance from the left edge if the image to - the left edge of the rectangle, in pixels. Required. - "top": 0, # The distance from the top edge if the image to - the top edge of the rectangle, in pixels. Required. - "width": 0 # The width of the rectangle, in pixels. - Required. - }, - "qualityForRecognition": "str" # Quality of face image for - recognition. Required. Known values are: "low", "medium", and "high". 
- } - } - """ - error_map: MutableMapping[int, Type[HttpResponseError]] = { - 401: ClientAuthenticationError, - 404: ResourceNotFoundError, - 409: ResourceExistsError, - 304: ResourceNotModifiedError, - } - error_map.update(kwargs.pop("error_map", {}) or {}) - - _headers = kwargs.pop("headers", {}) or {} - _params = kwargs.pop("params", {}) or {} - - cls: ClsType[_models.CreateLivenessWithVerifySessionResult] = kwargs.pop("cls", None) - - _body = body.as_dict() if isinstance(body, _model_base.Model) else body - _file_fields: List[str] = ["VerifyImage"] - _data_fields: List[str] = ["Parameters"] - _files, _data = prepare_multipart_form_data(_body, _file_fields, _data_fields) - - _request = build_face_session_create_liveness_with_verify_session_with_verify_image_request( - files=_files, - data=_data, - headers=_headers, - params=_params, - ) - path_format_arguments = { - "endpoint": self._serialize.url("self._config.endpoint", self._config.endpoint, "str", skip_quote=True), - "apiVersion": self._serialize.url("self._config.api_version", self._config.api_version, "str"), - } - _request.url = self._client.format_url(_request.url, **path_format_arguments) - - _stream = kwargs.pop("stream", False) - pipeline_response: PipelineResponse = await self._client._pipeline.run( # type: ignore # pylint: disable=protected-access - _request, stream=_stream, **kwargs - ) - - response = pipeline_response.http_response - - if response.status_code not in [200]: - if _stream: - await response.read() # Load the body in memory and close the socket - map_error(status_code=response.status_code, response=response, error_map=error_map) - error = _deserialize(_models.FaceErrorResponse, response.json()) - raise HttpResponseError(response=response, model=error) - - if _stream: - deserialized = response.iter_bytes() - else: - deserialized = _deserialize(_models.CreateLivenessWithVerifySessionResult, response.json()) - - if cls: - return cls(pipeline_response, deserialized, {}) # type: ignore - - return deserialized # type: ignore - - @distributed_trace_async - async def delete_liveness_with_verify_session( # pylint: disable=inconsistent-return-statements - self, session_id: str, **kwargs: Any - ) -> None: - """Delete all session related information for matching the specified session id. - - .. - - [!NOTE] - Deleting a session deactivates the Session Auth Token by blocking future API calls made with - that Auth Token. While this can be used to remove any access for that token, those requests - will still count towards overall resource rate limits. It's best to leverage TokenTTL to limit - length of tokens in the case that it is misused. - - :param session_id: The unique ID to reference this session. Required. 
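# Usage sketch for the multipart variant above (illustrative only) -- the recommended
# option, where VerifyImage is provided at session creation. The same assumed public
# wrapper is used, with the image bytes passed as `verify_image`.
async def create_verify_session_with_image_example(session_client) -> None:
    with open("reference_face.jpg", "rb") as fd:  # placeholder image path
        image_bytes = fd.read()
    created = await session_client.create_liveness_with_verify_session(
        CreateLivenessSessionContent(liveness_operation_mode=LivenessOperationMode.PASSIVE),
        verify_image=image_bytes,
    )
    # The response echoes the detected face box and quality of the reference image.
    print(created.verify_image.face_rectangle, created.verify_image.quality_for_recognition)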
- :type session_id: str - :return: None - :rtype: None - :raises ~azure.core.exceptions.HttpResponseError: - """ - error_map: MutableMapping[int, Type[HttpResponseError]] = { - 401: ClientAuthenticationError, - 404: ResourceNotFoundError, - 409: ResourceExistsError, - 304: ResourceNotModifiedError, - } - error_map.update(kwargs.pop("error_map", {}) or {}) - - _headers = kwargs.pop("headers", {}) or {} - _params = kwargs.pop("params", {}) or {} - - cls: ClsType[None] = kwargs.pop("cls", None) - - _request = build_face_session_delete_liveness_with_verify_session_request( - session_id=session_id, - headers=_headers, - params=_params, - ) - path_format_arguments = { - "endpoint": self._serialize.url("self._config.endpoint", self._config.endpoint, "str", skip_quote=True), - "apiVersion": self._serialize.url("self._config.api_version", self._config.api_version, "str"), - } - _request.url = self._client.format_url(_request.url, **path_format_arguments) - - _stream = False - pipeline_response: PipelineResponse = await self._client._pipeline.run( # type: ignore # pylint: disable=protected-access - _request, stream=_stream, **kwargs - ) - - response = pipeline_response.http_response - - if response.status_code not in [200]: - if _stream: - await response.read() # Load the body in memory and close the socket - map_error(status_code=response.status_code, response=response, error_map=error_map) - error = _deserialize(_models.FaceErrorResponse, response.json()) - raise HttpResponseError(response=response, model=error) - - if cls: - return cls(pipeline_response, None, {}) # type: ignore - - @distributed_trace_async - async def get_liveness_with_verify_session_result( - self, session_id: str, **kwargs: Any - ) -> _models.LivenessWithVerifySession: - # pylint: disable=line-too-long - """Get session result of detectLivenessWithVerify/singleModal call. - - :param session_id: The unique ID to reference this session. Required. - :type session_id: str - :return: LivenessWithVerifySession. The LivenessWithVerifySession is compatible with - MutableMapping - :rtype: ~azure.ai.vision.face.models.LivenessWithVerifySession - :raises ~azure.core.exceptions.HttpResponseError: - - Example: - .. code-block:: python - - # response body for status code(s): 200 - response == { - "createdDateTime": "2020-02-20 00:00:00", # DateTime when this session was - created. Required. - "id": "str", # The unique ID to reference this session. Required. - "sessionExpired": bool, # Whether or not the session is expired. Required. - "status": "str", # The current status of the session. Required. Known values - are: "NotStarted", "Started", and "ResultAvailable". - "authTokenTimeToLiveInSeconds": 0, # Optional. Seconds the session should - last for. Range is 60 to 86400 seconds. Default value is 600. - "deviceCorrelationId": "str", # Optional. Unique Guid per each end-user - device. This is to provide rate limiting and anti-hammering. If - 'deviceCorrelationIdSetInClient' is true in this request, this - 'deviceCorrelationId' must be null. - "result": { - "clientRequestId": "str", # The unique clientRequestId that is sent - by the client in the 'client-request-id' header. Required. - "digest": "str", # The server calculated digest for this request. If - the client reported digest differs from the server calculated digest, then - the message integrity between the client and service has been compromised and - the result should not be trusted. 
For more information, see how to guides on - how to leverage this value to secure your end-to-end solution. Required. - "id": 0, # The unique id to refer to this audit request. Use this id - with the 'start' query parameter to continue on to the next page of audit - results. Required. - "receivedDateTime": "2020-02-20 00:00:00", # The UTC DateTime that - the request was received. Required. - "request": { - "contentType": "str", # The content type of the request. - Required. - "method": "str", # The HTTP method of the request (i.e., - GET, POST, DELETE). Required. - "url": "str", # The relative URL and query of the liveness - request. Required. - "contentLength": 0, # Optional. The length of the request - body in bytes. - "userAgent": "str" # Optional. The user agent used to submit - the request. - }, - "requestId": "str", # The unique requestId that is returned by the - service to the client in the 'apim-request-id' header. Required. - "response": { - "body": { - "livenessDecision": "str", # Optional. The liveness - classification for the target face. Known values are: "uncertain", - "realface", and "spoofface". - "modelVersionUsed": "str", # Optional. The model - version used for liveness classification. Known values are: - "2020-02-15-preview.01", "2021-11-12-preview.03", - "2022-10-15-preview.04", and "2023-03-02-preview.05". - "target": { - "faceRectangle": { - "height": 0, # The height of the - rectangle, in pixels. Required. - "left": 0, # The distance from the - left edge if the image to the left edge of the rectangle, in - pixels. Required. - "top": 0, # The distance from the - top edge if the image to the top edge of the rectangle, in - pixels. Required. - "width": 0 # The width of the - rectangle, in pixels. Required. - }, - "fileName": "str", # The file name which - contains the face rectangle where the liveness classification was - made on. Required. - "imageType": "str", # The image type which - contains the face rectangle where the liveness classification was - made on. Required. Known values are: "Color", "Infrared", and - "Depth". - "timeOffsetWithinFile": 0 # The time offset - within the file of the frame which contains the face rectangle - where the liveness classification was made on. Required. - }, - "verifyResult": { - "isIdentical": bool, # Whether the target - liveness face and comparison image face match. Required. - "matchConfidence": 0.0, # The target face - liveness face and comparison image face verification confidence. - Required. - "verifyImage": { - "faceRectangle": { - "height": 0, # The height of - the rectangle, in pixels. Required. - "left": 0, # The distance - from the left edge if the image to the left edge of the - rectangle, in pixels. Required. - "top": 0, # The distance - from the top edge if the image to the top edge of the - rectangle, in pixels. Required. - "width": 0 # The width of - the rectangle, in pixels. Required. - }, - "qualityForRecognition": "str" # - Quality of face image for recognition. Required. Known values - are: "low", "medium", and "high". - } - } - }, - "latencyInMilliseconds": 0, # The server measured latency - for this request in milliseconds. Required. - "statusCode": 0 # The HTTP status code returned to the - client. Required. - }, - "sessionId": "str" # The unique sessionId of the created session. It - will expire 48 hours after it was created or may be deleted sooner using the - corresponding session DELETE operation. Required. - }, - "sessionStartDateTime": "2020-02-20 00:00:00" # Optional. 
DateTime when this - session was started by the client. - } - """ - error_map: MutableMapping[int, Type[HttpResponseError]] = { - 401: ClientAuthenticationError, - 404: ResourceNotFoundError, - 409: ResourceExistsError, - 304: ResourceNotModifiedError, - } - error_map.update(kwargs.pop("error_map", {}) or {}) - - _headers = kwargs.pop("headers", {}) or {} - _params = kwargs.pop("params", {}) or {} - - cls: ClsType[_models.LivenessWithVerifySession] = kwargs.pop("cls", None) - - _request = build_face_session_get_liveness_with_verify_session_result_request( - session_id=session_id, - headers=_headers, - params=_params, - ) - path_format_arguments = { - "endpoint": self._serialize.url("self._config.endpoint", self._config.endpoint, "str", skip_quote=True), - "apiVersion": self._serialize.url("self._config.api_version", self._config.api_version, "str"), - } - _request.url = self._client.format_url(_request.url, **path_format_arguments) - - _stream = kwargs.pop("stream", False) - pipeline_response: PipelineResponse = await self._client._pipeline.run( # type: ignore # pylint: disable=protected-access - _request, stream=_stream, **kwargs - ) - - response = pipeline_response.http_response - - if response.status_code not in [200]: - if _stream: - await response.read() # Load the body in memory and close the socket - map_error(status_code=response.status_code, response=response, error_map=error_map) - error = _deserialize(_models.FaceErrorResponse, response.json()) - raise HttpResponseError(response=response, model=error) - - if _stream: - deserialized = response.iter_bytes() - else: - deserialized = _deserialize(_models.LivenessWithVerifySession, response.json()) - - if cls: - return cls(pipeline_response, deserialized, {}) # type: ignore - - return deserialized # type: ignore - - @distributed_trace_async - async def get_liveness_with_verify_sessions( - self, *, start: Optional[str] = None, top: Optional[int] = None, **kwargs: Any - ) -> List[_models.LivenessSessionItem]: - # pylint: disable=line-too-long - """Lists sessions for /detectLivenessWithVerify/SingleModal. - - List sessions from the last sessionId greater than the "start". - - The result should be ordered by sessionId in ascending order. - - :keyword start: List resources greater than the "start". It contains no more than 64 - characters. Default is empty. Default value is None. - :paramtype start: str - :keyword top: The number of items to list, ranging in [1, 1000]. Default is 1000. Default value - is None. - :paramtype top: int - :return: list of LivenessSessionItem - :rtype: list[~azure.ai.vision.face.models.LivenessSessionItem] - :raises ~azure.core.exceptions.HttpResponseError: - - Example: - .. code-block:: python - - # response body for status code(s): 200 - response == [ - { - "createdDateTime": "2020-02-20 00:00:00", # DateTime when this - session was created. Required. - "id": "str", # The unique ID to reference this session. Required. - "sessionExpired": bool, # Whether or not the session is expired. - Required. - "authTokenTimeToLiveInSeconds": 0, # Optional. Seconds the session - should last for. Range is 60 to 86400 seconds. Default value is 600. - "deviceCorrelationId": "str", # Optional. Unique Guid per each - end-user device. This is to provide rate limiting and anti-hammering. If - 'deviceCorrelationIdSetInClient' is true in this request, this - 'deviceCorrelationId' must be null. - "sessionStartDateTime": "2020-02-20 00:00:00" # Optional. DateTime - when this session was started by the client. 
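A hedged usage sketch combining the two read operations above: list recent sessions, then fetch the full result for each. Attribute names are assumed from the snake_case convention of the generated models and the JSON shapes documented above; endpoint and key are placeholders:

```python
import asyncio

from azure.core.credentials import AzureKeyCredential
from azure.ai.vision.face.aio import FaceSessionClient


async def check_results() -> None:
    async with FaceSessionClient(
        endpoint="<endpoint>", credential=AzureKeyCredential("<key>")
    ) as client:
        # Sessions are returned in ascending sessionId order; 'top' caps page size.
        sessions = await client.get_liveness_with_verify_sessions(top=20)
        for item in sessions:
            session = await client.get_liveness_with_verify_session_result(item.id)
            if session.status == "ResultAvailable" and session.result is not None:
                body = session.result.response.body
                print(item.id, body.liveness_decision)


asyncio.run(check_results())
```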
- } - ] - """ - error_map: MutableMapping[int, Type[HttpResponseError]] = { - 401: ClientAuthenticationError, - 404: ResourceNotFoundError, - 409: ResourceExistsError, - 304: ResourceNotModifiedError, - } - error_map.update(kwargs.pop("error_map", {}) or {}) - - _headers = kwargs.pop("headers", {}) or {} - _params = kwargs.pop("params", {}) or {} - - cls: ClsType[List[_models.LivenessSessionItem]] = kwargs.pop("cls", None) - - _request = build_face_session_get_liveness_with_verify_sessions_request( - start=start, - top=top, - headers=_headers, - params=_params, - ) - path_format_arguments = { - "endpoint": self._serialize.url("self._config.endpoint", self._config.endpoint, "str", skip_quote=True), - "apiVersion": self._serialize.url("self._config.api_version", self._config.api_version, "str"), - } - _request.url = self._client.format_url(_request.url, **path_format_arguments) - - _stream = kwargs.pop("stream", False) - pipeline_response: PipelineResponse = await self._client._pipeline.run( # type: ignore # pylint: disable=protected-access - _request, stream=_stream, **kwargs - ) - - response = pipeline_response.http_response - - if response.status_code not in [200]: - if _stream: - await response.read() # Load the body in memory and close the socket - map_error(status_code=response.status_code, response=response, error_map=error_map) - error = _deserialize(_models.FaceErrorResponse, response.json()) - raise HttpResponseError(response=response, model=error) - - if _stream: - deserialized = response.iter_bytes() - else: - deserialized = _deserialize(List[_models.LivenessSessionItem], response.json()) - - if cls: - return cls(pipeline_response, deserialized, {}) # type: ignore - - return deserialized # type: ignore - - @distributed_trace_async - async def get_liveness_with_verify_session_audit_entries( # pylint: disable=name-too-long - self, session_id: str, *, start: Optional[str] = None, top: Optional[int] = None, **kwargs: Any - ) -> List[_models.LivenessSessionAuditEntry]: - # pylint: disable=line-too-long - """Gets session requests and response body for the session. - - :param session_id: The unique ID to reference this session. Required. - :type session_id: str - :keyword start: List resources greater than the "start". It contains no more than 64 - characters. Default is empty. Default value is None. - :paramtype start: str - :keyword top: The number of items to list, ranging in [1, 1000]. Default is 1000. Default value - is None. - :paramtype top: int - :return: list of LivenessSessionAuditEntry - :rtype: list[~azure.ai.vision.face.models.LivenessSessionAuditEntry] - :raises ~azure.core.exceptions.HttpResponseError: - - Example: - .. code-block:: python - - # response body for status code(s): 200 - response == [ - { - "clientRequestId": "str", # The unique clientRequestId that is sent - by the client in the 'client-request-id' header. Required. - "digest": "str", # The server calculated digest for this request. If - the client reported digest differs from the server calculated digest, then - the message integrity between the client and service has been compromised and - the result should not be trusted. For more information, see how to guides on - how to leverage this value to secure your end-to-end solution. Required. - "id": 0, # The unique id to refer to this audit request. Use this id - with the 'start' query parameter to continue on to the next page of audit - results. Required. - "receivedDateTime": "2020-02-20 00:00:00", # The UTC DateTime that - the request was received. 
Required. - "request": { - "contentType": "str", # The content type of the request. - Required. - "method": "str", # The HTTP method of the request (i.e., - GET, POST, DELETE). Required. - "url": "str", # The relative URL and query of the liveness - request. Required. - "contentLength": 0, # Optional. The length of the request - body in bytes. - "userAgent": "str" # Optional. The user agent used to submit - the request. - }, - "requestId": "str", # The unique requestId that is returned by the - service to the client in the 'apim-request-id' header. Required. - "response": { - "body": { - "livenessDecision": "str", # Optional. The liveness - classification for the target face. Known values are: "uncertain", - "realface", and "spoofface". - "modelVersionUsed": "str", # Optional. The model - version used for liveness classification. Known values are: - "2020-02-15-preview.01", "2021-11-12-preview.03", - "2022-10-15-preview.04", and "2023-03-02-preview.05". - "target": { - "faceRectangle": { - "height": 0, # The height of the - rectangle, in pixels. Required. - "left": 0, # The distance from the - left edge if the image to the left edge of the rectangle, in - pixels. Required. - "top": 0, # The distance from the - top edge if the image to the top edge of the rectangle, in - pixels. Required. - "width": 0 # The width of the - rectangle, in pixels. Required. - }, - "fileName": "str", # The file name which - contains the face rectangle where the liveness classification was - made on. Required. - "imageType": "str", # The image type which - contains the face rectangle where the liveness classification was - made on. Required. Known values are: "Color", "Infrared", and - "Depth". - "timeOffsetWithinFile": 0 # The time offset - within the file of the frame which contains the face rectangle - where the liveness classification was made on. Required. - }, - "verifyResult": { - "isIdentical": bool, # Whether the target - liveness face and comparison image face match. Required. - "matchConfidence": 0.0, # The target face - liveness face and comparison image face verification confidence. - Required. - "verifyImage": { - "faceRectangle": { - "height": 0, # The height of - the rectangle, in pixels. Required. - "left": 0, # The distance - from the left edge if the image to the left edge of the - rectangle, in pixels. Required. - "top": 0, # The distance - from the top edge if the image to the top edge of the - rectangle, in pixels. Required. - "width": 0 # The width of - the rectangle, in pixels. Required. - }, - "qualityForRecognition": "str" # - Quality of face image for recognition. Required. Known values - are: "low", "medium", and "high". - } - } - }, - "latencyInMilliseconds": 0, # The server measured latency - for this request in milliseconds. Required. - "statusCode": 0 # The HTTP status code returned to the - client. Required. - }, - "sessionId": "str" # The unique sessionId of the created session. It - will expire 48 hours after it was created or may be deleted sooner using the - corresponding session DELETE operation. Required. 
- } - ] - """ - error_map: MutableMapping[int, Type[HttpResponseError]] = { - 401: ClientAuthenticationError, - 404: ResourceNotFoundError, - 409: ResourceExistsError, - 304: ResourceNotModifiedError, - } - error_map.update(kwargs.pop("error_map", {}) or {}) - - _headers = kwargs.pop("headers", {}) or {} - _params = kwargs.pop("params", {}) or {} - - cls: ClsType[List[_models.LivenessSessionAuditEntry]] = kwargs.pop("cls", None) - - _request = build_face_session_get_liveness_with_verify_session_audit_entries_request( - session_id=session_id, - start=start, - top=top, - headers=_headers, - params=_params, - ) - path_format_arguments = { - "endpoint": self._serialize.url("self._config.endpoint", self._config.endpoint, "str", skip_quote=True), - "apiVersion": self._serialize.url("self._config.api_version", self._config.api_version, "str"), - } - _request.url = self._client.format_url(_request.url, **path_format_arguments) - - _stream = kwargs.pop("stream", False) - pipeline_response: PipelineResponse = await self._client._pipeline.run( # type: ignore # pylint: disable=protected-access - _request, stream=_stream, **kwargs - ) - - response = pipeline_response.http_response - - if response.status_code not in [200]: - if _stream: - await response.read() # Load the body in memory and close the socket - map_error(status_code=response.status_code, response=response, error_map=error_map) - error = _deserialize(_models.FaceErrorResponse, response.json()) - raise HttpResponseError(response=response, model=error) - - if _stream: - deserialized = response.iter_bytes() - else: - deserialized = _deserialize(List[_models.LivenessSessionAuditEntry], response.json()) - - if cls: - return cls(pipeline_response, deserialized, {}) # type: ignore - - return deserialized # type: ignore diff --git a/sdk/face/azure-ai-vision-face/azure/ai/vision/face/aio/_patch.py b/sdk/face/azure-ai-vision-face/azure/ai/vision/face/aio/_patch.py index 61fb93215f3e..430ddc352d01 100644 --- a/sdk/face/azure-ai-vision-face/azure/ai/vision/face/aio/_patch.py +++ b/sdk/face/azure-ai-vision-face/azure/ai/vision/face/aio/_patch.py @@ -14,7 +14,7 @@ from .. import models as _models from ._client import FaceClient as FaceClientGenerated from ._client import FaceSessionClient as FaceSessionClientGenerated -from ._operations._operations import JSON, _Unset +from .operations._operations import JSON, _Unset class FaceClient(FaceClientGenerated): @@ -27,7 +27,7 @@ class FaceClient(FaceClientGenerated): AzureKeyCredential type or a TokenCredential type. Required. :type credential: ~azure.core.credentials.AzureKeyCredential or ~azure.core.credentials_async.AsyncTokenCredential - :keyword api_version: API Version. Default value is "v1.1-preview.1". Note that overriding this + :keyword api_version: API Version. Default value is "v1.2-preview.1". Note that overriding this default value may result in unsupported behavior. :paramtype api_version: str or ~azure.ai.vision.face.models.Versions """ @@ -95,64 +95,22 @@ async def detect_from_url( face_id_time_to_live: Optional[int] = None, **kwargs: Any, ) -> List[_models.FaceDetectionResult]: - # pylint: disable=line-too-long """Detect human faces in an image, return face rectangles, and optionally with faceIds, landmarks, and attributes. - .. - - [!IMPORTANT] - To mitigate potential misuse that can subject people to stereotyping, discrimination, or - unfair denial of services, we are retiring Face API attributes that predict emotion, gender, - age, smile, facial hair, hair, and makeup. 
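Before moving on to the patched detect methods below, a short sketch of paging through the audit-entries operation implemented above. The `start`/`top` continuation pattern follows the docstring ("use this id with the 'start' query parameter to continue on to the next page"); ids and credentials are placeholders:

```python
import asyncio
from typing import Optional

from azure.core.credentials import AzureKeyCredential
from azure.ai.vision.face.aio import FaceSessionClient


async def dump_audit(session_id: str) -> None:
    async with FaceSessionClient(
        endpoint="<endpoint>", credential=AzureKeyCredential("<key>")
    ) as client:
        start: Optional[str] = None
        while True:
            # Continue each page from the last audit id of the previous page.
            entries = await client.get_liveness_with_verify_session_audit_entries(
                session_id, start=start, top=100
            )
            if not entries:
                break
            for entry in entries:
                print(entry.id, entry.request.url, entry.response.status_code)
            start = str(entries[-1].id)


asyncio.run(dump_audit("<session-id>"))
```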
Read more about this decision - https://azure.microsoft.com/blog/responsible-ai-investments-and-safeguards-for-facial-recognition/. - - - * - - - * No image will be stored. Only the extracted face feature(s) will be stored on server. The - faceId is an identifier of the face feature and will be used in Face - Identify, Face - Verify, - and Face - Find Similar. The stored face features will expire and be deleted at the time - specified by faceIdTimeToLive after the original detection call. - * Optional parameters include faceId, landmarks, and attributes. Attributes include headPose, - glasses, occlusion, accessories, blur, exposure, noise, mask, and qualityForRecognition. Some - of the results returned for specific attributes may not be highly accurate. - * JPEG, PNG, GIF (the first frame), and BMP format are supported. The allowed image file size - is from 1KB to 6MB. - * The minimum detectable face size is 36x36 pixels in an image no larger than 1920x1080 pixels. - Images with dimensions higher than 1920x1080 pixels will need a proportionally larger minimum - face size. - * Up to 100 faces can be returned for an image. Faces are ranked by face rectangle size from - large to small. - * For optimal results when querying Face - Identify, Face - Verify, and Face - Find Similar - ('returnFaceId' is true), please use faces that are: frontal, clear, and with a minimum size of - 200x200 pixels (100 pixels between eyes). - * Different 'detectionModel' values can be provided. To use and compare different detection - models, please refer to - https://learn.microsoft.com/azure/ai-services/computer-vision/how-to/specify-detection-model - - * 'detection_02': Face attributes and landmarks are disabled if you choose this detection - model. - * 'detection_03': Face attributes (mask and headPose only) and landmarks are supported if you - choose this detection model. - - * Different 'recognitionModel' values are provided. If follow-up operations like Verify, - Identify, Find Similar are needed, please specify the recognition model with 'recognitionModel' - parameter. The default value for 'recognitionModel' is 'recognition_01', if latest model - needed, please explicitly specify the model you need in this parameter. Once specified, the - detected faceIds will be associated with the specified recognition model. More details, please - refer to - https://learn.microsoft.com/azure/ai-services/computer-vision/how-to/specify-recognition-model. + Please refer to + https://learn.microsoft.com/rest/api/face/face-detection-operations/detect-from-url for more + details. :param body: Is either a JSON type or a IO[bytes] type. Required. :type body: JSON or IO[bytes] - :keyword url: URL of input image. Required when body is not set. + :keyword url: URL of input image. Required. :paramtype url: str :keyword detection_model: The 'detectionModel' associated with the detected faceIds. Supported 'detectionModel' values include 'detection_01', 'detection_02' and 'detection_03'. The default - value is 'detection_01'. Known values are: "detection_01", "detection_02", and "detection_03". - Required. + value is 'detection_01'. 'detection_03' is recommended since its accuracy is improved on + smaller faces (64x64 pixels) and rotated face orientations. Known values are: "detection_01", + "detection_02", and "detection_03". Default value is None. :paramtype detection_model: str or ~azure.ai.vision.face.models.FaceDetectionModel :keyword recognition_model: The 'recognitionModel' associated with the detected faceIds. 
Supported 'recognitionModel' values include 'recognition_01', 'recognition_02', @@ -160,9 +118,10 @@ async def detect_from_url( is recommended since its accuracy is improved on faces wearing masks compared with 'recognition_03', and its overall accuracy is improved compared with 'recognition_01' and 'recognition_02'. Known values are: "recognition_01", "recognition_02", "recognition_03", and - "recognition_04". Required. + "recognition_04". Default value is None. :paramtype recognition_model: str or ~azure.ai.vision.face.models.FaceRecognitionModel - :keyword return_face_id: Return faceIds of the detected faces or not. Required. + :keyword return_face_id: Return faceIds of the detected faces or not. The default value is + true. Default value is None. :paramtype return_face_id: bool :keyword return_face_attributes: Analyze and return the one or more specified face attributes in the comma-separated string like 'returnFaceAttributes=headPose,glasses'. Face attribute @@ -172,7 +131,7 @@ async def detect_from_url( value is false. Default value is None. :paramtype return_face_landmarks: bool :keyword return_recognition_model: Return 'recognitionModel' or not. The default value is - false. Default value is None. + false. This is only applicable when returnFaceId = true. Default value is None. :paramtype return_recognition_model: bool :keyword face_id_time_to_live: The number of seconds for the face ID being cached. Supported range from 60 seconds up to 86400 seconds. The default value is 86400 (24 hours). Default value @@ -181,292 +140,6 @@ async def detect_from_url( :return: list of FaceDetectionResult :rtype: list[~azure.ai.vision.face.models.FaceDetectionResult] :raises ~azure.core.exceptions.HttpResponseError: - - Example: - .. code-block:: python - - # JSON input template you can fill out and use as your body input. - body = { - "url": "str" # URL of input image. Required. - } - - # response body for status code(s): 200 - response == [ - { - "faceRectangle": { - "height": 0, # The height of the rectangle, in pixels. - Required. - "left": 0, # The distance from the left edge if the image to - the left edge of the rectangle, in pixels. Required. - "top": 0, # The distance from the top edge if the image to - the top edge of the rectangle, in pixels. Required. - "width": 0 # The width of the rectangle, in pixels. - Required. - }, - "faceAttributes": { - "accessories": [ - { - "confidence": 0.0, # Confidence level of the - accessory type. Range between [0,1]. Required. - "type": "str" # Type of the accessory. - Required. Known values are: "headwear", "glasses", and "mask". - } - ], - "age": 0.0, # Optional. Age in years. - "blur": { - "blurLevel": "str", # An enum value indicating level - of blurriness. Required. Known values are: "low", "medium", and - "high". - "value": 0.0 # A number indicating level of - blurriness ranging from 0 to 1. Required. - }, - "exposure": { - "exposureLevel": "str", # An enum value indicating - level of exposure. Required. Known values are: "underExposure", - "goodExposure", and "overExposure". - "value": 0.0 # A number indicating level of exposure - level ranging from 0 to 1. [0, 0.25) is under exposure. [0.25, 0.75) - is good exposure. [0.75, 1] is over exposure. Required. - }, - "facialHair": { - "beard": 0.0, # A number ranging from 0 to 1 - indicating a level of confidence associated with a property. - Required. - "moustache": 0.0, # A number ranging from 0 to 1 - indicating a level of confidence associated with a property. - Required. 
- "sideburns": 0.0 # A number ranging from 0 to 1 - indicating a level of confidence associated with a property. - Required. - }, - "glasses": "str", # Optional. Glasses type if any of the - face. Known values are: "noGlasses", "readingGlasses", "sunglasses", and - "swimmingGoggles". - "hair": { - "bald": 0.0, # A number describing confidence level - of whether the person is bald. Required. - "hairColor": [ - { - "color": "str", # Name of the hair - color. Required. Known values are: "unknown", "white", - "gray", "blond", "brown", "red", "black", and "other". - "confidence": 0.0 # Confidence level - of the color. Range between [0,1]. Required. - } - ], - "invisible": bool # A boolean value describing - whether the hair is visible in the image. Required. - }, - "headPose": { - "pitch": 0.0, # Value of angles. Required. - "roll": 0.0, # Value of angles. Required. - "yaw": 0.0 # Value of angles. Required. - }, - "mask": { - "noseAndMouthCovered": bool, # A boolean value - indicating whether nose and mouth are covered. Required. - "type": "str" # Type of the mask. Required. Known - values are: "faceMask", "noMask", "otherMaskOrOcclusion", and - "uncertain". - }, - "noise": { - "noiseLevel": "str", # An enum value indicating - level of noise. Required. Known values are: "low", "medium", and - "high". - "value": 0.0 # A number indicating level of noise - level ranging from 0 to 1. [0, 0.25) is under exposure. [0.25, 0.75) - is good exposure. [0.75, 1] is over exposure. [0, 0.3) is low noise - level. [0.3, 0.7) is medium noise level. [0.7, 1] is high noise - level. Required. - }, - "occlusion": { - "eyeOccluded": bool, # A boolean value indicating - whether eyes are occluded. Required. - "foreheadOccluded": bool, # A boolean value - indicating whether forehead is occluded. Required. - "mouthOccluded": bool # A boolean value indicating - whether the mouth is occluded. Required. - }, - "qualityForRecognition": "str", # Optional. Properties - describing the overall image quality regarding whether the image being - used in the detection is of sufficient quality to attempt face - recognition on. Known values are: "low", "medium", and "high". - "smile": 0.0 # Optional. Smile intensity, a number between - [0,1]. - }, - "faceId": "str", # Optional. Unique faceId of the detected face, - created by detection API and it will expire 24 hours after the detection - call. To return this, it requires 'returnFaceId' parameter to be true. - "faceLandmarks": { - "eyeLeftBottom": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "eyeLeftInner": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "eyeLeftOuter": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "eyeLeftTop": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "eyeRightBottom": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "eyeRightInner": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "eyeRightOuter": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. 
- }, - "eyeRightTop": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "eyebrowLeftInner": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "eyebrowLeftOuter": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "eyebrowRightInner": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "eyebrowRightOuter": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "mouthLeft": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "mouthRight": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "noseLeftAlarOutTip": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "noseLeftAlarTop": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "noseRightAlarOutTip": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "noseRightAlarTop": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "noseRootLeft": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "noseRootRight": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "noseTip": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "pupilLeft": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "pupilRight": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "underLipBottom": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "underLipTop": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "upperLipBottom": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "upperLipTop": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - } - }, - "recognitionModel": "str" # Optional. The 'recognitionModel' - associated with this faceId. This is only returned when - 'returnRecognitionModel' is explicitly set as true. Known values are: - "recognition_01", "recognition_02", "recognition_03", and "recognition_04". 
- } - ] """ return await super()._detect_from_url( body, @@ -495,62 +168,19 @@ async def detect( face_id_time_to_live: Optional[int] = None, **kwargs: Any, ) -> List[_models.FaceDetectionResult]: - # pylint: disable=line-too-long """Detect human faces in an image, return face rectangles, and optionally with faceIds, landmarks, and attributes. - .. - - [!IMPORTANT] - To mitigate potential misuse that can subject people to stereotyping, discrimination, or - unfair denial of services, we are retiring Face API attributes that predict emotion, gender, - age, smile, facial hair, hair, and makeup. Read more about this decision - https://azure.microsoft.com/blog/responsible-ai-investments-and-safeguards-for-facial-recognition/. - - - * - - - * No image will be stored. Only the extracted face feature(s) will be stored on server. The - faceId is an identifier of the face feature and will be used in Face - Identify, Face - Verify, - and Face - Find Similar. The stored face features will expire and be deleted at the time - specified by faceIdTimeToLive after the original detection call. - * Optional parameters include faceId, landmarks, and attributes. Attributes include headPose, - glasses, occlusion, accessories, blur, exposure, noise, mask, and qualityForRecognition. Some - of the results returned for specific attributes may not be highly accurate. - * JPEG, PNG, GIF (the first frame), and BMP format are supported. The allowed image file size - is from 1KB to 6MB. - * The minimum detectable face size is 36x36 pixels in an image no larger than 1920x1080 pixels. - Images with dimensions higher than 1920x1080 pixels will need a proportionally larger minimum - face size. - * Up to 100 faces can be returned for an image. Faces are ranked by face rectangle size from - large to small. - * For optimal results when querying Face - Identify, Face - Verify, and Face - Find Similar - ('returnFaceId' is true), please use faces that are: frontal, clear, and with a minimum size of - 200x200 pixels (100 pixels between eyes). - * Different 'detectionModel' values can be provided. To use and compare different detection - models, please refer to - https://learn.microsoft.com/azure/ai-services/computer-vision/how-to/specify-detection-model - - * 'detection_02': Face attributes and landmarks are disabled if you choose this detection - model. - * 'detection_03': Face attributes (mask and headPose only) and landmarks are supported if you - choose this detection model. - - * Different 'recognitionModel' values are provided. If follow-up operations like Verify, - Identify, Find Similar are needed, please specify the recognition model with 'recognitionModel' - parameter. The default value for 'recognitionModel' is 'recognition_01', if latest model - needed, please explicitly specify the model you need in this parameter. Once specified, the - detected faceIds will be associated with the specified recognition model. More details, please - refer to - https://learn.microsoft.com/azure/ai-services/computer-vision/how-to/specify-recognition-model. + Please refer to https://learn.microsoft.com/rest/api/face/face-detection-operations/detect for + more details. :param image_content: The input image binary. Required. :type image_content: bytes :keyword detection_model: The 'detectionModel' associated with the detected faceIds. Supported 'detectionModel' values include 'detection_01', 'detection_02' and 'detection_03'. The default - value is 'detection_01'. Known values are: "detection_01", "detection_02", and "detection_03". 
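Since the trimmed docstring above now defers to the REST reference, a brief sketch of calling the `detect_from_url` wrapper may help. The enum members match the renamed values in this release (`DETECTION03`, `RECOGNITION04`); the image URL is a placeholder:

```python
import asyncio

from azure.core.credentials import AzureKeyCredential
from azure.ai.vision.face.aio import FaceClient
from azure.ai.vision.face.models import FaceDetectionModel, FaceRecognitionModel


async def detect_by_url() -> None:
    async with FaceClient(
        endpoint="<endpoint>", credential=AzureKeyCredential("<key>")
    ) as client:
        results = await client.detect_from_url(
            url="https://example.com/face.jpg",  # placeholder image URL
            detection_model=FaceDetectionModel.DETECTION03,
            recognition_model=FaceRecognitionModel.RECOGNITION04,
            return_face_id=True,
        )
        for face in results:
            print(face.face_id, face.face_rectangle)


asyncio.run(detect_by_url())
```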
- Required. + value is 'detection_01'. 'detection_03' is recommended since its accuracy is improved on + smaller faces (64x64 pixels) and rotated face orientations. Known values are: "detection_01", + "detection_02", and "detection_03". Default value is None. :paramtype detection_model: str or ~azure.ai.vision.face.models.FaceDetectionModel :keyword recognition_model: The 'recognitionModel' associated with the detected faceIds. Supported 'recognitionModel' values include 'recognition_01', 'recognition_02', @@ -558,9 +188,10 @@ async def detect( is recommended since its accuracy is improved on faces wearing masks compared with 'recognition_03', and its overall accuracy is improved compared with 'recognition_01' and 'recognition_02'. Known values are: "recognition_01", "recognition_02", "recognition_03", and - "recognition_04". Required. + "recognition_04". Default value is None. :paramtype recognition_model: str or ~azure.ai.vision.face.models.FaceRecognitionModel - :keyword return_face_id: Return faceIds of the detected faces or not. Required. + :keyword return_face_id: Return faceIds of the detected faces or not. The default value is + true. Default value is None. :paramtype return_face_id: bool :keyword return_face_attributes: Analyze and return the one or more specified face attributes in the comma-separated string like 'returnFaceAttributes=headPose,glasses'. Face attribute @@ -570,7 +201,7 @@ async def detect( value is false. Default value is None. :paramtype return_face_landmarks: bool :keyword return_recognition_model: Return 'recognitionModel' or not. The default value is - false. Default value is None. + false. This is only applicable when returnFaceId = true. Default value is None. :paramtype return_recognition_model: bool :keyword face_id_time_to_live: The number of seconds for the face ID being cached. Supported range from 60 seconds up to 86400 seconds. The default value is 86400 (24 hours). Default value @@ -579,287 +210,6 @@ async def detect( :return: list of FaceDetectionResult :rtype: list[~azure.ai.vision.face.models.FaceDetectionResult] :raises ~azure.core.exceptions.HttpResponseError: - - Example: - .. code-block:: python - - # response body for status code(s): 200 - response == [ - { - "faceRectangle": { - "height": 0, # The height of the rectangle, in pixels. - Required. - "left": 0, # The distance from the left edge if the image to - the left edge of the rectangle, in pixels. Required. - "top": 0, # The distance from the top edge if the image to - the top edge of the rectangle, in pixels. Required. - "width": 0 # The width of the rectangle, in pixels. - Required. - }, - "faceAttributes": { - "accessories": [ - { - "confidence": 0.0, # Confidence level of the - accessory type. Range between [0,1]. Required. - "type": "str" # Type of the accessory. - Required. Known values are: "headwear", "glasses", and "mask". - } - ], - "age": 0.0, # Optional. Age in years. - "blur": { - "blurLevel": "str", # An enum value indicating level - of blurriness. Required. Known values are: "low", "medium", and - "high". - "value": 0.0 # A number indicating level of - blurriness ranging from 0 to 1. Required. - }, - "exposure": { - "exposureLevel": "str", # An enum value indicating - level of exposure. Required. Known values are: "underExposure", - "goodExposure", and "overExposure". - "value": 0.0 # A number indicating level of exposure - level ranging from 0 to 1. [0, 0.25) is under exposure. [0.25, 0.75) - is good exposure. [0.75, 1] is over exposure. Required. 
- }, - "facialHair": { - "beard": 0.0, # A number ranging from 0 to 1 - indicating a level of confidence associated with a property. - Required. - "moustache": 0.0, # A number ranging from 0 to 1 - indicating a level of confidence associated with a property. - Required. - "sideburns": 0.0 # A number ranging from 0 to 1 - indicating a level of confidence associated with a property. - Required. - }, - "glasses": "str", # Optional. Glasses type if any of the - face. Known values are: "noGlasses", "readingGlasses", "sunglasses", and - "swimmingGoggles". - "hair": { - "bald": 0.0, # A number describing confidence level - of whether the person is bald. Required. - "hairColor": [ - { - "color": "str", # Name of the hair - color. Required. Known values are: "unknown", "white", - "gray", "blond", "brown", "red", "black", and "other". - "confidence": 0.0 # Confidence level - of the color. Range between [0,1]. Required. - } - ], - "invisible": bool # A boolean value describing - whether the hair is visible in the image. Required. - }, - "headPose": { - "pitch": 0.0, # Value of angles. Required. - "roll": 0.0, # Value of angles. Required. - "yaw": 0.0 # Value of angles. Required. - }, - "mask": { - "noseAndMouthCovered": bool, # A boolean value - indicating whether nose and mouth are covered. Required. - "type": "str" # Type of the mask. Required. Known - values are: "faceMask", "noMask", "otherMaskOrOcclusion", and - "uncertain". - }, - "noise": { - "noiseLevel": "str", # An enum value indicating - level of noise. Required. Known values are: "low", "medium", and - "high". - "value": 0.0 # A number indicating level of noise - level ranging from 0 to 1. [0, 0.25) is under exposure. [0.25, 0.75) - is good exposure. [0.75, 1] is over exposure. [0, 0.3) is low noise - level. [0.3, 0.7) is medium noise level. [0.7, 1] is high noise - level. Required. - }, - "occlusion": { - "eyeOccluded": bool, # A boolean value indicating - whether eyes are occluded. Required. - "foreheadOccluded": bool, # A boolean value - indicating whether forehead is occluded. Required. - "mouthOccluded": bool # A boolean value indicating - whether the mouth is occluded. Required. - }, - "qualityForRecognition": "str", # Optional. Properties - describing the overall image quality regarding whether the image being - used in the detection is of sufficient quality to attempt face - recognition on. Known values are: "low", "medium", and "high". - "smile": 0.0 # Optional. Smile intensity, a number between - [0,1]. - }, - "faceId": "str", # Optional. Unique faceId of the detected face, - created by detection API and it will expire 24 hours after the detection - call. To return this, it requires 'returnFaceId' parameter to be true. - "faceLandmarks": { - "eyeLeftBottom": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "eyeLeftInner": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "eyeLeftOuter": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "eyeLeftTop": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "eyeRightBottom": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. 
- }, - "eyeRightInner": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "eyeRightOuter": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "eyeRightTop": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "eyebrowLeftInner": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "eyebrowLeftOuter": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "eyebrowRightInner": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "eyebrowRightOuter": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "mouthLeft": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "mouthRight": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "noseLeftAlarOutTip": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "noseLeftAlarTop": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "noseRightAlarOutTip": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "noseRightAlarTop": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "noseRootLeft": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "noseRootRight": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "noseTip": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "pupilLeft": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "pupilRight": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "underLipBottom": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "underLipTop": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "upperLipBottom": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - }, - "upperLipTop": { - "x": 0.0, # The horizontal component, in pixels. - Required. - "y": 0.0 # The vertical component, in pixels. - Required. - } - }, - "recognitionModel": "str" # Optional. The 'recognitionModel' - associated with this faceId. This is only returned when - 'returnRecognitionModel' is explicitly set as true. 
Known values are: - "recognition_01", "recognition_02", "recognition_03", and "recognition_04". - } - ] """ return await super()._detect( image_content, @@ -888,7 +238,7 @@ class FaceSessionClient(FaceSessionClientGenerated): AzureKeyCredential type or a TokenCredential type. Required. :type credential: ~azure.core.credentials.AzureKeyCredential or ~azure.core.credentials_async.AsyncTokenCredential - :keyword api_version: API Version. Default value is "v1.1-preview.1". Note that overriding this + :keyword api_version: API Version. Default value is "v1.2-preview.1". Note that overriding this default value may result in unsupported behavior. :paramtype api_version: str or ~azure.ai.vision.face.models.Versions """ @@ -913,113 +263,39 @@ async def create_liveness_with_verify_session( **kwargs: Any, ) -> _models.CreateLivenessWithVerifySessionResult: ... - @overload - async def create_liveness_with_verify_session( - self, - body: IO[bytes], - *, - verify_image: Union[bytes, None], - content_type: str = "application/json", - **kwargs: Any, - ) -> _models.CreateLivenessWithVerifySessionResult: ... - @distributed_trace_async async def create_liveness_with_verify_session( self, - body: Union[_models.CreateLivenessSessionContent, JSON, IO[bytes]], + body: Union[_models.CreateLivenessWithVerifySessionContent, JSON], *, verify_image: Union[bytes, None], **kwargs: Any, ) -> _models.CreateLivenessWithVerifySessionResult: - # pylint: disable=line-too-long """Create a new liveness session with verify. Client device submits VerifyImage during the /detectLivenessWithVerify/singleModal call. - A session is best for client device scenarios where developers want to authorize a client - device to perform only a liveness detection without granting full access to their resource. - Created sessions have a limited life span and only authorize clients to perform the desired - action before access is expired. - - Permissions includes... - > - * - - - * Ability to call /detectLivenessWithVerify/singleModal for up to 3 retries. - * A token lifetime of 10 minutes. - - .. - - [!NOTE] - - * - - - * Client access can be revoked by deleting the session using the Delete Liveness With Verify - Session operation. - * To retrieve a result, use the Get Liveness With Verify Session. - * To audit the individual requests that a client has made to your resource, use the List - Liveness With Verify Session Audit Entries. - - - Alternative Option: Client device submits VerifyImage during the - /detectLivenessWithVerify/singleModal call. - - .. - - [!NOTE] - Extra measures should be taken to validate that the client is sending the expected - VerifyImage. + Please refer to + https://learn.microsoft.com/rest/api/face/liveness-session-operations/create-liveness-with-verify-session + for more details. - :param body: Is one of the following types: CreateLivenessSessionContent, JSON, IO[bytes] - Required. - :type body: ~azure.ai.vision.face.models.CreateLivenessSessionContent or JSON or IO[bytes] - :keyword verify_image: The image for verify. If you don't have any images to use for verification, - set it to None. Required. - :paramtype verify_image: bytes or None + :param body: Body parameter. Is one of the following types: + CreateLivenessWithVerifySessionContent, JSON, IO[bytes] Required. + :type body: ~azure.ai.vision.face.models.CreateLivenessWithVerifySessionContent or JSON or + IO[bytes] :return: CreateLivenessWithVerifySessionResult. 
The CreateLivenessWithVerifySessionResult is compatible with MutableMapping :rtype: ~azure.ai.vision.face.models.CreateLivenessWithVerifySessionResult :raises ~azure.core.exceptions.HttpResponseError: - - Example: - .. code-block:: python - - # JSON input template you can fill out and use as your body input. - body = { - "livenessOperationMode": "str", # Type of liveness mode the client should - follow. Required. "Passive" - "authTokenTimeToLiveInSeconds": 0, # Optional. Seconds the session should - last for. Range is 60 to 86400 seconds. Default value is 600. - "deviceCorrelationId": "str", # Optional. Unique Guid per each end-user - device. This is to provide rate limiting and anti-hammering. If - 'deviceCorrelationIdSetInClient' is true in this request, this - 'deviceCorrelationId' must be null. - "deviceCorrelationIdSetInClient": bool, # Optional. Whether or not to allow - client to set their own 'deviceCorrelationId' via the Vision SDK. Default is - false, and 'deviceCorrelationId' must be set in this request body. - "sendResultsToClient": bool # Optional. Whether or not to allow a '200 - - Success' response body to be sent to the client, which may be undesirable for - security reasons. Default is false, clients will receive a '204 - NoContent' - empty body response. Regardless of selection, calling Session GetResult will - always contain a response body enabling business logic to be implemented. - } - - # response body for status code(s): 200 - response == { - "authToken": "str", # Bearer token to provide authentication for the Vision - SDK running on a client application. This Bearer token has limited permissions to - perform only the required action and expires after the TTL time. It is also - auditable. Required. - "sessionId": "str" # The unique session ID of the created session. It will - expire 48 hours after it was created or may be deleted sooner using the - corresponding Session DELETE operation. Required. 
- } """ if verify_image is not None: - request_body = _models._models.CreateLivenessWithVerifySessionContent( # pylint: disable=protected-access - parameters=body, - verify_image=("verify-image", verify_image), + if not isinstance(body, _models.CreateLivenessWithVerifySessionContent): + # Convert body to CreateLivenessWithVerifySessionContent if necessary + body = _models.CreateLivenessWithVerifySessionContent(**body) + request_body = ( + _models._models.CreateLivenessWithVerifySessionMultipartContent( # pylint: disable=protected-access + parameters=body, + verify_image=("verify-image", verify_image), + ) ) return await super()._create_liveness_with_verify_session_with_verify_image(request_body, **kwargs) diff --git a/sdk/face/azure-ai-vision-face/azure/ai/vision/face/aio/_vendor.py b/sdk/face/azure-ai-vision-face/azure/ai/vision/face/aio/_vendor.py index 8ae3db7799ae..6193ed0a757b 100644 --- a/sdk/face/azure-ai-vision-face/azure/ai/vision/face/aio/_vendor.py +++ b/sdk/face/azure-ai-vision-face/azure/ai/vision/face/aio/_vendor.py @@ -11,7 +11,6 @@ from ._configuration import FaceClientConfiguration, FaceSessionClientConfiguration if TYPE_CHECKING: - # pylint: disable=unused-import,ungrouped-imports from azure.core import AsyncPipelineClient from .._serialization import Deserializer, Serializer diff --git a/sdk/face/azure-ai-vision-face/azure/ai/vision/face/_operations/__init__.py b/sdk/face/azure-ai-vision-face/azure/ai/vision/face/aio/operations/__init__.py similarity index 84% rename from sdk/face/azure-ai-vision-face/azure/ai/vision/face/_operations/__init__.py rename to sdk/face/azure-ai-vision-face/azure/ai/vision/face/aio/operations/__init__.py index 366e660e06db..d69ac05180a1 100644 --- a/sdk/face/azure-ai-vision-face/azure/ai/vision/face/_operations/__init__.py +++ b/sdk/face/azure-ai-vision-face/azure/ai/vision/face/aio/operations/__init__.py @@ -6,6 +6,8 @@ # Changes may cause incorrect behavior and will be lost if the code is regenerated. # -------------------------------------------------------------------------- +from ._operations import LargeFaceListOperations +from ._operations import LargePersonGroupOperations from ._operations import FaceClientOperationsMixin from ._operations import FaceSessionClientOperationsMixin @@ -14,6 +16,8 @@ from ._patch import patch_sdk as _patch_sdk __all__ = [ + "LargeFaceListOperations", + "LargePersonGroupOperations", "FaceClientOperationsMixin", "FaceSessionClientOperationsMixin", ] diff --git a/sdk/face/azure-ai-vision-face/azure/ai/vision/face/aio/operations/_operations.py b/sdk/face/azure-ai-vision-face/azure/ai/vision/face/aio/operations/_operations.py new file mode 100644 index 000000000000..44f9683129a0 --- /dev/null +++ b/sdk/face/azure-ai-vision-face/azure/ai/vision/face/aio/operations/_operations.py @@ -0,0 +1,6043 @@ +# pylint: disable=too-many-lines +# coding=utf-8 +# -------------------------------------------------------------------------- +# Copyright (c) Microsoft Corporation. All rights reserved. +# Licensed under the MIT License. See License.txt in the project root for license information. +# Code generated by Microsoft (R) Python Code Generator. +# Changes may cause incorrect behavior and will be lost if the code is regenerated. 
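The reworked wrapper above now accepts a typed `CreateLivenessWithVerifySessionContent` (or a dict it converts before building the multipart body). A hedged sketch of the happy path; the model field names are assumed from the snake_case convention of the generated models, and the image path and GUID are placeholders:

```python
import asyncio

from azure.core.credentials import AzureKeyCredential
from azure.ai.vision.face.aio import FaceSessionClient
from azure.ai.vision.face.models import (
    CreateLivenessWithVerifySessionContent,
    LivenessOperationMode,
)


async def create_session() -> None:
    async with FaceSessionClient(
        endpoint="<endpoint>", credential=AzureKeyCredential("<key>")
    ) as client:
        with open("reference-face.jpg", "rb") as fd:  # placeholder reference image
            verify_image = fd.read()
        result = await client.create_liveness_with_verify_session(
            CreateLivenessWithVerifySessionContent(
                liveness_operation_mode=LivenessOperationMode.PASSIVE,
                device_correlation_id="<device-guid>",  # placeholder GUID
            ),
            verify_image=verify_image,
        )
        # Hand auth_token to the client device; keep session_id server-side.
        print(result.session_id, result.auth_token)


asyncio.run(create_session())
```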
+# -------------------------------------------------------------------------- +from io import IOBase +import json +import sys +from typing import Any, AsyncIterator, Callable, Dict, IO, List, Optional, TypeVar, Union, cast, overload + +from azure.core.exceptions import ( + ClientAuthenticationError, + HttpResponseError, + ResourceExistsError, + ResourceNotFoundError, + ResourceNotModifiedError, + StreamClosedError, + StreamConsumedError, + map_error, +) +from azure.core.pipeline import PipelineResponse +from azure.core.polling import AsyncLROPoller, AsyncNoPolling, AsyncPollingMethod +from azure.core.polling.async_base_polling import AsyncLROBasePolling +from azure.core.rest import AsyncHttpResponse, HttpRequest +from azure.core.tracing.decorator_async import distributed_trace_async +from azure.core.utils import case_insensitive_dict + +from ... import _model_base, models as _models +from ..._model_base import SdkJSONEncoder, _deserialize +from ..._validation import api_version_validation +from ..._vendor import prepare_multipart_form_data +from ...operations._operations import ( + build_face_detect_from_url_request, + build_face_detect_request, + build_face_find_similar_from_large_face_list_request, + build_face_find_similar_request, + build_face_group_request, + build_face_identify_from_large_person_group_request, + build_face_session_create_liveness_session_request, + build_face_session_create_liveness_with_verify_session_request, + build_face_session_create_liveness_with_verify_session_with_verify_image_request, + build_face_session_delete_liveness_session_request, + build_face_session_delete_liveness_with_verify_session_request, + build_face_session_detect_from_session_image_request, + build_face_session_get_liveness_session_audit_entries_request, + build_face_session_get_liveness_session_result_request, + build_face_session_get_liveness_sessions_request, + build_face_session_get_liveness_with_verify_session_audit_entries_request, + build_face_session_get_liveness_with_verify_session_result_request, + build_face_session_get_liveness_with_verify_sessions_request, + build_face_session_get_session_image_request, + build_face_verify_face_to_face_request, + build_face_verify_from_large_person_group_request, + build_large_face_list_add_face_from_url_request, + build_large_face_list_add_face_request, + build_large_face_list_create_request, + build_large_face_list_delete_face_request, + build_large_face_list_delete_request, + build_large_face_list_get_face_request, + build_large_face_list_get_faces_request, + build_large_face_list_get_large_face_lists_request, + build_large_face_list_get_request, + build_large_face_list_get_training_status_request, + build_large_face_list_train_request, + build_large_face_list_update_face_request, + build_large_face_list_update_request, + build_large_person_group_add_face_from_url_request, + build_large_person_group_add_face_request, + build_large_person_group_create_person_request, + build_large_person_group_create_request, + build_large_person_group_delete_face_request, + build_large_person_group_delete_person_request, + build_large_person_group_delete_request, + build_large_person_group_get_face_request, + build_large_person_group_get_large_person_groups_request, + build_large_person_group_get_person_request, + build_large_person_group_get_persons_request, + build_large_person_group_get_request, + build_large_person_group_get_training_status_request, + build_large_person_group_train_request, + build_large_person_group_update_face_request, + 
build_large_person_group_update_person_request, + build_large_person_group_update_request, +) +from .._vendor import FaceClientMixinABC, FaceSessionClientMixinABC + +if sys.version_info >= (3, 9): + from collections.abc import MutableMapping +else: + from typing import MutableMapping # type: ignore +JSON = MutableMapping[str, Any] # pylint: disable=unsubscriptable-object +_Unset: Any = object() +T = TypeVar("T") +ClsType = Optional[Callable[[PipelineResponse[HttpRequest, AsyncHttpResponse], T, Dict[str, Any]], Any]] + + +class LargeFaceListOperations: + """ + .. warning:: + **DO NOT** instantiate this class directly. + + Instead, you should access the following operations through + :class:`~azure.ai.vision.face.aio.FaceAdministrationClient`'s + :attr:`large_face_list` attribute. + """ + + def __init__(self, *args, **kwargs) -> None: + input_args = list(args) + self._client = input_args.pop(0) if input_args else kwargs.pop("client") + self._config = input_args.pop(0) if input_args else kwargs.pop("config") + self._serialize = input_args.pop(0) if input_args else kwargs.pop("serializer") + self._deserialize = input_args.pop(0) if input_args else kwargs.pop("deserializer") + + @overload + async def create( + self, large_face_list_id: str, body: JSON, *, content_type: str = "application/json", **kwargs: Any + ) -> None: + """Create an empty Large Face List with user-specified largeFaceListId, name, an optional userData + and recognitionModel. + + Please refer to + https://learn.microsoft.com/rest/api/face/face-list-operations/create-large-face-list for more + details. + + :param large_face_list_id: Valid character is letter in lower case or digit or '-' or '_', + maximum length is 64. Required. + :type large_face_list_id: str + :param body: Required. + :type body: JSON + :keyword content_type: Body Parameter content-type. Content type parameter for JSON body. + Default value is "application/json". + :paramtype content_type: str + :return: None + :rtype: None + :raises ~azure.core.exceptions.HttpResponseError: + """ + + @overload + async def create( + self, + large_face_list_id: str, + *, + name: str, + content_type: str = "application/json", + user_data: Optional[str] = None, + recognition_model: Optional[Union[str, _models.FaceRecognitionModel]] = None, + **kwargs: Any + ) -> None: + """Create an empty Large Face List with user-specified largeFaceListId, name, an optional userData + and recognitionModel. + + Please refer to + https://learn.microsoft.com/rest/api/face/face-list-operations/create-large-face-list for more + details. + + :param large_face_list_id: Valid character is letter in lower case or digit or '-' or '_', + maximum length is 64. Required. + :type large_face_list_id: str + :keyword name: User defined name, maximum length is 128. Required. + :paramtype name: str + :keyword content_type: Body Parameter content-type. Content type parameter for JSON body. + Default value is "application/json". + :paramtype content_type: str + :keyword user_data: Optional user defined data. Length should not exceed 16K. Default value is + None. + :paramtype user_data: str + :keyword recognition_model: The 'recognitionModel' associated with this face list. Supported + 'recognitionModel' values include 'recognition_01', 'recognition_02, 'recognition_03', and + 'recognition_04'. The default value is 'recognition_01'. 
'recognition_04' is recommended since + its accuracy is improved on faces wearing masks compared with 'recognition_03', and its overall + accuracy is improved compared with 'recognition_01' and 'recognition_02'. Known values are: + "recognition_01", "recognition_02", "recognition_03", and "recognition_04". Default value is + None. + :paramtype recognition_model: str or ~azure.ai.vision.face.models.FaceRecognitionModel + :return: None + :rtype: None + :raises ~azure.core.exceptions.HttpResponseError: + """ + + @overload + async def create( + self, large_face_list_id: str, body: IO[bytes], *, content_type: str = "application/json", **kwargs: Any + ) -> None: + """Create an empty Large Face List with user-specified largeFaceListId, name, an optional userData + and recognitionModel. + + Please refer to + https://learn.microsoft.com/rest/api/face/face-list-operations/create-large-face-list for more + details. + + :param large_face_list_id: Valid character is letter in lower case or digit or '-' or '_', + maximum length is 64. Required. + :type large_face_list_id: str + :param body: Required. + :type body: IO[bytes] + :keyword content_type: Body Parameter content-type. Content type parameter for binary body. + Default value is "application/json". + :paramtype content_type: str + :return: None + :rtype: None + :raises ~azure.core.exceptions.HttpResponseError: + """ + + @distributed_trace_async + async def create( + self, + large_face_list_id: str, + body: Union[JSON, IO[bytes]] = _Unset, + *, + name: str = _Unset, + user_data: Optional[str] = None, + recognition_model: Optional[Union[str, _models.FaceRecognitionModel]] = None, + **kwargs: Any + ) -> None: + """Create an empty Large Face List with user-specified largeFaceListId, name, an optional userData + and recognitionModel. + + Please refer to + https://learn.microsoft.com/rest/api/face/face-list-operations/create-large-face-list for more + details. + + :param large_face_list_id: Valid character is letter in lower case or digit or '-' or '_', + maximum length is 64. Required. + :type large_face_list_id: str + :param body: Is either a JSON type or a IO[bytes] type. Required. + :type body: JSON or IO[bytes] + :keyword name: User defined name, maximum length is 128. Required. + :paramtype name: str + :keyword user_data: Optional user defined data. Length should not exceed 16K. Default value is + None. + :paramtype user_data: str + :keyword recognition_model: The 'recognitionModel' associated with this face list. Supported + 'recognitionModel' values include 'recognition_01', 'recognition_02, 'recognition_03', and + 'recognition_04'. The default value is 'recognition_01'. 'recognition_04' is recommended since + its accuracy is improved on faces wearing masks compared with 'recognition_03', and its overall + accuracy is improved compared with 'recognition_01' and 'recognition_02'. Known values are: + "recognition_01", "recognition_02", "recognition_03", and "recognition_04". Default value is + None. 
+        :paramtype recognition_model: str or ~azure.ai.vision.face.models.FaceRecognitionModel
+        :return: None
+        :rtype: None
+        :raises ~azure.core.exceptions.HttpResponseError:
+        """
+        error_map: MutableMapping = {
+            401: ClientAuthenticationError,
+            404: ResourceNotFoundError,
+            409: ResourceExistsError,
+            304: ResourceNotModifiedError,
+        }
+        error_map.update(kwargs.pop("error_map", {}) or {})
+
+        _headers = case_insensitive_dict(kwargs.pop("headers", {}) or {})
+        _params = kwargs.pop("params", {}) or {}
+
+        content_type: Optional[str] = kwargs.pop("content_type", _headers.pop("Content-Type", None))
+        cls: ClsType[None] = kwargs.pop("cls", None)
+
+        if body is _Unset:
+            if name is _Unset:
+                raise TypeError("missing required argument: name")
+            body = {"name": name, "recognitionModel": recognition_model, "userData": user_data}
+            body = {k: v for k, v in body.items() if v is not None}
+        content_type = content_type or "application/json"
+        _content = None
+        if isinstance(body, (IOBase, bytes)):
+            _content = body
+        else:
+            _content = json.dumps(body, cls=SdkJSONEncoder, exclude_readonly=True)  # type: ignore
+
+        _request = build_large_face_list_create_request(
+            large_face_list_id=large_face_list_id,
+            content_type=content_type,
+            content=_content,
+            headers=_headers,
+            params=_params,
+        )
+        path_format_arguments = {
+            "endpoint": self._serialize.url("self._config.endpoint", self._config.endpoint, "str", skip_quote=True),
+            "apiVersion": self._serialize.url("self._config.api_version", self._config.api_version, "str"),
+        }
+        _request.url = self._client.format_url(_request.url, **path_format_arguments)
+
+        _stream = False
+        pipeline_response: PipelineResponse = await self._client._pipeline.run(  # pylint: disable=protected-access
+            _request, stream=_stream, **kwargs
+        )
+
+        response = pipeline_response.http_response
+
+        if response.status_code not in [200]:
+            map_error(status_code=response.status_code, response=response, error_map=error_map)
+            error = _deserialize(_models.FaceErrorResponse, response.json())
+            raise HttpResponseError(response=response, model=error)
+
+        if cls:
+            return cls(pipeline_response, None, {})  # type: ignore
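A sketch of driving this `create` overload set through the async `FaceAdministrationClient` (endpoint, key, and the list id are placeholders; `RECOGNITION04` is the enum spelling introduced in this release):

```python
import asyncio

from azure.core.credentials import AzureKeyCredential
from azure.ai.vision.face.aio import FaceAdministrationClient
from azure.ai.vision.face.models import FaceRecognitionModel


async def main():
    endpoint, key = "<your-endpoint>", "<your-key>"  # placeholders
    async with FaceAdministrationClient(endpoint=endpoint, credential=AzureKeyCredential(key)) as client:
        # Keyword form shown; a JSON dict or IO[bytes] body is accepted as well.
        await client.large_face_list.create(
            "my-face-list",
            name="My face list",
            user_data="example list",
            recognition_model=FaceRecognitionModel.RECOGNITION04,
        )


asyncio.run(main())
```

+
+    @distributed_trace_async
+    async def delete(self, large_face_list_id: str, **kwargs: Any) -> None:
+        """Delete a specified Large Face List.
+
+        Please refer to
+        https://learn.microsoft.com/rest/api/face/face-list-operations/delete-large-face-list for more
+        details.
+
+        :param large_face_list_id: Valid character is letter in lower case or digit or '-' or '_',
+         maximum length is 64. Required.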
+ :type large_face_list_id: str + :return: None + :rtype: None + :raises ~azure.core.exceptions.HttpResponseError: + """ + error_map: MutableMapping = { + 401: ClientAuthenticationError, + 404: ResourceNotFoundError, + 409: ResourceExistsError, + 304: ResourceNotModifiedError, + } + error_map.update(kwargs.pop("error_map", {}) or {}) + + _headers = kwargs.pop("headers", {}) or {} + _params = kwargs.pop("params", {}) or {} + + cls: ClsType[None] = kwargs.pop("cls", None) + + _request = build_large_face_list_delete_request( + large_face_list_id=large_face_list_id, + headers=_headers, + params=_params, + ) + path_format_arguments = { + "endpoint": self._serialize.url("self._config.endpoint", self._config.endpoint, "str", skip_quote=True), + "apiVersion": self._serialize.url("self._config.api_version", self._config.api_version, "str"), + } + _request.url = self._client.format_url(_request.url, **path_format_arguments) + + _stream = False + pipeline_response: PipelineResponse = await self._client._pipeline.run( # pylint: disable=protected-access + _request, stream=_stream, **kwargs + ) + + response = pipeline_response.http_response + + if response.status_code not in [200]: + map_error(status_code=response.status_code, response=response, error_map=error_map) + error = _deserialize(_models.FaceErrorResponse, response.json()) + raise HttpResponseError(response=response, model=error) + + if cls: + return cls(pipeline_response, None, {}) # type: ignore + + @distributed_trace_async + async def get( + self, large_face_list_id: str, *, return_recognition_model: Optional[bool] = None, **kwargs: Any + ) -> _models.LargeFaceList: + """Please refer to + https://learn.microsoft.com/rest/api/face/face-list-operations/get-large-face-list for more + details. + + :param large_face_list_id: Valid character is letter in lower case or digit or '-' or '_', + maximum length is 64. Required. + :type large_face_list_id: str + :keyword return_recognition_model: Return 'recognitionModel' or not. The default value is + false. Default value is None. + :paramtype return_recognition_model: bool + :return: LargeFaceList. 
The LargeFaceList is compatible with MutableMapping + :rtype: ~azure.ai.vision.face.models.LargeFaceList + :raises ~azure.core.exceptions.HttpResponseError: + """ + error_map: MutableMapping = { + 401: ClientAuthenticationError, + 404: ResourceNotFoundError, + 409: ResourceExistsError, + 304: ResourceNotModifiedError, + } + error_map.update(kwargs.pop("error_map", {}) or {}) + + _headers = kwargs.pop("headers", {}) or {} + _params = kwargs.pop("params", {}) or {} + + cls: ClsType[_models.LargeFaceList] = kwargs.pop("cls", None) + + _request = build_large_face_list_get_request( + large_face_list_id=large_face_list_id, + return_recognition_model=return_recognition_model, + headers=_headers, + params=_params, + ) + path_format_arguments = { + "endpoint": self._serialize.url("self._config.endpoint", self._config.endpoint, "str", skip_quote=True), + "apiVersion": self._serialize.url("self._config.api_version", self._config.api_version, "str"), + } + _request.url = self._client.format_url(_request.url, **path_format_arguments) + + _stream = kwargs.pop("stream", False) + pipeline_response: PipelineResponse = await self._client._pipeline.run( # pylint: disable=protected-access + _request, stream=_stream, **kwargs + ) + + response = pipeline_response.http_response + + if response.status_code not in [200]: + if _stream: + try: + await response.read() # Load the body in memory and close the socket + except (StreamConsumedError, StreamClosedError): + pass + map_error(status_code=response.status_code, response=response, error_map=error_map) + error = _deserialize(_models.FaceErrorResponse, response.json()) + raise HttpResponseError(response=response, model=error) + + if _stream: + deserialized = response.iter_bytes() + else: + deserialized = _deserialize(_models.LargeFaceList, response.json()) + + if cls: + return cls(pipeline_response, deserialized, {}) # type: ignore + + return deserialized # type: ignore + + @overload + async def update( + self, large_face_list_id: str, body: JSON, *, content_type: str = "application/json", **kwargs: Any + ) -> None: + """Please refer to + https://learn.microsoft.com/rest/api/face/face-list-operations/update-large-face-list for more + details. + + :param large_face_list_id: Valid character is letter in lower case or digit or '-' or '_', + maximum length is 64. Required. + :type large_face_list_id: str + :param body: Required. + :type body: JSON + :keyword content_type: Body Parameter content-type. Content type parameter for JSON body. + Default value is "application/json". + :paramtype content_type: str + :return: None + :rtype: None + :raises ~azure.core.exceptions.HttpResponseError: + """ + + @overload + async def update( + self, + large_face_list_id: str, + *, + content_type: str = "application/json", + name: Optional[str] = None, + user_data: Optional[str] = None, + **kwargs: Any + ) -> None: + """Please refer to + https://learn.microsoft.com/rest/api/face/face-list-operations/update-large-face-list for more + details. + + :param large_face_list_id: Valid character is letter in lower case or digit or '-' or '_', + maximum length is 64. Required. + :type large_face_list_id: str + :keyword content_type: Body Parameter content-type. Content type parameter for JSON body. + Default value is "application/json". + :paramtype content_type: str + :keyword name: User defined name, maximum length is 128. Default value is None. + :paramtype name: str + :keyword user_data: Optional user defined data. Length should not exceed 16K. Default value is + None. 
+ :paramtype user_data: str + :return: None + :rtype: None + :raises ~azure.core.exceptions.HttpResponseError: + """ + + @overload + async def update( + self, large_face_list_id: str, body: IO[bytes], *, content_type: str = "application/json", **kwargs: Any + ) -> None: + """Please refer to + https://learn.microsoft.com/rest/api/face/face-list-operations/update-large-face-list for more + details. + + :param large_face_list_id: Valid character is letter in lower case or digit or '-' or '_', + maximum length is 64. Required. + :type large_face_list_id: str + :param body: Required. + :type body: IO[bytes] + :keyword content_type: Body Parameter content-type. Content type parameter for binary body. + Default value is "application/json". + :paramtype content_type: str + :return: None + :rtype: None + :raises ~azure.core.exceptions.HttpResponseError: + """ + + @distributed_trace_async + async def update( + self, + large_face_list_id: str, + body: Union[JSON, IO[bytes]] = _Unset, + *, + name: Optional[str] = None, + user_data: Optional[str] = None, + **kwargs: Any + ) -> None: + """Please refer to + https://learn.microsoft.com/rest/api/face/face-list-operations/update-large-face-list for more + details. + + :param large_face_list_id: Valid character is letter in lower case or digit or '-' or '_', + maximum length is 64. Required. + :type large_face_list_id: str + :param body: Is either a JSON type or a IO[bytes] type. Required. + :type body: JSON or IO[bytes] + :keyword name: User defined name, maximum length is 128. Default value is None. + :paramtype name: str + :keyword user_data: Optional user defined data. Length should not exceed 16K. Default value is + None. + :paramtype user_data: str + :return: None + :rtype: None + :raises ~azure.core.exceptions.HttpResponseError: + """ + error_map: MutableMapping = { + 401: ClientAuthenticationError, + 404: ResourceNotFoundError, + 409: ResourceExistsError, + 304: ResourceNotModifiedError, + } + error_map.update(kwargs.pop("error_map", {}) or {}) + + _headers = case_insensitive_dict(kwargs.pop("headers", {}) or {}) + _params = kwargs.pop("params", {}) or {} + + content_type: Optional[str] = kwargs.pop("content_type", _headers.pop("Content-Type", None)) + cls: ClsType[None] = kwargs.pop("cls", None) + + if body is _Unset: + body = {"name": name, "userData": user_data} + body = {k: v for k, v in body.items() if v is not None} + content_type = content_type or "application/json" + _content = None + if isinstance(body, (IOBase, bytes)): + _content = body + else: + _content = json.dumps(body, cls=SdkJSONEncoder, exclude_readonly=True) # type: ignore + + _request = build_large_face_list_update_request( + large_face_list_id=large_face_list_id, + content_type=content_type, + content=_content, + headers=_headers, + params=_params, + ) + path_format_arguments = { + "endpoint": self._serialize.url("self._config.endpoint", self._config.endpoint, "str", skip_quote=True), + "apiVersion": self._serialize.url("self._config.api_version", self._config.api_version, "str"), + } + _request.url = self._client.format_url(_request.url, **path_format_arguments) + + _stream = False + pipeline_response: PipelineResponse = await self._client._pipeline.run( # pylint: disable=protected-access + _request, stream=_stream, **kwargs + ) + + response = pipeline_response.http_response + + if response.status_code not in [200]: + map_error(status_code=response.status_code, response=response, error_map=error_map) + error = _deserialize(_models.FaceErrorResponse, response.json()) + raise 
HttpResponseError(response=response, model=error)
+
+        if cls:
+            return cls(pipeline_response, None, {})  # type: ignore
+
+    @distributed_trace_async
+    async def get_large_face_lists(
+        self,
+        *,
+        start: Optional[str] = None,
+        top: Optional[int] = None,
+        return_recognition_model: Optional[bool] = None,
+        **kwargs: Any
+    ) -> List[_models.LargeFaceList]:
+        """List Large Face Lists' information of largeFaceListId, name, userData and recognitionModel.
+
+        Please refer to
+        https://learn.microsoft.com/rest/api/face/face-list-operations/get-large-face-lists for more
+        details.
+
+        :keyword start: List resources greater than the "start". It contains no more than 64
+         characters. Default is empty. Default value is None.
+        :paramtype start: str
+        :keyword top: The number of items to list, ranging in [1, 1000]. Default is 1000. Default value
+         is None.
+        :paramtype top: int
+        :keyword return_recognition_model: Return 'recognitionModel' or not. The default value is
+         false. Default value is None.
+        :paramtype return_recognition_model: bool
+        :return: list of LargeFaceList
+        :rtype: list[~azure.ai.vision.face.models.LargeFaceList]
+        :raises ~azure.core.exceptions.HttpResponseError:
+        """
+        error_map: MutableMapping = {
+            401: ClientAuthenticationError,
+            404: ResourceNotFoundError,
+            409: ResourceExistsError,
+            304: ResourceNotModifiedError,
+        }
+        error_map.update(kwargs.pop("error_map", {}) or {})
+
+        _headers = kwargs.pop("headers", {}) or {}
+        _params = kwargs.pop("params", {}) or {}
+
+        cls: ClsType[List[_models.LargeFaceList]] = kwargs.pop("cls", None)
+
+        _request = build_large_face_list_get_large_face_lists_request(
+            start=start,
+            top=top,
+            return_recognition_model=return_recognition_model,
+            headers=_headers,
+            params=_params,
+        )
+        path_format_arguments = {
+            "endpoint": self._serialize.url("self._config.endpoint", self._config.endpoint, "str", skip_quote=True),
+            "apiVersion": self._serialize.url("self._config.api_version", self._config.api_version, "str"),
+        }
+        _request.url = self._client.format_url(_request.url, **path_format_arguments)
+
+        _stream = kwargs.pop("stream", False)
+        pipeline_response: PipelineResponse = await self._client._pipeline.run(  # pylint: disable=protected-access
+            _request, stream=_stream, **kwargs
+        )
+
+        response = pipeline_response.http_response
+
+        if response.status_code not in [200]:
+            if _stream:
+                try:
+                    await response.read()  # Load the body in memory and close the socket
+                except (StreamConsumedError, StreamClosedError):
+                    pass
+            map_error(status_code=response.status_code, response=response, error_map=error_map)
+            error = _deserialize(_models.FaceErrorResponse, response.json())
+            raise HttpResponseError(response=response, model=error)
+
+        if _stream:
+            deserialized = response.iter_bytes()
+        else:
+            deserialized = _deserialize(List[_models.LargeFaceList], response.json())
+
+        if cls:
+            return cls(pipeline_response, deserialized, {})  # type: ignore
+
+        return deserialized  # type: ignore
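The `start`/`top` keywords implement forward-only paging: each call returns at most `top` lists whose ids sort after `start`, so the last id of one page seeds the next call. A hedged helper sketch (it assumes the generated `LargeFaceList` model exposes the id as `large_face_list_id`):

```python
async def list_all_large_face_lists(client):
    """Drain every page from get_large_face_lists via the start/top contract."""
    results, start = [], None
    while True:
        page = await client.large_face_list.get_large_face_lists(start=start, top=1000)
        results.extend(page)
        if len(page) < 1000:  # a short page means there is nothing after it
            return results
        start = page[-1].large_face_list_id  # ids sorting after this come next
```

+
+    @distributed_trace_async
+    async def get_training_status(self, large_face_list_id: str, **kwargs: Any) -> _models.FaceTrainingResult:
+        """Please refer to
+        https://learn.microsoft.com/rest/api/face/face-list-operations/get-large-face-list-training-status
+        for more details.
+
+        :param large_face_list_id: Valid character is letter in lower case or digit or '-' or '_',
+         maximum length is 64. Required.
+        :type large_face_list_id: str
+        :return: FaceTrainingResult.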
The FaceTrainingResult is compatible with MutableMapping + :rtype: ~azure.ai.vision.face.models.FaceTrainingResult + :raises ~azure.core.exceptions.HttpResponseError: + """ + error_map: MutableMapping = { + 401: ClientAuthenticationError, + 404: ResourceNotFoundError, + 409: ResourceExistsError, + 304: ResourceNotModifiedError, + } + error_map.update(kwargs.pop("error_map", {}) or {}) + + _headers = kwargs.pop("headers", {}) or {} + _params = kwargs.pop("params", {}) or {} + + cls: ClsType[_models.FaceTrainingResult] = kwargs.pop("cls", None) + + _request = build_large_face_list_get_training_status_request( + large_face_list_id=large_face_list_id, + headers=_headers, + params=_params, + ) + path_format_arguments = { + "endpoint": self._serialize.url("self._config.endpoint", self._config.endpoint, "str", skip_quote=True), + "apiVersion": self._serialize.url("self._config.api_version", self._config.api_version, "str"), + } + _request.url = self._client.format_url(_request.url, **path_format_arguments) + + _stream = kwargs.pop("stream", False) + pipeline_response: PipelineResponse = await self._client._pipeline.run( # pylint: disable=protected-access + _request, stream=_stream, **kwargs + ) + + response = pipeline_response.http_response + + if response.status_code not in [200]: + if _stream: + try: + await response.read() # Load the body in memory and close the socket + except (StreamConsumedError, StreamClosedError): + pass + map_error(status_code=response.status_code, response=response, error_map=error_map) + error = _deserialize(_models.FaceErrorResponse, response.json()) + raise HttpResponseError(response=response, model=error) + + if _stream: + deserialized = response.iter_bytes() + else: + deserialized = _deserialize(_models.FaceTrainingResult, response.json()) + + if cls: + return cls(pipeline_response, deserialized, {}) # type: ignore + + return deserialized # type: ignore + + async def _train_initial(self, large_face_list_id: str, **kwargs: Any) -> AsyncIterator[bytes]: + error_map: MutableMapping = { + 401: ClientAuthenticationError, + 404: ResourceNotFoundError, + 409: ResourceExistsError, + 304: ResourceNotModifiedError, + } + error_map.update(kwargs.pop("error_map", {}) or {}) + + _headers = kwargs.pop("headers", {}) or {} + _params = kwargs.pop("params", {}) or {} + + cls: ClsType[AsyncIterator[bytes]] = kwargs.pop("cls", None) + + _request = build_large_face_list_train_request( + large_face_list_id=large_face_list_id, + headers=_headers, + params=_params, + ) + path_format_arguments = { + "endpoint": self._serialize.url("self._config.endpoint", self._config.endpoint, "str", skip_quote=True), + "apiVersion": self._serialize.url("self._config.api_version", self._config.api_version, "str"), + } + _request.url = self._client.format_url(_request.url, **path_format_arguments) + + _stream = True + pipeline_response: PipelineResponse = await self._client._pipeline.run( # pylint: disable=protected-access + _request, stream=_stream, **kwargs + ) + + response = pipeline_response.http_response + + if response.status_code not in [202]: + try: + await response.read() # Load the body in memory and close the socket + except (StreamConsumedError, StreamClosedError): + pass + map_error(status_code=response.status_code, response=response, error_map=error_map) + error = _deserialize(_models.FaceErrorResponse, response.json()) + raise HttpResponseError(response=response, model=error) + + response_headers = {} + response_headers["operation-Location"] = self._deserialize("str", 
response.headers.get("operation-Location"))
+
+        deserialized = response.iter_bytes()
+
+        if cls:
+            return cls(pipeline_response, deserialized, response_headers)  # type: ignore
+
+        return deserialized  # type: ignore
+
+    @distributed_trace_async
+    async def begin_train(self, large_face_list_id: str, **kwargs: Any) -> AsyncLROPoller[None]:
+        """Submit a Large Face List training task.
+
+        Please refer to
+        https://learn.microsoft.com/rest/api/face/face-list-operations/train-large-face-list for more
+        details.
+
+        :param large_face_list_id: Valid character is letter in lower case or digit or '-' or '_',
+         maximum length is 64. Required.
+        :type large_face_list_id: str
+        :return: An instance of AsyncLROPoller that returns None
+        :rtype: ~azure.core.polling.AsyncLROPoller[None]
+        :raises ~azure.core.exceptions.HttpResponseError:
+        """
+        _headers = kwargs.pop("headers", {}) or {}
+        _params = kwargs.pop("params", {}) or {}
+
+        cls: ClsType[None] = kwargs.pop("cls", None)
+        polling: Union[bool, AsyncPollingMethod] = kwargs.pop("polling", True)
+        lro_delay = kwargs.pop("polling_interval", self._config.polling_interval)
+        cont_token: Optional[str] = kwargs.pop("continuation_token", None)
+        if cont_token is None:
+            raw_result = await self._train_initial(
+                large_face_list_id=large_face_list_id, cls=lambda x, y, z: x, headers=_headers, params=_params, **kwargs
+            )
+            await raw_result.http_response.read()  # type: ignore
+        kwargs.pop("error_map", None)
+
+        def get_long_running_output(pipeline_response):  # pylint: disable=inconsistent-return-statements
+            if cls:
+                return cls(pipeline_response, None, {})  # type: ignore
+
+        path_format_arguments = {
+            "endpoint": self._serialize.url("self._config.endpoint", self._config.endpoint, "str", skip_quote=True),
+            "apiVersion": self._serialize.url("self._config.api_version", self._config.api_version, "str"),
+        }
+
+        if polling is True:
+            polling_method: AsyncPollingMethod = cast(
+                AsyncPollingMethod,
+                AsyncLROBasePolling(lro_delay, path_format_arguments=path_format_arguments, **kwargs),
+            )
+        elif polling is False:
+            polling_method = cast(AsyncPollingMethod, AsyncNoPolling())
+        else:
+            polling_method = polling
+        if cont_token:
+            return AsyncLROPoller[None].from_continuation_token(
+                polling_method=polling_method,
+                continuation_token=cont_token,
+                client=self._client,
+                deserialization_callback=get_long_running_output,
+            )
+        return AsyncLROPoller[None](self._client, raw_result, get_long_running_output, polling_method)  # type: ignore
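Because `begin_train` returns an `AsyncLROPoller`, callers add faces first and then await `result()` on the poller, following the long-running-operations pattern this release documents. A sketch (`client` is the placeholder `FaceAdministrationClient` from the earlier sketch; ids and the image path are placeholders):

```python
async def train_list(client):
    """Add one face to the placeholder list, then wait for training to finish."""
    with open("face.jpg", "rb") as fd:
        await client.large_face_list.add_face("my-face-list", fd.read())

    # Training must finish before find_similar_from_large_face_list is called.
    poller = await client.large_face_list.begin_train("my-face-list")
    await poller.result()  # returns None; raises HttpResponseError if training failed
```

+    @overload
+    async def add_face_from_url(
+        self,
+        large_face_list_id: str,
+        body: JSON,
+        *,
+        target_face: Optional[List[int]] = None,
+        detection_model: Optional[Union[str, _models.FaceDetectionModel]] = None,
+        user_data: Optional[str] = None,
+        content_type: str = "application/json",
+        **kwargs: Any
+    ) -> _models.AddFaceResult:
+        """Add a face to a specified Large Face List, up to 1,000,000 faces.
+
+        Please refer to
+        https://learn.microsoft.com/rest/api/face/face-list-operations/add-large-face-list-face-from-url
+        for more details.
+
+        :param large_face_list_id: Valid character is letter in lower case or digit or '-' or '_',
+         maximum length is 64. Required.
+        :type large_face_list_id: str
+        :param body: Required.
+        :type body: JSON
+        :keyword target_face: A face rectangle to specify the target face to be added to a person, in
+         the format of 'targetFace=left,top,width,height'. Default value is None.
+        :paramtype target_face: list[int]
+        :keyword detection_model: The 'detectionModel' associated with the detected faceIds.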
Supported + 'detectionModel' values include 'detection_01', 'detection_02' and 'detection_03'. The default + value is 'detection_01'. Known values are: "detection_01", "detection_02", and "detection_03". + Default value is None. + :paramtype detection_model: str or ~azure.ai.vision.face.models.FaceDetectionModel + :keyword user_data: User-provided data attached to the face. The size limit is 1K. Default + value is None. + :paramtype user_data: str + :keyword content_type: Body Parameter content-type. Content type parameter for JSON body. + Default value is "application/json". + :paramtype content_type: str + :return: AddFaceResult. The AddFaceResult is compatible with MutableMapping + :rtype: ~azure.ai.vision.face.models.AddFaceResult + :raises ~azure.core.exceptions.HttpResponseError: + """ + + @overload + async def add_face_from_url( + self, + large_face_list_id: str, + *, + url: str, + target_face: Optional[List[int]] = None, + detection_model: Optional[Union[str, _models.FaceDetectionModel]] = None, + user_data: Optional[str] = None, + content_type: str = "application/json", + **kwargs: Any + ) -> _models.AddFaceResult: + """Add a face to a specified Large Face List, up to 1,000,000 faces. + + Please refer to + https://learn.microsoft.com/rest/api/face/face-list-operations/add-large-face-list-face-from-url + for more details. + + :param large_face_list_id: Valid character is letter in lower case or digit or '-' or '_', + maximum length is 64. Required. + :type large_face_list_id: str + :keyword url: URL of input image. Required. + :paramtype url: str + :keyword target_face: A face rectangle to specify the target face to be added to a person, in + the format of 'targetFace=left,top,width,height'. Default value is None. + :paramtype target_face: list[int] + :keyword detection_model: The 'detectionModel' associated with the detected faceIds. Supported + 'detectionModel' values include 'detection_01', 'detection_02' and 'detection_03'. The default + value is 'detection_01'. Known values are: "detection_01", "detection_02", and "detection_03". + Default value is None. + :paramtype detection_model: str or ~azure.ai.vision.face.models.FaceDetectionModel + :keyword user_data: User-provided data attached to the face. The size limit is 1K. Default + value is None. + :paramtype user_data: str + :keyword content_type: Body Parameter content-type. Content type parameter for JSON body. + Default value is "application/json". + :paramtype content_type: str + :return: AddFaceResult. The AddFaceResult is compatible with MutableMapping + :rtype: ~azure.ai.vision.face.models.AddFaceResult + :raises ~azure.core.exceptions.HttpResponseError: + """ + + @overload + async def add_face_from_url( + self, + large_face_list_id: str, + body: IO[bytes], + *, + target_face: Optional[List[int]] = None, + detection_model: Optional[Union[str, _models.FaceDetectionModel]] = None, + user_data: Optional[str] = None, + content_type: str = "application/json", + **kwargs: Any + ) -> _models.AddFaceResult: + """Add a face to a specified Large Face List, up to 1,000,000 faces. + + Please refer to + https://learn.microsoft.com/rest/api/face/face-list-operations/add-large-face-list-face-from-url + for more details. + + :param large_face_list_id: Valid character is letter in lower case or digit or '-' or '_', + maximum length is 64. Required. + :type large_face_list_id: str + :param body: Required. 
+ :type body: IO[bytes] + :keyword target_face: A face rectangle to specify the target face to be added to a person, in + the format of 'targetFace=left,top,width,height'. Default value is None. + :paramtype target_face: list[int] + :keyword detection_model: The 'detectionModel' associated with the detected faceIds. Supported + 'detectionModel' values include 'detection_01', 'detection_02' and 'detection_03'. The default + value is 'detection_01'. Known values are: "detection_01", "detection_02", and "detection_03". + Default value is None. + :paramtype detection_model: str or ~azure.ai.vision.face.models.FaceDetectionModel + :keyword user_data: User-provided data attached to the face. The size limit is 1K. Default + value is None. + :paramtype user_data: str + :keyword content_type: Body Parameter content-type. Content type parameter for binary body. + Default value is "application/json". + :paramtype content_type: str + :return: AddFaceResult. The AddFaceResult is compatible with MutableMapping + :rtype: ~azure.ai.vision.face.models.AddFaceResult + :raises ~azure.core.exceptions.HttpResponseError: + """ + + @distributed_trace_async + async def add_face_from_url( + self, + large_face_list_id: str, + body: Union[JSON, IO[bytes]] = _Unset, + *, + url: str = _Unset, + target_face: Optional[List[int]] = None, + detection_model: Optional[Union[str, _models.FaceDetectionModel]] = None, + user_data: Optional[str] = None, + **kwargs: Any + ) -> _models.AddFaceResult: + """Add a face to a specified Large Face List, up to 1,000,000 faces. + + Please refer to + https://learn.microsoft.com/rest/api/face/face-list-operations/add-large-face-list-face-from-url + for more details. + + :param large_face_list_id: Valid character is letter in lower case or digit or '-' or '_', + maximum length is 64. Required. + :type large_face_list_id: str + :param body: Is either a JSON type or a IO[bytes] type. Required. + :type body: JSON or IO[bytes] + :keyword url: URL of input image. Required. + :paramtype url: str + :keyword target_face: A face rectangle to specify the target face to be added to a person, in + the format of 'targetFace=left,top,width,height'. Default value is None. + :paramtype target_face: list[int] + :keyword detection_model: The 'detectionModel' associated with the detected faceIds. Supported + 'detectionModel' values include 'detection_01', 'detection_02' and 'detection_03'. The default + value is 'detection_01'. Known values are: "detection_01", "detection_02", and "detection_03". + Default value is None. + :paramtype detection_model: str or ~azure.ai.vision.face.models.FaceDetectionModel + :keyword user_data: User-provided data attached to the face. The size limit is 1K. Default + value is None. + :paramtype user_data: str + :return: AddFaceResult. 
The AddFaceResult is compatible with MutableMapping + :rtype: ~azure.ai.vision.face.models.AddFaceResult + :raises ~azure.core.exceptions.HttpResponseError: + """ + error_map: MutableMapping = { + 401: ClientAuthenticationError, + 404: ResourceNotFoundError, + 409: ResourceExistsError, + 304: ResourceNotModifiedError, + } + error_map.update(kwargs.pop("error_map", {}) or {}) + + _headers = case_insensitive_dict(kwargs.pop("headers", {}) or {}) + _params = kwargs.pop("params", {}) or {} + + content_type: Optional[str] = kwargs.pop("content_type", _headers.pop("Content-Type", None)) + cls: ClsType[_models.AddFaceResult] = kwargs.pop("cls", None) + + if body is _Unset: + if url is _Unset: + raise TypeError("missing required argument: url") + body = {"url": url} + body = {k: v for k, v in body.items() if v is not None} + content_type = content_type or "application/json" + _content = None + if isinstance(body, (IOBase, bytes)): + _content = body + else: + _content = json.dumps(body, cls=SdkJSONEncoder, exclude_readonly=True) # type: ignore + + _request = build_large_face_list_add_face_from_url_request( + large_face_list_id=large_face_list_id, + target_face=target_face, + detection_model=detection_model, + user_data=user_data, + content_type=content_type, + content=_content, + headers=_headers, + params=_params, + ) + path_format_arguments = { + "endpoint": self._serialize.url("self._config.endpoint", self._config.endpoint, "str", skip_quote=True), + "apiVersion": self._serialize.url("self._config.api_version", self._config.api_version, "str"), + } + _request.url = self._client.format_url(_request.url, **path_format_arguments) + + _stream = kwargs.pop("stream", False) + pipeline_response: PipelineResponse = await self._client._pipeline.run( # pylint: disable=protected-access + _request, stream=_stream, **kwargs + ) + + response = pipeline_response.http_response + + if response.status_code not in [200]: + if _stream: + try: + await response.read() # Load the body in memory and close the socket + except (StreamConsumedError, StreamClosedError): + pass + map_error(status_code=response.status_code, response=response, error_map=error_map) + error = _deserialize(_models.FaceErrorResponse, response.json()) + raise HttpResponseError(response=response, model=error) + + if _stream: + deserialized = response.iter_bytes() + else: + deserialized = _deserialize(_models.AddFaceResult, response.json()) + + if cls: + return cls(pipeline_response, deserialized, {}) # type: ignore + + return deserialized # type: ignore + + @distributed_trace_async + async def add_face( + self, + large_face_list_id: str, + image_content: bytes, + *, + target_face: Optional[List[int]] = None, + detection_model: Optional[Union[str, _models.FaceDetectionModel]] = None, + user_data: Optional[str] = None, + **kwargs: Any + ) -> _models.AddFaceResult: + """Add a face to a specified Large Face List, up to 1,000,000 faces. + + Please refer to + https://learn.microsoft.com/rest/api/face/face-list-operations/add-large-face-list-face for + more details. + + :param large_face_list_id: Valid character is letter in lower case or digit or '-' or '_', + maximum length is 64. Required. + :type large_face_list_id: str + :param image_content: The image to be analyzed. Required. + :type image_content: bytes + :keyword target_face: A face rectangle to specify the target face to be added to a person, in + the format of 'targetFace=left,top,width,height'. Default value is None. 
+        :paramtype target_face: list[int]
+        :keyword detection_model: The 'detectionModel' associated with the detected faceIds. Supported
+         'detectionModel' values include 'detection_01', 'detection_02' and 'detection_03'. The default
+         value is 'detection_01'. Known values are: "detection_01", "detection_02", and "detection_03".
+         Default value is None.
+        :paramtype detection_model: str or ~azure.ai.vision.face.models.FaceDetectionModel
+        :keyword user_data: User-provided data attached to the face. The size limit is 1K. Default
+         value is None.
+        :paramtype user_data: str
+        :return: AddFaceResult. The AddFaceResult is compatible with MutableMapping
+        :rtype: ~azure.ai.vision.face.models.AddFaceResult
+        :raises ~azure.core.exceptions.HttpResponseError:
+        """
+        error_map: MutableMapping = {
+            401: ClientAuthenticationError,
+            404: ResourceNotFoundError,
+            409: ResourceExistsError,
+            304: ResourceNotModifiedError,
+        }
+        error_map.update(kwargs.pop("error_map", {}) or {})
+
+        _headers = case_insensitive_dict(kwargs.pop("headers", {}) or {})
+        _params = kwargs.pop("params", {}) or {}
+
+        content_type: str = kwargs.pop("content_type", _headers.pop("content-type", "application/octet-stream"))
+        cls: ClsType[_models.AddFaceResult] = kwargs.pop("cls", None)
+
+        _content = image_content
+
+        _request = build_large_face_list_add_face_request(
+            large_face_list_id=large_face_list_id,
+            target_face=target_face,
+            detection_model=detection_model,
+            user_data=user_data,
+            content_type=content_type,
+            content=_content,
+            headers=_headers,
+            params=_params,
+        )
+        path_format_arguments = {
+            "endpoint": self._serialize.url("self._config.endpoint", self._config.endpoint, "str", skip_quote=True),
+            "apiVersion": self._serialize.url("self._config.api_version", self._config.api_version, "str"),
+        }
+        _request.url = self._client.format_url(_request.url, **path_format_arguments)
+
+        _stream = kwargs.pop("stream", False)
+        pipeline_response: PipelineResponse = await self._client._pipeline.run(  # pylint: disable=protected-access
+            _request, stream=_stream, **kwargs
+        )
+
+        response = pipeline_response.http_response
+
+        if response.status_code not in [200]:
+            if _stream:
+                try:
+                    await response.read()  # Load the body in memory and close the socket
+                except (StreamConsumedError, StreamClosedError):
+                    pass
+            map_error(status_code=response.status_code, response=response, error_map=error_map)
+            error = _deserialize(_models.FaceErrorResponse, response.json())
+            raise HttpResponseError(response=response, model=error)
+
+        if _stream:
+            deserialized = response.iter_bytes()
+        else:
+            deserialized = _deserialize(_models.AddFaceResult, response.json())
+
+        if cls:
+            return cls(pipeline_response, deserialized, {})  # type: ignore
+
+        return deserialized  # type: ignore
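A sketch of the URL-based variant with an explicit `target_face` rectangle (`client` is the placeholder `FaceAdministrationClient` from the earlier sketches; the URL and ids are illustrative):

```python
async def add_profile_photo(client):
    # Crop to an explicit (left, top, width, height) rectangle via target_face.
    result = await client.large_face_list.add_face_from_url(
        "my-face-list",
        url="https://example.com/photos/person1.jpg",
        target_face=[10, 10, 100, 100],
        detection_model="detection_03",
        user_data="person1-profile-photo",
    )
    return result.persisted_face_id
```

+
+    @distributed_trace_async
+    async def delete_face(self, large_face_list_id: str, persisted_face_id: str, **kwargs: Any) -> None:
+        """Delete a face from a Large Face List by specified largeFaceListId and persistedFaceId.
+
+        Please refer to
+        https://learn.microsoft.com/rest/api/face/face-list-operations/delete-large-face-list-face for
+        more details.
+
+        :param large_face_list_id: Valid character is letter in lower case or digit or '-' or '_',
+         maximum length is 64. Required.
+        :type large_face_list_id: str
+        :param persisted_face_id: Face ID of the face. Required.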
+ :type persisted_face_id: str + :return: None + :rtype: None + :raises ~azure.core.exceptions.HttpResponseError: + """ + error_map: MutableMapping = { + 401: ClientAuthenticationError, + 404: ResourceNotFoundError, + 409: ResourceExistsError, + 304: ResourceNotModifiedError, + } + error_map.update(kwargs.pop("error_map", {}) or {}) + + _headers = kwargs.pop("headers", {}) or {} + _params = kwargs.pop("params", {}) or {} + + cls: ClsType[None] = kwargs.pop("cls", None) + + _request = build_large_face_list_delete_face_request( + large_face_list_id=large_face_list_id, + persisted_face_id=persisted_face_id, + headers=_headers, + params=_params, + ) + path_format_arguments = { + "endpoint": self._serialize.url("self._config.endpoint", self._config.endpoint, "str", skip_quote=True), + "apiVersion": self._serialize.url("self._config.api_version", self._config.api_version, "str"), + } + _request.url = self._client.format_url(_request.url, **path_format_arguments) + + _stream = False + pipeline_response: PipelineResponse = await self._client._pipeline.run( # pylint: disable=protected-access + _request, stream=_stream, **kwargs + ) + + response = pipeline_response.http_response + + if response.status_code not in [200]: + map_error(status_code=response.status_code, response=response, error_map=error_map) + error = _deserialize(_models.FaceErrorResponse, response.json()) + raise HttpResponseError(response=response, model=error) + + if cls: + return cls(pipeline_response, None, {}) # type: ignore + + @distributed_trace_async + async def get_face( + self, large_face_list_id: str, persisted_face_id: str, **kwargs: Any + ) -> _models.LargeFaceListFace: + """Please refer to + https://learn.microsoft.com/rest/api/face/face-list-operations/get-large-face-list-face for + more details. + + :param large_face_list_id: Valid character is letter in lower case or digit or '-' or '_', + maximum length is 64. Required. + :type large_face_list_id: str + :param persisted_face_id: Face ID of the face. Required. + :type persisted_face_id: str + :return: LargeFaceListFace. 
The LargeFaceListFace is compatible with MutableMapping + :rtype: ~azure.ai.vision.face.models.LargeFaceListFace + :raises ~azure.core.exceptions.HttpResponseError: + """ + error_map: MutableMapping = { + 401: ClientAuthenticationError, + 404: ResourceNotFoundError, + 409: ResourceExistsError, + 304: ResourceNotModifiedError, + } + error_map.update(kwargs.pop("error_map", {}) or {}) + + _headers = kwargs.pop("headers", {}) or {} + _params = kwargs.pop("params", {}) or {} + + cls: ClsType[_models.LargeFaceListFace] = kwargs.pop("cls", None) + + _request = build_large_face_list_get_face_request( + large_face_list_id=large_face_list_id, + persisted_face_id=persisted_face_id, + headers=_headers, + params=_params, + ) + path_format_arguments = { + "endpoint": self._serialize.url("self._config.endpoint", self._config.endpoint, "str", skip_quote=True), + "apiVersion": self._serialize.url("self._config.api_version", self._config.api_version, "str"), + } + _request.url = self._client.format_url(_request.url, **path_format_arguments) + + _stream = kwargs.pop("stream", False) + pipeline_response: PipelineResponse = await self._client._pipeline.run( # pylint: disable=protected-access + _request, stream=_stream, **kwargs + ) + + response = pipeline_response.http_response + + if response.status_code not in [200]: + if _stream: + try: + await response.read() # Load the body in memory and close the socket + except (StreamConsumedError, StreamClosedError): + pass + map_error(status_code=response.status_code, response=response, error_map=error_map) + error = _deserialize(_models.FaceErrorResponse, response.json()) + raise HttpResponseError(response=response, model=error) + + if _stream: + deserialized = response.iter_bytes() + else: + deserialized = _deserialize(_models.LargeFaceListFace, response.json()) + + if cls: + return cls(pipeline_response, deserialized, {}) # type: ignore + + return deserialized # type: ignore + + @overload + async def update_face( + self, + large_face_list_id: str, + persisted_face_id: str, + body: JSON, + *, + content_type: str = "application/json", + **kwargs: Any + ) -> None: + """Please refer to + https://learn.microsoft.com/rest/api/face/face-list-operations/update-large-face-list-face for + more details. + + :param large_face_list_id: Valid character is letter in lower case or digit or '-' or '_', + maximum length is 64. Required. + :type large_face_list_id: str + :param persisted_face_id: Face ID of the face. Required. + :type persisted_face_id: str + :param body: Required. + :type body: JSON + :keyword content_type: Body Parameter content-type. Content type parameter for JSON body. + Default value is "application/json". + :paramtype content_type: str + :return: None + :rtype: None + :raises ~azure.core.exceptions.HttpResponseError: + """ + + @overload + async def update_face( + self, + large_face_list_id: str, + persisted_face_id: str, + *, + content_type: str = "application/json", + user_data: Optional[str] = None, + **kwargs: Any + ) -> None: + """Please refer to + https://learn.microsoft.com/rest/api/face/face-list-operations/update-large-face-list-face for + more details. + + :param large_face_list_id: Valid character is letter in lower case or digit or '-' or '_', + maximum length is 64. Required. + :type large_face_list_id: str + :param persisted_face_id: Face ID of the face. Required. + :type persisted_face_id: str + :keyword content_type: Body Parameter content-type. Content type parameter for JSON body. + Default value is "application/json". 
+ :paramtype content_type: str + :keyword user_data: User-provided data attached to the face. The length limit is 1K. Default + value is None. + :paramtype user_data: str + :return: None + :rtype: None + :raises ~azure.core.exceptions.HttpResponseError: + """ + + @overload + async def update_face( + self, + large_face_list_id: str, + persisted_face_id: str, + body: IO[bytes], + *, + content_type: str = "application/json", + **kwargs: Any + ) -> None: + """Please refer to + https://learn.microsoft.com/rest/api/face/face-list-operations/update-large-face-list-face for + more details. + + :param large_face_list_id: Valid character is letter in lower case or digit or '-' or '_', + maximum length is 64. Required. + :type large_face_list_id: str + :param persisted_face_id: Face ID of the face. Required. + :type persisted_face_id: str + :param body: Required. + :type body: IO[bytes] + :keyword content_type: Body Parameter content-type. Content type parameter for binary body. + Default value is "application/json". + :paramtype content_type: str + :return: None + :rtype: None + :raises ~azure.core.exceptions.HttpResponseError: + """ + + @distributed_trace_async + async def update_face( + self, + large_face_list_id: str, + persisted_face_id: str, + body: Union[JSON, IO[bytes]] = _Unset, + *, + user_data: Optional[str] = None, + **kwargs: Any + ) -> None: + """Please refer to + https://learn.microsoft.com/rest/api/face/face-list-operations/update-large-face-list-face for + more details. + + :param large_face_list_id: Valid character is letter in lower case or digit or '-' or '_', + maximum length is 64. Required. + :type large_face_list_id: str + :param persisted_face_id: Face ID of the face. Required. + :type persisted_face_id: str + :param body: Is either a JSON type or a IO[bytes] type. Required. + :type body: JSON or IO[bytes] + :keyword user_data: User-provided data attached to the face. The length limit is 1K. Default + value is None. 
+ :paramtype user_data: str + :return: None + :rtype: None + :raises ~azure.core.exceptions.HttpResponseError: + """ + error_map: MutableMapping = { + 401: ClientAuthenticationError, + 404: ResourceNotFoundError, + 409: ResourceExistsError, + 304: ResourceNotModifiedError, + } + error_map.update(kwargs.pop("error_map", {}) or {}) + + _headers = case_insensitive_dict(kwargs.pop("headers", {}) or {}) + _params = kwargs.pop("params", {}) or {} + + content_type: Optional[str] = kwargs.pop("content_type", _headers.pop("Content-Type", None)) + cls: ClsType[None] = kwargs.pop("cls", None) + + if body is _Unset: + body = {"userData": user_data} + body = {k: v for k, v in body.items() if v is not None} + content_type = content_type or "application/json" + _content = None + if isinstance(body, (IOBase, bytes)): + _content = body + else: + _content = json.dumps(body, cls=SdkJSONEncoder, exclude_readonly=True) # type: ignore + + _request = build_large_face_list_update_face_request( + large_face_list_id=large_face_list_id, + persisted_face_id=persisted_face_id, + content_type=content_type, + content=_content, + headers=_headers, + params=_params, + ) + path_format_arguments = { + "endpoint": self._serialize.url("self._config.endpoint", self._config.endpoint, "str", skip_quote=True), + "apiVersion": self._serialize.url("self._config.api_version", self._config.api_version, "str"), + } + _request.url = self._client.format_url(_request.url, **path_format_arguments) + + _stream = False + pipeline_response: PipelineResponse = await self._client._pipeline.run( # pylint: disable=protected-access + _request, stream=_stream, **kwargs + ) + + response = pipeline_response.http_response + + if response.status_code not in [200]: + map_error(status_code=response.status_code, response=response, error_map=error_map) + error = _deserialize(_models.FaceErrorResponse, response.json()) + raise HttpResponseError(response=response, model=error) + + if cls: + return cls(pipeline_response, None, {}) # type: ignore + + @distributed_trace_async + async def get_faces( + self, large_face_list_id: str, *, start: Optional[str] = None, top: Optional[int] = None, **kwargs: Any + ) -> List[_models.LargeFaceListFace]: + """List faces' persistedFaceId and userData in a specified Large Face List. + + Please refer to + https://learn.microsoft.com/rest/api/face/face-list-operations/get-large-face-list-faces for + more details. + + :param large_face_list_id: Valid character is letter in lower case or digit or '-' or '_', + maximum length is 64. Required. + :type large_face_list_id: str + :keyword start: List resources greater than the "start". It contains no more than 64 + characters. Default is empty. Default value is None. + :paramtype start: str + :keyword top: The number of items to list, ranging in [1, 1000]. Default is 1000. Default value + is None. 
+        :paramtype top: int
+        :return: list of LargeFaceListFace
+        :rtype: list[~azure.ai.vision.face.models.LargeFaceListFace]
+        :raises ~azure.core.exceptions.HttpResponseError:
+        """
+        error_map: MutableMapping = {
+            401: ClientAuthenticationError,
+            404: ResourceNotFoundError,
+            409: ResourceExistsError,
+            304: ResourceNotModifiedError,
+        }
+        error_map.update(kwargs.pop("error_map", {}) or {})
+
+        _headers = kwargs.pop("headers", {}) or {}
+        _params = kwargs.pop("params", {}) or {}
+
+        cls: ClsType[List[_models.LargeFaceListFace]] = kwargs.pop("cls", None)
+
+        _request = build_large_face_list_get_faces_request(
+            large_face_list_id=large_face_list_id,
+            start=start,
+            top=top,
+            headers=_headers,
+            params=_params,
+        )
+        path_format_arguments = {
+            "endpoint": self._serialize.url("self._config.endpoint", self._config.endpoint, "str", skip_quote=True),
+            "apiVersion": self._serialize.url("self._config.api_version", self._config.api_version, "str"),
+        }
+        _request.url = self._client.format_url(_request.url, **path_format_arguments)
+
+        _stream = kwargs.pop("stream", False)
+        pipeline_response: PipelineResponse = await self._client._pipeline.run(  # pylint: disable=protected-access
+            _request, stream=_stream, **kwargs
+        )
+
+        response = pipeline_response.http_response
+
+        if response.status_code not in [200]:
+            if _stream:
+                try:
+                    await response.read()  # Load the body in memory and close the socket
+                except (StreamConsumedError, StreamClosedError):
+                    pass
+            map_error(status_code=response.status_code, response=response, error_map=error_map)
+            error = _deserialize(_models.FaceErrorResponse, response.json())
+            raise HttpResponseError(response=response, model=error)
+
+        if _stream:
+            deserialized = response.iter_bytes()
+        else:
+            deserialized = _deserialize(List[_models.LargeFaceListFace], response.json())
+
+        if cls:
+            return cls(pipeline_response, deserialized, {})  # type: ignore
+
+        return deserialized  # type: ignore
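This completes `LargeFaceListOperations`. The file's second operation group, `LargePersonGroupOperations`, follows below; a sketch of the end-to-end identification flow it supports, mirroring the README walkthrough (ids, names, and the image path are placeholders, and it is assumed that `create_person` returns the new person's id as `person_id`, per the generated models):

```python
import asyncio

from azure.core.credentials import AzureKeyCredential
from azure.ai.vision.face.aio import FaceAdministrationClient


async def main():
    endpoint, key = "<your-endpoint>", "<your-key>"  # placeholders
    async with FaceAdministrationClient(endpoint=endpoint, credential=AzureKeyCredential(key)) as client:
        group = client.large_person_group
        await group.create("my-group", name="My person group")
        person = await group.create_person("my-group", name="Bill")  # person_id assumed on the result
        with open("bill-01.jpg", "rb") as fd:
            await group.add_face("my-group", person.person_id, fd.read())

        # identify_from_large_person_group requires a trained group.
        poller = await group.begin_train("my-group")
        await poller.result()


asyncio.run(main())
```

+
+
+class LargePersonGroupOperations:
+    """
+    .. warning::
+        **DO NOT** instantiate this class directly.
+
+    Instead, you should access the following operations through
+    :class:`~azure.ai.vision.face.aio.FaceAdministrationClient`'s
+    :attr:`large_person_group` attribute.
+    """
+
+    def __init__(self, *args, **kwargs) -> None:
+        input_args = list(args)
+        self._client = input_args.pop(0) if input_args else kwargs.pop("client")
+        self._config = input_args.pop(0) if input_args else kwargs.pop("config")
+        self._serialize = input_args.pop(0) if input_args else kwargs.pop("serializer")
+        self._deserialize = input_args.pop(0) if input_args else kwargs.pop("deserializer")
+
+    @overload
+    async def create(
+        self, large_person_group_id: str, body: JSON, *, content_type: str = "application/json", **kwargs: Any
+    ) -> None:
+        """Create a new Large Person Group with user-specified largePersonGroupId, name, an optional
+        userData and recognitionModel.
+
+        Please refer to
+        https://learn.microsoft.com/rest/api/face/person-group-operations/create-large-person-group for
+        more details.
+
+        :param large_person_group_id: ID of the container. Required.
+        :type large_person_group_id: str
+        :param body: Required.
+        :type body: JSON
+        :keyword content_type: Body Parameter content-type. Content type parameter for JSON body.
+         Default value is "application/json".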
+        :paramtype content_type: str
+        :return: None
+        :rtype: None
+        :raises ~azure.core.exceptions.HttpResponseError:
+        """
+
+    @overload
+    async def create(
+        self,
+        large_person_group_id: str,
+        *,
+        name: str,
+        content_type: str = "application/json",
+        user_data: Optional[str] = None,
+        recognition_model: Optional[Union[str, _models.FaceRecognitionModel]] = None,
+        **kwargs: Any
+    ) -> None:
+        """Create a new Large Person Group with user-specified largePersonGroupId, name, an optional
+        userData and recognitionModel.
+
+        Please refer to
+        https://learn.microsoft.com/rest/api/face/person-group-operations/create-large-person-group for
+        more details.
+
+        :param large_person_group_id: ID of the container. Required.
+        :type large_person_group_id: str
+        :keyword name: User defined name, maximum length is 128. Required.
+        :paramtype name: str
+        :keyword content_type: Body Parameter content-type. Content type parameter for JSON body.
+         Default value is "application/json".
+        :paramtype content_type: str
+        :keyword user_data: Optional user defined data. Length should not exceed 16K. Default value is
+         None.
+        :paramtype user_data: str
+        :keyword recognition_model: The 'recognitionModel' associated with this Large Person Group.
+         Supported 'recognitionModel' values include 'recognition_01', 'recognition_02',
+         'recognition_03', and 'recognition_04'. The default value is 'recognition_01'.
+         'recognition_04' is recommended since its accuracy is improved on faces wearing masks
+         compared with 'recognition_03', and its overall accuracy is improved compared with
+         'recognition_01' and 'recognition_02'. Known values are: "recognition_01",
+         "recognition_02", "recognition_03", and "recognition_04". Default value is None.
+        :paramtype recognition_model: str or ~azure.ai.vision.face.models.FaceRecognitionModel
+        :return: None
+        :rtype: None
+        :raises ~azure.core.exceptions.HttpResponseError:
+        """
+
+    @overload
+    async def create(
+        self, large_person_group_id: str, body: IO[bytes], *, content_type: str = "application/json", **kwargs: Any
+    ) -> None:
+        """Create a new Large Person Group with user-specified largePersonGroupId, name, an optional
+        userData and recognitionModel.
+
+        Please refer to
+        https://learn.microsoft.com/rest/api/face/person-group-operations/create-large-person-group for
+        more details.
+
+        :param large_person_group_id: ID of the container. Required.
+        :type large_person_group_id: str
+        :param body: Required.
+        :type body: IO[bytes]
+        :keyword content_type: Body Parameter content-type. Content type parameter for binary body.
+         Default value is "application/json".
+        :paramtype content_type: str
+        :return: None
+        :rtype: None
+        :raises ~azure.core.exceptions.HttpResponseError:
+        """
+
+    @distributed_trace_async
+    async def create(
+        self,
+        large_person_group_id: str,
+        body: Union[JSON, IO[bytes]] = _Unset,
+        *,
+        name: str = _Unset,
+        user_data: Optional[str] = None,
+        recognition_model: Optional[Union[str, _models.FaceRecognitionModel]] = None,
+        **kwargs: Any
+    ) -> None:
+        """Create a new Large Person Group with user-specified largePersonGroupId, name, an optional
+        userData and recognitionModel.
+
+        Please refer to
+        https://learn.microsoft.com/rest/api/face/person-group-operations/create-large-person-group for
+        more details.
+
+        :param large_person_group_id: ID of the container. Required.
+        :type large_person_group_id: str
+        :param body: Is either a JSON type or a IO[bytes] type. Required.
+        :type body: JSON or IO[bytes]
+        :keyword name: User defined name, maximum length is 128. Required.
+        :paramtype name: str
+        :keyword user_data: Optional user defined data. Length should not exceed 16K. Default value is
+         None.
+        :paramtype user_data: str
+        :keyword recognition_model: The 'recognitionModel' associated with this Large Person Group.
+         Supported 'recognitionModel' values include 'recognition_01', 'recognition_02',
+         'recognition_03', and 'recognition_04'. The default value is 'recognition_01'.
+         'recognition_04' is recommended since its accuracy is improved on faces wearing masks
+         compared with 'recognition_03', and its overall accuracy is improved compared with
+         'recognition_01' and 'recognition_02'. Known values are: "recognition_01",
+         "recognition_02", "recognition_03", and "recognition_04". Default value is None.
+        :paramtype recognition_model: str or ~azure.ai.vision.face.models.FaceRecognitionModel
+        :return: None
+        :rtype: None
+        :raises ~azure.core.exceptions.HttpResponseError:
+        """
+        error_map: MutableMapping = {
+            401: ClientAuthenticationError,
+            404: ResourceNotFoundError,
+            409: ResourceExistsError,
+            304: ResourceNotModifiedError,
+        }
+        error_map.update(kwargs.pop("error_map", {}) or {})
+
+        _headers = case_insensitive_dict(kwargs.pop("headers", {}) or {})
+        _params = kwargs.pop("params", {}) or {}
+
+        content_type: Optional[str] = kwargs.pop("content_type", _headers.pop("Content-Type", None))
+        cls: ClsType[None] = kwargs.pop("cls", None)
+
+        if body is _Unset:
+            if name is _Unset:
+                raise TypeError("missing required argument: name")
+            body = {"name": name, "recognitionModel": recognition_model, "userData": user_data}
+            body = {k: v for k, v in body.items() if v is not None}
+        content_type = content_type or "application/json"
+        _content = None
+        if isinstance(body, (IOBase, bytes)):
+            _content = body
+        else:
+            _content = json.dumps(body, cls=SdkJSONEncoder, exclude_readonly=True)  # type: ignore
+
+        _request = build_large_person_group_create_request(
+            large_person_group_id=large_person_group_id,
+            content_type=content_type,
+            content=_content,
+            headers=_headers,
+            params=_params,
+        )
+        path_format_arguments = {
+            "endpoint": self._serialize.url("self._config.endpoint", self._config.endpoint, "str", skip_quote=True),
+            "apiVersion": self._serialize.url("self._config.api_version", self._config.api_version, "str"),
+        }
+        _request.url = self._client.format_url(_request.url, **path_format_arguments)
+
+        _stream = False
+        pipeline_response: PipelineResponse = await self._client._pipeline.run(  # pylint: disable=protected-access
+            _request, stream=_stream, **kwargs
+        )
+
+        response = pipeline_response.http_response
+
+        if response.status_code not in [200]:
+            map_error(status_code=response.status_code, response=response, error_map=error_map)
+            error = _deserialize(_models.FaceErrorResponse, response.json())
+            raise HttpResponseError(response=response, model=error)
+
+        if cls:
+            return cls(pipeline_response, None, {})  # type: ignore
+
+    @distributed_trace_async
+    async def delete(self, large_person_group_id: str, **kwargs: Any) -> None:
+        """Please refer to
+        https://learn.microsoft.com/rest/api/face/person-group-operations/delete-large-person-group for
+        more details.
+
+        :param large_person_group_id: ID of the container. Required.
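A minimal sketch of calling the `create` operation implemented above, reusing an open `FaceAdministrationClient` like the `admin_client` in the earlier snippet (the group ID and metadata are placeholders):

```python
from azure.ai.vision.face.models import FaceRecognitionModel

# Create the container; recognition_model is optional and the service
# defaults to 'recognition_01' when it is omitted.
await admin_client.large_person_group.create(
    "my-group",
    name="My Group",
    user_data="example group for the docs",
    recognition_model=FaceRecognitionModel.RECOGNITION04,
)
```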
+ :type large_person_group_id: str + :return: None + :rtype: None + :raises ~azure.core.exceptions.HttpResponseError: + """ + error_map: MutableMapping = { + 401: ClientAuthenticationError, + 404: ResourceNotFoundError, + 409: ResourceExistsError, + 304: ResourceNotModifiedError, + } + error_map.update(kwargs.pop("error_map", {}) or {}) + + _headers = kwargs.pop("headers", {}) or {} + _params = kwargs.pop("params", {}) or {} + + cls: ClsType[None] = kwargs.pop("cls", None) + + _request = build_large_person_group_delete_request( + large_person_group_id=large_person_group_id, + headers=_headers, + params=_params, + ) + path_format_arguments = { + "endpoint": self._serialize.url("self._config.endpoint", self._config.endpoint, "str", skip_quote=True), + "apiVersion": self._serialize.url("self._config.api_version", self._config.api_version, "str"), + } + _request.url = self._client.format_url(_request.url, **path_format_arguments) + + _stream = False + pipeline_response: PipelineResponse = await self._client._pipeline.run( # pylint: disable=protected-access + _request, stream=_stream, **kwargs + ) + + response = pipeline_response.http_response + + if response.status_code not in [200]: + map_error(status_code=response.status_code, response=response, error_map=error_map) + error = _deserialize(_models.FaceErrorResponse, response.json()) + raise HttpResponseError(response=response, model=error) + + if cls: + return cls(pipeline_response, None, {}) # type: ignore + + @distributed_trace_async + async def get( + self, large_person_group_id: str, *, return_recognition_model: Optional[bool] = None, **kwargs: Any + ) -> _models.LargePersonGroup: + """Please refer to + https://learn.microsoft.com/rest/api/face/person-group-operations/get-large-person-group for + more details. + + :param large_person_group_id: ID of the container. Required. + :type large_person_group_id: str + :keyword return_recognition_model: Return 'recognitionModel' or not. The default value is + false. Default value is None. + :paramtype return_recognition_model: bool + :return: LargePersonGroup. 
The LargePersonGroup is compatible with MutableMapping + :rtype: ~azure.ai.vision.face.models.LargePersonGroup + :raises ~azure.core.exceptions.HttpResponseError: + """ + error_map: MutableMapping = { + 401: ClientAuthenticationError, + 404: ResourceNotFoundError, + 409: ResourceExistsError, + 304: ResourceNotModifiedError, + } + error_map.update(kwargs.pop("error_map", {}) or {}) + + _headers = kwargs.pop("headers", {}) or {} + _params = kwargs.pop("params", {}) or {} + + cls: ClsType[_models.LargePersonGroup] = kwargs.pop("cls", None) + + _request = build_large_person_group_get_request( + large_person_group_id=large_person_group_id, + return_recognition_model=return_recognition_model, + headers=_headers, + params=_params, + ) + path_format_arguments = { + "endpoint": self._serialize.url("self._config.endpoint", self._config.endpoint, "str", skip_quote=True), + "apiVersion": self._serialize.url("self._config.api_version", self._config.api_version, "str"), + } + _request.url = self._client.format_url(_request.url, **path_format_arguments) + + _stream = kwargs.pop("stream", False) + pipeline_response: PipelineResponse = await self._client._pipeline.run( # pylint: disable=protected-access + _request, stream=_stream, **kwargs + ) + + response = pipeline_response.http_response + + if response.status_code not in [200]: + if _stream: + try: + await response.read() # Load the body in memory and close the socket + except (StreamConsumedError, StreamClosedError): + pass + map_error(status_code=response.status_code, response=response, error_map=error_map) + error = _deserialize(_models.FaceErrorResponse, response.json()) + raise HttpResponseError(response=response, model=error) + + if _stream: + deserialized = response.iter_bytes() + else: + deserialized = _deserialize(_models.LargePersonGroup, response.json()) + + if cls: + return cls(pipeline_response, deserialized, {}) # type: ignore + + return deserialized # type: ignore + + @overload + async def update( + self, large_person_group_id: str, body: JSON, *, content_type: str = "application/json", **kwargs: Any + ) -> None: + """Please refer to + https://learn.microsoft.com/rest/api/face/person-group-operations/update-large-person-group for + more details. + + :param large_person_group_id: ID of the container. Required. + :type large_person_group_id: str + :param body: Required. + :type body: JSON + :keyword content_type: Body Parameter content-type. Content type parameter for JSON body. + Default value is "application/json". + :paramtype content_type: str + :return: None + :rtype: None + :raises ~azure.core.exceptions.HttpResponseError: + """ + + @overload + async def update( + self, + large_person_group_id: str, + *, + content_type: str = "application/json", + name: Optional[str] = None, + user_data: Optional[str] = None, + **kwargs: Any + ) -> None: + """Please refer to + https://learn.microsoft.com/rest/api/face/person-group-operations/update-large-person-group for + more details. + + :param large_person_group_id: ID of the container. Required. + :type large_person_group_id: str + :keyword content_type: Body Parameter content-type. Content type parameter for JSON body. + Default value is "application/json". + :paramtype content_type: str + :keyword name: User defined name, maximum length is 128. Default value is None. + :paramtype name: str + :keyword user_data: Optional user defined data. Length should not exceed 16K. Default value is + None. 
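Continuing the sketch, the group can be read back with the `get` operation above (this assumes the returned `LargePersonGroup` model exposes snake_case `name` and `recognition_model` attributes mirroring the REST payload):

```python
group = await admin_client.large_person_group.get(
    "my-group", return_recognition_model=True
)
print(group.name, group.recognition_model)  # e.g. "My Group recognition_04"
```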
+ :paramtype user_data: str + :return: None + :rtype: None + :raises ~azure.core.exceptions.HttpResponseError: + """ + + @overload + async def update( + self, large_person_group_id: str, body: IO[bytes], *, content_type: str = "application/json", **kwargs: Any + ) -> None: + """Please refer to + https://learn.microsoft.com/rest/api/face/person-group-operations/update-large-person-group for + more details. + + :param large_person_group_id: ID of the container. Required. + :type large_person_group_id: str + :param body: Required. + :type body: IO[bytes] + :keyword content_type: Body Parameter content-type. Content type parameter for binary body. + Default value is "application/json". + :paramtype content_type: str + :return: None + :rtype: None + :raises ~azure.core.exceptions.HttpResponseError: + """ + + @distributed_trace_async + async def update( + self, + large_person_group_id: str, + body: Union[JSON, IO[bytes]] = _Unset, + *, + name: Optional[str] = None, + user_data: Optional[str] = None, + **kwargs: Any + ) -> None: + """Please refer to + https://learn.microsoft.com/rest/api/face/person-group-operations/update-large-person-group for + more details. + + :param large_person_group_id: ID of the container. Required. + :type large_person_group_id: str + :param body: Is either a JSON type or a IO[bytes] type. Required. + :type body: JSON or IO[bytes] + :keyword name: User defined name, maximum length is 128. Default value is None. + :paramtype name: str + :keyword user_data: Optional user defined data. Length should not exceed 16K. Default value is + None. + :paramtype user_data: str + :return: None + :rtype: None + :raises ~azure.core.exceptions.HttpResponseError: + """ + error_map: MutableMapping = { + 401: ClientAuthenticationError, + 404: ResourceNotFoundError, + 409: ResourceExistsError, + 304: ResourceNotModifiedError, + } + error_map.update(kwargs.pop("error_map", {}) or {}) + + _headers = case_insensitive_dict(kwargs.pop("headers", {}) or {}) + _params = kwargs.pop("params", {}) or {} + + content_type: Optional[str] = kwargs.pop("content_type", _headers.pop("Content-Type", None)) + cls: ClsType[None] = kwargs.pop("cls", None) + + if body is _Unset: + body = {"name": name, "userData": user_data} + body = {k: v for k, v in body.items() if v is not None} + content_type = content_type or "application/json" + _content = None + if isinstance(body, (IOBase, bytes)): + _content = body + else: + _content = json.dumps(body, cls=SdkJSONEncoder, exclude_readonly=True) # type: ignore + + _request = build_large_person_group_update_request( + large_person_group_id=large_person_group_id, + content_type=content_type, + content=_content, + headers=_headers, + params=_params, + ) + path_format_arguments = { + "endpoint": self._serialize.url("self._config.endpoint", self._config.endpoint, "str", skip_quote=True), + "apiVersion": self._serialize.url("self._config.api_version", self._config.api_version, "str"), + } + _request.url = self._client.format_url(_request.url, **path_format_arguments) + + _stream = False + pipeline_response: PipelineResponse = await self._client._pipeline.run( # pylint: disable=protected-access + _request, stream=_stream, **kwargs + ) + + response = pipeline_response.http_response + + if response.status_code not in [200]: + map_error(status_code=response.status_code, response=response, error_map=error_map) + error = _deserialize(_models.FaceErrorResponse, response.json()) + raise HttpResponseError(response=response, model=error) + + if cls: + return cls(pipeline_response, None, 
{})  # type: ignore
+
+    @distributed_trace_async
+    async def get_large_person_groups(
+        self,
+        *,
+        start: Optional[str] = None,
+        top: Optional[int] = None,
+        return_recognition_model: Optional[bool] = None,
+        **kwargs: Any
+    ) -> List[_models.LargePersonGroup]:
+        """List all existing Large Person Groups' largePersonGroupId, name, userData and recognitionModel.
+
+        Please refer to
+        https://learn.microsoft.com/rest/api/face/person-group-operations/get-large-person-groups for
+        more details.
+
+        :keyword start: List resources greater than the "start". It contains no more than 64
+         characters. Default is empty. Default value is None.
+        :paramtype start: str
+        :keyword top: The number of items to list, ranging in [1, 1000]. Default is 1000. Default value
+         is None.
+        :paramtype top: int
+        :keyword return_recognition_model: Return 'recognitionModel' or not. The default value is
+         false. Default value is None.
+        :paramtype return_recognition_model: bool
+        :return: list of LargePersonGroup
+        :rtype: list[~azure.ai.vision.face.models.LargePersonGroup]
+        :raises ~azure.core.exceptions.HttpResponseError:
+        """
+        error_map: MutableMapping = {
+            401: ClientAuthenticationError,
+            404: ResourceNotFoundError,
+            409: ResourceExistsError,
+            304: ResourceNotModifiedError,
+        }
+        error_map.update(kwargs.pop("error_map", {}) or {})
+
+        _headers = kwargs.pop("headers", {}) or {}
+        _params = kwargs.pop("params", {}) or {}
+
+        cls: ClsType[List[_models.LargePersonGroup]] = kwargs.pop("cls", None)
+
+        _request = build_large_person_group_get_large_person_groups_request(
+            start=start,
+            top=top,
+            return_recognition_model=return_recognition_model,
+            headers=_headers,
+            params=_params,
+        )
+        path_format_arguments = {
+            "endpoint": self._serialize.url("self._config.endpoint", self._config.endpoint, "str", skip_quote=True),
+            "apiVersion": self._serialize.url("self._config.api_version", self._config.api_version, "str"),
+        }
+        _request.url = self._client.format_url(_request.url, **path_format_arguments)
+
+        _stream = kwargs.pop("stream", False)
+        pipeline_response: PipelineResponse = await self._client._pipeline.run(  # pylint: disable=protected-access
+            _request, stream=_stream, **kwargs
+        )
+
+        response = pipeline_response.http_response
+
+        if response.status_code not in [200]:
+            if _stream:
+                try:
+                    await response.read()  # Load the body in memory and close the socket
+                except (StreamConsumedError, StreamClosedError):
+                    pass
+            map_error(status_code=response.status_code, response=response, error_map=error_map)
+            error = _deserialize(_models.FaceErrorResponse, response.json())
+            raise HttpResponseError(response=response, model=error)
+
+        if _stream:
+            deserialized = response.iter_bytes()
+        else:
+            deserialized = _deserialize(List[_models.LargePersonGroup], response.json())
+
+        if cls:
+            return cls(pipeline_response, deserialized, {})  # type: ignore
+
+        return deserialized  # type: ignore
+
+    @distributed_trace_async
+    async def get_training_status(self, large_person_group_id: str, **kwargs: Any) -> _models.FaceTrainingResult:
+        """Check whether the Large Person Group training is completed or still ongoing. Large Person
+        Group training is an asynchronous operation triggered by the "Train Large Person Group" API.
+
+        Please refer to
+        https://learn.microsoft.com/rest/api/face/person-group-operations/get-large-person-group-training-status
+        for more details.
+
+        :param large_person_group_id: ID of the container. Required.
+        :type large_person_group_id: str
+        :return: FaceTrainingResult.
The FaceTrainingResult is compatible with MutableMapping + :rtype: ~azure.ai.vision.face.models.FaceTrainingResult + :raises ~azure.core.exceptions.HttpResponseError: + """ + error_map: MutableMapping = { + 401: ClientAuthenticationError, + 404: ResourceNotFoundError, + 409: ResourceExistsError, + 304: ResourceNotModifiedError, + } + error_map.update(kwargs.pop("error_map", {}) or {}) + + _headers = kwargs.pop("headers", {}) or {} + _params = kwargs.pop("params", {}) or {} + + cls: ClsType[_models.FaceTrainingResult] = kwargs.pop("cls", None) + + _request = build_large_person_group_get_training_status_request( + large_person_group_id=large_person_group_id, + headers=_headers, + params=_params, + ) + path_format_arguments = { + "endpoint": self._serialize.url("self._config.endpoint", self._config.endpoint, "str", skip_quote=True), + "apiVersion": self._serialize.url("self._config.api_version", self._config.api_version, "str"), + } + _request.url = self._client.format_url(_request.url, **path_format_arguments) + + _stream = kwargs.pop("stream", False) + pipeline_response: PipelineResponse = await self._client._pipeline.run( # pylint: disable=protected-access + _request, stream=_stream, **kwargs + ) + + response = pipeline_response.http_response + + if response.status_code not in [200]: + if _stream: + try: + await response.read() # Load the body in memory and close the socket + except (StreamConsumedError, StreamClosedError): + pass + map_error(status_code=response.status_code, response=response, error_map=error_map) + error = _deserialize(_models.FaceErrorResponse, response.json()) + raise HttpResponseError(response=response, model=error) + + if _stream: + deserialized = response.iter_bytes() + else: + deserialized = _deserialize(_models.FaceTrainingResult, response.json()) + + if cls: + return cls(pipeline_response, deserialized, {}) # type: ignore + + return deserialized # type: ignore + + async def _train_initial(self, large_person_group_id: str, **kwargs: Any) -> AsyncIterator[bytes]: + error_map: MutableMapping = { + 401: ClientAuthenticationError, + 404: ResourceNotFoundError, + 409: ResourceExistsError, + 304: ResourceNotModifiedError, + } + error_map.update(kwargs.pop("error_map", {}) or {}) + + _headers = kwargs.pop("headers", {}) or {} + _params = kwargs.pop("params", {}) or {} + + cls: ClsType[AsyncIterator[bytes]] = kwargs.pop("cls", None) + + _request = build_large_person_group_train_request( + large_person_group_id=large_person_group_id, + headers=_headers, + params=_params, + ) + path_format_arguments = { + "endpoint": self._serialize.url("self._config.endpoint", self._config.endpoint, "str", skip_quote=True), + "apiVersion": self._serialize.url("self._config.api_version", self._config.api_version, "str"), + } + _request.url = self._client.format_url(_request.url, **path_format_arguments) + + _stream = True + pipeline_response: PipelineResponse = await self._client._pipeline.run( # pylint: disable=protected-access + _request, stream=_stream, **kwargs + ) + + response = pipeline_response.http_response + + if response.status_code not in [202]: + try: + await response.read() # Load the body in memory and close the socket + except (StreamConsumedError, StreamClosedError): + pass + map_error(status_code=response.status_code, response=response, error_map=error_map) + error = _deserialize(_models.FaceErrorResponse, response.json()) + raise HttpResponseError(response=response, model=error) + + response_headers = {} + response_headers["operation-Location"] = self._deserialize("str", 
response.headers.get("operation-Location"))
+
+        deserialized = response.iter_bytes()
+
+        if cls:
+            return cls(pipeline_response, deserialized, response_headers)  # type: ignore
+
+        return deserialized  # type: ignore
+
+    @distributed_trace_async
+    async def begin_train(self, large_person_group_id: str, **kwargs: Any) -> AsyncLROPoller[None]:
+        """Submit a Large Person Group training task. Training is a crucial step: only a trained Large
+        Person Group can be used by "Identify From Large Person Group".
+
+        Please refer to
+        https://learn.microsoft.com/rest/api/face/person-group-operations/train-large-person-group for
+        more details.
+
+        :param large_person_group_id: ID of the container. Required.
+        :type large_person_group_id: str
+        :return: An instance of AsyncLROPoller that returns None
+        :rtype: ~azure.core.polling.AsyncLROPoller[None]
+        :raises ~azure.core.exceptions.HttpResponseError:
+        """
+        _headers = kwargs.pop("headers", {}) or {}
+        _params = kwargs.pop("params", {}) or {}
+
+        cls: ClsType[None] = kwargs.pop("cls", None)
+        polling: Union[bool, AsyncPollingMethod] = kwargs.pop("polling", True)
+        lro_delay = kwargs.pop("polling_interval", self._config.polling_interval)
+        cont_token: Optional[str] = kwargs.pop("continuation_token", None)
+        if cont_token is None:
+            raw_result = await self._train_initial(
+                large_person_group_id=large_person_group_id,
+                cls=lambda x, y, z: x,
+                headers=_headers,
+                params=_params,
+                **kwargs
+            )
+            await raw_result.http_response.read()  # type: ignore
+        kwargs.pop("error_map", None)
+
+        def get_long_running_output(pipeline_response):  # pylint: disable=inconsistent-return-statements
+            if cls:
+                return cls(pipeline_response, None, {})  # type: ignore
+
+        path_format_arguments = {
+            "endpoint": self._serialize.url("self._config.endpoint", self._config.endpoint, "str", skip_quote=True),
+            "apiVersion": self._serialize.url("self._config.api_version", self._config.api_version, "str"),
+        }
+
+        if polling is True:
+            polling_method: AsyncPollingMethod = cast(
+                AsyncPollingMethod,
+                AsyncLROBasePolling(lro_delay, path_format_arguments=path_format_arguments, **kwargs),
+            )
+        elif polling is False:
+            polling_method = cast(AsyncPollingMethod, AsyncNoPolling())
+        else:
+            polling_method = polling
+        if cont_token:
+            return AsyncLROPoller[None].from_continuation_token(
+                polling_method=polling_method,
+                continuation_token=cont_token,
+                client=self._client,
+                deserialization_callback=get_long_running_output,
+            )
+        return AsyncLROPoller[None](self._client, raw_result, get_long_running_output, polling_method)  # type: ignore
+
+    @overload
+    async def create_person(
+        self, large_person_group_id: str, body: JSON, *, content_type: str = "application/json", **kwargs: Any
+    ) -> _models.CreatePersonResult:
+        """Create a new person in a specified Large Person Group. To add a face to this person, please
+        call "Add Large Person Group Person Face".
+
+        Please refer to
+        https://learn.microsoft.com/rest/api/face/person-group-operations/create-large-person-group-person
+        for more details.
+
+        :param large_person_group_id: ID of the container. Required.
+        :type large_person_group_id: str
+        :param body: Required.
+        :type body: JSON
+        :keyword content_type: Body Parameter content-type. Content type parameter for JSON body.
+         Default value is "application/json".
+        :paramtype content_type: str
+        :return: CreatePersonResult. The CreatePersonResult is compatible with MutableMapping
+        :rtype: ~azure.ai.vision.face.models.CreatePersonResult
+        :raises ~azure.core.exceptions.HttpResponseError:
+        """
+
+    @overload
+    async def create_person(
+        self,
+        large_person_group_id: str,
+        *,
+        name: str,
+        content_type: str = "application/json",
+        user_data: Optional[str] = None,
+        **kwargs: Any
+    ) -> _models.CreatePersonResult:
+        """Create a new person in a specified Large Person Group. To add a face to this person, please
+        call "Add Large Person Group Person Face".
+
+        Please refer to
+        https://learn.microsoft.com/rest/api/face/person-group-operations/create-large-person-group-person
+        for more details.
+
+        :param large_person_group_id: ID of the container. Required.
+        :type large_person_group_id: str
+        :keyword name: User defined name, maximum length is 128. Required.
+        :paramtype name: str
+        :keyword content_type: Body Parameter content-type. Content type parameter for JSON body.
+         Default value is "application/json".
+        :paramtype content_type: str
+        :keyword user_data: Optional user defined data. Length should not exceed 16K. Default value is
+         None.
+        :paramtype user_data: str
+        :return: CreatePersonResult. The CreatePersonResult is compatible with MutableMapping
+        :rtype: ~azure.ai.vision.face.models.CreatePersonResult
+        :raises ~azure.core.exceptions.HttpResponseError:
+        """
+
+    @overload
+    async def create_person(
+        self, large_person_group_id: str, body: IO[bytes], *, content_type: str = "application/json", **kwargs: Any
+    ) -> _models.CreatePersonResult:
+        """Create a new person in a specified Large Person Group. To add a face to this person, please
+        call "Add Large Person Group Person Face".
+
+        Please refer to
+        https://learn.microsoft.com/rest/api/face/person-group-operations/create-large-person-group-person
+        for more details.
+
+        :param large_person_group_id: ID of the container. Required.
+        :type large_person_group_id: str
+        :param body: Required.
+        :type body: IO[bytes]
+        :keyword content_type: Body Parameter content-type. Content type parameter for binary body.
+         Default value is "application/json".
+        :paramtype content_type: str
+        :return: CreatePersonResult. The CreatePersonResult is compatible with MutableMapping
+        :rtype: ~azure.ai.vision.face.models.CreatePersonResult
+        :raises ~azure.core.exceptions.HttpResponseError:
+        """
+
+    @distributed_trace_async
+    async def create_person(
+        self,
+        large_person_group_id: str,
+        body: Union[JSON, IO[bytes]] = _Unset,
+        *,
+        name: str = _Unset,
+        user_data: Optional[str] = None,
+        **kwargs: Any
+    ) -> _models.CreatePersonResult:
+        """Create a new person in a specified Large Person Group. To add a face to this person, please
+        call "Add Large Person Group Person Face".
+
+        Please refer to
+        https://learn.microsoft.com/rest/api/face/person-group-operations/create-large-person-group-person
+        for more details.
+
+        :param large_person_group_id: ID of the container. Required.
+        :type large_person_group_id: str
+        :param body: Is either a JSON type or a IO[bytes] type. Required.
+        :type body: JSON or IO[bytes]
+        :keyword name: User defined name, maximum length is 128. Required.
+        :paramtype name: str
+        :keyword user_data: Optional user defined data. Length should not exceed 16K. Default value is
+         None.
+        :paramtype user_data: str
+        :return: CreatePersonResult.
The CreatePersonResult is compatible with MutableMapping + :rtype: ~azure.ai.vision.face.models.CreatePersonResult + :raises ~azure.core.exceptions.HttpResponseError: + """ + error_map: MutableMapping = { + 401: ClientAuthenticationError, + 404: ResourceNotFoundError, + 409: ResourceExistsError, + 304: ResourceNotModifiedError, + } + error_map.update(kwargs.pop("error_map", {}) or {}) + + _headers = case_insensitive_dict(kwargs.pop("headers", {}) or {}) + _params = kwargs.pop("params", {}) or {} + + content_type: Optional[str] = kwargs.pop("content_type", _headers.pop("Content-Type", None)) + cls: ClsType[_models.CreatePersonResult] = kwargs.pop("cls", None) + + if body is _Unset: + if name is _Unset: + raise TypeError("missing required argument: name") + body = {"name": name, "userData": user_data} + body = {k: v for k, v in body.items() if v is not None} + content_type = content_type or "application/json" + _content = None + if isinstance(body, (IOBase, bytes)): + _content = body + else: + _content = json.dumps(body, cls=SdkJSONEncoder, exclude_readonly=True) # type: ignore + + _request = build_large_person_group_create_person_request( + large_person_group_id=large_person_group_id, + content_type=content_type, + content=_content, + headers=_headers, + params=_params, + ) + path_format_arguments = { + "endpoint": self._serialize.url("self._config.endpoint", self._config.endpoint, "str", skip_quote=True), + "apiVersion": self._serialize.url("self._config.api_version", self._config.api_version, "str"), + } + _request.url = self._client.format_url(_request.url, **path_format_arguments) + + _stream = kwargs.pop("stream", False) + pipeline_response: PipelineResponse = await self._client._pipeline.run( # pylint: disable=protected-access + _request, stream=_stream, **kwargs + ) + + response = pipeline_response.http_response + + if response.status_code not in [200]: + if _stream: + try: + await response.read() # Load the body in memory and close the socket + except (StreamConsumedError, StreamClosedError): + pass + map_error(status_code=response.status_code, response=response, error_map=error_map) + error = _deserialize(_models.FaceErrorResponse, response.json()) + raise HttpResponseError(response=response, model=error) + + if _stream: + deserialized = response.iter_bytes() + else: + deserialized = _deserialize(_models.CreatePersonResult, response.json()) + + if cls: + return cls(pipeline_response, deserialized, {}) # type: ignore + + return deserialized # type: ignore + + @distributed_trace_async + async def delete_person(self, large_person_group_id: str, person_id: str, **kwargs: Any) -> None: + """Please refer to + https://learn.microsoft.com/rest/api/face/person-group-operations/delete-large-person-group-person + for more details. + + :param large_person_group_id: ID of the container. Required. + :type large_person_group_id: str + :param person_id: ID of the person. Required. 
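A short sketch of the `create_person` operation above; the returned `CreatePersonResult` carries the new `person_id` used by the person- and face-level operations that follow (the name and `user_data` are placeholders):

```python
result = await admin_client.large_person_group.create_person(
    "my-group", name="Bill", user_data="engineering"
)
person_id = result.person_id  # Keep this ID for get_person/add_face calls.
```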
+ :type person_id: str + :return: None + :rtype: None + :raises ~azure.core.exceptions.HttpResponseError: + """ + error_map: MutableMapping = { + 401: ClientAuthenticationError, + 404: ResourceNotFoundError, + 409: ResourceExistsError, + 304: ResourceNotModifiedError, + } + error_map.update(kwargs.pop("error_map", {}) or {}) + + _headers = kwargs.pop("headers", {}) or {} + _params = kwargs.pop("params", {}) or {} + + cls: ClsType[None] = kwargs.pop("cls", None) + + _request = build_large_person_group_delete_person_request( + large_person_group_id=large_person_group_id, + person_id=person_id, + headers=_headers, + params=_params, + ) + path_format_arguments = { + "endpoint": self._serialize.url("self._config.endpoint", self._config.endpoint, "str", skip_quote=True), + "apiVersion": self._serialize.url("self._config.api_version", self._config.api_version, "str"), + } + _request.url = self._client.format_url(_request.url, **path_format_arguments) + + _stream = False + pipeline_response: PipelineResponse = await self._client._pipeline.run( # pylint: disable=protected-access + _request, stream=_stream, **kwargs + ) + + response = pipeline_response.http_response + + if response.status_code not in [200]: + map_error(status_code=response.status_code, response=response, error_map=error_map) + error = _deserialize(_models.FaceErrorResponse, response.json()) + raise HttpResponseError(response=response, model=error) + + if cls: + return cls(pipeline_response, None, {}) # type: ignore + + @distributed_trace_async + async def get_person( + self, large_person_group_id: str, person_id: str, **kwargs: Any + ) -> _models.LargePersonGroupPerson: + """Please refer to + https://learn.microsoft.com/rest/api/face/person-group-operations/get-large-person-group-person + for more details. + + :param large_person_group_id: ID of the container. Required. + :type large_person_group_id: str + :param person_id: ID of the person. Required. + :type person_id: str + :return: LargePersonGroupPerson. 
The LargePersonGroupPerson is compatible with MutableMapping + :rtype: ~azure.ai.vision.face.models.LargePersonGroupPerson + :raises ~azure.core.exceptions.HttpResponseError: + """ + error_map: MutableMapping = { + 401: ClientAuthenticationError, + 404: ResourceNotFoundError, + 409: ResourceExistsError, + 304: ResourceNotModifiedError, + } + error_map.update(kwargs.pop("error_map", {}) or {}) + + _headers = kwargs.pop("headers", {}) or {} + _params = kwargs.pop("params", {}) or {} + + cls: ClsType[_models.LargePersonGroupPerson] = kwargs.pop("cls", None) + + _request = build_large_person_group_get_person_request( + large_person_group_id=large_person_group_id, + person_id=person_id, + headers=_headers, + params=_params, + ) + path_format_arguments = { + "endpoint": self._serialize.url("self._config.endpoint", self._config.endpoint, "str", skip_quote=True), + "apiVersion": self._serialize.url("self._config.api_version", self._config.api_version, "str"), + } + _request.url = self._client.format_url(_request.url, **path_format_arguments) + + _stream = kwargs.pop("stream", False) + pipeline_response: PipelineResponse = await self._client._pipeline.run( # pylint: disable=protected-access + _request, stream=_stream, **kwargs + ) + + response = pipeline_response.http_response + + if response.status_code not in [200]: + if _stream: + try: + await response.read() # Load the body in memory and close the socket + except (StreamConsumedError, StreamClosedError): + pass + map_error(status_code=response.status_code, response=response, error_map=error_map) + error = _deserialize(_models.FaceErrorResponse, response.json()) + raise HttpResponseError(response=response, model=error) + + if _stream: + deserialized = response.iter_bytes() + else: + deserialized = _deserialize(_models.LargePersonGroupPerson, response.json()) + + if cls: + return cls(pipeline_response, deserialized, {}) # type: ignore + + return deserialized # type: ignore + + @overload + async def update_person( + self, + large_person_group_id: str, + person_id: str, + body: JSON, + *, + content_type: str = "application/json", + **kwargs: Any + ) -> None: + """Please refer to + https://learn.microsoft.com/rest/api/face/person-group-operations/update-large-person-group-person + for more details. + + :param large_person_group_id: ID of the container. Required. + :type large_person_group_id: str + :param person_id: ID of the person. Required. + :type person_id: str + :param body: Required. + :type body: JSON + :keyword content_type: Body Parameter content-type. Content type parameter for JSON body. + Default value is "application/json". + :paramtype content_type: str + :return: None + :rtype: None + :raises ~azure.core.exceptions.HttpResponseError: + """ + + @overload + async def update_person( + self, + large_person_group_id: str, + person_id: str, + *, + content_type: str = "application/json", + name: Optional[str] = None, + user_data: Optional[str] = None, + **kwargs: Any + ) -> None: + """Please refer to + https://learn.microsoft.com/rest/api/face/person-group-operations/update-large-person-group-person + for more details. + + :param large_person_group_id: ID of the container. Required. + :type large_person_group_id: str + :param person_id: ID of the person. Required. + :type person_id: str + :keyword content_type: Body Parameter content-type. Content type parameter for JSON body. + Default value is "application/json". + :paramtype content_type: str + :keyword name: User defined name, maximum length is 128. Default value is None. 
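For illustration, that person can be fetched back with `get_person`; per the `get_persons` description the returned model carries the person's name, userData and the persistedFaceIds of registered faces:

```python
person = await admin_client.large_person_group.get_person("my-group", person_id)
print(person.name, person.persisted_face_ids)
```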
+ :paramtype name: str + :keyword user_data: Optional user defined data. Length should not exceed 16K. Default value is + None. + :paramtype user_data: str + :return: None + :rtype: None + :raises ~azure.core.exceptions.HttpResponseError: + """ + + @overload + async def update_person( + self, + large_person_group_id: str, + person_id: str, + body: IO[bytes], + *, + content_type: str = "application/json", + **kwargs: Any + ) -> None: + """Please refer to + https://learn.microsoft.com/rest/api/face/person-group-operations/update-large-person-group-person + for more details. + + :param large_person_group_id: ID of the container. Required. + :type large_person_group_id: str + :param person_id: ID of the person. Required. + :type person_id: str + :param body: Required. + :type body: IO[bytes] + :keyword content_type: Body Parameter content-type. Content type parameter for binary body. + Default value is "application/json". + :paramtype content_type: str + :return: None + :rtype: None + :raises ~azure.core.exceptions.HttpResponseError: + """ + + @distributed_trace_async + async def update_person( + self, + large_person_group_id: str, + person_id: str, + body: Union[JSON, IO[bytes]] = _Unset, + *, + name: Optional[str] = None, + user_data: Optional[str] = None, + **kwargs: Any + ) -> None: + """Please refer to + https://learn.microsoft.com/rest/api/face/person-group-operations/update-large-person-group-person + for more details. + + :param large_person_group_id: ID of the container. Required. + :type large_person_group_id: str + :param person_id: ID of the person. Required. + :type person_id: str + :param body: Is either a JSON type or a IO[bytes] type. Required. + :type body: JSON or IO[bytes] + :keyword name: User defined name, maximum length is 128. Default value is None. + :paramtype name: str + :keyword user_data: Optional user defined data. Length should not exceed 16K. Default value is + None. 
+ :paramtype user_data: str + :return: None + :rtype: None + :raises ~azure.core.exceptions.HttpResponseError: + """ + error_map: MutableMapping = { + 401: ClientAuthenticationError, + 404: ResourceNotFoundError, + 409: ResourceExistsError, + 304: ResourceNotModifiedError, + } + error_map.update(kwargs.pop("error_map", {}) or {}) + + _headers = case_insensitive_dict(kwargs.pop("headers", {}) or {}) + _params = kwargs.pop("params", {}) or {} + + content_type: Optional[str] = kwargs.pop("content_type", _headers.pop("Content-Type", None)) + cls: ClsType[None] = kwargs.pop("cls", None) + + if body is _Unset: + body = {"name": name, "userData": user_data} + body = {k: v for k, v in body.items() if v is not None} + content_type = content_type or "application/json" + _content = None + if isinstance(body, (IOBase, bytes)): + _content = body + else: + _content = json.dumps(body, cls=SdkJSONEncoder, exclude_readonly=True) # type: ignore + + _request = build_large_person_group_update_person_request( + large_person_group_id=large_person_group_id, + person_id=person_id, + content_type=content_type, + content=_content, + headers=_headers, + params=_params, + ) + path_format_arguments = { + "endpoint": self._serialize.url("self._config.endpoint", self._config.endpoint, "str", skip_quote=True), + "apiVersion": self._serialize.url("self._config.api_version", self._config.api_version, "str"), + } + _request.url = self._client.format_url(_request.url, **path_format_arguments) + + _stream = False + pipeline_response: PipelineResponse = await self._client._pipeline.run( # pylint: disable=protected-access + _request, stream=_stream, **kwargs + ) + + response = pipeline_response.http_response + + if response.status_code not in [200]: + map_error(status_code=response.status_code, response=response, error_map=error_map) + error = _deserialize(_models.FaceErrorResponse, response.json()) + raise HttpResponseError(response=response, model=error) + + if cls: + return cls(pipeline_response, None, {}) # type: ignore + + @distributed_trace_async + async def get_persons( + self, large_person_group_id: str, *, start: Optional[str] = None, top: Optional[int] = None, **kwargs: Any + ) -> List[_models.LargePersonGroupPerson]: + """List all persons' information in the specified Large Person Group, including personId, name, + userData and persistedFaceIds of registered person faces. + + Please refer to + https://learn.microsoft.com/rest/api/face/person-group-operations/get-large-person-group-persons + for more details. + + :param large_person_group_id: ID of the container. Required. + :type large_person_group_id: str + :keyword start: List resources greater than the "start". It contains no more than 64 + characters. Default is empty. Default value is None. + :paramtype start: str + :keyword top: The number of items to list, ranging in [1, 1000]. Default is 1000. Default value + is None. 
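And a one-line sketch of the `update_person` operation implemented above, which patches only the fields you pass (both keyword values here are placeholders):

```python
await admin_client.large_person_group.update_person(
    "my-group", person_id, name="Bill Gates", user_data="updated profile"
)
```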
+        :paramtype top: int
+        :return: list of LargePersonGroupPerson
+        :rtype: list[~azure.ai.vision.face.models.LargePersonGroupPerson]
+        :raises ~azure.core.exceptions.HttpResponseError:
+        """
+        error_map: MutableMapping = {
+            401: ClientAuthenticationError,
+            404: ResourceNotFoundError,
+            409: ResourceExistsError,
+            304: ResourceNotModifiedError,
+        }
+        error_map.update(kwargs.pop("error_map", {}) or {})
+
+        _headers = kwargs.pop("headers", {}) or {}
+        _params = kwargs.pop("params", {}) or {}
+
+        cls: ClsType[List[_models.LargePersonGroupPerson]] = kwargs.pop("cls", None)
+
+        _request = build_large_person_group_get_persons_request(
+            large_person_group_id=large_person_group_id,
+            start=start,
+            top=top,
+            headers=_headers,
+            params=_params,
+        )
+        path_format_arguments = {
+            "endpoint": self._serialize.url("self._config.endpoint", self._config.endpoint, "str", skip_quote=True),
+            "apiVersion": self._serialize.url("self._config.api_version", self._config.api_version, "str"),
+        }
+        _request.url = self._client.format_url(_request.url, **path_format_arguments)
+
+        _stream = kwargs.pop("stream", False)
+        pipeline_response: PipelineResponse = await self._client._pipeline.run(  # pylint: disable=protected-access
+            _request, stream=_stream, **kwargs
+        )
+
+        response = pipeline_response.http_response
+
+        if response.status_code not in [200]:
+            if _stream:
+                try:
+                    await response.read()  # Load the body in memory and close the socket
+                except (StreamConsumedError, StreamClosedError):
+                    pass
+            map_error(status_code=response.status_code, response=response, error_map=error_map)
+            error = _deserialize(_models.FaceErrorResponse, response.json())
+            raise HttpResponseError(response=response, model=error)
+
+        if _stream:
+            deserialized = response.iter_bytes()
+        else:
+            deserialized = _deserialize(List[_models.LargePersonGroupPerson], response.json())
+
+        if cls:
+            return cls(pipeline_response, deserialized, {})  # type: ignore
+
+        return deserialized  # type: ignore
+
+    @overload
+    async def add_face_from_url(
+        self,
+        large_person_group_id: str,
+        person_id: str,
+        body: JSON,
+        *,
+        target_face: Optional[List[int]] = None,
+        detection_model: Optional[Union[str, _models.FaceDetectionModel]] = None,
+        user_data: Optional[str] = None,
+        content_type: str = "application/json",
+        **kwargs: Any
+    ) -> _models.AddFaceResult:
+        """Add a face to a person in a Large Person Group for face identification or verification.
+
+        Please refer to
+        https://learn.microsoft.com/rest/api/face/person-group-operations/add-large-person-group-person-face-from-url
+        for more details.
+
+        :param large_person_group_id: ID of the container. Required.
+        :type large_person_group_id: str
+        :param person_id: ID of the person. Required.
+        :type person_id: str
+        :param body: Required.
+        :type body: JSON
+        :keyword target_face: A face rectangle to specify the target face to be added to a person, in
+         the format of 'targetFace=left,top,width,height'. Default value is None.
+        :paramtype target_face: list[int]
+        :keyword detection_model: The 'detectionModel' associated with the detected faceIds. Supported
+         'detectionModel' values include 'detection_01', 'detection_02' and 'detection_03'. The default
+         value is 'detection_01'. Known values are: "detection_01", "detection_02", and "detection_03".
+         Default value is None.
+        :paramtype detection_model: str or ~azure.ai.vision.face.models.FaceDetectionModel
+        :keyword user_data: User-provided data attached to the face. The size limit is 1K. Default
+         value is None.
+        :paramtype user_data: str
+        :keyword content_type: Body Parameter content-type. Content type parameter for JSON body.
+         Default value is "application/json".
+        :paramtype content_type: str
+        :return: AddFaceResult. The AddFaceResult is compatible with MutableMapping
+        :rtype: ~azure.ai.vision.face.models.AddFaceResult
+        :raises ~azure.core.exceptions.HttpResponseError:
+        """
+
+    @overload
+    async def add_face_from_url(
+        self,
+        large_person_group_id: str,
+        person_id: str,
+        *,
+        url: str,
+        target_face: Optional[List[int]] = None,
+        detection_model: Optional[Union[str, _models.FaceDetectionModel]] = None,
+        user_data: Optional[str] = None,
+        content_type: str = "application/json",
+        **kwargs: Any
+    ) -> _models.AddFaceResult:
+        """Add a face to a person in a Large Person Group for face identification or verification.
+
+        Please refer to
+        https://learn.microsoft.com/rest/api/face/person-group-operations/add-large-person-group-person-face-from-url
+        for more details.
+
+        :param large_person_group_id: ID of the container. Required.
+        :type large_person_group_id: str
+        :param person_id: ID of the person. Required.
+        :type person_id: str
+        :keyword url: URL of input image. Required.
+        :paramtype url: str
+        :keyword target_face: A face rectangle to specify the target face to be added to a person, in
+         the format of 'targetFace=left,top,width,height'. Default value is None.
+        :paramtype target_face: list[int]
+        :keyword detection_model: The 'detectionModel' associated with the detected faceIds. Supported
+         'detectionModel' values include 'detection_01', 'detection_02' and 'detection_03'. The default
+         value is 'detection_01'. Known values are: "detection_01", "detection_02", and "detection_03".
+         Default value is None.
+        :paramtype detection_model: str or ~azure.ai.vision.face.models.FaceDetectionModel
+        :keyword user_data: User-provided data attached to the face. The size limit is 1K. Default
+         value is None.
+        :paramtype user_data: str
+        :keyword content_type: Body Parameter content-type. Content type parameter for JSON body.
+         Default value is "application/json".
+        :paramtype content_type: str
+        :return: AddFaceResult. The AddFaceResult is compatible with MutableMapping
+        :rtype: ~azure.ai.vision.face.models.AddFaceResult
+        :raises ~azure.core.exceptions.HttpResponseError:
+        """
+
+    @overload
+    async def add_face_from_url(
+        self,
+        large_person_group_id: str,
+        person_id: str,
+        body: IO[bytes],
+        *,
+        target_face: Optional[List[int]] = None,
+        detection_model: Optional[Union[str, _models.FaceDetectionModel]] = None,
+        user_data: Optional[str] = None,
+        content_type: str = "application/json",
+        **kwargs: Any
+    ) -> _models.AddFaceResult:
+        """Add a face to a person in a Large Person Group for face identification or verification.
+
+        Please refer to
+        https://learn.microsoft.com/rest/api/face/person-group-operations/add-large-person-group-person-face-from-url
+        for more details.
+
+        :param large_person_group_id: ID of the container. Required.
+        :type large_person_group_id: str
+        :param person_id: ID of the person. Required.
+        :type person_id: str
+        :param body: Required.
+        :type body: IO[bytes]
+        :keyword target_face: A face rectangle to specify the target face to be added to a person, in
+         the format of 'targetFace=left,top,width,height'. Default value is None.
+        :paramtype target_face: list[int]
+        :keyword detection_model: The 'detectionModel' associated with the detected faceIds. Supported
+         'detectionModel' values include 'detection_01', 'detection_02' and 'detection_03'. The default
+         value is 'detection_01'. Known values are: "detection_01", "detection_02", and "detection_03".
+         Default value is None.
+        :paramtype detection_model: str or ~azure.ai.vision.face.models.FaceDetectionModel
+        :keyword user_data: User-provided data attached to the face. The size limit is 1K. Default
+         value is None.
+        :paramtype user_data: str
+        :keyword content_type: Body Parameter content-type. Content type parameter for binary body.
+         Default value is "application/json".
+        :paramtype content_type: str
+        :return: AddFaceResult. The AddFaceResult is compatible with MutableMapping
+        :rtype: ~azure.ai.vision.face.models.AddFaceResult
+        :raises ~azure.core.exceptions.HttpResponseError:
+        """
+
+    @distributed_trace_async
+    async def add_face_from_url(
+        self,
+        large_person_group_id: str,
+        person_id: str,
+        body: Union[JSON, IO[bytes]] = _Unset,
+        *,
+        url: str = _Unset,
+        target_face: Optional[List[int]] = None,
+        detection_model: Optional[Union[str, _models.FaceDetectionModel]] = None,
+        user_data: Optional[str] = None,
+        **kwargs: Any
+    ) -> _models.AddFaceResult:
+        """Add a face to a person in a Large Person Group for face identification or verification.
+
+        Please refer to
+        https://learn.microsoft.com/rest/api/face/person-group-operations/add-large-person-group-person-face-from-url
+        for more details.
+
+        :param large_person_group_id: ID of the container. Required.
+        :type large_person_group_id: str
+        :param person_id: ID of the person. Required.
+        :type person_id: str
+        :param body: Is either a JSON type or a IO[bytes] type. Required.
+        :type body: JSON or IO[bytes]
+        :keyword url: URL of input image. Required.
+        :paramtype url: str
+        :keyword target_face: A face rectangle to specify the target face to be added to a person, in
+         the format of 'targetFace=left,top,width,height'. Default value is None.
+        :paramtype target_face: list[int]
+        :keyword detection_model: The 'detectionModel' associated with the detected faceIds. Supported
+         'detectionModel' values include 'detection_01', 'detection_02' and 'detection_03'. The default
+         value is 'detection_01'. Known values are: "detection_01", "detection_02", and "detection_03".
+         Default value is None.
+        :paramtype detection_model: str or ~azure.ai.vision.face.models.FaceDetectionModel
+        :keyword user_data: User-provided data attached to the face. The size limit is 1K. Default
+         value is None.
+        :paramtype user_data: str
+        :return: AddFaceResult. The AddFaceResult is compatible with MutableMapping
+        :rtype: ~azure.ai.vision.face.models.AddFaceResult
+        :raises ~azure.core.exceptions.HttpResponseError:
+        """
+        error_map: MutableMapping = {
+            401: ClientAuthenticationError,
+            404: ResourceNotFoundError,
+            409: ResourceExistsError,
+            304: ResourceNotModifiedError,
+        }
+        error_map.update(kwargs.pop("error_map", {}) or {})
+
+        _headers = case_insensitive_dict(kwargs.pop("headers", {}) or {})
+        _params = kwargs.pop("params", {}) or {}
+
+        content_type: Optional[str] = kwargs.pop("content_type", _headers.pop("Content-Type", None))
+        cls: ClsType[_models.AddFaceResult] = kwargs.pop("cls", None)
+
+        if body is _Unset:
+            if url is _Unset:
+                raise TypeError("missing required argument: url")
+            body = {"url": url}
+            body = {k: v for k, v in body.items() if v is not None}
+        content_type = content_type or "application/json"
+        _content = None
+        if isinstance(body, (IOBase, bytes)):
+            _content = body
+        else:
+            _content = json.dumps(body, cls=SdkJSONEncoder, exclude_readonly=True)  # type: ignore
+
+        _request = build_large_person_group_add_face_from_url_request(
+            large_person_group_id=large_person_group_id,
+            person_id=person_id,
+            target_face=target_face,
+            detection_model=detection_model,
+            user_data=user_data,
+            content_type=content_type,
+            content=_content,
+            headers=_headers,
+            params=_params,
+        )
+        path_format_arguments = {
+            "endpoint": self._serialize.url("self._config.endpoint", self._config.endpoint, "str", skip_quote=True),
+            "apiVersion": self._serialize.url("self._config.api_version", self._config.api_version, "str"),
+        }
+        _request.url = self._client.format_url(_request.url, **path_format_arguments)
+
+        _stream = kwargs.pop("stream", False)
+        pipeline_response: PipelineResponse = await self._client._pipeline.run(  # pylint: disable=protected-access
+            _request, stream=_stream, **kwargs
+        )
+
+        response = pipeline_response.http_response
+
+        if response.status_code not in [200]:
+            if _stream:
+                try:
+                    await response.read()  # Load the body in memory and close the socket
+                except (StreamConsumedError, StreamClosedError):
+                    pass
+            map_error(status_code=response.status_code, response=response, error_map=error_map)
+            error = _deserialize(_models.FaceErrorResponse, response.json())
+            raise HttpResponseError(response=response, model=error)
+
+        if _stream:
+            deserialized = response.iter_bytes()
+        else:
+            deserialized = _deserialize(_models.AddFaceResult, response.json())
+
+        if cls:
+            return cls(pipeline_response, deserialized, {})  # type: ignore
+
+        return deserialized  # type: ignore
+
+    @distributed_trace_async
+    async def add_face(
+        self,
+        large_person_group_id: str,
+        person_id: str,
+        image_content: bytes,
+        *,
+        target_face: Optional[List[int]] = None,
+        detection_model: Optional[Union[str, _models.FaceDetectionModel]] = None,
+        user_data: Optional[str] = None,
+        **kwargs: Any
+    ) -> _models.AddFaceResult:
+        """Add a face to a person in a Large Person Group for face identification or verification.
+
+        Please refer to
+        https://learn.microsoft.com/rest/api/face/person-group-operations/add-large-person-group-person-face
+        for more details.
+
+        :param large_person_group_id: ID of the container. Required.
+        :type large_person_group_id: str
+        :param person_id: ID of the person. Required.
+        :type person_id: str
+        :param image_content: The image to be analyzed. Required.
+        :type image_content: bytes
+        :keyword target_face: A face rectangle to specify the target face to be added to a person, in
+         the format of 'targetFace=left,top,width,height'.
Default value is None. + :paramtype target_face: list[int] + :keyword detection_model: The 'detectionModel' associated with the detected faceIds. Supported + 'detectionModel' values include 'detection_01', 'detection_02' and 'detection_03'. The default + value is 'detection_01'. Known values are: "detection_01", "detection_02", and "detection_03". + Default value is None. + :paramtype detection_model: str or ~azure.ai.vision.face.models.FaceDetectionModel + :keyword user_data: User-provided data attached to the face. The size limit is 1K. Default + value is None. + :paramtype user_data: str + :return: AddFaceResult. The AddFaceResult is compatible with MutableMapping + :rtype: ~azure.ai.vision.face.models.AddFaceResult + :raises ~azure.core.exceptions.HttpResponseError: + """ + error_map: MutableMapping = { + 401: ClientAuthenticationError, + 404: ResourceNotFoundError, + 409: ResourceExistsError, + 304: ResourceNotModifiedError, + } + error_map.update(kwargs.pop("error_map", {}) or {}) + + _headers = case_insensitive_dict(kwargs.pop("headers", {}) or {}) + _params = kwargs.pop("params", {}) or {} + + content_type: str = kwargs.pop("content_type", _headers.pop("content-type", "application/octet-stream")) + cls: ClsType[_models.AddFaceResult] = kwargs.pop("cls", None) + + _content = image_content + + _request = build_large_person_group_add_face_request( + large_person_group_id=large_person_group_id, + person_id=person_id, + target_face=target_face, + detection_model=detection_model, + user_data=user_data, + content_type=content_type, + content=_content, + headers=_headers, + params=_params, + ) + path_format_arguments = { + "endpoint": self._serialize.url("self._config.endpoint", self._config.endpoint, "str", skip_quote=True), + "apiVersion": self._serialize.url("self._config.api_version", self._config.api_version, "str"), + } + _request.url = self._client.format_url(_request.url, **path_format_arguments) + + _stream = kwargs.pop("stream", False) + pipeline_response: PipelineResponse = await self._client._pipeline.run( # pylint: disable=protected-access + _request, stream=_stream, **kwargs + ) + + response = pipeline_response.http_response + + if response.status_code not in [200]: + if _stream: + try: + await response.read() # Load the body in memory and close the socket + except (StreamConsumedError, StreamClosedError): + pass + map_error(status_code=response.status_code, response=response, error_map=error_map) + error = _deserialize(_models.FaceErrorResponse, response.json()) + raise HttpResponseError(response=response, model=error) + + if _stream: + deserialized = response.iter_bytes() + else: + deserialized = _deserialize(_models.AddFaceResult, response.json()) + + if cls: + return cls(pipeline_response, deserialized, {}) # type: ignore + + return deserialized # type: ignore + + @distributed_trace_async + async def delete_face( + self, large_person_group_id: str, person_id: str, persisted_face_id: str, **kwargs: Any + ) -> None: + """Delete a face from a person in a Large Person Group by specified largePersonGroupId, personId + and persistedFaceId. + + Please refer to + https://learn.microsoft.com/rest/api/face/person-group-operations/delete-large-person-group-person-face + for more details. + + :param large_person_group_id: ID of the container. Required. + :type large_person_group_id: str + :param person_id: ID of the person. Required. + :type person_id: str + :param persisted_face_id: Face ID of the face. Required. 
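A sketch of registering faces through the two add operations above, one from a URL and one from raw bytes (the URL, file name, and detection model choice are illustrative):

```python
from azure.ai.vision.face.models import FaceDetectionModel

# From a publicly reachable image URL.
await admin_client.large_person_group.add_face_from_url(
    "my-group",
    person_id,
    url="https://example.com/bill-1.jpg",
    detection_model=FaceDetectionModel.DETECTION03,
    user_data="photo 1",
)

# From image bytes read off disk.
with open("bill-2.jpg", "rb") as image:
    await admin_client.large_person_group.add_face(
        "my-group",
        person_id,
        image.read(),
        detection_model=FaceDetectionModel.DETECTION03,
        user_data="photo 2",
    )
```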
+ :type persisted_face_id: str + :return: None + :rtype: None + :raises ~azure.core.exceptions.HttpResponseError: + """ + error_map: MutableMapping = { + 401: ClientAuthenticationError, + 404: ResourceNotFoundError, + 409: ResourceExistsError, + 304: ResourceNotModifiedError, + } + error_map.update(kwargs.pop("error_map", {}) or {}) + + _headers = kwargs.pop("headers", {}) or {} + _params = kwargs.pop("params", {}) or {} + + cls: ClsType[None] = kwargs.pop("cls", None) + + _request = build_large_person_group_delete_face_request( + large_person_group_id=large_person_group_id, + person_id=person_id, + persisted_face_id=persisted_face_id, + headers=_headers, + params=_params, + ) + path_format_arguments = { + "endpoint": self._serialize.url("self._config.endpoint", self._config.endpoint, "str", skip_quote=True), + "apiVersion": self._serialize.url("self._config.api_version", self._config.api_version, "str"), + } + _request.url = self._client.format_url(_request.url, **path_format_arguments) + + _stream = False + pipeline_response: PipelineResponse = await self._client._pipeline.run( # pylint: disable=protected-access + _request, stream=_stream, **kwargs + ) + + response = pipeline_response.http_response + + if response.status_code not in [200]: + map_error(status_code=response.status_code, response=response, error_map=error_map) + error = _deserialize(_models.FaceErrorResponse, response.json()) + raise HttpResponseError(response=response, model=error) + + if cls: + return cls(pipeline_response, None, {}) # type: ignore + + @distributed_trace_async + async def get_face( + self, large_person_group_id: str, person_id: str, persisted_face_id: str, **kwargs: Any + ) -> _models.LargePersonGroupPersonFace: + """Please refer to + https://learn.microsoft.com/rest/api/face/person-group-operations/get-large-person-group-person-face + for more details. + + :param large_person_group_id: ID of the container. Required. + :type large_person_group_id: str + :param person_id: ID of the person. Required. + :type person_id: str + :param persisted_face_id: Face ID of the face. Required. + :type persisted_face_id: str + :return: LargePersonGroupPersonFace. 
The LargePersonGroupPersonFace is compatible with + MutableMapping + :rtype: ~azure.ai.vision.face.models.LargePersonGroupPersonFace + :raises ~azure.core.exceptions.HttpResponseError: + """ + error_map: MutableMapping = { + 401: ClientAuthenticationError, + 404: ResourceNotFoundError, + 409: ResourceExistsError, + 304: ResourceNotModifiedError, + } + error_map.update(kwargs.pop("error_map", {}) or {}) + + _headers = kwargs.pop("headers", {}) or {} + _params = kwargs.pop("params", {}) or {} + + cls: ClsType[_models.LargePersonGroupPersonFace] = kwargs.pop("cls", None) + + _request = build_large_person_group_get_face_request( + large_person_group_id=large_person_group_id, + person_id=person_id, + persisted_face_id=persisted_face_id, + headers=_headers, + params=_params, + ) + path_format_arguments = { + "endpoint": self._serialize.url("self._config.endpoint", self._config.endpoint, "str", skip_quote=True), + "apiVersion": self._serialize.url("self._config.api_version", self._config.api_version, "str"), + } + _request.url = self._client.format_url(_request.url, **path_format_arguments) + + _stream = kwargs.pop("stream", False) + pipeline_response: PipelineResponse = await self._client._pipeline.run( # pylint: disable=protected-access + _request, stream=_stream, **kwargs + ) + + response = pipeline_response.http_response + + if response.status_code not in [200]: + if _stream: + try: + await response.read() # Load the body in memory and close the socket + except (StreamConsumedError, StreamClosedError): + pass + map_error(status_code=response.status_code, response=response, error_map=error_map) + error = _deserialize(_models.FaceErrorResponse, response.json()) + raise HttpResponseError(response=response, model=error) + + if _stream: + deserialized = response.iter_bytes() + else: + deserialized = _deserialize(_models.LargePersonGroupPersonFace, response.json()) + + if cls: + return cls(pipeline_response, deserialized, {}) # type: ignore + + return deserialized # type: ignore + + @overload + async def update_face( + self, + large_person_group_id: str, + person_id: str, + persisted_face_id: str, + body: JSON, + *, + content_type: str = "application/json", + **kwargs: Any + ) -> None: + """Please refer to + https://learn.microsoft.com/rest/api/face/person-group-operations/update-large-person-group-person-face + for more details. + + :param large_person_group_id: ID of the container. Required. + :type large_person_group_id: str + :param person_id: ID of the person. Required. + :type person_id: str + :param persisted_face_id: Face ID of the face. Required. + :type persisted_face_id: str + :param body: Required. + :type body: JSON + :keyword content_type: Body Parameter content-type. Content type parameter for JSON body. + Default value is "application/json". + :paramtype content_type: str + :return: None + :rtype: None + :raises ~azure.core.exceptions.HttpResponseError: + """ + + @overload + async def update_face( + self, + large_person_group_id: str, + person_id: str, + persisted_face_id: str, + *, + content_type: str = "application/json", + user_data: Optional[str] = None, + **kwargs: Any + ) -> None: + """Please refer to + https://learn.microsoft.com/rest/api/face/person-group-operations/update-large-person-group-person-face + for more details. + + :param large_person_group_id: ID of the container. Required. + :type large_person_group_id: str + :param person_id: ID of the person. Required. + :type person_id: str + :param persisted_face_id: Face ID of the face. Required. 
+ :type persisted_face_id: str + :keyword content_type: Body Parameter content-type. Content type parameter for JSON body. + Default value is "application/json". + :paramtype content_type: str + :keyword user_data: User-provided data attached to the face. The length limit is 1K. Default + value is None. + :paramtype user_data: str + :return: None + :rtype: None + :raises ~azure.core.exceptions.HttpResponseError: + """ + + @overload + async def update_face( + self, + large_person_group_id: str, + person_id: str, + persisted_face_id: str, + body: IO[bytes], + *, + content_type: str = "application/json", + **kwargs: Any + ) -> None: + """Please refer to + https://learn.microsoft.com/rest/api/face/person-group-operations/update-large-person-group-person-face + for more details. + + :param large_person_group_id: ID of the container. Required. + :type large_person_group_id: str + :param person_id: ID of the person. Required. + :type person_id: str + :param persisted_face_id: Face ID of the face. Required. + :type persisted_face_id: str + :param body: Required. + :type body: IO[bytes] + :keyword content_type: Body Parameter content-type. Content type parameter for binary body. + Default value is "application/json". + :paramtype content_type: str + :return: None + :rtype: None + :raises ~azure.core.exceptions.HttpResponseError: + """ + + @distributed_trace_async + async def update_face( + self, + large_person_group_id: str, + person_id: str, + persisted_face_id: str, + body: Union[JSON, IO[bytes]] = _Unset, + *, + user_data: Optional[str] = None, + **kwargs: Any + ) -> None: + """Please refer to + https://learn.microsoft.com/rest/api/face/person-group-operations/update-large-person-group-person-face + for more details. + + :param large_person_group_id: ID of the container. Required. + :type large_person_group_id: str + :param person_id: ID of the person. Required. + :type person_id: str + :param persisted_face_id: Face ID of the face. Required. + :type persisted_face_id: str + :param body: Is either a JSON type or a IO[bytes] type. Required. + :type body: JSON or IO[bytes] + :keyword user_data: User-provided data attached to the face. The length limit is 1K. Default + value is None. 
+ :paramtype user_data: str + :return: None + :rtype: None + :raises ~azure.core.exceptions.HttpResponseError: + """ + error_map: MutableMapping = { + 401: ClientAuthenticationError, + 404: ResourceNotFoundError, + 409: ResourceExistsError, + 304: ResourceNotModifiedError, + } + error_map.update(kwargs.pop("error_map", {}) or {}) + + _headers = case_insensitive_dict(kwargs.pop("headers", {}) or {}) + _params = kwargs.pop("params", {}) or {} + + content_type: Optional[str] = kwargs.pop("content_type", _headers.pop("Content-Type", None)) + cls: ClsType[None] = kwargs.pop("cls", None) + + if body is _Unset: + body = {"userData": user_data} + body = {k: v for k, v in body.items() if v is not None} + content_type = content_type or "application/json" + _content = None + if isinstance(body, (IOBase, bytes)): + _content = body + else: + _content = json.dumps(body, cls=SdkJSONEncoder, exclude_readonly=True) # type: ignore + + _request = build_large_person_group_update_face_request( + large_person_group_id=large_person_group_id, + person_id=person_id, + persisted_face_id=persisted_face_id, + content_type=content_type, + content=_content, + headers=_headers, + params=_params, + ) + path_format_arguments = { + "endpoint": self._serialize.url("self._config.endpoint", self._config.endpoint, "str", skip_quote=True), + "apiVersion": self._serialize.url("self._config.api_version", self._config.api_version, "str"), + } + _request.url = self._client.format_url(_request.url, **path_format_arguments) + + _stream = False + pipeline_response: PipelineResponse = await self._client._pipeline.run( # pylint: disable=protected-access + _request, stream=_stream, **kwargs + ) + + response = pipeline_response.http_response + + if response.status_code not in [200]: + map_error(status_code=response.status_code, response=response, error_map=error_map) + error = _deserialize(_models.FaceErrorResponse, response.json()) + raise HttpResponseError(response=response, model=error) + + if cls: + return cls(pipeline_response, None, {}) # type: ignore + + +class FaceClientOperationsMixin(FaceClientMixinABC): + + @overload + async def _detect_from_url( + self, + body: JSON, + *, + content_type: str = "application/json", + detection_model: Optional[Union[str, _models.FaceDetectionModel]] = None, + recognition_model: Optional[Union[str, _models.FaceRecognitionModel]] = None, + return_face_id: Optional[bool] = None, + return_face_attributes: Optional[List[Union[str, _models.FaceAttributeType]]] = None, + return_face_landmarks: Optional[bool] = None, + return_recognition_model: Optional[bool] = None, + face_id_time_to_live: Optional[int] = None, + **kwargs: Any + ) -> List[_models.FaceDetectionResult]: ... + @overload + async def _detect_from_url( + self, + *, + url: str, + content_type: str = "application/json", + detection_model: Optional[Union[str, _models.FaceDetectionModel]] = None, + recognition_model: Optional[Union[str, _models.FaceRecognitionModel]] = None, + return_face_id: Optional[bool] = None, + return_face_attributes: Optional[List[Union[str, _models.FaceAttributeType]]] = None, + return_face_landmarks: Optional[bool] = None, + return_recognition_model: Optional[bool] = None, + face_id_time_to_live: Optional[int] = None, + **kwargs: Any + ) -> List[_models.FaceDetectionResult]: ... 
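+    # Note: the two overloads above accept either a prebuilt JSON mapping or a
+    # bare ``url`` keyword; together with the IO[bytes] overload below, all three
+    # variants are normalized by the shared implementation into a single
+    # serialized request body before the request is built.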
+ @overload + async def _detect_from_url( + self, + body: IO[bytes], + *, + content_type: str = "application/json", + detection_model: Optional[Union[str, _models.FaceDetectionModel]] = None, + recognition_model: Optional[Union[str, _models.FaceRecognitionModel]] = None, + return_face_id: Optional[bool] = None, + return_face_attributes: Optional[List[Union[str, _models.FaceAttributeType]]] = None, + return_face_landmarks: Optional[bool] = None, + return_recognition_model: Optional[bool] = None, + face_id_time_to_live: Optional[int] = None, + **kwargs: Any + ) -> List[_models.FaceDetectionResult]: ... + + @distributed_trace_async + async def _detect_from_url( + self, + body: Union[JSON, IO[bytes]] = _Unset, + *, + url: str = _Unset, + detection_model: Optional[Union[str, _models.FaceDetectionModel]] = None, + recognition_model: Optional[Union[str, _models.FaceRecognitionModel]] = None, + return_face_id: Optional[bool] = None, + return_face_attributes: Optional[List[Union[str, _models.FaceAttributeType]]] = None, + return_face_landmarks: Optional[bool] = None, + return_recognition_model: Optional[bool] = None, + face_id_time_to_live: Optional[int] = None, + **kwargs: Any + ) -> List[_models.FaceDetectionResult]: + """Detect human faces in an image, return face rectangles, and optionally with faceIds, landmarks, + and attributes. + + Please refer to + https://learn.microsoft.com/rest/api/face/face-detection-operations/detect-from-url for more + details. + + :param body: Is either a JSON type or a IO[bytes] type. Required. + :type body: JSON or IO[bytes] + :keyword url: URL of input image. Required. + :paramtype url: str + :keyword detection_model: The 'detectionModel' associated with the detected faceIds. Supported + 'detectionModel' values include 'detection_01', 'detection_02' and 'detection_03'. The default + value is 'detection_01'. 'detection_03' is recommended since its accuracy is improved on + smaller faces (64x64 pixels) and rotated face orientations. Known values are: "detection_01", + "detection_02", and "detection_03". Default value is None. + :paramtype detection_model: str or ~azure.ai.vision.face.models.FaceDetectionModel + :keyword recognition_model: The 'recognitionModel' associated with the detected faceIds. + Supported 'recognitionModel' values include 'recognition_01', 'recognition_02', + 'recognition_03' or 'recognition_04'. The default value is 'recognition_01'. 'recognition_04' + is recommended since its accuracy is improved on faces wearing masks compared with + 'recognition_03', and its overall accuracy is improved compared with 'recognition_01' and + 'recognition_02'. Known values are: "recognition_01", "recognition_02", "recognition_03", and + "recognition_04". Default value is None. + :paramtype recognition_model: str or ~azure.ai.vision.face.models.FaceRecognitionModel + :keyword return_face_id: Return faceIds of the detected faces or not. The default value is + true. Default value is None. + :paramtype return_face_id: bool + :keyword return_face_attributes: Analyze and return the one or more specified face attributes + in the comma-separated string like 'returnFaceAttributes=headPose,glasses'. Face attribute + analysis has additional computational and time cost. Default value is None. + :paramtype return_face_attributes: list[str or ~azure.ai.vision.face.models.FaceAttributeType] + :keyword return_face_landmarks: Return face landmarks of the detected faces or not. The default + value is false. Default value is None. 
+ :paramtype return_face_landmarks: bool + :keyword return_recognition_model: Return 'recognitionModel' or not. The default value is + false. This is only applicable when returnFaceId = true. Default value is None. + :paramtype return_recognition_model: bool + :keyword face_id_time_to_live: The number of seconds for the face ID being cached. Supported + range from 60 seconds up to 86400 seconds. The default value is 86400 (24 hours). Default value + is None. + :paramtype face_id_time_to_live: int + :return: list of FaceDetectionResult + :rtype: list[~azure.ai.vision.face.models.FaceDetectionResult] + :raises ~azure.core.exceptions.HttpResponseError: + """ + error_map: MutableMapping = { + 401: ClientAuthenticationError, + 404: ResourceNotFoundError, + 409: ResourceExistsError, + 304: ResourceNotModifiedError, + } + error_map.update(kwargs.pop("error_map", {}) or {}) + + _headers = case_insensitive_dict(kwargs.pop("headers", {}) or {}) + _params = kwargs.pop("params", {}) or {} + + content_type: Optional[str] = kwargs.pop("content_type", _headers.pop("content-type", None)) + cls: ClsType[List[_models.FaceDetectionResult]] = kwargs.pop("cls", None) + + if body is _Unset: + if url is _Unset: + raise TypeError("missing required argument: url") + body = {"url": url} + body = {k: v for k, v in body.items() if v is not None} + content_type = content_type or "application/json" + _content = None + if isinstance(body, (IOBase, bytes)): + _content = body + else: + _content = json.dumps(body, cls=SdkJSONEncoder, exclude_readonly=True) # type: ignore + + _request = build_face_detect_from_url_request( + detection_model=detection_model, + recognition_model=recognition_model, + return_face_id=return_face_id, + return_face_attributes=return_face_attributes, + return_face_landmarks=return_face_landmarks, + return_recognition_model=return_recognition_model, + face_id_time_to_live=face_id_time_to_live, + content_type=content_type, + content=_content, + headers=_headers, + params=_params, + ) + path_format_arguments = { + "endpoint": self._serialize.url("self._config.endpoint", self._config.endpoint, "str", skip_quote=True), + "apiVersion": self._serialize.url("self._config.api_version", self._config.api_version, "str"), + } + _request.url = self._client.format_url(_request.url, **path_format_arguments) + + _stream = kwargs.pop("stream", False) + pipeline_response: PipelineResponse = await self._client._pipeline.run( # type: ignore # pylint: disable=protected-access + _request, stream=_stream, **kwargs + ) + + response = pipeline_response.http_response + + if response.status_code not in [200]: + if _stream: + try: + await response.read() # Load the body in memory and close the socket + except (StreamConsumedError, StreamClosedError): + pass + map_error(status_code=response.status_code, response=response, error_map=error_map) + error = _deserialize(_models.FaceErrorResponse, response.json()) + raise HttpResponseError(response=response, model=error) + + if _stream: + deserialized = response.iter_bytes() + else: + deserialized = _deserialize(List[_models.FaceDetectionResult], response.json()) + + if cls: + return cls(pipeline_response, deserialized, {}) # type: ignore + + return deserialized # type: ignore + + @distributed_trace_async + async def _detect( + self, + image_content: bytes, + *, + detection_model: Optional[Union[str, _models.FaceDetectionModel]] = None, + recognition_model: Optional[Union[str, _models.FaceRecognitionModel]] = None, + return_face_id: Optional[bool] = None, + return_face_attributes: 
Optional[List[Union[str, _models.FaceAttributeType]]] = None, + return_face_landmarks: Optional[bool] = None, + return_recognition_model: Optional[bool] = None, + face_id_time_to_live: Optional[int] = None, + **kwargs: Any + ) -> List[_models.FaceDetectionResult]: + """Detect human faces in an image, return face rectangles, and optionally with faceIds, landmarks, + and attributes. + + Please refer to https://learn.microsoft.com/rest/api/face/face-detection-operations/detect for + more details. + + :param image_content: The input image binary. Required. + :type image_content: bytes + :keyword detection_model: The 'detectionModel' associated with the detected faceIds. Supported + 'detectionModel' values include 'detection_01', 'detection_02' and 'detection_03'. The default + value is 'detection_01'. 'detection_03' is recommended since its accuracy is improved on + smaller faces (64x64 pixels) and rotated face orientations. Known values are: "detection_01", + "detection_02", and "detection_03". Default value is None. + :paramtype detection_model: str or ~azure.ai.vision.face.models.FaceDetectionModel + :keyword recognition_model: The 'recognitionModel' associated with the detected faceIds. + Supported 'recognitionModel' values include 'recognition_01', 'recognition_02', + 'recognition_03' or 'recognition_04'. The default value is 'recognition_01'. 'recognition_04' + is recommended since its accuracy is improved on faces wearing masks compared with + 'recognition_03', and its overall accuracy is improved compared with 'recognition_01' and + 'recognition_02'. Known values are: "recognition_01", "recognition_02", "recognition_03", and + "recognition_04". Default value is None. + :paramtype recognition_model: str or ~azure.ai.vision.face.models.FaceRecognitionModel + :keyword return_face_id: Return faceIds of the detected faces or not. The default value is + true. Default value is None. + :paramtype return_face_id: bool + :keyword return_face_attributes: Analyze and return the one or more specified face attributes + in the comma-separated string like 'returnFaceAttributes=headPose,glasses'. Face attribute + analysis has additional computational and time cost. Default value is None. + :paramtype return_face_attributes: list[str or ~azure.ai.vision.face.models.FaceAttributeType] + :keyword return_face_landmarks: Return face landmarks of the detected faces or not. The default + value is false. Default value is None. + :paramtype return_face_landmarks: bool + :keyword return_recognition_model: Return 'recognitionModel' or not. The default value is + false. This is only applicable when returnFaceId = true. Default value is None. + :paramtype return_recognition_model: bool + :keyword face_id_time_to_live: The number of seconds for the face ID being cached. Supported + range from 60 seconds up to 86400 seconds. The default value is 86400 (24 hours). Default value + is None. 
+ :paramtype face_id_time_to_live: int + :return: list of FaceDetectionResult + :rtype: list[~azure.ai.vision.face.models.FaceDetectionResult] + :raises ~azure.core.exceptions.HttpResponseError: + """ + error_map: MutableMapping = { + 401: ClientAuthenticationError, + 404: ResourceNotFoundError, + 409: ResourceExistsError, + 304: ResourceNotModifiedError, + } + error_map.update(kwargs.pop("error_map", {}) or {}) + + _headers = case_insensitive_dict(kwargs.pop("headers", {}) or {}) + _params = kwargs.pop("params", {}) or {} + + content_type: str = kwargs.pop("content_type", _headers.pop("content-type", "application/octet-stream")) + cls: ClsType[List[_models.FaceDetectionResult]] = kwargs.pop("cls", None) + + _content = image_content + + _request = build_face_detect_request( + detection_model=detection_model, + recognition_model=recognition_model, + return_face_id=return_face_id, + return_face_attributes=return_face_attributes, + return_face_landmarks=return_face_landmarks, + return_recognition_model=return_recognition_model, + face_id_time_to_live=face_id_time_to_live, + content_type=content_type, + content=_content, + headers=_headers, + params=_params, + ) + path_format_arguments = { + "endpoint": self._serialize.url("self._config.endpoint", self._config.endpoint, "str", skip_quote=True), + "apiVersion": self._serialize.url("self._config.api_version", self._config.api_version, "str"), + } + _request.url = self._client.format_url(_request.url, **path_format_arguments) + + _stream = kwargs.pop("stream", False) + pipeline_response: PipelineResponse = await self._client._pipeline.run( # type: ignore # pylint: disable=protected-access + _request, stream=_stream, **kwargs + ) + + response = pipeline_response.http_response + + if response.status_code not in [200]: + if _stream: + try: + await response.read() # Load the body in memory and close the socket + except (StreamConsumedError, StreamClosedError): + pass + map_error(status_code=response.status_code, response=response, error_map=error_map) + error = _deserialize(_models.FaceErrorResponse, response.json()) + raise HttpResponseError(response=response, model=error) + + if _stream: + deserialized = response.iter_bytes() + else: + deserialized = _deserialize(List[_models.FaceDetectionResult], response.json()) + + if cls: + return cls(pipeline_response, deserialized, {}) # type: ignore + + return deserialized # type: ignore + + @overload + async def find_similar( + self, body: JSON, *, content_type: str = "application/json", **kwargs: Any + ) -> List[_models.FaceFindSimilarResult]: + """Given query face's faceId, to search the similar-looking faces from a faceId array. A faceId + array contains the faces created by Detect. + + Please refer to + https://learn.microsoft.com/rest/api/face/face-recognition-operations/find-similar for more + details. + + :param body: Required. + :type body: JSON + :keyword content_type: Body Parameter content-type. Content type parameter for JSON body. + Default value is "application/json". 
+ :paramtype content_type: str + :return: list of FaceFindSimilarResult + :rtype: list[~azure.ai.vision.face.models.FaceFindSimilarResult] + :raises ~azure.core.exceptions.HttpResponseError: + """ + + @overload + async def find_similar( + self, + *, + face_id: str, + face_ids: List[str], + content_type: str = "application/json", + max_num_of_candidates_returned: Optional[int] = None, + mode: Optional[Union[str, _models.FindSimilarMatchMode]] = None, + **kwargs: Any + ) -> List[_models.FaceFindSimilarResult]: + """Given query face's faceId, to search the similar-looking faces from a faceId array. A faceId + array contains the faces created by Detect. + + Please refer to + https://learn.microsoft.com/rest/api/face/face-recognition-operations/find-similar for more + details. + + :keyword face_id: faceId of the query face. User needs to call "Detect" first to get a valid + faceId. Note that this faceId is not persisted and will expire 24 hours after the detection + call. Required. + :paramtype face_id: str + :keyword face_ids: An array of candidate faceIds. All of them are created by "Detect" and the + faceIds will expire 24 hours after the detection call. The number of faceIds is limited to + 1000. Required. + :paramtype face_ids: list[str] + :keyword content_type: Body Parameter content-type. Content type parameter for JSON body. + Default value is "application/json". + :paramtype content_type: str + :keyword max_num_of_candidates_returned: The number of top similar faces returned. The valid + range is [1, 1000]. Default value is 20. Default value is None. + :paramtype max_num_of_candidates_returned: int + :keyword mode: Similar face searching mode. It can be 'matchPerson' or 'matchFace'. Default + value is 'matchPerson'. Known values are: "matchPerson" and "matchFace". Default value is None. + :paramtype mode: str or ~azure.ai.vision.face.models.FindSimilarMatchMode + :return: list of FaceFindSimilarResult + :rtype: list[~azure.ai.vision.face.models.FaceFindSimilarResult] + :raises ~azure.core.exceptions.HttpResponseError: + """ + + @overload + async def find_similar( + self, body: IO[bytes], *, content_type: str = "application/json", **kwargs: Any + ) -> List[_models.FaceFindSimilarResult]: + """Given query face's faceId, to search the similar-looking faces from a faceId array. A faceId + array contains the faces created by Detect. + + Please refer to + https://learn.microsoft.com/rest/api/face/face-recognition-operations/find-similar for more + details. + + :param body: Required. + :type body: IO[bytes] + :keyword content_type: Body Parameter content-type. Content type parameter for binary body. + Default value is "application/json". + :paramtype content_type: str + :return: list of FaceFindSimilarResult + :rtype: list[~azure.ai.vision.face.models.FaceFindSimilarResult] + :raises ~azure.core.exceptions.HttpResponseError: + """ + + @distributed_trace_async + async def find_similar( + self, + body: Union[JSON, IO[bytes]] = _Unset, + *, + face_id: str = _Unset, + face_ids: List[str] = _Unset, + max_num_of_candidates_returned: Optional[int] = None, + mode: Optional[Union[str, _models.FindSimilarMatchMode]] = None, + **kwargs: Any + ) -> List[_models.FaceFindSimilarResult]: + """Given query face's faceId, to search the similar-looking faces from a faceId array. A faceId + array contains the faces created by Detect. + + Please refer to + https://learn.microsoft.com/rest/api/face/face-recognition-operations/find-similar for more + details. 
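+
+        A minimal usage sketch (illustrative only; it assumes ``face_client`` is an
+        authenticated async ``FaceClient`` and that ``query_id`` and ``candidate_ids``
+        were returned by earlier "Detect" calls):
+
+        .. code-block:: python
+
+            results = await face_client.find_similar(
+                face_id=query_id,
+                face_ids=candidate_ids,
+                max_num_of_candidates_returned=5,
+                mode="matchPerson",
+            )
+            for match in results:
+                print(match.as_dict())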
+ + :param body: Is either a JSON type or a IO[bytes] type. Required. + :type body: JSON or IO[bytes] + :keyword face_id: faceId of the query face. User needs to call "Detect" first to get a valid + faceId. Note that this faceId is not persisted and will expire 24 hours after the detection + call. Required. + :paramtype face_id: str + :keyword face_ids: An array of candidate faceIds. All of them are created by "Detect" and the + faceIds will expire 24 hours after the detection call. The number of faceIds is limited to + 1000. Required. + :paramtype face_ids: list[str] + :keyword max_num_of_candidates_returned: The number of top similar faces returned. The valid + range is [1, 1000]. Default value is 20. Default value is None. + :paramtype max_num_of_candidates_returned: int + :keyword mode: Similar face searching mode. It can be 'matchPerson' or 'matchFace'. Default + value is 'matchPerson'. Known values are: "matchPerson" and "matchFace". Default value is None. + :paramtype mode: str or ~azure.ai.vision.face.models.FindSimilarMatchMode + :return: list of FaceFindSimilarResult + :rtype: list[~azure.ai.vision.face.models.FaceFindSimilarResult] + :raises ~azure.core.exceptions.HttpResponseError: + """ + error_map: MutableMapping = { + 401: ClientAuthenticationError, + 404: ResourceNotFoundError, + 409: ResourceExistsError, + 304: ResourceNotModifiedError, + } + error_map.update(kwargs.pop("error_map", {}) or {}) + + _headers = case_insensitive_dict(kwargs.pop("headers", {}) or {}) + _params = kwargs.pop("params", {}) or {} + + content_type: Optional[str] = kwargs.pop("content_type", _headers.pop("Content-Type", None)) + cls: ClsType[List[_models.FaceFindSimilarResult]] = kwargs.pop("cls", None) + + if body is _Unset: + if face_id is _Unset: + raise TypeError("missing required argument: face_id") + if face_ids is _Unset: + raise TypeError("missing required argument: face_ids") + body = { + "faceId": face_id, + "faceIds": face_ids, + "maxNumOfCandidatesReturned": max_num_of_candidates_returned, + "mode": mode, + } + body = {k: v for k, v in body.items() if v is not None} + content_type = content_type or "application/json" + _content = None + if isinstance(body, (IOBase, bytes)): + _content = body + else: + _content = json.dumps(body, cls=SdkJSONEncoder, exclude_readonly=True) # type: ignore + + _request = build_face_find_similar_request( + content_type=content_type, + content=_content, + headers=_headers, + params=_params, + ) + path_format_arguments = { + "endpoint": self._serialize.url("self._config.endpoint", self._config.endpoint, "str", skip_quote=True), + "apiVersion": self._serialize.url("self._config.api_version", self._config.api_version, "str"), + } + _request.url = self._client.format_url(_request.url, **path_format_arguments) + + _stream = kwargs.pop("stream", False) + pipeline_response: PipelineResponse = await self._client._pipeline.run( # type: ignore # pylint: disable=protected-access + _request, stream=_stream, **kwargs + ) + + response = pipeline_response.http_response + + if response.status_code not in [200]: + if _stream: + try: + await response.read() # Load the body in memory and close the socket + except (StreamConsumedError, StreamClosedError): + pass + map_error(status_code=response.status_code, response=response, error_map=error_map) + error = _deserialize(_models.FaceErrorResponse, response.json()) + raise HttpResponseError(response=response, model=error) + + if _stream: + deserialized = response.iter_bytes() + else: + deserialized = 
_deserialize(List[_models.FaceFindSimilarResult], response.json())
+
+        if cls:
+            return cls(pipeline_response, deserialized, {})  # type: ignore
+
+        return deserialized  # type: ignore
+
+    @overload
+    async def verify_face_to_face(
+        self, body: JSON, *, content_type: str = "application/json", **kwargs: Any
+    ) -> _models.FaceVerificationResult:
+        """Verify whether two faces belong to the same person.
+
+        Please refer to
+        https://learn.microsoft.com/rest/api/face/face-recognition-operations/verify-face-to-face for
+        more details.
+
+        :param body: Required.
+        :type body: JSON
+        :keyword content_type: Body Parameter content-type. Content type parameter for JSON body.
+         Default value is "application/json".
+        :paramtype content_type: str
+        :return: FaceVerificationResult. The FaceVerificationResult is compatible with MutableMapping
+        :rtype: ~azure.ai.vision.face.models.FaceVerificationResult
+        :raises ~azure.core.exceptions.HttpResponseError:
+        """
+
+    @overload
+    async def verify_face_to_face(
+        self, *, face_id1: str, face_id2: str, content_type: str = "application/json", **kwargs: Any
+    ) -> _models.FaceVerificationResult:
+        """Verify whether two faces belong to the same person.
+
+        Please refer to
+        https://learn.microsoft.com/rest/api/face/face-recognition-operations/verify-face-to-face for
+        more details.
+
+        :keyword face_id1: The faceId of one face, which comes from "Detect". Required.
+        :paramtype face_id1: str
+        :keyword face_id2: The faceId of another face, which comes from "Detect". Required.
+        :paramtype face_id2: str
+        :keyword content_type: Body Parameter content-type. Content type parameter for JSON body.
+         Default value is "application/json".
+        :paramtype content_type: str
+        :return: FaceVerificationResult. The FaceVerificationResult is compatible with MutableMapping
+        :rtype: ~azure.ai.vision.face.models.FaceVerificationResult
+        :raises ~azure.core.exceptions.HttpResponseError:
+        """
+
+    @overload
+    async def verify_face_to_face(
+        self, body: IO[bytes], *, content_type: str = "application/json", **kwargs: Any
+    ) -> _models.FaceVerificationResult:
+        """Verify whether two faces belong to the same person.
+
+        Please refer to
+        https://learn.microsoft.com/rest/api/face/face-recognition-operations/verify-face-to-face for
+        more details.
+
+        :param body: Required.
+        :type body: IO[bytes]
+        :keyword content_type: Body Parameter content-type. Content type parameter for binary body.
+         Default value is "application/json".
+        :paramtype content_type: str
+        :return: FaceVerificationResult. The FaceVerificationResult is compatible with MutableMapping
+        :rtype: ~azure.ai.vision.face.models.FaceVerificationResult
+        :raises ~azure.core.exceptions.HttpResponseError:
+        """
+
+    @distributed_trace_async
+    async def verify_face_to_face(
+        self, body: Union[JSON, IO[bytes]] = _Unset, *, face_id1: str = _Unset, face_id2: str = _Unset, **kwargs: Any
+    ) -> _models.FaceVerificationResult:
+        """Verify whether two faces belong to the same person.
+
+        Please refer to
+        https://learn.microsoft.com/rest/api/face/face-recognition-operations/verify-face-to-face for
+        more details.
+
+        :param body: Is either a JSON type or an IO[bytes] type. Required.
+        :type body: JSON or IO[bytes]
+        :keyword face_id1: The faceId of one face, which comes from "Detect". Required.
+        :paramtype face_id1: str
+        :keyword face_id2: The faceId of another face, which comes from "Detect". Required.
+        :paramtype face_id2: str
+        :return: FaceVerificationResult.
The FaceVerificationResult is compatible with MutableMapping + :rtype: ~azure.ai.vision.face.models.FaceVerificationResult + :raises ~azure.core.exceptions.HttpResponseError: + """ + error_map: MutableMapping = { + 401: ClientAuthenticationError, + 404: ResourceNotFoundError, + 409: ResourceExistsError, + 304: ResourceNotModifiedError, + } + error_map.update(kwargs.pop("error_map", {}) or {}) + + _headers = case_insensitive_dict(kwargs.pop("headers", {}) or {}) + _params = kwargs.pop("params", {}) or {} + + content_type: Optional[str] = kwargs.pop("content_type", _headers.pop("Content-Type", None)) + cls: ClsType[_models.FaceVerificationResult] = kwargs.pop("cls", None) + + if body is _Unset: + if face_id1 is _Unset: + raise TypeError("missing required argument: face_id1") + if face_id2 is _Unset: + raise TypeError("missing required argument: face_id2") + body = {"faceId1": face_id1, "faceId2": face_id2} + body = {k: v for k, v in body.items() if v is not None} + content_type = content_type or "application/json" + _content = None + if isinstance(body, (IOBase, bytes)): + _content = body + else: + _content = json.dumps(body, cls=SdkJSONEncoder, exclude_readonly=True) # type: ignore + + _request = build_face_verify_face_to_face_request( + content_type=content_type, + content=_content, + headers=_headers, + params=_params, + ) + path_format_arguments = { + "endpoint": self._serialize.url("self._config.endpoint", self._config.endpoint, "str", skip_quote=True), + "apiVersion": self._serialize.url("self._config.api_version", self._config.api_version, "str"), + } + _request.url = self._client.format_url(_request.url, **path_format_arguments) + + _stream = kwargs.pop("stream", False) + pipeline_response: PipelineResponse = await self._client._pipeline.run( # type: ignore # pylint: disable=protected-access + _request, stream=_stream, **kwargs + ) + + response = pipeline_response.http_response + + if response.status_code not in [200]: + if _stream: + try: + await response.read() # Load the body in memory and close the socket + except (StreamConsumedError, StreamClosedError): + pass + map_error(status_code=response.status_code, response=response, error_map=error_map) + error = _deserialize(_models.FaceErrorResponse, response.json()) + raise HttpResponseError(response=response, model=error) + + if _stream: + deserialized = response.iter_bytes() + else: + deserialized = _deserialize(_models.FaceVerificationResult, response.json()) + + if cls: + return cls(pipeline_response, deserialized, {}) # type: ignore + + return deserialized # type: ignore + + @overload + async def group( + self, body: JSON, *, content_type: str = "application/json", **kwargs: Any + ) -> _models.FaceGroupingResult: + """Divide candidate faces into groups based on face similarity. + + Please refer to https://learn.microsoft.com/rest/api/face/face-recognition-operations/group for + more details. + + :param body: Required. + :type body: JSON + :keyword content_type: Body Parameter content-type. Content type parameter for JSON body. + Default value is "application/json". + :paramtype content_type: str + :return: FaceGroupingResult. The FaceGroupingResult is compatible with MutableMapping + :rtype: ~azure.ai.vision.face.models.FaceGroupingResult + :raises ~azure.core.exceptions.HttpResponseError: + """ + + @overload + async def group( + self, *, face_ids: List[str], content_type: str = "application/json", **kwargs: Any + ) -> _models.FaceGroupingResult: + """Divide candidate faces into groups based on face similarity. 
+ + Please refer to https://learn.microsoft.com/rest/api/face/face-recognition-operations/group for + more details. + + :keyword face_ids: Array of candidate faceIds created by "Detect". The maximum is 1000 faces. + Required. + :paramtype face_ids: list[str] + :keyword content_type: Body Parameter content-type. Content type parameter for JSON body. + Default value is "application/json". + :paramtype content_type: str + :return: FaceGroupingResult. The FaceGroupingResult is compatible with MutableMapping + :rtype: ~azure.ai.vision.face.models.FaceGroupingResult + :raises ~azure.core.exceptions.HttpResponseError: + """ + + @overload + async def group( + self, body: IO[bytes], *, content_type: str = "application/json", **kwargs: Any + ) -> _models.FaceGroupingResult: + """Divide candidate faces into groups based on face similarity. + + Please refer to https://learn.microsoft.com/rest/api/face/face-recognition-operations/group for + more details. + + :param body: Required. + :type body: IO[bytes] + :keyword content_type: Body Parameter content-type. Content type parameter for binary body. + Default value is "application/json". + :paramtype content_type: str + :return: FaceGroupingResult. The FaceGroupingResult is compatible with MutableMapping + :rtype: ~azure.ai.vision.face.models.FaceGroupingResult + :raises ~azure.core.exceptions.HttpResponseError: + """ + + @distributed_trace_async + async def group( + self, body: Union[JSON, IO[bytes]] = _Unset, *, face_ids: List[str] = _Unset, **kwargs: Any + ) -> _models.FaceGroupingResult: + """Divide candidate faces into groups based on face similarity. + + Please refer to https://learn.microsoft.com/rest/api/face/face-recognition-operations/group for + more details. + + :param body: Is either a JSON type or a IO[bytes] type. Required. + :type body: JSON or IO[bytes] + :keyword face_ids: Array of candidate faceIds created by "Detect". The maximum is 1000 faces. + Required. + :paramtype face_ids: list[str] + :return: FaceGroupingResult. 
The FaceGroupingResult is compatible with MutableMapping + :rtype: ~azure.ai.vision.face.models.FaceGroupingResult + :raises ~azure.core.exceptions.HttpResponseError: + """ + error_map: MutableMapping = { + 401: ClientAuthenticationError, + 404: ResourceNotFoundError, + 409: ResourceExistsError, + 304: ResourceNotModifiedError, + } + error_map.update(kwargs.pop("error_map", {}) or {}) + + _headers = case_insensitive_dict(kwargs.pop("headers", {}) or {}) + _params = kwargs.pop("params", {}) or {} + + content_type: Optional[str] = kwargs.pop("content_type", _headers.pop("Content-Type", None)) + cls: ClsType[_models.FaceGroupingResult] = kwargs.pop("cls", None) + + if body is _Unset: + if face_ids is _Unset: + raise TypeError("missing required argument: face_ids") + body = {"faceIds": face_ids} + body = {k: v for k, v in body.items() if v is not None} + content_type = content_type or "application/json" + _content = None + if isinstance(body, (IOBase, bytes)): + _content = body + else: + _content = json.dumps(body, cls=SdkJSONEncoder, exclude_readonly=True) # type: ignore + + _request = build_face_group_request( + content_type=content_type, + content=_content, + headers=_headers, + params=_params, + ) + path_format_arguments = { + "endpoint": self._serialize.url("self._config.endpoint", self._config.endpoint, "str", skip_quote=True), + "apiVersion": self._serialize.url("self._config.api_version", self._config.api_version, "str"), + } + _request.url = self._client.format_url(_request.url, **path_format_arguments) + + _stream = kwargs.pop("stream", False) + pipeline_response: PipelineResponse = await self._client._pipeline.run( # type: ignore # pylint: disable=protected-access + _request, stream=_stream, **kwargs + ) + + response = pipeline_response.http_response + + if response.status_code not in [200]: + if _stream: + try: + await response.read() # Load the body in memory and close the socket + except (StreamConsumedError, StreamClosedError): + pass + map_error(status_code=response.status_code, response=response, error_map=error_map) + error = _deserialize(_models.FaceErrorResponse, response.json()) + raise HttpResponseError(response=response, model=error) + + if _stream: + deserialized = response.iter_bytes() + else: + deserialized = _deserialize(_models.FaceGroupingResult, response.json()) + + if cls: + return cls(pipeline_response, deserialized, {}) # type: ignore + + return deserialized # type: ignore + + @overload + async def find_similar_from_large_face_list( + self, body: JSON, *, content_type: str = "application/json", **kwargs: Any + ) -> List[_models.FaceFindSimilarResult]: + """Given query face's faceId, to search the similar-looking faces from a Large Face List. A + 'largeFaceListId' is created by Create Large Face List. + + Please refer to + https://learn.microsoft.com/rest/api/face/face-recognition-operations/find-similar-from-large-face-list + for more details. + + :param body: Required. + :type body: JSON + :keyword content_type: Body Parameter content-type. Content type parameter for JSON body. + Default value is "application/json". 
+ :paramtype content_type: str + :return: list of FaceFindSimilarResult + :rtype: list[~azure.ai.vision.face.models.FaceFindSimilarResult] + :raises ~azure.core.exceptions.HttpResponseError: + """ + + @overload + async def find_similar_from_large_face_list( + self, + *, + face_id: str, + large_face_list_id: str, + content_type: str = "application/json", + max_num_of_candidates_returned: Optional[int] = None, + mode: Optional[Union[str, _models.FindSimilarMatchMode]] = None, + **kwargs: Any + ) -> List[_models.FaceFindSimilarResult]: + """Given query face's faceId, to search the similar-looking faces from a Large Face List. A + 'largeFaceListId' is created by Create Large Face List. + + Please refer to + https://learn.microsoft.com/rest/api/face/face-recognition-operations/find-similar-from-large-face-list + for more details. + + :keyword face_id: faceId of the query face. User needs to call "Detect" first to get a valid + faceId. Note that this faceId is not persisted and will expire 24 hours after the detection + call. Required. + :paramtype face_id: str + :keyword large_face_list_id: An existing user-specified unique candidate Large Face List, + created in "Create Large Face List". Large Face List contains a set of persistedFaceIds which + are persisted and will never expire. Required. + :paramtype large_face_list_id: str + :keyword content_type: Body Parameter content-type. Content type parameter for JSON body. + Default value is "application/json". + :paramtype content_type: str + :keyword max_num_of_candidates_returned: The number of top similar faces returned. The valid + range is [1, 1000]. Default value is 20. Default value is None. + :paramtype max_num_of_candidates_returned: int + :keyword mode: Similar face searching mode. It can be 'matchPerson' or 'matchFace'. Default + value is 'matchPerson'. Known values are: "matchPerson" and "matchFace". Default value is None. + :paramtype mode: str or ~azure.ai.vision.face.models.FindSimilarMatchMode + :return: list of FaceFindSimilarResult + :rtype: list[~azure.ai.vision.face.models.FaceFindSimilarResult] + :raises ~azure.core.exceptions.HttpResponseError: + """ + + @overload + async def find_similar_from_large_face_list( + self, body: IO[bytes], *, content_type: str = "application/json", **kwargs: Any + ) -> List[_models.FaceFindSimilarResult]: + """Given query face's faceId, to search the similar-looking faces from a Large Face List. A + 'largeFaceListId' is created by Create Large Face List. + + Please refer to + https://learn.microsoft.com/rest/api/face/face-recognition-operations/find-similar-from-large-face-list + for more details. + + :param body: Required. + :type body: IO[bytes] + :keyword content_type: Body Parameter content-type. Content type parameter for binary body. + Default value is "application/json". + :paramtype content_type: str + :return: list of FaceFindSimilarResult + :rtype: list[~azure.ai.vision.face.models.FaceFindSimilarResult] + :raises ~azure.core.exceptions.HttpResponseError: + """ + + @distributed_trace_async + async def find_similar_from_large_face_list( + self, + body: Union[JSON, IO[bytes]] = _Unset, + *, + face_id: str = _Unset, + large_face_list_id: str = _Unset, + max_num_of_candidates_returned: Optional[int] = None, + mode: Optional[Union[str, _models.FindSimilarMatchMode]] = None, + **kwargs: Any + ) -> List[_models.FaceFindSimilarResult]: + """Given query face's faceId, to search the similar-looking faces from a Large Face List. A + 'largeFaceListId' is created by Create Large Face List. 
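+
+        A minimal usage sketch (illustrative only; it assumes ``face_client`` is an
+        authenticated async ``FaceClient``, ``query_id`` came from an earlier "Detect"
+        call, and ``"my-large-face-list"`` names a Large Face List that has already
+        been trained):
+
+        .. code-block:: python
+
+            results = await face_client.find_similar_from_large_face_list(
+                face_id=query_id,
+                large_face_list_id="my-large-face-list",
+                max_num_of_candidates_returned=5,
+            )
+            for match in results:
+                print(match.as_dict())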
+ + Please refer to + https://learn.microsoft.com/rest/api/face/face-recognition-operations/find-similar-from-large-face-list + for more details. + + :param body: Is either a JSON type or a IO[bytes] type. Required. + :type body: JSON or IO[bytes] + :keyword face_id: faceId of the query face. User needs to call "Detect" first to get a valid + faceId. Note that this faceId is not persisted and will expire 24 hours after the detection + call. Required. + :paramtype face_id: str + :keyword large_face_list_id: An existing user-specified unique candidate Large Face List, + created in "Create Large Face List". Large Face List contains a set of persistedFaceIds which + are persisted and will never expire. Required. + :paramtype large_face_list_id: str + :keyword max_num_of_candidates_returned: The number of top similar faces returned. The valid + range is [1, 1000]. Default value is 20. Default value is None. + :paramtype max_num_of_candidates_returned: int + :keyword mode: Similar face searching mode. It can be 'matchPerson' or 'matchFace'. Default + value is 'matchPerson'. Known values are: "matchPerson" and "matchFace". Default value is None. + :paramtype mode: str or ~azure.ai.vision.face.models.FindSimilarMatchMode + :return: list of FaceFindSimilarResult + :rtype: list[~azure.ai.vision.face.models.FaceFindSimilarResult] + :raises ~azure.core.exceptions.HttpResponseError: + """ + error_map: MutableMapping = { + 401: ClientAuthenticationError, + 404: ResourceNotFoundError, + 409: ResourceExistsError, + 304: ResourceNotModifiedError, + } + error_map.update(kwargs.pop("error_map", {}) or {}) + + _headers = case_insensitive_dict(kwargs.pop("headers", {}) or {}) + _params = kwargs.pop("params", {}) or {} + + content_type: Optional[str] = kwargs.pop("content_type", _headers.pop("Content-Type", None)) + cls: ClsType[List[_models.FaceFindSimilarResult]] = kwargs.pop("cls", None) + + if body is _Unset: + if face_id is _Unset: + raise TypeError("missing required argument: face_id") + if large_face_list_id is _Unset: + raise TypeError("missing required argument: large_face_list_id") + body = { + "faceId": face_id, + "largeFaceListId": large_face_list_id, + "maxNumOfCandidatesReturned": max_num_of_candidates_returned, + "mode": mode, + } + body = {k: v for k, v in body.items() if v is not None} + content_type = content_type or "application/json" + _content = None + if isinstance(body, (IOBase, bytes)): + _content = body + else: + _content = json.dumps(body, cls=SdkJSONEncoder, exclude_readonly=True) # type: ignore + + _request = build_face_find_similar_from_large_face_list_request( + content_type=content_type, + content=_content, + headers=_headers, + params=_params, + ) + path_format_arguments = { + "endpoint": self._serialize.url("self._config.endpoint", self._config.endpoint, "str", skip_quote=True), + "apiVersion": self._serialize.url("self._config.api_version", self._config.api_version, "str"), + } + _request.url = self._client.format_url(_request.url, **path_format_arguments) + + _stream = kwargs.pop("stream", False) + pipeline_response: PipelineResponse = await self._client._pipeline.run( # type: ignore # pylint: disable=protected-access + _request, stream=_stream, **kwargs + ) + + response = pipeline_response.http_response + + if response.status_code not in [200]: + if _stream: + try: + await response.read() # Load the body in memory and close the socket + except (StreamConsumedError, StreamClosedError): + pass + map_error(status_code=response.status_code, response=response, error_map=error_map) 
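+            # map_error raises the specific exception mapped above (401/404/409/304);
+            # any other non-200 status falls through to the generic HttpResponseError
+            # raised below, carrying the deserialized FaceErrorResponse.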
+ error = _deserialize(_models.FaceErrorResponse, response.json()) + raise HttpResponseError(response=response, model=error) + + if _stream: + deserialized = response.iter_bytes() + else: + deserialized = _deserialize(List[_models.FaceFindSimilarResult], response.json()) + + if cls: + return cls(pipeline_response, deserialized, {}) # type: ignore + + return deserialized # type: ignore + + @overload + async def identify_from_large_person_group( + self, body: JSON, *, content_type: str = "application/json", **kwargs: Any + ) -> List[_models.FaceIdentificationResult]: + """1-to-many identification to find the closest matches of the specific query person face from a + Large Person Group. + + Please refer to + https://learn.microsoft.com/rest/api/face/face-recognition-operations/identify-from-person-group + for more details. + + :param body: Required. + :type body: JSON + :keyword content_type: Body Parameter content-type. Content type parameter for JSON body. + Default value is "application/json". + :paramtype content_type: str + :return: list of FaceIdentificationResult + :rtype: list[~azure.ai.vision.face.models.FaceIdentificationResult] + :raises ~azure.core.exceptions.HttpResponseError: + """ + + @overload + async def identify_from_large_person_group( + self, + *, + face_ids: List[str], + large_person_group_id: str, + content_type: str = "application/json", + max_num_of_candidates_returned: Optional[int] = None, + confidence_threshold: Optional[float] = None, + **kwargs: Any + ) -> List[_models.FaceIdentificationResult]: + """1-to-many identification to find the closest matches of the specific query person face from a + Large Person Group. + + Please refer to + https://learn.microsoft.com/rest/api/face/face-recognition-operations/identify-from-person-group + for more details. + + :keyword face_ids: Array of query faces faceIds, created by the "Detect". Each of the faces are + identified independently. The valid number of faceIds is between [1, 10]. Required. + :paramtype face_ids: list[str] + :keyword large_person_group_id: largePersonGroupId of the target Large Person Group, created by + "Create Large Person Group". Parameter personGroupId and largePersonGroupId should not be + provided at the same time. Required. + :paramtype large_person_group_id: str + :keyword content_type: Body Parameter content-type. Content type parameter for JSON body. + Default value is "application/json". + :paramtype content_type: str + :keyword max_num_of_candidates_returned: The range of maxNumOfCandidatesReturned is between 1 + and 100. Default value is 10. Default value is None. + :paramtype max_num_of_candidates_returned: int + :keyword confidence_threshold: Customized identification confidence threshold, in the range of + [0, 1]. Advanced user can tweak this value to override default internal threshold for better + precision on their scenario data. Note there is no guarantee of this threshold value working on + other data and after algorithm updates. Default value is None. + :paramtype confidence_threshold: float + :return: list of FaceIdentificationResult + :rtype: list[~azure.ai.vision.face.models.FaceIdentificationResult] + :raises ~azure.core.exceptions.HttpResponseError: + """ + + @overload + async def identify_from_large_person_group( + self, body: IO[bytes], *, content_type: str = "application/json", **kwargs: Any + ) -> List[_models.FaceIdentificationResult]: + """1-to-many identification to find the closest matches of the specific query person face from a + Large Person Group. 
+ + Please refer to + https://learn.microsoft.com/rest/api/face/face-recognition-operations/identify-from-person-group + for more details. + + :param body: Required. + :type body: IO[bytes] + :keyword content_type: Body Parameter content-type. Content type parameter for binary body. + Default value is "application/json". + :paramtype content_type: str + :return: list of FaceIdentificationResult + :rtype: list[~azure.ai.vision.face.models.FaceIdentificationResult] + :raises ~azure.core.exceptions.HttpResponseError: + """ + + @distributed_trace_async + async def identify_from_large_person_group( + self, + body: Union[JSON, IO[bytes]] = _Unset, + *, + face_ids: List[str] = _Unset, + large_person_group_id: str = _Unset, + max_num_of_candidates_returned: Optional[int] = None, + confidence_threshold: Optional[float] = None, + **kwargs: Any + ) -> List[_models.FaceIdentificationResult]: + """1-to-many identification to find the closest matches of the specific query person face from a + Large Person Group. + + Please refer to + https://learn.microsoft.com/rest/api/face/face-recognition-operations/identify-from-person-group + for more details. + + :param body: Is either a JSON type or a IO[bytes] type. Required. + :type body: JSON or IO[bytes] + :keyword face_ids: Array of query faces faceIds, created by the "Detect". Each of the faces are + identified independently. The valid number of faceIds is between [1, 10]. Required. + :paramtype face_ids: list[str] + :keyword large_person_group_id: largePersonGroupId of the target Large Person Group, created by + "Create Large Person Group". Parameter personGroupId and largePersonGroupId should not be + provided at the same time. Required. + :paramtype large_person_group_id: str + :keyword max_num_of_candidates_returned: The range of maxNumOfCandidatesReturned is between 1 + and 100. Default value is 10. Default value is None. + :paramtype max_num_of_candidates_returned: int + :keyword confidence_threshold: Customized identification confidence threshold, in the range of + [0, 1]. Advanced user can tweak this value to override default internal threshold for better + precision on their scenario data. Note there is no guarantee of this threshold value working on + other data and after algorithm updates. Default value is None. 
+ :paramtype confidence_threshold: float + :return: list of FaceIdentificationResult + :rtype: list[~azure.ai.vision.face.models.FaceIdentificationResult] + :raises ~azure.core.exceptions.HttpResponseError: + """ + error_map: MutableMapping = { + 401: ClientAuthenticationError, + 404: ResourceNotFoundError, + 409: ResourceExistsError, + 304: ResourceNotModifiedError, + } + error_map.update(kwargs.pop("error_map", {}) or {}) + + _headers = case_insensitive_dict(kwargs.pop("headers", {}) or {}) + _params = kwargs.pop("params", {}) or {} + + content_type: Optional[str] = kwargs.pop("content_type", _headers.pop("Content-Type", None)) + cls: ClsType[List[_models.FaceIdentificationResult]] = kwargs.pop("cls", None) + + if body is _Unset: + if face_ids is _Unset: + raise TypeError("missing required argument: face_ids") + if large_person_group_id is _Unset: + raise TypeError("missing required argument: large_person_group_id") + body = { + "confidenceThreshold": confidence_threshold, + "faceIds": face_ids, + "largePersonGroupId": large_person_group_id, + "maxNumOfCandidatesReturned": max_num_of_candidates_returned, + } + body = {k: v for k, v in body.items() if v is not None} + content_type = content_type or "application/json" + _content = None + if isinstance(body, (IOBase, bytes)): + _content = body + else: + _content = json.dumps(body, cls=SdkJSONEncoder, exclude_readonly=True) # type: ignore + + _request = build_face_identify_from_large_person_group_request( + content_type=content_type, + content=_content, + headers=_headers, + params=_params, + ) + path_format_arguments = { + "endpoint": self._serialize.url("self._config.endpoint", self._config.endpoint, "str", skip_quote=True), + "apiVersion": self._serialize.url("self._config.api_version", self._config.api_version, "str"), + } + _request.url = self._client.format_url(_request.url, **path_format_arguments) + + _stream = kwargs.pop("stream", False) + pipeline_response: PipelineResponse = await self._client._pipeline.run( # type: ignore # pylint: disable=protected-access + _request, stream=_stream, **kwargs + ) + + response = pipeline_response.http_response + + if response.status_code not in [200]: + if _stream: + try: + await response.read() # Load the body in memory and close the socket + except (StreamConsumedError, StreamClosedError): + pass + map_error(status_code=response.status_code, response=response, error_map=error_map) + error = _deserialize(_models.FaceErrorResponse, response.json()) + raise HttpResponseError(response=response, model=error) + + if _stream: + deserialized = response.iter_bytes() + else: + deserialized = _deserialize(List[_models.FaceIdentificationResult], response.json()) + + if cls: + return cls(pipeline_response, deserialized, {}) # type: ignore + + return deserialized # type: ignore + + @overload + async def verify_from_large_person_group( + self, body: JSON, *, content_type: str = "application/json", **kwargs: Any + ) -> _models.FaceVerificationResult: + """Verify whether a face belongs to a person in a Large Person Group. + + Please refer to + https://learn.microsoft.com/rest/api/face/face-recognition-operations/verify-from-large-person-group + for more details. + + :param body: Required. + :type body: JSON + :keyword content_type: Body Parameter content-type. Content type parameter for JSON body. + Default value is "application/json". + :paramtype content_type: str + :return: FaceVerificationResult. 
The FaceVerificationResult is compatible with MutableMapping
+        :rtype: ~azure.ai.vision.face.models.FaceVerificationResult
+        :raises ~azure.core.exceptions.HttpResponseError:
+        """
+
+    @overload
+    async def verify_from_large_person_group(
+        self,
+        *,
+        face_id: str,
+        large_person_group_id: str,
+        person_id: str,
+        content_type: str = "application/json",
+        **kwargs: Any
+    ) -> _models.FaceVerificationResult:
+        """Verify whether a face belongs to a person in a Large Person Group.
+
+        Please refer to
+        https://learn.microsoft.com/rest/api/face/face-recognition-operations/verify-from-large-person-group
+        for more details.
+
+        :keyword face_id: The faceId of the face, comes from "Detect". Required.
+        :paramtype face_id: str
+        :keyword large_person_group_id: Use an existing largePersonGroupId and personId to quickly load
+         a specified person. largePersonGroupId is created in "Create Large Person Group". Required.
+        :paramtype large_person_group_id: str
+        :keyword person_id: Specify a certain person in the Large Person Group. Required.
+        :paramtype person_id: str
+        :keyword content_type: Body Parameter content-type. Content type parameter for JSON body.
+         Default value is "application/json".
+        :paramtype content_type: str
+        :return: FaceVerificationResult. The FaceVerificationResult is compatible with MutableMapping
+        :rtype: ~azure.ai.vision.face.models.FaceVerificationResult
+        :raises ~azure.core.exceptions.HttpResponseError:
+        """
+
+    @overload
+    async def verify_from_large_person_group(
+        self, body: IO[bytes], *, content_type: str = "application/json", **kwargs: Any
+    ) -> _models.FaceVerificationResult:
+        """Verify whether a face belongs to a person in a Large Person Group.
+
+        Please refer to
+        https://learn.microsoft.com/rest/api/face/face-recognition-operations/verify-from-large-person-group
+        for more details.
+
+        :param body: Required.
+        :type body: IO[bytes]
+        :keyword content_type: Body Parameter content-type. Content type parameter for binary body.
+         Default value is "application/json".
+        :paramtype content_type: str
+        :return: FaceVerificationResult. The FaceVerificationResult is compatible with MutableMapping
+        :rtype: ~azure.ai.vision.face.models.FaceVerificationResult
+        :raises ~azure.core.exceptions.HttpResponseError:
+        """
+
+    @distributed_trace_async
+    async def verify_from_large_person_group(
+        self,
+        body: Union[JSON, IO[bytes]] = _Unset,
+        *,
+        face_id: str = _Unset,
+        large_person_group_id: str = _Unset,
+        person_id: str = _Unset,
+        **kwargs: Any
+    ) -> _models.FaceVerificationResult:
+        """Verify whether a face belongs to a person in a Large Person Group.
+
+        Please refer to
+        https://learn.microsoft.com/rest/api/face/face-recognition-operations/verify-from-large-person-group
+        for more details.
+
+        :param body: Is either a JSON type or a IO[bytes] type. Required.
+        :type body: JSON or IO[bytes]
+        :keyword face_id: The faceId of the face, comes from "Detect". Required.
+        :paramtype face_id: str
+        :keyword large_person_group_id: Use an existing largePersonGroupId and personId to quickly load
+         a specified person. largePersonGroupId is created in "Create Large Person Group". Required.
+        :paramtype large_person_group_id: str
+        :keyword person_id: Specify a certain person in the Large Person Group. Required.
+        :paramtype person_id: str
+        :return: FaceVerificationResult.
The FaceVerificationResult is compatible with MutableMapping + :rtype: ~azure.ai.vision.face.models.FaceVerificationResult + :raises ~azure.core.exceptions.HttpResponseError: + """ + error_map: MutableMapping = { + 401: ClientAuthenticationError, + 404: ResourceNotFoundError, + 409: ResourceExistsError, + 304: ResourceNotModifiedError, + } + error_map.update(kwargs.pop("error_map", {}) or {}) + + _headers = case_insensitive_dict(kwargs.pop("headers", {}) or {}) + _params = kwargs.pop("params", {}) or {} + + content_type: Optional[str] = kwargs.pop("content_type", _headers.pop("Content-Type", None)) + cls: ClsType[_models.FaceVerificationResult] = kwargs.pop("cls", None) + + if body is _Unset: + if face_id is _Unset: + raise TypeError("missing required argument: face_id") + if large_person_group_id is _Unset: + raise TypeError("missing required argument: large_person_group_id") + if person_id is _Unset: + raise TypeError("missing required argument: person_id") + body = {"faceId": face_id, "largePersonGroupId": large_person_group_id, "personId": person_id} + body = {k: v for k, v in body.items() if v is not None} + content_type = content_type or "application/json" + _content = None + if isinstance(body, (IOBase, bytes)): + _content = body + else: + _content = json.dumps(body, cls=SdkJSONEncoder, exclude_readonly=True) # type: ignore + + _request = build_face_verify_from_large_person_group_request( + content_type=content_type, + content=_content, + headers=_headers, + params=_params, + ) + path_format_arguments = { + "endpoint": self._serialize.url("self._config.endpoint", self._config.endpoint, "str", skip_quote=True), + "apiVersion": self._serialize.url("self._config.api_version", self._config.api_version, "str"), + } + _request.url = self._client.format_url(_request.url, **path_format_arguments) + + _stream = kwargs.pop("stream", False) + pipeline_response: PipelineResponse = await self._client._pipeline.run( # type: ignore # pylint: disable=protected-access + _request, stream=_stream, **kwargs + ) + + response = pipeline_response.http_response + + if response.status_code not in [200]: + if _stream: + try: + await response.read() # Load the body in memory and close the socket + except (StreamConsumedError, StreamClosedError): + pass + map_error(status_code=response.status_code, response=response, error_map=error_map) + error = _deserialize(_models.FaceErrorResponse, response.json()) + raise HttpResponseError(response=response, model=error) + + if _stream: + deserialized = response.iter_bytes() + else: + deserialized = _deserialize(_models.FaceVerificationResult, response.json()) + + if cls: + return cls(pipeline_response, deserialized, {}) # type: ignore + + return deserialized # type: ignore + + +class FaceSessionClientOperationsMixin(FaceSessionClientMixinABC): + + @overload + async def create_liveness_session( + self, body: _models.CreateLivenessSessionContent, *, content_type: str = "application/json", **kwargs: Any + ) -> _models.CreateLivenessSessionResult: + """Create a new detect liveness session. + + Please refer to + https://learn.microsoft.com/rest/api/face/liveness-session-operations/create-liveness-session + for more details. + + :param body: Body parameter. Required. + :type body: ~azure.ai.vision.face.models.CreateLivenessSessionContent + :keyword content_type: Body Parameter content-type. Content type parameter for JSON body. + Default value is "application/json". + :paramtype content_type: str + :return: CreateLivenessSessionResult. 
The CreateLivenessSessionResult is compatible with + MutableMapping + :rtype: ~azure.ai.vision.face.models.CreateLivenessSessionResult + :raises ~azure.core.exceptions.HttpResponseError: + """ + + @overload + async def create_liveness_session( + self, body: JSON, *, content_type: str = "application/json", **kwargs: Any + ) -> _models.CreateLivenessSessionResult: + """Create a new detect liveness session. + + Please refer to + https://learn.microsoft.com/rest/api/face/liveness-session-operations/create-liveness-session + for more details. + + :param body: Body parameter. Required. + :type body: JSON + :keyword content_type: Body Parameter content-type. Content type parameter for JSON body. + Default value is "application/json". + :paramtype content_type: str + :return: CreateLivenessSessionResult. The CreateLivenessSessionResult is compatible with + MutableMapping + :rtype: ~azure.ai.vision.face.models.CreateLivenessSessionResult + :raises ~azure.core.exceptions.HttpResponseError: + """ + + @overload + async def create_liveness_session( + self, body: IO[bytes], *, content_type: str = "application/json", **kwargs: Any + ) -> _models.CreateLivenessSessionResult: + """Create a new detect liveness session. + + Please refer to + https://learn.microsoft.com/rest/api/face/liveness-session-operations/create-liveness-session + for more details. + + :param body: Body parameter. Required. + :type body: IO[bytes] + :keyword content_type: Body Parameter content-type. Content type parameter for binary body. + Default value is "application/json". + :paramtype content_type: str + :return: CreateLivenessSessionResult. The CreateLivenessSessionResult is compatible with + MutableMapping + :rtype: ~azure.ai.vision.face.models.CreateLivenessSessionResult + :raises ~azure.core.exceptions.HttpResponseError: + """ + + @distributed_trace_async + async def create_liveness_session( + self, body: Union[_models.CreateLivenessSessionContent, JSON, IO[bytes]], **kwargs: Any + ) -> _models.CreateLivenessSessionResult: + """Create a new detect liveness session. + + Please refer to + https://learn.microsoft.com/rest/api/face/liveness-session-operations/create-liveness-session + for more details. + + :param body: Body parameter. Is one of the following types: CreateLivenessSessionContent, JSON, + IO[bytes] Required. + :type body: ~azure.ai.vision.face.models.CreateLivenessSessionContent or JSON or IO[bytes] + :return: CreateLivenessSessionResult. 
The CreateLivenessSessionResult is compatible with + MutableMapping + :rtype: ~azure.ai.vision.face.models.CreateLivenessSessionResult + :raises ~azure.core.exceptions.HttpResponseError: + """ + error_map: MutableMapping = { + 401: ClientAuthenticationError, + 404: ResourceNotFoundError, + 409: ResourceExistsError, + 304: ResourceNotModifiedError, + } + error_map.update(kwargs.pop("error_map", {}) or {}) + + _headers = case_insensitive_dict(kwargs.pop("headers", {}) or {}) + _params = kwargs.pop("params", {}) or {} + + content_type: Optional[str] = kwargs.pop("content_type", _headers.pop("Content-Type", None)) + cls: ClsType[_models.CreateLivenessSessionResult] = kwargs.pop("cls", None) + + content_type = content_type or "application/json" + _content = None + if isinstance(body, (IOBase, bytes)): + _content = body + else: + _content = json.dumps(body, cls=SdkJSONEncoder, exclude_readonly=True) # type: ignore + + _request = build_face_session_create_liveness_session_request( + content_type=content_type, + content=_content, + headers=_headers, + params=_params, + ) + path_format_arguments = { + "endpoint": self._serialize.url("self._config.endpoint", self._config.endpoint, "str", skip_quote=True), + "apiVersion": self._serialize.url("self._config.api_version", self._config.api_version, "str"), + } + _request.url = self._client.format_url(_request.url, **path_format_arguments) + + _stream = kwargs.pop("stream", False) + pipeline_response: PipelineResponse = await self._client._pipeline.run( # type: ignore # pylint: disable=protected-access + _request, stream=_stream, **kwargs + ) + + response = pipeline_response.http_response + + if response.status_code not in [200]: + if _stream: + try: + await response.read() # Load the body in memory and close the socket + except (StreamConsumedError, StreamClosedError): + pass + map_error(status_code=response.status_code, response=response, error_map=error_map) + error = _deserialize(_models.FaceErrorResponse, response.json()) + raise HttpResponseError(response=response, model=error) + + if _stream: + deserialized = response.iter_bytes() + else: + deserialized = _deserialize(_models.CreateLivenessSessionResult, response.json()) + + if cls: + return cls(pipeline_response, deserialized, {}) # type: ignore + + return deserialized # type: ignore + + @distributed_trace_async + async def delete_liveness_session(self, session_id: str, **kwargs: Any) -> None: + """Delete all session related information for matching the specified session id. + + Please refer to + https://learn.microsoft.com/rest/api/face/liveness-session-operations/delete-liveness-session + for more details. + + :param session_id: The unique ID to reference this session. Required. 
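
With `create_liveness_session` in place above, a server-side sketch of minting a session looks roughly like this. Field names on the content and result models (`liveness_operation_mode`, `device_correlation_id`, `session_id`, `auth_token`) are assumptions based on the REST schema, not guaranteed by this diff.

```python
# Sketch: create a liveness session and hand the short-lived auth token to the
# frontend SDK; the session_id stays server-side for later result queries.
import asyncio
import os
import uuid

from azure.core.credentials import AzureKeyCredential
from azure.ai.vision.face.aio import FaceSessionClient
from azure.ai.vision.face.models import (
    CreateLivenessSessionContent,
    LivenessOperationMode,
)


async def create_session() -> str:
    async with FaceSessionClient(
        endpoint=os.environ["FACE_ENDPOINT"],
        credential=AzureKeyCredential(os.environ["FACE_KEY"]),
    ) as client:
        created = await client.create_liveness_session(
            CreateLivenessSessionContent(
                liveness_operation_mode=LivenessOperationMode.PASSIVE,
                device_correlation_id=str(uuid.uuid4()),  # one id per end-user device
            )
        )
        print(f"auth token for the client device: {created.auth_token}")
        return created.session_id


asyncio.run(create_session())
```
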
+ :type session_id: str + :return: None + :rtype: None + :raises ~azure.core.exceptions.HttpResponseError: + """ + error_map: MutableMapping = { + 401: ClientAuthenticationError, + 404: ResourceNotFoundError, + 409: ResourceExistsError, + 304: ResourceNotModifiedError, + } + error_map.update(kwargs.pop("error_map", {}) or {}) + + _headers = kwargs.pop("headers", {}) or {} + _params = kwargs.pop("params", {}) or {} + + cls: ClsType[None] = kwargs.pop("cls", None) + + _request = build_face_session_delete_liveness_session_request( + session_id=session_id, + headers=_headers, + params=_params, + ) + path_format_arguments = { + "endpoint": self._serialize.url("self._config.endpoint", self._config.endpoint, "str", skip_quote=True), + "apiVersion": self._serialize.url("self._config.api_version", self._config.api_version, "str"), + } + _request.url = self._client.format_url(_request.url, **path_format_arguments) + + _stream = False + pipeline_response: PipelineResponse = await self._client._pipeline.run( # type: ignore # pylint: disable=protected-access + _request, stream=_stream, **kwargs + ) + + response = pipeline_response.http_response + + if response.status_code not in [200]: + map_error(status_code=response.status_code, response=response, error_map=error_map) + error = _deserialize(_models.FaceErrorResponse, response.json()) + raise HttpResponseError(response=response, model=error) + + if cls: + return cls(pipeline_response, None, {}) # type: ignore + + @distributed_trace_async + async def get_liveness_session_result(self, session_id: str, **kwargs: Any) -> _models.LivenessSession: + """Please refer to + https://learn.microsoft.com/rest/api/face/liveness-session-operations/get-liveness-session-result + for more details. + + :param session_id: The unique ID to reference this session. Required. + :type session_id: str + :return: LivenessSession. 
The LivenessSession is compatible with MutableMapping + :rtype: ~azure.ai.vision.face.models.LivenessSession + :raises ~azure.core.exceptions.HttpResponseError: + """ + error_map: MutableMapping = { + 401: ClientAuthenticationError, + 404: ResourceNotFoundError, + 409: ResourceExistsError, + 304: ResourceNotModifiedError, + } + error_map.update(kwargs.pop("error_map", {}) or {}) + + _headers = kwargs.pop("headers", {}) or {} + _params = kwargs.pop("params", {}) or {} + + cls: ClsType[_models.LivenessSession] = kwargs.pop("cls", None) + + _request = build_face_session_get_liveness_session_result_request( + session_id=session_id, + headers=_headers, + params=_params, + ) + path_format_arguments = { + "endpoint": self._serialize.url("self._config.endpoint", self._config.endpoint, "str", skip_quote=True), + "apiVersion": self._serialize.url("self._config.api_version", self._config.api_version, "str"), + } + _request.url = self._client.format_url(_request.url, **path_format_arguments) + + _stream = kwargs.pop("stream", False) + pipeline_response: PipelineResponse = await self._client._pipeline.run( # type: ignore # pylint: disable=protected-access + _request, stream=_stream, **kwargs + ) + + response = pipeline_response.http_response + + if response.status_code not in [200]: + if _stream: + try: + await response.read() # Load the body in memory and close the socket + except (StreamConsumedError, StreamClosedError): + pass + map_error(status_code=response.status_code, response=response, error_map=error_map) + error = _deserialize(_models.FaceErrorResponse, response.json()) + raise HttpResponseError(response=response, model=error) + + if _stream: + deserialized = response.iter_bytes() + else: + deserialized = _deserialize(_models.LivenessSession, response.json()) + + if cls: + return cls(pipeline_response, deserialized, {}) # type: ignore + + return deserialized # type: ignore + + @distributed_trace_async + async def get_liveness_sessions( + self, *, start: Optional[str] = None, top: Optional[int] = None, **kwargs: Any + ) -> List[_models.LivenessSessionItem]: + """Lists sessions for /detectLiveness/SingleModal. + + Please refer to + https://learn.microsoft.com/rest/api/face/liveness-session-operations/get-liveness-sessions for + more details. + + :keyword start: List resources greater than the "start". It contains no more than 64 + characters. Default is empty. Default value is None. + :paramtype start: str + :keyword top: The number of items to list, ranging in [1, 1000]. Default is 1000. Default value + is None. 
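
A sketch of the read-and-clean-up half of the lifecycle, combining `get_liveness_session_result` and `delete_liveness_session` from above. Dumping the model via `as_dict()` relies only on the documented MutableMapping compatibility, so no field names are assumed.

```python
# Sketch: fetch the finished session's outcome, then delete the session data.
import asyncio
import os

from azure.core.credentials import AzureKeyCredential
from azure.ai.vision.face.aio import FaceSessionClient


async def finish_session(session_id: str) -> None:
    async with FaceSessionClient(
        endpoint=os.environ["FACE_ENDPOINT"],
        credential=AzureKeyCredential(os.environ["FACE_KEY"]),
    ) as client:
        session = await client.get_liveness_session_result(session_id)
        # LivenessSession is MutableMapping-compatible, so as_dict() is a safe
        # way to inspect the status and liveness decision once available.
        print(session.as_dict())
        await client.delete_liveness_session(session_id)  # removes all session data


asyncio.run(finish_session("<session-id>"))
```
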
+ :paramtype top: int + :return: list of LivenessSessionItem + :rtype: list[~azure.ai.vision.face.models.LivenessSessionItem] + :raises ~azure.core.exceptions.HttpResponseError: + """ + error_map: MutableMapping = { + 401: ClientAuthenticationError, + 404: ResourceNotFoundError, + 409: ResourceExistsError, + 304: ResourceNotModifiedError, + } + error_map.update(kwargs.pop("error_map", {}) or {}) + + _headers = kwargs.pop("headers", {}) or {} + _params = kwargs.pop("params", {}) or {} + + cls: ClsType[List[_models.LivenessSessionItem]] = kwargs.pop("cls", None) + + _request = build_face_session_get_liveness_sessions_request( + start=start, + top=top, + headers=_headers, + params=_params, + ) + path_format_arguments = { + "endpoint": self._serialize.url("self._config.endpoint", self._config.endpoint, "str", skip_quote=True), + "apiVersion": self._serialize.url("self._config.api_version", self._config.api_version, "str"), + } + _request.url = self._client.format_url(_request.url, **path_format_arguments) + + _stream = kwargs.pop("stream", False) + pipeline_response: PipelineResponse = await self._client._pipeline.run( # type: ignore # pylint: disable=protected-access + _request, stream=_stream, **kwargs + ) + + response = pipeline_response.http_response + + if response.status_code not in [200]: + if _stream: + try: + await response.read() # Load the body in memory and close the socket + except (StreamConsumedError, StreamClosedError): + pass + map_error(status_code=response.status_code, response=response, error_map=error_map) + error = _deserialize(_models.FaceErrorResponse, response.json()) + raise HttpResponseError(response=response, model=error) + + if _stream: + deserialized = response.iter_bytes() + else: + deserialized = _deserialize(List[_models.LivenessSessionItem], response.json()) + + if cls: + return cls(pipeline_response, deserialized, {}) # type: ignore + + return deserialized # type: ignore + + @distributed_trace_async + async def get_liveness_session_audit_entries( + self, session_id: str, *, start: Optional[str] = None, top: Optional[int] = None, **kwargs: Any + ) -> List[_models.LivenessSessionAuditEntry]: + """Please refer to + https://learn.microsoft.com/rest/api/face/liveness-session-operations/get-liveness-session-audit-entries + for more details. + + :param session_id: The unique ID to reference this session. Required. + :type session_id: str + :keyword start: List resources greater than the "start". It contains no more than 64 + characters. Default is empty. Default value is None. + :paramtype start: str + :keyword top: The number of items to list, ranging in [1, 1000]. Default is 1000. Default value + is None. 
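
The `start`/`top` keywords above implement cursor-style paging: `start` names the last id already seen, so feeding the previous page's final id fetches the next page. A hedged sketch follows; the `session_id` attribute on `LivenessSessionItem` is an assumption.

```python
# Sketch: walk all liveness sessions page by page using the start/top cursor.
import asyncio
import os

from azure.core.credentials import AzureKeyCredential
from azure.ai.vision.face.aio import FaceSessionClient


async def list_all_sessions() -> None:
    async with FaceSessionClient(
        endpoint=os.environ["FACE_ENDPOINT"],
        credential=AzureKeyCredential(os.environ["FACE_KEY"]),
    ) as client:
        last_id = None
        while True:
            page = await client.get_liveness_sessions(start=last_id, top=100)
            for item in page:
                print(item.as_dict())
            if len(page) < 100:  # a short page means we reached the end
                break
            last_id = page[-1].session_id  # assumed id attribute; see note above


asyncio.run(list_all_sessions())
```
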
+ :paramtype top: int + :return: list of LivenessSessionAuditEntry + :rtype: list[~azure.ai.vision.face.models.LivenessSessionAuditEntry] + :raises ~azure.core.exceptions.HttpResponseError: + """ + error_map: MutableMapping = { + 401: ClientAuthenticationError, + 404: ResourceNotFoundError, + 409: ResourceExistsError, + 304: ResourceNotModifiedError, + } + error_map.update(kwargs.pop("error_map", {}) or {}) + + _headers = kwargs.pop("headers", {}) or {} + _params = kwargs.pop("params", {}) or {} + + cls: ClsType[List[_models.LivenessSessionAuditEntry]] = kwargs.pop("cls", None) + + _request = build_face_session_get_liveness_session_audit_entries_request( + session_id=session_id, + start=start, + top=top, + headers=_headers, + params=_params, + ) + path_format_arguments = { + "endpoint": self._serialize.url("self._config.endpoint", self._config.endpoint, "str", skip_quote=True), + "apiVersion": self._serialize.url("self._config.api_version", self._config.api_version, "str"), + } + _request.url = self._client.format_url(_request.url, **path_format_arguments) + + _stream = kwargs.pop("stream", False) + pipeline_response: PipelineResponse = await self._client._pipeline.run( # type: ignore # pylint: disable=protected-access + _request, stream=_stream, **kwargs + ) + + response = pipeline_response.http_response + + if response.status_code not in [200]: + if _stream: + try: + await response.read() # Load the body in memory and close the socket + except (StreamConsumedError, StreamClosedError): + pass + map_error(status_code=response.status_code, response=response, error_map=error_map) + error = _deserialize(_models.FaceErrorResponse, response.json()) + raise HttpResponseError(response=response, model=error) + + if _stream: + deserialized = response.iter_bytes() + else: + deserialized = _deserialize(List[_models.LivenessSessionAuditEntry], response.json()) + + if cls: + return cls(pipeline_response, deserialized, {}) # type: ignore + + return deserialized # type: ignore + + @overload + async def _create_liveness_with_verify_session( + self, + body: _models.CreateLivenessWithVerifySessionContent, + *, + content_type: str = "application/json", + **kwargs: Any + ) -> _models.CreateLivenessWithVerifySessionResult: ... + @overload + async def _create_liveness_with_verify_session( + self, body: JSON, *, content_type: str = "application/json", **kwargs: Any + ) -> _models.CreateLivenessWithVerifySessionResult: ... + @overload + async def _create_liveness_with_verify_session( + self, body: IO[bytes], *, content_type: str = "application/json", **kwargs: Any + ) -> _models.CreateLivenessWithVerifySessionResult: ... + + @distributed_trace_async + async def _create_liveness_with_verify_session( + self, body: Union[_models.CreateLivenessWithVerifySessionContent, JSON, IO[bytes]], **kwargs: Any + ) -> _models.CreateLivenessWithVerifySessionResult: + """Create a new liveness session with verify. Client device submits VerifyImage during the + /detectLivenessWithVerify/singleModal call. + + Please refer to + https://learn.microsoft.com/rest/api/face/liveness-session-operations/create-liveness-with-verify-session + for more details. + + :param body: Body parameter. Is one of the following types: + CreateLivenessWithVerifySessionContent, JSON, IO[bytes] Required. + :type body: ~azure.ai.vision.face.models.CreateLivenessWithVerifySessionContent or JSON or + IO[bytes] + :return: CreateLivenessWithVerifySessionResult. 
The CreateLivenessWithVerifySessionResult is + compatible with MutableMapping + :rtype: ~azure.ai.vision.face.models.CreateLivenessWithVerifySessionResult + :raises ~azure.core.exceptions.HttpResponseError: + """ + error_map: MutableMapping = { + 401: ClientAuthenticationError, + 404: ResourceNotFoundError, + 409: ResourceExistsError, + 304: ResourceNotModifiedError, + } + error_map.update(kwargs.pop("error_map", {}) or {}) + + _headers = case_insensitive_dict(kwargs.pop("headers", {}) or {}) + _params = kwargs.pop("params", {}) or {} + + content_type: Optional[str] = kwargs.pop("content_type", _headers.pop("Content-Type", None)) + cls: ClsType[_models.CreateLivenessWithVerifySessionResult] = kwargs.pop("cls", None) + + content_type = content_type or "application/json" + _content = None + if isinstance(body, (IOBase, bytes)): + _content = body + else: + _content = json.dumps(body, cls=SdkJSONEncoder, exclude_readonly=True) # type: ignore + + _request = build_face_session_create_liveness_with_verify_session_request( + content_type=content_type, + content=_content, + headers=_headers, + params=_params, + ) + path_format_arguments = { + "endpoint": self._serialize.url("self._config.endpoint", self._config.endpoint, "str", skip_quote=True), + "apiVersion": self._serialize.url("self._config.api_version", self._config.api_version, "str"), + } + _request.url = self._client.format_url(_request.url, **path_format_arguments) + + _stream = kwargs.pop("stream", False) + pipeline_response: PipelineResponse = await self._client._pipeline.run( # type: ignore # pylint: disable=protected-access + _request, stream=_stream, **kwargs + ) + + response = pipeline_response.http_response + + if response.status_code not in [200]: + if _stream: + try: + await response.read() # Load the body in memory and close the socket + except (StreamConsumedError, StreamClosedError): + pass + map_error(status_code=response.status_code, response=response, error_map=error_map) + error = _deserialize(_models.FaceErrorResponse, response.json()) + raise HttpResponseError(response=response, model=error) + + if _stream: + deserialized = response.iter_bytes() + else: + deserialized = _deserialize(_models.CreateLivenessWithVerifySessionResult, response.json()) + + if cls: + return cls(pipeline_response, deserialized, {}) # type: ignore + + return deserialized # type: ignore + + @overload + async def _create_liveness_with_verify_session_with_verify_image( # pylint: disable=name-too-long + self, body: _models.CreateLivenessWithVerifySessionMultipartContent, **kwargs: Any + ) -> _models.CreateLivenessWithVerifySessionResult: ... + @overload + async def _create_liveness_with_verify_session_with_verify_image( # pylint: disable=name-too-long + self, body: JSON, **kwargs: Any + ) -> _models.CreateLivenessWithVerifySessionResult: ... + + @distributed_trace_async + async def _create_liveness_with_verify_session_with_verify_image( # pylint: disable=name-too-long + self, body: Union[_models.CreateLivenessWithVerifySessionMultipartContent, JSON], **kwargs: Any + ) -> _models.CreateLivenessWithVerifySessionResult: + """Create a new liveness session with verify. Provide the verify image during session creation. + + Please refer to + https://learn.microsoft.com/rest/api/face/liveness-session-operations/create-liveness-with-verify-session-with-verify-image + for more details. + + :param body: Request content of liveness with verify session creation. Is either a + CreateLivenessWithVerifySessionMultipartContent type or a JSON type. Required. 
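
The private `_create_liveness_with_verify_session` above is surfaced through the public `create_liveness_with_verify_session` added by the patch layer (note the `_patch.py` rename later in this diff). A sketch of the no-image variant; the wrapper's exact signature, including the `verify_image` keyword, is an assumption rather than something this diff guarantees.

```python
# Sketch: create a liveness-with-verify session where the client device will
# submit the VerifyImage itself (verify_image=None -> plain JSON path).
import asyncio
import os
import uuid

from azure.core.credentials import AzureKeyCredential
from azure.ai.vision.face.aio import FaceSessionClient
from azure.ai.vision.face.models import (
    CreateLivenessWithVerifySessionContent,
    LivenessOperationMode,
)


async def create_verify_session() -> None:
    async with FaceSessionClient(
        endpoint=os.environ["FACE_ENDPOINT"],
        credential=AzureKeyCredential(os.environ["FACE_KEY"]),
    ) as client:
        created = await client.create_liveness_with_verify_session(
            CreateLivenessWithVerifySessionContent(
                liveness_operation_mode=LivenessOperationMode.PASSIVE,
                device_correlation_id=str(uuid.uuid4()),
            ),
            verify_image=None,  # assumed keyword; device sends the image later
        )
        print(created.session_id, created.auth_token)


asyncio.run(create_verify_session())
```
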
+ :type body: ~azure.ai.vision.face.models.CreateLivenessWithVerifySessionMultipartContent or + JSON + :return: CreateLivenessWithVerifySessionResult. The CreateLivenessWithVerifySessionResult is + compatible with MutableMapping + :rtype: ~azure.ai.vision.face.models.CreateLivenessWithVerifySessionResult + :raises ~azure.core.exceptions.HttpResponseError: + """ + error_map: MutableMapping = { + 401: ClientAuthenticationError, + 404: ResourceNotFoundError, + 409: ResourceExistsError, + 304: ResourceNotModifiedError, + } + error_map.update(kwargs.pop("error_map", {}) or {}) + + _headers = kwargs.pop("headers", {}) or {} + _params = kwargs.pop("params", {}) or {} + + cls: ClsType[_models.CreateLivenessWithVerifySessionResult] = kwargs.pop("cls", None) + + _body = body.as_dict() if isinstance(body, _model_base.Model) else body + _file_fields: List[str] = ["VerifyImage"] + _data_fields: List[str] = ["Parameters"] + _files, _data = prepare_multipart_form_data(_body, _file_fields, _data_fields) + + _request = build_face_session_create_liveness_with_verify_session_with_verify_image_request( + files=_files, + data=_data, + headers=_headers, + params=_params, + ) + path_format_arguments = { + "endpoint": self._serialize.url("self._config.endpoint", self._config.endpoint, "str", skip_quote=True), + "apiVersion": self._serialize.url("self._config.api_version", self._config.api_version, "str"), + } + _request.url = self._client.format_url(_request.url, **path_format_arguments) + + _stream = kwargs.pop("stream", False) + pipeline_response: PipelineResponse = await self._client._pipeline.run( # type: ignore # pylint: disable=protected-access + _request, stream=_stream, **kwargs + ) + + response = pipeline_response.http_response + + if response.status_code not in [200]: + if _stream: + try: + await response.read() # Load the body in memory and close the socket + except (StreamConsumedError, StreamClosedError): + pass + map_error(status_code=response.status_code, response=response, error_map=error_map) + error = _deserialize(_models.FaceErrorResponse, response.json()) + raise HttpResponseError(response=response, model=error) + + if _stream: + deserialized = response.iter_bytes() + else: + deserialized = _deserialize(_models.CreateLivenessWithVerifySessionResult, response.json()) + + if cls: + return cls(pipeline_response, deserialized, {}) # type: ignore + + return deserialized # type: ignore + + @distributed_trace_async + async def delete_liveness_with_verify_session(self, session_id: str, **kwargs: Any) -> None: + """Delete all session related information for matching the specified session id. + + Please refer to + https://learn.microsoft.com/rest/api/face/liveness-session-operations/delete-liveness-with-verify-session + for more details. + + :param session_id: The unique ID to reference this session. Required. 
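
When the reference image is available at session-creation time, the same public wrapper can route through the multipart `VerifyImage`/`Parameters` form prepared above. Again a sketch, with the `verify_image` keyword assumed.

```python
# Sketch: supply the verify image up front so the request goes through the
# multipart operation shown above instead of the plain JSON path.
import asyncio
import os
import uuid

from azure.core.credentials import AzureKeyCredential
from azure.ai.vision.face.aio import FaceSessionClient
from azure.ai.vision.face.models import (
    CreateLivenessWithVerifySessionContent,
    LivenessOperationMode,
)


async def create_verify_session_with_image() -> None:
    with open("reference_face.jpg", "rb") as fd:  # hypothetical reference image
        verify_image = fd.read()
    async with FaceSessionClient(
        endpoint=os.environ["FACE_ENDPOINT"],
        credential=AzureKeyCredential(os.environ["FACE_KEY"]),
    ) as client:
        created = await client.create_liveness_with_verify_session(
            CreateLivenessWithVerifySessionContent(
                liveness_operation_mode=LivenessOperationMode.PASSIVE,
                device_correlation_id=str(uuid.uuid4()),
            ),
            verify_image=verify_image,
        )
        print(created.session_id, created.auth_token)


asyncio.run(create_verify_session_with_image())
```
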
+ :type session_id: str + :return: None + :rtype: None + :raises ~azure.core.exceptions.HttpResponseError: + """ + error_map: MutableMapping = { + 401: ClientAuthenticationError, + 404: ResourceNotFoundError, + 409: ResourceExistsError, + 304: ResourceNotModifiedError, + } + error_map.update(kwargs.pop("error_map", {}) or {}) + + _headers = kwargs.pop("headers", {}) or {} + _params = kwargs.pop("params", {}) or {} + + cls: ClsType[None] = kwargs.pop("cls", None) + + _request = build_face_session_delete_liveness_with_verify_session_request( + session_id=session_id, + headers=_headers, + params=_params, + ) + path_format_arguments = { + "endpoint": self._serialize.url("self._config.endpoint", self._config.endpoint, "str", skip_quote=True), + "apiVersion": self._serialize.url("self._config.api_version", self._config.api_version, "str"), + } + _request.url = self._client.format_url(_request.url, **path_format_arguments) + + _stream = False + pipeline_response: PipelineResponse = await self._client._pipeline.run( # type: ignore # pylint: disable=protected-access + _request, stream=_stream, **kwargs + ) + + response = pipeline_response.http_response + + if response.status_code not in [200]: + map_error(status_code=response.status_code, response=response, error_map=error_map) + error = _deserialize(_models.FaceErrorResponse, response.json()) + raise HttpResponseError(response=response, model=error) + + if cls: + return cls(pipeline_response, None, {}) # type: ignore + + @distributed_trace_async + async def get_liveness_with_verify_session_result( + self, session_id: str, **kwargs: Any + ) -> _models.LivenessWithVerifySession: + """Please refer to + https://learn.microsoft.com/rest/api/face/liveness-session-operations/get-liveness-with-verify-session-result + for more details. + + :param session_id: The unique ID to reference this session. Required. + :type session_id: str + :return: LivenessWithVerifySession. 
The LivenessWithVerifySession is compatible with + MutableMapping + :rtype: ~azure.ai.vision.face.models.LivenessWithVerifySession + :raises ~azure.core.exceptions.HttpResponseError: + """ + error_map: MutableMapping = { + 401: ClientAuthenticationError, + 404: ResourceNotFoundError, + 409: ResourceExistsError, + 304: ResourceNotModifiedError, + } + error_map.update(kwargs.pop("error_map", {}) or {}) + + _headers = kwargs.pop("headers", {}) or {} + _params = kwargs.pop("params", {}) or {} + + cls: ClsType[_models.LivenessWithVerifySession] = kwargs.pop("cls", None) + + _request = build_face_session_get_liveness_with_verify_session_result_request( + session_id=session_id, + headers=_headers, + params=_params, + ) + path_format_arguments = { + "endpoint": self._serialize.url("self._config.endpoint", self._config.endpoint, "str", skip_quote=True), + "apiVersion": self._serialize.url("self._config.api_version", self._config.api_version, "str"), + } + _request.url = self._client.format_url(_request.url, **path_format_arguments) + + _stream = kwargs.pop("stream", False) + pipeline_response: PipelineResponse = await self._client._pipeline.run( # type: ignore # pylint: disable=protected-access + _request, stream=_stream, **kwargs + ) + + response = pipeline_response.http_response + + if response.status_code not in [200]: + if _stream: + try: + await response.read() # Load the body in memory and close the socket + except (StreamConsumedError, StreamClosedError): + pass + map_error(status_code=response.status_code, response=response, error_map=error_map) + error = _deserialize(_models.FaceErrorResponse, response.json()) + raise HttpResponseError(response=response, model=error) + + if _stream: + deserialized = response.iter_bytes() + else: + deserialized = _deserialize(_models.LivenessWithVerifySession, response.json()) + + if cls: + return cls(pipeline_response, deserialized, {}) # type: ignore + + return deserialized # type: ignore + + @distributed_trace_async + async def get_liveness_with_verify_sessions( + self, *, start: Optional[str] = None, top: Optional[int] = None, **kwargs: Any + ) -> List[_models.LivenessSessionItem]: + """Lists sessions for /detectLivenessWithVerify/SingleModal. + + Please refer to + https://learn.microsoft.com/rest/api/face/liveness-session-operations/get-liveness-with-verify-sessions + for more details. + + :keyword start: List resources greater than the "start". It contains no more than 64 + characters. Default is empty. Default value is None. + :paramtype start: str + :keyword top: The number of items to list, ranging in [1, 1000]. Default is 1000. Default value + is None. 
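
Querying the combined liveness-plus-verification outcome is symmetric with the plain session flow; a short sketch using only the documented MutableMapping compatibility:

```python
# Sketch: inspect the liveness decision and verification outcome for a session.
import asyncio
import os

from azure.core.credentials import AzureKeyCredential
from azure.ai.vision.face.aio import FaceSessionClient


async def check_verify_session(session_id: str) -> None:
    async with FaceSessionClient(
        endpoint=os.environ["FACE_ENDPOINT"],
        credential=AzureKeyCredential(os.environ["FACE_KEY"]),
    ) as client:
        session = await client.get_liveness_with_verify_session_result(session_id)
        # LivenessWithVerifySession is MutableMapping-compatible, so dumping it
        # avoids guessing individual field names.
        print(session.as_dict())


asyncio.run(check_verify_session("<session-id>"))
```
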
+ :paramtype top: int + :return: list of LivenessSessionItem + :rtype: list[~azure.ai.vision.face.models.LivenessSessionItem] + :raises ~azure.core.exceptions.HttpResponseError: + """ + error_map: MutableMapping = { + 401: ClientAuthenticationError, + 404: ResourceNotFoundError, + 409: ResourceExistsError, + 304: ResourceNotModifiedError, + } + error_map.update(kwargs.pop("error_map", {}) or {}) + + _headers = kwargs.pop("headers", {}) or {} + _params = kwargs.pop("params", {}) or {} + + cls: ClsType[List[_models.LivenessSessionItem]] = kwargs.pop("cls", None) + + _request = build_face_session_get_liveness_with_verify_sessions_request( + start=start, + top=top, + headers=_headers, + params=_params, + ) + path_format_arguments = { + "endpoint": self._serialize.url("self._config.endpoint", self._config.endpoint, "str", skip_quote=True), + "apiVersion": self._serialize.url("self._config.api_version", self._config.api_version, "str"), + } + _request.url = self._client.format_url(_request.url, **path_format_arguments) + + _stream = kwargs.pop("stream", False) + pipeline_response: PipelineResponse = await self._client._pipeline.run( # type: ignore # pylint: disable=protected-access + _request, stream=_stream, **kwargs + ) + + response = pipeline_response.http_response + + if response.status_code not in [200]: + if _stream: + try: + await response.read() # Load the body in memory and close the socket + except (StreamConsumedError, StreamClosedError): + pass + map_error(status_code=response.status_code, response=response, error_map=error_map) + error = _deserialize(_models.FaceErrorResponse, response.json()) + raise HttpResponseError(response=response, model=error) + + if _stream: + deserialized = response.iter_bytes() + else: + deserialized = _deserialize(List[_models.LivenessSessionItem], response.json()) + + if cls: + return cls(pipeline_response, deserialized, {}) # type: ignore + + return deserialized # type: ignore + + @distributed_trace_async + async def get_liveness_with_verify_session_audit_entries( # pylint: disable=name-too-long + self, session_id: str, *, start: Optional[str] = None, top: Optional[int] = None, **kwargs: Any + ) -> List[_models.LivenessSessionAuditEntry]: + """Please refer to + https://learn.microsoft.com/rest/api/face/liveness-session-operations/get-liveness-with-verify-session-audit-entries + for more details. + + :param session_id: The unique ID to reference this session. Required. + :type session_id: str + :keyword start: List resources greater than the "start". It contains no more than 64 + characters. Default is empty. Default value is None. + :paramtype start: str + :keyword top: The number of items to list, ranging in [1, 1000]. Default is 1000. Default value + is None. 
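
Audit entries expose the request/response pairs the service recorded for a session; a sketch of pulling the trail for the with-verify variant documented above:

```python
# Sketch: dump the audit trail for a finished liveness-with-verify session.
import asyncio
import os

from azure.core.credentials import AzureKeyCredential
from azure.ai.vision.face.aio import FaceSessionClient


async def dump_audit_trail(session_id: str) -> None:
    async with FaceSessionClient(
        endpoint=os.environ["FACE_ENDPOINT"],
        credential=AzureKeyCredential(os.environ["FACE_KEY"]),
    ) as client:
        entries = await client.get_liveness_with_verify_session_audit_entries(
            session_id, top=200
        )
        for entry in entries:
            print(entry.as_dict())


asyncio.run(dump_audit_trail("<session-id>"))
```
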
+ :paramtype top: int + :return: list of LivenessSessionAuditEntry + :rtype: list[~azure.ai.vision.face.models.LivenessSessionAuditEntry] + :raises ~azure.core.exceptions.HttpResponseError: + """ + error_map: MutableMapping = { + 401: ClientAuthenticationError, + 404: ResourceNotFoundError, + 409: ResourceExistsError, + 304: ResourceNotModifiedError, + } + error_map.update(kwargs.pop("error_map", {}) or {}) + + _headers = kwargs.pop("headers", {}) or {} + _params = kwargs.pop("params", {}) or {} + + cls: ClsType[List[_models.LivenessSessionAuditEntry]] = kwargs.pop("cls", None) + + _request = build_face_session_get_liveness_with_verify_session_audit_entries_request( + session_id=session_id, + start=start, + top=top, + headers=_headers, + params=_params, + ) + path_format_arguments = { + "endpoint": self._serialize.url("self._config.endpoint", self._config.endpoint, "str", skip_quote=True), + "apiVersion": self._serialize.url("self._config.api_version", self._config.api_version, "str"), + } + _request.url = self._client.format_url(_request.url, **path_format_arguments) + + _stream = kwargs.pop("stream", False) + pipeline_response: PipelineResponse = await self._client._pipeline.run( # type: ignore # pylint: disable=protected-access + _request, stream=_stream, **kwargs + ) + + response = pipeline_response.http_response + + if response.status_code not in [200]: + if _stream: + try: + await response.read() # Load the body in memory and close the socket + except (StreamConsumedError, StreamClosedError): + pass + map_error(status_code=response.status_code, response=response, error_map=error_map) + error = _deserialize(_models.FaceErrorResponse, response.json()) + raise HttpResponseError(response=response, model=error) + + if _stream: + deserialized = response.iter_bytes() + else: + deserialized = _deserialize(List[_models.LivenessSessionAuditEntry], response.json()) + + if cls: + return cls(pipeline_response, deserialized, {}) # type: ignore + + return deserialized # type: ignore + + @overload + async def detect_from_session_image( + self, + body: JSON, + *, + content_type: str = "application/json", + detection_model: Optional[Union[str, _models.FaceDetectionModel]] = None, + recognition_model: Optional[Union[str, _models.FaceRecognitionModel]] = None, + return_face_id: Optional[bool] = None, + return_face_attributes: Optional[List[Union[str, _models.FaceAttributeType]]] = None, + return_face_landmarks: Optional[bool] = None, + return_recognition_model: Optional[bool] = None, + face_id_time_to_live: Optional[int] = None, + **kwargs: Any + ) -> List[_models.FaceDetectionResult]: + """Detect human faces in an image, return face rectangles, and optionally with faceIds, landmarks, + and attributes. + + Please refer to + https://learn.microsoft.com/rest/api/face/face-detection-operations/detect-from-session-image-id + for more details. + + :param body: Required. + :type body: JSON + :keyword content_type: Body Parameter content-type. Content type parameter for JSON body. + Default value is "application/json". + :paramtype content_type: str + :keyword detection_model: The 'detectionModel' associated with the detected faceIds. Supported + 'detectionModel' values include 'detection_01', 'detection_02' and 'detection_03'. The default + value is 'detection_01'. 'detection_03' is recommended since its accuracy is improved on + smaller faces (64x64 pixels) and rotated face orientations. Known values are: "detection_01", + "detection_02", and "detection_03". Default value is None. 
+ :paramtype detection_model: str or ~azure.ai.vision.face.models.FaceDetectionModel + :keyword recognition_model: The 'recognitionModel' associated with the detected faceIds. + Supported 'recognitionModel' values include 'recognition_01', 'recognition_02', + 'recognition_03' or 'recognition_04'. The default value is 'recognition_01'. 'recognition_04' + is recommended since its accuracy is improved on faces wearing masks compared with + 'recognition_03', and its overall accuracy is improved compared with 'recognition_01' and + 'recognition_02'. Known values are: "recognition_01", "recognition_02", "recognition_03", and + "recognition_04". Default value is None. + :paramtype recognition_model: str or ~azure.ai.vision.face.models.FaceRecognitionModel + :keyword return_face_id: Return faceIds of the detected faces or not. The default value is + true. Default value is None. + :paramtype return_face_id: bool + :keyword return_face_attributes: Analyze and return the one or more specified face attributes + in the comma-separated string like 'returnFaceAttributes=headPose,glasses'. Face attribute + analysis has additional computational and time cost. Default value is None. + :paramtype return_face_attributes: list[str or ~azure.ai.vision.face.models.FaceAttributeType] + :keyword return_face_landmarks: Return face landmarks of the detected faces or not. The default + value is false. Default value is None. + :paramtype return_face_landmarks: bool + :keyword return_recognition_model: Return 'recognitionModel' or not. The default value is + false. This is only applicable when returnFaceId = true. Default value is None. + :paramtype return_recognition_model: bool + :keyword face_id_time_to_live: The number of seconds for the face ID being cached. Supported + range from 60 seconds up to 86400 seconds. The default value is 86400 (24 hours). Default value + is None. + :paramtype face_id_time_to_live: int + :return: list of FaceDetectionResult + :rtype: list[~azure.ai.vision.face.models.FaceDetectionResult] + :raises ~azure.core.exceptions.HttpResponseError: + """ + + @overload + async def detect_from_session_image( + self, + *, + session_image_id: str, + content_type: str = "application/json", + detection_model: Optional[Union[str, _models.FaceDetectionModel]] = None, + recognition_model: Optional[Union[str, _models.FaceRecognitionModel]] = None, + return_face_id: Optional[bool] = None, + return_face_attributes: Optional[List[Union[str, _models.FaceAttributeType]]] = None, + return_face_landmarks: Optional[bool] = None, + return_recognition_model: Optional[bool] = None, + face_id_time_to_live: Optional[int] = None, + **kwargs: Any + ) -> List[_models.FaceDetectionResult]: + """Detect human faces in an image, return face rectangles, and optionally with faceIds, landmarks, + and attributes. + + Please refer to + https://learn.microsoft.com/rest/api/face/face-detection-operations/detect-from-session-image-id + for more details. + + :keyword session_image_id: Id of session image. Required. + :paramtype session_image_id: str + :keyword content_type: Body Parameter content-type. Content type parameter for JSON body. + Default value is "application/json". + :paramtype content_type: str + :keyword detection_model: The 'detectionModel' associated with the detected faceIds. Supported + 'detectionModel' values include 'detection_01', 'detection_02' and 'detection_03'. The default + value is 'detection_01'. 
'detection_03' is recommended since its accuracy is improved on + smaller faces (64x64 pixels) and rotated face orientations. Known values are: "detection_01", + "detection_02", and "detection_03". Default value is None. + :paramtype detection_model: str or ~azure.ai.vision.face.models.FaceDetectionModel + :keyword recognition_model: The 'recognitionModel' associated with the detected faceIds. + Supported 'recognitionModel' values include 'recognition_01', 'recognition_02', + 'recognition_03' or 'recognition_04'. The default value is 'recognition_01'. 'recognition_04' + is recommended since its accuracy is improved on faces wearing masks compared with + 'recognition_03', and its overall accuracy is improved compared with 'recognition_01' and + 'recognition_02'. Known values are: "recognition_01", "recognition_02", "recognition_03", and + "recognition_04". Default value is None. + :paramtype recognition_model: str or ~azure.ai.vision.face.models.FaceRecognitionModel + :keyword return_face_id: Return faceIds of the detected faces or not. The default value is + true. Default value is None. + :paramtype return_face_id: bool + :keyword return_face_attributes: Analyze and return the one or more specified face attributes + in the comma-separated string like 'returnFaceAttributes=headPose,glasses'. Face attribute + analysis has additional computational and time cost. Default value is None. + :paramtype return_face_attributes: list[str or ~azure.ai.vision.face.models.FaceAttributeType] + :keyword return_face_landmarks: Return face landmarks of the detected faces or not. The default + value is false. Default value is None. + :paramtype return_face_landmarks: bool + :keyword return_recognition_model: Return 'recognitionModel' or not. The default value is + false. This is only applicable when returnFaceId = true. Default value is None. + :paramtype return_recognition_model: bool + :keyword face_id_time_to_live: The number of seconds for the face ID being cached. Supported + range from 60 seconds up to 86400 seconds. The default value is 86400 (24 hours). Default value + is None. + :paramtype face_id_time_to_live: int + :return: list of FaceDetectionResult + :rtype: list[~azure.ai.vision.face.models.FaceDetectionResult] + :raises ~azure.core.exceptions.HttpResponseError: + """ + + @overload + async def detect_from_session_image( + self, + body: IO[bytes], + *, + content_type: str = "application/json", + detection_model: Optional[Union[str, _models.FaceDetectionModel]] = None, + recognition_model: Optional[Union[str, _models.FaceRecognitionModel]] = None, + return_face_id: Optional[bool] = None, + return_face_attributes: Optional[List[Union[str, _models.FaceAttributeType]]] = None, + return_face_landmarks: Optional[bool] = None, + return_recognition_model: Optional[bool] = None, + face_id_time_to_live: Optional[int] = None, + **kwargs: Any + ) -> List[_models.FaceDetectionResult]: + """Detect human faces in an image, return face rectangles, and optionally with faceIds, landmarks, + and attributes. + + Please refer to + https://learn.microsoft.com/rest/api/face/face-detection-operations/detect-from-session-image-id + for more details. + + :param body: Required. + :type body: IO[bytes] + :keyword content_type: Body Parameter content-type. Content type parameter for binary body. + Default value is "application/json". + :paramtype content_type: str + :keyword detection_model: The 'detectionModel' associated with the detected faceIds. 
Supported + 'detectionModel' values include 'detection_01', 'detection_02' and 'detection_03'. The default + value is 'detection_01'. 'detection_03' is recommended since its accuracy is improved on + smaller faces (64x64 pixels) and rotated face orientations. Known values are: "detection_01", + "detection_02", and "detection_03". Default value is None. + :paramtype detection_model: str or ~azure.ai.vision.face.models.FaceDetectionModel + :keyword recognition_model: The 'recognitionModel' associated with the detected faceIds. + Supported 'recognitionModel' values include 'recognition_01', 'recognition_02', + 'recognition_03' or 'recognition_04'. The default value is 'recognition_01'. 'recognition_04' + is recommended since its accuracy is improved on faces wearing masks compared with + 'recognition_03', and its overall accuracy is improved compared with 'recognition_01' and + 'recognition_02'. Known values are: "recognition_01", "recognition_02", "recognition_03", and + "recognition_04". Default value is None. + :paramtype recognition_model: str or ~azure.ai.vision.face.models.FaceRecognitionModel + :keyword return_face_id: Return faceIds of the detected faces or not. The default value is + true. Default value is None. + :paramtype return_face_id: bool + :keyword return_face_attributes: Analyze and return the one or more specified face attributes + in the comma-separated string like 'returnFaceAttributes=headPose,glasses'. Face attribute + analysis has additional computational and time cost. Default value is None. + :paramtype return_face_attributes: list[str or ~azure.ai.vision.face.models.FaceAttributeType] + :keyword return_face_landmarks: Return face landmarks of the detected faces or not. The default + value is false. Default value is None. + :paramtype return_face_landmarks: bool + :keyword return_recognition_model: Return 'recognitionModel' or not. The default value is + false. This is only applicable when returnFaceId = true. Default value is None. + :paramtype return_recognition_model: bool + :keyword face_id_time_to_live: The number of seconds for the face ID being cached. Supported + range from 60 seconds up to 86400 seconds. The default value is 86400 (24 hours). Default value + is None. + :paramtype face_id_time_to_live: int + :return: list of FaceDetectionResult + :rtype: list[~azure.ai.vision.face.models.FaceDetectionResult] + :raises ~azure.core.exceptions.HttpResponseError: + """ + + @distributed_trace_async + @api_version_validation( + method_added_on="v1.2-preview.1", + params_added_on={ + "v1.2-preview.1": [ + "content_type", + "detection_model", + "recognition_model", + "return_face_id", + "return_face_attributes", + "return_face_landmarks", + "return_recognition_model", + "face_id_time_to_live", + "accept", + ] + }, + ) + async def detect_from_session_image( + self, + body: Union[JSON, IO[bytes]] = _Unset, + *, + session_image_id: str = _Unset, + detection_model: Optional[Union[str, _models.FaceDetectionModel]] = None, + recognition_model: Optional[Union[str, _models.FaceRecognitionModel]] = None, + return_face_id: Optional[bool] = None, + return_face_attributes: Optional[List[Union[str, _models.FaceAttributeType]]] = None, + return_face_landmarks: Optional[bool] = None, + return_recognition_model: Optional[bool] = None, + face_id_time_to_live: Optional[int] = None, + **kwargs: Any + ) -> List[_models.FaceDetectionResult]: + """Detect human faces in an image, return face rectangles, and optionally with faceIds, landmarks, + and attributes. 
+ + Please refer to + https://learn.microsoft.com/rest/api/face/face-detection-operations/detect-from-session-image-id + for more details. + + :param body: Is either a JSON type or a IO[bytes] type. Required. + :type body: JSON or IO[bytes] + :keyword session_image_id: Id of session image. Required. + :paramtype session_image_id: str + :keyword detection_model: The 'detectionModel' associated with the detected faceIds. Supported + 'detectionModel' values include 'detection_01', 'detection_02' and 'detection_03'. The default + value is 'detection_01'. 'detection_03' is recommended since its accuracy is improved on + smaller faces (64x64 pixels) and rotated face orientations. Known values are: "detection_01", + "detection_02", and "detection_03". Default value is None. + :paramtype detection_model: str or ~azure.ai.vision.face.models.FaceDetectionModel + :keyword recognition_model: The 'recognitionModel' associated with the detected faceIds. + Supported 'recognitionModel' values include 'recognition_01', 'recognition_02', + 'recognition_03' or 'recognition_04'. The default value is 'recognition_01'. 'recognition_04' + is recommended since its accuracy is improved on faces wearing masks compared with + 'recognition_03', and its overall accuracy is improved compared with 'recognition_01' and + 'recognition_02'. Known values are: "recognition_01", "recognition_02", "recognition_03", and + "recognition_04". Default value is None. + :paramtype recognition_model: str or ~azure.ai.vision.face.models.FaceRecognitionModel + :keyword return_face_id: Return faceIds of the detected faces or not. The default value is + true. Default value is None. + :paramtype return_face_id: bool + :keyword return_face_attributes: Analyze and return the one or more specified face attributes + in the comma-separated string like 'returnFaceAttributes=headPose,glasses'. Face attribute + analysis has additional computational and time cost. Default value is None. + :paramtype return_face_attributes: list[str or ~azure.ai.vision.face.models.FaceAttributeType] + :keyword return_face_landmarks: Return face landmarks of the detected faces or not. The default + value is false. Default value is None. + :paramtype return_face_landmarks: bool + :keyword return_recognition_model: Return 'recognitionModel' or not. The default value is + false. This is only applicable when returnFaceId = true. Default value is None. + :paramtype return_recognition_model: bool + :keyword face_id_time_to_live: The number of seconds for the face ID being cached. Supported + range from 60 seconds up to 86400 seconds. The default value is 86400 (24 hours). Default value + is None. 
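
Putting the parameters above together, a sketch of re-detecting faces on a stored session image. This requires a session created with `enable_session_image` and service API version `v1.2-preview.1` or later, per the `api_version_validation` decorator on this method; the session image id is a placeholder.

```python
# Sketch: run detection against an image captured during a liveness session,
# using the enum members renamed in this release.
import asyncio
import os

from azure.core.credentials import AzureKeyCredential
from azure.ai.vision.face.aio import FaceSessionClient
from azure.ai.vision.face.models import FaceDetectionModel, FaceRecognitionModel


async def detect_on_session_image(session_image_id: str) -> None:
    async with FaceSessionClient(
        endpoint=os.environ["FACE_ENDPOINT"],
        credential=AzureKeyCredential(os.environ["FACE_KEY"]),
    ) as client:
        faces = await client.detect_from_session_image(
            session_image_id=session_image_id,
            detection_model=FaceDetectionModel.DETECTION03,
            recognition_model=FaceRecognitionModel.RECOGNITION04,
            return_face_id=False,
            return_face_landmarks=True,
        )
        for face in faces:
            print(face.as_dict())


asyncio.run(detect_on_session_image("<session-image-id>"))
```
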
+ :paramtype face_id_time_to_live: int + :return: list of FaceDetectionResult + :rtype: list[~azure.ai.vision.face.models.FaceDetectionResult] + :raises ~azure.core.exceptions.HttpResponseError: + """ + error_map: MutableMapping = { + 401: ClientAuthenticationError, + 404: ResourceNotFoundError, + 409: ResourceExistsError, + 304: ResourceNotModifiedError, + } + error_map.update(kwargs.pop("error_map", {}) or {}) + + _headers = case_insensitive_dict(kwargs.pop("headers", {}) or {}) + _params = kwargs.pop("params", {}) or {} + + content_type: Optional[str] = kwargs.pop("content_type", _headers.pop("content-type", None)) + cls: ClsType[List[_models.FaceDetectionResult]] = kwargs.pop("cls", None) + + if body is _Unset: + if session_image_id is _Unset: + raise TypeError("missing required argument: session_image_id") + body = {"sessionImageId": session_image_id} + body = {k: v for k, v in body.items() if v is not None} + content_type = content_type or "application/json" + _content = None + if isinstance(body, (IOBase, bytes)): + _content = body + else: + _content = json.dumps(body, cls=SdkJSONEncoder, exclude_readonly=True) # type: ignore + + _request = build_face_session_detect_from_session_image_request( + detection_model=detection_model, + recognition_model=recognition_model, + return_face_id=return_face_id, + return_face_attributes=return_face_attributes, + return_face_landmarks=return_face_landmarks, + return_recognition_model=return_recognition_model, + face_id_time_to_live=face_id_time_to_live, + content_type=content_type, + content=_content, + headers=_headers, + params=_params, + ) + path_format_arguments = { + "endpoint": self._serialize.url("self._config.endpoint", self._config.endpoint, "str", skip_quote=True), + "apiVersion": self._serialize.url("self._config.api_version", self._config.api_version, "str"), + } + _request.url = self._client.format_url(_request.url, **path_format_arguments) + + _stream = kwargs.pop("stream", False) + pipeline_response: PipelineResponse = await self._client._pipeline.run( # type: ignore # pylint: disable=protected-access + _request, stream=_stream, **kwargs + ) + + response = pipeline_response.http_response + + if response.status_code not in [200]: + if _stream: + try: + await response.read() # Load the body in memory and close the socket + except (StreamConsumedError, StreamClosedError): + pass + map_error(status_code=response.status_code, response=response, error_map=error_map) + error = _deserialize(_models.FaceErrorResponse, response.json()) + raise HttpResponseError(response=response, model=error) + + if _stream: + deserialized = response.iter_bytes() + else: + deserialized = _deserialize(List[_models.FaceDetectionResult], response.json()) + + if cls: + return cls(pipeline_response, deserialized, {}) # type: ignore + + return deserialized # type: ignore + + @distributed_trace_async + @api_version_validation( + method_added_on="v1.2-preview.1", + params_added_on={"v1.2-preview.1": ["session_image_id", "accept"]}, + ) + async def get_session_image(self, session_image_id: str, **kwargs: Any) -> AsyncIterator[bytes]: + """Please refer to + https://learn.microsoft.com/rest/api/face/liveness-session-operations/get-session-image for + more details. + + :param session_image_id: The request ID of the image to be retrieved. Required. 
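
The retrieved image is streamed rather than buffered: `get_session_image` returns `AsyncIterator[bytes]` (the implementation below defaults `stream` to True), so a sketch of saving it to disk writes chunks as they arrive:

```python
# Sketch: stream a session image to a local file chunk by chunk.
import asyncio
import os

from azure.core.credentials import AzureKeyCredential
from azure.ai.vision.face.aio import FaceSessionClient


async def save_session_image(session_image_id: str) -> None:
    async with FaceSessionClient(
        endpoint=os.environ["FACE_ENDPOINT"],
        credential=AzureKeyCredential(os.environ["FACE_KEY"]),
    ) as client:
        stream = await client.get_session_image(session_image_id)
        with open("session_image.jpg", "wb") as fd:
            async for chunk in stream:  # AsyncIterator[bytes]
                fd.write(chunk)


asyncio.run(save_session_image("<session-image-id>"))
```
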
+ :type session_image_id: str + :return: AsyncIterator[bytes] + :rtype: AsyncIterator[bytes] + :raises ~azure.core.exceptions.HttpResponseError: + """ + error_map: MutableMapping = { + 401: ClientAuthenticationError, + 404: ResourceNotFoundError, + 409: ResourceExistsError, + 304: ResourceNotModifiedError, + } + error_map.update(kwargs.pop("error_map", {}) or {}) + + _headers = kwargs.pop("headers", {}) or {} + _params = kwargs.pop("params", {}) or {} + + cls: ClsType[AsyncIterator[bytes]] = kwargs.pop("cls", None) + + _request = build_face_session_get_session_image_request( + session_image_id=session_image_id, + headers=_headers, + params=_params, + ) + path_format_arguments = { + "endpoint": self._serialize.url("self._config.endpoint", self._config.endpoint, "str", skip_quote=True), + "apiVersion": self._serialize.url("self._config.api_version", self._config.api_version, "str"), + } + _request.url = self._client.format_url(_request.url, **path_format_arguments) + + _stream = kwargs.pop("stream", True) + pipeline_response: PipelineResponse = await self._client._pipeline.run( # type: ignore # pylint: disable=protected-access + _request, stream=_stream, **kwargs + ) + + response = pipeline_response.http_response + + if response.status_code not in [200]: + if _stream: + try: + await response.read() # Load the body in memory and close the socket + except (StreamConsumedError, StreamClosedError): + pass + map_error(status_code=response.status_code, response=response, error_map=error_map) + error = _deserialize(_models.FaceErrorResponse, response.json()) + raise HttpResponseError(response=response, model=error) + + response_headers = {} + response_headers["content-type"] = self._deserialize("str", response.headers.get("content-type")) + + deserialized = response.iter_bytes() + + if cls: + return cls(pipeline_response, deserialized, response_headers) # type: ignore + + return deserialized # type: ignore diff --git a/sdk/face/azure-ai-vision-face/azure/ai/vision/face/_operations/_patch.py b/sdk/face/azure-ai-vision-face/azure/ai/vision/face/aio/operations/_patch.py similarity index 100% rename from sdk/face/azure-ai-vision-face/azure/ai/vision/face/_operations/_patch.py rename to sdk/face/azure-ai-vision-face/azure/ai/vision/face/aio/operations/_patch.py diff --git a/sdk/face/azure-ai-vision-face/azure/ai/vision/face/models/__init__.py b/sdk/face/azure-ai-vision-face/azure/ai/vision/face/models/__init__.py index 7fceddaf6baa..420ac616cabd 100644 --- a/sdk/face/azure-ai-vision-face/azure/ai/vision/face/models/__init__.py +++ b/sdk/face/azure-ai-vision-face/azure/ai/vision/face/models/__init__.py @@ -7,12 +7,16 @@ # -------------------------------------------------------------------------- from ._models import AccessoryItem +from ._models import AddFaceResult from ._models import AuditLivenessResponseInfo from ._models import AuditRequestInfo from ._models import BlurProperties from ._models import CreateLivenessSessionContent from ._models import CreateLivenessSessionResult +from ._models import CreateLivenessWithVerifySessionContent +from ._models import CreateLivenessWithVerifySessionMultipartContent from ._models import CreateLivenessWithVerifySessionResult +from ._models import CreatePersonResult from ._models import ExposureProperties from ._models import FaceAttributes from ._models import FaceDetectionResult @@ -20,14 +24,22 @@ from ._models import FaceErrorResponse from ._models import FaceFindSimilarResult from ._models import FaceGroupingResult +from ._models import 
FaceIdentificationCandidate +from ._models import FaceIdentificationResult from ._models import FaceLandmarks from ._models import FaceRectangle +from ._models import FaceTrainingResult from ._models import FaceVerificationResult from ._models import FacialHair from ._models import HairColor from ._models import HairProperties from ._models import HeadPose from ._models import LandmarkCoordinate +from ._models import LargeFaceList +from ._models import LargeFaceListFace +from ._models import LargePersonGroup +from ._models import LargePersonGroupPerson +from ._models import LargePersonGroupPersonFace from ._models import LivenessOutputsTarget from ._models import LivenessResponseBody from ._models import LivenessSession @@ -47,6 +59,7 @@ from ._enums import FaceDetectionModel from ._enums import FaceImageType from ._enums import FaceLivenessDecision +from ._enums import FaceOperationStatus from ._enums import FaceRecognitionModel from ._enums import FaceSessionStatus from ._enums import FindSimilarMatchMode @@ -71,12 +84,16 @@ "FaceAttributeTypeRecognition03", "FaceAttributeTypeRecognition04", "AccessoryItem", + "AddFaceResult", "AuditLivenessResponseInfo", "AuditRequestInfo", "BlurProperties", "CreateLivenessSessionContent", "CreateLivenessSessionResult", + "CreateLivenessWithVerifySessionContent", + "CreateLivenessWithVerifySessionMultipartContent", "CreateLivenessWithVerifySessionResult", + "CreatePersonResult", "ExposureProperties", "FaceAttributes", "FaceDetectionResult", @@ -84,14 +101,22 @@ "FaceErrorResponse", "FaceFindSimilarResult", "FaceGroupingResult", + "FaceIdentificationCandidate", + "FaceIdentificationResult", "FaceLandmarks", "FaceRectangle", + "FaceTrainingResult", "FaceVerificationResult", "FacialHair", "HairColor", "HairProperties", "HeadPose", "LandmarkCoordinate", + "LargeFaceList", + "LargeFaceListFace", + "LargePersonGroup", + "LargePersonGroupPerson", + "LargePersonGroupPersonFace", "LivenessOutputsTarget", "LivenessResponseBody", "LivenessSession", @@ -110,6 +135,7 @@ "FaceDetectionModel", "FaceImageType", "FaceLivenessDecision", + "FaceOperationStatus", "FaceRecognitionModel", "FaceSessionStatus", "FindSimilarMatchMode", diff --git a/sdk/face/azure-ai-vision-face/azure/ai/vision/face/models/_enums.py b/sdk/face/azure-ai-vision-face/azure/ai/vision/face/models/_enums.py index 40f978842f55..81c3a86b106d 100644 --- a/sdk/face/azure-ai-vision-face/azure/ai/vision/face/models/_enums.py +++ b/sdk/face/azure-ai-vision-face/azure/ai/vision/face/models/_enums.py @@ -87,14 +87,14 @@ class FaceAttributeType(str, Enum, metaclass=CaseInsensitiveEnumMeta): class FaceDetectionModel(str, Enum, metaclass=CaseInsensitiveEnumMeta): """The detection model for the face.""" - DETECTION_01 = "detection_01" + DETECTION01 = "detection_01" """The default detection model. Recommend for near frontal face detection. 
For scenarios with exceptionally large angle (head-pose) faces, occluded faces or wrong image orientation, the faces in such cases may not be detected.""" - DETECTION_02 = "detection_02" + DETECTION02 = "detection_02" """Detection model released in 2019 May with improved accuracy especially on small, side and blurry faces.""" - DETECTION_03 = "detection_03" + DETECTION03 = "detection_03" """Detection model released in 2021 February with improved accuracy especially on small faces.""" @@ -120,17 +120,30 @@ class FaceLivenessDecision(str, Enum, metaclass=CaseInsensitiveEnumMeta): """The algorithm has classified the target face as a spoof.""" +class FaceOperationStatus(str, Enum, metaclass=CaseInsensitiveEnumMeta): + """The status of a long-running operation.""" + + NOT_STARTED = "notStarted" + """The operation has not started.""" + RUNNING = "running" + """The operation is still running.""" + SUCCEEDED = "succeeded" + """The operation has succeeded.""" + FAILED = "failed" + """The operation has failed.""" + + class FaceRecognitionModel(str, Enum, metaclass=CaseInsensitiveEnumMeta): """The recognition model for the face.""" - RECOGNITION_01 = "recognition_01" + RECOGNITION01 = "recognition_01" """The default recognition model for "Detect". All those faceIds created before 2019 March are bonded with this recognition model.""" - RECOGNITION_02 = "recognition_02" + RECOGNITION02 = "recognition_02" """Recognition model released in 2019 March.""" - RECOGNITION_03 = "recognition_03" + RECOGNITION03 = "recognition_03" """Recognition model released in 2020 May.""" - RECOGNITION_04 = "recognition_04" + RECOGNITION04 = "recognition_04" """Recognition model released in 2021 February. It's recommended to use this recognition model for better recognition accuracy.""" @@ -192,27 +205,25 @@ class HairColorType(str, Enum, metaclass=CaseInsensitiveEnumMeta): class LivenessModel(str, Enum, metaclass=CaseInsensitiveEnumMeta): """The model version used for liveness classification.""" - V2020_02_15_PREVIEW_01 = "2020-02-15-preview.01" - V2021_11_12_PREVIEW_03 = "2021-11-12-preview.03" - V2022_10_15_PREVIEW_04 = "2022-10-15-preview.04" - V2023_03_02_PREVIEW_05 = "2023-03-02-preview.05" + V2022_10_15_PREVIEW04 = "2022-10-15-preview.04" + V2023_12_20_PREVIEW06 = "2023-12-20-preview.06" class LivenessOperationMode(str, Enum, metaclass=CaseInsensitiveEnumMeta): - """The liveness operation mode to drive the client’s end-user experience.""" + """The liveness operation mode to drive the client's end-user experience.""" PASSIVE = "Passive" """Utilizes a passive liveness technique that requires no additional actions from the user. Requires normal indoor lighting and high screen brightness for optimal performance. And thus, this mode has a narrow operational envelope and will not be suitable for scenarios that - requires the end-user’s to be in bright lighting conditions. Note: this is the only supported + require the end-user to be in bright lighting conditions. Note: this is the only supported mode for the Mobile (iOS and Android) solution.""" PASSIVE_ACTIVE = "PassiveActive" """This mode utilizes a hybrid passive or active liveness technique that necessitates user cooperation. It is optimized to require active motion only under suboptimal lighting conditions. Unlike the passive mode, this mode has no lighting restrictions, and thus offering a broader operational envelope.
This mode is preferable on Web based solutions due to the lack - of automatic screen brightness control available on browsers which hinders the Passive mode’s + of automatic screen brightness control available on browsers which hinders the Passive mode's operational envelope on Web based solutions.""" @@ -254,5 +265,7 @@ class QualityForRecognition(str, Enum, metaclass=CaseInsensitiveEnumMeta): class Versions(str, Enum, metaclass=CaseInsensitiveEnumMeta): """API versions for Azure AI Face API.""" - V1_1_PREVIEW_1 = "v1.1-preview.1" + V1_1_PREVIEW1 = "v1.1-preview.1" """v1.1-preview.1""" + V1_2_PREVIEW1 = "v1.2-preview.1" + """v1.2-preview.1""" diff --git a/sdk/face/azure-ai-vision-face/azure/ai/vision/face/models/_models.py b/sdk/face/azure-ai-vision-face/azure/ai/vision/face/models/_models.py index 06bb6e5db83c..df4774e39456 100644 --- a/sdk/face/azure-ai-vision-face/azure/ai/vision/face/models/_models.py +++ b/sdk/face/azure-ai-vision-face/azure/ai/vision/face/models/_models.py @@ -1,5 +1,5 @@ -# coding=utf-8 # pylint: disable=too-many-lines +# coding=utf-8 # -------------------------------------------------------------------------- # Copyright (c) Microsoft Corporation. All rights reserved. # Licensed under the MIT License. See License.txt in the project root for license information. @@ -8,28 +8,19 @@ # -------------------------------------------------------------------------- import datetime -import sys from typing import Any, List, Mapping, Optional, TYPE_CHECKING, Union, overload from .. import _model_base from .._model_base import rest_field from .._vendor import FileType -if sys.version_info >= (3, 9): - from collections.abc import MutableMapping -else: - from typing import MutableMapping # type: ignore # pylint: disable=ungrouped-imports - if TYPE_CHECKING: - # pylint: disable=unused-import,ungrouped-imports from .. import models as _models -JSON = MutableMapping[str, Any] # pylint: disable=unsubscriptable-object class AccessoryItem(_model_base.Model): """Accessory item and corresponding confidence level. - All required parameters must be populated in order to send to server. :ivar type: Type of the accessory. Required. Known values are: "headwear", "glasses", and "mask". @@ -62,10 +53,42 @@ def __init__(self, *args: Any, **kwargs: Any) -> None: # pylint: disable=useles super().__init__(*args, **kwargs) +class AddFaceResult(_model_base.Model): + """Response body for adding face. + + + :ivar persisted_face_id: Persisted Face ID of the added face, which is persisted and will not + expire. Different from faceId which is created in "Detect" and will expire in 24 hours after + the detection call. Required. + :vartype persisted_face_id: str + """ + + persisted_face_id: str = rest_field(name="persistedFaceId") + """Persisted Face ID of the added face, which is persisted and will not expire. Different from + faceId which is created in \"Detect\" and will expire in 24 hours after the detection call. + Required.""" + + @overload + def __init__( + self, + *, + persisted_face_id: str, + ): ... + + @overload + def __init__(self, mapping: Mapping[str, Any]): + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: # pylint: disable=useless-super-delegation + super().__init__(*args, **kwargs) + + class AuditLivenessResponseInfo(_model_base.Model): """Audit entry for a response in the session. - All required parameters must be populated in order to send to server. :ivar body: The response body. 
The schema of this field will depend on the request.url and request.method used by the client. Required. @@ -108,7 +131,6 @@ def __init__(self, *args: Any, **kwargs: Any) -> None: # pylint: disable=useles class AuditRequestInfo(_model_base.Model): """Audit entry for a request in the session. - All required parameters must be populated in order to send to server. :ivar url: The relative URL and query of the liveness request. Required. :vartype url: str @@ -158,7 +180,6 @@ def __init__(self, *args: Any, **kwargs: Any) -> None: # pylint: disable=useles class BlurProperties(_model_base.Model): """Properties describing any presence of blur within the image. - All required parameters must be populated in order to send to server. :ivar blur_level: An enum value indicating level of blurriness. Required. Known values are: "low", "medium", and "high". @@ -193,7 +214,7 @@ def __init__(self, *args: Any, **kwargs: Any) -> None: # pylint: disable=useles class CreateLivenessSessionContent(_model_base.Model): - """Request for creating liveness session. + """Request model for creating liveness session. All required parameters must be populated in order to send to server. @@ -209,6 +230,12 @@ class CreateLivenessSessionContent(_model_base.Model): 'deviceCorrelationId' via the Vision SDK. Default is false, and 'deviceCorrelationId' must be set in this request body. :vartype device_correlation_id_set_in_client: bool + :ivar enable_session_image: Whether or not to store the session image. + :vartype enable_session_image: bool + :ivar liveness_single_modal_model: The model version used for liveness classification. This is + an optional parameter, and if this is not specified, then the latest supported model version + will be chosen. Known values are: "2022-10-15-preview.04" and "2023-12-20-preview.06". + :vartype liveness_single_modal_model: str or ~azure.ai.vision.face.models.LivenessModel :ivar device_correlation_id: Unique Guid per each end-user device. This is to provide rate limiting and anti-hammering. If 'deviceCorrelationIdSetInClient' is true in this request, this 'deviceCorrelationId' must be null. @@ -229,6 +256,14 @@ class CreateLivenessSessionContent(_model_base.Model): device_correlation_id_set_in_client: Optional[bool] = rest_field(name="deviceCorrelationIdSetInClient") """Whether or not to allow client to set their own 'deviceCorrelationId' via the Vision SDK. Default is false, and 'deviceCorrelationId' must be set in this request body.""" + enable_session_image: Optional[bool] = rest_field(name="enableSessionImage") + """Whether or not to store the session image.""" + liveness_single_modal_model: Optional[Union[str, "_models.LivenessModel"]] = rest_field( + name="livenessSingleModalModel" + ) + """The model version used for liveness classification. This is an optional parameter, and if this + is not specified, then the latest supported model version will be chosen. Known values are: + \"2022-10-15-preview.04\" and \"2023-12-20-preview.06\".""" device_correlation_id: Optional[str] = rest_field(name="deviceCorrelationId") """Unique Guid per each end-user device. This is to provide rate limiting and anti-hammering.
If 'deviceCorrelationIdSetInClient' is true in this request, this 'deviceCorrelationId' must be @@ -243,6 +278,8 @@ def __init__( liveness_operation_mode: Union[str, "_models.LivenessOperationMode"], send_results_to_client: Optional[bool] = None, device_correlation_id_set_in_client: Optional[bool] = None, + enable_session_image: Optional[bool] = None, + liveness_single_modal_model: Optional[Union[str, "_models.LivenessModel"]] = None, device_correlation_id: Optional[str] = None, auth_token_time_to_live_in_seconds: Optional[int] = None, ): ... @@ -261,7 +298,6 @@ def __init__(self, *args: Any, **kwargs: Any) -> None: # pylint: disable=useles class CreateLivenessSessionResult(_model_base.Model): """Response of liveness session creation. - All required parameters must be populated in order to send to server. :ivar session_id: The unique session ID of the created session. It will expire 48 hours after it was created or may be deleted sooner using the corresponding Session DELETE operation. @@ -301,28 +337,137 @@ def __init__(self, *args: Any, **kwargs: Any) -> None: # pylint: disable=useles class CreateLivenessWithVerifySessionContent(_model_base.Model): + """Request for creating liveness with verify session. + + All required parameters must be populated in order to send to server. + + :ivar liveness_operation_mode: Type of liveness mode the client should follow. Required. Known + values are: "Passive" and "PassiveActive". + :vartype liveness_operation_mode: str or ~azure.ai.vision.face.models.LivenessOperationMode + :ivar send_results_to_client: Whether or not to allow a '200 - Success' response body to be + sent to the client, which may be undesirable for security reasons. Default is false, clients + will receive a '204 - NoContent' empty body response. Regardless of selection, calling Session + GetResult will always contain a response body enabling business logic to be implemented. + :vartype send_results_to_client: bool + :ivar device_correlation_id_set_in_client: Whether or not to allow client to set their own + 'deviceCorrelationId' via the Vision SDK. Default is false, and 'deviceCorrelationId' must be + set in this request body. + :vartype device_correlation_id_set_in_client: bool + :ivar enable_session_image: Whether or not to store the session image. + :vartype enable_session_image: bool + :ivar liveness_single_modal_model: The model version used for liveness classification. This is + an optional parameter, and if this is not specified, then the latest supported model version + will be chosen. Known values are: "2022-10-15-preview.04" and "2023-12-20-preview.06". + :vartype liveness_single_modal_model: str or ~azure.ai.vision.face.models.LivenessModel + :ivar device_correlation_id: Unique Guid per each end-user device. This is to provide rate + limiting and anti-hammering. If 'deviceCorrelationIdSetInClient' is true in this request, this + 'deviceCorrelationId' must be null. + :vartype device_correlation_id: str + :ivar auth_token_time_to_live_in_seconds: Seconds the session should last for. Range is 60 to + 86400 seconds. Default value is 600. + :vartype auth_token_time_to_live_in_seconds: int + :ivar return_verify_image_hash: Whether or not to return the verify image hash. + :vartype return_verify_image_hash: bool + :ivar verify_confidence_threshold: Threshold for confidence of the face verification.
+ :vartype verify_confidence_threshold: float + """ + + liveness_operation_mode: Union[str, "_models.LivenessOperationMode"] = rest_field(name="livenessOperationMode") + """Type of liveness mode the client should follow. Required. Known values are: \"Passive\" and + \"PassiveActive\".""" + send_results_to_client: Optional[bool] = rest_field(name="sendResultsToClient") + """Whether or not to allow a '200 - Success' response body to be sent to the client, which may be + undesirable for security reasons. Default is false, clients will receive a '204 - NoContent' + empty body response. Regardless of selection, calling Session GetResult will always contain a + response body enabling business logic to be implemented.""" + device_correlation_id_set_in_client: Optional[bool] = rest_field(name="deviceCorrelationIdSetInClient") + """Whether or not to allow client to set their own 'deviceCorrelationId' via the Vision SDK. + Default is false, and 'deviceCorrelationId' must be set in this request body.""" + enable_session_image: Optional[bool] = rest_field(name="enableSessionImage") + """Whether or not to store the session image.""" + liveness_single_modal_model: Optional[Union[str, "_models.LivenessModel"]] = rest_field( + name="livenessSingleModalModel" + ) + """The model version used for liveness classification. This is an optional parameter, and if this + is not specified, then the latest supported model version will be chosen. Known values are: + \"2022-10-15-preview.04\" and \"2023-12-20-preview.06\".""" + device_correlation_id: Optional[str] = rest_field(name="deviceCorrelationId") + """Unique Guid per each end-user device. This is to provide rate limiting and anti-hammering. If + 'deviceCorrelationIdSetInClient' is true in this request, this 'deviceCorrelationId' must be + null.""" + auth_token_time_to_live_in_seconds: Optional[int] = rest_field(name="authTokenTimeToLiveInSeconds") + """Seconds the session should last for. Range is 60 to 86400 seconds. Default value is 600.""" + return_verify_image_hash: Optional[bool] = rest_field(name="returnVerifyImageHash") + """Whether or not to return the verify image hash.""" + verify_confidence_threshold: Optional[float] = rest_field(name="verifyConfidenceThreshold") + """Threshold for confidence of the face verification.""" + + @overload + def __init__( + self, + *, + liveness_operation_mode: Union[str, "_models.LivenessOperationMode"], + send_results_to_client: Optional[bool] = None, + device_correlation_id_set_in_client: Optional[bool] = None, + enable_session_image: Optional[bool] = None, + liveness_single_modal_model: Optional[Union[str, "_models.LivenessModel"]] = None, + device_correlation_id: Optional[str] = None, + auth_token_time_to_live_in_seconds: Optional[int] = None, + return_verify_image_hash: Optional[bool] = None, + verify_confidence_threshold: Optional[float] = None, + ): ... + + @overload + def __init__(self, mapping: Mapping[str, Any]): + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: # pylint: disable=useless-super-delegation + super().__init__(*args, **kwargs) + + +class CreateLivenessWithVerifySessionMultipartContent(_model_base.Model): # pylint: disable=name-too-long """Request of liveness with verify session creation. All required parameters must be populated in order to send to server. :ivar parameters: The parameters for creating session. Required.
- :vartype parameters: ~azure.ai.vision.face.models.CreateLivenessSessionContent + :vartype parameters: ~azure.ai.vision.face.models.CreateLivenessWithVerifySessionContent :ivar verify_image: The image stream for verify. Content-Disposition header field for this part must have filename. Required. - :vartype verify_image: bytes + :vartype verify_image: ~azure.ai.vision.face._vendor.FileType """ - parameters: "_models.CreateLivenessSessionContent" = rest_field(name="Parameters") + parameters: "_models.CreateLivenessWithVerifySessionContent" = rest_field(name="Parameters") """The parameters for creating session. Required.""" verify_image: FileType = rest_field(name="VerifyImage", is_multipart_file_input=True) """The image stream for verify. Content-Disposition header field for this part must have filename. Required.""" + @overload + def __init__( + self, + *, + parameters: "_models.CreateLivenessWithVerifySessionContent", + verify_image: FileType, + ): ... + + @overload + def __init__(self, mapping: Mapping[str, Any]): + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: # pylint: disable=useless-super-delegation + super().__init__(*args, **kwargs) + class CreateLivenessWithVerifySessionResult(_model_base.Model): """Response of liveness session with verify creation with verify image provided. - All required parameters must be populated in order to send to server. :ivar session_id: The unique session ID of the created session. It will expire 48 hours after it was created or may be deleted sooner using the corresponding Session DELETE operation. @@ -366,10 +511,38 @@ def __init__(self, *args: Any, **kwargs: Any) -> None: # pylint: disable=useles super().__init__(*args, **kwargs) +class CreatePersonResult(_model_base.Model): + """Response of create person. + + + :ivar person_id: Person ID of the person. Required. + :vartype person_id: str + """ + + person_id: str = rest_field(name="personId") + """Person ID of the person. Required.""" + + @overload + def __init__( + self, + *, + person_id: str, + ): ... + + @overload + def __init__(self, mapping: Mapping[str, Any]): + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: # pylint: disable=useless-super-delegation + super().__init__(*args, **kwargs) + + class ExposureProperties(_model_base.Model): """Properties describing exposure level of the image. - All required parameters must be populated in order to send to server. :ivar exposure_level: An enum value indicating level of exposure. Required. Known values are: "underExposure", "goodExposure", and "overExposure". @@ -504,7 +677,6 @@ def __init__(self, *args: Any, **kwargs: Any) -> None: # pylint: disable=useles class FaceDetectionResult(_model_base.Model): """Response for detect API. - All required parameters must be populated in order to send to server. :ivar face_id: Unique faceId of the detected face, created by detection API and it will expire 24 hours after the detection call. To return this, it requires 'returnFaceId' parameter to be @@ -564,7 +736,6 @@ class FaceError(_model_base.Model): """The error object. For comprehensive details on error codes and messages returned by the Face Service, please refer to the following link: https://aka.ms/face-error-codes-and-messages. - All required parameters must be populated in order to send to server. 
:ivar code: One of a server-defined set of error codes. Required. :vartype code: str @@ -599,7 +770,6 @@ def __init__(self, *args: Any, **kwargs: Any) -> None: # pylint: disable=useles class FaceErrorResponse(_model_base.Model): """A response containing error details. - All required parameters must be populated in order to send to server. :ivar error: The error object. Required. :vartype error: ~azure.ai.vision.face.models.FaceError @@ -629,7 +799,6 @@ def __init__(self, *args: Any, **kwargs: Any) -> None: # pylint: disable=useles class FaceFindSimilarResult(_model_base.Model): """Response body for find similar face operation. - All required parameters must be populated in order to send to server. :ivar confidence: Confidence value of the candidate. The higher confidence, the more similar. Range between [0,1]. Required. @@ -675,7 +844,6 @@ def __init__(self, *args: Any, **kwargs: Any) -> None: # pylint: disable=useles class FaceGroupingResult(_model_base.Model): """Response body for group face operation. - All required parameters must be populated in order to send to server. :ivar groups: A partition of the original faces based on face similarity. Groups are ranked by number of faces. Required. @@ -710,10 +878,83 @@ def __init__(self, *args: Any, **kwargs: Any) -> None: # pylint: disable=useles super().__init__(*args, **kwargs) + +class FaceIdentificationCandidate(_model_base.Model): + """Candidate for identify call. + + + :ivar person_id: personId of candidate person. Required. + :vartype person_id: str + :ivar confidence: Confidence value of the candidate. The higher confidence, the more similar. + Range between [0,1]. Required. + :vartype confidence: float + """ + + person_id: str = rest_field(name="personId") + """personId of candidate person. Required.""" + confidence: float = rest_field() + """Confidence value of the candidate. The higher confidence, the more similar. Range between + [0,1]. Required.""" + + @overload + def __init__( + self, + *, + person_id: str, + confidence: float, + ): ... + + @overload + def __init__(self, mapping: Mapping[str, Any]): + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: # pylint: disable=useless-super-delegation + super().__init__(*args, **kwargs) + + +class FaceIdentificationResult(_model_base.Model): + """Identify result. + + + :ivar face_id: faceId of the query face. Required. + :vartype face_id: str + :ivar candidates: Identified person candidates for that face (ranked by confidence). Array size + should be no larger than input maxNumOfCandidatesReturned. If no person is identified, an + empty array is returned. Required. + :vartype candidates: list[~azure.ai.vision.face.models.FaceIdentificationCandidate] + """ + + face_id: str = rest_field(name="faceId") + """faceId of the query face. Required.""" + candidates: List["_models.FaceIdentificationCandidate"] = rest_field() + """Identified person candidates for that face (ranked by confidence). Array size should be no + larger than input maxNumOfCandidatesReturned. If no person is identified, an empty array is + returned. Required.""" + + @overload + def __init__( + self, + *, + face_id: str, + candidates: List["_models.FaceIdentificationCandidate"], + ): ... + + @overload + def __init__(self, mapping: Mapping[str, Any]): + """ + :param mapping: raw JSON to initialize the model.
+ :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: # pylint: disable=useless-super-delegation + super().__init__(*args, **kwargs) + + class FaceLandmarks(_model_base.Model): # pylint: disable=too-many-instance-attributes """A collection of 27-point face landmarks pointing to the important positions of face components. - All required parameters must be populated in order to send to server. :ivar pupil_left: The coordinates of the left eye pupil. Required. :vartype pupil_left: ~azure.ai.vision.face.models.LandmarkCoordinate @@ -873,7 +1114,6 @@ def __init__(self, *args: Any, **kwargs: Any) -> None: # pylint: disable=useles class FaceRectangle(_model_base.Model): """A rectangle within which a face can be found. - All required parameters must be populated in order to send to server. :ivar top: The distance from the top edge if the image to the top edge of the rectangle, in pixels. Required. :vartype top: int @@ -919,10 +1159,71 @@ def __init__(self, *args: Any, **kwargs: Any) -> None: # pylint: disable=useles super().__init__(*args, **kwargs) + +class FaceTrainingResult(_model_base.Model): + """Training result of a container. + + + :ivar status: Training status of the container. Required. Known values are: "notStarted", + "running", "succeeded", and "failed". + :vartype status: str or ~azure.ai.vision.face.models.FaceOperationStatus + :ivar created_date_time: A combined UTC date and time string that describes the created time of + the person group, large person group or large face list. Required. + :vartype created_date_time: ~datetime.datetime + :ivar last_action_date_time: A combined UTC date and time string that describes the last + modified time of the person group, large person group or large face list; could be null when + the group is not successfully trained. Required. + :vartype last_action_date_time: ~datetime.datetime + :ivar last_successful_training_date_time: A combined UTC date and time string that describes + the last successful training time of the person group, large person group or large face list. + Required. + :vartype last_successful_training_date_time: ~datetime.datetime + :ivar message: Shows the failure message when training failed (omitted when training + succeeds). + :vartype message: str + """ + + status: Union[str, "_models.FaceOperationStatus"] = rest_field() + """Training status of the container. Required. Known values are: \"notStarted\", \"running\", + \"succeeded\", and \"failed\".""" + created_date_time: datetime.datetime = rest_field(name="createdDateTime", format="rfc3339") + """A combined UTC date and time string that describes the created time of the person group, large + person group or large face list. Required.""" + last_action_date_time: datetime.datetime = rest_field(name="lastActionDateTime", format="rfc3339") + """A combined UTC date and time string that describes the last modified time of the person group, + large person group or large face list; could be null when the group is not successfully + trained. Required.""" + last_successful_training_date_time: datetime.datetime = rest_field( + name="lastSuccessfulTrainingDateTime", format="rfc3339" + ) + """A combined UTC date and time string that describes the last successful training time of the + person group, large person group or large face list.
Required.""" + message: Optional[str] = rest_field() + """Shows the failure message when training failed (omitted when training succeeds).""" + + @overload + def __init__( + self, + *, + status: Union[str, "_models.FaceOperationStatus"], + created_date_time: datetime.datetime, + last_action_date_time: datetime.datetime, + last_successful_training_date_time: datetime.datetime, + message: Optional[str] = None, + ): ... + + @overload + def __init__(self, mapping: Mapping[str, Any]): + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: # pylint: disable=useless-super-delegation + super().__init__(*args, **kwargs) + + class FaceVerificationResult(_model_base.Model): """Verify result. - All required parameters must be populated in order to send to server. :ivar is_identical: True if the two faces belong to the same person or the face belongs to the person, otherwise false. Required. @@ -965,7 +1266,6 @@ def __init__(self, *args: Any, **kwargs: Any) -> None: # pylint: disable=useles class FacialHair(_model_base.Model): """Properties describing facial hair attributes. - All required parameters must be populated in order to send to server. :ivar moustache: A number ranging from 0 to 1 indicating a level of confidence associated with a property. Required. @@ -1011,7 +1311,6 @@ def __init__(self, *args: Any, **kwargs: Any) -> None: # pylint: disable=useles class HairColor(_model_base.Model): """An array of candidate colors and confidence level in the presence of each. - All required parameters must be populated in order to send to server. :ivar color: Name of the hair color. Required. Known values are: "unknown", "white", "gray", "blond", "brown", "red", "black", and "other". @@ -1048,7 +1347,6 @@ def __init__(self, *args: Any, **kwargs: Any) -> None: # pylint: disable=useles class HairProperties(_model_base.Model): """Properties describing hair attributes. - All required parameters must be populated in order to send to server. :ivar bald: A number describing confidence level of whether the person is bald. Required. :vartype bald: float @@ -1089,7 +1387,6 @@ def __init__(self, *args: Any, **kwargs: Any) -> None: # pylint: disable=useles class HeadPose(_model_base.Model): """3-D roll/yaw/pitch angles for face direction. - All required parameters must be populated in order to send to server. :ivar pitch: Value of angles. Required. :vartype pitch: float @@ -1129,7 +1426,6 @@ def __init__(self, *args: Any, **kwargs: Any) -> None: # pylint: disable=useles class LandmarkCoordinate(_model_base.Model): """Landmark coordinates within an image. - All required parameters must be populated in order to send to server. :ivar x: The horizontal component, in pixels. Required. :vartype x: float @@ -1161,10 +1457,225 @@ def __init__(self, *args: Any, **kwargs: Any) -> None: # pylint: disable=useles super().__init__(*args, **kwargs) + +class LargeFaceList(_model_base.Model): + """Large face list is a list of faces, up to 1,000,000 faces. + + Readonly variables are only populated by the server, and will be ignored when sending a request. + + + :ivar name: User defined name, maximum length is 128. Required. + :vartype name: str + :ivar user_data: Optional user defined data. Length should not exceed 16K. + :vartype user_data: str + :ivar recognition_model: Name of recognition model. Recognition model is used when the face + features are extracted and associated with detected faceIds.
Known values are: + "recognition_01", "recognition_02", "recognition_03", and "recognition_04". + :vartype recognition_model: str or ~azure.ai.vision.face.models.FaceRecognitionModel + :ivar large_face_list_id: Valid character is letter in lower case or digit or '-' or '_', + maximum length is 64. Required. + :vartype large_face_list_id: str + """ + + name: str = rest_field() + """User defined name, maximum length is 128. Required.""" + user_data: Optional[str] = rest_field(name="userData") + """Optional user defined data. Length should not exceed 16K.""" + recognition_model: Optional[Union[str, "_models.FaceRecognitionModel"]] = rest_field(name="recognitionModel") + """Name of recognition model. Recognition model is used when the face features are extracted and + associated with detected faceIds. Known values are: \"recognition_01\", \"recognition_02\", + \"recognition_03\", and \"recognition_04\".""" + large_face_list_id: str = rest_field(name="largeFaceListId", visibility=["read"]) + """Valid character is letter in lower case or digit or '-' or '_', maximum length is 64. Required.""" + + @overload + def __init__( + self, + *, + name: str, + user_data: Optional[str] = None, + recognition_model: Optional[Union[str, "_models.FaceRecognitionModel"]] = None, + ): ... + + @overload + def __init__(self, mapping: Mapping[str, Any]): + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: # pylint: disable=useless-super-delegation + super().__init__(*args, **kwargs) + + +class LargeFaceListFace(_model_base.Model): + """Face resource for large face list. + + Readonly variables are only populated by the server, and will be ignored when sending a request. + + + :ivar persisted_face_id: Face ID of the face. Required. + :vartype persisted_face_id: str + :ivar user_data: User-provided data attached to the face. The length limit is 1K. + :vartype user_data: str + """ + + persisted_face_id: str = rest_field(name="persistedFaceId", visibility=["read"]) + """Face ID of the face. Required.""" + user_data: Optional[str] = rest_field(name="userData") + """User-provided data attached to the face. The length limit is 1K.""" + + @overload + def __init__( + self, + *, + user_data: Optional[str] = None, + ): ... + + @overload + def __init__(self, mapping: Mapping[str, Any]): + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: # pylint: disable=useless-super-delegation + super().__init__(*args, **kwargs) + + +class LargePersonGroup(_model_base.Model): + """The container of the uploaded person data, including face recognition feature, and up to + 1,000,000 people. + + Readonly variables are only populated by the server, and will be ignored when sending a request. + + + :ivar name: User defined name, maximum length is 128. Required. + :vartype name: str + :ivar user_data: Optional user defined data. Length should not exceed 16K. + :vartype user_data: str + :ivar recognition_model: Name of recognition model. Recognition model is used when the face + features are extracted and associated with detected faceIds. Known values are: + "recognition_01", "recognition_02", "recognition_03", and "recognition_04". + :vartype recognition_model: str or ~azure.ai.vision.face.models.FaceRecognitionModel + :ivar large_person_group_id: ID of the container. Required. 
+ :vartype large_person_group_id: str + """ + + name: str = rest_field() + """User defined name, maximum length is 128. Required.""" + user_data: Optional[str] = rest_field(name="userData") + """Optional user defined data. Length should not exceed 16K.""" + recognition_model: Optional[Union[str, "_models.FaceRecognitionModel"]] = rest_field(name="recognitionModel") + """Name of recognition model. Recognition model is used when the face features are extracted and + associated with detected faceIds. Known values are: \"recognition_01\", \"recognition_02\", + \"recognition_03\", and \"recognition_04\".""" + large_person_group_id: str = rest_field(name="largePersonGroupId", visibility=["read"]) + """ID of the container. Required.""" + + @overload + def __init__( + self, + *, + name: str, + user_data: Optional[str] = None, + recognition_model: Optional[Union[str, "_models.FaceRecognitionModel"]] = None, + ): ... + + @overload + def __init__(self, mapping: Mapping[str, Any]): + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: # pylint: disable=useless-super-delegation + super().__init__(*args, **kwargs) + + +class LargePersonGroupPerson(_model_base.Model): + """The person in a specified large person group. To add face to this person, please call "Add + Large Person Group Person Face". + + Readonly variables are only populated by the server, and will be ignored when sending a request. + + + :ivar person_id: ID of the person. Required. + :vartype person_id: str + :ivar name: User defined name, maximum length is 128. Required. + :vartype name: str + :ivar user_data: Optional user defined data. Length should not exceed 16K. + :vartype user_data: str + :ivar persisted_face_ids: Face ids of registered faces in the person. + :vartype persisted_face_ids: list[str] + """ + + person_id: str = rest_field(name="personId", visibility=["read"]) + """ID of the person. Required.""" + name: str = rest_field() + """User defined name, maximum length is 128. Required.""" + user_data: Optional[str] = rest_field(name="userData") + """Optional user defined data. Length should not exceed 16K.""" + persisted_face_ids: Optional[List[str]] = rest_field(name="persistedFaceIds") + """Face ids of registered faces in the person.""" + + @overload + def __init__( + self, + *, + name: str, + user_data: Optional[str] = None, + persisted_face_ids: Optional[List[str]] = None, + ): ... + + @overload + def __init__(self, mapping: Mapping[str, Any]): + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: # pylint: disable=useless-super-delegation + super().__init__(*args, **kwargs) + + +class LargePersonGroupPersonFace(_model_base.Model): + """Face resource for large person group person. + + Readonly variables are only populated by the server, and will be ignored when sending a request. + + + :ivar persisted_face_id: Face ID of the face. Required. + :vartype persisted_face_id: str + :ivar user_data: User-provided data attached to the face. The length limit is 1K. + :vartype user_data: str + """ + + persisted_face_id: str = rest_field(name="persistedFaceId", visibility=["read"]) + """Face ID of the face. Required.""" + user_data: Optional[str] = rest_field(name="userData") + """User-provided data attached to the face. The length limit is 1K.""" + + @overload + def __init__( + self, + *, + user_data: Optional[str] = None, + ): ... 
+ + @overload + def __init__(self, mapping: Mapping[str, Any]): + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: # pylint: disable=useless-super-delegation + super().__init__(*args, **kwargs) + + class LivenessOutputsTarget(_model_base.Model): """The liveness classification for target face. - All required parameters must be populated in order to send to server. :ivar face_rectangle: The face region where the liveness classification was made on. Required. :vartype face_rectangle: ~azure.ai.vision.face.models.FaceRectangle @@ -1221,8 +1732,7 @@ class LivenessResponseBody(_model_base.Model): :ivar target: Specific targets used for liveness classification. :vartype target: ~azure.ai.vision.face.models.LivenessOutputsTarget :ivar model_version_used: The model version used for liveness classification. Known values are: - "2020-02-15-preview.01", "2021-11-12-preview.03", "2022-10-15-preview.04", and - "2023-03-02-preview.05". + "2022-10-15-preview.04" and "2023-12-20-preview.06". :vartype model_version_used: str or ~azure.ai.vision.face.models.LivenessModel :ivar verify_result: The face verification output. Only available when the request is liveness with verify. @@ -1235,9 +1745,8 @@ class LivenessResponseBody(_model_base.Model): target: Optional["_models.LivenessOutputsTarget"] = rest_field() """Specific targets used for liveness classification.""" model_version_used: Optional[Union[str, "_models.LivenessModel"]] = rest_field(name="modelVersionUsed") - """The model version used for liveness classification. Known values are: - \"2020-02-15-preview.01\", \"2021-11-12-preview.03\", \"2022-10-15-preview.04\", and - \"2023-03-02-preview.05\".""" + """The model version used for liveness classification. Known values are: \"2022-10-15-preview.04\" + and \"2023-12-20-preview.06\".""" verify_result: Optional["_models.LivenessWithVerifyOutputs"] = rest_field(name="verifyResult") """The face verification output. Only available when the request is liveness with verify.""" @@ -1267,7 +1776,6 @@ class LivenessSession(_model_base.Model): Readonly variables are only populated by the server, and will be ignored when sending a request. - All required parameters must be populated in order to send to server. :ivar id: The unique ID to reference this session. Required. :vartype id: str @@ -1338,7 +1846,6 @@ def __init__(self, *args: Any, **kwargs: Any) -> None: # pylint: disable=useles class LivenessSessionAuditEntry(_model_base.Model): """Audit entry for a request in session. - All required parameters must be populated in order to send to server. :ivar id: The unique id to refer to this audit request. Use this id with the 'start' query parameter to continue on to the next page of audit results. Required. @@ -1364,6 +1871,10 @@ class LivenessSessionAuditEntry(_model_base.Model): service has been compromised and the result should not be trusted. For more information, see how to guides on how to leverage this value to secure your end-to-end solution. Required. :vartype digest: str + :ivar session_image_id: The image ID of the session request. + :vartype session_image_id: str + :ivar verify_image_hash: The sha256 hash of the verify-image in the request. 
+ :vartype verify_image_hash: str """ id: int = rest_field() @@ -1389,6 +1900,10 @@ class LivenessSessionAuditEntry(_model_base.Model): server calculated digest, then the message integrity between the client and service has been compromised and the result should not be trusted. For more information, see how to guides on how to leverage this value to secure your end-to-end solution. Required.""" + session_image_id: Optional[str] = rest_field(name="sessionImageId") + """The image ID of the session request.""" + verify_image_hash: Optional[str] = rest_field(name="verifyImageHash") + """The sha256 hash of the verify-image in the request.""" @overload def __init__( @@ -1402,6 +1917,8 @@ def __init__( request: "_models.AuditRequestInfo", response: "_models.AuditLivenessResponseInfo", digest: str, + session_image_id: Optional[str] = None, + verify_image_hash: Optional[str] = None, ): ... @overload @@ -1420,7 +1937,6 @@ class LivenessSessionItem(_model_base.Model): Readonly variables are only populated by the server, and will be ignored when sending a request. - All required parameters must be populated in order to send to server. :ivar id: The unique ID to reference this session. Required. :vartype id: str @@ -1479,7 +1995,6 @@ def __init__(self, *args: Any, **kwargs: Any) -> None: # pylint: disable=useles class LivenessWithVerifyImage(_model_base.Model): """The detail of face for verification. - All required parameters must be populated in order to send to server. :ivar face_rectangle: The face region where the comparison image's classification was made. Required. @@ -1517,7 +2032,6 @@ def __init__(self, *args: Any, **kwargs: Any) -> None: # pylint: disable=useles class LivenessWithVerifyOutputs(_model_base.Model): """The face verification output. - All required parameters must be populated in order to send to server. :ivar verify_image: The detail of face for verification. Required. :vartype verify_image: ~azure.ai.vision.face.models.LivenessWithVerifyImage @@ -1560,7 +2074,6 @@ class LivenessWithVerifySession(_model_base.Model): Readonly variables are only populated by the server, and will be ignored when sending a request. - All required parameters must be populated in order to send to server. :ivar id: The unique ID to reference this session. Required. :vartype id: str @@ -1631,7 +2144,6 @@ def __init__(self, *args: Any, **kwargs: Any) -> None: # pylint: disable=useles class MaskProperties(_model_base.Model): """Properties describing the presence of a mask on a given face. - All required parameters must be populated in order to send to server. :ivar nose_and_mouth_covered: A boolean value indicating whether nose and mouth are covered. Required. @@ -1669,7 +2181,6 @@ def __init__(self, *args: Any, **kwargs: Any) -> None: # pylint: disable=useles class NoiseProperties(_model_base.Model): """Properties describing noise level of the image. - All required parameters must be populated in order to send to server. :ivar noise_level: An enum value indicating level of noise. Required. Known values are: "low", "medium", and "high". @@ -1710,7 +2221,6 @@ def __init__(self, *args: Any, **kwargs: Any) -> None: # pylint: disable=useles class OcclusionProperties(_model_base.Model): """Properties describing occlusions on a given face. - All required parameters must be populated in order to send to server. :ivar forehead_occluded: A boolean value indicating whether forehead is occluded. Required. 
:vartype forehead_occluded: bool diff --git a/sdk/face/azure-ai-vision-face/azure/ai/vision/face/aio/_operations/__init__.py b/sdk/face/azure-ai-vision-face/azure/ai/vision/face/operations/__init__.py similarity index 84% rename from sdk/face/azure-ai-vision-face/azure/ai/vision/face/aio/_operations/__init__.py rename to sdk/face/azure-ai-vision-face/azure/ai/vision/face/operations/__init__.py index 366e660e06db..d69ac05180a1 100644 --- a/sdk/face/azure-ai-vision-face/azure/ai/vision/face/aio/_operations/__init__.py +++ b/sdk/face/azure-ai-vision-face/azure/ai/vision/face/operations/__init__.py @@ -6,6 +6,8 @@ # Changes may cause incorrect behavior and will be lost if the code is regenerated. # -------------------------------------------------------------------------- +from ._operations import LargeFaceListOperations +from ._operations import LargePersonGroupOperations from ._operations import FaceClientOperationsMixin from ._operations import FaceSessionClientOperationsMixin @@ -14,6 +16,8 @@ from ._patch import patch_sdk as _patch_sdk __all__ = [ + "LargeFaceListOperations", + "LargePersonGroupOperations", "FaceClientOperationsMixin", "FaceSessionClientOperationsMixin", ] diff --git a/sdk/face/azure-ai-vision-face/azure/ai/vision/face/operations/_operations.py b/sdk/face/azure-ai-vision-face/azure/ai/vision/face/operations/_operations.py new file mode 100644 index 000000000000..d67833568b43 --- /dev/null +++ b/sdk/face/azure-ai-vision-face/azure/ai/vision/face/operations/_operations.py @@ -0,0 +1,7267 @@ +# pylint: disable=too-many-lines +# coding=utf-8 +# -------------------------------------------------------------------------- +# Copyright (c) Microsoft Corporation. All rights reserved. +# Licensed under the MIT License. See License.txt in the project root for license information. +# Code generated by Microsoft (R) Python Code Generator. +# Changes may cause incorrect behavior and will be lost if the code is regenerated. +# -------------------------------------------------------------------------- +from io import IOBase +import json +import sys +from typing import Any, Callable, Dict, IO, Iterator, List, Optional, TypeVar, Union, cast, overload + +from azure.core.exceptions import ( + ClientAuthenticationError, + HttpResponseError, + ResourceExistsError, + ResourceNotFoundError, + ResourceNotModifiedError, + StreamClosedError, + StreamConsumedError, + map_error, +) +from azure.core.pipeline import PipelineResponse +from azure.core.polling import LROPoller, NoPolling, PollingMethod +from azure.core.polling.base_polling import LROBasePolling +from azure.core.rest import HttpRequest, HttpResponse +from azure.core.tracing.decorator import distributed_trace +from azure.core.utils import case_insensitive_dict + +from .. 
import _model_base, models as _models +from .._model_base import SdkJSONEncoder, _deserialize +from .._serialization import Serializer +from .._validation import api_version_validation +from .._vendor import FaceClientMixinABC, FaceSessionClientMixinABC, prepare_multipart_form_data + +if sys.version_info >= (3, 9): + from collections.abc import MutableMapping +else: + from typing import MutableMapping # type: ignore +JSON = MutableMapping[str, Any] # pylint: disable=unsubscriptable-object +_Unset: Any = object() +T = TypeVar("T") +ClsType = Optional[Callable[[PipelineResponse[HttpRequest, HttpResponse], T, Dict[str, Any]], Any]] + +_SERIALIZER = Serializer() +_SERIALIZER.client_side_validation = False + + +def build_large_face_list_create_request(large_face_list_id: str, **kwargs: Any) -> HttpRequest: + _headers = case_insensitive_dict(kwargs.pop("headers", {}) or {}) + + content_type: Optional[str] = kwargs.pop("content_type", _headers.pop("Content-Type", None)) + accept = _headers.pop("Accept", "application/json") + + # Construct URL + _url = "/largefacelists/{largeFaceListId}" + path_format_arguments = { + "largeFaceListId": _SERIALIZER.url("large_face_list_id", large_face_list_id, "str"), + } + + _url: str = _url.format(**path_format_arguments) # type: ignore + + # Construct headers + if content_type is not None: + _headers["Content-Type"] = _SERIALIZER.header("content_type", content_type, "str") + _headers["Accept"] = _SERIALIZER.header("accept", accept, "str") + + return HttpRequest(method="PUT", url=_url, headers=_headers, **kwargs) + + +def build_large_face_list_delete_request(large_face_list_id: str, **kwargs: Any) -> HttpRequest: + _headers = case_insensitive_dict(kwargs.pop("headers", {}) or {}) + + accept = _headers.pop("Accept", "application/json") + + # Construct URL + _url = "/largefacelists/{largeFaceListId}" + path_format_arguments = { + "largeFaceListId": _SERIALIZER.url("large_face_list_id", large_face_list_id, "str"), + } + + _url: str = _url.format(**path_format_arguments) # type: ignore + + # Construct headers + _headers["Accept"] = _SERIALIZER.header("accept", accept, "str") + + return HttpRequest(method="DELETE", url=_url, headers=_headers, **kwargs) + + +def build_large_face_list_get_request( + large_face_list_id: str, *, return_recognition_model: Optional[bool] = None, **kwargs: Any +) -> HttpRequest: + _headers = case_insensitive_dict(kwargs.pop("headers", {}) or {}) + _params = case_insensitive_dict(kwargs.pop("params", {}) or {}) + + accept = _headers.pop("Accept", "application/json") + + # Construct URL + _url = "/largefacelists/{largeFaceListId}" + path_format_arguments = { + "largeFaceListId": _SERIALIZER.url("large_face_list_id", large_face_list_id, "str"), + } + + _url: str = _url.format(**path_format_arguments) # type: ignore + + # Construct parameters + if return_recognition_model is not None: + _params["returnRecognitionModel"] = _SERIALIZER.query( + "return_recognition_model", return_recognition_model, "bool" + ) + + # Construct headers + _headers["Accept"] = _SERIALIZER.header("accept", accept, "str") + + return HttpRequest(method="GET", url=_url, params=_params, headers=_headers, **kwargs) + + +def build_large_face_list_update_request(large_face_list_id: str, **kwargs: Any) -> HttpRequest: + _headers = case_insensitive_dict(kwargs.pop("headers", {}) or {}) + + content_type: Optional[str] = kwargs.pop("content_type", _headers.pop("Content-Type", None)) + accept = _headers.pop("Accept", "application/json") + + # Construct URL + _url = 
"/largefacelists/{largeFaceListId}" + path_format_arguments = { + "largeFaceListId": _SERIALIZER.url("large_face_list_id", large_face_list_id, "str"), + } + + _url: str = _url.format(**path_format_arguments) # type: ignore + + # Construct headers + if content_type is not None: + _headers["Content-Type"] = _SERIALIZER.header("content_type", content_type, "str") + _headers["Accept"] = _SERIALIZER.header("accept", accept, "str") + + return HttpRequest(method="PATCH", url=_url, headers=_headers, **kwargs) + + +def build_large_face_list_get_large_face_lists_request( # pylint: disable=name-too-long + *, + start: Optional[str] = None, + top: Optional[int] = None, + return_recognition_model: Optional[bool] = None, + **kwargs: Any, +) -> HttpRequest: + _headers = case_insensitive_dict(kwargs.pop("headers", {}) or {}) + _params = case_insensitive_dict(kwargs.pop("params", {}) or {}) + + accept = _headers.pop("Accept", "application/json") + + # Construct URL + _url = "/largefacelists" + + # Construct parameters + if start is not None: + _params["start"] = _SERIALIZER.query("start", start, "str") + if top is not None: + _params["top"] = _SERIALIZER.query("top", top, "int") + if return_recognition_model is not None: + _params["returnRecognitionModel"] = _SERIALIZER.query( + "return_recognition_model", return_recognition_model, "bool" + ) + + # Construct headers + _headers["Accept"] = _SERIALIZER.header("accept", accept, "str") + + return HttpRequest(method="GET", url=_url, params=_params, headers=_headers, **kwargs) + + +def build_large_face_list_get_training_status_request( # pylint: disable=name-too-long + large_face_list_id: str, **kwargs: Any +) -> HttpRequest: + _headers = case_insensitive_dict(kwargs.pop("headers", {}) or {}) + + accept = _headers.pop("Accept", "application/json") + + # Construct URL + _url = "/largefacelists/{largeFaceListId}/training" + path_format_arguments = { + "largeFaceListId": _SERIALIZER.url("large_face_list_id", large_face_list_id, "str"), + } + + _url: str = _url.format(**path_format_arguments) # type: ignore + + # Construct headers + _headers["Accept"] = _SERIALIZER.header("accept", accept, "str") + + return HttpRequest(method="GET", url=_url, headers=_headers, **kwargs) + + +def build_large_face_list_train_request(large_face_list_id: str, **kwargs: Any) -> HttpRequest: + _headers = case_insensitive_dict(kwargs.pop("headers", {}) or {}) + + accept = _headers.pop("Accept", "application/json") + + # Construct URL + _url = "/largefacelists/{largeFaceListId}/train" + path_format_arguments = { + "largeFaceListId": _SERIALIZER.url("large_face_list_id", large_face_list_id, "str"), + } + + _url: str = _url.format(**path_format_arguments) # type: ignore + + # Construct headers + _headers["Accept"] = _SERIALIZER.header("accept", accept, "str") + + return HttpRequest(method="POST", url=_url, headers=_headers, **kwargs) + + +def build_large_face_list_add_face_from_url_request( # pylint: disable=name-too-long + large_face_list_id: str, + *, + target_face: Optional[List[int]] = None, + detection_model: Optional[Union[str, _models.FaceDetectionModel]] = None, + user_data: Optional[str] = None, + **kwargs: Any, +) -> HttpRequest: + _headers = case_insensitive_dict(kwargs.pop("headers", {}) or {}) + _params = case_insensitive_dict(kwargs.pop("params", {}) or {}) + + content_type: Optional[str] = kwargs.pop("content_type", _headers.pop("Content-Type", None)) + accept = _headers.pop("Accept", "application/json") + + # Construct URL + _url = 
"/largefacelists/{largeFaceListId}/persistedfaces" + path_format_arguments = { + "largeFaceListId": _SERIALIZER.url("large_face_list_id", large_face_list_id, "str"), + } + + _url: str = _url.format(**path_format_arguments) # type: ignore + + # Construct parameters + if target_face is not None: + _params["targetFace"] = _SERIALIZER.query("target_face", target_face, "[int]", div=",") + if detection_model is not None: + _params["detectionModel"] = _SERIALIZER.query("detection_model", detection_model, "str") + if user_data is not None: + _params["userData"] = _SERIALIZER.query("user_data", user_data, "str") + + # Construct headers + if content_type is not None: + _headers["Content-Type"] = _SERIALIZER.header("content_type", content_type, "str") + _headers["Accept"] = _SERIALIZER.header("accept", accept, "str") + + return HttpRequest(method="POST", url=_url, params=_params, headers=_headers, **kwargs) + + +def build_large_face_list_add_face_request( + large_face_list_id: str, + *, + target_face: Optional[List[int]] = None, + detection_model: Optional[Union[str, _models.FaceDetectionModel]] = None, + user_data: Optional[str] = None, + **kwargs: Any, +) -> HttpRequest: + _headers = case_insensitive_dict(kwargs.pop("headers", {}) or {}) + _params = case_insensitive_dict(kwargs.pop("params", {}) or {}) + + content_type: str = kwargs.pop("content_type") + accept = _headers.pop("Accept", "application/json") + + # Construct URL + _url = "/largefacelists/{largeFaceListId}/persistedfaces" + path_format_arguments = { + "largeFaceListId": _SERIALIZER.url("large_face_list_id", large_face_list_id, "str"), + } + + _url: str = _url.format(**path_format_arguments) # type: ignore + + # Construct parameters + if target_face is not None: + _params["targetFace"] = _SERIALIZER.query("target_face", target_face, "[int]", div=",") + if detection_model is not None: + _params["detectionModel"] = _SERIALIZER.query("detection_model", detection_model, "str") + if user_data is not None: + _params["userData"] = _SERIALIZER.query("user_data", user_data, "str") + + # Construct headers + _headers["content-type"] = _SERIALIZER.header("content_type", content_type, "str") + _headers["Accept"] = _SERIALIZER.header("accept", accept, "str") + + return HttpRequest(method="POST", url=_url, params=_params, headers=_headers, **kwargs) + + +def build_large_face_list_delete_face_request( # pylint: disable=name-too-long + large_face_list_id: str, persisted_face_id: str, **kwargs: Any +) -> HttpRequest: + _headers = case_insensitive_dict(kwargs.pop("headers", {}) or {}) + + accept = _headers.pop("Accept", "application/json") + + # Construct URL + _url = "/largefacelists/{largeFaceListId}/persistedfaces/{persistedFaceId}" + path_format_arguments = { + "largeFaceListId": _SERIALIZER.url("large_face_list_id", large_face_list_id, "str"), + "persistedFaceId": _SERIALIZER.url("persisted_face_id", persisted_face_id, "str"), + } + + _url: str = _url.format(**path_format_arguments) # type: ignore + + # Construct headers + _headers["Accept"] = _SERIALIZER.header("accept", accept, "str") + + return HttpRequest(method="DELETE", url=_url, headers=_headers, **kwargs) + + +def build_large_face_list_get_face_request( + large_face_list_id: str, persisted_face_id: str, **kwargs: Any +) -> HttpRequest: + _headers = case_insensitive_dict(kwargs.pop("headers", {}) or {}) + + accept = _headers.pop("Accept", "application/json") + + # Construct URL + _url = "/largefacelists/{largeFaceListId}/persistedfaces/{persistedFaceId}" + path_format_arguments = { + 
"largeFaceListId": _SERIALIZER.url("large_face_list_id", large_face_list_id, "str"), + "persistedFaceId": _SERIALIZER.url("persisted_face_id", persisted_face_id, "str"), + } + + _url: str = _url.format(**path_format_arguments) # type: ignore + + # Construct headers + _headers["Accept"] = _SERIALIZER.header("accept", accept, "str") + + return HttpRequest(method="GET", url=_url, headers=_headers, **kwargs) + + +def build_large_face_list_update_face_request( # pylint: disable=name-too-long + large_face_list_id: str, persisted_face_id: str, **kwargs: Any +) -> HttpRequest: + _headers = case_insensitive_dict(kwargs.pop("headers", {}) or {}) + + content_type: Optional[str] = kwargs.pop("content_type", _headers.pop("Content-Type", None)) + accept = _headers.pop("Accept", "application/json") + + # Construct URL + _url = "/largefacelists/{largeFaceListId}/persistedfaces/{persistedFaceId}" + path_format_arguments = { + "largeFaceListId": _SERIALIZER.url("large_face_list_id", large_face_list_id, "str"), + "persistedFaceId": _SERIALIZER.url("persisted_face_id", persisted_face_id, "str"), + } + + _url: str = _url.format(**path_format_arguments) # type: ignore + + # Construct headers + if content_type is not None: + _headers["Content-Type"] = _SERIALIZER.header("content_type", content_type, "str") + _headers["Accept"] = _SERIALIZER.header("accept", accept, "str") + + return HttpRequest(method="PATCH", url=_url, headers=_headers, **kwargs) + + +def build_large_face_list_get_faces_request( + large_face_list_id: str, *, start: Optional[str] = None, top: Optional[int] = None, **kwargs: Any +) -> HttpRequest: + _headers = case_insensitive_dict(kwargs.pop("headers", {}) or {}) + _params = case_insensitive_dict(kwargs.pop("params", {}) or {}) + + accept = _headers.pop("Accept", "application/json") + + # Construct URL + _url = "/largefacelists/{largeFaceListId}/persistedfaces" + path_format_arguments = { + "largeFaceListId": _SERIALIZER.url("large_face_list_id", large_face_list_id, "str"), + } + + _url: str = _url.format(**path_format_arguments) # type: ignore + + # Construct parameters + if start is not None: + _params["start"] = _SERIALIZER.query("start", start, "str") + if top is not None: + _params["top"] = _SERIALIZER.query("top", top, "int") + + # Construct headers + _headers["Accept"] = _SERIALIZER.header("accept", accept, "str") + + return HttpRequest(method="GET", url=_url, params=_params, headers=_headers, **kwargs) + + +def build_large_person_group_create_request(large_person_group_id: str, **kwargs: Any) -> HttpRequest: + _headers = case_insensitive_dict(kwargs.pop("headers", {}) or {}) + + content_type: Optional[str] = kwargs.pop("content_type", _headers.pop("Content-Type", None)) + accept = _headers.pop("Accept", "application/json") + + # Construct URL + _url = "/largepersongroups/{largePersonGroupId}" + path_format_arguments = { + "largePersonGroupId": _SERIALIZER.url("large_person_group_id", large_person_group_id, "str"), + } + + _url: str = _url.format(**path_format_arguments) # type: ignore + + # Construct headers + if content_type is not None: + _headers["Content-Type"] = _SERIALIZER.header("content_type", content_type, "str") + _headers["Accept"] = _SERIALIZER.header("accept", accept, "str") + + return HttpRequest(method="PUT", url=_url, headers=_headers, **kwargs) + + +def build_large_person_group_delete_request(large_person_group_id: str, **kwargs: Any) -> HttpRequest: + _headers = case_insensitive_dict(kwargs.pop("headers", {}) or {}) + + accept = _headers.pop("Accept", 
"application/json") + + # Construct URL + _url = "/largepersongroups/{largePersonGroupId}" + path_format_arguments = { + "largePersonGroupId": _SERIALIZER.url("large_person_group_id", large_person_group_id, "str"), + } + + _url: str = _url.format(**path_format_arguments) # type: ignore + + # Construct headers + _headers["Accept"] = _SERIALIZER.header("accept", accept, "str") + + return HttpRequest(method="DELETE", url=_url, headers=_headers, **kwargs) + + +def build_large_person_group_get_request( + large_person_group_id: str, *, return_recognition_model: Optional[bool] = None, **kwargs: Any +) -> HttpRequest: + _headers = case_insensitive_dict(kwargs.pop("headers", {}) or {}) + _params = case_insensitive_dict(kwargs.pop("params", {}) or {}) + + accept = _headers.pop("Accept", "application/json") + + # Construct URL + _url = "/largepersongroups/{largePersonGroupId}" + path_format_arguments = { + "largePersonGroupId": _SERIALIZER.url("large_person_group_id", large_person_group_id, "str"), + } + + _url: str = _url.format(**path_format_arguments) # type: ignore + + # Construct parameters + if return_recognition_model is not None: + _params["returnRecognitionModel"] = _SERIALIZER.query( + "return_recognition_model", return_recognition_model, "bool" + ) + + # Construct headers + _headers["Accept"] = _SERIALIZER.header("accept", accept, "str") + + return HttpRequest(method="GET", url=_url, params=_params, headers=_headers, **kwargs) + + +def build_large_person_group_update_request(large_person_group_id: str, **kwargs: Any) -> HttpRequest: + _headers = case_insensitive_dict(kwargs.pop("headers", {}) or {}) + + content_type: Optional[str] = kwargs.pop("content_type", _headers.pop("Content-Type", None)) + accept = _headers.pop("Accept", "application/json") + + # Construct URL + _url = "/largepersongroups/{largePersonGroupId}" + path_format_arguments = { + "largePersonGroupId": _SERIALIZER.url("large_person_group_id", large_person_group_id, "str"), + } + + _url: str = _url.format(**path_format_arguments) # type: ignore + + # Construct headers + if content_type is not None: + _headers["Content-Type"] = _SERIALIZER.header("content_type", content_type, "str") + _headers["Accept"] = _SERIALIZER.header("accept", accept, "str") + + return HttpRequest(method="PATCH", url=_url, headers=_headers, **kwargs) + + +def build_large_person_group_get_large_person_groups_request( # pylint: disable=name-too-long + *, + start: Optional[str] = None, + top: Optional[int] = None, + return_recognition_model: Optional[bool] = None, + **kwargs: Any, +) -> HttpRequest: + _headers = case_insensitive_dict(kwargs.pop("headers", {}) or {}) + _params = case_insensitive_dict(kwargs.pop("params", {}) or {}) + + accept = _headers.pop("Accept", "application/json") + + # Construct URL + _url = "/largepersongroups" + + # Construct parameters + if start is not None: + _params["start"] = _SERIALIZER.query("start", start, "str") + if top is not None: + _params["top"] = _SERIALIZER.query("top", top, "int") + if return_recognition_model is not None: + _params["returnRecognitionModel"] = _SERIALIZER.query( + "return_recognition_model", return_recognition_model, "bool" + ) + + # Construct headers + _headers["Accept"] = _SERIALIZER.header("accept", accept, "str") + + return HttpRequest(method="GET", url=_url, params=_params, headers=_headers, **kwargs) + + +def build_large_person_group_get_training_status_request( # pylint: disable=name-too-long + large_person_group_id: str, **kwargs: Any +) -> HttpRequest: + _headers = 
case_insensitive_dict(kwargs.pop("headers", {}) or {}) + + accept = _headers.pop("Accept", "application/json") + + # Construct URL + _url = "/largepersongroups/{largePersonGroupId}/training" + path_format_arguments = { + "largePersonGroupId": _SERIALIZER.url("large_person_group_id", large_person_group_id, "str"), + } + + _url: str = _url.format(**path_format_arguments) # type: ignore + + # Construct headers + _headers["Accept"] = _SERIALIZER.header("accept", accept, "str") + + return HttpRequest(method="GET", url=_url, headers=_headers, **kwargs) + + +def build_large_person_group_train_request(large_person_group_id: str, **kwargs: Any) -> HttpRequest: + _headers = case_insensitive_dict(kwargs.pop("headers", {}) or {}) + + accept = _headers.pop("Accept", "application/json") + + # Construct URL + _url = "/largepersongroups/{largePersonGroupId}/train" + path_format_arguments = { + "largePersonGroupId": _SERIALIZER.url("large_person_group_id", large_person_group_id, "str"), + } + + _url: str = _url.format(**path_format_arguments) # type: ignore + + # Construct headers + _headers["Accept"] = _SERIALIZER.header("accept", accept, "str") + + return HttpRequest(method="POST", url=_url, headers=_headers, **kwargs) + + +def build_large_person_group_create_person_request( # pylint: disable=name-too-long + large_person_group_id: str, **kwargs: Any +) -> HttpRequest: + _headers = case_insensitive_dict(kwargs.pop("headers", {}) or {}) + + content_type: Optional[str] = kwargs.pop("content_type", _headers.pop("Content-Type", None)) + accept = _headers.pop("Accept", "application/json") + + # Construct URL + _url = "/largepersongroups/{largePersonGroupId}/persons" + path_format_arguments = { + "largePersonGroupId": _SERIALIZER.url("large_person_group_id", large_person_group_id, "str"), + } + + _url: str = _url.format(**path_format_arguments) # type: ignore + + # Construct headers + if content_type is not None: + _headers["Content-Type"] = _SERIALIZER.header("content_type", content_type, "str") + _headers["Accept"] = _SERIALIZER.header("accept", accept, "str") + + return HttpRequest(method="POST", url=_url, headers=_headers, **kwargs) + + +def build_large_person_group_delete_person_request( # pylint: disable=name-too-long + large_person_group_id: str, person_id: str, **kwargs: Any +) -> HttpRequest: + _headers = case_insensitive_dict(kwargs.pop("headers", {}) or {}) + + accept = _headers.pop("Accept", "application/json") + + # Construct URL + _url = "/largepersongroups/{largePersonGroupId}/persons/{personId}" + path_format_arguments = { + "largePersonGroupId": _SERIALIZER.url("large_person_group_id", large_person_group_id, "str"), + "personId": _SERIALIZER.url("person_id", person_id, "str"), + } + + _url: str = _url.format(**path_format_arguments) # type: ignore + + # Construct headers + _headers["Accept"] = _SERIALIZER.header("accept", accept, "str") + + return HttpRequest(method="DELETE", url=_url, headers=_headers, **kwargs) + + +def build_large_person_group_get_person_request( # pylint: disable=name-too-long + large_person_group_id: str, person_id: str, **kwargs: Any +) -> HttpRequest: + _headers = case_insensitive_dict(kwargs.pop("headers", {}) or {}) + + accept = _headers.pop("Accept", "application/json") + + # Construct URL + _url = "/largepersongroups/{largePersonGroupId}/persons/{personId}" + path_format_arguments = { + "largePersonGroupId": _SERIALIZER.url("large_person_group_id", large_person_group_id, "str"), + "personId": _SERIALIZER.url("person_id", person_id, "str"), + } + + _url: str = 
_url.format(**path_format_arguments) # type: ignore + + # Construct headers + _headers["Accept"] = _SERIALIZER.header("accept", accept, "str") + + return HttpRequest(method="GET", url=_url, headers=_headers, **kwargs) + + +def build_large_person_group_update_person_request( # pylint: disable=name-too-long + large_person_group_id: str, person_id: str, **kwargs: Any +) -> HttpRequest: + _headers = case_insensitive_dict(kwargs.pop("headers", {}) or {}) + + content_type: Optional[str] = kwargs.pop("content_type", _headers.pop("Content-Type", None)) + accept = _headers.pop("Accept", "application/json") + + # Construct URL + _url = "/largepersongroups/{largePersonGroupId}/persons/{personId}" + path_format_arguments = { + "largePersonGroupId": _SERIALIZER.url("large_person_group_id", large_person_group_id, "str"), + "personId": _SERIALIZER.url("person_id", person_id, "str"), + } + + _url: str = _url.format(**path_format_arguments) # type: ignore + + # Construct headers + if content_type is not None: + _headers["Content-Type"] = _SERIALIZER.header("content_type", content_type, "str") + _headers["Accept"] = _SERIALIZER.header("accept", accept, "str") + + return HttpRequest(method="PATCH", url=_url, headers=_headers, **kwargs) + + +def build_large_person_group_get_persons_request( # pylint: disable=name-too-long + large_person_group_id: str, *, start: Optional[str] = None, top: Optional[int] = None, **kwargs: Any +) -> HttpRequest: + _headers = case_insensitive_dict(kwargs.pop("headers", {}) or {}) + _params = case_insensitive_dict(kwargs.pop("params", {}) or {}) + + accept = _headers.pop("Accept", "application/json") + + # Construct URL + _url = "/largepersongroups/{largePersonGroupId}/persons" + path_format_arguments = { + "largePersonGroupId": _SERIALIZER.url("large_person_group_id", large_person_group_id, "str"), + } + + _url: str = _url.format(**path_format_arguments) # type: ignore + + # Construct parameters + if start is not None: + _params["start"] = _SERIALIZER.query("start", start, "str") + if top is not None: + _params["top"] = _SERIALIZER.query("top", top, "int") + + # Construct headers + _headers["Accept"] = _SERIALIZER.header("accept", accept, "str") + + return HttpRequest(method="GET", url=_url, params=_params, headers=_headers, **kwargs) + + +def build_large_person_group_add_face_from_url_request( # pylint: disable=name-too-long + large_person_group_id: str, + person_id: str, + *, + target_face: Optional[List[int]] = None, + detection_model: Optional[Union[str, _models.FaceDetectionModel]] = None, + user_data: Optional[str] = None, + **kwargs: Any, +) -> HttpRequest: + _headers = case_insensitive_dict(kwargs.pop("headers", {}) or {}) + _params = case_insensitive_dict(kwargs.pop("params", {}) or {}) + + content_type: Optional[str] = kwargs.pop("content_type", _headers.pop("Content-Type", None)) + accept = _headers.pop("Accept", "application/json") + + # Construct URL + _url = "/largepersongroups/{largePersonGroupId}/persons/{personId}/persistedfaces" + path_format_arguments = { + "largePersonGroupId": _SERIALIZER.url("large_person_group_id", large_person_group_id, "str"), + "personId": _SERIALIZER.url("person_id", person_id, "str"), + } + + _url: str = _url.format(**path_format_arguments) # type: ignore + + # Construct parameters + if target_face is not None: + _params["targetFace"] = _SERIALIZER.query("target_face", target_face, "[int]", div=",") + if detection_model is not None: + _params["detectionModel"] = _SERIALIZER.query("detection_model", detection_model, "str") + if user_data 
is not None: + _params["userData"] = _SERIALIZER.query("user_data", user_data, "str") + + # Construct headers + if content_type is not None: + _headers["Content-Type"] = _SERIALIZER.header("content_type", content_type, "str") + _headers["Accept"] = _SERIALIZER.header("accept", accept, "str") + + return HttpRequest(method="POST", url=_url, params=_params, headers=_headers, **kwargs) + + +def build_large_person_group_add_face_request( # pylint: disable=name-too-long + large_person_group_id: str, + person_id: str, + *, + target_face: Optional[List[int]] = None, + detection_model: Optional[Union[str, _models.FaceDetectionModel]] = None, + user_data: Optional[str] = None, + **kwargs: Any, +) -> HttpRequest: + _headers = case_insensitive_dict(kwargs.pop("headers", {}) or {}) + _params = case_insensitive_dict(kwargs.pop("params", {}) or {}) + + content_type: str = kwargs.pop("content_type") + accept = _headers.pop("Accept", "application/json") + + # Construct URL + _url = "/largepersongroups/{largePersonGroupId}/persons/{personId}/persistedfaces" + path_format_arguments = { + "largePersonGroupId": _SERIALIZER.url("large_person_group_id", large_person_group_id, "str"), + "personId": _SERIALIZER.url("person_id", person_id, "str"), + } + + _url: str = _url.format(**path_format_arguments) # type: ignore + + # Construct parameters + if target_face is not None: + _params["targetFace"] = _SERIALIZER.query("target_face", target_face, "[int]", div=",") + if detection_model is not None: + _params["detectionModel"] = _SERIALIZER.query("detection_model", detection_model, "str") + if user_data is not None: + _params["userData"] = _SERIALIZER.query("user_data", user_data, "str") + + # Construct headers + _headers["content-type"] = _SERIALIZER.header("content_type", content_type, "str") + _headers["Accept"] = _SERIALIZER.header("accept", accept, "str") + + return HttpRequest(method="POST", url=_url, params=_params, headers=_headers, **kwargs) + + +def build_large_person_group_delete_face_request( # pylint: disable=name-too-long + large_person_group_id: str, person_id: str, persisted_face_id: str, **kwargs: Any +) -> HttpRequest: + _headers = case_insensitive_dict(kwargs.pop("headers", {}) or {}) + + accept = _headers.pop("Accept", "application/json") + + # Construct URL + _url = "/largepersongroups/{largePersonGroupId}/persons/{personId}/persistedfaces/{persistedFaceId}" + path_format_arguments = { + "largePersonGroupId": _SERIALIZER.url("large_person_group_id", large_person_group_id, "str"), + "personId": _SERIALIZER.url("person_id", person_id, "str"), + "persistedFaceId": _SERIALIZER.url("persisted_face_id", persisted_face_id, "str"), + } + + _url: str = _url.format(**path_format_arguments) # type: ignore + + # Construct headers + _headers["Accept"] = _SERIALIZER.header("accept", accept, "str") + + return HttpRequest(method="DELETE", url=_url, headers=_headers, **kwargs) + + +def build_large_person_group_get_face_request( # pylint: disable=name-too-long + large_person_group_id: str, person_id: str, persisted_face_id: str, **kwargs: Any +) -> HttpRequest: + _headers = case_insensitive_dict(kwargs.pop("headers", {}) or {}) + + accept = _headers.pop("Accept", "application/json") + + # Construct URL + _url = "/largepersongroups/{largePersonGroupId}/persons/{personId}/persistedfaces/{persistedFaceId}" + path_format_arguments = { + "largePersonGroupId": _SERIALIZER.url("large_person_group_id", large_person_group_id, "str"), + "personId": _SERIALIZER.url("person_id", person_id, "str"), + "persistedFaceId": 
_SERIALIZER.url("persisted_face_id", persisted_face_id, "str"), + } + + _url: str = _url.format(**path_format_arguments) # type: ignore + + # Construct headers + _headers["Accept"] = _SERIALIZER.header("accept", accept, "str") + + return HttpRequest(method="GET", url=_url, headers=_headers, **kwargs) + + +def build_large_person_group_update_face_request( # pylint: disable=name-too-long + large_person_group_id: str, person_id: str, persisted_face_id: str, **kwargs: Any +) -> HttpRequest: + _headers = case_insensitive_dict(kwargs.pop("headers", {}) or {}) + + content_type: Optional[str] = kwargs.pop("content_type", _headers.pop("Content-Type", None)) + accept = _headers.pop("Accept", "application/json") + + # Construct URL + _url = "/largepersongroups/{largePersonGroupId}/persons/{personId}/persistedfaces/{persistedFaceId}" + path_format_arguments = { + "largePersonGroupId": _SERIALIZER.url("large_person_group_id", large_person_group_id, "str"), + "personId": _SERIALIZER.url("person_id", person_id, "str"), + "persistedFaceId": _SERIALIZER.url("persisted_face_id", persisted_face_id, "str"), + } + + _url: str = _url.format(**path_format_arguments) # type: ignore + + # Construct headers + if content_type is not None: + _headers["Content-Type"] = _SERIALIZER.header("content_type", content_type, "str") + _headers["Accept"] = _SERIALIZER.header("accept", accept, "str") + + return HttpRequest(method="PATCH", url=_url, headers=_headers, **kwargs) + + +def build_face_detect_from_url_request( + *, + detection_model: Optional[Union[str, _models.FaceDetectionModel]] = None, + recognition_model: Optional[Union[str, _models.FaceRecognitionModel]] = None, + return_face_id: Optional[bool] = None, + return_face_attributes: Optional[List[Union[str, _models.FaceAttributeType]]] = None, + return_face_landmarks: Optional[bool] = None, + return_recognition_model: Optional[bool] = None, + face_id_time_to_live: Optional[int] = None, + **kwargs: Any, +) -> HttpRequest: + _headers = case_insensitive_dict(kwargs.pop("headers", {}) or {}) + _params = case_insensitive_dict(kwargs.pop("params", {}) or {}) + + content_type: Optional[str] = kwargs.pop("content_type", _headers.pop("content-type", None)) + accept = _headers.pop("Accept", "application/json") + + # Construct URL + _url = "/detect" + + # Construct parameters + if detection_model is not None: + _params["detectionModel"] = _SERIALIZER.query("detection_model", detection_model, "str") + if recognition_model is not None: + _params["recognitionModel"] = _SERIALIZER.query("recognition_model", recognition_model, "str") + if return_face_id is not None: + _params["returnFaceId"] = _SERIALIZER.query("return_face_id", return_face_id, "bool") + if return_face_attributes is not None: + _params["returnFaceAttributes"] = _SERIALIZER.query( + "return_face_attributes", return_face_attributes, "[str]", div="," + ) + if return_face_landmarks is not None: + _params["returnFaceLandmarks"] = _SERIALIZER.query("return_face_landmarks", return_face_landmarks, "bool") + if return_recognition_model is not None: + _params["returnRecognitionModel"] = _SERIALIZER.query( + "return_recognition_model", return_recognition_model, "bool" + ) + if face_id_time_to_live is not None: + _params["faceIdTimeToLive"] = _SERIALIZER.query("face_id_time_to_live", face_id_time_to_live, "int") + + # Construct headers + if content_type is not None: + _headers["content-type"] = _SERIALIZER.header("content_type", content_type, "str") + _headers["Accept"] = _SERIALIZER.header("accept", accept, "str") + + return 
HttpRequest(method="POST", url=_url, params=_params, headers=_headers, **kwargs) + + +def build_face_detect_request( + *, + detection_model: Optional[Union[str, _models.FaceDetectionModel]] = None, + recognition_model: Optional[Union[str, _models.FaceRecognitionModel]] = None, + return_face_id: Optional[bool] = None, + return_face_attributes: Optional[List[Union[str, _models.FaceAttributeType]]] = None, + return_face_landmarks: Optional[bool] = None, + return_recognition_model: Optional[bool] = None, + face_id_time_to_live: Optional[int] = None, + **kwargs: Any, +) -> HttpRequest: + _headers = case_insensitive_dict(kwargs.pop("headers", {}) or {}) + _params = case_insensitive_dict(kwargs.pop("params", {}) or {}) + + content_type: str = kwargs.pop("content_type") + accept = _headers.pop("Accept", "application/json") + + # Construct URL + _url = "/detect" + + # Construct parameters + if detection_model is not None: + _params["detectionModel"] = _SERIALIZER.query("detection_model", detection_model, "str") + if recognition_model is not None: + _params["recognitionModel"] = _SERIALIZER.query("recognition_model", recognition_model, "str") + if return_face_id is not None: + _params["returnFaceId"] = _SERIALIZER.query("return_face_id", return_face_id, "bool") + if return_face_attributes is not None: + _params["returnFaceAttributes"] = _SERIALIZER.query( + "return_face_attributes", return_face_attributes, "[str]", div="," + ) + if return_face_landmarks is not None: + _params["returnFaceLandmarks"] = _SERIALIZER.query("return_face_landmarks", return_face_landmarks, "bool") + if return_recognition_model is not None: + _params["returnRecognitionModel"] = _SERIALIZER.query( + "return_recognition_model", return_recognition_model, "bool" + ) + if face_id_time_to_live is not None: + _params["faceIdTimeToLive"] = _SERIALIZER.query("face_id_time_to_live", face_id_time_to_live, "int") + + # Construct headers + _headers["content-type"] = _SERIALIZER.header("content_type", content_type, "str") + _headers["Accept"] = _SERIALIZER.header("accept", accept, "str") + + return HttpRequest(method="POST", url=_url, params=_params, headers=_headers, **kwargs) + + +def build_face_find_similar_request(**kwargs: Any) -> HttpRequest: + _headers = case_insensitive_dict(kwargs.pop("headers", {}) or {}) + + content_type: Optional[str] = kwargs.pop("content_type", _headers.pop("Content-Type", None)) + accept = _headers.pop("Accept", "application/json") + + # Construct URL + _url = "/findsimilars" + + # Construct headers + if content_type is not None: + _headers["Content-Type"] = _SERIALIZER.header("content_type", content_type, "str") + _headers["Accept"] = _SERIALIZER.header("accept", accept, "str") + + return HttpRequest(method="POST", url=_url, headers=_headers, **kwargs) + + +def build_face_verify_face_to_face_request(**kwargs: Any) -> HttpRequest: + _headers = case_insensitive_dict(kwargs.pop("headers", {}) or {}) + + content_type: Optional[str] = kwargs.pop("content_type", _headers.pop("Content-Type", None)) + accept = _headers.pop("Accept", "application/json") + + # Construct URL + _url = "/verify" + + # Construct headers + if content_type is not None: + _headers["Content-Type"] = _SERIALIZER.header("content_type", content_type, "str") + _headers["Accept"] = _SERIALIZER.header("accept", accept, "str") + + return HttpRequest(method="POST", url=_url, headers=_headers, **kwargs) + + +def build_face_group_request(**kwargs: Any) -> HttpRequest: + _headers = case_insensitive_dict(kwargs.pop("headers", {}) or {}) + + 
content_type: Optional[str] = kwargs.pop("content_type", _headers.pop("Content-Type", None)) + accept = _headers.pop("Accept", "application/json") + + # Construct URL + _url = "/group" + + # Construct headers + if content_type is not None: + _headers["Content-Type"] = _SERIALIZER.header("content_type", content_type, "str") + _headers["Accept"] = _SERIALIZER.header("accept", accept, "str") + + return HttpRequest(method="POST", url=_url, headers=_headers, **kwargs) + + +def build_face_find_similar_from_large_face_list_request(**kwargs: Any) -> HttpRequest: # pylint: disable=name-too-long + _headers = case_insensitive_dict(kwargs.pop("headers", {}) or {}) + + content_type: Optional[str] = kwargs.pop("content_type", _headers.pop("Content-Type", None)) + accept = _headers.pop("Accept", "application/json") + + # Construct URL + _url = "/findsimilars" + + # Construct headers + if content_type is not None: + _headers["Content-Type"] = _SERIALIZER.header("content_type", content_type, "str") + _headers["Accept"] = _SERIALIZER.header("accept", accept, "str") + + return HttpRequest(method="POST", url=_url, headers=_headers, **kwargs) + + +def build_face_identify_from_large_person_group_request(**kwargs: Any) -> HttpRequest: # pylint: disable=name-too-long + _headers = case_insensitive_dict(kwargs.pop("headers", {}) or {}) + + content_type: Optional[str] = kwargs.pop("content_type", _headers.pop("Content-Type", None)) + accept = _headers.pop("Accept", "application/json") + + # Construct URL + _url = "/identify" + + # Construct headers + if content_type is not None: + _headers["Content-Type"] = _SERIALIZER.header("content_type", content_type, "str") + _headers["Accept"] = _SERIALIZER.header("accept", accept, "str") + + return HttpRequest(method="POST", url=_url, headers=_headers, **kwargs) + + +def build_face_verify_from_large_person_group_request(**kwargs: Any) -> HttpRequest: # pylint: disable=name-too-long + _headers = case_insensitive_dict(kwargs.pop("headers", {}) or {}) + + content_type: Optional[str] = kwargs.pop("content_type", _headers.pop("Content-Type", None)) + accept = _headers.pop("Accept", "application/json") + + # Construct URL + _url = "/verify" + + # Construct headers + if content_type is not None: + _headers["Content-Type"] = _SERIALIZER.header("content_type", content_type, "str") + _headers["Accept"] = _SERIALIZER.header("accept", accept, "str") + + return HttpRequest(method="POST", url=_url, headers=_headers, **kwargs) + + +def build_face_session_create_liveness_session_request(**kwargs: Any) -> HttpRequest: # pylint: disable=name-too-long + _headers = case_insensitive_dict(kwargs.pop("headers", {}) or {}) + + content_type: Optional[str] = kwargs.pop("content_type", _headers.pop("Content-Type", None)) + accept = _headers.pop("Accept", "application/json") + + # Construct URL + _url = "/detectLiveness/singleModal/sessions" + + # Construct headers + if content_type is not None: + _headers["Content-Type"] = _SERIALIZER.header("content_type", content_type, "str") + _headers["Accept"] = _SERIALIZER.header("accept", accept, "str") + + return HttpRequest(method="POST", url=_url, headers=_headers, **kwargs) + + +def build_face_session_delete_liveness_session_request( # pylint: disable=name-too-long + session_id: str, **kwargs: Any +) -> HttpRequest: + _headers = case_insensitive_dict(kwargs.pop("headers", {}) or {}) + + accept = _headers.pop("Accept", "application/json") + + # Construct URL + _url = "/detectLiveness/singleModal/sessions/{sessionId}" + path_format_arguments = { + "sessionId": 
_SERIALIZER.url("session_id", session_id, "str"), + } + + _url: str = _url.format(**path_format_arguments) # type: ignore + + # Construct headers + _headers["Accept"] = _SERIALIZER.header("accept", accept, "str") + + return HttpRequest(method="DELETE", url=_url, headers=_headers, **kwargs) + + +def build_face_session_get_liveness_session_result_request( # pylint: disable=name-too-long + session_id: str, **kwargs: Any +) -> HttpRequest: + _headers = case_insensitive_dict(kwargs.pop("headers", {}) or {}) + + accept = _headers.pop("Accept", "application/json") + + # Construct URL + _url = "/detectLiveness/singleModal/sessions/{sessionId}" + path_format_arguments = { + "sessionId": _SERIALIZER.url("session_id", session_id, "str"), + } + + _url: str = _url.format(**path_format_arguments) # type: ignore + + # Construct headers + _headers["Accept"] = _SERIALIZER.header("accept", accept, "str") + + return HttpRequest(method="GET", url=_url, headers=_headers, **kwargs) + + +def build_face_session_get_liveness_sessions_request( # pylint: disable=name-too-long + *, start: Optional[str] = None, top: Optional[int] = None, **kwargs: Any +) -> HttpRequest: + _headers = case_insensitive_dict(kwargs.pop("headers", {}) or {}) + _params = case_insensitive_dict(kwargs.pop("params", {}) or {}) + + accept = _headers.pop("Accept", "application/json") + + # Construct URL + _url = "/detectLiveness/singleModal/sessions" + + # Construct parameters + if start is not None: + _params["start"] = _SERIALIZER.query("start", start, "str") + if top is not None: + _params["top"] = _SERIALIZER.query("top", top, "int") + + # Construct headers + _headers["Accept"] = _SERIALIZER.header("accept", accept, "str") + + return HttpRequest(method="GET", url=_url, params=_params, headers=_headers, **kwargs) + + +def build_face_session_get_liveness_session_audit_entries_request( # pylint: disable=name-too-long + session_id: str, *, start: Optional[str] = None, top: Optional[int] = None, **kwargs: Any +) -> HttpRequest: + _headers = case_insensitive_dict(kwargs.pop("headers", {}) or {}) + _params = case_insensitive_dict(kwargs.pop("params", {}) or {}) + + accept = _headers.pop("Accept", "application/json") + + # Construct URL + _url = "/detectLiveness/singleModal/sessions/{sessionId}/audit" + path_format_arguments = { + "sessionId": _SERIALIZER.url("session_id", session_id, "str"), + } + + _url: str = _url.format(**path_format_arguments) # type: ignore + + # Construct parameters + if start is not None: + _params["start"] = _SERIALIZER.query("start", start, "str") + if top is not None: + _params["top"] = _SERIALIZER.query("top", top, "int") + + # Construct headers + _headers["Accept"] = _SERIALIZER.header("accept", accept, "str") + + return HttpRequest(method="GET", url=_url, params=_params, headers=_headers, **kwargs) + + +def build_face_session_create_liveness_with_verify_session_request( # pylint: disable=name-too-long + **kwargs: Any, +) -> HttpRequest: + _headers = case_insensitive_dict(kwargs.pop("headers", {}) or {}) + + content_type: Optional[str] = kwargs.pop("content_type", _headers.pop("Content-Type", None)) + accept = _headers.pop("Accept", "application/json") + + # Construct URL + _url = "/detectLivenessWithVerify/singleModal/sessions" + + # Construct headers + if content_type is not None: + _headers["Content-Type"] = _SERIALIZER.header("content_type", content_type, "str") + _headers["Accept"] = _SERIALIZER.header("accept", accept, "str") + + return HttpRequest(method="POST", url=_url, headers=_headers, **kwargs) + + +def 
build_face_session_create_liveness_with_verify_session_with_verify_image_request( # pylint: disable=name-too-long + **kwargs: Any, +) -> HttpRequest: + _headers = case_insensitive_dict(kwargs.pop("headers", {}) or {}) + + accept = _headers.pop("Accept", "application/json") + + # Construct URL + _url = "/detectLivenessWithVerify/singleModal/sessions" + + # Construct headers + _headers["Accept"] = _SERIALIZER.header("accept", accept, "str") + + return HttpRequest(method="POST", url=_url, headers=_headers, **kwargs) + + +def build_face_session_delete_liveness_with_verify_session_request( # pylint: disable=name-too-long + session_id: str, **kwargs: Any +) -> HttpRequest: + _headers = case_insensitive_dict(kwargs.pop("headers", {}) or {}) + + accept = _headers.pop("Accept", "application/json") + + # Construct URL + _url = "/detectLivenessWithVerify/singleModal/sessions/{sessionId}" + path_format_arguments = { + "sessionId": _SERIALIZER.url("session_id", session_id, "str"), + } + + _url: str = _url.format(**path_format_arguments) # type: ignore + + # Construct headers + _headers["Accept"] = _SERIALIZER.header("accept", accept, "str") + + return HttpRequest(method="DELETE", url=_url, headers=_headers, **kwargs) + + +def build_face_session_get_liveness_with_verify_session_result_request( # pylint: disable=name-too-long + session_id: str, **kwargs: Any +) -> HttpRequest: + _headers = case_insensitive_dict(kwargs.pop("headers", {}) or {}) + + accept = _headers.pop("Accept", "application/json") + + # Construct URL + _url = "/detectLivenessWithVerify/singleModal/sessions/{sessionId}" + path_format_arguments = { + "sessionId": _SERIALIZER.url("session_id", session_id, "str"), + } + + _url: str = _url.format(**path_format_arguments) # type: ignore + + # Construct headers + _headers["Accept"] = _SERIALIZER.header("accept", accept, "str") + + return HttpRequest(method="GET", url=_url, headers=_headers, **kwargs) + + +def build_face_session_get_liveness_with_verify_sessions_request( # pylint: disable=name-too-long + *, start: Optional[str] = None, top: Optional[int] = None, **kwargs: Any +) -> HttpRequest: + _headers = case_insensitive_dict(kwargs.pop("headers", {}) or {}) + _params = case_insensitive_dict(kwargs.pop("params", {}) or {}) + + accept = _headers.pop("Accept", "application/json") + + # Construct URL + _url = "/detectLivenessWithVerify/singleModal/sessions" + + # Construct parameters + if start is not None: + _params["start"] = _SERIALIZER.query("start", start, "str") + if top is not None: + _params["top"] = _SERIALIZER.query("top", top, "int") + + # Construct headers + _headers["Accept"] = _SERIALIZER.header("accept", accept, "str") + + return HttpRequest(method="GET", url=_url, params=_params, headers=_headers, **kwargs) + + +def build_face_session_get_liveness_with_verify_session_audit_entries_request( # pylint: disable=name-too-long + session_id: str, *, start: Optional[str] = None, top: Optional[int] = None, **kwargs: Any +) -> HttpRequest: + _headers = case_insensitive_dict(kwargs.pop("headers", {}) or {}) + _params = case_insensitive_dict(kwargs.pop("params", {}) or {}) + + accept = _headers.pop("Accept", "application/json") + + # Construct URL + _url = "/detectLivenessWithVerify/singleModal/sessions/{sessionId}/audit" + path_format_arguments = { + "sessionId": _SERIALIZER.url("session_id", session_id, "str"), + } + + _url: str = _url.format(**path_format_arguments) # type: ignore + + # Construct parameters + if start is not None: + _params["start"] = _SERIALIZER.query("start", start, 
"str") + if top is not None: + _params["top"] = _SERIALIZER.query("top", top, "int") + + # Construct headers + _headers["Accept"] = _SERIALIZER.header("accept", accept, "str") + + return HttpRequest(method="GET", url=_url, params=_params, headers=_headers, **kwargs) + + +def build_face_session_detect_from_session_image_request( # pylint: disable=name-too-long + *, + detection_model: Optional[Union[str, _models.FaceDetectionModel]] = None, + recognition_model: Optional[Union[str, _models.FaceRecognitionModel]] = None, + return_face_id: Optional[bool] = None, + return_face_attributes: Optional[List[Union[str, _models.FaceAttributeType]]] = None, + return_face_landmarks: Optional[bool] = None, + return_recognition_model: Optional[bool] = None, + face_id_time_to_live: Optional[int] = None, + **kwargs: Any, +) -> HttpRequest: + _headers = case_insensitive_dict(kwargs.pop("headers", {}) or {}) + _params = case_insensitive_dict(kwargs.pop("params", {}) or {}) + + content_type: Optional[str] = kwargs.pop("content_type", _headers.pop("content-type", None)) + accept = _headers.pop("Accept", "application/json") + + # Construct URL + _url = "/detect" + + # Construct parameters + if detection_model is not None: + _params["detectionModel"] = _SERIALIZER.query("detection_model", detection_model, "str") + if recognition_model is not None: + _params["recognitionModel"] = _SERIALIZER.query("recognition_model", recognition_model, "str") + if return_face_id is not None: + _params["returnFaceId"] = _SERIALIZER.query("return_face_id", return_face_id, "bool") + if return_face_attributes is not None: + _params["returnFaceAttributes"] = _SERIALIZER.query( + "return_face_attributes", return_face_attributes, "[str]", div="," + ) + if return_face_landmarks is not None: + _params["returnFaceLandmarks"] = _SERIALIZER.query("return_face_landmarks", return_face_landmarks, "bool") + if return_recognition_model is not None: + _params["returnRecognitionModel"] = _SERIALIZER.query( + "return_recognition_model", return_recognition_model, "bool" + ) + if face_id_time_to_live is not None: + _params["faceIdTimeToLive"] = _SERIALIZER.query("face_id_time_to_live", face_id_time_to_live, "int") + + # Construct headers + if content_type is not None: + _headers["content-type"] = _SERIALIZER.header("content_type", content_type, "str") + _headers["Accept"] = _SERIALIZER.header("accept", accept, "str") + + return HttpRequest(method="POST", url=_url, params=_params, headers=_headers, **kwargs) + + +def build_face_session_get_session_image_request( # pylint: disable=name-too-long + session_image_id: str, **kwargs: Any +) -> HttpRequest: + _headers = case_insensitive_dict(kwargs.pop("headers", {}) or {}) + + accept = _headers.pop("Accept", "application/octet-stream") + + # Construct URL + _url = "/session/sessionImages/{sessionImageId}" + path_format_arguments = { + "sessionImageId": _SERIALIZER.url("session_image_id", session_image_id, "str"), + } + + _url: str = _url.format(**path_format_arguments) # type: ignore + + # Construct headers + _headers["Accept"] = _SERIALIZER.header("accept", accept, "str") + + return HttpRequest(method="GET", url=_url, headers=_headers, **kwargs) + + +class LargeFaceListOperations: + """ + .. warning:: + **DO NOT** instantiate this class directly. + + Instead, you should access the following operations through + :class:`~azure.ai.vision.face.FaceAdministrationClient`'s + :attr:`large_face_list` attribute. 
+ """ + + def __init__(self, *args, **kwargs): + input_args = list(args) + self._client = input_args.pop(0) if input_args else kwargs.pop("client") + self._config = input_args.pop(0) if input_args else kwargs.pop("config") + self._serialize = input_args.pop(0) if input_args else kwargs.pop("serializer") + self._deserialize = input_args.pop(0) if input_args else kwargs.pop("deserializer") + + @overload + def create( + self, large_face_list_id: str, body: JSON, *, content_type: str = "application/json", **kwargs: Any + ) -> None: + """Create an empty Large Face List with user-specified largeFaceListId, name, an optional userData + and recognitionModel. + + Please refer to + https://learn.microsoft.com/rest/api/face/face-list-operations/create-large-face-list for more + details. + + :param large_face_list_id: Valid character is letter in lower case or digit or '-' or '_', + maximum length is 64. Required. + :type large_face_list_id: str + :param body: Required. + :type body: JSON + :keyword content_type: Body Parameter content-type. Content type parameter for JSON body. + Default value is "application/json". + :paramtype content_type: str + :return: None + :rtype: None + :raises ~azure.core.exceptions.HttpResponseError: + """ + + @overload + def create( + self, + large_face_list_id: str, + *, + name: str, + content_type: str = "application/json", + user_data: Optional[str] = None, + recognition_model: Optional[Union[str, _models.FaceRecognitionModel]] = None, + **kwargs: Any, + ) -> None: + """Create an empty Large Face List with user-specified largeFaceListId, name, an optional userData + and recognitionModel. + + Please refer to + https://learn.microsoft.com/rest/api/face/face-list-operations/create-large-face-list for more + details. + + :param large_face_list_id: Valid character is letter in lower case or digit or '-' or '_', + maximum length is 64. Required. + :type large_face_list_id: str + :keyword name: User defined name, maximum length is 128. Required. + :paramtype name: str + :keyword content_type: Body Parameter content-type. Content type parameter for JSON body. + Default value is "application/json". + :paramtype content_type: str + :keyword user_data: Optional user defined data. Length should not exceed 16K. Default value is + None. + :paramtype user_data: str + :keyword recognition_model: The 'recognitionModel' associated with this face list. Supported + 'recognitionModel' values include 'recognition_01', 'recognition_02, 'recognition_03', and + 'recognition_04'. The default value is 'recognition_01'. 'recognition_04' is recommended since + its accuracy is improved on faces wearing masks compared with 'recognition_03', and its overall + accuracy is improved compared with 'recognition_01' and 'recognition_02'. Known values are: + "recognition_01", "recognition_02", "recognition_03", and "recognition_04". Default value is + None. + :paramtype recognition_model: str or ~azure.ai.vision.face.models.FaceRecognitionModel + :return: None + :rtype: None + :raises ~azure.core.exceptions.HttpResponseError: + """ + + @overload + def create( + self, large_face_list_id: str, body: IO[bytes], *, content_type: str = "application/json", **kwargs: Any + ) -> None: + """Create an empty Large Face List with user-specified largeFaceListId, name, an optional userData + and recognitionModel. + + Please refer to + https://learn.microsoft.com/rest/api/face/face-list-operations/create-large-face-list for more + details. 
+ + :param large_face_list_id: Valid character is letter in lower case or digit or '-' or '_', + maximum length is 64. Required. + :type large_face_list_id: str + :param body: Required. + :type body: IO[bytes] + :keyword content_type: Body Parameter content-type. Content type parameter for binary body. + Default value is "application/json". + :paramtype content_type: str + :return: None + :rtype: None + :raises ~azure.core.exceptions.HttpResponseError: + """ + + @distributed_trace + def create( # pylint: disable=inconsistent-return-statements + self, + large_face_list_id: str, + body: Union[JSON, IO[bytes]] = _Unset, + *, + name: str = _Unset, + user_data: Optional[str] = None, + recognition_model: Optional[Union[str, _models.FaceRecognitionModel]] = None, + **kwargs: Any, + ) -> None: + """Create an empty Large Face List with user-specified largeFaceListId, name, an optional userData + and recognitionModel. + + Please refer to + https://learn.microsoft.com/rest/api/face/face-list-operations/create-large-face-list for more + details. + + :param large_face_list_id: Valid character is letter in lower case or digit or '-' or '_', + maximum length is 64. Required. + :type large_face_list_id: str + :param body: Is either a JSON type or a IO[bytes] type. Required. + :type body: JSON or IO[bytes] + :keyword name: User defined name, maximum length is 128. Required. + :paramtype name: str + :keyword user_data: Optional user defined data. Length should not exceed 16K. Default value is + None. + :paramtype user_data: str + :keyword recognition_model: The 'recognitionModel' associated with this face list. Supported + 'recognitionModel' values include 'recognition_01', 'recognition_02', 'recognition_03', and + 'recognition_04'. The default value is 'recognition_01'. 'recognition_04' is recommended since + its accuracy is improved on faces wearing masks compared with 'recognition_03', and its overall + accuracy is improved compared with 'recognition_01' and 'recognition_02'. Known values are: + "recognition_01", "recognition_02", "recognition_03", and "recognition_04". Default value is + None.
+ :paramtype recognition_model: str or ~azure.ai.vision.face.models.FaceRecognitionModel + :return: None + :rtype: None + :raises ~azure.core.exceptions.HttpResponseError: + """ + error_map: MutableMapping = { + 401: ClientAuthenticationError, + 404: ResourceNotFoundError, + 409: ResourceExistsError, + 304: ResourceNotModifiedError, + } + error_map.update(kwargs.pop("error_map", {}) or {}) + + _headers = case_insensitive_dict(kwargs.pop("headers", {}) or {}) + _params = kwargs.pop("params", {}) or {} + + content_type: Optional[str] = kwargs.pop("content_type", _headers.pop("Content-Type", None)) + cls: ClsType[None] = kwargs.pop("cls", None) + + if body is _Unset: + if name is _Unset: + raise TypeError("missing required argument: name") + body = {"name": name, "recognitionModel": recognition_model, "userData": user_data} + body = {k: v for k, v in body.items() if v is not None} + content_type = content_type or "application/json" + _content = None + if isinstance(body, (IOBase, bytes)): + _content = body + else: + _content = json.dumps(body, cls=SdkJSONEncoder, exclude_readonly=True) # type: ignore + + _request = build_large_face_list_create_request( + large_face_list_id=large_face_list_id, + content_type=content_type, + content=_content, + headers=_headers, + params=_params, + ) + path_format_arguments = { + "endpoint": self._serialize.url("self._config.endpoint", self._config.endpoint, "str", skip_quote=True), + "apiVersion": self._serialize.url("self._config.api_version", self._config.api_version, "str"), + } + _request.url = self._client.format_url(_request.url, **path_format_arguments) + + _stream = False + pipeline_response: PipelineResponse = self._client._pipeline.run( # pylint: disable=protected-access + _request, stream=_stream, **kwargs + ) + + response = pipeline_response.http_response + + if response.status_code not in [200]: + map_error(status_code=response.status_code, response=response, error_map=error_map) + error = _deserialize(_models.FaceErrorResponse, response.json()) + raise HttpResponseError(response=response, model=error) + + if cls: + return cls(pipeline_response, None, {}) # type: ignore + + @distributed_trace + def delete(self, large_face_list_id: str, **kwargs: Any) -> None: # pylint: disable=inconsistent-return-statements + """Delete a specified Large Face List. + + Please refer to + https://learn.microsoft.com/rest/api/face/face-list-operations/delete-large-face-list for more + details. + + :param large_face_list_id: Valid character is letter in lower case or digit or '-' or '_', + maximum length is 64. Required.
+ :type large_face_list_id: str + :return: None + :rtype: None + :raises ~azure.core.exceptions.HttpResponseError: + """ + error_map: MutableMapping = { + 401: ClientAuthenticationError, + 404: ResourceNotFoundError, + 409: ResourceExistsError, + 304: ResourceNotModifiedError, + } + error_map.update(kwargs.pop("error_map", {}) or {}) + + _headers = kwargs.pop("headers", {}) or {} + _params = kwargs.pop("params", {}) or {} + + cls: ClsType[None] = kwargs.pop("cls", None) + + _request = build_large_face_list_delete_request( + large_face_list_id=large_face_list_id, + headers=_headers, + params=_params, + ) + path_format_arguments = { + "endpoint": self._serialize.url("self._config.endpoint", self._config.endpoint, "str", skip_quote=True), + "apiVersion": self._serialize.url("self._config.api_version", self._config.api_version, "str"), + } + _request.url = self._client.format_url(_request.url, **path_format_arguments) + + _stream = False + pipeline_response: PipelineResponse = self._client._pipeline.run( # pylint: disable=protected-access + _request, stream=_stream, **kwargs + ) + + response = pipeline_response.http_response + + if response.status_code not in [200]: + map_error(status_code=response.status_code, response=response, error_map=error_map) + error = _deserialize(_models.FaceErrorResponse, response.json()) + raise HttpResponseError(response=response, model=error) + + if cls: + return cls(pipeline_response, None, {}) # type: ignore + + @distributed_trace + def get( + self, large_face_list_id: str, *, return_recognition_model: Optional[bool] = None, **kwargs: Any + ) -> _models.LargeFaceList: + """Please refer to + https://learn.microsoft.com/rest/api/face/face-list-operations/get-large-face-list for more + details. + + :param large_face_list_id: Valid character is letter in lower case or digit or '-' or '_', + maximum length is 64. Required. + :type large_face_list_id: str + :keyword return_recognition_model: Return 'recognitionModel' or not. The default value is + false. Default value is None. + :paramtype return_recognition_model: bool + :return: LargeFaceList. 
The LargeFaceList is compatible with MutableMapping + :rtype: ~azure.ai.vision.face.models.LargeFaceList + :raises ~azure.core.exceptions.HttpResponseError: + """ + error_map: MutableMapping = { + 401: ClientAuthenticationError, + 404: ResourceNotFoundError, + 409: ResourceExistsError, + 304: ResourceNotModifiedError, + } + error_map.update(kwargs.pop("error_map", {}) or {}) + + _headers = kwargs.pop("headers", {}) or {} + _params = kwargs.pop("params", {}) or {} + + cls: ClsType[_models.LargeFaceList] = kwargs.pop("cls", None) + + _request = build_large_face_list_get_request( + large_face_list_id=large_face_list_id, + return_recognition_model=return_recognition_model, + headers=_headers, + params=_params, + ) + path_format_arguments = { + "endpoint": self._serialize.url("self._config.endpoint", self._config.endpoint, "str", skip_quote=True), + "apiVersion": self._serialize.url("self._config.api_version", self._config.api_version, "str"), + } + _request.url = self._client.format_url(_request.url, **path_format_arguments) + + _stream = kwargs.pop("stream", False) + pipeline_response: PipelineResponse = self._client._pipeline.run( # pylint: disable=protected-access + _request, stream=_stream, **kwargs + ) + + response = pipeline_response.http_response + + if response.status_code not in [200]: + if _stream: + try: + response.read() # Load the body in memory and close the socket + except (StreamConsumedError, StreamClosedError): + pass + map_error(status_code=response.status_code, response=response, error_map=error_map) + error = _deserialize(_models.FaceErrorResponse, response.json()) + raise HttpResponseError(response=response, model=error) + + if _stream: + deserialized = response.iter_bytes() + else: + deserialized = _deserialize(_models.LargeFaceList, response.json()) + + if cls: + return cls(pipeline_response, deserialized, {}) # type: ignore + + return deserialized # type: ignore + + @overload + def update( + self, large_face_list_id: str, body: JSON, *, content_type: str = "application/json", **kwargs: Any + ) -> None: + """Please refer to + https://learn.microsoft.com/rest/api/face/face-list-operations/update-large-face-list for more + details. + + :param large_face_list_id: Valid character is letter in lower case or digit or '-' or '_', + maximum length is 64. Required. + :type large_face_list_id: str + :param body: Required. + :type body: JSON + :keyword content_type: Body Parameter content-type. Content type parameter for JSON body. + Default value is "application/json". + :paramtype content_type: str + :return: None + :rtype: None + :raises ~azure.core.exceptions.HttpResponseError: + """ + + @overload + def update( + self, + large_face_list_id: str, + *, + content_type: str = "application/json", + name: Optional[str] = None, + user_data: Optional[str] = None, + **kwargs: Any, + ) -> None: + """Please refer to + https://learn.microsoft.com/rest/api/face/face-list-operations/update-large-face-list for more + details. + + :param large_face_list_id: Valid character is letter in lower case or digit or '-' or '_', + maximum length is 64. Required. + :type large_face_list_id: str + :keyword content_type: Body Parameter content-type. Content type parameter for JSON body. + Default value is "application/json". + :paramtype content_type: str + :keyword name: User defined name, maximum length is 128. Default value is None. + :paramtype name: str + :keyword user_data: Optional user defined data. Length should not exceed 16K. Default value is + None. 
+ :paramtype user_data: str + :return: None + :rtype: None + :raises ~azure.core.exceptions.HttpResponseError: + """ + + @overload + def update( + self, large_face_list_id: str, body: IO[bytes], *, content_type: str = "application/json", **kwargs: Any + ) -> None: + """Please refer to + https://learn.microsoft.com/rest/api/face/face-list-operations/update-large-face-list for more + details. + + :param large_face_list_id: Valid character is letter in lower case or digit or '-' or '_', + maximum length is 64. Required. + :type large_face_list_id: str + :param body: Required. + :type body: IO[bytes] + :keyword content_type: Body Parameter content-type. Content type parameter for binary body. + Default value is "application/json". + :paramtype content_type: str + :return: None + :rtype: None + :raises ~azure.core.exceptions.HttpResponseError: + """ + + @distributed_trace + def update( # pylint: disable=inconsistent-return-statements + self, + large_face_list_id: str, + body: Union[JSON, IO[bytes]] = _Unset, + *, + name: Optional[str] = None, + user_data: Optional[str] = None, + **kwargs: Any, + ) -> None: + """Please refer to + https://learn.microsoft.com/rest/api/face/face-list-operations/update-large-face-list for more + details. + + :param large_face_list_id: Valid character is letter in lower case or digit or '-' or '_', + maximum length is 64. Required. + :type large_face_list_id: str + :param body: Is either a JSON type or a IO[bytes] type. Required. + :type body: JSON or IO[bytes] + :keyword name: User defined name, maximum length is 128. Default value is None. + :paramtype name: str + :keyword user_data: Optional user defined data. Length should not exceed 16K. Default value is + None. + :paramtype user_data: str + :return: None + :rtype: None + :raises ~azure.core.exceptions.HttpResponseError: + """ + error_map: MutableMapping = { + 401: ClientAuthenticationError, + 404: ResourceNotFoundError, + 409: ResourceExistsError, + 304: ResourceNotModifiedError, + } + error_map.update(kwargs.pop("error_map", {}) or {}) + + _headers = case_insensitive_dict(kwargs.pop("headers", {}) or {}) + _params = kwargs.pop("params", {}) or {} + + content_type: Optional[str] = kwargs.pop("content_type", _headers.pop("Content-Type", None)) + cls: ClsType[None] = kwargs.pop("cls", None) + + if body is _Unset: + body = {"name": name, "userData": user_data} + body = {k: v for k, v in body.items() if v is not None} + content_type = content_type or "application/json" + _content = None + if isinstance(body, (IOBase, bytes)): + _content = body + else: + _content = json.dumps(body, cls=SdkJSONEncoder, exclude_readonly=True) # type: ignore + + _request = build_large_face_list_update_request( + large_face_list_id=large_face_list_id, + content_type=content_type, + content=_content, + headers=_headers, + params=_params, + ) + path_format_arguments = { + "endpoint": self._serialize.url("self._config.endpoint", self._config.endpoint, "str", skip_quote=True), + "apiVersion": self._serialize.url("self._config.api_version", self._config.api_version, "str"), + } + _request.url = self._client.format_url(_request.url, **path_format_arguments) + + _stream = False + pipeline_response: PipelineResponse = self._client._pipeline.run( # pylint: disable=protected-access + _request, stream=_stream, **kwargs + ) + + response = pipeline_response.http_response + + if response.status_code not in [200]: + map_error(status_code=response.status_code, response=response, error_map=error_map) + error = _deserialize(_models.FaceErrorResponse, 
response.json()) + raise HttpResponseError(response=response, model=error) + + if cls: + return cls(pipeline_response, None, {}) # type: ignore + + @distributed_trace + def get_large_face_lists( + self, + *, + start: Optional[str] = None, + top: Optional[int] = None, + return_recognition_model: Optional[bool] = None, + **kwargs: Any, + ) -> List[_models.LargeFaceList]: + """List Large Face Lists' information of largeFaceListId, name, userData and recognitionModel. + + Please refer to + https://learn.microsoft.com/rest/api/face/face-list-operations/get-large-face-lists for more + details. + + :keyword start: List resources greater than the "start". It contains no more than 64 + characters. Default is empty. Default value is None. + :paramtype start: str + :keyword top: The number of items to list, ranging in [1, 1000]. Default is 1000. Default value + is None. + :paramtype top: int + :keyword return_recognition_model: Return 'recognitionModel' or not. The default value is + false. Default value is None. + :paramtype return_recognition_model: bool + :return: list of LargeFaceList + :rtype: list[~azure.ai.vision.face.models.LargeFaceList] + :raises ~azure.core.exceptions.HttpResponseError: + """ + error_map: MutableMapping = { + 401: ClientAuthenticationError, + 404: ResourceNotFoundError, + 409: ResourceExistsError, + 304: ResourceNotModifiedError, + } + error_map.update(kwargs.pop("error_map", {}) or {}) + + _headers = kwargs.pop("headers", {}) or {} + _params = kwargs.pop("params", {}) or {} + + cls: ClsType[List[_models.LargeFaceList]] = kwargs.pop("cls", None) + + _request = build_large_face_list_get_large_face_lists_request( + start=start, + top=top, + return_recognition_model=return_recognition_model, + headers=_headers, + params=_params, + ) + path_format_arguments = { + "endpoint": self._serialize.url("self._config.endpoint", self._config.endpoint, "str", skip_quote=True), + "apiVersion": self._serialize.url("self._config.api_version", self._config.api_version, "str"), + } + _request.url = self._client.format_url(_request.url, **path_format_arguments) + + _stream = kwargs.pop("stream", False) + pipeline_response: PipelineResponse = self._client._pipeline.run( # pylint: disable=protected-access + _request, stream=_stream, **kwargs + ) + + response = pipeline_response.http_response + + if response.status_code not in [200]: + if _stream: + try: + response.read() # Load the body in memory and close the socket + except (StreamConsumedError, StreamClosedError): + pass + map_error(status_code=response.status_code, response=response, error_map=error_map) + error = _deserialize(_models.FaceErrorResponse, response.json()) + raise HttpResponseError(response=response, model=error) + + if _stream: + deserialized = response.iter_bytes() + else: + deserialized = _deserialize(List[_models.LargeFaceList], response.json()) + + if cls: + return cls(pipeline_response, deserialized, {}) # type: ignore + + return deserialized # type: ignore + + @distributed_trace + def get_training_status(self, large_face_list_id: str, **kwargs: Any) -> _models.FaceTrainingResult: + """Please refer to + https://learn.microsoft.com/rest/api/face/face-list-operations/get-large-face-list-training-status + for more details. + + :param large_face_list_id: Valid character is letter in lower case or digit or '-' or '_', + maximum length is 64. Required. + :type large_face_list_id: str + :return: FaceTrainingResult. 
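The `update` and `get_large_face_lists` operations above cover metadata management for Large Face Lists. A minimal sketch of calling them through `FaceAdministrationClient` — assuming the operation group is exposed as the `large_face_list` attribute and that the client is importable from `azure.ai.vision.face`, which the surrounding docstrings and README suggest but this diff does not show directly:

```python
from azure.core.credentials import AzureKeyCredential
from azure.ai.vision.face import FaceAdministrationClient  # assumed import path

endpoint = "https://<your-resource>.cognitiveservices.azure.com"
key = "<your-key>"

with FaceAdministrationClient(endpoint=endpoint, credential=AzureKeyCredential(key)) as admin_client:
    # Update mutable metadata of an existing Large Face List; returns None on HTTP 200.
    admin_client.large_face_list.update(
        "my-face-list",
        name="My face list",
        user_data="updated by the metadata sketch",
    )

    # Enumerate up to `top` lists; recognitionModel is only populated when requested.
    face_lists = admin_client.large_face_list.get_large_face_lists(
        top=100, return_recognition_model=True
    )
    for face_list in face_lists:
        print(face_list.as_dict())  # models are MutableMapping-compatible, per the docstrings
```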
The FaceTrainingResult is compatible with MutableMapping + :rtype: ~azure.ai.vision.face.models.FaceTrainingResult + :raises ~azure.core.exceptions.HttpResponseError: + """ + error_map: MutableMapping = { + 401: ClientAuthenticationError, + 404: ResourceNotFoundError, + 409: ResourceExistsError, + 304: ResourceNotModifiedError, + } + error_map.update(kwargs.pop("error_map", {}) or {}) + + _headers = kwargs.pop("headers", {}) or {} + _params = kwargs.pop("params", {}) or {} + + cls: ClsType[_models.FaceTrainingResult] = kwargs.pop("cls", None) + + _request = build_large_face_list_get_training_status_request( + large_face_list_id=large_face_list_id, + headers=_headers, + params=_params, + ) + path_format_arguments = { + "endpoint": self._serialize.url("self._config.endpoint", self._config.endpoint, "str", skip_quote=True), + "apiVersion": self._serialize.url("self._config.api_version", self._config.api_version, "str"), + } + _request.url = self._client.format_url(_request.url, **path_format_arguments) + + _stream = kwargs.pop("stream", False) + pipeline_response: PipelineResponse = self._client._pipeline.run( # pylint: disable=protected-access + _request, stream=_stream, **kwargs + ) + + response = pipeline_response.http_response + + if response.status_code not in [200]: + if _stream: + try: + response.read() # Load the body in memory and close the socket + except (StreamConsumedError, StreamClosedError): + pass + map_error(status_code=response.status_code, response=response, error_map=error_map) + error = _deserialize(_models.FaceErrorResponse, response.json()) + raise HttpResponseError(response=response, model=error) + + if _stream: + deserialized = response.iter_bytes() + else: + deserialized = _deserialize(_models.FaceTrainingResult, response.json()) + + if cls: + return cls(pipeline_response, deserialized, {}) # type: ignore + + return deserialized # type: ignore + + def _train_initial(self, large_face_list_id: str, **kwargs: Any) -> Iterator[bytes]: + error_map: MutableMapping = { + 401: ClientAuthenticationError, + 404: ResourceNotFoundError, + 409: ResourceExistsError, + 304: ResourceNotModifiedError, + } + error_map.update(kwargs.pop("error_map", {}) or {}) + + _headers = kwargs.pop("headers", {}) or {} + _params = kwargs.pop("params", {}) or {} + + cls: ClsType[Iterator[bytes]] = kwargs.pop("cls", None) + + _request = build_large_face_list_train_request( + large_face_list_id=large_face_list_id, + headers=_headers, + params=_params, + ) + path_format_arguments = { + "endpoint": self._serialize.url("self._config.endpoint", self._config.endpoint, "str", skip_quote=True), + "apiVersion": self._serialize.url("self._config.api_version", self._config.api_version, "str"), + } + _request.url = self._client.format_url(_request.url, **path_format_arguments) + + _stream = True + pipeline_response: PipelineResponse = self._client._pipeline.run( # pylint: disable=protected-access + _request, stream=_stream, **kwargs + ) + + response = pipeline_response.http_response + + if response.status_code not in [202]: + try: + response.read() # Load the body in memory and close the socket + except (StreamConsumedError, StreamClosedError): + pass + map_error(status_code=response.status_code, response=response, error_map=error_map) + error = _deserialize(_models.FaceErrorResponse, response.json()) + raise HttpResponseError(response=response, model=error) + + response_headers = {} + response_headers["operation-Location"] = self._deserialize("str", response.headers.get("operation-Location")) + + deserialized = 
response.iter_bytes() + + if cls: + return cls(pipeline_response, deserialized, response_headers) # type: ignore + + return deserialized # type: ignore + + @distributed_trace + def begin_train(self, large_face_list_id: str, **kwargs: Any) -> LROPoller[None]: + """Submit a Large Face List training task. + + Please refer to + https://learn.microsoft.com/rest/api/face/face-list-operations/train-large-face-list for more + details. + + :param large_face_list_id: Valid character is letter in lower case or digit or '-' or '_', + maximum length is 64. Required. + :type large_face_list_id: str + :return: An instance of LROPoller that returns None + :rtype: ~azure.core.polling.LROPoller[None] + :raises ~azure.core.exceptions.HttpResponseError: + """ + _headers = kwargs.pop("headers", {}) or {} + _params = kwargs.pop("params", {}) or {} + + cls: ClsType[None] = kwargs.pop("cls", None) + polling: Union[bool, PollingMethod] = kwargs.pop("polling", True) + lro_delay = kwargs.pop("polling_interval", self._config.polling_interval) + cont_token: Optional[str] = kwargs.pop("continuation_token", None) + if cont_token is None: + raw_result = self._train_initial( + large_face_list_id=large_face_list_id, cls=lambda x, y, z: x, headers=_headers, params=_params, **kwargs + ) + raw_result.http_response.read() # type: ignore + kwargs.pop("error_map", None) + + def get_long_running_output(pipeline_response): # pylint: disable=inconsistent-return-statements + if cls: + return cls(pipeline_response, None, {}) # type: ignore + + path_format_arguments = { + "endpoint": self._serialize.url("self._config.endpoint", self._config.endpoint, "str", skip_quote=True), + "apiVersion": self._serialize.url("self._config.api_version", self._config.api_version, "str"), + } + + if polling is True: + polling_method: PollingMethod = cast( + PollingMethod, LROBasePolling(lro_delay, path_format_arguments=path_format_arguments, **kwargs) + ) + elif polling is False: + polling_method = cast(PollingMethod, NoPolling()) + else: + polling_method = polling + if cont_token: + return LROPoller[None].from_continuation_token( + polling_method=polling_method, + continuation_token=cont_token, + client=self._client, + deserialization_callback=get_long_running_output, + ) + return LROPoller[None](self._client, raw_result, get_long_running_output, polling_method) # type: ignore + + @overload + def add_face_from_url( + self, + large_face_list_id: str, + body: JSON, + *, + target_face: Optional[List[int]] = None, + detection_model: Optional[Union[str, _models.FaceDetectionModel]] = None, + user_data: Optional[str] = None, + content_type: str = "application/json", + **kwargs: Any, + ) -> _models.AddFaceResult: + """Add a face to a specified Large Face List, up to 1,000,000 faces. + + Please refer to + https://learn.microsoft.com/rest/api/face/face-list-operations/add-large-face-list-face-from-url + for more details. + + :param large_face_list_id: Valid character is letter in lower case or digit or '-' or '_', + maximum length is 64. Required. + :type large_face_list_id: str + :param body: Required. + :type body: JSON + :keyword target_face: A face rectangle to specify the target face to be added to a person, in + the format of 'targetFace=left,top,width,height'. Default value is None. + :paramtype target_face: list[int] + :keyword detection_model: The 'detectionModel' associated with the detected faceIds. Supported + 'detectionModel' values include 'detection_01', 'detection_02' and 'detection_03'. The default + value is 'detection_01'. 
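`begin_train` follows the long-running-operation pattern described earlier in the README: the initial request returns 202 with an `operation-Location` header, and the returned `LROPoller` polls until training finishes. A short sketch, reusing the hypothetical `admin_client` from the previous snippet:

```python
# Training must complete before find_similar_from_large_face_list can use the list.
poller = admin_client.large_face_list.begin_train("my-face-list")
poller.result()  # blocks until the service reports completion; raises HttpResponseError on failure
print(f"training status: {poller.status()}")
```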
Known values are: "detection_01", "detection_02", and "detection_03". + Default value is None. + :paramtype detection_model: str or ~azure.ai.vision.face.models.FaceDetectionModel + :keyword user_data: User-provided data attached to the face. The size limit is 1K. Default + value is None. + :paramtype user_data: str + :keyword content_type: Body Parameter content-type. Content type parameter for JSON body. + Default value is "application/json". + :paramtype content_type: str + :return: AddFaceResult. The AddFaceResult is compatible with MutableMapping + :rtype: ~azure.ai.vision.face.models.AddFaceResult + :raises ~azure.core.exceptions.HttpResponseError: + """ + + @overload + def add_face_from_url( + self, + large_face_list_id: str, + *, + url: str, + target_face: Optional[List[int]] = None, + detection_model: Optional[Union[str, _models.FaceDetectionModel]] = None, + user_data: Optional[str] = None, + content_type: str = "application/json", + **kwargs: Any, + ) -> _models.AddFaceResult: + """Add a face to a specified Large Face List, up to 1,000,000 faces. + + Please refer to + https://learn.microsoft.com/rest/api/face/face-list-operations/add-large-face-list-face-from-url + for more details. + + :param large_face_list_id: Valid character is letter in lower case or digit or '-' or '_', + maximum length is 64. Required. + :type large_face_list_id: str + :keyword url: URL of input image. Required. + :paramtype url: str + :keyword target_face: A face rectangle to specify the target face to be added to a person, in + the format of 'targetFace=left,top,width,height'. Default value is None. + :paramtype target_face: list[int] + :keyword detection_model: The 'detectionModel' associated with the detected faceIds. Supported + 'detectionModel' values include 'detection_01', 'detection_02' and 'detection_03'. The default + value is 'detection_01'. Known values are: "detection_01", "detection_02", and "detection_03". + Default value is None. + :paramtype detection_model: str or ~azure.ai.vision.face.models.FaceDetectionModel + :keyword user_data: User-provided data attached to the face. The size limit is 1K. Default + value is None. + :paramtype user_data: str + :keyword content_type: Body Parameter content-type. Content type parameter for JSON body. + Default value is "application/json". + :paramtype content_type: str + :return: AddFaceResult. The AddFaceResult is compatible with MutableMapping + :rtype: ~azure.ai.vision.face.models.AddFaceResult + :raises ~azure.core.exceptions.HttpResponseError: + """ + + @overload + def add_face_from_url( + self, + large_face_list_id: str, + body: IO[bytes], + *, + target_face: Optional[List[int]] = None, + detection_model: Optional[Union[str, _models.FaceDetectionModel]] = None, + user_data: Optional[str] = None, + content_type: str = "application/json", + **kwargs: Any, + ) -> _models.AddFaceResult: + """Add a face to a specified Large Face List, up to 1,000,000 faces. + + Please refer to + https://learn.microsoft.com/rest/api/face/face-list-operations/add-large-face-list-face-from-url + for more details. + + :param large_face_list_id: Valid character is letter in lower case or digit or '-' or '_', + maximum length is 64. Required. + :type large_face_list_id: str + :param body: Required. + :type body: IO[bytes] + :keyword target_face: A face rectangle to specify the target face to be added to a person, in + the format of 'targetFace=left,top,width,height'. Default value is None. 
+ :paramtype target_face: list[int] + :keyword detection_model: The 'detectionModel' associated with the detected faceIds. Supported + 'detectionModel' values include 'detection_01', 'detection_02' and 'detection_03'. The default + value is 'detection_01'. Known values are: "detection_01", "detection_02", and "detection_03". + Default value is None. + :paramtype detection_model: str or ~azure.ai.vision.face.models.FaceDetectionModel + :keyword user_data: User-provided data attached to the face. The size limit is 1K. Default + value is None. + :paramtype user_data: str + :keyword content_type: Body Parameter content-type. Content type parameter for binary body. + Default value is "application/json". + :paramtype content_type: str + :return: AddFaceResult. The AddFaceResult is compatible with MutableMapping + :rtype: ~azure.ai.vision.face.models.AddFaceResult + :raises ~azure.core.exceptions.HttpResponseError: + """ + + @distributed_trace + def add_face_from_url( + self, + large_face_list_id: str, + body: Union[JSON, IO[bytes]] = _Unset, + *, + url: str = _Unset, + target_face: Optional[List[int]] = None, + detection_model: Optional[Union[str, _models.FaceDetectionModel]] = None, + user_data: Optional[str] = None, + **kwargs: Any, + ) -> _models.AddFaceResult: + """Add a face to a specified Large Face List, up to 1,000,000 faces. + + Please refer to + https://learn.microsoft.com/rest/api/face/face-list-operations/add-large-face-list-face-from-url + for more details. + + :param large_face_list_id: Valid character is letter in lower case or digit or '-' or '_', + maximum length is 64. Required. + :type large_face_list_id: str + :param body: Is either a JSON type or a IO[bytes] type. Required. + :type body: JSON or IO[bytes] + :keyword url: URL of input image. Required. + :paramtype url: str + :keyword target_face: A face rectangle to specify the target face to be added to a person, in + the format of 'targetFace=left,top,width,height'. Default value is None. + :paramtype target_face: list[int] + :keyword detection_model: The 'detectionModel' associated with the detected faceIds. Supported + 'detectionModel' values include 'detection_01', 'detection_02' and 'detection_03'. The default + value is 'detection_01'. Known values are: "detection_01", "detection_02", and "detection_03". + Default value is None. + :paramtype detection_model: str or ~azure.ai.vision.face.models.FaceDetectionModel + :keyword user_data: User-provided data attached to the face. The size limit is 1K. Default + value is None. + :paramtype user_data: str + :return: AddFaceResult. 
The AddFaceResult is compatible with MutableMapping + :rtype: ~azure.ai.vision.face.models.AddFaceResult + :raises ~azure.core.exceptions.HttpResponseError: + """ + error_map: MutableMapping = { + 401: ClientAuthenticationError, + 404: ResourceNotFoundError, + 409: ResourceExistsError, + 304: ResourceNotModifiedError, + } + error_map.update(kwargs.pop("error_map", {}) or {}) + + _headers = case_insensitive_dict(kwargs.pop("headers", {}) or {}) + _params = kwargs.pop("params", {}) or {} + + content_type: Optional[str] = kwargs.pop("content_type", _headers.pop("Content-Type", None)) + cls: ClsType[_models.AddFaceResult] = kwargs.pop("cls", None) + + if body is _Unset: + if url is _Unset: + raise TypeError("missing required argument: url") + body = {"url": url} + body = {k: v for k, v in body.items() if v is not None} + content_type = content_type or "application/json" + _content = None + if isinstance(body, (IOBase, bytes)): + _content = body + else: + _content = json.dumps(body, cls=SdkJSONEncoder, exclude_readonly=True) # type: ignore + + _request = build_large_face_list_add_face_from_url_request( + large_face_list_id=large_face_list_id, + target_face=target_face, + detection_model=detection_model, + user_data=user_data, + content_type=content_type, + content=_content, + headers=_headers, + params=_params, + ) + path_format_arguments = { + "endpoint": self._serialize.url("self._config.endpoint", self._config.endpoint, "str", skip_quote=True), + "apiVersion": self._serialize.url("self._config.api_version", self._config.api_version, "str"), + } + _request.url = self._client.format_url(_request.url, **path_format_arguments) + + _stream = kwargs.pop("stream", False) + pipeline_response: PipelineResponse = self._client._pipeline.run( # pylint: disable=protected-access + _request, stream=_stream, **kwargs + ) + + response = pipeline_response.http_response + + if response.status_code not in [200]: + if _stream: + try: + response.read() # Load the body in memory and close the socket + except (StreamConsumedError, StreamClosedError): + pass + map_error(status_code=response.status_code, response=response, error_map=error_map) + error = _deserialize(_models.FaceErrorResponse, response.json()) + raise HttpResponseError(response=response, model=error) + + if _stream: + deserialized = response.iter_bytes() + else: + deserialized = _deserialize(_models.AddFaceResult, response.json()) + + if cls: + return cls(pipeline_response, deserialized, {}) # type: ignore + + return deserialized # type: ignore + + @distributed_trace + def add_face( + self, + large_face_list_id: str, + image_content: bytes, + *, + target_face: Optional[List[int]] = None, + detection_model: Optional[Union[str, _models.FaceDetectionModel]] = None, + user_data: Optional[str] = None, + **kwargs: Any, + ) -> _models.AddFaceResult: + """Add a face to a specified Large Face List, up to 1,000,000 faces. + + Please refer to + https://learn.microsoft.com/rest/api/face/face-list-operations/add-large-face-list-face for + more details. + + :param large_face_list_id: Valid character is letter in lower case or digit or '-' or '_', + maximum length is 64. Required. + :type large_face_list_id: str + :param image_content: The image to be analyzed. Required. + :type image_content: bytes + :keyword target_face: A face rectangle to specify the target face to be added to a person, in + the format of 'targetFace=left,top,width,height'. Default value is None. 
+ :paramtype target_face: list[int] + :keyword detection_model: The 'detectionModel' associated with the detected faceIds. Supported + 'detectionModel' values include 'detection_01', 'detection_02' and 'detection_03'. The default + value is 'detection_01'. Known values are: "detection_01", "detection_02", and "detection_03". + Default value is None. + :paramtype detection_model: str or ~azure.ai.vision.face.models.FaceDetectionModel + :keyword user_data: User-provided data attached to the face. The size limit is 1K. Default + value is None. + :paramtype user_data: str + :return: AddFaceResult. The AddFaceResult is compatible with MutableMapping + :rtype: ~azure.ai.vision.face.models.AddFaceResult + :raises ~azure.core.exceptions.HttpResponseError: + """ + error_map: MutableMapping = { + 401: ClientAuthenticationError, + 404: ResourceNotFoundError, + 409: ResourceExistsError, + 304: ResourceNotModifiedError, + } + error_map.update(kwargs.pop("error_map", {}) or {}) + + _headers = case_insensitive_dict(kwargs.pop("headers", {}) or {}) + _params = kwargs.pop("params", {}) or {} + + content_type: str = kwargs.pop("content_type", _headers.pop("content-type", "application/octet-stream")) + cls: ClsType[_models.AddFaceResult] = kwargs.pop("cls", None) + + _content = image_content + + _request = build_large_face_list_add_face_request( + large_face_list_id=large_face_list_id, + target_face=target_face, + detection_model=detection_model, + user_data=user_data, + content_type=content_type, + content=_content, + headers=_headers, + params=_params, + ) + path_format_arguments = { + "endpoint": self._serialize.url("self._config.endpoint", self._config.endpoint, "str", skip_quote=True), + "apiVersion": self._serialize.url("self._config.api_version", self._config.api_version, "str"), + } + _request.url = self._client.format_url(_request.url, **path_format_arguments) + + _stream = kwargs.pop("stream", False) + pipeline_response: PipelineResponse = self._client._pipeline.run( # pylint: disable=protected-access + _request, stream=_stream, **kwargs + ) + + response = pipeline_response.http_response + + if response.status_code not in [200]: + if _stream: + try: + response.read() # Load the body in memory and close the socket + except (StreamConsumedError, StreamClosedError): + pass + map_error(status_code=response.status_code, response=response, error_map=error_map) + error = _deserialize(_models.FaceErrorResponse, response.json()) + raise HttpResponseError(response=response, model=error) + + if _stream: + deserialized = response.iter_bytes() + else: + deserialized = _deserialize(_models.AddFaceResult, response.json()) + + if cls: + return cls(pipeline_response, deserialized, {}) # type: ignore + + return deserialized # type: ignore + + @distributed_trace + def delete_face( # pylint: disable=inconsistent-return-statements + self, large_face_list_id: str, persisted_face_id: str, **kwargs: Any + ) -> None: + """Please refer to + https://learn.microsoft.com/rest/api/face/face-list-operations/delete-large-face-list-face for + more details. + + :param large_face_list_id: Valid character is letter in lower case or digit or '-' or '_', + maximum length is 64. Required. + :type large_face_list_id: str + :param persisted_face_id: Face ID of the face. Required. 
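Both registration paths — `add_face_from_url` and the binary `add_face` — return an `AddFaceResult`. Below is a sketch of adding one face from a URL and one from local bytes; the URL and file name are placeholders, and the `persisted_face_id` attribute name is an assumption based on the REST payload's `persistedFaceId`:

```python
# Register a face from a publicly reachable image URL.
result = admin_client.large_face_list.add_face_from_url(
    "my-face-list",
    url="https://example.com/photo-of-person1.jpg",  # placeholder URL
    detection_model="detection_03",
    user_data="person1-frontal",
)
print(result.persisted_face_id)  # assumed attribute, mirrors REST 'persistedFaceId'

# Register a face from local image bytes instead.
with open("person1.jpg", "rb") as fd:
    result = admin_client.large_face_list.add_face(
        "my-face-list",
        fd.read(),
        detection_model="detection_03",
    )
```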
+ :type persisted_face_id: str + :return: None + :rtype: None + :raises ~azure.core.exceptions.HttpResponseError: + """ + error_map: MutableMapping = { + 401: ClientAuthenticationError, + 404: ResourceNotFoundError, + 409: ResourceExistsError, + 304: ResourceNotModifiedError, + } + error_map.update(kwargs.pop("error_map", {}) or {}) + + _headers = kwargs.pop("headers", {}) or {} + _params = kwargs.pop("params", {}) or {} + + cls: ClsType[None] = kwargs.pop("cls", None) + + _request = build_large_face_list_delete_face_request( + large_face_list_id=large_face_list_id, + persisted_face_id=persisted_face_id, + headers=_headers, + params=_params, + ) + path_format_arguments = { + "endpoint": self._serialize.url("self._config.endpoint", self._config.endpoint, "str", skip_quote=True), + "apiVersion": self._serialize.url("self._config.api_version", self._config.api_version, "str"), + } + _request.url = self._client.format_url(_request.url, **path_format_arguments) + + _stream = False + pipeline_response: PipelineResponse = self._client._pipeline.run( # pylint: disable=protected-access + _request, stream=_stream, **kwargs + ) + + response = pipeline_response.http_response + + if response.status_code not in [200]: + map_error(status_code=response.status_code, response=response, error_map=error_map) + error = _deserialize(_models.FaceErrorResponse, response.json()) + raise HttpResponseError(response=response, model=error) + + if cls: + return cls(pipeline_response, None, {}) # type: ignore + + @distributed_trace + def get_face(self, large_face_list_id: str, persisted_face_id: str, **kwargs: Any) -> _models.LargeFaceListFace: + """Please refer to + https://learn.microsoft.com/rest/api/face/face-list-operations/get-large-face-list-face for + more details. + + :param large_face_list_id: Valid character is letter in lower case or digit or '-' or '_', + maximum length is 64. Required. + :type large_face_list_id: str + :param persisted_face_id: Face ID of the face. Required. + :type persisted_face_id: str + :return: LargeFaceListFace. 
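Retrieving and deleting a persisted face are symmetric single-resource calls keyed by the `persistedFaceId` returned at add time. A brief sketch, continuing with the hypothetical `result` from the previous snippet:

```python
# Fetch the stored face record (persistedFaceId plus userData).
face = admin_client.large_face_list.get_face("my-face-list", result.persisted_face_id)
print(face.as_dict())

# Remove the face again; delete_face returns None and raises on error.
admin_client.large_face_list.delete_face("my-face-list", result.persisted_face_id)
```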
The LargeFaceListFace is compatible with MutableMapping + :rtype: ~azure.ai.vision.face.models.LargeFaceListFace + :raises ~azure.core.exceptions.HttpResponseError: + """ + error_map: MutableMapping = { + 401: ClientAuthenticationError, + 404: ResourceNotFoundError, + 409: ResourceExistsError, + 304: ResourceNotModifiedError, + } + error_map.update(kwargs.pop("error_map", {}) or {}) + + _headers = kwargs.pop("headers", {}) or {} + _params = kwargs.pop("params", {}) or {} + + cls: ClsType[_models.LargeFaceListFace] = kwargs.pop("cls", None) + + _request = build_large_face_list_get_face_request( + large_face_list_id=large_face_list_id, + persisted_face_id=persisted_face_id, + headers=_headers, + params=_params, + ) + path_format_arguments = { + "endpoint": self._serialize.url("self._config.endpoint", self._config.endpoint, "str", skip_quote=True), + "apiVersion": self._serialize.url("self._config.api_version", self._config.api_version, "str"), + } + _request.url = self._client.format_url(_request.url, **path_format_arguments) + + _stream = kwargs.pop("stream", False) + pipeline_response: PipelineResponse = self._client._pipeline.run( # pylint: disable=protected-access + _request, stream=_stream, **kwargs + ) + + response = pipeline_response.http_response + + if response.status_code not in [200]: + if _stream: + try: + response.read() # Load the body in memory and close the socket + except (StreamConsumedError, StreamClosedError): + pass + map_error(status_code=response.status_code, response=response, error_map=error_map) + error = _deserialize(_models.FaceErrorResponse, response.json()) + raise HttpResponseError(response=response, model=error) + + if _stream: + deserialized = response.iter_bytes() + else: + deserialized = _deserialize(_models.LargeFaceListFace, response.json()) + + if cls: + return cls(pipeline_response, deserialized, {}) # type: ignore + + return deserialized # type: ignore + + @overload + def update_face( + self, + large_face_list_id: str, + persisted_face_id: str, + body: JSON, + *, + content_type: str = "application/json", + **kwargs: Any, + ) -> None: + """Please refer to + https://learn.microsoft.com/rest/api/face/face-list-operations/update-large-face-list-face for + more details. + + :param large_face_list_id: Valid character is letter in lower case or digit or '-' or '_', + maximum length is 64. Required. + :type large_face_list_id: str + :param persisted_face_id: Face ID of the face. Required. + :type persisted_face_id: str + :param body: Required. + :type body: JSON + :keyword content_type: Body Parameter content-type. Content type parameter for JSON body. + Default value is "application/json". + :paramtype content_type: str + :return: None + :rtype: None + :raises ~azure.core.exceptions.HttpResponseError: + """ + + @overload + def update_face( + self, + large_face_list_id: str, + persisted_face_id: str, + *, + content_type: str = "application/json", + user_data: Optional[str] = None, + **kwargs: Any, + ) -> None: + """Please refer to + https://learn.microsoft.com/rest/api/face/face-list-operations/update-large-face-list-face for + more details. + + :param large_face_list_id: Valid character is letter in lower case or digit or '-' or '_', + maximum length is 64. Required. + :type large_face_list_id: str + :param persisted_face_id: Face ID of the face. Required. + :type persisted_face_id: str + :keyword content_type: Body Parameter content-type. Content type parameter for JSON body. + Default value is "application/json". 
+ :paramtype content_type: str + :keyword user_data: User-provided data attached to the face. The length limit is 1K. Default + value is None. + :paramtype user_data: str + :return: None + :rtype: None + :raises ~azure.core.exceptions.HttpResponseError: + """ + + @overload + def update_face( + self, + large_face_list_id: str, + persisted_face_id: str, + body: IO[bytes], + *, + content_type: str = "application/json", + **kwargs: Any, + ) -> None: + """Please refer to + https://learn.microsoft.com/rest/api/face/face-list-operations/update-large-face-list-face for + more details. + + :param large_face_list_id: Valid character is letter in lower case or digit or '-' or '_', + maximum length is 64. Required. + :type large_face_list_id: str + :param persisted_face_id: Face ID of the face. Required. + :type persisted_face_id: str + :param body: Required. + :type body: IO[bytes] + :keyword content_type: Body Parameter content-type. Content type parameter for binary body. + Default value is "application/json". + :paramtype content_type: str + :return: None + :rtype: None + :raises ~azure.core.exceptions.HttpResponseError: + """ + + @distributed_trace + def update_face( # pylint: disable=inconsistent-return-statements + self, + large_face_list_id: str, + persisted_face_id: str, + body: Union[JSON, IO[bytes]] = _Unset, + *, + user_data: Optional[str] = None, + **kwargs: Any, + ) -> None: + """Please refer to + https://learn.microsoft.com/rest/api/face/face-list-operations/update-large-face-list-face for + more details. + + :param large_face_list_id: Valid character is letter in lower case or digit or '-' or '_', + maximum length is 64. Required. + :type large_face_list_id: str + :param persisted_face_id: Face ID of the face. Required. + :type persisted_face_id: str + :param body: Is either a JSON type or a IO[bytes] type. Required. + :type body: JSON or IO[bytes] + :keyword user_data: User-provided data attached to the face. The length limit is 1K. Default + value is None. 
+ :paramtype user_data: str + :return: None + :rtype: None + :raises ~azure.core.exceptions.HttpResponseError: + """ + error_map: MutableMapping = { + 401: ClientAuthenticationError, + 404: ResourceNotFoundError, + 409: ResourceExistsError, + 304: ResourceNotModifiedError, + } + error_map.update(kwargs.pop("error_map", {}) or {}) + + _headers = case_insensitive_dict(kwargs.pop("headers", {}) or {}) + _params = kwargs.pop("params", {}) or {} + + content_type: Optional[str] = kwargs.pop("content_type", _headers.pop("Content-Type", None)) + cls: ClsType[None] = kwargs.pop("cls", None) + + if body is _Unset: + body = {"userData": user_data} + body = {k: v for k, v in body.items() if v is not None} + content_type = content_type or "application/json" + _content = None + if isinstance(body, (IOBase, bytes)): + _content = body + else: + _content = json.dumps(body, cls=SdkJSONEncoder, exclude_readonly=True) # type: ignore + + _request = build_large_face_list_update_face_request( + large_face_list_id=large_face_list_id, + persisted_face_id=persisted_face_id, + content_type=content_type, + content=_content, + headers=_headers, + params=_params, + ) + path_format_arguments = { + "endpoint": self._serialize.url("self._config.endpoint", self._config.endpoint, "str", skip_quote=True), + "apiVersion": self._serialize.url("self._config.api_version", self._config.api_version, "str"), + } + _request.url = self._client.format_url(_request.url, **path_format_arguments) + + _stream = False + pipeline_response: PipelineResponse = self._client._pipeline.run( # pylint: disable=protected-access + _request, stream=_stream, **kwargs + ) + + response = pipeline_response.http_response + + if response.status_code not in [200]: + map_error(status_code=response.status_code, response=response, error_map=error_map) + error = _deserialize(_models.FaceErrorResponse, response.json()) + raise HttpResponseError(response=response, model=error) + + if cls: + return cls(pipeline_response, None, {}) # type: ignore + + @distributed_trace + def get_faces( + self, large_face_list_id: str, *, start: Optional[str] = None, top: Optional[int] = None, **kwargs: Any + ) -> List[_models.LargeFaceListFace]: + """List faces' persistedFaceId and userData in a specified Large Face List. + + Please refer to + https://learn.microsoft.com/rest/api/face/face-list-operations/get-large-face-list-faces for + more details. + + :param large_face_list_id: Valid character is letter in lower case or digit or '-' or '_', + maximum length is 64. Required. + :type large_face_list_id: str + :keyword start: List resources greater than the "start". It contains no more than 64 + characters. Default is empty. Default value is None. + :paramtype start: str + :keyword top: The number of items to list, ranging in [1, 1000]. Default is 1000. Default value + is None. 
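Per the docstrings, `get_faces` returns at most `top` (default 1000) entries per call and `start` is an exclusive lower bound on the resource ID, so paging means feeding the last ID of one page into the next call. A sketch of that loop, with `persisted_face_id` and `user_data` again assumed as the model attribute names:

```python
# Annotate a stored face.
admin_client.large_face_list.update_face(
    "my-face-list", result.persisted_face_id, user_data="re-labelled"
)

# Page through every face in the list, 1000 at a time.
start = None
while True:
    page = admin_client.large_face_list.get_faces("my-face-list", start=start, top=1000)
    for face in page:
        print(face.persisted_face_id, face.user_data)  # assumed attribute names
    if len(page) < 1000:
        break
    start = page[-1].persisted_face_id  # exclusive lower bound for the next page
```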
+ :paramtype top: int + :return: list of LargeFaceListFace + :rtype: list[~azure.ai.vision.face.models.LargeFaceListFace] + :raises ~azure.core.exceptions.HttpResponseError: + """ + error_map: MutableMapping = { + 401: ClientAuthenticationError, + 404: ResourceNotFoundError, + 409: ResourceExistsError, + 304: ResourceNotModifiedError, + } + error_map.update(kwargs.pop("error_map", {}) or {}) + + _headers = kwargs.pop("headers", {}) or {} + _params = kwargs.pop("params", {}) or {} + + cls: ClsType[List[_models.LargeFaceListFace]] = kwargs.pop("cls", None) + + _request = build_large_face_list_get_faces_request( + large_face_list_id=large_face_list_id, + start=start, + top=top, + headers=_headers, + params=_params, + ) + path_format_arguments = { + "endpoint": self._serialize.url("self._config.endpoint", self._config.endpoint, "str", skip_quote=True), + "apiVersion": self._serialize.url("self._config.api_version", self._config.api_version, "str"), + } + _request.url = self._client.format_url(_request.url, **path_format_arguments) + + _stream = kwargs.pop("stream", False) + pipeline_response: PipelineResponse = self._client._pipeline.run( # pylint: disable=protected-access + _request, stream=_stream, **kwargs + ) + + response = pipeline_response.http_response + + if response.status_code not in [200]: + if _stream: + try: + response.read() # Load the body in memory and close the socket + except (StreamConsumedError, StreamClosedError): + pass + map_error(status_code=response.status_code, response=response, error_map=error_map) + error = _deserialize(_models.FaceErrorResponse, response.json()) + raise HttpResponseError(response=response, model=error) + + if _stream: + deserialized = response.iter_bytes() + else: + deserialized = _deserialize(List[_models.LargeFaceListFace], response.json()) + + if cls: + return cls(pipeline_response, deserialized, {}) # type: ignore + + return deserialized # type: ignore + + +class LargePersonGroupOperations: + """ + .. warning:: + **DO NOT** instantiate this class directly. + + Instead, you should access the following operations through + :class:`~azure.ai.vision.face.FaceAdministrationClient`'s + :attr:`large_person_group` attribute. + """ + + def __init__(self, *args, **kwargs): + input_args = list(args) + self._client = input_args.pop(0) if input_args else kwargs.pop("client") + self._config = input_args.pop(0) if input_args else kwargs.pop("config") + self._serialize = input_args.pop(0) if input_args else kwargs.pop("serializer") + self._deserialize = input_args.pop(0) if input_args else kwargs.pop("deserializer") + + @overload + def create( + self, large_person_group_id: str, body: JSON, *, content_type: str = "application/json", **kwargs: Any + ) -> None: + """Create a new Large Person Group with user-specified largePersonGroupId, name, an optional + userData and recognitionModel. + + Please refer to + https://learn.microsoft.com/rest/api/face/person-group-operations/create-large-person-group for + more details. + + :param large_person_group_id: ID of the container. Required. + :type large_person_group_id: str + :param body: Required. + :type body: JSON + :keyword content_type: Body Parameter content-type. Content type parameter for JSON body. + Default value is "application/json". 
+ :paramtype content_type: str + :return: None + :rtype: None + :raises ~azure.core.exceptions.HttpResponseError: + """ + + @overload + def create( + self, + large_person_group_id: str, + *, + name: str, + content_type: str = "application/json", + user_data: Optional[str] = None, + recognition_model: Optional[Union[str, _models.FaceRecognitionModel]] = None, + **kwargs: Any, + ) -> None: + """Create a new Large Person Group with user-specified largePersonGroupId, name, an optional + userData and recognitionModel. + + Please refer to + https://learn.microsoft.com/rest/api/face/person-group-operations/create-large-person-group for + more details. + + :param large_person_group_id: ID of the container. Required. + :type large_person_group_id: str + :keyword name: User defined name, maximum length is 128. Required. + :paramtype name: str + :keyword content_type: Body Parameter content-type. Content type parameter for JSON body. + Default value is "application/json". + :paramtype content_type: str + :keyword user_data: Optional user defined data. Length should not exceed 16K. Default value is + None. + :paramtype user_data: str + :keyword recognition_model: The 'recognitionModel' associated with this face list. Supported + 'recognitionModel' values include 'recognition_01', 'recognition_02, 'recognition_03', and + 'recognition_04'. The default value is 'recognition_01'. 'recognition_04' is recommended since + its accuracy is improved on faces wearing masks compared with 'recognition_03', and its overall + accuracy is improved compared with 'recognition_01' and 'recognition_02'. Known values are: + "recognition_01", "recognition_02", "recognition_03", and "recognition_04". Default value is + None. + :paramtype recognition_model: str or ~azure.ai.vision.face.models.FaceRecognitionModel + :return: None + :rtype: None + :raises ~azure.core.exceptions.HttpResponseError: + """ + + @overload + def create( + self, large_person_group_id: str, body: IO[bytes], *, content_type: str = "application/json", **kwargs: Any + ) -> None: + """Create a new Large Person Group with user-specified largePersonGroupId, name, an optional + userData and recognitionModel. + + Please refer to + https://learn.microsoft.com/rest/api/face/person-group-operations/create-large-person-group for + more details. + + :param large_person_group_id: ID of the container. Required. + :type large_person_group_id: str + :param body: Required. + :type body: IO[bytes] + :keyword content_type: Body Parameter content-type. Content type parameter for binary body. + Default value is "application/json". + :paramtype content_type: str + :return: None + :rtype: None + :raises ~azure.core.exceptions.HttpResponseError: + """ + + @distributed_trace + def create( # pylint: disable=inconsistent-return-statements + self, + large_person_group_id: str, + body: Union[JSON, IO[bytes]] = _Unset, + *, + name: str = _Unset, + user_data: Optional[str] = None, + recognition_model: Optional[Union[str, _models.FaceRecognitionModel]] = None, + **kwargs: Any, + ) -> None: + """Create a new Large Person Group with user-specified largePersonGroupId, name, an optional + userData and recognitionModel. + + Please refer to + https://learn.microsoft.com/rest/api/face/person-group-operations/create-large-person-group for + more details. + + :param large_person_group_id: ID of the container. Required. + :type large_person_group_id: str + :param body: Is either a JSON type or a IO[bytes] type. Required. 
+ :type body: JSON or IO[bytes] + :keyword name: User defined name, maximum length is 128. Required. + :paramtype name: str + :keyword user_data: Optional user defined data. Length should not exceed 16K. Default value is + None. + :paramtype user_data: str + :keyword recognition_model: The 'recognitionModel' associated with this face list. Supported + 'recognitionModel' values include 'recognition_01', 'recognition_02, 'recognition_03', and + 'recognition_04'. The default value is 'recognition_01'. 'recognition_04' is recommended since + its accuracy is improved on faces wearing masks compared with 'recognition_03', and its overall + accuracy is improved compared with 'recognition_01' and 'recognition_02'. Known values are: + "recognition_01", "recognition_02", "recognition_03", and "recognition_04". Default value is + None. + :paramtype recognition_model: str or ~azure.ai.vision.face.models.FaceRecognitionModel + :return: None + :rtype: None + :raises ~azure.core.exceptions.HttpResponseError: + """ + error_map: MutableMapping = { + 401: ClientAuthenticationError, + 404: ResourceNotFoundError, + 409: ResourceExistsError, + 304: ResourceNotModifiedError, + } + error_map.update(kwargs.pop("error_map", {}) or {}) + + _headers = case_insensitive_dict(kwargs.pop("headers", {}) or {}) + _params = kwargs.pop("params", {}) or {} + + content_type: Optional[str] = kwargs.pop("content_type", _headers.pop("Content-Type", None)) + cls: ClsType[None] = kwargs.pop("cls", None) + + if body is _Unset: + if name is _Unset: + raise TypeError("missing required argument: name") + body = {"name": name, "recognitionModel": recognition_model, "userData": user_data} + body = {k: v for k, v in body.items() if v is not None} + content_type = content_type or "application/json" + _content = None + if isinstance(body, (IOBase, bytes)): + _content = body + else: + _content = json.dumps(body, cls=SdkJSONEncoder, exclude_readonly=True) # type: ignore + + _request = build_large_person_group_create_request( + large_person_group_id=large_person_group_id, + content_type=content_type, + content=_content, + headers=_headers, + params=_params, + ) + path_format_arguments = { + "endpoint": self._serialize.url("self._config.endpoint", self._config.endpoint, "str", skip_quote=True), + "apiVersion": self._serialize.url("self._config.api_version", self._config.api_version, "str"), + } + _request.url = self._client.format_url(_request.url, **path_format_arguments) + + _stream = False + pipeline_response: PipelineResponse = self._client._pipeline.run( # pylint: disable=protected-access + _request, stream=_stream, **kwargs + ) + + response = pipeline_response.http_response + + if response.status_code not in [200]: + map_error(status_code=response.status_code, response=response, error_map=error_map) + error = _deserialize(_models.FaceErrorResponse, response.json()) + raise HttpResponseError(response=response, model=error) + + if cls: + return cls(pipeline_response, None, {}) # type: ignore + + @distributed_trace + def delete( # pylint: disable=inconsistent-return-statements + self, large_person_group_id: str, **kwargs: Any + ) -> None: + """Please refer to + https://learn.microsoft.com/rest/api/face/person-group-operations/delete-large-person-group for + more details. + + :param large_person_group_id: ID of the container. Required. 
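Creating a Large Person Group is a plain POST that returns no body; `name` is the only required field, and `recognition_model` should match the model used when detecting the faces you later register. A hedged sketch with a hypothetical group ID:

```python
# recognition_04 is the model the docstring recommends, including for faces wearing masks.
admin_client.large_person_group.create(
    "my-person-group",
    name="My person group",
    user_data="created by the sketch",
    recognition_model="recognition_04",
)
```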
+ :type large_person_group_id: str + :return: None + :rtype: None + :raises ~azure.core.exceptions.HttpResponseError: + """ + error_map: MutableMapping = { + 401: ClientAuthenticationError, + 404: ResourceNotFoundError, + 409: ResourceExistsError, + 304: ResourceNotModifiedError, + } + error_map.update(kwargs.pop("error_map", {}) or {}) + + _headers = kwargs.pop("headers", {}) or {} + _params = kwargs.pop("params", {}) or {} + + cls: ClsType[None] = kwargs.pop("cls", None) + + _request = build_large_person_group_delete_request( + large_person_group_id=large_person_group_id, + headers=_headers, + params=_params, + ) + path_format_arguments = { + "endpoint": self._serialize.url("self._config.endpoint", self._config.endpoint, "str", skip_quote=True), + "apiVersion": self._serialize.url("self._config.api_version", self._config.api_version, "str"), + } + _request.url = self._client.format_url(_request.url, **path_format_arguments) + + _stream = False + pipeline_response: PipelineResponse = self._client._pipeline.run( # pylint: disable=protected-access + _request, stream=_stream, **kwargs + ) + + response = pipeline_response.http_response + + if response.status_code not in [200]: + map_error(status_code=response.status_code, response=response, error_map=error_map) + error = _deserialize(_models.FaceErrorResponse, response.json()) + raise HttpResponseError(response=response, model=error) + + if cls: + return cls(pipeline_response, None, {}) # type: ignore + + @distributed_trace + def get( + self, large_person_group_id: str, *, return_recognition_model: Optional[bool] = None, **kwargs: Any + ) -> _models.LargePersonGroup: + """Please refer to + https://learn.microsoft.com/rest/api/face/person-group-operations/get-large-person-group for + more details. + + :param large_person_group_id: ID of the container. Required. + :type large_person_group_id: str + :keyword return_recognition_model: Return 'recognitionModel' or not. The default value is + false. Default value is None. + :paramtype return_recognition_model: bool + :return: LargePersonGroup. 
The LargePersonGroup is compatible with MutableMapping + :rtype: ~azure.ai.vision.face.models.LargePersonGroup + :raises ~azure.core.exceptions.HttpResponseError: + """ + error_map: MutableMapping = { + 401: ClientAuthenticationError, + 404: ResourceNotFoundError, + 409: ResourceExistsError, + 304: ResourceNotModifiedError, + } + error_map.update(kwargs.pop("error_map", {}) or {}) + + _headers = kwargs.pop("headers", {}) or {} + _params = kwargs.pop("params", {}) or {} + + cls: ClsType[_models.LargePersonGroup] = kwargs.pop("cls", None) + + _request = build_large_person_group_get_request( + large_person_group_id=large_person_group_id, + return_recognition_model=return_recognition_model, + headers=_headers, + params=_params, + ) + path_format_arguments = { + "endpoint": self._serialize.url("self._config.endpoint", self._config.endpoint, "str", skip_quote=True), + "apiVersion": self._serialize.url("self._config.api_version", self._config.api_version, "str"), + } + _request.url = self._client.format_url(_request.url, **path_format_arguments) + + _stream = kwargs.pop("stream", False) + pipeline_response: PipelineResponse = self._client._pipeline.run( # pylint: disable=protected-access + _request, stream=_stream, **kwargs + ) + + response = pipeline_response.http_response + + if response.status_code not in [200]: + if _stream: + try: + response.read() # Load the body in memory and close the socket + except (StreamConsumedError, StreamClosedError): + pass + map_error(status_code=response.status_code, response=response, error_map=error_map) + error = _deserialize(_models.FaceErrorResponse, response.json()) + raise HttpResponseError(response=response, model=error) + + if _stream: + deserialized = response.iter_bytes() + else: + deserialized = _deserialize(_models.LargePersonGroup, response.json()) + + if cls: + return cls(pipeline_response, deserialized, {}) # type: ignore + + return deserialized # type: ignore + + @overload + def update( + self, large_person_group_id: str, body: JSON, *, content_type: str = "application/json", **kwargs: Any + ) -> None: + """Please refer to + https://learn.microsoft.com/rest/api/face/person-group-operations/update-large-person-group for + more details. + + :param large_person_group_id: ID of the container. Required. + :type large_person_group_id: str + :param body: Required. + :type body: JSON + :keyword content_type: Body Parameter content-type. Content type parameter for JSON body. + Default value is "application/json". + :paramtype content_type: str + :return: None + :rtype: None + :raises ~azure.core.exceptions.HttpResponseError: + """ + + @overload + def update( + self, + large_person_group_id: str, + *, + content_type: str = "application/json", + name: Optional[str] = None, + user_data: Optional[str] = None, + **kwargs: Any, + ) -> None: + """Please refer to + https://learn.microsoft.com/rest/api/face/person-group-operations/update-large-person-group for + more details. + + :param large_person_group_id: ID of the container. Required. + :type large_person_group_id: str + :keyword content_type: Body Parameter content-type. Content type parameter for JSON body. + Default value is "application/json". + :paramtype content_type: str + :keyword name: User defined name, maximum length is 128. Default value is None. + :paramtype name: str + :keyword user_data: Optional user defined data. Length should not exceed 16K. Default value is + None. 
+ :paramtype user_data: str + :return: None + :rtype: None + :raises ~azure.core.exceptions.HttpResponseError: + """ + + @overload + def update( + self, large_person_group_id: str, body: IO[bytes], *, content_type: str = "application/json", **kwargs: Any + ) -> None: + """Please refer to + https://learn.microsoft.com/rest/api/face/person-group-operations/update-large-person-group for + more details. + + :param large_person_group_id: ID of the container. Required. + :type large_person_group_id: str + :param body: Required. + :type body: IO[bytes] + :keyword content_type: Body Parameter content-type. Content type parameter for binary body. + Default value is "application/json". + :paramtype content_type: str + :return: None + :rtype: None + :raises ~azure.core.exceptions.HttpResponseError: + """ + + @distributed_trace + def update( # pylint: disable=inconsistent-return-statements + self, + large_person_group_id: str, + body: Union[JSON, IO[bytes]] = _Unset, + *, + name: Optional[str] = None, + user_data: Optional[str] = None, + **kwargs: Any, + ) -> None: + """Please refer to + https://learn.microsoft.com/rest/api/face/person-group-operations/update-large-person-group for + more details. + + :param large_person_group_id: ID of the container. Required. + :type large_person_group_id: str + :param body: Is either a JSON type or a IO[bytes] type. Required. + :type body: JSON or IO[bytes] + :keyword name: User defined name, maximum length is 128. Default value is None. + :paramtype name: str + :keyword user_data: Optional user defined data. Length should not exceed 16K. Default value is + None. + :paramtype user_data: str + :return: None + :rtype: None + :raises ~azure.core.exceptions.HttpResponseError: + """ + error_map: MutableMapping = { + 401: ClientAuthenticationError, + 404: ResourceNotFoundError, + 409: ResourceExistsError, + 304: ResourceNotModifiedError, + } + error_map.update(kwargs.pop("error_map", {}) or {}) + + _headers = case_insensitive_dict(kwargs.pop("headers", {}) or {}) + _params = kwargs.pop("params", {}) or {} + + content_type: Optional[str] = kwargs.pop("content_type", _headers.pop("Content-Type", None)) + cls: ClsType[None] = kwargs.pop("cls", None) + + if body is _Unset: + body = {"name": name, "userData": user_data} + body = {k: v for k, v in body.items() if v is not None} + content_type = content_type or "application/json" + _content = None + if isinstance(body, (IOBase, bytes)): + _content = body + else: + _content = json.dumps(body, cls=SdkJSONEncoder, exclude_readonly=True) # type: ignore + + _request = build_large_person_group_update_request( + large_person_group_id=large_person_group_id, + content_type=content_type, + content=_content, + headers=_headers, + params=_params, + ) + path_format_arguments = { + "endpoint": self._serialize.url("self._config.endpoint", self._config.endpoint, "str", skip_quote=True), + "apiVersion": self._serialize.url("self._config.api_version", self._config.api_version, "str"), + } + _request.url = self._client.format_url(_request.url, **path_format_arguments) + + _stream = False + pipeline_response: PipelineResponse = self._client._pipeline.run( # pylint: disable=protected-access + _request, stream=_stream, **kwargs + ) + + response = pipeline_response.http_response + + if response.status_code not in [200]: + map_error(status_code=response.status_code, response=response, error_map=error_map) + error = _deserialize(_models.FaceErrorResponse, response.json()) + raise HttpResponseError(response=response, model=error) + + if cls: + return 
cls(pipeline_response, None, {}) # type: ignore + + @distributed_trace + def get_large_person_groups( + self, + *, + start: Optional[str] = None, + top: Optional[int] = None, + return_recognition_model: Optional[bool] = None, + **kwargs: Any, + ) -> List[_models.LargePersonGroup]: + """List all existing Large Person Groups' largePersonGroupId, name, userData and recognitionModel. + + Please refer to + https://learn.microsoft.com/rest/api/face/person-group-operations/get-large-person-groups for + more details. + + :keyword start: List resources greater than the "start". It contains no more than 64 + characters. Default is empty. Default value is None. + :paramtype start: str + :keyword top: The number of items to list, ranging in [1, 1000]. Default is 1000. Default value + is None. + :paramtype top: int + :keyword return_recognition_model: Return 'recognitionModel' or not. The default value is + false. Default value is None. + :paramtype return_recognition_model: bool + :return: list of LargePersonGroup + :rtype: list[~azure.ai.vision.face.models.LargePersonGroup] + :raises ~azure.core.exceptions.HttpResponseError: + """ + error_map: MutableMapping = { + 401: ClientAuthenticationError, + 404: ResourceNotFoundError, + 409: ResourceExistsError, + 304: ResourceNotModifiedError, + } + error_map.update(kwargs.pop("error_map", {}) or {}) + + _headers = kwargs.pop("headers", {}) or {} + _params = kwargs.pop("params", {}) or {} + + cls: ClsType[List[_models.LargePersonGroup]] = kwargs.pop("cls", None) + + _request = build_large_person_group_get_large_person_groups_request( + start=start, + top=top, + return_recognition_model=return_recognition_model, + headers=_headers, + params=_params, + ) + path_format_arguments = { + "endpoint": self._serialize.url("self._config.endpoint", self._config.endpoint, "str", skip_quote=True), + "apiVersion": self._serialize.url("self._config.api_version", self._config.api_version, "str"), + } + _request.url = self._client.format_url(_request.url, **path_format_arguments) + + _stream = kwargs.pop("stream", False) + pipeline_response: PipelineResponse = self._client._pipeline.run( # pylint: disable=protected-access + _request, stream=_stream, **kwargs + ) + + response = pipeline_response.http_response + + if response.status_code not in [200]: + if _stream: + try: + response.read() # Load the body in memory and close the socket + except (StreamConsumedError, StreamClosedError): + pass + map_error(status_code=response.status_code, response=response, error_map=error_map) + error = _deserialize(_models.FaceErrorResponse, response.json()) + raise HttpResponseError(response=response, model=error) + + if _stream: + deserialized = response.iter_bytes() + else: + deserialized = _deserialize(List[_models.LargePersonGroup], response.json()) + + if cls: + return cls(pipeline_response, deserialized, {}) # type: ignore + + return deserialized # type: ignore + + @distributed_trace + def get_training_status(self, large_person_group_id: str, **kwargs: Any) -> _models.FaceTrainingResult: + """To check Large Person Group training status completed or still ongoing. Large Person Group + training is an asynchronous operation triggered by "Train Large Person Group" API. + + Please refer to + https://learn.microsoft.com/rest/api/face/person-group-operations/get-large-person-group-training-status + for more details. + + :param large_person_group_id: ID of the container. Required. + :type large_person_group_id: str + :return: FaceTrainingResult. 
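`get_large_person_groups` uses the same `start`/`top` paging contract as the list operations above. A sketch that prints each group's ID and name — attribute names assumed from the REST fields `largePersonGroupId` and `name`:

```python
groups = admin_client.large_person_group.get_large_person_groups(top=100)
for group in groups:
    print(group.large_person_group_id, group.name)  # assumed attribute names
```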
The FaceTrainingResult is compatible with MutableMapping + :rtype: ~azure.ai.vision.face.models.FaceTrainingResult + :raises ~azure.core.exceptions.HttpResponseError: + """ + error_map: MutableMapping = { + 401: ClientAuthenticationError, + 404: ResourceNotFoundError, + 409: ResourceExistsError, + 304: ResourceNotModifiedError, + } + error_map.update(kwargs.pop("error_map", {}) or {}) + + _headers = kwargs.pop("headers", {}) or {} + _params = kwargs.pop("params", {}) or {} + + cls: ClsType[_models.FaceTrainingResult] = kwargs.pop("cls", None) + + _request = build_large_person_group_get_training_status_request( + large_person_group_id=large_person_group_id, + headers=_headers, + params=_params, + ) + path_format_arguments = { + "endpoint": self._serialize.url("self._config.endpoint", self._config.endpoint, "str", skip_quote=True), + "apiVersion": self._serialize.url("self._config.api_version", self._config.api_version, "str"), + } + _request.url = self._client.format_url(_request.url, **path_format_arguments) + + _stream = kwargs.pop("stream", False) + pipeline_response: PipelineResponse = self._client._pipeline.run( # pylint: disable=protected-access + _request, stream=_stream, **kwargs + ) + + response = pipeline_response.http_response + + if response.status_code not in [200]: + if _stream: + try: + response.read() # Load the body in memory and close the socket + except (StreamConsumedError, StreamClosedError): + pass + map_error(status_code=response.status_code, response=response, error_map=error_map) + error = _deserialize(_models.FaceErrorResponse, response.json()) + raise HttpResponseError(response=response, model=error) + + if _stream: + deserialized = response.iter_bytes() + else: + deserialized = _deserialize(_models.FaceTrainingResult, response.json()) + + if cls: + return cls(pipeline_response, deserialized, {}) # type: ignore + + return deserialized # type: ignore + + def _train_initial(self, large_person_group_id: str, **kwargs: Any) -> Iterator[bytes]: + error_map: MutableMapping = { + 401: ClientAuthenticationError, + 404: ResourceNotFoundError, + 409: ResourceExistsError, + 304: ResourceNotModifiedError, + } + error_map.update(kwargs.pop("error_map", {}) or {}) + + _headers = kwargs.pop("headers", {}) or {} + _params = kwargs.pop("params", {}) or {} + + cls: ClsType[Iterator[bytes]] = kwargs.pop("cls", None) + + _request = build_large_person_group_train_request( + large_person_group_id=large_person_group_id, + headers=_headers, + params=_params, + ) + path_format_arguments = { + "endpoint": self._serialize.url("self._config.endpoint", self._config.endpoint, "str", skip_quote=True), + "apiVersion": self._serialize.url("self._config.api_version", self._config.api_version, "str"), + } + _request.url = self._client.format_url(_request.url, **path_format_arguments) + + _stream = True + pipeline_response: PipelineResponse = self._client._pipeline.run( # pylint: disable=protected-access + _request, stream=_stream, **kwargs + ) + + response = pipeline_response.http_response + + if response.status_code not in [202]: + try: + response.read() # Load the body in memory and close the socket + except (StreamConsumedError, StreamClosedError): + pass + map_error(status_code=response.status_code, response=response, error_map=error_map) + error = _deserialize(_models.FaceErrorResponse, response.json()) + raise HttpResponseError(response=response, model=error) + + response_headers = {} + response_headers["operation-Location"] = self._deserialize("str", 
response.headers.get("operation-Location")) + + deserialized = response.iter_bytes() + + if cls: + return cls(pipeline_response, deserialized, response_headers) # type: ignore + + return deserialized # type: ignore + + @distributed_trace + def begin_train(self, large_person_group_id: str, **kwargs: Any) -> LROPoller[None]: + """Submit a Large Person Group training task. Training is a crucial step that only a trained Large + Person Group can be used by "Identify From Large Person Group". + + Please refer to + https://learn.microsoft.com/rest/api/face/person-group-operations/train-large-person-group for + more details. + + :param large_person_group_id: ID of the container. Required. + :type large_person_group_id: str + :return: An instance of LROPoller that returns None + :rtype: ~azure.core.polling.LROPoller[None] + :raises ~azure.core.exceptions.HttpResponseError: + """ + _headers = kwargs.pop("headers", {}) or {} + _params = kwargs.pop("params", {}) or {} + + cls: ClsType[None] = kwargs.pop("cls", None) + polling: Union[bool, PollingMethod] = kwargs.pop("polling", True) + lro_delay = kwargs.pop("polling_interval", self._config.polling_interval) + cont_token: Optional[str] = kwargs.pop("continuation_token", None) + if cont_token is None: + raw_result = self._train_initial( + large_person_group_id=large_person_group_id, + cls=lambda x, y, z: x, + headers=_headers, + params=_params, + **kwargs, + ) + raw_result.http_response.read() # type: ignore + kwargs.pop("error_map", None) + + def get_long_running_output(pipeline_response): # pylint: disable=inconsistent-return-statements + if cls: + return cls(pipeline_response, None, {}) # type: ignore + + path_format_arguments = { + "endpoint": self._serialize.url("self._config.endpoint", self._config.endpoint, "str", skip_quote=True), + "apiVersion": self._serialize.url("self._config.api_version", self._config.api_version, "str"), + } + + if polling is True: + polling_method: PollingMethod = cast( + PollingMethod, LROBasePolling(lro_delay, path_format_arguments=path_format_arguments, **kwargs) + ) + elif polling is False: + polling_method = cast(PollingMethod, NoPolling()) + else: + polling_method = polling + if cont_token: + return LROPoller[None].from_continuation_token( + polling_method=polling_method, + continuation_token=cont_token, + client=self._client, + deserialization_callback=get_long_running_output, + ) + return LROPoller[None](self._client, raw_result, get_long_running_output, polling_method) # type: ignore + + @overload + def create_person( + self, large_person_group_id: str, body: JSON, *, content_type: str = "application/json", **kwargs: Any + ) -> _models.CreatePersonResult: + """Create a new person in a specified Large Person Group. To add face to this person, please call + "Add Large Person Group Person Face". + + Please refer to + https://learn.microsoft.com/rest/api/face/person-group-operations/create-large-person-group-person + for more details. + + :param large_person_group_id: ID of the container. Required. + :type large_person_group_id: str + :param body: Required. + :type body: JSON + :keyword content_type: Body Parameter content-type. Content type parameter for JSON body. + Default value is "application/json". + :paramtype content_type: str + :return: CreatePersonResult. 
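+
+        For example (an illustrative sketch; the client variable, group id and
+        payload values are invented)::
+
+            result = admin_client.large_person_group.create_person(
+                "my-lpg", {"name": "Alice", "userData": "example person"})
+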
The CreatePersonResult is compatible with MutableMapping + :rtype: ~azure.ai.vision.face.models.CreatePersonResult + :raises ~azure.core.exceptions.HttpResponseError: + """ + + @overload + def create_person( + self, + large_person_group_id: str, + *, + name: str, + content_type: str = "application/json", + user_data: Optional[str] = None, + **kwargs: Any, + ) -> _models.CreatePersonResult: + """Create a new person in a specified Large Person Group. To add face to this person, please call + "Add Large Person Group Person Face". + + Please refer to + https://learn.microsoft.com/rest/api/face/person-group-operations/create-large-person-group-person + for more details. + + :param large_person_group_id: ID of the container. Required. + :type large_person_group_id: str + :keyword name: User defined name, maximum length is 128. Required. + :paramtype name: str + :keyword content_type: Body Parameter content-type. Content type parameter for JSON body. + Default value is "application/json". + :paramtype content_type: str + :keyword user_data: Optional user defined data. Length should not exceed 16K. Default value is + None. + :paramtype user_data: str + :return: CreatePersonResult. The CreatePersonResult is compatible with MutableMapping + :rtype: ~azure.ai.vision.face.models.CreatePersonResult + :raises ~azure.core.exceptions.HttpResponseError: + """ + + @overload + def create_person( + self, large_person_group_id: str, body: IO[bytes], *, content_type: str = "application/json", **kwargs: Any + ) -> _models.CreatePersonResult: + """Create a new person in a specified Large Person Group. To add face to this person, please call + "Add Large Person Group Person Face". + + Please refer to + https://learn.microsoft.com/rest/api/face/person-group-operations/create-large-person-group-person + for more details. + + :param large_person_group_id: ID of the container. Required. + :type large_person_group_id: str + :param body: Required. + :type body: IO[bytes] + :keyword content_type: Body Parameter content-type. Content type parameter for binary body. + Default value is "application/json". + :paramtype content_type: str + :return: CreatePersonResult. The CreatePersonResult is compatible with MutableMapping + :rtype: ~azure.ai.vision.face.models.CreatePersonResult + :raises ~azure.core.exceptions.HttpResponseError: + """ + + @distributed_trace + def create_person( + self, + large_person_group_id: str, + body: Union[JSON, IO[bytes]] = _Unset, + *, + name: str = _Unset, + user_data: Optional[str] = None, + **kwargs: Any, + ) -> _models.CreatePersonResult: + """Create a new person in a specified Large Person Group. To add face to this person, please call + "Add Large Person Group Person Face". + + Please refer to + https://learn.microsoft.com/rest/api/face/person-group-operations/create-large-person-group-person + for more details. + + :param large_person_group_id: ID of the container. Required. + :type large_person_group_id: str + :param body: Is either a JSON type or a IO[bytes] type. Required. + :type body: JSON or IO[bytes] + :keyword name: User defined name, maximum length is 128. Required. + :paramtype name: str + :keyword user_data: Optional user defined data. Length should not exceed 16K. Default value is + None. + :paramtype user_data: str + :return: CreatePersonResult. 
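+
+        For example, using the keyword form (an illustrative sketch; the client
+        variable, group id and values are invented)::
+
+            result = admin_client.large_person_group.create_person(
+                "my-lpg", name="Alice", user_data="example person")
+            print(result.person_id)  # the personId assigned by the service
+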
The CreatePersonResult is compatible with MutableMapping + :rtype: ~azure.ai.vision.face.models.CreatePersonResult + :raises ~azure.core.exceptions.HttpResponseError: + """ + error_map: MutableMapping = { + 401: ClientAuthenticationError, + 404: ResourceNotFoundError, + 409: ResourceExistsError, + 304: ResourceNotModifiedError, + } + error_map.update(kwargs.pop("error_map", {}) or {}) + + _headers = case_insensitive_dict(kwargs.pop("headers", {}) or {}) + _params = kwargs.pop("params", {}) or {} + + content_type: Optional[str] = kwargs.pop("content_type", _headers.pop("Content-Type", None)) + cls: ClsType[_models.CreatePersonResult] = kwargs.pop("cls", None) + + if body is _Unset: + if name is _Unset: + raise TypeError("missing required argument: name") + body = {"name": name, "userData": user_data} + body = {k: v for k, v in body.items() if v is not None} + content_type = content_type or "application/json" + _content = None + if isinstance(body, (IOBase, bytes)): + _content = body + else: + _content = json.dumps(body, cls=SdkJSONEncoder, exclude_readonly=True) # type: ignore + + _request = build_large_person_group_create_person_request( + large_person_group_id=large_person_group_id, + content_type=content_type, + content=_content, + headers=_headers, + params=_params, + ) + path_format_arguments = { + "endpoint": self._serialize.url("self._config.endpoint", self._config.endpoint, "str", skip_quote=True), + "apiVersion": self._serialize.url("self._config.api_version", self._config.api_version, "str"), + } + _request.url = self._client.format_url(_request.url, **path_format_arguments) + + _stream = kwargs.pop("stream", False) + pipeline_response: PipelineResponse = self._client._pipeline.run( # pylint: disable=protected-access + _request, stream=_stream, **kwargs + ) + + response = pipeline_response.http_response + + if response.status_code not in [200]: + if _stream: + try: + response.read() # Load the body in memory and close the socket + except (StreamConsumedError, StreamClosedError): + pass + map_error(status_code=response.status_code, response=response, error_map=error_map) + error = _deserialize(_models.FaceErrorResponse, response.json()) + raise HttpResponseError(response=response, model=error) + + if _stream: + deserialized = response.iter_bytes() + else: + deserialized = _deserialize(_models.CreatePersonResult, response.json()) + + if cls: + return cls(pipeline_response, deserialized, {}) # type: ignore + + return deserialized # type: ignore + + @distributed_trace + def delete_person( # pylint: disable=inconsistent-return-statements + self, large_person_group_id: str, person_id: str, **kwargs: Any + ) -> None: + """Please refer to + https://learn.microsoft.com/rest/api/face/person-group-operations/delete-large-person-group-person + for more details. + + :param large_person_group_id: ID of the container. Required. + :type large_person_group_id: str + :param person_id: ID of the person. Required. 
+ :type person_id: str + :return: None + :rtype: None + :raises ~azure.core.exceptions.HttpResponseError: + """ + error_map: MutableMapping = { + 401: ClientAuthenticationError, + 404: ResourceNotFoundError, + 409: ResourceExistsError, + 304: ResourceNotModifiedError, + } + error_map.update(kwargs.pop("error_map", {}) or {}) + + _headers = kwargs.pop("headers", {}) or {} + _params = kwargs.pop("params", {}) or {} + + cls: ClsType[None] = kwargs.pop("cls", None) + + _request = build_large_person_group_delete_person_request( + large_person_group_id=large_person_group_id, + person_id=person_id, + headers=_headers, + params=_params, + ) + path_format_arguments = { + "endpoint": self._serialize.url("self._config.endpoint", self._config.endpoint, "str", skip_quote=True), + "apiVersion": self._serialize.url("self._config.api_version", self._config.api_version, "str"), + } + _request.url = self._client.format_url(_request.url, **path_format_arguments) + + _stream = False + pipeline_response: PipelineResponse = self._client._pipeline.run( # pylint: disable=protected-access + _request, stream=_stream, **kwargs + ) + + response = pipeline_response.http_response + + if response.status_code not in [200]: + map_error(status_code=response.status_code, response=response, error_map=error_map) + error = _deserialize(_models.FaceErrorResponse, response.json()) + raise HttpResponseError(response=response, model=error) + + if cls: + return cls(pipeline_response, None, {}) # type: ignore + + @distributed_trace + def get_person(self, large_person_group_id: str, person_id: str, **kwargs: Any) -> _models.LargePersonGroupPerson: + """Please refer to + https://learn.microsoft.com/rest/api/face/person-group-operations/get-large-person-group-person + for more details. + + :param large_person_group_id: ID of the container. Required. + :type large_person_group_id: str + :param person_id: ID of the person. Required. + :type person_id: str + :return: LargePersonGroupPerson. 
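+
+        For example (an illustrative sketch; the client variable, group id and
+        person id are invented)::
+
+            person = admin_client.large_person_group.get_person("my-lpg", person_id)
+            print(person.name, person.persisted_face_ids)
+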
The LargePersonGroupPerson is compatible with MutableMapping + :rtype: ~azure.ai.vision.face.models.LargePersonGroupPerson + :raises ~azure.core.exceptions.HttpResponseError: + """ + error_map: MutableMapping = { + 401: ClientAuthenticationError, + 404: ResourceNotFoundError, + 409: ResourceExistsError, + 304: ResourceNotModifiedError, + } + error_map.update(kwargs.pop("error_map", {}) or {}) + + _headers = kwargs.pop("headers", {}) or {} + _params = kwargs.pop("params", {}) or {} + + cls: ClsType[_models.LargePersonGroupPerson] = kwargs.pop("cls", None) + + _request = build_large_person_group_get_person_request( + large_person_group_id=large_person_group_id, + person_id=person_id, + headers=_headers, + params=_params, + ) + path_format_arguments = { + "endpoint": self._serialize.url("self._config.endpoint", self._config.endpoint, "str", skip_quote=True), + "apiVersion": self._serialize.url("self._config.api_version", self._config.api_version, "str"), + } + _request.url = self._client.format_url(_request.url, **path_format_arguments) + + _stream = kwargs.pop("stream", False) + pipeline_response: PipelineResponse = self._client._pipeline.run( # pylint: disable=protected-access + _request, stream=_stream, **kwargs + ) + + response = pipeline_response.http_response + + if response.status_code not in [200]: + if _stream: + try: + response.read() # Load the body in memory and close the socket + except (StreamConsumedError, StreamClosedError): + pass + map_error(status_code=response.status_code, response=response, error_map=error_map) + error = _deserialize(_models.FaceErrorResponse, response.json()) + raise HttpResponseError(response=response, model=error) + + if _stream: + deserialized = response.iter_bytes() + else: + deserialized = _deserialize(_models.LargePersonGroupPerson, response.json()) + + if cls: + return cls(pipeline_response, deserialized, {}) # type: ignore + + return deserialized # type: ignore + + @overload + def update_person( + self, + large_person_group_id: str, + person_id: str, + body: JSON, + *, + content_type: str = "application/json", + **kwargs: Any, + ) -> None: + """Please refer to + https://learn.microsoft.com/rest/api/face/person-group-operations/update-large-person-group-person + for more details. + + :param large_person_group_id: ID of the container. Required. + :type large_person_group_id: str + :param person_id: ID of the person. Required. + :type person_id: str + :param body: Required. + :type body: JSON + :keyword content_type: Body Parameter content-type. Content type parameter for JSON body. + Default value is "application/json". + :paramtype content_type: str + :return: None + :rtype: None + :raises ~azure.core.exceptions.HttpResponseError: + """ + + @overload + def update_person( + self, + large_person_group_id: str, + person_id: str, + *, + content_type: str = "application/json", + name: Optional[str] = None, + user_data: Optional[str] = None, + **kwargs: Any, + ) -> None: + """Please refer to + https://learn.microsoft.com/rest/api/face/person-group-operations/update-large-person-group-person + for more details. + + :param large_person_group_id: ID of the container. Required. + :type large_person_group_id: str + :param person_id: ID of the person. Required. + :type person_id: str + :keyword content_type: Body Parameter content-type. Content type parameter for JSON body. + Default value is "application/json". + :paramtype content_type: str + :keyword name: User defined name, maximum length is 128. Default value is None. 
+ :paramtype name: str + :keyword user_data: Optional user defined data. Length should not exceed 16K. Default value is + None. + :paramtype user_data: str + :return: None + :rtype: None + :raises ~azure.core.exceptions.HttpResponseError: + """ + + @overload + def update_person( + self, + large_person_group_id: str, + person_id: str, + body: IO[bytes], + *, + content_type: str = "application/json", + **kwargs: Any, + ) -> None: + """Please refer to + https://learn.microsoft.com/rest/api/face/person-group-operations/update-large-person-group-person + for more details. + + :param large_person_group_id: ID of the container. Required. + :type large_person_group_id: str + :param person_id: ID of the person. Required. + :type person_id: str + :param body: Required. + :type body: IO[bytes] + :keyword content_type: Body Parameter content-type. Content type parameter for binary body. + Default value is "application/json". + :paramtype content_type: str + :return: None + :rtype: None + :raises ~azure.core.exceptions.HttpResponseError: + """ + + @distributed_trace + def update_person( # pylint: disable=inconsistent-return-statements + self, + large_person_group_id: str, + person_id: str, + body: Union[JSON, IO[bytes]] = _Unset, + *, + name: Optional[str] = None, + user_data: Optional[str] = None, + **kwargs: Any, + ) -> None: + """Please refer to + https://learn.microsoft.com/rest/api/face/person-group-operations/update-large-person-group-person + for more details. + + :param large_person_group_id: ID of the container. Required. + :type large_person_group_id: str + :param person_id: ID of the person. Required. + :type person_id: str + :param body: Is either a JSON type or a IO[bytes] type. Required. + :type body: JSON or IO[bytes] + :keyword name: User defined name, maximum length is 128. Default value is None. + :paramtype name: str + :keyword user_data: Optional user defined data. Length should not exceed 16K. Default value is + None. 
+ :paramtype user_data: str + :return: None + :rtype: None + :raises ~azure.core.exceptions.HttpResponseError: + """ + error_map: MutableMapping = { + 401: ClientAuthenticationError, + 404: ResourceNotFoundError, + 409: ResourceExistsError, + 304: ResourceNotModifiedError, + } + error_map.update(kwargs.pop("error_map", {}) or {}) + + _headers = case_insensitive_dict(kwargs.pop("headers", {}) or {}) + _params = kwargs.pop("params", {}) or {} + + content_type: Optional[str] = kwargs.pop("content_type", _headers.pop("Content-Type", None)) + cls: ClsType[None] = kwargs.pop("cls", None) + + if body is _Unset: + body = {"name": name, "userData": user_data} + body = {k: v for k, v in body.items() if v is not None} + content_type = content_type or "application/json" + _content = None + if isinstance(body, (IOBase, bytes)): + _content = body + else: + _content = json.dumps(body, cls=SdkJSONEncoder, exclude_readonly=True) # type: ignore + + _request = build_large_person_group_update_person_request( + large_person_group_id=large_person_group_id, + person_id=person_id, + content_type=content_type, + content=_content, + headers=_headers, + params=_params, + ) + path_format_arguments = { + "endpoint": self._serialize.url("self._config.endpoint", self._config.endpoint, "str", skip_quote=True), + "apiVersion": self._serialize.url("self._config.api_version", self._config.api_version, "str"), + } + _request.url = self._client.format_url(_request.url, **path_format_arguments) + + _stream = False + pipeline_response: PipelineResponse = self._client._pipeline.run( # pylint: disable=protected-access + _request, stream=_stream, **kwargs + ) + + response = pipeline_response.http_response + + if response.status_code not in [200]: + map_error(status_code=response.status_code, response=response, error_map=error_map) + error = _deserialize(_models.FaceErrorResponse, response.json()) + raise HttpResponseError(response=response, model=error) + + if cls: + return cls(pipeline_response, None, {}) # type: ignore + + @distributed_trace + def get_persons( + self, large_person_group_id: str, *, start: Optional[str] = None, top: Optional[int] = None, **kwargs: Any + ) -> List[_models.LargePersonGroupPerson]: + """List all persons' information in the specified Large Person Group, including personId, name, + userData and persistedFaceIds of registered person faces. + + Please refer to + https://learn.microsoft.com/rest/api/face/person-group-operations/get-large-person-group-persons + for more details. + + :param large_person_group_id: ID of the container. Required. + :type large_person_group_id: str + :keyword start: List resources greater than the "start". It contains no more than 64 + characters. Default is empty. Default value is None. + :paramtype start: str + :keyword top: The number of items to list, ranging in [1, 1000]. Default is 1000. Default value + is None. 
+ :paramtype top: int + :return: list of LargePersonGroupPerson + :rtype: list[~azure.ai.vision.face.models.LargePersonGroupPerson] + :raises ~azure.core.exceptions.HttpResponseError: + """ + error_map: MutableMapping = { + 401: ClientAuthenticationError, + 404: ResourceNotFoundError, + 409: ResourceExistsError, + 304: ResourceNotModifiedError, + } + error_map.update(kwargs.pop("error_map", {}) or {}) + + _headers = kwargs.pop("headers", {}) or {} + _params = kwargs.pop("params", {}) or {} + + cls: ClsType[List[_models.LargePersonGroupPerson]] = kwargs.pop("cls", None) + + _request = build_large_person_group_get_persons_request( + large_person_group_id=large_person_group_id, + start=start, + top=top, + headers=_headers, + params=_params, + ) + path_format_arguments = { + "endpoint": self._serialize.url("self._config.endpoint", self._config.endpoint, "str", skip_quote=True), + "apiVersion": self._serialize.url("self._config.api_version", self._config.api_version, "str"), + } + _request.url = self._client.format_url(_request.url, **path_format_arguments) + + _stream = kwargs.pop("stream", False) + pipeline_response: PipelineResponse = self._client._pipeline.run( # pylint: disable=protected-access + _request, stream=_stream, **kwargs + ) + + response = pipeline_response.http_response + + if response.status_code not in [200]: + if _stream: + try: + response.read() # Load the body in memory and close the socket + except (StreamConsumedError, StreamClosedError): + pass + map_error(status_code=response.status_code, response=response, error_map=error_map) + error = _deserialize(_models.FaceErrorResponse, response.json()) + raise HttpResponseError(response=response, model=error) + + if _stream: + deserialized = response.iter_bytes() + else: + deserialized = _deserialize(List[_models.LargePersonGroupPerson], response.json()) + + if cls: + return cls(pipeline_response, deserialized, {}) # type: ignore + + return deserialized # type: ignore + + @overload + def add_face_from_url( + self, + large_person_group_id: str, + person_id: str, + body: JSON, + *, + target_face: Optional[List[int]] = None, + detection_model: Optional[Union[str, _models.FaceDetectionModel]] = None, + user_data: Optional[str] = None, + content_type: str = "application/json", + **kwargs: Any, + ) -> _models.AddFaceResult: + """Add a face to a person into a Large Person Group for face identification or verification. + + Please refer to + https://learn.microsoft.com/rest/api/face/person-group-operations/add-large-person-group-person-face-from-url + for more details. + + :param large_person_group_id: ID of the container. Required. + :type large_person_group_id: str + :param person_id: ID of the person. Required. + :type person_id: str + :param body: Required. + :type body: JSON + :keyword target_face: A face rectangle to specify the target face to be added to a person, in + the format of 'targetFace=left,top,width,height'. Default value is None. + :paramtype target_face: list[int] + :keyword detection_model: The 'detectionModel' associated with the detected faceIds. Supported + 'detectionModel' values include 'detection_01', 'detection_02' and 'detection_03'. The default + value is 'detection_01'. Known values are: "detection_01", "detection_02", and "detection_03". + Default value is None. + :paramtype detection_model: str or ~azure.ai.vision.face.models.FaceDetectionModel + :keyword user_data: User-provided data attached to the face. The size limit is 1K. Default + value is None. 
+ :paramtype user_data: str + :keyword content_type: Body Parameter content-type. Content type parameter for JSON body. + Default value is "application/json". + :paramtype content_type: str + :return: AddFaceResult. The AddFaceResult is compatible with MutableMapping + :rtype: ~azure.ai.vision.face.models.AddFaceResult + :raises ~azure.core.exceptions.HttpResponseError: + """ + + @overload + def add_face_from_url( + self, + large_person_group_id: str, + person_id: str, + *, + url: str, + target_face: Optional[List[int]] = None, + detection_model: Optional[Union[str, _models.FaceDetectionModel]] = None, + user_data: Optional[str] = None, + content_type: str = "application/json", + **kwargs: Any, + ) -> _models.AddFaceResult: + """Add a face to a person into a Large Person Group for face identification or verification. + + Please refer to + https://learn.microsoft.com/rest/api/face/person-group-operations/add-large-person-group-person-face-from-url + for more details. + + :param large_person_group_id: ID of the container. Required. + :type large_person_group_id: str + :param person_id: ID of the person. Required. + :type person_id: str + :keyword url: URL of input image. Required. + :paramtype url: str + :keyword target_face: A face rectangle to specify the target face to be added to a person, in + the format of 'targetFace=left,top,width,height'. Default value is None. + :paramtype target_face: list[int] + :keyword detection_model: The 'detectionModel' associated with the detected faceIds. Supported + 'detectionModel' values include 'detection_01', 'detection_02' and 'detection_03'. The default + value is 'detection_01'. Known values are: "detection_01", "detection_02", and "detection_03". + Default value is None. + :paramtype detection_model: str or ~azure.ai.vision.face.models.FaceDetectionModel + :keyword user_data: User-provided data attached to the face. The size limit is 1K. Default + value is None. + :paramtype user_data: str + :keyword content_type: Body Parameter content-type. Content type parameter for JSON body. + Default value is "application/json". + :paramtype content_type: str + :return: AddFaceResult. The AddFaceResult is compatible with MutableMapping + :rtype: ~azure.ai.vision.face.models.AddFaceResult + :raises ~azure.core.exceptions.HttpResponseError: + """ + + @overload + def add_face_from_url( + self, + large_person_group_id: str, + person_id: str, + body: IO[bytes], + *, + target_face: Optional[List[int]] = None, + detection_model: Optional[Union[str, _models.FaceDetectionModel]] = None, + user_data: Optional[str] = None, + content_type: str = "application/json", + **kwargs: Any, + ) -> _models.AddFaceResult: + """Add a face to a person into a Large Person Group for face identification or verification. + + Please refer to + https://learn.microsoft.com/rest/api/face/person-group-operations/add-large-person-group-person-face-from-url + for more details. + + :param large_person_group_id: ID of the container. Required. + :type large_person_group_id: str + :param person_id: ID of the person. Required. + :type person_id: str + :param body: Required. + :type body: IO[bytes] + :keyword target_face: A face rectangle to specify the target face to be added to a person, in + the format of 'targetFace=left,top,width,height'. Default value is None. + :paramtype target_face: list[int] + :keyword detection_model: The 'detectionModel' associated with the detected faceIds. Supported + 'detectionModel' values include 'detection_01', 'detection_02' and 'detection_03'. 
The default + value is 'detection_01'. Known values are: "detection_01", "detection_02", and "detection_03". + Default value is None. + :paramtype detection_model: str or ~azure.ai.vision.face.models.FaceDetectionModel + :keyword user_data: User-provided data attached to the face. The size limit is 1K. Default + value is None. + :paramtype user_data: str + :keyword content_type: Body Parameter content-type. Content type parameter for binary body. + Default value is "application/json". + :paramtype content_type: str + :return: AddFaceResult. The AddFaceResult is compatible with MutableMapping + :rtype: ~azure.ai.vision.face.models.AddFaceResult + :raises ~azure.core.exceptions.HttpResponseError: + """ + + @distributed_trace + def add_face_from_url( + self, + large_person_group_id: str, + person_id: str, + body: Union[JSON, IO[bytes]] = _Unset, + *, + url: str = _Unset, + target_face: Optional[List[int]] = None, + detection_model: Optional[Union[str, _models.FaceDetectionModel]] = None, + user_data: Optional[str] = None, + **kwargs: Any, + ) -> _models.AddFaceResult: + """Add a face to a person into a Large Person Group for face identification or verification. + + Please refer to + https://learn.microsoft.com/rest/api/face/person-group-operations/add-large-person-group-person-face-from-url + for more details. + + :param large_person_group_id: ID of the container. Required. + :type large_person_group_id: str + :param person_id: ID of the person. Required. + :type person_id: str + :param body: Is either a JSON type or a IO[bytes] type. Required. + :type body: JSON or IO[bytes] + :keyword url: URL of input image. Required. + :paramtype url: str + :keyword target_face: A face rectangle to specify the target face to be added to a person, in + the format of 'targetFace=left,top,width,height'. Default value is None. + :paramtype target_face: list[int] + :keyword detection_model: The 'detectionModel' associated with the detected faceIds. Supported + 'detectionModel' values include 'detection_01', 'detection_02' and 'detection_03'. The default + value is 'detection_01'. Known values are: "detection_01", "detection_02", and "detection_03". + Default value is None. + :paramtype detection_model: str or ~azure.ai.vision.face.models.FaceDetectionModel + :keyword user_data: User-provided data attached to the face. The size limit is 1K. Default + value is None. + :paramtype user_data: str + :return: AddFaceResult. 
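+
+        For example (an illustrative sketch; the client variable, group id,
+        person id and image URL are placeholders)::
+
+            face = admin_client.large_person_group.add_face_from_url(
+                "my-lpg", person_id,
+                url="https://example.com/face.jpg",
+                detection_model=FaceDetectionModel.DETECTION03)
+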
The AddFaceResult is compatible with MutableMapping + :rtype: ~azure.ai.vision.face.models.AddFaceResult + :raises ~azure.core.exceptions.HttpResponseError: + """ + error_map: MutableMapping = { + 401: ClientAuthenticationError, + 404: ResourceNotFoundError, + 409: ResourceExistsError, + 304: ResourceNotModifiedError, + } + error_map.update(kwargs.pop("error_map", {}) or {}) + + _headers = case_insensitive_dict(kwargs.pop("headers", {}) or {}) + _params = kwargs.pop("params", {}) or {} + + content_type: Optional[str] = kwargs.pop("content_type", _headers.pop("Content-Type", None)) + cls: ClsType[_models.AddFaceResult] = kwargs.pop("cls", None) + + if body is _Unset: + if url is _Unset: + raise TypeError("missing required argument: url") + body = {"url": url} + body = {k: v for k, v in body.items() if v is not None} + content_type = content_type or "application/json" + _content = None + if isinstance(body, (IOBase, bytes)): + _content = body + else: + _content = json.dumps(body, cls=SdkJSONEncoder, exclude_readonly=True) # type: ignore + + _request = build_large_person_group_add_face_from_url_request( + large_person_group_id=large_person_group_id, + person_id=person_id, + target_face=target_face, + detection_model=detection_model, + user_data=user_data, + content_type=content_type, + content=_content, + headers=_headers, + params=_params, + ) + path_format_arguments = { + "endpoint": self._serialize.url("self._config.endpoint", self._config.endpoint, "str", skip_quote=True), + "apiVersion": self._serialize.url("self._config.api_version", self._config.api_version, "str"), + } + _request.url = self._client.format_url(_request.url, **path_format_arguments) + + _stream = kwargs.pop("stream", False) + pipeline_response: PipelineResponse = self._client._pipeline.run( # pylint: disable=protected-access + _request, stream=_stream, **kwargs + ) + + response = pipeline_response.http_response + + if response.status_code not in [200]: + if _stream: + try: + response.read() # Load the body in memory and close the socket + except (StreamConsumedError, StreamClosedError): + pass + map_error(status_code=response.status_code, response=response, error_map=error_map) + error = _deserialize(_models.FaceErrorResponse, response.json()) + raise HttpResponseError(response=response, model=error) + + if _stream: + deserialized = response.iter_bytes() + else: + deserialized = _deserialize(_models.AddFaceResult, response.json()) + + if cls: + return cls(pipeline_response, deserialized, {}) # type: ignore + + return deserialized # type: ignore + + @distributed_trace + def add_face( + self, + large_person_group_id: str, + person_id: str, + image_content: bytes, + *, + target_face: Optional[List[int]] = None, + detection_model: Optional[Union[str, _models.FaceDetectionModel]] = None, + user_data: Optional[str] = None, + **kwargs: Any, + ) -> _models.AddFaceResult: + """Add a face to a person into a Large Person Group for face identification or verification. + + Please refer to + https://learn.microsoft.com/rest/api/face/person-group-operations/add-large-person-group-person-face + for more details. + + :param large_person_group_id: ID of the container. Required. + :type large_person_group_id: str + :param person_id: ID of the person. Required. + :type person_id: str + :param image_content: The image to be analyzed. Required. + :type image_content: bytes + :keyword target_face: A face rectangle to specify the target face to be added to a person, in + the format of 'targetFace=left,top,width,height'. Default value is None. 
+ :paramtype target_face: list[int] + :keyword detection_model: The 'detectionModel' associated with the detected faceIds. Supported + 'detectionModel' values include 'detection_01', 'detection_02' and 'detection_03'. The default + value is 'detection_01'. Known values are: "detection_01", "detection_02", and "detection_03". + Default value is None. + :paramtype detection_model: str or ~azure.ai.vision.face.models.FaceDetectionModel + :keyword user_data: User-provided data attached to the face. The size limit is 1K. Default + value is None. + :paramtype user_data: str + :return: AddFaceResult. The AddFaceResult is compatible with MutableMapping + :rtype: ~azure.ai.vision.face.models.AddFaceResult + :raises ~azure.core.exceptions.HttpResponseError: + """ + error_map: MutableMapping = { + 401: ClientAuthenticationError, + 404: ResourceNotFoundError, + 409: ResourceExistsError, + 304: ResourceNotModifiedError, + } + error_map.update(kwargs.pop("error_map", {}) or {}) + + _headers = case_insensitive_dict(kwargs.pop("headers", {}) or {}) + _params = kwargs.pop("params", {}) or {} + + content_type: str = kwargs.pop("content_type", _headers.pop("content-type", "application/octet-stream")) + cls: ClsType[_models.AddFaceResult] = kwargs.pop("cls", None) + + _content = image_content + + _request = build_large_person_group_add_face_request( + large_person_group_id=large_person_group_id, + person_id=person_id, + target_face=target_face, + detection_model=detection_model, + user_data=user_data, + content_type=content_type, + content=_content, + headers=_headers, + params=_params, + ) + path_format_arguments = { + "endpoint": self._serialize.url("self._config.endpoint", self._config.endpoint, "str", skip_quote=True), + "apiVersion": self._serialize.url("self._config.api_version", self._config.api_version, "str"), + } + _request.url = self._client.format_url(_request.url, **path_format_arguments) + + _stream = kwargs.pop("stream", False) + pipeline_response: PipelineResponse = self._client._pipeline.run( # pylint: disable=protected-access + _request, stream=_stream, **kwargs + ) + + response = pipeline_response.http_response + + if response.status_code not in [200]: + if _stream: + try: + response.read() # Load the body in memory and close the socket + except (StreamConsumedError, StreamClosedError): + pass + map_error(status_code=response.status_code, response=response, error_map=error_map) + error = _deserialize(_models.FaceErrorResponse, response.json()) + raise HttpResponseError(response=response, model=error) + + if _stream: + deserialized = response.iter_bytes() + else: + deserialized = _deserialize(_models.AddFaceResult, response.json()) + + if cls: + return cls(pipeline_response, deserialized, {}) # type: ignore + + return deserialized # type: ignore + + @distributed_trace + def delete_face( # pylint: disable=inconsistent-return-statements + self, large_person_group_id: str, person_id: str, persisted_face_id: str, **kwargs: Any + ) -> None: + """Delete a face from a person in a Large Person Group by specified largePersonGroupId, personId + and persistedFaceId. + + Please refer to + https://learn.microsoft.com/rest/api/face/person-group-operations/delete-large-person-group-person-face + for more details. + + :param large_person_group_id: ID of the container. Required. + :type large_person_group_id: str + :param person_id: ID of the person. Required. + :type person_id: str + :param persisted_face_id: Face ID of the face. Required. 
+ :type persisted_face_id: str + :return: None + :rtype: None + :raises ~azure.core.exceptions.HttpResponseError: + """ + error_map: MutableMapping = { + 401: ClientAuthenticationError, + 404: ResourceNotFoundError, + 409: ResourceExistsError, + 304: ResourceNotModifiedError, + } + error_map.update(kwargs.pop("error_map", {}) or {}) + + _headers = kwargs.pop("headers", {}) or {} + _params = kwargs.pop("params", {}) or {} + + cls: ClsType[None] = kwargs.pop("cls", None) + + _request = build_large_person_group_delete_face_request( + large_person_group_id=large_person_group_id, + person_id=person_id, + persisted_face_id=persisted_face_id, + headers=_headers, + params=_params, + ) + path_format_arguments = { + "endpoint": self._serialize.url("self._config.endpoint", self._config.endpoint, "str", skip_quote=True), + "apiVersion": self._serialize.url("self._config.api_version", self._config.api_version, "str"), + } + _request.url = self._client.format_url(_request.url, **path_format_arguments) + + _stream = False + pipeline_response: PipelineResponse = self._client._pipeline.run( # pylint: disable=protected-access + _request, stream=_stream, **kwargs + ) + + response = pipeline_response.http_response + + if response.status_code not in [200]: + map_error(status_code=response.status_code, response=response, error_map=error_map) + error = _deserialize(_models.FaceErrorResponse, response.json()) + raise HttpResponseError(response=response, model=error) + + if cls: + return cls(pipeline_response, None, {}) # type: ignore + + @distributed_trace + def get_face( + self, large_person_group_id: str, person_id: str, persisted_face_id: str, **kwargs: Any + ) -> _models.LargePersonGroupPersonFace: + """Please refer to + https://learn.microsoft.com/rest/api/face/person-group-operations/get-large-person-group-person-face + for more details. + + :param large_person_group_id: ID of the container. Required. + :type large_person_group_id: str + :param person_id: ID of the person. Required. + :type person_id: str + :param persisted_face_id: Face ID of the face. Required. + :type persisted_face_id: str + :return: LargePersonGroupPersonFace. 
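+
+        For example (an illustrative sketch; the client variable, group id,
+        person id and persisted face id are invented)::
+
+            face = admin_client.large_person_group.get_face(
+                "my-lpg", person_id, persisted_face_id)
+            print(face.persisted_face_id, face.user_data)
+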
The LargePersonGroupPersonFace is compatible with + MutableMapping + :rtype: ~azure.ai.vision.face.models.LargePersonGroupPersonFace + :raises ~azure.core.exceptions.HttpResponseError: + """ + error_map: MutableMapping = { + 401: ClientAuthenticationError, + 404: ResourceNotFoundError, + 409: ResourceExistsError, + 304: ResourceNotModifiedError, + } + error_map.update(kwargs.pop("error_map", {}) or {}) + + _headers = kwargs.pop("headers", {}) or {} + _params = kwargs.pop("params", {}) or {} + + cls: ClsType[_models.LargePersonGroupPersonFace] = kwargs.pop("cls", None) + + _request = build_large_person_group_get_face_request( + large_person_group_id=large_person_group_id, + person_id=person_id, + persisted_face_id=persisted_face_id, + headers=_headers, + params=_params, + ) + path_format_arguments = { + "endpoint": self._serialize.url("self._config.endpoint", self._config.endpoint, "str", skip_quote=True), + "apiVersion": self._serialize.url("self._config.api_version", self._config.api_version, "str"), + } + _request.url = self._client.format_url(_request.url, **path_format_arguments) + + _stream = kwargs.pop("stream", False) + pipeline_response: PipelineResponse = self._client._pipeline.run( # pylint: disable=protected-access + _request, stream=_stream, **kwargs + ) + + response = pipeline_response.http_response + + if response.status_code not in [200]: + if _stream: + try: + response.read() # Load the body in memory and close the socket + except (StreamConsumedError, StreamClosedError): + pass + map_error(status_code=response.status_code, response=response, error_map=error_map) + error = _deserialize(_models.FaceErrorResponse, response.json()) + raise HttpResponseError(response=response, model=error) + + if _stream: + deserialized = response.iter_bytes() + else: + deserialized = _deserialize(_models.LargePersonGroupPersonFace, response.json()) + + if cls: + return cls(pipeline_response, deserialized, {}) # type: ignore + + return deserialized # type: ignore + + @overload + def update_face( + self, + large_person_group_id: str, + person_id: str, + persisted_face_id: str, + body: JSON, + *, + content_type: str = "application/json", + **kwargs: Any, + ) -> None: + """Please refer to + https://learn.microsoft.com/rest/api/face/person-group-operations/update-large-person-group-person-face + for more details. + + :param large_person_group_id: ID of the container. Required. + :type large_person_group_id: str + :param person_id: ID of the person. Required. + :type person_id: str + :param persisted_face_id: Face ID of the face. Required. + :type persisted_face_id: str + :param body: Required. + :type body: JSON + :keyword content_type: Body Parameter content-type. Content type parameter for JSON body. + Default value is "application/json". + :paramtype content_type: str + :return: None + :rtype: None + :raises ~azure.core.exceptions.HttpResponseError: + """ + + @overload + def update_face( + self, + large_person_group_id: str, + person_id: str, + persisted_face_id: str, + *, + content_type: str = "application/json", + user_data: Optional[str] = None, + **kwargs: Any, + ) -> None: + """Please refer to + https://learn.microsoft.com/rest/api/face/person-group-operations/update-large-person-group-person-face + for more details. + + :param large_person_group_id: ID of the container. Required. + :type large_person_group_id: str + :param person_id: ID of the person. Required. + :type person_id: str + :param persisted_face_id: Face ID of the face. Required. 
+ :type persisted_face_id: str + :keyword content_type: Body Parameter content-type. Content type parameter for JSON body. + Default value is "application/json". + :paramtype content_type: str + :keyword user_data: User-provided data attached to the face. The length limit is 1K. Default + value is None. + :paramtype user_data: str + :return: None + :rtype: None + :raises ~azure.core.exceptions.HttpResponseError: + """ + + @overload + def update_face( + self, + large_person_group_id: str, + person_id: str, + persisted_face_id: str, + body: IO[bytes], + *, + content_type: str = "application/json", + **kwargs: Any, + ) -> None: + """Please refer to + https://learn.microsoft.com/rest/api/face/person-group-operations/update-large-person-group-person-face + for more details. + + :param large_person_group_id: ID of the container. Required. + :type large_person_group_id: str + :param person_id: ID of the person. Required. + :type person_id: str + :param persisted_face_id: Face ID of the face. Required. + :type persisted_face_id: str + :param body: Required. + :type body: IO[bytes] + :keyword content_type: Body Parameter content-type. Content type parameter for binary body. + Default value is "application/json". + :paramtype content_type: str + :return: None + :rtype: None + :raises ~azure.core.exceptions.HttpResponseError: + """ + + @distributed_trace + def update_face( # pylint: disable=inconsistent-return-statements + self, + large_person_group_id: str, + person_id: str, + persisted_face_id: str, + body: Union[JSON, IO[bytes]] = _Unset, + *, + user_data: Optional[str] = None, + **kwargs: Any, + ) -> None: + """Please refer to + https://learn.microsoft.com/rest/api/face/person-group-operations/update-large-person-group-person-face + for more details. + + :param large_person_group_id: ID of the container. Required. + :type large_person_group_id: str + :param person_id: ID of the person. Required. + :type person_id: str + :param persisted_face_id: Face ID of the face. Required. + :type persisted_face_id: str + :param body: Is either a JSON type or a IO[bytes] type. Required. + :type body: JSON or IO[bytes] + :keyword user_data: User-provided data attached to the face. The length limit is 1K. Default + value is None. 
+ :paramtype user_data: str + :return: None + :rtype: None + :raises ~azure.core.exceptions.HttpResponseError: + """ + error_map: MutableMapping = { + 401: ClientAuthenticationError, + 404: ResourceNotFoundError, + 409: ResourceExistsError, + 304: ResourceNotModifiedError, + } + error_map.update(kwargs.pop("error_map", {}) or {}) + + _headers = case_insensitive_dict(kwargs.pop("headers", {}) or {}) + _params = kwargs.pop("params", {}) or {} + + content_type: Optional[str] = kwargs.pop("content_type", _headers.pop("Content-Type", None)) + cls: ClsType[None] = kwargs.pop("cls", None) + + if body is _Unset: + body = {"userData": user_data} + body = {k: v for k, v in body.items() if v is not None} + content_type = content_type or "application/json" + _content = None + if isinstance(body, (IOBase, bytes)): + _content = body + else: + _content = json.dumps(body, cls=SdkJSONEncoder, exclude_readonly=True) # type: ignore + + _request = build_large_person_group_update_face_request( + large_person_group_id=large_person_group_id, + person_id=person_id, + persisted_face_id=persisted_face_id, + content_type=content_type, + content=_content, + headers=_headers, + params=_params, + ) + path_format_arguments = { + "endpoint": self._serialize.url("self._config.endpoint", self._config.endpoint, "str", skip_quote=True), + "apiVersion": self._serialize.url("self._config.api_version", self._config.api_version, "str"), + } + _request.url = self._client.format_url(_request.url, **path_format_arguments) + + _stream = False + pipeline_response: PipelineResponse = self._client._pipeline.run( # pylint: disable=protected-access + _request, stream=_stream, **kwargs + ) + + response = pipeline_response.http_response + + if response.status_code not in [200]: + map_error(status_code=response.status_code, response=response, error_map=error_map) + error = _deserialize(_models.FaceErrorResponse, response.json()) + raise HttpResponseError(response=response, model=error) + + if cls: + return cls(pipeline_response, None, {}) # type: ignore + + +class FaceClientOperationsMixin(FaceClientMixinABC): + + @overload + def _detect_from_url( + self, + body: JSON, + *, + content_type: str = "application/json", + detection_model: Optional[Union[str, _models.FaceDetectionModel]] = None, + recognition_model: Optional[Union[str, _models.FaceRecognitionModel]] = None, + return_face_id: Optional[bool] = None, + return_face_attributes: Optional[List[Union[str, _models.FaceAttributeType]]] = None, + return_face_landmarks: Optional[bool] = None, + return_recognition_model: Optional[bool] = None, + face_id_time_to_live: Optional[int] = None, + **kwargs: Any, + ) -> List[_models.FaceDetectionResult]: ... + @overload + def _detect_from_url( + self, + *, + url: str, + content_type: str = "application/json", + detection_model: Optional[Union[str, _models.FaceDetectionModel]] = None, + recognition_model: Optional[Union[str, _models.FaceRecognitionModel]] = None, + return_face_id: Optional[bool] = None, + return_face_attributes: Optional[List[Union[str, _models.FaceAttributeType]]] = None, + return_face_landmarks: Optional[bool] = None, + return_recognition_model: Optional[bool] = None, + face_id_time_to_live: Optional[int] = None, + **kwargs: Any, + ) -> List[_models.FaceDetectionResult]: ... 
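+    # A minimal usage sketch for URL-based detection (an assumption-labelled
+    # illustration: it presumes the public ``detect_from_url`` wrapper that
+    # ``azure.ai.vision.face.FaceClient`` builds over this mixin, and the image
+    # URL is a placeholder):
+    #
+    #     results = face_client.detect_from_url(
+    #         url="https://example.com/photo.jpg",
+    #         detection_model=FaceDetectionModel.DETECTION03,
+    #         recognition_model=FaceRecognitionModel.RECOGNITION04,
+    #         return_face_id=True,
+    #     )
+    #     for face in results:
+    #         print(face.face_id, face.face_rectangle)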
+ @overload + def _detect_from_url( + self, + body: IO[bytes], + *, + content_type: str = "application/json", + detection_model: Optional[Union[str, _models.FaceDetectionModel]] = None, + recognition_model: Optional[Union[str, _models.FaceRecognitionModel]] = None, + return_face_id: Optional[bool] = None, + return_face_attributes: Optional[List[Union[str, _models.FaceAttributeType]]] = None, + return_face_landmarks: Optional[bool] = None, + return_recognition_model: Optional[bool] = None, + face_id_time_to_live: Optional[int] = None, + **kwargs: Any, + ) -> List[_models.FaceDetectionResult]: ... + + @distributed_trace + def _detect_from_url( + self, + body: Union[JSON, IO[bytes]] = _Unset, + *, + url: str = _Unset, + detection_model: Optional[Union[str, _models.FaceDetectionModel]] = None, + recognition_model: Optional[Union[str, _models.FaceRecognitionModel]] = None, + return_face_id: Optional[bool] = None, + return_face_attributes: Optional[List[Union[str, _models.FaceAttributeType]]] = None, + return_face_landmarks: Optional[bool] = None, + return_recognition_model: Optional[bool] = None, + face_id_time_to_live: Optional[int] = None, + **kwargs: Any, + ) -> List[_models.FaceDetectionResult]: + """Detect human faces in an image, return face rectangles, and optionally with faceIds, landmarks, + and attributes. + + Please refer to + https://learn.microsoft.com/rest/api/face/face-detection-operations/detect-from-url for more + details. + + :param body: Is either a JSON type or a IO[bytes] type. Required. + :type body: JSON or IO[bytes] + :keyword url: URL of input image. Required. + :paramtype url: str + :keyword detection_model: The 'detectionModel' associated with the detected faceIds. Supported + 'detectionModel' values include 'detection_01', 'detection_02' and 'detection_03'. The default + value is 'detection_01'. 'detection_03' is recommended since its accuracy is improved on + smaller faces (64x64 pixels) and rotated face orientations. Known values are: "detection_01", + "detection_02", and "detection_03". Default value is None. + :paramtype detection_model: str or ~azure.ai.vision.face.models.FaceDetectionModel + :keyword recognition_model: The 'recognitionModel' associated with the detected faceIds. + Supported 'recognitionModel' values include 'recognition_01', 'recognition_02', + 'recognition_03' or 'recognition_04'. The default value is 'recognition_01'. 'recognition_04' + is recommended since its accuracy is improved on faces wearing masks compared with + 'recognition_03', and its overall accuracy is improved compared with 'recognition_01' and + 'recognition_02'. Known values are: "recognition_01", "recognition_02", "recognition_03", and + "recognition_04". Default value is None. + :paramtype recognition_model: str or ~azure.ai.vision.face.models.FaceRecognitionModel + :keyword return_face_id: Return faceIds of the detected faces or not. The default value is + true. Default value is None. + :paramtype return_face_id: bool + :keyword return_face_attributes: Analyze and return the one or more specified face attributes + in the comma-separated string like 'returnFaceAttributes=headPose,glasses'. Face attribute + analysis has additional computational and time cost. Default value is None. + :paramtype return_face_attributes: list[str or ~azure.ai.vision.face.models.FaceAttributeType] + :keyword return_face_landmarks: Return face landmarks of the detected faces or not. The default + value is false. Default value is None. 
+ :paramtype return_face_landmarks: bool + :keyword return_recognition_model: Return 'recognitionModel' or not. The default value is + false. This is only applicable when returnFaceId = true. Default value is None. + :paramtype return_recognition_model: bool + :keyword face_id_time_to_live: The number of seconds for the face ID being cached. Supported + range from 60 seconds up to 86400 seconds. The default value is 86400 (24 hours). Default value + is None. + :paramtype face_id_time_to_live: int + :return: list of FaceDetectionResult + :rtype: list[~azure.ai.vision.face.models.FaceDetectionResult] + :raises ~azure.core.exceptions.HttpResponseError: + """ + error_map: MutableMapping = { + 401: ClientAuthenticationError, + 404: ResourceNotFoundError, + 409: ResourceExistsError, + 304: ResourceNotModifiedError, + } + error_map.update(kwargs.pop("error_map", {}) or {}) + + _headers = case_insensitive_dict(kwargs.pop("headers", {}) or {}) + _params = kwargs.pop("params", {}) or {} + + content_type: Optional[str] = kwargs.pop("content_type", _headers.pop("content-type", None)) + cls: ClsType[List[_models.FaceDetectionResult]] = kwargs.pop("cls", None) + + if body is _Unset: + if url is _Unset: + raise TypeError("missing required argument: url") + body = {"url": url} + body = {k: v for k, v in body.items() if v is not None} + content_type = content_type or "application/json" + _content = None + if isinstance(body, (IOBase, bytes)): + _content = body + else: + _content = json.dumps(body, cls=SdkJSONEncoder, exclude_readonly=True) # type: ignore + + _request = build_face_detect_from_url_request( + detection_model=detection_model, + recognition_model=recognition_model, + return_face_id=return_face_id, + return_face_attributes=return_face_attributes, + return_face_landmarks=return_face_landmarks, + return_recognition_model=return_recognition_model, + face_id_time_to_live=face_id_time_to_live, + content_type=content_type, + content=_content, + headers=_headers, + params=_params, + ) + path_format_arguments = { + "endpoint": self._serialize.url("self._config.endpoint", self._config.endpoint, "str", skip_quote=True), + "apiVersion": self._serialize.url("self._config.api_version", self._config.api_version, "str"), + } + _request.url = self._client.format_url(_request.url, **path_format_arguments) + + _stream = kwargs.pop("stream", False) + pipeline_response: PipelineResponse = self._client._pipeline.run( # pylint: disable=protected-access + _request, stream=_stream, **kwargs + ) + + response = pipeline_response.http_response + + if response.status_code not in [200]: + if _stream: + try: + response.read() # Load the body in memory and close the socket + except (StreamConsumedError, StreamClosedError): + pass + map_error(status_code=response.status_code, response=response, error_map=error_map) + error = _deserialize(_models.FaceErrorResponse, response.json()) + raise HttpResponseError(response=response, model=error) + + if _stream: + deserialized = response.iter_bytes() + else: + deserialized = _deserialize(List[_models.FaceDetectionResult], response.json()) + + if cls: + return cls(pipeline_response, deserialized, {}) # type: ignore + + return deserialized # type: ignore + + @distributed_trace + def _detect( + self, + image_content: bytes, + *, + detection_model: Optional[Union[str, _models.FaceDetectionModel]] = None, + recognition_model: Optional[Union[str, _models.FaceRecognitionModel]] = None, + return_face_id: Optional[bool] = None, + return_face_attributes: Optional[List[Union[str, 
_models.FaceAttributeType]]] = None, + return_face_landmarks: Optional[bool] = None, + return_recognition_model: Optional[bool] = None, + face_id_time_to_live: Optional[int] = None, + **kwargs: Any, + ) -> List[_models.FaceDetectionResult]: + """Detect human faces in an image, return face rectangles, and optionally with faceIds, landmarks, + and attributes. + + Please refer to https://learn.microsoft.com/rest/api/face/face-detection-operations/detect for + more details. + + :param image_content: The input image binary. Required. + :type image_content: bytes + :keyword detection_model: The 'detectionModel' associated with the detected faceIds. Supported + 'detectionModel' values include 'detection_01', 'detection_02' and 'detection_03'. The default + value is 'detection_01'. 'detection_03' is recommended since its accuracy is improved on + smaller faces (64x64 pixels) and rotated face orientations. Known values are: "detection_01", + "detection_02", and "detection_03". Default value is None. + :paramtype detection_model: str or ~azure.ai.vision.face.models.FaceDetectionModel + :keyword recognition_model: The 'recognitionModel' associated with the detected faceIds. + Supported 'recognitionModel' values include 'recognition_01', 'recognition_02', + 'recognition_03' or 'recognition_04'. The default value is 'recognition_01'. 'recognition_04' + is recommended since its accuracy is improved on faces wearing masks compared with + 'recognition_03', and its overall accuracy is improved compared with 'recognition_01' and + 'recognition_02'. Known values are: "recognition_01", "recognition_02", "recognition_03", and + "recognition_04". Default value is None. + :paramtype recognition_model: str or ~azure.ai.vision.face.models.FaceRecognitionModel + :keyword return_face_id: Return faceIds of the detected faces or not. The default value is + true. Default value is None. + :paramtype return_face_id: bool + :keyword return_face_attributes: Analyze and return the one or more specified face attributes + in the comma-separated string like 'returnFaceAttributes=headPose,glasses'. Face attribute + analysis has additional computational and time cost. Default value is None. + :paramtype return_face_attributes: list[str or ~azure.ai.vision.face.models.FaceAttributeType] + :keyword return_face_landmarks: Return face landmarks of the detected faces or not. The default + value is false. Default value is None. + :paramtype return_face_landmarks: bool + :keyword return_recognition_model: Return 'recognitionModel' or not. The default value is + false. This is only applicable when returnFaceId = true. Default value is None. + :paramtype return_recognition_model: bool + :keyword face_id_time_to_live: The number of seconds for the face ID being cached. Supported + range from 60 seconds up to 86400 seconds. The default value is 86400 (24 hours). Default value + is None. 
+ :paramtype face_id_time_to_live: int + :return: list of FaceDetectionResult + :rtype: list[~azure.ai.vision.face.models.FaceDetectionResult] + :raises ~azure.core.exceptions.HttpResponseError: + """ + error_map: MutableMapping = { + 401: ClientAuthenticationError, + 404: ResourceNotFoundError, + 409: ResourceExistsError, + 304: ResourceNotModifiedError, + } + error_map.update(kwargs.pop("error_map", {}) or {}) + + _headers = case_insensitive_dict(kwargs.pop("headers", {}) or {}) + _params = kwargs.pop("params", {}) or {} + + content_type: str = kwargs.pop("content_type", _headers.pop("content-type", "application/octet-stream")) + cls: ClsType[List[_models.FaceDetectionResult]] = kwargs.pop("cls", None) + + _content = image_content + + _request = build_face_detect_request( + detection_model=detection_model, + recognition_model=recognition_model, + return_face_id=return_face_id, + return_face_attributes=return_face_attributes, + return_face_landmarks=return_face_landmarks, + return_recognition_model=return_recognition_model, + face_id_time_to_live=face_id_time_to_live, + content_type=content_type, + content=_content, + headers=_headers, + params=_params, + ) + path_format_arguments = { + "endpoint": self._serialize.url("self._config.endpoint", self._config.endpoint, "str", skip_quote=True), + "apiVersion": self._serialize.url("self._config.api_version", self._config.api_version, "str"), + } + _request.url = self._client.format_url(_request.url, **path_format_arguments) + + _stream = kwargs.pop("stream", False) + pipeline_response: PipelineResponse = self._client._pipeline.run( # pylint: disable=protected-access + _request, stream=_stream, **kwargs + ) + + response = pipeline_response.http_response + + if response.status_code not in [200]: + if _stream: + try: + response.read() # Load the body in memory and close the socket + except (StreamConsumedError, StreamClosedError): + pass + map_error(status_code=response.status_code, response=response, error_map=error_map) + error = _deserialize(_models.FaceErrorResponse, response.json()) + raise HttpResponseError(response=response, model=error) + + if _stream: + deserialized = response.iter_bytes() + else: + deserialized = _deserialize(List[_models.FaceDetectionResult], response.json()) + + if cls: + return cls(pipeline_response, deserialized, {}) # type: ignore + + return deserialized # type: ignore + + @overload + def find_similar( + self, body: JSON, *, content_type: str = "application/json", **kwargs: Any + ) -> List[_models.FaceFindSimilarResult]: + """Given query face's faceId, to search the similar-looking faces from a faceId array. A faceId + array contains the faces created by Detect. + + Please refer to + https://learn.microsoft.com/rest/api/face/face-recognition-operations/find-similar for more + details. + + :param body: Required. + :type body: JSON + :keyword content_type: Body Parameter content-type. Content type parameter for JSON body. + Default value is "application/json". 
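The `_detect` implementation above posts raw image bytes as `application/octet-stream`, while the URL variant whose tail appears at the top of this hunk sends a JSON `{"url": ...}` body. A minimal sketch of driving detection to obtain the transient faceIds that the recognition operations below consume; the endpoint/key values, the sample image URL, and the public `detect_from_url` wrapper name are assumptions (this generated module only shows the private dispatchers):

```python
# Sketch: obtain transient faceIds for the recognition calls below.
# Endpoint, key, and image URL are placeholders.
from azure.core.credentials import AzureKeyCredential
from azure.ai.vision.face import FaceClient
from azure.ai.vision.face.models import FaceDetectionModel, FaceRecognitionModel

endpoint = "https://<my-resource>.cognitiveservices.azure.com"  # placeholder
key = "<my-key>"  # placeholder

face_client = FaceClient(endpoint=endpoint, credential=AzureKeyCredential(key))
faces = face_client.detect_from_url(
    url="https://<my-storage>/sample.jpg",  # placeholder image URL
    detection_model=FaceDetectionModel.DETECTION03,
    recognition_model=FaceRecognitionModel.RECOGNITION04,
    return_face_id=True,  # faceIds expire after face_id_time_to_live (default 86400s)
)
query_id = faces[0].face_id
candidate_ids = [f.face_id for f in faces[1:]]
```

The returned faceIds are cached server-side for `face_id_time_to_live` seconds and are reused by the sketches that follow.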
+ :paramtype content_type: str + :return: list of FaceFindSimilarResult + :rtype: list[~azure.ai.vision.face.models.FaceFindSimilarResult] + :raises ~azure.core.exceptions.HttpResponseError: + """ + + @overload + def find_similar( + self, + *, + face_id: str, + face_ids: List[str], + content_type: str = "application/json", + max_num_of_candidates_returned: Optional[int] = None, + mode: Optional[Union[str, _models.FindSimilarMatchMode]] = None, + **kwargs: Any, + ) -> List[_models.FaceFindSimilarResult]: + """Given query face's faceId, to search the similar-looking faces from a faceId array. A faceId + array contains the faces created by Detect. + + Please refer to + https://learn.microsoft.com/rest/api/face/face-recognition-operations/find-similar for more + details. + + :keyword face_id: faceId of the query face. User needs to call "Detect" first to get a valid + faceId. Note that this faceId is not persisted and will expire 24 hours after the detection + call. Required. + :paramtype face_id: str + :keyword face_ids: An array of candidate faceIds. All of them are created by "Detect" and the + faceIds will expire 24 hours after the detection call. The number of faceIds is limited to + 1000. Required. + :paramtype face_ids: list[str] + :keyword content_type: Body Parameter content-type. Content type parameter for JSON body. + Default value is "application/json". + :paramtype content_type: str + :keyword max_num_of_candidates_returned: The number of top similar faces returned. The valid + range is [1, 1000]. Default value is 20. Default value is None. + :paramtype max_num_of_candidates_returned: int + :keyword mode: Similar face searching mode. It can be 'matchPerson' or 'matchFace'. Default + value is 'matchPerson'. Known values are: "matchPerson" and "matchFace". Default value is None. + :paramtype mode: str or ~azure.ai.vision.face.models.FindSimilarMatchMode + :return: list of FaceFindSimilarResult + :rtype: list[~azure.ai.vision.face.models.FaceFindSimilarResult] + :raises ~azure.core.exceptions.HttpResponseError: + """ + + @overload + def find_similar( + self, body: IO[bytes], *, content_type: str = "application/json", **kwargs: Any + ) -> List[_models.FaceFindSimilarResult]: + """Given query face's faceId, to search the similar-looking faces from a faceId array. A faceId + array contains the faces created by Detect. + + Please refer to + https://learn.microsoft.com/rest/api/face/face-recognition-operations/find-similar for more + details. + + :param body: Required. + :type body: IO[bytes] + :keyword content_type: Body Parameter content-type. Content type parameter for binary body. + Default value is "application/json". + :paramtype content_type: str + :return: list of FaceFindSimilarResult + :rtype: list[~azure.ai.vision.face.models.FaceFindSimilarResult] + :raises ~azure.core.exceptions.HttpResponseError: + """ + + @distributed_trace + def find_similar( + self, + body: Union[JSON, IO[bytes]] = _Unset, + *, + face_id: str = _Unset, + face_ids: List[str] = _Unset, + max_num_of_candidates_returned: Optional[int] = None, + mode: Optional[Union[str, _models.FindSimilarMatchMode]] = None, + **kwargs: Any, + ) -> List[_models.FaceFindSimilarResult]: + """Given query face's faceId, to search the similar-looking faces from a faceId array. A faceId + array contains the faces created by Detect. + + Please refer to + https://learn.microsoft.com/rest/api/face/face-recognition-operations/find-similar for more + details. + + :param body: Is either a JSON type or a IO[bytes] type. Required. 
+ :type body: JSON or IO[bytes] + :keyword face_id: faceId of the query face. User needs to call "Detect" first to get a valid + faceId. Note that this faceId is not persisted and will expire 24 hours after the detection + call. Required. + :paramtype face_id: str + :keyword face_ids: An array of candidate faceIds. All of them are created by "Detect" and the + faceIds will expire 24 hours after the detection call. The number of faceIds is limited to + 1000. Required. + :paramtype face_ids: list[str] + :keyword max_num_of_candidates_returned: The number of top similar faces returned. The valid + range is [1, 1000]. Default value is 20. Default value is None. + :paramtype max_num_of_candidates_returned: int + :keyword mode: Similar face searching mode. It can be 'matchPerson' or 'matchFace'. Default + value is 'matchPerson'. Known values are: "matchPerson" and "matchFace". Default value is None. + :paramtype mode: str or ~azure.ai.vision.face.models.FindSimilarMatchMode + :return: list of FaceFindSimilarResult + :rtype: list[~azure.ai.vision.face.models.FaceFindSimilarResult] + :raises ~azure.core.exceptions.HttpResponseError: + """ + error_map: MutableMapping = { + 401: ClientAuthenticationError, + 404: ResourceNotFoundError, + 409: ResourceExistsError, + 304: ResourceNotModifiedError, + } + error_map.update(kwargs.pop("error_map", {}) or {}) + + _headers = case_insensitive_dict(kwargs.pop("headers", {}) or {}) + _params = kwargs.pop("params", {}) or {} + + content_type: Optional[str] = kwargs.pop("content_type", _headers.pop("Content-Type", None)) + cls: ClsType[List[_models.FaceFindSimilarResult]] = kwargs.pop("cls", None) + + if body is _Unset: + if face_id is _Unset: + raise TypeError("missing required argument: face_id") + if face_ids is _Unset: + raise TypeError("missing required argument: face_ids") + body = { + "faceId": face_id, + "faceIds": face_ids, + "maxNumOfCandidatesReturned": max_num_of_candidates_returned, + "mode": mode, + } + body = {k: v for k, v in body.items() if v is not None} + content_type = content_type or "application/json" + _content = None + if isinstance(body, (IOBase, bytes)): + _content = body + else: + _content = json.dumps(body, cls=SdkJSONEncoder, exclude_readonly=True) # type: ignore + + _request = build_face_find_similar_request( + content_type=content_type, + content=_content, + headers=_headers, + params=_params, + ) + path_format_arguments = { + "endpoint": self._serialize.url("self._config.endpoint", self._config.endpoint, "str", skip_quote=True), + "apiVersion": self._serialize.url("self._config.api_version", self._config.api_version, "str"), + } + _request.url = self._client.format_url(_request.url, **path_format_arguments) + + _stream = kwargs.pop("stream", False) + pipeline_response: PipelineResponse = self._client._pipeline.run( # pylint: disable=protected-access + _request, stream=_stream, **kwargs + ) + + response = pipeline_response.http_response + + if response.status_code not in [200]: + if _stream: + try: + response.read() # Load the body in memory and close the socket + except (StreamConsumedError, StreamClosedError): + pass + map_error(status_code=response.status_code, response=response, error_map=error_map) + error = _deserialize(_models.FaceErrorResponse, response.json()) + raise HttpResponseError(response=response, model=error) + + if _stream: + deserialized = response.iter_bytes() + else: + deserialized = _deserialize(List[_models.FaceFindSimilarResult], response.json()) + + if cls: + return cls(pipeline_response, deserialized, {}) # 
type: ignore + + return deserialized # type: ignore + + @overload + def verify_face_to_face( + self, body: JSON, *, content_type: str = "application/json", **kwargs: Any + ) -> _models.FaceVerificationResult: + """Verify whether two faces belong to a same person. + + Please refer to + https://learn.microsoft.com/rest/api/face/face-recognition-operations/verify-face-to-face for + more details. + + :param body: Required. + :type body: JSON + :keyword content_type: Body Parameter content-type. Content type parameter for JSON body. + Default value is "application/json". + :paramtype content_type: str + :return: FaceVerificationResult. The FaceVerificationResult is compatible with MutableMapping + :rtype: ~azure.ai.vision.face.models.FaceVerificationResult + :raises ~azure.core.exceptions.HttpResponseError: + """ + + @overload + def verify_face_to_face( + self, *, face_id1: str, face_id2: str, content_type: str = "application/json", **kwargs: Any + ) -> _models.FaceVerificationResult: + """Verify whether two faces belong to a same person. + + Please refer to + https://learn.microsoft.com/rest/api/face/face-recognition-operations/verify-face-to-face for + more details. + + :keyword face_id1: The faceId of one face, come from "Detect". Required. + :paramtype face_id1: str + :keyword face_id2: The faceId of another face, come from "Detect". Required. + :paramtype face_id2: str + :keyword content_type: Body Parameter content-type. Content type parameter for JSON body. + Default value is "application/json". + :paramtype content_type: str + :return: FaceVerificationResult. The FaceVerificationResult is compatible with MutableMapping + :rtype: ~azure.ai.vision.face.models.FaceVerificationResult + :raises ~azure.core.exceptions.HttpResponseError: + """ + + @overload + def verify_face_to_face( + self, body: IO[bytes], *, content_type: str = "application/json", **kwargs: Any + ) -> _models.FaceVerificationResult: + """Verify whether two faces belong to a same person. + + Please refer to + https://learn.microsoft.com/rest/api/face/face-recognition-operations/verify-face-to-face for + more details. + + :param body: Required. + :type body: IO[bytes] + :keyword content_type: Body Parameter content-type. Content type parameter for binary body. + Default value is "application/json". + :paramtype content_type: str + :return: FaceVerificationResult. The FaceVerificationResult is compatible with MutableMapping + :rtype: ~azure.ai.vision.face.models.FaceVerificationResult + :raises ~azure.core.exceptions.HttpResponseError: + """ + + @distributed_trace + def verify_face_to_face( + self, body: Union[JSON, IO[bytes]] = _Unset, *, face_id1: str = _Unset, face_id2: str = _Unset, **kwargs: Any + ) -> _models.FaceVerificationResult: + """Verify whether two faces belong to a same person. + + Please refer to + https://learn.microsoft.com/rest/api/face/face-recognition-operations/verify-face-to-face for + more details. + + :param body: Is either a JSON type or a IO[bytes] type. Required. + :type body: JSON or IO[bytes] + :keyword face_id1: The faceId of one face, come from "Detect". Required. + :paramtype face_id1: str + :keyword face_id2: The faceId of another face, come from "Detect". Required. + :paramtype face_id2: str + :return: FaceVerificationResult. 
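`find_similar`, whose implementation closes above, folds the keyword form into a JSON body (`faceId`, `faceIds`, `maxNumOfCandidatesReturned`, `mode`) before dispatch. A minimal sketch of the keyword form, continuing from the detection sketch earlier (all IDs are placeholders):

```python
# Sketch: search candidate faceIds for look-alikes of a query face
# (face_client, query_id, candidate_ids come from the detection sketch).
similar = face_client.find_similar(
    face_id=query_id,
    face_ids=candidate_ids,            # at most 1000 candidate faceIds
    max_num_of_candidates_returned=5,  # valid range [1, 1000]; service default is 20
    mode="matchPerson",                # or "matchFace"
)
for match in similar:
    print(match.face_id, match.confidence)
```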
The FaceVerificationResult is compatible with MutableMapping + :rtype: ~azure.ai.vision.face.models.FaceVerificationResult + :raises ~azure.core.exceptions.HttpResponseError: + """ + error_map: MutableMapping = { + 401: ClientAuthenticationError, + 404: ResourceNotFoundError, + 409: ResourceExistsError, + 304: ResourceNotModifiedError, + } + error_map.update(kwargs.pop("error_map", {}) or {}) + + _headers = case_insensitive_dict(kwargs.pop("headers", {}) or {}) + _params = kwargs.pop("params", {}) or {} + + content_type: Optional[str] = kwargs.pop("content_type", _headers.pop("Content-Type", None)) + cls: ClsType[_models.FaceVerificationResult] = kwargs.pop("cls", None) + + if body is _Unset: + if face_id1 is _Unset: + raise TypeError("missing required argument: face_id1") + if face_id2 is _Unset: + raise TypeError("missing required argument: face_id2") + body = {"faceId1": face_id1, "faceId2": face_id2} + body = {k: v for k, v in body.items() if v is not None} + content_type = content_type or "application/json" + _content = None + if isinstance(body, (IOBase, bytes)): + _content = body + else: + _content = json.dumps(body, cls=SdkJSONEncoder, exclude_readonly=True) # type: ignore + + _request = build_face_verify_face_to_face_request( + content_type=content_type, + content=_content, + headers=_headers, + params=_params, + ) + path_format_arguments = { + "endpoint": self._serialize.url("self._config.endpoint", self._config.endpoint, "str", skip_quote=True), + "apiVersion": self._serialize.url("self._config.api_version", self._config.api_version, "str"), + } + _request.url = self._client.format_url(_request.url, **path_format_arguments) + + _stream = kwargs.pop("stream", False) + pipeline_response: PipelineResponse = self._client._pipeline.run( # pylint: disable=protected-access + _request, stream=_stream, **kwargs + ) + + response = pipeline_response.http_response + + if response.status_code not in [200]: + if _stream: + try: + response.read() # Load the body in memory and close the socket + except (StreamConsumedError, StreamClosedError): + pass + map_error(status_code=response.status_code, response=response, error_map=error_map) + error = _deserialize(_models.FaceErrorResponse, response.json()) + raise HttpResponseError(response=response, model=error) + + if _stream: + deserialized = response.iter_bytes() + else: + deserialized = _deserialize(_models.FaceVerificationResult, response.json()) + + if cls: + return cls(pipeline_response, deserialized, {}) # type: ignore + + return deserialized # type: ignore + + @overload + def group(self, body: JSON, *, content_type: str = "application/json", **kwargs: Any) -> _models.FaceGroupingResult: + """Divide candidate faces into groups based on face similarity. + + Please refer to https://learn.microsoft.com/rest/api/face/face-recognition-operations/group for + more details. + + :param body: Required. + :type body: JSON + :keyword content_type: Body Parameter content-type. Content type parameter for JSON body. + Default value is "application/json". + :paramtype content_type: str + :return: FaceGroupingResult. The FaceGroupingResult is compatible with MutableMapping + :rtype: ~azure.ai.vision.face.models.FaceGroupingResult + :raises ~azure.core.exceptions.HttpResponseError: + """ + + @overload + def group( + self, *, face_ids: List[str], content_type: str = "application/json", **kwargs: Any + ) -> _models.FaceGroupingResult: + """Divide candidate faces into groups based on face similarity. 
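`verify_face_to_face`, completed above, performs 1:1 verification between two detected faces. A minimal sketch, continuing from the sketches above:

```python
# Sketch: check whether two transient faceIds belong to the same person.
result = face_client.verify_face_to_face(
    face_id1=query_id,
    face_id2=candidate_ids[0],
)
print(result.is_identical, result.confidence)
```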
+ + Please refer to https://learn.microsoft.com/rest/api/face/face-recognition-operations/group for + more details. + + :keyword face_ids: Array of candidate faceIds created by "Detect". The maximum is 1000 faces. + Required. + :paramtype face_ids: list[str] + :keyword content_type: Body Parameter content-type. Content type parameter for JSON body. + Default value is "application/json". + :paramtype content_type: str + :return: FaceGroupingResult. The FaceGroupingResult is compatible with MutableMapping + :rtype: ~azure.ai.vision.face.models.FaceGroupingResult + :raises ~azure.core.exceptions.HttpResponseError: + """ + + @overload + def group( + self, body: IO[bytes], *, content_type: str = "application/json", **kwargs: Any + ) -> _models.FaceGroupingResult: + """Divide candidate faces into groups based on face similarity. + + Please refer to https://learn.microsoft.com/rest/api/face/face-recognition-operations/group for + more details. + + :param body: Required. + :type body: IO[bytes] + :keyword content_type: Body Parameter content-type. Content type parameter for binary body. + Default value is "application/json". + :paramtype content_type: str + :return: FaceGroupingResult. The FaceGroupingResult is compatible with MutableMapping + :rtype: ~azure.ai.vision.face.models.FaceGroupingResult + :raises ~azure.core.exceptions.HttpResponseError: + """ + + @distributed_trace + def group( + self, body: Union[JSON, IO[bytes]] = _Unset, *, face_ids: List[str] = _Unset, **kwargs: Any + ) -> _models.FaceGroupingResult: + """Divide candidate faces into groups based on face similarity. + + Please refer to https://learn.microsoft.com/rest/api/face/face-recognition-operations/group for + more details. + + :param body: Is either a JSON type or a IO[bytes] type. Required. + :type body: JSON or IO[bytes] + :keyword face_ids: Array of candidate faceIds created by "Detect". The maximum is 1000 faces. + Required. + :paramtype face_ids: list[str] + :return: FaceGroupingResult. 
The FaceGroupingResult is compatible with MutableMapping + :rtype: ~azure.ai.vision.face.models.FaceGroupingResult + :raises ~azure.core.exceptions.HttpResponseError: + """ + error_map: MutableMapping = { + 401: ClientAuthenticationError, + 404: ResourceNotFoundError, + 409: ResourceExistsError, + 304: ResourceNotModifiedError, + } + error_map.update(kwargs.pop("error_map", {}) or {}) + + _headers = case_insensitive_dict(kwargs.pop("headers", {}) or {}) + _params = kwargs.pop("params", {}) or {} + + content_type: Optional[str] = kwargs.pop("content_type", _headers.pop("Content-Type", None)) + cls: ClsType[_models.FaceGroupingResult] = kwargs.pop("cls", None) + + if body is _Unset: + if face_ids is _Unset: + raise TypeError("missing required argument: face_ids") + body = {"faceIds": face_ids} + body = {k: v for k, v in body.items() if v is not None} + content_type = content_type or "application/json" + _content = None + if isinstance(body, (IOBase, bytes)): + _content = body + else: + _content = json.dumps(body, cls=SdkJSONEncoder, exclude_readonly=True) # type: ignore + + _request = build_face_group_request( + content_type=content_type, + content=_content, + headers=_headers, + params=_params, + ) + path_format_arguments = { + "endpoint": self._serialize.url("self._config.endpoint", self._config.endpoint, "str", skip_quote=True), + "apiVersion": self._serialize.url("self._config.api_version", self._config.api_version, "str"), + } + _request.url = self._client.format_url(_request.url, **path_format_arguments) + + _stream = kwargs.pop("stream", False) + pipeline_response: PipelineResponse = self._client._pipeline.run( # pylint: disable=protected-access + _request, stream=_stream, **kwargs + ) + + response = pipeline_response.http_response + + if response.status_code not in [200]: + if _stream: + try: + response.read() # Load the body in memory and close the socket + except (StreamConsumedError, StreamClosedError): + pass + map_error(status_code=response.status_code, response=response, error_map=error_map) + error = _deserialize(_models.FaceErrorResponse, response.json()) + raise HttpResponseError(response=response, model=error) + + if _stream: + deserialized = response.iter_bytes() + else: + deserialized = _deserialize(_models.FaceGroupingResult, response.json()) + + if cls: + return cls(pipeline_response, deserialized, {}) # type: ignore + + return deserialized # type: ignore + + @overload + def find_similar_from_large_face_list( + self, body: JSON, *, content_type: str = "application/json", **kwargs: Any + ) -> List[_models.FaceFindSimilarResult]: + """Given query face's faceId, to search the similar-looking faces from a Large Face List. A + 'largeFaceListId' is created by Create Large Face List. + + Please refer to + https://learn.microsoft.com/rest/api/face/face-recognition-operations/find-similar-from-large-face-list + for more details. + + :param body: Required. + :type body: JSON + :keyword content_type: Body Parameter content-type. Content type parameter for JSON body. + Default value is "application/json". 
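`group`, completed above, partitions a set of faceIds by similarity. A minimal sketch; the `groups`/`messy_group` attributes are assumed to mirror the REST response's `groups`/`messyGroup` fields:

```python
# Sketch: cluster candidate faceIds by similarity (max 1000 faces).
grouping = face_client.group(face_ids=candidate_ids)
for index, members in enumerate(grouping.groups):
    print(f"group {index}: {members}")
print("ungrouped:", grouping.messy_group)  # faces with no similar counterpart
```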
+ :paramtype content_type: str + :return: list of FaceFindSimilarResult + :rtype: list[~azure.ai.vision.face.models.FaceFindSimilarResult] + :raises ~azure.core.exceptions.HttpResponseError: + """ + + @overload + def find_similar_from_large_face_list( + self, + *, + face_id: str, + large_face_list_id: str, + content_type: str = "application/json", + max_num_of_candidates_returned: Optional[int] = None, + mode: Optional[Union[str, _models.FindSimilarMatchMode]] = None, + **kwargs: Any, + ) -> List[_models.FaceFindSimilarResult]: + """Given query face's faceId, to search the similar-looking faces from a Large Face List. A + 'largeFaceListId' is created by Create Large Face List. + + Please refer to + https://learn.microsoft.com/rest/api/face/face-recognition-operations/find-similar-from-large-face-list + for more details. + + :keyword face_id: faceId of the query face. User needs to call "Detect" first to get a valid + faceId. Note that this faceId is not persisted and will expire 24 hours after the detection + call. Required. + :paramtype face_id: str + :keyword large_face_list_id: An existing user-specified unique candidate Large Face List, + created in "Create Large Face List". Large Face List contains a set of persistedFaceIds which + are persisted and will never expire. Required. + :paramtype large_face_list_id: str + :keyword content_type: Body Parameter content-type. Content type parameter for JSON body. + Default value is "application/json". + :paramtype content_type: str + :keyword max_num_of_candidates_returned: The number of top similar faces returned. The valid + range is [1, 1000]. Default value is 20. Default value is None. + :paramtype max_num_of_candidates_returned: int + :keyword mode: Similar face searching mode. It can be 'matchPerson' or 'matchFace'. Default + value is 'matchPerson'. Known values are: "matchPerson" and "matchFace". Default value is None. + :paramtype mode: str or ~azure.ai.vision.face.models.FindSimilarMatchMode + :return: list of FaceFindSimilarResult + :rtype: list[~azure.ai.vision.face.models.FaceFindSimilarResult] + :raises ~azure.core.exceptions.HttpResponseError: + """ + + @overload + def find_similar_from_large_face_list( + self, body: IO[bytes], *, content_type: str = "application/json", **kwargs: Any + ) -> List[_models.FaceFindSimilarResult]: + """Given query face's faceId, to search the similar-looking faces from a Large Face List. A + 'largeFaceListId' is created by Create Large Face List. + + Please refer to + https://learn.microsoft.com/rest/api/face/face-recognition-operations/find-similar-from-large-face-list + for more details. + + :param body: Required. + :type body: IO[bytes] + :keyword content_type: Body Parameter content-type. Content type parameter for binary body. + Default value is "application/json". + :paramtype content_type: str + :return: list of FaceFindSimilarResult + :rtype: list[~azure.ai.vision.face.models.FaceFindSimilarResult] + :raises ~azure.core.exceptions.HttpResponseError: + """ + + @distributed_trace + def find_similar_from_large_face_list( + self, + body: Union[JSON, IO[bytes]] = _Unset, + *, + face_id: str = _Unset, + large_face_list_id: str = _Unset, + max_num_of_candidates_returned: Optional[int] = None, + mode: Optional[Union[str, _models.FindSimilarMatchMode]] = None, + **kwargs: Any, + ) -> List[_models.FaceFindSimilarResult]: + """Given query face's faceId, to search the similar-looking faces from a Large Face List. A + 'largeFaceListId' is created by Create Large Face List. 
+ + Please refer to + https://learn.microsoft.com/rest/api/face/face-recognition-operations/find-similar-from-large-face-list + for more details. + + :param body: Is either a JSON type or a IO[bytes] type. Required. + :type body: JSON or IO[bytes] + :keyword face_id: faceId of the query face. User needs to call "Detect" first to get a valid + faceId. Note that this faceId is not persisted and will expire 24 hours after the detection + call. Required. + :paramtype face_id: str + :keyword large_face_list_id: An existing user-specified unique candidate Large Face List, + created in "Create Large Face List". Large Face List contains a set of persistedFaceIds which + are persisted and will never expire. Required. + :paramtype large_face_list_id: str + :keyword max_num_of_candidates_returned: The number of top similar faces returned. The valid + range is [1, 1000]. Default value is 20. Default value is None. + :paramtype max_num_of_candidates_returned: int + :keyword mode: Similar face searching mode. It can be 'matchPerson' or 'matchFace'. Default + value is 'matchPerson'. Known values are: "matchPerson" and "matchFace". Default value is None. + :paramtype mode: str or ~azure.ai.vision.face.models.FindSimilarMatchMode + :return: list of FaceFindSimilarResult + :rtype: list[~azure.ai.vision.face.models.FaceFindSimilarResult] + :raises ~azure.core.exceptions.HttpResponseError: + """ + error_map: MutableMapping = { + 401: ClientAuthenticationError, + 404: ResourceNotFoundError, + 409: ResourceExistsError, + 304: ResourceNotModifiedError, + } + error_map.update(kwargs.pop("error_map", {}) or {}) + + _headers = case_insensitive_dict(kwargs.pop("headers", {}) or {}) + _params = kwargs.pop("params", {}) or {} + + content_type: Optional[str] = kwargs.pop("content_type", _headers.pop("Content-Type", None)) + cls: ClsType[List[_models.FaceFindSimilarResult]] = kwargs.pop("cls", None) + + if body is _Unset: + if face_id is _Unset: + raise TypeError("missing required argument: face_id") + if large_face_list_id is _Unset: + raise TypeError("missing required argument: large_face_list_id") + body = { + "faceId": face_id, + "largeFaceListId": large_face_list_id, + "maxNumOfCandidatesReturned": max_num_of_candidates_returned, + "mode": mode, + } + body = {k: v for k, v in body.items() if v is not None} + content_type = content_type or "application/json" + _content = None + if isinstance(body, (IOBase, bytes)): + _content = body + else: + _content = json.dumps(body, cls=SdkJSONEncoder, exclude_readonly=True) # type: ignore + + _request = build_face_find_similar_from_large_face_list_request( + content_type=content_type, + content=_content, + headers=_headers, + params=_params, + ) + path_format_arguments = { + "endpoint": self._serialize.url("self._config.endpoint", self._config.endpoint, "str", skip_quote=True), + "apiVersion": self._serialize.url("self._config.api_version", self._config.api_version, "str"), + } + _request.url = self._client.format_url(_request.url, **path_format_arguments) + + _stream = kwargs.pop("stream", False) + pipeline_response: PipelineResponse = self._client._pipeline.run( # pylint: disable=protected-access + _request, stream=_stream, **kwargs + ) + + response = pipeline_response.http_response + + if response.status_code not in [200]: + if _stream: + try: + response.read() # Load the body in memory and close the socket + except (StreamConsumedError, StreamClosedError): + pass + map_error(status_code=response.status_code, response=response, error_map=error_map) + error = 
_deserialize(_models.FaceErrorResponse, response.json()) + raise HttpResponseError(response=response, model=error) + + if _stream: + deserialized = response.iter_bytes() + else: + deserialized = _deserialize(List[_models.FaceFindSimilarResult], response.json()) + + if cls: + return cls(pipeline_response, deserialized, {}) # type: ignore + + return deserialized # type: ignore + + @overload + def identify_from_large_person_group( + self, body: JSON, *, content_type: str = "application/json", **kwargs: Any + ) -> List[_models.FaceIdentificationResult]: + """1-to-many identification to find the closest matches of the specific query person face from a + Large Person Group. + + Please refer to + https://learn.microsoft.com/rest/api/face/face-recognition-operations/identify-from-person-group + for more details. + + :param body: Required. + :type body: JSON + :keyword content_type: Body Parameter content-type. Content type parameter for JSON body. + Default value is "application/json". + :paramtype content_type: str + :return: list of FaceIdentificationResult + :rtype: list[~azure.ai.vision.face.models.FaceIdentificationResult] + :raises ~azure.core.exceptions.HttpResponseError: + """ + + @overload + def identify_from_large_person_group( + self, + *, + face_ids: List[str], + large_person_group_id: str, + content_type: str = "application/json", + max_num_of_candidates_returned: Optional[int] = None, + confidence_threshold: Optional[float] = None, + **kwargs: Any, + ) -> List[_models.FaceIdentificationResult]: + """1-to-many identification to find the closest matches of the specific query person face from a + Large Person Group. + + Please refer to + https://learn.microsoft.com/rest/api/face/face-recognition-operations/identify-from-person-group + for more details. + + :keyword face_ids: Array of query faces faceIds, created by the "Detect". Each of the faces are + identified independently. The valid number of faceIds is between [1, 10]. Required. + :paramtype face_ids: list[str] + :keyword large_person_group_id: largePersonGroupId of the target Large Person Group, created by + "Create Large Person Group". Parameter personGroupId and largePersonGroupId should not be + provided at the same time. Required. + :paramtype large_person_group_id: str + :keyword content_type: Body Parameter content-type. Content type parameter for JSON body. + Default value is "application/json". + :paramtype content_type: str + :keyword max_num_of_candidates_returned: The range of maxNumOfCandidatesReturned is between 1 + and 100. Default value is 10. Default value is None. + :paramtype max_num_of_candidates_returned: int + :keyword confidence_threshold: Customized identification confidence threshold, in the range of + [0, 1]. Advanced user can tweak this value to override default internal threshold for better + precision on their scenario data. Note there is no guarantee of this threshold value working on + other data and after algorithm updates. Default value is None. + :paramtype confidence_threshold: float + :return: list of FaceIdentificationResult + :rtype: list[~azure.ai.vision.face.models.FaceIdentificationResult] + :raises ~azure.core.exceptions.HttpResponseError: + """ + + @overload + def identify_from_large_person_group( + self, body: IO[bytes], *, content_type: str = "application/json", **kwargs: Any + ) -> List[_models.FaceIdentificationResult]: + """1-to-many identification to find the closest matches of the specific query person face from a + Large Person Group. 
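`find_similar_from_large_face_list`, completed above, is the persisted counterpart of `find_similar`: candidates come from a trained Large Face List rather than a transient faceId array. A minimal sketch; `my-face-list` is a placeholder, and the list must already be created, populated, and trained through `FaceAdministrationClient`:

```python
# Sketch: search a persisted, trained Large Face List for similar faces.
similar = face_client.find_similar_from_large_face_list(
    face_id=query_id,
    large_face_list_id="my-face-list",  # placeholder; persistedFaceIds never expire
    max_num_of_candidates_returned=3,
)
for match in similar:
    print(match.persisted_face_id, match.confidence)
```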
+ + Please refer to + https://learn.microsoft.com/rest/api/face/face-recognition-operations/identify-from-person-group + for more details. + + :param body: Required. + :type body: IO[bytes] + :keyword content_type: Body Parameter content-type. Content type parameter for binary body. + Default value is "application/json". + :paramtype content_type: str + :return: list of FaceIdentificationResult + :rtype: list[~azure.ai.vision.face.models.FaceIdentificationResult] + :raises ~azure.core.exceptions.HttpResponseError: + """ + + @distributed_trace + def identify_from_large_person_group( + self, + body: Union[JSON, IO[bytes]] = _Unset, + *, + face_ids: List[str] = _Unset, + large_person_group_id: str = _Unset, + max_num_of_candidates_returned: Optional[int] = None, + confidence_threshold: Optional[float] = None, + **kwargs: Any, + ) -> List[_models.FaceIdentificationResult]: + """1-to-many identification to find the closest matches of the specific query person face from a + Large Person Group. + + Please refer to + https://learn.microsoft.com/rest/api/face/face-recognition-operations/identify-from-person-group + for more details. + + :param body: Is either a JSON type or a IO[bytes] type. Required. + :type body: JSON or IO[bytes] + :keyword face_ids: Array of query faces faceIds, created by the "Detect". Each of the faces are + identified independently. The valid number of faceIds is between [1, 10]. Required. + :paramtype face_ids: list[str] + :keyword large_person_group_id: largePersonGroupId of the target Large Person Group, created by + "Create Large Person Group". Parameter personGroupId and largePersonGroupId should not be + provided at the same time. Required. + :paramtype large_person_group_id: str + :keyword max_num_of_candidates_returned: The range of maxNumOfCandidatesReturned is between 1 + and 100. Default value is 10. Default value is None. + :paramtype max_num_of_candidates_returned: int + :keyword confidence_threshold: Customized identification confidence threshold, in the range of + [0, 1]. Advanced user can tweak this value to override default internal threshold for better + precision on their scenario data. Note there is no guarantee of this threshold value working on + other data and after algorithm updates. Default value is None. 
+ :paramtype confidence_threshold: float + :return: list of FaceIdentificationResult + :rtype: list[~azure.ai.vision.face.models.FaceIdentificationResult] + :raises ~azure.core.exceptions.HttpResponseError: + """ + error_map: MutableMapping = { + 401: ClientAuthenticationError, + 404: ResourceNotFoundError, + 409: ResourceExistsError, + 304: ResourceNotModifiedError, + } + error_map.update(kwargs.pop("error_map", {}) or {}) + + _headers = case_insensitive_dict(kwargs.pop("headers", {}) or {}) + _params = kwargs.pop("params", {}) or {} + + content_type: Optional[str] = kwargs.pop("content_type", _headers.pop("Content-Type", None)) + cls: ClsType[List[_models.FaceIdentificationResult]] = kwargs.pop("cls", None) + + if body is _Unset: + if face_ids is _Unset: + raise TypeError("missing required argument: face_ids") + if large_person_group_id is _Unset: + raise TypeError("missing required argument: large_person_group_id") + body = { + "confidenceThreshold": confidence_threshold, + "faceIds": face_ids, + "largePersonGroupId": large_person_group_id, + "maxNumOfCandidatesReturned": max_num_of_candidates_returned, + } + body = {k: v for k, v in body.items() if v is not None} + content_type = content_type or "application/json" + _content = None + if isinstance(body, (IOBase, bytes)): + _content = body + else: + _content = json.dumps(body, cls=SdkJSONEncoder, exclude_readonly=True) # type: ignore + + _request = build_face_identify_from_large_person_group_request( + content_type=content_type, + content=_content, + headers=_headers, + params=_params, + ) + path_format_arguments = { + "endpoint": self._serialize.url("self._config.endpoint", self._config.endpoint, "str", skip_quote=True), + "apiVersion": self._serialize.url("self._config.api_version", self._config.api_version, "str"), + } + _request.url = self._client.format_url(_request.url, **path_format_arguments) + + _stream = kwargs.pop("stream", False) + pipeline_response: PipelineResponse = self._client._pipeline.run( # pylint: disable=protected-access + _request, stream=_stream, **kwargs + ) + + response = pipeline_response.http_response + + if response.status_code not in [200]: + if _stream: + try: + response.read() # Load the body in memory and close the socket + except (StreamConsumedError, StreamClosedError): + pass + map_error(status_code=response.status_code, response=response, error_map=error_map) + error = _deserialize(_models.FaceErrorResponse, response.json()) + raise HttpResponseError(response=response, model=error) + + if _stream: + deserialized = response.iter_bytes() + else: + deserialized = _deserialize(List[_models.FaceIdentificationResult], response.json()) + + if cls: + return cls(pipeline_response, deserialized, {}) # type: ignore + + return deserialized # type: ignore + + @overload + def verify_from_large_person_group( + self, body: JSON, *, content_type: str = "application/json", **kwargs: Any + ) -> _models.FaceVerificationResult: + """Verify whether a face belongs to a person in a Large Person Group. + + Please refer to + https://learn.microsoft.com/rest/api/face/face-recognition-operations/verify-from-large-person-group + for more details. + + :param body: Required. + :type body: JSON + :keyword content_type: Body Parameter content-type. Content type parameter for JSON body. + Default value is "application/json". + :paramtype content_type: str + :return: FaceVerificationResult. 
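`identify_from_large_person_group`, completed above, runs 1-to-many identification for up to 10 query faces per call. A minimal sketch; `my-person-group` is a placeholder for a Large Person Group that has already been trained:

```python
# Sketch: identify each query faceId against a trained Large Person Group.
results = face_client.identify_from_large_person_group(
    face_ids=[query_id],                 # between 1 and 10 query faceIds
    large_person_group_id="my-person-group",
    max_num_of_candidates_returned=1,    # range [1, 100]; service default is 10
    confidence_threshold=0.5,            # optional override, range [0, 1]
)
for result in results:
    for candidate in result.candidates:
        print(result.face_id, "->", candidate.person_id, candidate.confidence)
```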
The FaceVerificationResult is compatible with MutableMapping + :rtype: ~azure.ai.vision.face.models.FaceVerificationResult + :raises ~azure.core.exceptions.HttpResponseError: + """ + + @overload + def verify_from_large_person_group( + self, + *, + face_id: str, + large_person_group_id: str, + person_id: str, + content_type: str = "application/json", + **kwargs: Any, + ) -> _models.FaceVerificationResult: + """Verify whether a face belongs to a person in a Large Person Group. + + Please refer to + https://learn.microsoft.com/rest/api/face/face-recognition-operations/verify-from-large-person-group + for more details. + + :keyword face_id: The faceId of the face, come from "Detect". Required. + :paramtype face_id: str + :keyword large_person_group_id: Using existing largePersonGroupId and personId for fast loading + a specified person. largePersonGroupId is created in "Create Large Person Group". Required. + :paramtype large_person_group_id: str + :keyword person_id: Specify a certain person in Large Person Group. Required. + :paramtype person_id: str + :keyword content_type: Body Parameter content-type. Content type parameter for JSON body. + Default value is "application/json". + :paramtype content_type: str + :return: FaceVerificationResult. The FaceVerificationResult is compatible with MutableMapping + :rtype: ~azure.ai.vision.face.models.FaceVerificationResult + :raises ~azure.core.exceptions.HttpResponseError: + """ + + @overload + def verify_from_large_person_group( + self, body: IO[bytes], *, content_type: str = "application/json", **kwargs: Any + ) -> _models.FaceVerificationResult: + """Verify whether a face belongs to a person in a Large Person Group. + + Please refer to + https://learn.microsoft.com/rest/api/face/face-recognition-operations/verify-from-large-person-group + for more details. + + :param body: Required. + :type body: IO[bytes] + :keyword content_type: Body Parameter content-type. Content type parameter for binary body. + Default value is "application/json". + :paramtype content_type: str + :return: FaceVerificationResult. The FaceVerificationResult is compatible with MutableMapping + :rtype: ~azure.ai.vision.face.models.FaceVerificationResult + :raises ~azure.core.exceptions.HttpResponseError: + """ + + @distributed_trace + def verify_from_large_person_group( + self, + body: Union[JSON, IO[bytes]] = _Unset, + *, + face_id: str = _Unset, + large_person_group_id: str = _Unset, + person_id: str = _Unset, + **kwargs: Any, + ) -> _models.FaceVerificationResult: + """Verify whether a face belongs to a person in a Large Person Group. + + Please refer to + https://learn.microsoft.com/rest/api/face/face-recognition-operations/verify-from-large-person-group + for more details. + + :param body: Is either a JSON type or a IO[bytes] type. Required. + :type body: JSON or IO[bytes] + :keyword face_id: The faceId of the face, come from "Detect". Required. + :paramtype face_id: str + :keyword large_person_group_id: Using existing largePersonGroupId and personId for fast loading + a specified person. largePersonGroupId is created in "Create Large Person Group". Required. + :paramtype large_person_group_id: str + :keyword person_id: Specify a certain person in Large Person Group. Required. + :paramtype person_id: str + :return: FaceVerificationResult. 
The FaceVerificationResult is compatible with MutableMapping + :rtype: ~azure.ai.vision.face.models.FaceVerificationResult + :raises ~azure.core.exceptions.HttpResponseError: + """ + error_map: MutableMapping = { + 401: ClientAuthenticationError, + 404: ResourceNotFoundError, + 409: ResourceExistsError, + 304: ResourceNotModifiedError, + } + error_map.update(kwargs.pop("error_map", {}) or {}) + + _headers = case_insensitive_dict(kwargs.pop("headers", {}) or {}) + _params = kwargs.pop("params", {}) or {} + + content_type: Optional[str] = kwargs.pop("content_type", _headers.pop("Content-Type", None)) + cls: ClsType[_models.FaceVerificationResult] = kwargs.pop("cls", None) + + if body is _Unset: + if face_id is _Unset: + raise TypeError("missing required argument: face_id") + if large_person_group_id is _Unset: + raise TypeError("missing required argument: large_person_group_id") + if person_id is _Unset: + raise TypeError("missing required argument: person_id") + body = {"faceId": face_id, "largePersonGroupId": large_person_group_id, "personId": person_id} + body = {k: v for k, v in body.items() if v is not None} + content_type = content_type or "application/json" + _content = None + if isinstance(body, (IOBase, bytes)): + _content = body + else: + _content = json.dumps(body, cls=SdkJSONEncoder, exclude_readonly=True) # type: ignore + + _request = build_face_verify_from_large_person_group_request( + content_type=content_type, + content=_content, + headers=_headers, + params=_params, + ) + path_format_arguments = { + "endpoint": self._serialize.url("self._config.endpoint", self._config.endpoint, "str", skip_quote=True), + "apiVersion": self._serialize.url("self._config.api_version", self._config.api_version, "str"), + } + _request.url = self._client.format_url(_request.url, **path_format_arguments) + + _stream = kwargs.pop("stream", False) + pipeline_response: PipelineResponse = self._client._pipeline.run( # pylint: disable=protected-access + _request, stream=_stream, **kwargs + ) + + response = pipeline_response.http_response + + if response.status_code not in [200]: + if _stream: + try: + response.read() # Load the body in memory and close the socket + except (StreamConsumedError, StreamClosedError): + pass + map_error(status_code=response.status_code, response=response, error_map=error_map) + error = _deserialize(_models.FaceErrorResponse, response.json()) + raise HttpResponseError(response=response, model=error) + + if _stream: + deserialized = response.iter_bytes() + else: + deserialized = _deserialize(_models.FaceVerificationResult, response.json()) + + if cls: + return cls(pipeline_response, deserialized, {}) # type: ignore + + return deserialized # type: ignore + + +class FaceSessionClientOperationsMixin(FaceSessionClientMixinABC): + + @overload + def create_liveness_session( + self, body: _models.CreateLivenessSessionContent, *, content_type: str = "application/json", **kwargs: Any + ) -> _models.CreateLivenessSessionResult: + """Create a new detect liveness session. + + Please refer to + https://learn.microsoft.com/rest/api/face/liveness-session-operations/create-liveness-session + for more details. + + :param body: Body parameter. Required. + :type body: ~azure.ai.vision.face.models.CreateLivenessSessionContent + :keyword content_type: Body Parameter content-type. Content type parameter for JSON body. + Default value is "application/json". + :paramtype content_type: str + :return: CreateLivenessSessionResult. 
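`verify_from_large_person_group`, completed above, verifies a detected face against one specific person in a Large Person Group. A minimal sketch; the group and person IDs are placeholders (a `person_id` is returned when the person is created through `FaceAdministrationClient`):

```python
# Sketch: 1:1 verification against a persisted person.
result = face_client.verify_from_large_person_group(
    face_id=query_id,
    large_person_group_id="my-person-group",  # placeholder
    person_id="<person-id>",                  # placeholder
)
print(result.is_identical, result.confidence)
```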
The CreateLivenessSessionResult is compatible with + MutableMapping + :rtype: ~azure.ai.vision.face.models.CreateLivenessSessionResult + :raises ~azure.core.exceptions.HttpResponseError: + """ + + @overload + def create_liveness_session( + self, body: JSON, *, content_type: str = "application/json", **kwargs: Any + ) -> _models.CreateLivenessSessionResult: + """Create a new detect liveness session. + + Please refer to + https://learn.microsoft.com/rest/api/face/liveness-session-operations/create-liveness-session + for more details. + + :param body: Body parameter. Required. + :type body: JSON + :keyword content_type: Body Parameter content-type. Content type parameter for JSON body. + Default value is "application/json". + :paramtype content_type: str + :return: CreateLivenessSessionResult. The CreateLivenessSessionResult is compatible with + MutableMapping + :rtype: ~azure.ai.vision.face.models.CreateLivenessSessionResult + :raises ~azure.core.exceptions.HttpResponseError: + """ + + @overload + def create_liveness_session( + self, body: IO[bytes], *, content_type: str = "application/json", **kwargs: Any + ) -> _models.CreateLivenessSessionResult: + """Create a new detect liveness session. + + Please refer to + https://learn.microsoft.com/rest/api/face/liveness-session-operations/create-liveness-session + for more details. + + :param body: Body parameter. Required. + :type body: IO[bytes] + :keyword content_type: Body Parameter content-type. Content type parameter for binary body. + Default value is "application/json". + :paramtype content_type: str + :return: CreateLivenessSessionResult. The CreateLivenessSessionResult is compatible with + MutableMapping + :rtype: ~azure.ai.vision.face.models.CreateLivenessSessionResult + :raises ~azure.core.exceptions.HttpResponseError: + """ + + @distributed_trace + def create_liveness_session( + self, body: Union[_models.CreateLivenessSessionContent, JSON, IO[bytes]], **kwargs: Any + ) -> _models.CreateLivenessSessionResult: + """Create a new detect liveness session. + + Please refer to + https://learn.microsoft.com/rest/api/face/liveness-session-operations/create-liveness-session + for more details. + + :param body: Body parameter. Is one of the following types: CreateLivenessSessionContent, JSON, + IO[bytes] Required. + :type body: ~azure.ai.vision.face.models.CreateLivenessSessionContent or JSON or IO[bytes] + :return: CreateLivenessSessionResult. 
The CreateLivenessSessionResult is compatible with + MutableMapping + :rtype: ~azure.ai.vision.face.models.CreateLivenessSessionResult + :raises ~azure.core.exceptions.HttpResponseError: + """ + error_map: MutableMapping = { + 401: ClientAuthenticationError, + 404: ResourceNotFoundError, + 409: ResourceExistsError, + 304: ResourceNotModifiedError, + } + error_map.update(kwargs.pop("error_map", {}) or {}) + + _headers = case_insensitive_dict(kwargs.pop("headers", {}) or {}) + _params = kwargs.pop("params", {}) or {} + + content_type: Optional[str] = kwargs.pop("content_type", _headers.pop("Content-Type", None)) + cls: ClsType[_models.CreateLivenessSessionResult] = kwargs.pop("cls", None) + + content_type = content_type or "application/json" + _content = None + if isinstance(body, (IOBase, bytes)): + _content = body + else: + _content = json.dumps(body, cls=SdkJSONEncoder, exclude_readonly=True) # type: ignore + + _request = build_face_session_create_liveness_session_request( + content_type=content_type, + content=_content, + headers=_headers, + params=_params, + ) + path_format_arguments = { + "endpoint": self._serialize.url("self._config.endpoint", self._config.endpoint, "str", skip_quote=True), + "apiVersion": self._serialize.url("self._config.api_version", self._config.api_version, "str"), + } + _request.url = self._client.format_url(_request.url, **path_format_arguments) + + _stream = kwargs.pop("stream", False) + pipeline_response: PipelineResponse = self._client._pipeline.run( # pylint: disable=protected-access + _request, stream=_stream, **kwargs + ) + + response = pipeline_response.http_response + + if response.status_code not in [200]: + if _stream: + try: + response.read() # Load the body in memory and close the socket + except (StreamConsumedError, StreamClosedError): + pass + map_error(status_code=response.status_code, response=response, error_map=error_map) + error = _deserialize(_models.FaceErrorResponse, response.json()) + raise HttpResponseError(response=response, model=error) + + if _stream: + deserialized = response.iter_bytes() + else: + deserialized = _deserialize(_models.CreateLivenessSessionResult, response.json()) + + if cls: + return cls(pipeline_response, deserialized, {}) # type: ignore + + return deserialized # type: ignore + + @distributed_trace + def delete_liveness_session( # pylint: disable=inconsistent-return-statements + self, session_id: str, **kwargs: Any + ) -> None: + """Delete all session related information for matching the specified session id. + + Please refer to + https://learn.microsoft.com/rest/api/face/liveness-session-operations/delete-liveness-session + for more details. + + :param session_id: The unique ID to reference this session. Required. 
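`create_liveness_session`, completed above, accepts a `CreateLivenessSessionContent` model (or equivalent JSON/bytes) and returns the session ID plus the auth token that the client device presents. A minimal sketch; the endpoint/key and device correlation ID are placeholders, and `"Passive"` is one of the documented operation-mode values:

```python
# Sketch: create a liveness session and hand auth_token to the client device.
from azure.core.credentials import AzureKeyCredential
from azure.ai.vision.face import FaceSessionClient
from azure.ai.vision.face.models import CreateLivenessSessionContent

endpoint = "https://<my-resource>.cognitiveservices.azure.com"  # placeholder
key = "<my-key>"  # placeholder

session_client = FaceSessionClient(endpoint=endpoint, credential=AzureKeyCredential(key))
created = session_client.create_liveness_session(
    CreateLivenessSessionContent(
        liveness_operation_mode="Passive",    # documented known value
        device_correlation_id="<device-id>",  # placeholder
    )
)
print(created.session_id, created.auth_token)
```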
+ :type session_id: str + :return: None + :rtype: None + :raises ~azure.core.exceptions.HttpResponseError: + """ + error_map: MutableMapping = { + 401: ClientAuthenticationError, + 404: ResourceNotFoundError, + 409: ResourceExistsError, + 304: ResourceNotModifiedError, + } + error_map.update(kwargs.pop("error_map", {}) or {}) + + _headers = kwargs.pop("headers", {}) or {} + _params = kwargs.pop("params", {}) or {} + + cls: ClsType[None] = kwargs.pop("cls", None) + + _request = build_face_session_delete_liveness_session_request( + session_id=session_id, + headers=_headers, + params=_params, + ) + path_format_arguments = { + "endpoint": self._serialize.url("self._config.endpoint", self._config.endpoint, "str", skip_quote=True), + "apiVersion": self._serialize.url("self._config.api_version", self._config.api_version, "str"), + } + _request.url = self._client.format_url(_request.url, **path_format_arguments) + + _stream = False + pipeline_response: PipelineResponse = self._client._pipeline.run( # pylint: disable=protected-access + _request, stream=_stream, **kwargs + ) + + response = pipeline_response.http_response + + if response.status_code not in [200]: + map_error(status_code=response.status_code, response=response, error_map=error_map) + error = _deserialize(_models.FaceErrorResponse, response.json()) + raise HttpResponseError(response=response, model=error) + + if cls: + return cls(pipeline_response, None, {}) # type: ignore + + @distributed_trace + def get_liveness_session_result(self, session_id: str, **kwargs: Any) -> _models.LivenessSession: + """Please refer to + https://learn.microsoft.com/rest/api/face/liveness-session-operations/get-liveness-session-result + for more details. + + :param session_id: The unique ID to reference this session. Required. + :type session_id: str + :return: LivenessSession. 
The LivenessSession is compatible with MutableMapping + :rtype: ~azure.ai.vision.face.models.LivenessSession + :raises ~azure.core.exceptions.HttpResponseError: + """ + error_map: MutableMapping = { + 401: ClientAuthenticationError, + 404: ResourceNotFoundError, + 409: ResourceExistsError, + 304: ResourceNotModifiedError, + } + error_map.update(kwargs.pop("error_map", {}) or {}) + + _headers = kwargs.pop("headers", {}) or {} + _params = kwargs.pop("params", {}) or {} + + cls: ClsType[_models.LivenessSession] = kwargs.pop("cls", None) + + _request = build_face_session_get_liveness_session_result_request( + session_id=session_id, + headers=_headers, + params=_params, + ) + path_format_arguments = { + "endpoint": self._serialize.url("self._config.endpoint", self._config.endpoint, "str", skip_quote=True), + "apiVersion": self._serialize.url("self._config.api_version", self._config.api_version, "str"), + } + _request.url = self._client.format_url(_request.url, **path_format_arguments) + + _stream = kwargs.pop("stream", False) + pipeline_response: PipelineResponse = self._client._pipeline.run( # pylint: disable=protected-access + _request, stream=_stream, **kwargs + ) + + response = pipeline_response.http_response + + if response.status_code not in [200]: + if _stream: + try: + response.read() # Load the body in memory and close the socket + except (StreamConsumedError, StreamClosedError): + pass + map_error(status_code=response.status_code, response=response, error_map=error_map) + error = _deserialize(_models.FaceErrorResponse, response.json()) + raise HttpResponseError(response=response, model=error) + + if _stream: + deserialized = response.iter_bytes() + else: + deserialized = _deserialize(_models.LivenessSession, response.json()) + + if cls: + return cls(pipeline_response, deserialized, {}) # type: ignore + + return deserialized # type: ignore + + @distributed_trace + def get_liveness_sessions( + self, *, start: Optional[str] = None, top: Optional[int] = None, **kwargs: Any + ) -> List[_models.LivenessSessionItem]: + """Lists sessions for /detectLiveness/SingleModal. + + Please refer to + https://learn.microsoft.com/rest/api/face/liveness-session-operations/get-liveness-sessions for + more details. + + :keyword start: List resources greater than the "start". It contains no more than 64 + characters. Default is empty. Default value is None. + :paramtype start: str + :keyword top: The number of items to list, ranging in [1, 1000]. Default is 1000. Default value + is None. 
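`get_liveness_session_result`, completed above, returns the `LivenessSession` state for a session ID, and `delete_liveness_session` removes everything associated with it. A minimal sketch continuing from the session sketch above; the `status`/`result` attributes are assumed to follow the service's session schema:

```python
# Sketch: poll a session, then clean it up once done.
session = session_client.get_liveness_session_result(created.session_id)
print(session.status)                # session processing status
if session.result is not None:
    print(session.result.as_dict())  # most recent liveness attempt
session_client.delete_liveness_session(created.session_id)  # returns None
```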
+ :paramtype top: int + :return: list of LivenessSessionItem + :rtype: list[~azure.ai.vision.face.models.LivenessSessionItem] + :raises ~azure.core.exceptions.HttpResponseError: + """ + error_map: MutableMapping = { + 401: ClientAuthenticationError, + 404: ResourceNotFoundError, + 409: ResourceExistsError, + 304: ResourceNotModifiedError, + } + error_map.update(kwargs.pop("error_map", {}) or {}) + + _headers = kwargs.pop("headers", {}) or {} + _params = kwargs.pop("params", {}) or {} + + cls: ClsType[List[_models.LivenessSessionItem]] = kwargs.pop("cls", None) + + _request = build_face_session_get_liveness_sessions_request( + start=start, + top=top, + headers=_headers, + params=_params, + ) + path_format_arguments = { + "endpoint": self._serialize.url("self._config.endpoint", self._config.endpoint, "str", skip_quote=True), + "apiVersion": self._serialize.url("self._config.api_version", self._config.api_version, "str"), + } + _request.url = self._client.format_url(_request.url, **path_format_arguments) + + _stream = kwargs.pop("stream", False) + pipeline_response: PipelineResponse = self._client._pipeline.run( # pylint: disable=protected-access + _request, stream=_stream, **kwargs + ) + + response = pipeline_response.http_response + + if response.status_code not in [200]: + if _stream: + try: + response.read() # Load the body in memory and close the socket + except (StreamConsumedError, StreamClosedError): + pass + map_error(status_code=response.status_code, response=response, error_map=error_map) + error = _deserialize(_models.FaceErrorResponse, response.json()) + raise HttpResponseError(response=response, model=error) + + if _stream: + deserialized = response.iter_bytes() + else: + deserialized = _deserialize(List[_models.LivenessSessionItem], response.json()) + + if cls: + return cls(pipeline_response, deserialized, {}) # type: ignore + + return deserialized # type: ignore + + @distributed_trace + def get_liveness_session_audit_entries( + self, session_id: str, *, start: Optional[str] = None, top: Optional[int] = None, **kwargs: Any + ) -> List[_models.LivenessSessionAuditEntry]: + """Please refer to + https://learn.microsoft.com/rest/api/face/liveness-session-operations/get-liveness-session-audit-entries + for more details. + + :param session_id: The unique ID to reference this session. Required. + :type session_id: str + :keyword start: List resources greater than the "start". It contains no more than 64 + characters. Default is empty. Default value is None. + :paramtype start: str + :keyword top: The number of items to list, ranging in [1, 1000]. Default is 1000. Default value + is None. 
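`get_liveness_sessions`, completed above, lists sessions using `start`/`top` cursor parameters ("list resources greater than the start"). A sketch of one plausible paging loop under that reading; feeding the last seen `id` back as the next `start` is an assumption, not documented behavior:

```python
# Sketch: page through liveness sessions, 20 at a time (assumed cursor use).
sessions = session_client.get_liveness_sessions(top=20)
while sessions:
    for item in sessions:
        print(item.id, item.created_date_time)
    # Assumption: pass the last id back as "start" to fetch the next page.
    sessions = session_client.get_liveness_sessions(start=sessions[-1].id, top=20)
```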
+ :paramtype top: int + :return: list of LivenessSessionAuditEntry + :rtype: list[~azure.ai.vision.face.models.LivenessSessionAuditEntry] + :raises ~azure.core.exceptions.HttpResponseError: + """ + error_map: MutableMapping = { + 401: ClientAuthenticationError, + 404: ResourceNotFoundError, + 409: ResourceExistsError, + 304: ResourceNotModifiedError, + } + error_map.update(kwargs.pop("error_map", {}) or {}) + + _headers = kwargs.pop("headers", {}) or {} + _params = kwargs.pop("params", {}) or {} + + cls: ClsType[List[_models.LivenessSessionAuditEntry]] = kwargs.pop("cls", None) + + _request = build_face_session_get_liveness_session_audit_entries_request( + session_id=session_id, + start=start, + top=top, + headers=_headers, + params=_params, + ) + path_format_arguments = { + "endpoint": self._serialize.url("self._config.endpoint", self._config.endpoint, "str", skip_quote=True), + "apiVersion": self._serialize.url("self._config.api_version", self._config.api_version, "str"), + } + _request.url = self._client.format_url(_request.url, **path_format_arguments) + + _stream = kwargs.pop("stream", False) + pipeline_response: PipelineResponse = self._client._pipeline.run( # pylint: disable=protected-access + _request, stream=_stream, **kwargs + ) + + response = pipeline_response.http_response + + if response.status_code not in [200]: + if _stream: + try: + response.read() # Load the body in memory and close the socket + except (StreamConsumedError, StreamClosedError): + pass + map_error(status_code=response.status_code, response=response, error_map=error_map) + error = _deserialize(_models.FaceErrorResponse, response.json()) + raise HttpResponseError(response=response, model=error) + + if _stream: + deserialized = response.iter_bytes() + else: + deserialized = _deserialize(List[_models.LivenessSessionAuditEntry], response.json()) + + if cls: + return cls(pipeline_response, deserialized, {}) # type: ignore + + return deserialized # type: ignore + + @overload + def _create_liveness_with_verify_session( + self, + body: _models.CreateLivenessWithVerifySessionContent, + *, + content_type: str = "application/json", + **kwargs: Any, + ) -> _models.CreateLivenessWithVerifySessionResult: ... + @overload + def _create_liveness_with_verify_session( + self, body: JSON, *, content_type: str = "application/json", **kwargs: Any + ) -> _models.CreateLivenessWithVerifySessionResult: ... + @overload + def _create_liveness_with_verify_session( + self, body: IO[bytes], *, content_type: str = "application/json", **kwargs: Any + ) -> _models.CreateLivenessWithVerifySessionResult: ... + + @distributed_trace + def _create_liveness_with_verify_session( + self, body: Union[_models.CreateLivenessWithVerifySessionContent, JSON, IO[bytes]], **kwargs: Any + ) -> _models.CreateLivenessWithVerifySessionResult: + """Create a new liveness session with verify. Client device submits VerifyImage during the + /detectLivenessWithVerify/singleModal call. + + Please refer to + https://learn.microsoft.com/rest/api/face/liveness-session-operations/create-liveness-with-verify-session + for more details. + + :param body: Body parameter. Is one of the following types: + CreateLivenessWithVerifySessionContent, JSON, IO[bytes] Required. + :type body: ~azure.ai.vision.face.models.CreateLivenessWithVerifySessionContent or JSON or + IO[bytes] + :return: CreateLivenessWithVerifySessionResult. 
The CreateLivenessWithVerifySessionResult is + compatible with MutableMapping + :rtype: ~azure.ai.vision.face.models.CreateLivenessWithVerifySessionResult + :raises ~azure.core.exceptions.HttpResponseError: + """ + error_map: MutableMapping = { + 401: ClientAuthenticationError, + 404: ResourceNotFoundError, + 409: ResourceExistsError, + 304: ResourceNotModifiedError, + } + error_map.update(kwargs.pop("error_map", {}) or {}) + + _headers = case_insensitive_dict(kwargs.pop("headers", {}) or {}) + _params = kwargs.pop("params", {}) or {} + + content_type: Optional[str] = kwargs.pop("content_type", _headers.pop("Content-Type", None)) + cls: ClsType[_models.CreateLivenessWithVerifySessionResult] = kwargs.pop("cls", None) + + content_type = content_type or "application/json" + _content = None + if isinstance(body, (IOBase, bytes)): + _content = body + else: + _content = json.dumps(body, cls=SdkJSONEncoder, exclude_readonly=True) # type: ignore + + _request = build_face_session_create_liveness_with_verify_session_request( + content_type=content_type, + content=_content, + headers=_headers, + params=_params, + ) + path_format_arguments = { + "endpoint": self._serialize.url("self._config.endpoint", self._config.endpoint, "str", skip_quote=True), + "apiVersion": self._serialize.url("self._config.api_version", self._config.api_version, "str"), + } + _request.url = self._client.format_url(_request.url, **path_format_arguments) + + _stream = kwargs.pop("stream", False) + pipeline_response: PipelineResponse = self._client._pipeline.run( # pylint: disable=protected-access + _request, stream=_stream, **kwargs + ) + + response = pipeline_response.http_response + + if response.status_code not in [200]: + if _stream: + try: + response.read() # Load the body in memory and close the socket + except (StreamConsumedError, StreamClosedError): + pass + map_error(status_code=response.status_code, response=response, error_map=error_map) + error = _deserialize(_models.FaceErrorResponse, response.json()) + raise HttpResponseError(response=response, model=error) + + if _stream: + deserialized = response.iter_bytes() + else: + deserialized = _deserialize(_models.CreateLivenessWithVerifySessionResult, response.json()) + + if cls: + return cls(pipeline_response, deserialized, {}) # type: ignore + + return deserialized # type: ignore + + @overload + def _create_liveness_with_verify_session_with_verify_image( # pylint: disable=name-too-long + self, body: _models.CreateLivenessWithVerifySessionMultipartContent, **kwargs: Any + ) -> _models.CreateLivenessWithVerifySessionResult: ... + @overload + def _create_liveness_with_verify_session_with_verify_image( # pylint: disable=name-too-long + self, body: JSON, **kwargs: Any + ) -> _models.CreateLivenessWithVerifySessionResult: ... + + @distributed_trace + def _create_liveness_with_verify_session_with_verify_image( # pylint: disable=name-too-long + self, body: Union[_models.CreateLivenessWithVerifySessionMultipartContent, JSON], **kwargs: Any + ) -> _models.CreateLivenessWithVerifySessionResult: + """Create a new liveness session with verify. Provide the verify image during session creation. + + Please refer to + https://learn.microsoft.com/rest/api/face/liveness-session-operations/create-liveness-with-verify-session-with-verify-image + for more details. + + :param body: Request content of liveness with verify session creation. Is either a + CreateLivenessWithVerifySessionMultipartContent type or a JSON type. Required. 
+ :type body: ~azure.ai.vision.face.models.CreateLivenessWithVerifySessionMultipartContent or + JSON + :return: CreateLivenessWithVerifySessionResult. The CreateLivenessWithVerifySessionResult is + compatible with MutableMapping + :rtype: ~azure.ai.vision.face.models.CreateLivenessWithVerifySessionResult + :raises ~azure.core.exceptions.HttpResponseError: + """ + error_map: MutableMapping = { + 401: ClientAuthenticationError, + 404: ResourceNotFoundError, + 409: ResourceExistsError, + 304: ResourceNotModifiedError, + } + error_map.update(kwargs.pop("error_map", {}) or {}) + + _headers = kwargs.pop("headers", {}) or {} + _params = kwargs.pop("params", {}) or {} + + cls: ClsType[_models.CreateLivenessWithVerifySessionResult] = kwargs.pop("cls", None) + + _body = body.as_dict() if isinstance(body, _model_base.Model) else body + _file_fields: List[str] = ["VerifyImage"] + _data_fields: List[str] = ["Parameters"] + _files, _data = prepare_multipart_form_data(_body, _file_fields, _data_fields) + + _request = build_face_session_create_liveness_with_verify_session_with_verify_image_request( + files=_files, + data=_data, + headers=_headers, + params=_params, + ) + path_format_arguments = { + "endpoint": self._serialize.url("self._config.endpoint", self._config.endpoint, "str", skip_quote=True), + "apiVersion": self._serialize.url("self._config.api_version", self._config.api_version, "str"), + } + _request.url = self._client.format_url(_request.url, **path_format_arguments) + + _stream = kwargs.pop("stream", False) + pipeline_response: PipelineResponse = self._client._pipeline.run( # pylint: disable=protected-access + _request, stream=_stream, **kwargs + ) + + response = pipeline_response.http_response + + if response.status_code not in [200]: + if _stream: + try: + response.read() # Load the body in memory and close the socket + except (StreamConsumedError, StreamClosedError): + pass + map_error(status_code=response.status_code, response=response, error_map=error_map) + error = _deserialize(_models.FaceErrorResponse, response.json()) + raise HttpResponseError(response=response, model=error) + + if _stream: + deserialized = response.iter_bytes() + else: + deserialized = _deserialize(_models.CreateLivenessWithVerifySessionResult, response.json()) + + if cls: + return cls(pipeline_response, deserialized, {}) # type: ignore + + return deserialized # type: ignore + + @distributed_trace + def delete_liveness_with_verify_session( # pylint: disable=inconsistent-return-statements + self, session_id: str, **kwargs: Any + ) -> None: + """Delete all session related information for matching the specified session id. + + Please refer to + https://learn.microsoft.com/rest/api/face/liveness-session-operations/delete-liveness-with-verify-session + for more details. + + :param session_id: The unique ID to reference this session. Required. 
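For orientation: these private helpers back the public `create_liveness_with_verify_session` convenience method added in `_patch.py`, which (as far as the patched surface suggests) dispatches to the multipart variant when a verify image is supplied. A hedged sketch of the public call, assuming a local `verify.jpg`:

```python
from azure.core.credentials import AzureKeyCredential
from azure.ai.vision.face import FaceSessionClient
from azure.ai.vision.face.models import (
    CreateLivenessWithVerifySessionContent,
    LivenessOperationMode,
)

with FaceSessionClient(endpoint="ENDPOINT", credential=AzureKeyCredential("KEY")) as session_client:
    with open("verify.jpg", "rb") as fd:
        session = session_client.create_liveness_with_verify_session(
            CreateLivenessWithVerifySessionContent(
                liveness_operation_mode=LivenessOperationMode.PASSIVE,
                device_correlation_id="your_device_correlation_id",
            ),
            verify_image=fd.read(),  # presence of the image selects the multipart helper
        )
    print(session.as_dict())  # carries the session id and the device auth token
```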
+ :type session_id: str + :return: None + :rtype: None + :raises ~azure.core.exceptions.HttpResponseError: + """ + error_map: MutableMapping = { + 401: ClientAuthenticationError, + 404: ResourceNotFoundError, + 409: ResourceExistsError, + 304: ResourceNotModifiedError, + } + error_map.update(kwargs.pop("error_map", {}) or {}) + + _headers = kwargs.pop("headers", {}) or {} + _params = kwargs.pop("params", {}) or {} + + cls: ClsType[None] = kwargs.pop("cls", None) + + _request = build_face_session_delete_liveness_with_verify_session_request( + session_id=session_id, + headers=_headers, + params=_params, + ) + path_format_arguments = { + "endpoint": self._serialize.url("self._config.endpoint", self._config.endpoint, "str", skip_quote=True), + "apiVersion": self._serialize.url("self._config.api_version", self._config.api_version, "str"), + } + _request.url = self._client.format_url(_request.url, **path_format_arguments) + + _stream = False + pipeline_response: PipelineResponse = self._client._pipeline.run( # pylint: disable=protected-access + _request, stream=_stream, **kwargs + ) + + response = pipeline_response.http_response + + if response.status_code not in [200]: + map_error(status_code=response.status_code, response=response, error_map=error_map) + error = _deserialize(_models.FaceErrorResponse, response.json()) + raise HttpResponseError(response=response, model=error) + + if cls: + return cls(pipeline_response, None, {}) # type: ignore + + @distributed_trace + def get_liveness_with_verify_session_result( + self, session_id: str, **kwargs: Any + ) -> _models.LivenessWithVerifySession: + """Please refer to + https://learn.microsoft.com/rest/api/face/liveness-session-operations/get-liveness-with-verify-session-result + for more details. + + :param session_id: The unique ID to reference this session. Required. + :type session_id: str + :return: LivenessWithVerifySession. 
The LivenessWithVerifySession is compatible with + MutableMapping + :rtype: ~azure.ai.vision.face.models.LivenessWithVerifySession + :raises ~azure.core.exceptions.HttpResponseError: + """ + error_map: MutableMapping = { + 401: ClientAuthenticationError, + 404: ResourceNotFoundError, + 409: ResourceExistsError, + 304: ResourceNotModifiedError, + } + error_map.update(kwargs.pop("error_map", {}) or {}) + + _headers = kwargs.pop("headers", {}) or {} + _params = kwargs.pop("params", {}) or {} + + cls: ClsType[_models.LivenessWithVerifySession] = kwargs.pop("cls", None) + + _request = build_face_session_get_liveness_with_verify_session_result_request( + session_id=session_id, + headers=_headers, + params=_params, + ) + path_format_arguments = { + "endpoint": self._serialize.url("self._config.endpoint", self._config.endpoint, "str", skip_quote=True), + "apiVersion": self._serialize.url("self._config.api_version", self._config.api_version, "str"), + } + _request.url = self._client.format_url(_request.url, **path_format_arguments) + + _stream = kwargs.pop("stream", False) + pipeline_response: PipelineResponse = self._client._pipeline.run( # pylint: disable=protected-access + _request, stream=_stream, **kwargs + ) + + response = pipeline_response.http_response + + if response.status_code not in [200]: + if _stream: + try: + response.read() # Load the body in memory and close the socket + except (StreamConsumedError, StreamClosedError): + pass + map_error(status_code=response.status_code, response=response, error_map=error_map) + error = _deserialize(_models.FaceErrorResponse, response.json()) + raise HttpResponseError(response=response, model=error) + + if _stream: + deserialized = response.iter_bytes() + else: + deserialized = _deserialize(_models.LivenessWithVerifySession, response.json()) + + if cls: + return cls(pipeline_response, deserialized, {}) # type: ignore + + return deserialized # type: ignore + + @distributed_trace + def get_liveness_with_verify_sessions( + self, *, start: Optional[str] = None, top: Optional[int] = None, **kwargs: Any + ) -> List[_models.LivenessSessionItem]: + """Lists sessions for /detectLivenessWithVerify/SingleModal. + + Please refer to + https://learn.microsoft.com/rest/api/face/liveness-session-operations/get-liveness-with-verify-sessions + for more details. + + :keyword start: List resources greater than the "start". It contains no more than 64 + characters. Default is empty. Default value is None. + :paramtype start: str + :keyword top: The number of items to list, ranging in [1, 1000]. Default is 1000. Default value + is None. 
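Usage-wise this mirrors the plain liveness variant: after the device completes the `/detectLivenessWithVerify` flow, the app service fetches the combined liveness and verification outcome. A minimal sketch under the same placeholder assumptions:

```python
from azure.core.credentials import AzureKeyCredential
from azure.ai.vision.face import FaceSessionClient

with FaceSessionClient(endpoint="ENDPOINT", credential=AzureKeyCredential("KEY")) as session_client:
    session = session_client.get_liveness_with_verify_session_result(session_id="your_session_id")
    # Inspect the raw payload (status, liveness decision, verify match) via as_dict().
    print(session.as_dict())
```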
+ :paramtype top: int + :return: list of LivenessSessionItem + :rtype: list[~azure.ai.vision.face.models.LivenessSessionItem] + :raises ~azure.core.exceptions.HttpResponseError: + """ + error_map: MutableMapping = { + 401: ClientAuthenticationError, + 404: ResourceNotFoundError, + 409: ResourceExistsError, + 304: ResourceNotModifiedError, + } + error_map.update(kwargs.pop("error_map", {}) or {}) + + _headers = kwargs.pop("headers", {}) or {} + _params = kwargs.pop("params", {}) or {} + + cls: ClsType[List[_models.LivenessSessionItem]] = kwargs.pop("cls", None) + + _request = build_face_session_get_liveness_with_verify_sessions_request( + start=start, + top=top, + headers=_headers, + params=_params, + ) + path_format_arguments = { + "endpoint": self._serialize.url("self._config.endpoint", self._config.endpoint, "str", skip_quote=True), + "apiVersion": self._serialize.url("self._config.api_version", self._config.api_version, "str"), + } + _request.url = self._client.format_url(_request.url, **path_format_arguments) + + _stream = kwargs.pop("stream", False) + pipeline_response: PipelineResponse = self._client._pipeline.run( # pylint: disable=protected-access + _request, stream=_stream, **kwargs + ) + + response = pipeline_response.http_response + + if response.status_code not in [200]: + if _stream: + try: + response.read() # Load the body in memory and close the socket + except (StreamConsumedError, StreamClosedError): + pass + map_error(status_code=response.status_code, response=response, error_map=error_map) + error = _deserialize(_models.FaceErrorResponse, response.json()) + raise HttpResponseError(response=response, model=error) + + if _stream: + deserialized = response.iter_bytes() + else: + deserialized = _deserialize(List[_models.LivenessSessionItem], response.json()) + + if cls: + return cls(pipeline_response, deserialized, {}) # type: ignore + + return deserialized # type: ignore + + @distributed_trace + def get_liveness_with_verify_session_audit_entries( # pylint: disable=name-too-long + self, session_id: str, *, start: Optional[str] = None, top: Optional[int] = None, **kwargs: Any + ) -> List[_models.LivenessSessionAuditEntry]: + """Please refer to + https://learn.microsoft.com/rest/api/face/liveness-session-operations/get-liveness-with-verify-session-audit-entries + for more details. + + :param session_id: The unique ID to reference this session. Required. + :type session_id: str + :keyword start: List resources greater than the "start". It contains no more than 64 + characters. Default is empty. Default value is None. + :paramtype start: str + :keyword top: The number of items to list, ranging in [1, 1000]. Default is 1000. Default value + is None. 
+ :paramtype top: int + :return: list of LivenessSessionAuditEntry + :rtype: list[~azure.ai.vision.face.models.LivenessSessionAuditEntry] + :raises ~azure.core.exceptions.HttpResponseError: + """ + error_map: MutableMapping = { + 401: ClientAuthenticationError, + 404: ResourceNotFoundError, + 409: ResourceExistsError, + 304: ResourceNotModifiedError, + } + error_map.update(kwargs.pop("error_map", {}) or {}) + + _headers = kwargs.pop("headers", {}) or {} + _params = kwargs.pop("params", {}) or {} + + cls: ClsType[List[_models.LivenessSessionAuditEntry]] = kwargs.pop("cls", None) + + _request = build_face_session_get_liveness_with_verify_session_audit_entries_request( + session_id=session_id, + start=start, + top=top, + headers=_headers, + params=_params, + ) + path_format_arguments = { + "endpoint": self._serialize.url("self._config.endpoint", self._config.endpoint, "str", skip_quote=True), + "apiVersion": self._serialize.url("self._config.api_version", self._config.api_version, "str"), + } + _request.url = self._client.format_url(_request.url, **path_format_arguments) + + _stream = kwargs.pop("stream", False) + pipeline_response: PipelineResponse = self._client._pipeline.run( # pylint: disable=protected-access + _request, stream=_stream, **kwargs + ) + + response = pipeline_response.http_response + + if response.status_code not in [200]: + if _stream: + try: + response.read() # Load the body in memory and close the socket + except (StreamConsumedError, StreamClosedError): + pass + map_error(status_code=response.status_code, response=response, error_map=error_map) + error = _deserialize(_models.FaceErrorResponse, response.json()) + raise HttpResponseError(response=response, model=error) + + if _stream: + deserialized = response.iter_bytes() + else: + deserialized = _deserialize(List[_models.LivenessSessionAuditEntry], response.json()) + + if cls: + return cls(pipeline_response, deserialized, {}) # type: ignore + + return deserialized # type: ignore + + @overload + def detect_from_session_image( + self, + body: JSON, + *, + content_type: str = "application/json", + detection_model: Optional[Union[str, _models.FaceDetectionModel]] = None, + recognition_model: Optional[Union[str, _models.FaceRecognitionModel]] = None, + return_face_id: Optional[bool] = None, + return_face_attributes: Optional[List[Union[str, _models.FaceAttributeType]]] = None, + return_face_landmarks: Optional[bool] = None, + return_recognition_model: Optional[bool] = None, + face_id_time_to_live: Optional[int] = None, + **kwargs: Any, + ) -> List[_models.FaceDetectionResult]: + """Detect human faces in an image, return face rectangles, and optionally with faceIds, landmarks, + and attributes. + + Please refer to + https://learn.microsoft.com/rest/api/face/face-detection-operations/detect-from-session-image-id + for more details. + + :param body: Required. + :type body: JSON + :keyword content_type: Body Parameter content-type. Content type parameter for JSON body. + Default value is "application/json". + :paramtype content_type: str + :keyword detection_model: The 'detectionModel' associated with the detected faceIds. Supported + 'detectionModel' values include 'detection_01', 'detection_02' and 'detection_03'. The default + value is 'detection_01'. 'detection_03' is recommended since its accuracy is improved on + smaller faces (64x64 pixels) and rotated face orientations. Known values are: "detection_01", + "detection_02", and "detection_03". Default value is None. 
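Both audit-entry operations (with and without verify) return the same `LivenessSessionAuditEntry` list and accept the same `start`/`top` paging keywords, so one sketch covers both:

```python
from azure.core.credentials import AzureKeyCredential
from azure.ai.vision.face import FaceSessionClient

with FaceSessionClient(endpoint="ENDPOINT", credential=AzureKeyCredential("KEY")) as session_client:
    entries = session_client.get_liveness_with_verify_session_audit_entries(
        session_id="your_session_id",
        top=100,
    )
    for entry in entries:
        # Each entry records one request/response exchange made during the session.
        print(entry.as_dict())
```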
+ :paramtype detection_model: str or ~azure.ai.vision.face.models.FaceDetectionModel + :keyword recognition_model: The 'recognitionModel' associated with the detected faceIds. + Supported 'recognitionModel' values include 'recognition_01', 'recognition_02', + 'recognition_03' or 'recognition_04'. The default value is 'recognition_01'. 'recognition_04' + is recommended since its accuracy is improved on faces wearing masks compared with + 'recognition_03', and its overall accuracy is improved compared with 'recognition_01' and + 'recognition_02'. Known values are: "recognition_01", "recognition_02", "recognition_03", and + "recognition_04". Default value is None. + :paramtype recognition_model: str or ~azure.ai.vision.face.models.FaceRecognitionModel + :keyword return_face_id: Return faceIds of the detected faces or not. The default value is + true. Default value is None. + :paramtype return_face_id: bool + :keyword return_face_attributes: Analyze and return the one or more specified face attributes + in the comma-separated string like 'returnFaceAttributes=headPose,glasses'. Face attribute + analysis has additional computational and time cost. Default value is None. + :paramtype return_face_attributes: list[str or ~azure.ai.vision.face.models.FaceAttributeType] + :keyword return_face_landmarks: Return face landmarks of the detected faces or not. The default + value is false. Default value is None. + :paramtype return_face_landmarks: bool + :keyword return_recognition_model: Return 'recognitionModel' or not. The default value is + false. This is only applicable when returnFaceId = true. Default value is None. + :paramtype return_recognition_model: bool + :keyword face_id_time_to_live: The number of seconds for the face ID being cached. Supported + range from 60 seconds up to 86400 seconds. The default value is 86400 (24 hours). Default value + is None. + :paramtype face_id_time_to_live: int + :return: list of FaceDetectionResult + :rtype: list[~azure.ai.vision.face.models.FaceDetectionResult] + :raises ~azure.core.exceptions.HttpResponseError: + """ + + @overload + def detect_from_session_image( + self, + *, + session_image_id: str, + content_type: str = "application/json", + detection_model: Optional[Union[str, _models.FaceDetectionModel]] = None, + recognition_model: Optional[Union[str, _models.FaceRecognitionModel]] = None, + return_face_id: Optional[bool] = None, + return_face_attributes: Optional[List[Union[str, _models.FaceAttributeType]]] = None, + return_face_landmarks: Optional[bool] = None, + return_recognition_model: Optional[bool] = None, + face_id_time_to_live: Optional[int] = None, + **kwargs: Any, + ) -> List[_models.FaceDetectionResult]: + """Detect human faces in an image, return face rectangles, and optionally with faceIds, landmarks, + and attributes. + + Please refer to + https://learn.microsoft.com/rest/api/face/face-detection-operations/detect-from-session-image-id + for more details. + + :keyword session_image_id: Id of session image. Required. + :paramtype session_image_id: str + :keyword content_type: Body Parameter content-type. Content type parameter for JSON body. + Default value is "application/json". + :paramtype content_type: str + :keyword detection_model: The 'detectionModel' associated with the detected faceIds. Supported + 'detectionModel' values include 'detection_01', 'detection_02' and 'detection_03'. The default + value is 'detection_01'. 
'detection_03' is recommended since its accuracy is improved on + smaller faces (64x64 pixels) and rotated face orientations. Known values are: "detection_01", + "detection_02", and "detection_03". Default value is None. + :paramtype detection_model: str or ~azure.ai.vision.face.models.FaceDetectionModel + :keyword recognition_model: The 'recognitionModel' associated with the detected faceIds. + Supported 'recognitionModel' values include 'recognition_01', 'recognition_02', + 'recognition_03' or 'recognition_04'. The default value is 'recognition_01'. 'recognition_04' + is recommended since its accuracy is improved on faces wearing masks compared with + 'recognition_03', and its overall accuracy is improved compared with 'recognition_01' and + 'recognition_02'. Known values are: "recognition_01", "recognition_02", "recognition_03", and + "recognition_04". Default value is None. + :paramtype recognition_model: str or ~azure.ai.vision.face.models.FaceRecognitionModel + :keyword return_face_id: Return faceIds of the detected faces or not. The default value is + true. Default value is None. + :paramtype return_face_id: bool + :keyword return_face_attributes: Analyze and return the one or more specified face attributes + in the comma-separated string like 'returnFaceAttributes=headPose,glasses'. Face attribute + analysis has additional computational and time cost. Default value is None. + :paramtype return_face_attributes: list[str or ~azure.ai.vision.face.models.FaceAttributeType] + :keyword return_face_landmarks: Return face landmarks of the detected faces or not. The default + value is false. Default value is None. + :paramtype return_face_landmarks: bool + :keyword return_recognition_model: Return 'recognitionModel' or not. The default value is + false. This is only applicable when returnFaceId = true. Default value is None. + :paramtype return_recognition_model: bool + :keyword face_id_time_to_live: The number of seconds for the face ID being cached. Supported + range from 60 seconds up to 86400 seconds. The default value is 86400 (24 hours). Default value + is None. + :paramtype face_id_time_to_live: int + :return: list of FaceDetectionResult + :rtype: list[~azure.ai.vision.face.models.FaceDetectionResult] + :raises ~azure.core.exceptions.HttpResponseError: + """ + + @overload + def detect_from_session_image( + self, + body: IO[bytes], + *, + content_type: str = "application/json", + detection_model: Optional[Union[str, _models.FaceDetectionModel]] = None, + recognition_model: Optional[Union[str, _models.FaceRecognitionModel]] = None, + return_face_id: Optional[bool] = None, + return_face_attributes: Optional[List[Union[str, _models.FaceAttributeType]]] = None, + return_face_landmarks: Optional[bool] = None, + return_recognition_model: Optional[bool] = None, + face_id_time_to_live: Optional[int] = None, + **kwargs: Any, + ) -> List[_models.FaceDetectionResult]: + """Detect human faces in an image, return face rectangles, and optionally with faceIds, landmarks, + and attributes. + + Please refer to + https://learn.microsoft.com/rest/api/face/face-detection-operations/detect-from-session-image-id + for more details. + + :param body: Required. + :type body: IO[bytes] + :keyword content_type: Body Parameter content-type. Content type parameter for binary body. + Default value is "application/json". + :paramtype content_type: str + :keyword detection_model: The 'detectionModel' associated with the detected faceIds. 
Supported + 'detectionModel' values include 'detection_01', 'detection_02' and 'detection_03'. The default + value is 'detection_01'. 'detection_03' is recommended since its accuracy is improved on + smaller faces (64x64 pixels) and rotated face orientations. Known values are: "detection_01", + "detection_02", and "detection_03". Default value is None. + :paramtype detection_model: str or ~azure.ai.vision.face.models.FaceDetectionModel + :keyword recognition_model: The 'recognitionModel' associated with the detected faceIds. + Supported 'recognitionModel' values include 'recognition_01', 'recognition_02', + 'recognition_03' or 'recognition_04'. The default value is 'recognition_01'. 'recognition_04' + is recommended since its accuracy is improved on faces wearing masks compared with + 'recognition_03', and its overall accuracy is improved compared with 'recognition_01' and + 'recognition_02'. Known values are: "recognition_01", "recognition_02", "recognition_03", and + "recognition_04". Default value is None. + :paramtype recognition_model: str or ~azure.ai.vision.face.models.FaceRecognitionModel + :keyword return_face_id: Return faceIds of the detected faces or not. The default value is + true. Default value is None. + :paramtype return_face_id: bool + :keyword return_face_attributes: Analyze and return the one or more specified face attributes + in the comma-separated string like 'returnFaceAttributes=headPose,glasses'. Face attribute + analysis has additional computational and time cost. Default value is None. + :paramtype return_face_attributes: list[str or ~azure.ai.vision.face.models.FaceAttributeType] + :keyword return_face_landmarks: Return face landmarks of the detected faces or not. The default + value is false. Default value is None. + :paramtype return_face_landmarks: bool + :keyword return_recognition_model: Return 'recognitionModel' or not. The default value is + false. This is only applicable when returnFaceId = true. Default value is None. + :paramtype return_recognition_model: bool + :keyword face_id_time_to_live: The number of seconds for the face ID being cached. Supported + range from 60 seconds up to 86400 seconds. The default value is 86400 (24 hours). Default value + is None. + :paramtype face_id_time_to_live: int + :return: list of FaceDetectionResult + :rtype: list[~azure.ai.vision.face.models.FaceDetectionResult] + :raises ~azure.core.exceptions.HttpResponseError: + """ + + @distributed_trace + @api_version_validation( + method_added_on="v1.2-preview.1", + params_added_on={ + "v1.2-preview.1": [ + "content_type", + "detection_model", + "recognition_model", + "return_face_id", + "return_face_attributes", + "return_face_landmarks", + "return_recognition_model", + "face_id_time_to_live", + "accept", + ] + }, + ) + def detect_from_session_image( + self, + body: Union[JSON, IO[bytes]] = _Unset, + *, + session_image_id: str = _Unset, + detection_model: Optional[Union[str, _models.FaceDetectionModel]] = None, + recognition_model: Optional[Union[str, _models.FaceRecognitionModel]] = None, + return_face_id: Optional[bool] = None, + return_face_attributes: Optional[List[Union[str, _models.FaceAttributeType]]] = None, + return_face_landmarks: Optional[bool] = None, + return_recognition_model: Optional[bool] = None, + face_id_time_to_live: Optional[int] = None, + **kwargs: Any, + ) -> List[_models.FaceDetectionResult]: + """Detect human faces in an image, return face rectangles, and optionally with faceIds, landmarks, + and attributes. 
+ + Please refer to + https://learn.microsoft.com/rest/api/face/face-detection-operations/detect-from-session-image-id + for more details. + + :param body: Is either a JSON type or a IO[bytes] type. Required. + :type body: JSON or IO[bytes] + :keyword session_image_id: Id of session image. Required. + :paramtype session_image_id: str + :keyword detection_model: The 'detectionModel' associated with the detected faceIds. Supported + 'detectionModel' values include 'detection_01', 'detection_02' and 'detection_03'. The default + value is 'detection_01'. 'detection_03' is recommended since its accuracy is improved on + smaller faces (64x64 pixels) and rotated face orientations. Known values are: "detection_01", + "detection_02", and "detection_03". Default value is None. + :paramtype detection_model: str or ~azure.ai.vision.face.models.FaceDetectionModel + :keyword recognition_model: The 'recognitionModel' associated with the detected faceIds. + Supported 'recognitionModel' values include 'recognition_01', 'recognition_02', + 'recognition_03' or 'recognition_04'. The default value is 'recognition_01'. 'recognition_04' + is recommended since its accuracy is improved on faces wearing masks compared with + 'recognition_03', and its overall accuracy is improved compared with 'recognition_01' and + 'recognition_02'. Known values are: "recognition_01", "recognition_02", "recognition_03", and + "recognition_04". Default value is None. + :paramtype recognition_model: str or ~azure.ai.vision.face.models.FaceRecognitionModel + :keyword return_face_id: Return faceIds of the detected faces or not. The default value is + true. Default value is None. + :paramtype return_face_id: bool + :keyword return_face_attributes: Analyze and return the one or more specified face attributes + in the comma-separated string like 'returnFaceAttributes=headPose,glasses'. Face attribute + analysis has additional computational and time cost. Default value is None. + :paramtype return_face_attributes: list[str or ~azure.ai.vision.face.models.FaceAttributeType] + :keyword return_face_landmarks: Return face landmarks of the detected faces or not. The default + value is false. Default value is None. + :paramtype return_face_landmarks: bool + :keyword return_recognition_model: Return 'recognitionModel' or not. The default value is + false. This is only applicable when returnFaceId = true. Default value is None. + :paramtype return_recognition_model: bool + :keyword face_id_time_to_live: The number of seconds for the face ID being cached. Supported + range from 60 seconds up to 86400 seconds. The default value is 86400 (24 hours). Default value + is None. 
+ :paramtype face_id_time_to_live: int + :return: list of FaceDetectionResult + :rtype: list[~azure.ai.vision.face.models.FaceDetectionResult] + :raises ~azure.core.exceptions.HttpResponseError: + """ + error_map: MutableMapping = { + 401: ClientAuthenticationError, + 404: ResourceNotFoundError, + 409: ResourceExistsError, + 304: ResourceNotModifiedError, + } + error_map.update(kwargs.pop("error_map", {}) or {}) + + _headers = case_insensitive_dict(kwargs.pop("headers", {}) or {}) + _params = kwargs.pop("params", {}) or {} + + content_type: Optional[str] = kwargs.pop("content_type", _headers.pop("content-type", None)) + cls: ClsType[List[_models.FaceDetectionResult]] = kwargs.pop("cls", None) + + if body is _Unset: + if session_image_id is _Unset: + raise TypeError("missing required argument: session_image_id") + body = {"sessionImageId": session_image_id} + body = {k: v for k, v in body.items() if v is not None} + content_type = content_type or "application/json" + _content = None + if isinstance(body, (IOBase, bytes)): + _content = body + else: + _content = json.dumps(body, cls=SdkJSONEncoder, exclude_readonly=True) # type: ignore + + _request = build_face_session_detect_from_session_image_request( + detection_model=detection_model, + recognition_model=recognition_model, + return_face_id=return_face_id, + return_face_attributes=return_face_attributes, + return_face_landmarks=return_face_landmarks, + return_recognition_model=return_recognition_model, + face_id_time_to_live=face_id_time_to_live, + content_type=content_type, + content=_content, + headers=_headers, + params=_params, + ) + path_format_arguments = { + "endpoint": self._serialize.url("self._config.endpoint", self._config.endpoint, "str", skip_quote=True), + "apiVersion": self._serialize.url("self._config.api_version", self._config.api_version, "str"), + } + _request.url = self._client.format_url(_request.url, **path_format_arguments) + + _stream = kwargs.pop("stream", False) + pipeline_response: PipelineResponse = self._client._pipeline.run( # pylint: disable=protected-access + _request, stream=_stream, **kwargs + ) + + response = pipeline_response.http_response + + if response.status_code not in [200]: + if _stream: + try: + response.read() # Load the body in memory and close the socket + except (StreamConsumedError, StreamClosedError): + pass + map_error(status_code=response.status_code, response=response, error_map=error_map) + error = _deserialize(_models.FaceErrorResponse, response.json()) + raise HttpResponseError(response=response, model=error) + + if _stream: + deserialized = response.iter_bytes() + else: + deserialized = _deserialize(List[_models.FaceDetectionResult], response.json()) + + if cls: + return cls(pipeline_response, deserialized, {}) # type: ignore + + return deserialized # type: ignore + + @distributed_trace + @api_version_validation( + method_added_on="v1.2-preview.1", + params_added_on={"v1.2-preview.1": ["session_image_id", "accept"]}, + ) + def get_session_image(self, session_image_id: str, **kwargs: Any) -> Iterator[bytes]: + """Please refer to + https://learn.microsoft.com/rest/api/face/liveness-session-operations/get-session-image for + more details. + + :param session_image_id: The request ID of the image to be retrieved. Required. 
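`detect_from_session_image` is new in `v1.2-preview.1` (hence the `@api_version_validation` guard) and runs detection against the image captured during a liveness session rather than an uploaded file. A hedged sketch using the enum spellings from this release; the session image id is assumed to come from a session created with `enable_session_image` set:

```python
from azure.core.credentials import AzureKeyCredential
from azure.ai.vision.face import FaceSessionClient
from azure.ai.vision.face.models import FaceDetectionModel, FaceRecognitionModel

with FaceSessionClient(endpoint="ENDPOINT", credential=AzureKeyCredential("KEY")) as session_client:
    faces = session_client.detect_from_session_image(
        session_image_id="your_session_image_id",
        detection_model=FaceDetectionModel.DETECTION03,
        recognition_model=FaceRecognitionModel.RECOGNITION04,
        return_face_id=False,
    )
    for face in faces:
        print(face.as_dict())
```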
+ :type session_image_id: str + :return: Iterator[bytes] + :rtype: Iterator[bytes] + :raises ~azure.core.exceptions.HttpResponseError: + """ + error_map: MutableMapping = { + 401: ClientAuthenticationError, + 404: ResourceNotFoundError, + 409: ResourceExistsError, + 304: ResourceNotModifiedError, + } + error_map.update(kwargs.pop("error_map", {}) or {}) + + _headers = kwargs.pop("headers", {}) or {} + _params = kwargs.pop("params", {}) or {} + + cls: ClsType[Iterator[bytes]] = kwargs.pop("cls", None) + + _request = build_face_session_get_session_image_request( + session_image_id=session_image_id, + headers=_headers, + params=_params, + ) + path_format_arguments = { + "endpoint": self._serialize.url("self._config.endpoint", self._config.endpoint, "str", skip_quote=True), + "apiVersion": self._serialize.url("self._config.api_version", self._config.api_version, "str"), + } + _request.url = self._client.format_url(_request.url, **path_format_arguments) + + _stream = kwargs.pop("stream", True) + pipeline_response: PipelineResponse = self._client._pipeline.run( # pylint: disable=protected-access + _request, stream=_stream, **kwargs + ) + + response = pipeline_response.http_response + + if response.status_code not in [200]: + if _stream: + try: + response.read() # Load the body in memory and close the socket + except (StreamConsumedError, StreamClosedError): + pass + map_error(status_code=response.status_code, response=response, error_map=error_map) + error = _deserialize(_models.FaceErrorResponse, response.json()) + raise HttpResponseError(response=response, model=error) + + response_headers = {} + response_headers["content-type"] = self._deserialize("str", response.headers.get("content-type")) + + deserialized = response.iter_bytes() + + if cls: + return cls(pipeline_response, deserialized, response_headers) # type: ignore + + return deserialized # type: ignore diff --git a/sdk/face/azure-ai-vision-face/azure/ai/vision/face/aio/_operations/_patch.py b/sdk/face/azure-ai-vision-face/azure/ai/vision/face/operations/_patch.py similarity index 100% rename from sdk/face/azure-ai-vision-face/azure/ai/vision/face/aio/_operations/_patch.py rename to sdk/face/azure-ai-vision-face/azure/ai/vision/face/operations/_patch.py diff --git a/sdk/face/azure-ai-vision-face/generated_samples/face_list_operations_add_large_face_list_face_from_stream.py b/sdk/face/azure-ai-vision-face/generated_samples/face_list_operations_add_large_face_list_face_from_stream.py new file mode 100644 index 000000000000..f928999a811a --- /dev/null +++ b/sdk/face/azure-ai-vision-face/generated_samples/face_list_operations_add_large_face_list_face_from_stream.py @@ -0,0 +1,34 @@ +# coding=utf-8 +# -------------------------------------------------------------------------- +# Copyright (c) Microsoft Corporation. All rights reserved. +# Licensed under the MIT License. See License.txt in the project root for license information. +# Code generated by Microsoft (R) Python Code Generator. +# Changes may cause incorrect behavior and will be lost if the code is regenerated. 
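Note that `get_session_image` defaults `stream` to `True` and returns `Iterator[bytes]`, so the body is not buffered for the caller; write the chunks out as they arrive. A minimal sketch:

```python
from azure.core.credentials import AzureKeyCredential
from azure.ai.vision.face import FaceSessionClient

with FaceSessionClient(endpoint="ENDPOINT", credential=AzureKeyCredential("KEY")) as session_client:
    chunks = session_client.get_session_image(session_image_id="your_session_image_id")
    # The actual content type is reported in the response headers; .jpg is assumed here.
    with open("session_image.jpg", "wb") as fd:
        for chunk in chunks:
            fd.write(chunk)
```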
+# -------------------------------------------------------------------------- + +from azure.ai.vision.face import FaceAdministrationClient + +""" +# PREREQUISITES + pip install azure-ai-vision-face +# USAGE + python face_list_operations_add_large_face_list_face_from_stream.py +""" + + +def main(): + client = FaceAdministrationClient( + endpoint="ENDPOINT", + credential="CREDENTIAL", + ) + + response = client.large_face_list.add_face( + large_face_list_id="your_large_face_list_id", + image_content="", + ) + print(response) + + +# x-ms-original-file: v1.2-preview.1/FaceListOperations_AddLargeFaceListFaceFromStream.json +if __name__ == "__main__": + main() diff --git a/sdk/face/azure-ai-vision-face/generated_samples/face_list_operations_delete_large_face_list.py b/sdk/face/azure-ai-vision-face/generated_samples/face_list_operations_delete_large_face_list.py new file mode 100644 index 000000000000..a3983c7eab2e --- /dev/null +++ b/sdk/face/azure-ai-vision-face/generated_samples/face_list_operations_delete_large_face_list.py @@ -0,0 +1,32 @@ +# coding=utf-8 +# -------------------------------------------------------------------------- +# Copyright (c) Microsoft Corporation. All rights reserved. +# Licensed under the MIT License. See License.txt in the project root for license information. +# Code generated by Microsoft (R) Python Code Generator. +# Changes may cause incorrect behavior and will be lost if the code is regenerated. +# -------------------------------------------------------------------------- + +from azure.ai.vision.face import FaceAdministrationClient + +""" +# PREREQUISITES + pip install azure-ai-vision-face +# USAGE + python face_list_operations_delete_large_face_list.py +""" + + +def main(): + client = FaceAdministrationClient( + endpoint="ENDPOINT", + credential="CREDENTIAL", + ) + + client.large_face_list.delete( + large_face_list_id="your_large_face_list_id", + ) + + +# x-ms-original-file: v1.2-preview.1/FaceListOperations_DeleteLargeFaceList.json +if __name__ == "__main__": + main() diff --git a/sdk/face/azure-ai-vision-face/generated_samples/face_list_operations_delete_large_face_list_face.py b/sdk/face/azure-ai-vision-face/generated_samples/face_list_operations_delete_large_face_list_face.py new file mode 100644 index 000000000000..40652c3a3381 --- /dev/null +++ b/sdk/face/azure-ai-vision-face/generated_samples/face_list_operations_delete_large_face_list_face.py @@ -0,0 +1,33 @@ +# coding=utf-8 +# -------------------------------------------------------------------------- +# Copyright (c) Microsoft Corporation. All rights reserved. +# Licensed under the MIT License. See License.txt in the project root for license information. +# Code generated by Microsoft (R) Python Code Generator. +# Changes may cause incorrect behavior and will be lost if the code is regenerated. 
+# -------------------------------------------------------------------------- + +from azure.ai.vision.face import FaceAdministrationClient + +""" +# PREREQUISITES + pip install azure-ai-vision-face +# USAGE + python face_list_operations_delete_large_face_list_face.py +""" + + +def main(): + client = FaceAdministrationClient( + endpoint="ENDPOINT", + credential="CREDENTIAL", + ) + + client.large_face_list.delete_face( + large_face_list_id="your_large_face_list_id", + persisted_face_id="43897a75-8d6f-42cf-885e-74832febb055", + ) + + +# x-ms-original-file: v1.2-preview.1/FaceListOperations_DeleteLargeFaceListFace.json +if __name__ == "__main__": + main() diff --git a/sdk/face/azure-ai-vision-face/generated_samples/face_list_operations_get_large_face_list.py b/sdk/face/azure-ai-vision-face/generated_samples/face_list_operations_get_large_face_list.py new file mode 100644 index 000000000000..c1e1af2fb3e8 --- /dev/null +++ b/sdk/face/azure-ai-vision-face/generated_samples/face_list_operations_get_large_face_list.py @@ -0,0 +1,33 @@ +# coding=utf-8 +# -------------------------------------------------------------------------- +# Copyright (c) Microsoft Corporation. All rights reserved. +# Licensed under the MIT License. See License.txt in the project root for license information. +# Code generated by Microsoft (R) Python Code Generator. +# Changes may cause incorrect behavior and will be lost if the code is regenerated. +# -------------------------------------------------------------------------- + +from azure.ai.vision.face import FaceAdministrationClient + +""" +# PREREQUISITES + pip install azure-ai-vision-face +# USAGE + python face_list_operations_get_large_face_list.py +""" + + +def main(): + client = FaceAdministrationClient( + endpoint="ENDPOINT", + credential="CREDENTIAL", + ) + + response = client.large_face_list.get( + large_face_list_id="your_large_face_list_id", + ) + print(response) + + +# x-ms-original-file: v1.2-preview.1/FaceListOperations_GetLargeFaceList.json +if __name__ == "__main__": + main() diff --git a/sdk/face/azure-ai-vision-face/generated_samples/face_list_operations_get_large_face_list_face.py b/sdk/face/azure-ai-vision-face/generated_samples/face_list_operations_get_large_face_list_face.py new file mode 100644 index 000000000000..6efa6e2a1616 --- /dev/null +++ b/sdk/face/azure-ai-vision-face/generated_samples/face_list_operations_get_large_face_list_face.py @@ -0,0 +1,34 @@ +# coding=utf-8 +# -------------------------------------------------------------------------- +# Copyright (c) Microsoft Corporation. All rights reserved. +# Licensed under the MIT License. See License.txt in the project root for license information. +# Code generated by Microsoft (R) Python Code Generator. +# Changes may cause incorrect behavior and will be lost if the code is regenerated. 
+# -------------------------------------------------------------------------- + +from azure.ai.vision.face import FaceAdministrationClient + +""" +# PREREQUISITES + pip install azure-ai-vision-face +# USAGE + python face_list_operations_get_large_face_list_face.py +""" + + +def main(): + client = FaceAdministrationClient( + endpoint="ENDPOINT", + credential="CREDENTIAL", + ) + + response = client.large_face_list.get_face( + large_face_list_id="your_large_face_list_id", + persisted_face_id="43897a75-8d6f-42cf-885e-74832febb055", + ) + print(response) + + +# x-ms-original-file: v1.2-preview.1/FaceListOperations_GetLargeFaceListFace.json +if __name__ == "__main__": + main() diff --git a/sdk/face/azure-ai-vision-face/generated_samples/face_list_operations_get_large_face_list_faces.py b/sdk/face/azure-ai-vision-face/generated_samples/face_list_operations_get_large_face_list_faces.py new file mode 100644 index 000000000000..c32e1f741852 --- /dev/null +++ b/sdk/face/azure-ai-vision-face/generated_samples/face_list_operations_get_large_face_list_faces.py @@ -0,0 +1,33 @@ +# coding=utf-8 +# -------------------------------------------------------------------------- +# Copyright (c) Microsoft Corporation. All rights reserved. +# Licensed under the MIT License. See License.txt in the project root for license information. +# Code generated by Microsoft (R) Python Code Generator. +# Changes may cause incorrect behavior and will be lost if the code is regenerated. +# -------------------------------------------------------------------------- + +from azure.ai.vision.face import FaceAdministrationClient + +""" +# PREREQUISITES + pip install azure-ai-vision-face +# USAGE + python face_list_operations_get_large_face_list_faces.py +""" + + +def main(): + client = FaceAdministrationClient( + endpoint="ENDPOINT", + credential="CREDENTIAL", + ) + + response = client.large_face_list.get_faces( + large_face_list_id="your_large_face_list_id", + ) + print(response) + + +# x-ms-original-file: v1.2-preview.1/FaceListOperations_GetLargeFaceListFaces.json +if __name__ == "__main__": + main() diff --git a/sdk/face/azure-ai-vision-face/generated_samples/face_list_operations_get_large_face_list_training_status.py b/sdk/face/azure-ai-vision-face/generated_samples/face_list_operations_get_large_face_list_training_status.py new file mode 100644 index 000000000000..3c8bdbb9cdc8 --- /dev/null +++ b/sdk/face/azure-ai-vision-face/generated_samples/face_list_operations_get_large_face_list_training_status.py @@ -0,0 +1,33 @@ +# coding=utf-8 +# -------------------------------------------------------------------------- +# Copyright (c) Microsoft Corporation. All rights reserved. +# Licensed under the MIT License. See License.txt in the project root for license information. +# Code generated by Microsoft (R) Python Code Generator. +# Changes may cause incorrect behavior and will be lost if the code is regenerated. 
+# -------------------------------------------------------------------------- + +from azure.ai.vision.face import FaceAdministrationClient + +""" +# PREREQUISITES + pip install azure-ai-vision-face +# USAGE + python face_list_operations_get_large_face_list_training_status.py +""" + + +def main(): + client = FaceAdministrationClient( + endpoint="ENDPOINT", + credential="CREDENTIAL", + ) + + response = client.large_face_list.get_training_status( + large_face_list_id="your_large_face_list_id", + ) + print(response) + + +# x-ms-original-file: v1.2-preview.1/FaceListOperations_GetLargeFaceListTrainingStatus.json +if __name__ == "__main__": + main() diff --git a/sdk/face/azure-ai-vision-face/generated_samples/face_list_operations_get_large_face_lists.py b/sdk/face/azure-ai-vision-face/generated_samples/face_list_operations_get_large_face_lists.py new file mode 100644 index 000000000000..656c12749622 --- /dev/null +++ b/sdk/face/azure-ai-vision-face/generated_samples/face_list_operations_get_large_face_lists.py @@ -0,0 +1,31 @@ +# coding=utf-8 +# -------------------------------------------------------------------------- +# Copyright (c) Microsoft Corporation. All rights reserved. +# Licensed under the MIT License. See License.txt in the project root for license information. +# Code generated by Microsoft (R) Python Code Generator. +# Changes may cause incorrect behavior and will be lost if the code is regenerated. +# -------------------------------------------------------------------------- + +from azure.ai.vision.face import FaceAdministrationClient + +""" +# PREREQUISITES + pip install azure-ai-vision-face +# USAGE + python face_list_operations_get_large_face_lists.py +""" + + +def main(): + client = FaceAdministrationClient( + endpoint="ENDPOINT", + credential="CREDENTIAL", + ) + + response = client.large_face_list.get_large_face_lists() + print(response) + + +# x-ms-original-file: v1.2-preview.1/FaceListOperations_GetLargeFaceLists.json +if __name__ == "__main__": + main() diff --git a/sdk/face/azure-ai-vision-face/generated_samples/face_list_operations_train_large_face_list.py b/sdk/face/azure-ai-vision-face/generated_samples/face_list_operations_train_large_face_list.py new file mode 100644 index 000000000000..f7a75806cff7 --- /dev/null +++ b/sdk/face/azure-ai-vision-face/generated_samples/face_list_operations_train_large_face_list.py @@ -0,0 +1,32 @@ +# coding=utf-8 +# -------------------------------------------------------------------------- +# Copyright (c) Microsoft Corporation. All rights reserved. +# Licensed under the MIT License. See License.txt in the project root for license information. +# Code generated by Microsoft (R) Python Code Generator. +# Changes may cause incorrect behavior and will be lost if the code is regenerated. 
+# -------------------------------------------------------------------------- + +from azure.ai.vision.face import FaceAdministrationClient + +""" +# PREREQUISITES + pip install azure-ai-vision-face +# USAGE + python face_list_operations_train_large_face_list.py +""" + + +def main(): + client = FaceAdministrationClient( + endpoint="ENDPOINT", + credential="CREDENTIAL", + ) + + client.large_face_list.begin_train( + large_face_list_id="your_large_face_list_id", + ).result() + + +# x-ms-original-file: v1.2-preview.1/FaceListOperations_TrainLargeFaceList.json +if __name__ == "__main__": + main() diff --git a/sdk/face/azure-ai-vision-face/generated_samples/face_list_operations_update_large_face_list.py b/sdk/face/azure-ai-vision-face/generated_samples/face_list_operations_update_large_face_list.py new file mode 100644 index 000000000000..2259f284a121 --- /dev/null +++ b/sdk/face/azure-ai-vision-face/generated_samples/face_list_operations_update_large_face_list.py @@ -0,0 +1,33 @@ +# coding=utf-8 +# -------------------------------------------------------------------------- +# Copyright (c) Microsoft Corporation. All rights reserved. +# Licensed under the MIT License. See License.txt in the project root for license information. +# Code generated by Microsoft (R) Python Code Generator. +# Changes may cause incorrect behavior and will be lost if the code is regenerated. +# -------------------------------------------------------------------------- + +from azure.ai.vision.face import FaceAdministrationClient + +""" +# PREREQUISITES + pip install azure-ai-vision-face +# USAGE + python face_list_operations_update_large_face_list.py +""" + + +def main(): + client = FaceAdministrationClient( + endpoint="ENDPOINT", + credential="CREDENTIAL", + ) + + client.large_face_list.update( + large_face_list_id="your_large_face_list_id", + body={"name": "your_large_face_list_name", "userData": "your_user_data"}, + ) + + +# x-ms-original-file: v1.2-preview.1/FaceListOperations_UpdateLargeFaceList.json +if __name__ == "__main__": + main() diff --git a/sdk/face/azure-ai-vision-face/generated_samples/face_list_operations_update_large_face_list_face.py b/sdk/face/azure-ai-vision-face/generated_samples/face_list_operations_update_large_face_list_face.py new file mode 100644 index 000000000000..a0d72eab3f89 --- /dev/null +++ b/sdk/face/azure-ai-vision-face/generated_samples/face_list_operations_update_large_face_list_face.py @@ -0,0 +1,34 @@ +# coding=utf-8 +# -------------------------------------------------------------------------- +# Copyright (c) Microsoft Corporation. All rights reserved. +# Licensed under the MIT License. See License.txt in the project root for license information. +# Code generated by Microsoft (R) Python Code Generator. +# Changes may cause incorrect behavior and will be lost if the code is regenerated. 
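Taken together, the Large Face List samples form the standard lifecycle: create the list, add faces, train, then query from `FaceClient`. A hedged end-to-end sketch stitching them together (the dict bodies and keyword names follow the samples above; `find_similar_from_large_face_list` requires training to have completed):

```python
from azure.core.credentials import AzureKeyCredential
from azure.ai.vision.face import FaceAdministrationClient, FaceClient

endpoint, key = "ENDPOINT", "KEY"
list_id = "your_large_face_list_id"

with FaceAdministrationClient(endpoint=endpoint, credential=AzureKeyCredential(key)) as admin_client:
    admin_client.large_face_list.create(
        large_face_list_id=list_id,
        body={"name": "your_large_face_list_name", "recognitionModel": "recognition_04"},
    )
    with open("face.jpg", "rb") as fd:
        admin_client.large_face_list.add_face(large_face_list_id=list_id, image_content=fd.read())
    # Training is a long-running operation; block until it finishes.
    admin_client.large_face_list.begin_train(large_face_list_id=list_id).result()

with FaceClient(endpoint=endpoint, credential=AzureKeyCredential(key)) as face_client:
    # face_id is assumed to come from a prior FaceClient.detect call.
    similar = face_client.find_similar_from_large_face_list(
        face_id="your_face_id",
        large_face_list_id=list_id,
    )
    print([s.as_dict() for s in similar])
```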
+# --------------------------------------------------------------------------
+
+from azure.ai.vision.face import FaceAdministrationClient
+
+"""
+# PREREQUISITES
+    pip install azure-ai-vision-face
+# USAGE
+    python face_list_operations_update_large_face_list_face.py
+"""
+
+
+def main():
+    client = FaceAdministrationClient(
+        endpoint="ENDPOINT",
+        credential="CREDENTIAL",
+    )
+
+    client.large_face_list.update_face(
+        large_face_list_id="your_large_face_list_id",
+        persisted_face_id="43897a75-8d6f-42cf-885e-74832febb055",
+        body={"userData": "your_user_data"},
+    )
+
+
+# x-ms-original-file: v1.2-preview.1/FaceListOperations_UpdateLargeFaceListFace.json
+if __name__ == "__main__":
+    main()
diff --git a/sdk/face/azure-ai-vision-face/generated_samples/liveness_session_operations_create_liveness_session.py b/sdk/face/azure-ai-vision-face/generated_samples/liveness_session_operations_create_liveness_session.py
new file mode 100644
index 000000000000..870b2adf0dd5
--- /dev/null
+++ b/sdk/face/azure-ai-vision-face/generated_samples/liveness_session_operations_create_liveness_session.py
@@ -0,0 +1,39 @@
+# coding=utf-8
+# --------------------------------------------------------------------------
+# Copyright (c) Microsoft Corporation. All rights reserved.
+# Licensed under the MIT License. See License.txt in the project root for license information.
+# Code generated by Microsoft (R) Python Code Generator.
+# Changes may cause incorrect behavior and will be lost if the code is regenerated.
+# --------------------------------------------------------------------------
+
+from azure.ai.vision.face import FaceSessionClient
+
+"""
+# PREREQUISITES
+    pip install azure-ai-vision-face
+# USAGE
+    python liveness_session_operations_create_liveness_session.py
+"""
+
+
+def main():
+    # Liveness sessions are owned by FaceSessionClient, not FaceAdministrationClient.
+    client = FaceSessionClient(
+        endpoint="ENDPOINT",
+        credential="CREDENTIAL",
+    )
+
+    response = client.create_liveness_session(
+        body={
+            "authTokenTimeToLiveInSeconds": 60,
+            "deviceCorrelationId": "your_device_correlation_id",
+            "deviceCorrelationIdSetInClient": True,
+            "livenessOperationMode": "Passive",
+            "sendResultsToClient": True,
+        },
+    )
+    print(response)
+
+
+# x-ms-original-file: v1.2-preview.1/LivenessSessionOperations_CreateLivenessSession.json
+if __name__ == "__main__":
+    main()
diff --git a/sdk/face/azure-ai-vision-face/generated_samples/liveness_session_operations_delete_liveness_session.py b/sdk/face/azure-ai-vision-face/generated_samples/liveness_session_operations_delete_liveness_session.py
new file mode 100644
index 000000000000..328aedebfcd2
--- /dev/null
+++ b/sdk/face/azure-ai-vision-face/generated_samples/liveness_session_operations_delete_liveness_session.py
@@ -0,0 +1,32 @@
+# coding=utf-8
+# --------------------------------------------------------------------------
+# Copyright (c) Microsoft Corporation. All rights reserved.
+# Licensed under the MIT License. See License.txt in the project root for license information.
+# Code generated by Microsoft (R) Python Code Generator.
+# Changes may cause incorrect behavior and will be lost if the code is regenerated.
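In a real deployment the create-session response is split: the app service keeps the session id for later result queries, while the short-lived auth token is forwarded to the client device for use with the client-side liveness SDK. The wire field names below follow the REST reference and are assumptions of this sketch:

```python
from azure.core.credentials import AzureKeyCredential
from azure.ai.vision.face import FaceSessionClient

with FaceSessionClient(endpoint="ENDPOINT", credential=AzureKeyCredential("KEY")) as session_client:
    result = session_client.create_liveness_session(
        body={
            "livenessOperationMode": "Passive",
            "deviceCorrelationId": "your_device_correlation_id",
        },
    )
    # Models are MutableMapping-compatible; "sessionId"/"authToken" per the REST docs.
    session_id = result["sessionId"]  # kept by the app service
    auth_token = result["authToken"]  # handed to the client device
    print(session_id, auth_token)
```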
+# --------------------------------------------------------------------------
+
+from azure.ai.vision.face import FaceSessionClient
+
+"""
+# PREREQUISITES
+    pip install azure-ai-vision-face
+# USAGE
+    python liveness_session_operations_delete_liveness_session.py
+"""
+
+
+def main():
+    client = FaceSessionClient(
+        endpoint="ENDPOINT",
+        credential="CREDENTIAL",
+    )
+
+    client.delete_liveness_session(
+        session_id="b12e033e-bda7-4b83-a211-e721c661f30e",
+    )
+
+
+# x-ms-original-file: v1.2-preview.1/LivenessSessionOperations_DeleteLivenessSession.json
+if __name__ == "__main__":
+    main()
diff --git a/sdk/face/azure-ai-vision-face/generated_samples/liveness_session_operations_delete_liveness_with_verify_session.py b/sdk/face/azure-ai-vision-face/generated_samples/liveness_session_operations_delete_liveness_with_verify_session.py
new file mode 100644
index 000000000000..5a07df9ede03
--- /dev/null
+++ b/sdk/face/azure-ai-vision-face/generated_samples/liveness_session_operations_delete_liveness_with_verify_session.py
@@ -0,0 +1,32 @@
+# coding=utf-8
+# --------------------------------------------------------------------------
+# Copyright (c) Microsoft Corporation. All rights reserved.
+# Licensed under the MIT License. See License.txt in the project root for license information.
+# Code generated by Microsoft (R) Python Code Generator.
+# Changes may cause incorrect behavior and will be lost if the code is regenerated.
+# --------------------------------------------------------------------------
+
+from azure.ai.vision.face import FaceSessionClient
+
+"""
+# PREREQUISITES
+    pip install azure-ai-vision-face
+# USAGE
+    python liveness_session_operations_delete_liveness_with_verify_session.py
+"""
+
+
+def main():
+    client = FaceSessionClient(
+        endpoint="ENDPOINT",
+        credential="CREDENTIAL",
+    )
+
+    client.delete_liveness_with_verify_session(
+        session_id="b12e033e-bda7-4b83-a211-e721c661f30e",
+    )
+
+
+# x-ms-original-file: v1.2-preview.1/LivenessSessionOperations_DeleteLivenessWithVerifySession.json
+if __name__ == "__main__":
+    main()
diff --git a/sdk/face/azure-ai-vision-face/generated_samples/liveness_session_operations_get_liveness_session_audit_entries.py b/sdk/face/azure-ai-vision-face/generated_samples/liveness_session_operations_get_liveness_session_audit_entries.py
new file mode 100644
index 000000000000..8e1378c21b41
--- /dev/null
+++ b/sdk/face/azure-ai-vision-face/generated_samples/liveness_session_operations_get_liveness_session_audit_entries.py
@@ -0,0 +1,33 @@
+# coding=utf-8
+# --------------------------------------------------------------------------
+# Copyright (c) Microsoft Corporation. All rights reserved.
+# Licensed under the MIT License. See License.txt in the project root for license information.
+# Code generated by Microsoft (R) Python Code Generator.
+# Changes may cause incorrect behavior and will be lost if the code is regenerated.
+# --------------------------------------------------------------------------
+
+from azure.ai.vision.face import FaceSessionClient
+
+"""
+# PREREQUISITES
+    pip install azure-ai-vision-face
+# USAGE
+    python liveness_session_operations_get_liveness_session_audit_entries.py
+"""
+
+
+def main():
+    client = FaceSessionClient(
+        endpoint="ENDPOINT",
+        credential="CREDENTIAL",
+    )
+
+    response = client.get_liveness_session_audit_entries(
+        session_id="b12e033e-bda7-4b83-a211-e721c661f30e",
+    )
+    print(response)
+
+
+# x-ms-original-file: v1.2-preview.1/LivenessSessionOperations_GetLivenessSessionAuditEntries.json
+if __name__ == "__main__":
+    main()
diff --git a/sdk/face/azure-ai-vision-face/generated_samples/liveness_session_operations_get_liveness_session_result.py b/sdk/face/azure-ai-vision-face/generated_samples/liveness_session_operations_get_liveness_session_result.py
new file mode 100644
index 000000000000..171c716c0ddf
--- /dev/null
+++ b/sdk/face/azure-ai-vision-face/generated_samples/liveness_session_operations_get_liveness_session_result.py
@@ -0,0 +1,33 @@
+# coding=utf-8
+# --------------------------------------------------------------------------
+# Copyright (c) Microsoft Corporation. All rights reserved.
+# Licensed under the MIT License. See License.txt in the project root for license information.
+# Code generated by Microsoft (R) Python Code Generator.
+# Changes may cause incorrect behavior and will be lost if the code is regenerated.
+# --------------------------------------------------------------------------
+
+from azure.ai.vision.face import FaceSessionClient
+
+"""
+# PREREQUISITES
+    pip install azure-ai-vision-face
+# USAGE
+    python liveness_session_operations_get_liveness_session_result.py
+"""
+
+
+def main():
+    client = FaceSessionClient(
+        endpoint="ENDPOINT",
+        credential="CREDENTIAL",
+    )
+
+    response = client.get_liveness_session_result(
+        session_id="b12e033e-bda7-4b83-a211-e721c661f30e",
+    )
+    print(response)
+
+
+# x-ms-original-file: v1.2-preview.1/LivenessSessionOperations_GetLivenessSessionResult.json
+if __name__ == "__main__":
+    main()
diff --git a/sdk/face/azure-ai-vision-face/generated_samples/liveness_session_operations_get_liveness_sessions.py b/sdk/face/azure-ai-vision-face/generated_samples/liveness_session_operations_get_liveness_sessions.py
new file mode 100644
index 000000000000..0c5415da59b7
--- /dev/null
+++ b/sdk/face/azure-ai-vision-face/generated_samples/liveness_session_operations_get_liveness_sessions.py
@@ -0,0 +1,31 @@
+# coding=utf-8
+# --------------------------------------------------------------------------
+# Copyright (c) Microsoft Corporation. All rights reserved.
+# Licensed under the MIT License. See License.txt in the project root for license information.
+# Code generated by Microsoft (R) Python Code Generator.
+# Changes may cause incorrect behavior and will be lost if the code is regenerated.
+# --------------------------------------------------------------------------
+
+from azure.ai.vision.face import FaceSessionClient
+
+"""
+# PREREQUISITES
+    pip install azure-ai-vision-face
+# USAGE
+    python liveness_session_operations_get_liveness_sessions.py
+"""
+
+
+def main():
+    client = FaceSessionClient(
+        endpoint="ENDPOINT",
+        credential="CREDENTIAL",
+    )
+
+    response = client.get_liveness_sessions()
+    print(response)
+
+
+# x-ms-original-file: v1.2-preview.1/LivenessSessionOperations_GetLivenessSessions.json
+if __name__ == "__main__":
+    main()
diff --git a/sdk/face/azure-ai-vision-face/generated_samples/liveness_session_operations_get_liveness_with_verify_session_audit_entries.py b/sdk/face/azure-ai-vision-face/generated_samples/liveness_session_operations_get_liveness_with_verify_session_audit_entries.py
new file mode 100644
index 000000000000..b7549952e8e8
--- /dev/null
+++ b/sdk/face/azure-ai-vision-face/generated_samples/liveness_session_operations_get_liveness_with_verify_session_audit_entries.py
@@ -0,0 +1,33 @@
+# coding=utf-8
+# --------------------------------------------------------------------------
+# Copyright (c) Microsoft Corporation. All rights reserved.
+# Licensed under the MIT License. See License.txt in the project root for license information.
+# Code generated by Microsoft (R) Python Code Generator.
+# Changes may cause incorrect behavior and will be lost if the code is regenerated.
+# --------------------------------------------------------------------------
+
+from azure.ai.vision.face import FaceSessionClient
+
+"""
+# PREREQUISITES
+    pip install azure-ai-vision-face
+# USAGE
+    python liveness_session_operations_get_liveness_with_verify_session_audit_entries.py
+"""
+
+
+def main():
+    client = FaceSessionClient(
+        endpoint="ENDPOINT",
+        credential="CREDENTIAL",
+    )
+
+    response = client.get_liveness_with_verify_session_audit_entries(
+        session_id="b12e033e-bda7-4b83-a211-e721c661f30e",
+    )
+    print(response)
+
+
+# x-ms-original-file: v1.2-preview.1/LivenessSessionOperations_GetLivenessWithVerifySessionAuditEntries.json
+if __name__ == "__main__":
+    main()
diff --git a/sdk/face/azure-ai-vision-face/generated_samples/liveness_session_operations_get_liveness_with_verify_session_result.py b/sdk/face/azure-ai-vision-face/generated_samples/liveness_session_operations_get_liveness_with_verify_session_result.py
new file mode 100644
index 000000000000..ca331fd5c973
--- /dev/null
+++ b/sdk/face/azure-ai-vision-face/generated_samples/liveness_session_operations_get_liveness_with_verify_session_result.py
@@ -0,0 +1,33 @@
+# coding=utf-8
+# --------------------------------------------------------------------------
+# Copyright (c) Microsoft Corporation. All rights reserved.
+# Licensed under the MIT License. See License.txt in the project root for license information.
+# Code generated by Microsoft (R) Python Code Generator.
+# Changes may cause incorrect behavior and will be lost if the code is regenerated.
+# --------------------------------------------------------------------------
+
+from azure.ai.vision.face import FaceSessionClient
+
+"""
+# PREREQUISITES
+    pip install azure-ai-vision-face
+# USAGE
+    python liveness_session_operations_get_liveness_with_verify_session_result.py
+"""
+
+
+def main():
+    client = FaceSessionClient(
+        endpoint="ENDPOINT",
+        credential="CREDENTIAL",
+    )
+
+    response = client.get_liveness_with_verify_session_result(
+        session_id="b12e033e-bda7-4b83-a211-e721c661f30e",
+    )
+    print(response)
+
+
+# x-ms-original-file: v1.2-preview.1/LivenessSessionOperations_GetLivenessWithVerifySessionResult.json
+if __name__ == "__main__":
+    main()
diff --git a/sdk/face/azure-ai-vision-face/generated_samples/liveness_session_operations_get_liveness_with_verify_sessions.py b/sdk/face/azure-ai-vision-face/generated_samples/liveness_session_operations_get_liveness_with_verify_sessions.py
new file mode 100644
index 000000000000..b1c6fa5efb4d
--- /dev/null
+++ b/sdk/face/azure-ai-vision-face/generated_samples/liveness_session_operations_get_liveness_with_verify_sessions.py
@@ -0,0 +1,31 @@
+# coding=utf-8
+# --------------------------------------------------------------------------
+# Copyright (c) Microsoft Corporation. All rights reserved.
+# Licensed under the MIT License. See License.txt in the project root for license information.
+# Code generated by Microsoft (R) Python Code Generator.
+# Changes may cause incorrect behavior and will be lost if the code is regenerated.
+# --------------------------------------------------------------------------
+
+from azure.ai.vision.face import FaceSessionClient
+
+"""
+# PREREQUISITES
+    pip install azure-ai-vision-face
+# USAGE
+    python liveness_session_operations_get_liveness_with_verify_sessions.py
+"""
+
+
+def main():
+    client = FaceSessionClient(
+        endpoint="ENDPOINT",
+        credential="CREDENTIAL",
+    )
+
+    response = client.get_liveness_with_verify_sessions()
+    print(response)
+
+
+# x-ms-original-file: v1.2-preview.1/LivenessSessionOperations_GetLivenessWithVerifySessions.json
+if __name__ == "__main__":
+    main()
diff --git a/sdk/face/azure-ai-vision-face/generated_samples/liveness_session_operations_get_session_image.py b/sdk/face/azure-ai-vision-face/generated_samples/liveness_session_operations_get_session_image.py
new file mode 100644
index 000000000000..e809aacf02c0
--- /dev/null
+++ b/sdk/face/azure-ai-vision-face/generated_samples/liveness_session_operations_get_session_image.py
@@ -0,0 +1,33 @@
+# coding=utf-8
+# --------------------------------------------------------------------------
+# Copyright (c) Microsoft Corporation. All rights reserved.
+# Licensed under the MIT License. See License.txt in the project root for license information.
+# Code generated by Microsoft (R) Python Code Generator.
+# Changes may cause incorrect behavior and will be lost if the code is regenerated.
+# --------------------------------------------------------------------------
+
+from azure.ai.vision.face import FaceSessionClient
+
+"""
+# PREREQUISITES
+    pip install azure-ai-vision-face
+# USAGE
+    python liveness_session_operations_get_session_image.py
+"""
+
+
+def main():
+    client = FaceSessionClient(
+        endpoint="ENDPOINT",
+        credential="CREDENTIAL",
+    )
+
+    response = client.get_session_image(
+        session_image_id="3d035d35-2e01-4ed4-8935-577afde9caaa",
+    )
+    print(response)
+
+
+# x-ms-original-file: v1.2-preview.1/LivenessSessionOperations_GetSessionImage.json
+if __name__ == "__main__":
+    main()
diff --git a/sdk/face/azure-ai-vision-face/generated_samples/person_group_operations_add_large_person_group_person_face_from_stream.py b/sdk/face/azure-ai-vision-face/generated_samples/person_group_operations_add_large_person_group_person_face_from_stream.py
new file mode 100644
index 000000000000..8b5fb2c3484b
--- /dev/null
+++ b/sdk/face/azure-ai-vision-face/generated_samples/person_group_operations_add_large_person_group_person_face_from_stream.py
@@ -0,0 +1,35 @@
+# coding=utf-8
+# --------------------------------------------------------------------------
+# Copyright (c) Microsoft Corporation. All rights reserved.
+# Licensed under the MIT License. See License.txt in the project root for license information.
+# Code generated by Microsoft (R) Python Code Generator.
+# Changes may cause incorrect behavior and will be lost if the code is regenerated.
+# --------------------------------------------------------------------------
+
+from azure.ai.vision.face import FaceAdministrationClient
+
+"""
+# PREREQUISITES
+    pip install azure-ai-vision-face
+# USAGE
+    python person_group_operations_add_large_person_group_person_face_from_stream.py
+"""
+
+
+def main():
+    client = FaceAdministrationClient(
+        endpoint="ENDPOINT",
+        credential="CREDENTIAL",
+    )
+
+    response = client.large_person_group.add_face(
+        large_person_group_id="your_large_person_group_id",
+        person_id="25985303-c537-4467-b41d-bdb45cd95ca1",
+        image_content="",
+    )
+    print(response)
+
+
+# x-ms-original-file: v1.2-preview.1/PersonGroupOperations_AddLargePersonGroupPersonFaceFromStream.json
+if __name__ == "__main__":
+    main()
diff --git a/sdk/face/azure-ai-vision-face/generated_samples/person_group_operations_delete_large_person_group.py b/sdk/face/azure-ai-vision-face/generated_samples/person_group_operations_delete_large_person_group.py
new file mode 100644
index 000000000000..aae50a47f38f
--- /dev/null
+++ b/sdk/face/azure-ai-vision-face/generated_samples/person_group_operations_delete_large_person_group.py
@@ -0,0 +1,32 @@
+# coding=utf-8
+# --------------------------------------------------------------------------
+# Copyright (c) Microsoft Corporation. All rights reserved.
+# Licensed under the MIT License. See License.txt in the project root for license information.
+# Code generated by Microsoft (R) Python Code Generator.
+# Changes may cause incorrect behavior and will be lost if the code is regenerated.
+# -------------------------------------------------------------------------- + +from azure.ai.vision.face import FaceAdministrationClient + +""" +# PREREQUISITES + pip install azure-ai-vision-face +# USAGE + python person_group_operations_delete_large_person_group.py +""" + + +def main(): + client = FaceAdministrationClient( + endpoint="ENDPOINT", + credential="CREDENTIAL", + ) + + client.large_person_group.delete( + large_person_group_id="your_large_person_group_id", + ) + + +# x-ms-original-file: v1.2-preview.1/PersonGroupOperations_DeleteLargePersonGroup.json +if __name__ == "__main__": + main() diff --git a/sdk/face/azure-ai-vision-face/generated_samples/person_group_operations_delete_large_person_group_person.py b/sdk/face/azure-ai-vision-face/generated_samples/person_group_operations_delete_large_person_group_person.py new file mode 100644 index 000000000000..3c5018bc1b8e --- /dev/null +++ b/sdk/face/azure-ai-vision-face/generated_samples/person_group_operations_delete_large_person_group_person.py @@ -0,0 +1,33 @@ +# coding=utf-8 +# -------------------------------------------------------------------------- +# Copyright (c) Microsoft Corporation. All rights reserved. +# Licensed under the MIT License. See License.txt in the project root for license information. +# Code generated by Microsoft (R) Python Code Generator. +# Changes may cause incorrect behavior and will be lost if the code is regenerated. +# -------------------------------------------------------------------------- + +from azure.ai.vision.face import FaceAdministrationClient + +""" +# PREREQUISITES + pip install azure-ai-vision-face +# USAGE + python person_group_operations_delete_large_person_group_person.py +""" + + +def main(): + client = FaceAdministrationClient( + endpoint="ENDPOINT", + credential="CREDENTIAL", + ) + + client.large_person_group.delete_person( + large_person_group_id="your_large_person_group_id", + person_id="25985303-c537-4467-b41d-bdb45cd95ca1", + ) + + +# x-ms-original-file: v1.2-preview.1/PersonGroupOperations_DeleteLargePersonGroupPerson.json +if __name__ == "__main__": + main() diff --git a/sdk/face/azure-ai-vision-face/generated_samples/person_group_operations_delete_large_person_group_person_face.py b/sdk/face/azure-ai-vision-face/generated_samples/person_group_operations_delete_large_person_group_person_face.py new file mode 100644 index 000000000000..9b2abd194e32 --- /dev/null +++ b/sdk/face/azure-ai-vision-face/generated_samples/person_group_operations_delete_large_person_group_person_face.py @@ -0,0 +1,34 @@ +# coding=utf-8 +# -------------------------------------------------------------------------- +# Copyright (c) Microsoft Corporation. All rights reserved. +# Licensed under the MIT License. See License.txt in the project root for license information. +# Code generated by Microsoft (R) Python Code Generator. +# Changes may cause incorrect behavior and will be lost if the code is regenerated. 
+# -------------------------------------------------------------------------- + +from azure.ai.vision.face import FaceAdministrationClient + +""" +# PREREQUISITES + pip install azure-ai-vision-face +# USAGE + python person_group_operations_delete_large_person_group_person_face.py +""" + + +def main(): + client = FaceAdministrationClient( + endpoint="ENDPOINT", + credential="CREDENTIAL", + ) + + client.large_person_group.delete_face( + large_person_group_id="your_large_person_group_id", + person_id="25985303-c537-4467-b41d-bdb45cd95ca1", + persisted_face_id="43897a75-8d6f-42cf-885e-74832febb055", + ) + + +# x-ms-original-file: v1.2-preview.1/PersonGroupOperations_DeleteLargePersonGroupPersonFace.json +if __name__ == "__main__": + main() diff --git a/sdk/face/azure-ai-vision-face/generated_samples/person_group_operations_get_large_person_group.py b/sdk/face/azure-ai-vision-face/generated_samples/person_group_operations_get_large_person_group.py new file mode 100644 index 000000000000..1443601f67ff --- /dev/null +++ b/sdk/face/azure-ai-vision-face/generated_samples/person_group_operations_get_large_person_group.py @@ -0,0 +1,33 @@ +# coding=utf-8 +# -------------------------------------------------------------------------- +# Copyright (c) Microsoft Corporation. All rights reserved. +# Licensed under the MIT License. See License.txt in the project root for license information. +# Code generated by Microsoft (R) Python Code Generator. +# Changes may cause incorrect behavior and will be lost if the code is regenerated. +# -------------------------------------------------------------------------- + +from azure.ai.vision.face import FaceAdministrationClient + +""" +# PREREQUISITES + pip install azure-ai-vision-face +# USAGE + python person_group_operations_get_large_person_group.py +""" + + +def main(): + client = FaceAdministrationClient( + endpoint="ENDPOINT", + credential="CREDENTIAL", + ) + + response = client.large_person_group.get( + large_person_group_id="your_large_person_group_id", + ) + print(response) + + +# x-ms-original-file: v1.2-preview.1/PersonGroupOperations_GetLargePersonGroup.json +if __name__ == "__main__": + main() diff --git a/sdk/face/azure-ai-vision-face/generated_samples/person_group_operations_get_large_person_group_person.py b/sdk/face/azure-ai-vision-face/generated_samples/person_group_operations_get_large_person_group_person.py new file mode 100644 index 000000000000..eba005b7ab3a --- /dev/null +++ b/sdk/face/azure-ai-vision-face/generated_samples/person_group_operations_get_large_person_group_person.py @@ -0,0 +1,34 @@ +# coding=utf-8 +# -------------------------------------------------------------------------- +# Copyright (c) Microsoft Corporation. All rights reserved. +# Licensed under the MIT License. See License.txt in the project root for license information. +# Code generated by Microsoft (R) Python Code Generator. +# Changes may cause incorrect behavior and will be lost if the code is regenerated. 
+# -------------------------------------------------------------------------- + +from azure.ai.vision.face import FaceAdministrationClient + +""" +# PREREQUISITES + pip install azure-ai-vision-face +# USAGE + python person_group_operations_get_large_person_group_person.py +""" + + +def main(): + client = FaceAdministrationClient( + endpoint="ENDPOINT", + credential="CREDENTIAL", + ) + + response = client.large_person_group.get_person( + large_person_group_id="your_large_person_group_id", + person_id="25985303-c537-4467-b41d-bdb45cd95ca1", + ) + print(response) + + +# x-ms-original-file: v1.2-preview.1/PersonGroupOperations_GetLargePersonGroupPerson.json +if __name__ == "__main__": + main() diff --git a/sdk/face/azure-ai-vision-face/generated_samples/person_group_operations_get_large_person_group_person_face.py b/sdk/face/azure-ai-vision-face/generated_samples/person_group_operations_get_large_person_group_person_face.py new file mode 100644 index 000000000000..77ac553f6538 --- /dev/null +++ b/sdk/face/azure-ai-vision-face/generated_samples/person_group_operations_get_large_person_group_person_face.py @@ -0,0 +1,35 @@ +# coding=utf-8 +# -------------------------------------------------------------------------- +# Copyright (c) Microsoft Corporation. All rights reserved. +# Licensed under the MIT License. See License.txt in the project root for license information. +# Code generated by Microsoft (R) Python Code Generator. +# Changes may cause incorrect behavior and will be lost if the code is regenerated. +# -------------------------------------------------------------------------- + +from azure.ai.vision.face import FaceAdministrationClient + +""" +# PREREQUISITES + pip install azure-ai-vision-face +# USAGE + python person_group_operations_get_large_person_group_person_face.py +""" + + +def main(): + client = FaceAdministrationClient( + endpoint="ENDPOINT", + credential="CREDENTIAL", + ) + + response = client.large_person_group.get_face( + large_person_group_id="your_large_person_group_id", + person_id="25985303-c537-4467-b41d-bdb45cd95ca1", + persisted_face_id="43897a75-8d6f-42cf-885e-74832febb055", + ) + print(response) + + +# x-ms-original-file: v1.2-preview.1/PersonGroupOperations_GetLargePersonGroupPersonFace.json +if __name__ == "__main__": + main() diff --git a/sdk/face/azure-ai-vision-face/generated_samples/person_group_operations_get_large_person_group_persons.py b/sdk/face/azure-ai-vision-face/generated_samples/person_group_operations_get_large_person_group_persons.py new file mode 100644 index 000000000000..dd1e8776dc0b --- /dev/null +++ b/sdk/face/azure-ai-vision-face/generated_samples/person_group_operations_get_large_person_group_persons.py @@ -0,0 +1,33 @@ +# coding=utf-8 +# -------------------------------------------------------------------------- +# Copyright (c) Microsoft Corporation. All rights reserved. +# Licensed under the MIT License. See License.txt in the project root for license information. +# Code generated by Microsoft (R) Python Code Generator. +# Changes may cause incorrect behavior and will be lost if the code is regenerated. 
+# -------------------------------------------------------------------------- + +from azure.ai.vision.face import FaceAdministrationClient + +""" +# PREREQUISITES + pip install azure-ai-vision-face +# USAGE + python person_group_operations_get_large_person_group_persons.py +""" + + +def main(): + client = FaceAdministrationClient( + endpoint="ENDPOINT", + credential="CREDENTIAL", + ) + + response = client.large_person_group.get_persons( + large_person_group_id="your_large_person_group_id", + ) + print(response) + + +# x-ms-original-file: v1.2-preview.1/PersonGroupOperations_GetLargePersonGroupPersons.json +if __name__ == "__main__": + main() diff --git a/sdk/face/azure-ai-vision-face/generated_samples/person_group_operations_get_large_person_group_training_status.py b/sdk/face/azure-ai-vision-face/generated_samples/person_group_operations_get_large_person_group_training_status.py new file mode 100644 index 000000000000..263bd708cb98 --- /dev/null +++ b/sdk/face/azure-ai-vision-face/generated_samples/person_group_operations_get_large_person_group_training_status.py @@ -0,0 +1,33 @@ +# coding=utf-8 +# -------------------------------------------------------------------------- +# Copyright (c) Microsoft Corporation. All rights reserved. +# Licensed under the MIT License. See License.txt in the project root for license information. +# Code generated by Microsoft (R) Python Code Generator. +# Changes may cause incorrect behavior and will be lost if the code is regenerated. +# -------------------------------------------------------------------------- + +from azure.ai.vision.face import FaceAdministrationClient + +""" +# PREREQUISITES + pip install azure-ai-vision-face +# USAGE + python person_group_operations_get_large_person_group_training_status.py +""" + + +def main(): + client = FaceAdministrationClient( + endpoint="ENDPOINT", + credential="CREDENTIAL", + ) + + response = client.large_person_group.get_training_status( + large_person_group_id="your_large_person_group_id", + ) + print(response) + + +# x-ms-original-file: v1.2-preview.1/PersonGroupOperations_GetLargePersonGroupTrainingStatus.json +if __name__ == "__main__": + main() diff --git a/sdk/face/azure-ai-vision-face/generated_samples/person_group_operations_get_large_person_groups.py b/sdk/face/azure-ai-vision-face/generated_samples/person_group_operations_get_large_person_groups.py new file mode 100644 index 000000000000..c5c7fba6e248 --- /dev/null +++ b/sdk/face/azure-ai-vision-face/generated_samples/person_group_operations_get_large_person_groups.py @@ -0,0 +1,31 @@ +# coding=utf-8 +# -------------------------------------------------------------------------- +# Copyright (c) Microsoft Corporation. All rights reserved. +# Licensed under the MIT License. See License.txt in the project root for license information. +# Code generated by Microsoft (R) Python Code Generator. +# Changes may cause incorrect behavior and will be lost if the code is regenerated. 
+# -------------------------------------------------------------------------- + +from azure.ai.vision.face import FaceAdministrationClient + +""" +# PREREQUISITES + pip install azure-ai-vision-face +# USAGE + python person_group_operations_get_large_person_groups.py +""" + + +def main(): + client = FaceAdministrationClient( + endpoint="ENDPOINT", + credential="CREDENTIAL", + ) + + response = client.large_person_group.get_large_person_groups() + print(response) + + +# x-ms-original-file: v1.2-preview.1/PersonGroupOperations_GetLargePersonGroups.json +if __name__ == "__main__": + main() diff --git a/sdk/face/azure-ai-vision-face/generated_samples/person_group_operations_train_large_person_group.py b/sdk/face/azure-ai-vision-face/generated_samples/person_group_operations_train_large_person_group.py new file mode 100644 index 000000000000..2498946b61e7 --- /dev/null +++ b/sdk/face/azure-ai-vision-face/generated_samples/person_group_operations_train_large_person_group.py @@ -0,0 +1,32 @@ +# coding=utf-8 +# -------------------------------------------------------------------------- +# Copyright (c) Microsoft Corporation. All rights reserved. +# Licensed under the MIT License. See License.txt in the project root for license information. +# Code generated by Microsoft (R) Python Code Generator. +# Changes may cause incorrect behavior and will be lost if the code is regenerated. +# -------------------------------------------------------------------------- + +from azure.ai.vision.face import FaceAdministrationClient + +""" +# PREREQUISITES + pip install azure-ai-vision-face +# USAGE + python person_group_operations_train_large_person_group.py +""" + + +def main(): + client = FaceAdministrationClient( + endpoint="ENDPOINT", + credential="CREDENTIAL", + ) + + client.large_person_group.begin_train( + large_person_group_id="your_large_person_group_id", + ).result() + + +# x-ms-original-file: v1.2-preview.1/PersonGroupOperations_TrainLargePersonGroup.json +if __name__ == "__main__": + main() diff --git a/sdk/face/azure-ai-vision-face/generated_samples/person_group_operations_update_large_person_group.py b/sdk/face/azure-ai-vision-face/generated_samples/person_group_operations_update_large_person_group.py new file mode 100644 index 000000000000..ebd472d7bdb2 --- /dev/null +++ b/sdk/face/azure-ai-vision-face/generated_samples/person_group_operations_update_large_person_group.py @@ -0,0 +1,33 @@ +# coding=utf-8 +# -------------------------------------------------------------------------- +# Copyright (c) Microsoft Corporation. All rights reserved. +# Licensed under the MIT License. See License.txt in the project root for license information. +# Code generated by Microsoft (R) Python Code Generator. +# Changes may cause incorrect behavior and will be lost if the code is regenerated. 
+# -------------------------------------------------------------------------- + +from azure.ai.vision.face import FaceAdministrationClient + +""" +# PREREQUISITES + pip install azure-ai-vision-face +# USAGE + python person_group_operations_update_large_person_group.py +""" + + +def main(): + client = FaceAdministrationClient( + endpoint="ENDPOINT", + credential="CREDENTIAL", + ) + + client.large_person_group.update( + large_person_group_id="your_large_person_group_id", + body={"name": "your_large_person_group_name", "userData": "your_user_data"}, + ) + + +# x-ms-original-file: v1.2-preview.1/PersonGroupOperations_UpdateLargePersonGroup.json +if __name__ == "__main__": + main() diff --git a/sdk/face/azure-ai-vision-face/generated_samples/person_group_operations_update_large_person_group_person.py b/sdk/face/azure-ai-vision-face/generated_samples/person_group_operations_update_large_person_group_person.py new file mode 100644 index 000000000000..030ccb8ef107 --- /dev/null +++ b/sdk/face/azure-ai-vision-face/generated_samples/person_group_operations_update_large_person_group_person.py @@ -0,0 +1,34 @@ +# coding=utf-8 +# -------------------------------------------------------------------------- +# Copyright (c) Microsoft Corporation. All rights reserved. +# Licensed under the MIT License. See License.txt in the project root for license information. +# Code generated by Microsoft (R) Python Code Generator. +# Changes may cause incorrect behavior and will be lost if the code is regenerated. +# -------------------------------------------------------------------------- + +from azure.ai.vision.face import FaceAdministrationClient + +""" +# PREREQUISITES + pip install azure-ai-vision-face +# USAGE + python person_group_operations_update_large_person_group_person.py +""" + + +def main(): + client = FaceAdministrationClient( + endpoint="ENDPOINT", + credential="CREDENTIAL", + ) + + client.large_person_group.update_person( + large_person_group_id="your_large_person_group_id", + person_id="25985303-c537-4467-b41d-bdb45cd95ca1", + body={"name": "your_large_person_group_person_name", "userData": "your_user_data"}, + ) + + +# x-ms-original-file: v1.2-preview.1/PersonGroupOperations_UpdateLargePersonGroupPerson.json +if __name__ == "__main__": + main() diff --git a/sdk/face/azure-ai-vision-face/generated_samples/person_group_operations_update_large_person_group_person_face.py b/sdk/face/azure-ai-vision-face/generated_samples/person_group_operations_update_large_person_group_person_face.py new file mode 100644 index 000000000000..a409df092146 --- /dev/null +++ b/sdk/face/azure-ai-vision-face/generated_samples/person_group_operations_update_large_person_group_person_face.py @@ -0,0 +1,35 @@ +# coding=utf-8 +# -------------------------------------------------------------------------- +# Copyright (c) Microsoft Corporation. All rights reserved. +# Licensed under the MIT License. See License.txt in the project root for license information. +# Code generated by Microsoft (R) Python Code Generator. +# Changes may cause incorrect behavior and will be lost if the code is regenerated. 
+# -------------------------------------------------------------------------- + +from azure.ai.vision.face import FaceAdministrationClient + +""" +# PREREQUISITES + pip install azure-ai-vision-face +# USAGE + python person_group_operations_update_large_person_group_person_face.py +""" + + +def main(): + client = FaceAdministrationClient( + endpoint="ENDPOINT", + credential="CREDENTIAL", + ) + + client.large_person_group.update_face( + large_person_group_id="your_large_person_group_id", + person_id="25985303-c537-4467-b41d-bdb45cd95ca1", + persisted_face_id="43897a75-8d6f-42cf-885e-74832febb055", + body={"userData": "your_user_data"}, + ) + + +# x-ms-original-file: v1.2-preview.1/PersonGroupOperations_UpdateLargePersonGroupPersonFace.json +if __name__ == "__main__": + main() diff --git a/sdk/face/azure-ai-vision-face/generated_tests/conftest.py b/sdk/face/azure-ai-vision-face/generated_tests/conftest.py new file mode 100644 index 000000000000..83118e9ab790 --- /dev/null +++ b/sdk/face/azure-ai-vision-face/generated_tests/conftest.py @@ -0,0 +1,61 @@ +# coding=utf-8 +# -------------------------------------------------------------------------- +# Copyright (c) Microsoft Corporation. All rights reserved. +# Licensed under the MIT License. See License.txt in the project root for license information. +# Code generated by Microsoft (R) Python Code Generator. +# Changes may cause incorrect behavior and will be lost if the code is regenerated. +# -------------------------------------------------------------------------- +import os +import pytest +from dotenv import load_dotenv +from devtools_testutils import ( + test_proxy, + add_general_regex_sanitizer, + add_body_key_sanitizer, + add_header_regex_sanitizer, +) + +load_dotenv() + + +# For security, please avoid record sensitive identity information in recordings +@pytest.fixture(scope="session", autouse=True) +def add_sanitizers(test_proxy): + faceadministration_subscription_id = os.environ.get( + "FACEADMINISTRATION_SUBSCRIPTION_ID", "00000000-0000-0000-0000-000000000000" + ) + faceadministration_tenant_id = os.environ.get( + "FACEADMINISTRATION_TENANT_ID", "00000000-0000-0000-0000-000000000000" + ) + faceadministration_client_id = os.environ.get( + "FACEADMINISTRATION_CLIENT_ID", "00000000-0000-0000-0000-000000000000" + ) + faceadministration_client_secret = os.environ.get( + "FACEADMINISTRATION_CLIENT_SECRET", "00000000-0000-0000-0000-000000000000" + ) + add_general_regex_sanitizer(regex=faceadministration_subscription_id, value="00000000-0000-0000-0000-000000000000") + add_general_regex_sanitizer(regex=faceadministration_tenant_id, value="00000000-0000-0000-0000-000000000000") + add_general_regex_sanitizer(regex=faceadministration_client_id, value="00000000-0000-0000-0000-000000000000") + add_general_regex_sanitizer(regex=faceadministration_client_secret, value="00000000-0000-0000-0000-000000000000") + + face_subscription_id = os.environ.get("FACE_SUBSCRIPTION_ID", "00000000-0000-0000-0000-000000000000") + face_tenant_id = os.environ.get("FACE_TENANT_ID", "00000000-0000-0000-0000-000000000000") + face_client_id = os.environ.get("FACE_CLIENT_ID", "00000000-0000-0000-0000-000000000000") + face_client_secret = os.environ.get("FACE_CLIENT_SECRET", "00000000-0000-0000-0000-000000000000") + add_general_regex_sanitizer(regex=face_subscription_id, value="00000000-0000-0000-0000-000000000000") + add_general_regex_sanitizer(regex=face_tenant_id, value="00000000-0000-0000-0000-000000000000") + add_general_regex_sanitizer(regex=face_client_id, 
value="00000000-0000-0000-0000-000000000000") + add_general_regex_sanitizer(regex=face_client_secret, value="00000000-0000-0000-0000-000000000000") + + facesession_subscription_id = os.environ.get("FACESESSION_SUBSCRIPTION_ID", "00000000-0000-0000-0000-000000000000") + facesession_tenant_id = os.environ.get("FACESESSION_TENANT_ID", "00000000-0000-0000-0000-000000000000") + facesession_client_id = os.environ.get("FACESESSION_CLIENT_ID", "00000000-0000-0000-0000-000000000000") + facesession_client_secret = os.environ.get("FACESESSION_CLIENT_SECRET", "00000000-0000-0000-0000-000000000000") + add_general_regex_sanitizer(regex=facesession_subscription_id, value="00000000-0000-0000-0000-000000000000") + add_general_regex_sanitizer(regex=facesession_tenant_id, value="00000000-0000-0000-0000-000000000000") + add_general_regex_sanitizer(regex=facesession_client_id, value="00000000-0000-0000-0000-000000000000") + add_general_regex_sanitizer(regex=facesession_client_secret, value="00000000-0000-0000-0000-000000000000") + + add_header_regex_sanitizer(key="Set-Cookie", value="[set-cookie;]") + add_header_regex_sanitizer(key="Cookie", value="cookie;") + add_body_key_sanitizer(json_path="$..access_token", value="access_token") diff --git a/sdk/face/azure-ai-vision-face/generated_tests/test_face.py b/sdk/face/azure-ai-vision-face/generated_tests/test_face.py new file mode 100644 index 000000000000..4e4b42169983 --- /dev/null +++ b/sdk/face/azure-ai-vision-face/generated_tests/test_face.py @@ -0,0 +1,96 @@ +# coding=utf-8 +# -------------------------------------------------------------------------- +# Copyright (c) Microsoft Corporation. All rights reserved. +# Licensed under the MIT License. See License.txt in the project root for license information. +# Code generated by Microsoft (R) Python Code Generator. +# Changes may cause incorrect behavior and will be lost if the code is regenerated. +# -------------------------------------------------------------------------- +import pytest +from devtools_testutils import recorded_by_proxy +from testpreparer import FaceClientTestBase, FacePreparer + + +@pytest.mark.skip("you may need to update the auto-generated test case before run it") +class TestFace(FaceClientTestBase): + @FacePreparer() + @recorded_by_proxy + def test_find_similar(self, face_endpoint): + client = self.create_client(endpoint=face_endpoint) + response = client.find_similar( + body={"faceId": "str", "faceIds": ["str"], "maxNumOfCandidatesReturned": 0, "mode": "str"}, + face_id="str", + face_ids=["str"], + ) + + # please add some check logic here by yourself + # ... + + @FacePreparer() + @recorded_by_proxy + def test_verify_face_to_face(self, face_endpoint): + client = self.create_client(endpoint=face_endpoint) + response = client.verify_face_to_face( + body={"faceId1": "str", "faceId2": "str"}, + face_id1="str", + face_id2="str", + ) + + # please add some check logic here by yourself + # ... + + @FacePreparer() + @recorded_by_proxy + def test_group(self, face_endpoint): + client = self.create_client(endpoint=face_endpoint) + response = client.group( + body={"faceIds": ["str"]}, + face_ids=["str"], + ) + + # please add some check logic here by yourself + # ... 
+ + @FacePreparer() + @recorded_by_proxy + def test_find_similar_from_large_face_list(self, face_endpoint): + client = self.create_client(endpoint=face_endpoint) + response = client.find_similar_from_large_face_list( + body={"faceId": "str", "largeFaceListId": "str", "maxNumOfCandidatesReturned": 0, "mode": "str"}, + face_id="str", + large_face_list_id="str", + ) + + # please add some check logic here by yourself + # ... + + @FacePreparer() + @recorded_by_proxy + def test_identify_from_large_person_group(self, face_endpoint): + client = self.create_client(endpoint=face_endpoint) + response = client.identify_from_large_person_group( + body={ + "faceIds": ["str"], + "largePersonGroupId": "str", + "confidenceThreshold": 0.0, + "maxNumOfCandidatesReturned": 0, + }, + face_ids=["str"], + large_person_group_id="str", + ) + + # please add some check logic here by yourself + # ... + + @FacePreparer() + @recorded_by_proxy + def test_verify_from_large_person_group(self, face_endpoint): + client = self.create_client(endpoint=face_endpoint) + response = client.verify_from_large_person_group( + body={"faceId": "str", "largePersonGroupId": "str", "personId": "str"}, + face_id="str", + large_person_group_id="str", + person_id="str", + ) + + # please add some check logic here by yourself + # ... diff --git a/sdk/face/azure-ai-vision-face/generated_tests/test_face_administration_large_face_list_operations.py b/sdk/face/azure-ai-vision-face/generated_tests/test_face_administration_large_face_list_operations.py new file mode 100644 index 000000000000..e560dcd23165 --- /dev/null +++ b/sdk/face/azure-ai-vision-face/generated_tests/test_face_administration_large_face_list_operations.py @@ -0,0 +1,165 @@ +# coding=utf-8 +# -------------------------------------------------------------------------- +# Copyright (c) Microsoft Corporation. All rights reserved. +# Licensed under the MIT License. See License.txt in the project root for license information. +# Code generated by Microsoft (R) Python Code Generator. +# Changes may cause incorrect behavior and will be lost if the code is regenerated. +# -------------------------------------------------------------------------- +import pytest +from devtools_testutils import recorded_by_proxy +from testpreparer import FaceAdministrationClientTestBase, FaceAdministrationPreparer + + +@pytest.mark.skip("you may need to update the auto-generated test case before run it") +class TestFaceAdministrationLargeFaceListOperations(FaceAdministrationClientTestBase): + @FaceAdministrationPreparer() + @recorded_by_proxy + def test_large_face_list_create(self, faceadministration_endpoint): + client = self.create_client(endpoint=faceadministration_endpoint) + response = client.large_face_list.create( + large_face_list_id="str", + body={"name": "str", "recognitionModel": "str", "userData": "str"}, + name="str", + ) + + # please add some check logic here by yourself + # ... + + @FaceAdministrationPreparer() + @recorded_by_proxy + def test_large_face_list_delete(self, faceadministration_endpoint): + client = self.create_client(endpoint=faceadministration_endpoint) + response = client.large_face_list.delete( + large_face_list_id="str", + ) + + # please add some check logic here by yourself + # ... 
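+        # Delete (like create and update) returns no body on success, so a
+        # minimal check here is simply that the call completed without
+        # raising, e.g.:
+        #
+        #     assert response is None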
+ + @FaceAdministrationPreparer() + @recorded_by_proxy + def test_large_face_list_get(self, faceadministration_endpoint): + client = self.create_client(endpoint=faceadministration_endpoint) + response = client.large_face_list.get( + large_face_list_id="str", + ) + + # please add some check logic here by yourself + # ... + + @FaceAdministrationPreparer() + @recorded_by_proxy + def test_large_face_list_update(self, faceadministration_endpoint): + client = self.create_client(endpoint=faceadministration_endpoint) + response = client.large_face_list.update( + large_face_list_id="str", + body={"name": "str", "userData": "str"}, + ) + + # please add some check logic here by yourself + # ... + + @FaceAdministrationPreparer() + @recorded_by_proxy + def test_large_face_list_get_large_face_lists(self, faceadministration_endpoint): + client = self.create_client(endpoint=faceadministration_endpoint) + response = client.large_face_list.get_large_face_lists() + + # please add some check logic here by yourself + # ... + + @FaceAdministrationPreparer() + @recorded_by_proxy + def test_large_face_list_get_training_status(self, faceadministration_endpoint): + client = self.create_client(endpoint=faceadministration_endpoint) + response = client.large_face_list.get_training_status( + large_face_list_id="str", + ) + + # please add some check logic here by yourself + # ... + + @FaceAdministrationPreparer() + @recorded_by_proxy + def test_large_face_list_begin_train(self, faceadministration_endpoint): + client = self.create_client(endpoint=faceadministration_endpoint) + response = client.large_face_list.begin_train( + large_face_list_id="str", + ).result() # call '.result()' to poll until service return final result + + # please add some check logic here by yourself + # ... + + @FaceAdministrationPreparer() + @recorded_by_proxy + def test_large_face_list_add_face_from_url(self, faceadministration_endpoint): + client = self.create_client(endpoint=faceadministration_endpoint) + response = client.large_face_list.add_face_from_url( + large_face_list_id="str", + body={"url": "str"}, + url="str", + ) + + # please add some check logic here by yourself + # ... + + @FaceAdministrationPreparer() + @recorded_by_proxy + def test_large_face_list_add_face(self, faceadministration_endpoint): + client = self.create_client(endpoint=faceadministration_endpoint) + response = client.large_face_list.add_face( + large_face_list_id="str", + image_content=bytes("bytes", encoding="utf-8"), + content_type="str", + ) + + # please add some check logic here by yourself + # ... + + @FaceAdministrationPreparer() + @recorded_by_proxy + def test_large_face_list_delete_face(self, faceadministration_endpoint): + client = self.create_client(endpoint=faceadministration_endpoint) + response = client.large_face_list.delete_face( + large_face_list_id="str", + persisted_face_id="str", + ) + + # please add some check logic here by yourself + # ... + + @FaceAdministrationPreparer() + @recorded_by_proxy + def test_large_face_list_get_face(self, faceadministration_endpoint): + client = self.create_client(endpoint=faceadministration_endpoint) + response = client.large_face_list.get_face( + large_face_list_id="str", + persisted_face_id="str", + ) + + # please add some check logic here by yourself + # ... 
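+        # One possible check for the get_face call above (field names taken
+        # from the LargeFaceListFace model; the IDs in this stub are
+        # placeholders):
+        #
+        #     assert response.persisted_face_id
+        #     print(response.user_data)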
+ + @FaceAdministrationPreparer() + @recorded_by_proxy + def test_large_face_list_update_face(self, faceadministration_endpoint): + client = self.create_client(endpoint=faceadministration_endpoint) + response = client.large_face_list.update_face( + large_face_list_id="str", + persisted_face_id="str", + body={"userData": "str"}, + ) + + # please add some check logic here by yourself + # ... + + @FaceAdministrationPreparer() + @recorded_by_proxy + def test_large_face_list_get_faces(self, faceadministration_endpoint): + client = self.create_client(endpoint=faceadministration_endpoint) + response = client.large_face_list.get_faces( + large_face_list_id="str", + ) + + # please add some check logic here by yourself + # ... diff --git a/sdk/face/azure-ai-vision-face/generated_tests/test_face_administration_large_face_list_operations_async.py b/sdk/face/azure-ai-vision-face/generated_tests/test_face_administration_large_face_list_operations_async.py new file mode 100644 index 000000000000..828c521085e2 --- /dev/null +++ b/sdk/face/azure-ai-vision-face/generated_tests/test_face_administration_large_face_list_operations_async.py @@ -0,0 +1,168 @@ +# coding=utf-8 +# -------------------------------------------------------------------------- +# Copyright (c) Microsoft Corporation. All rights reserved. +# Licensed under the MIT License. See License.txt in the project root for license information. +# Code generated by Microsoft (R) Python Code Generator. +# Changes may cause incorrect behavior and will be lost if the code is regenerated. +# -------------------------------------------------------------------------- +import pytest +from devtools_testutils.aio import recorded_by_proxy_async +from testpreparer import FaceAdministrationPreparer +from testpreparer_async import FaceAdministrationClientTestBaseAsync + + +@pytest.mark.skip("you may need to update the auto-generated test case before run it") +class TestFaceAdministrationLargeFaceListOperationsAsync(FaceAdministrationClientTestBaseAsync): + @FaceAdministrationPreparer() + @recorded_by_proxy_async + async def test_large_face_list_create(self, faceadministration_endpoint): + client = self.create_async_client(endpoint=faceadministration_endpoint) + response = await client.large_face_list.create( + large_face_list_id="str", + body={"name": "str", "recognitionModel": "str", "userData": "str"}, + name="str", + ) + + # please add some check logic here by yourself + # ... + + @FaceAdministrationPreparer() + @recorded_by_proxy_async + async def test_large_face_list_delete(self, faceadministration_endpoint): + client = self.create_async_client(endpoint=faceadministration_endpoint) + response = await client.large_face_list.delete( + large_face_list_id="str", + ) + + # please add some check logic here by yourself + # ... + + @FaceAdministrationPreparer() + @recorded_by_proxy_async + async def test_large_face_list_get(self, faceadministration_endpoint): + client = self.create_async_client(endpoint=faceadministration_endpoint) + response = await client.large_face_list.get( + large_face_list_id="str", + ) + + # please add some check logic here by yourself + # ... + + @FaceAdministrationPreparer() + @recorded_by_proxy_async + async def test_large_face_list_update(self, faceadministration_endpoint): + client = self.create_async_client(endpoint=faceadministration_endpoint) + response = await client.large_face_list.update( + large_face_list_id="str", + body={"name": "str", "userData": "str"}, + ) + + # please add some check logic here by yourself + # ... 
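+        # Note for the async variants in this file: outside of these skipped
+        # stubs, release the client's transport when done, either by wrapping
+        # calls in "async with client:" or by calling "await client.close()"
+        # before the test returns.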
+ + @FaceAdministrationPreparer() + @recorded_by_proxy_async + async def test_large_face_list_get_large_face_lists(self, faceadministration_endpoint): + client = self.create_async_client(endpoint=faceadministration_endpoint) + response = await client.large_face_list.get_large_face_lists() + + # please add some check logic here by yourself + # ... + + @FaceAdministrationPreparer() + @recorded_by_proxy_async + async def test_large_face_list_get_training_status(self, faceadministration_endpoint): + client = self.create_async_client(endpoint=faceadministration_endpoint) + response = await client.large_face_list.get_training_status( + large_face_list_id="str", + ) + + # please add some check logic here by yourself + # ... + + @FaceAdministrationPreparer() + @recorded_by_proxy_async + async def test_large_face_list_begin_train(self, faceadministration_endpoint): + client = self.create_async_client(endpoint=faceadministration_endpoint) + response = await ( + await client.large_face_list.begin_train( + large_face_list_id="str", + ) + ).result() # call '.result()' to poll until service return final result + + # please add some check logic here by yourself + # ... + + @FaceAdministrationPreparer() + @recorded_by_proxy_async + async def test_large_face_list_add_face_from_url(self, faceadministration_endpoint): + client = self.create_async_client(endpoint=faceadministration_endpoint) + response = await client.large_face_list.add_face_from_url( + large_face_list_id="str", + body={"url": "str"}, + url="str", + ) + + # please add some check logic here by yourself + # ... + + @FaceAdministrationPreparer() + @recorded_by_proxy_async + async def test_large_face_list_add_face(self, faceadministration_endpoint): + client = self.create_async_client(endpoint=faceadministration_endpoint) + response = await client.large_face_list.add_face( + large_face_list_id="str", + image_content=bytes("bytes", encoding="utf-8"), + content_type="str", + ) + + # please add some check logic here by yourself + # ... + + @FaceAdministrationPreparer() + @recorded_by_proxy_async + async def test_large_face_list_delete_face(self, faceadministration_endpoint): + client = self.create_async_client(endpoint=faceadministration_endpoint) + response = await client.large_face_list.delete_face( + large_face_list_id="str", + persisted_face_id="str", + ) + + # please add some check logic here by yourself + # ... + + @FaceAdministrationPreparer() + @recorded_by_proxy_async + async def test_large_face_list_get_face(self, faceadministration_endpoint): + client = self.create_async_client(endpoint=faceadministration_endpoint) + response = await client.large_face_list.get_face( + large_face_list_id="str", + persisted_face_id="str", + ) + + # please add some check logic here by yourself + # ... + + @FaceAdministrationPreparer() + @recorded_by_proxy_async + async def test_large_face_list_update_face(self, faceadministration_endpoint): + client = self.create_async_client(endpoint=faceadministration_endpoint) + response = await client.large_face_list.update_face( + large_face_list_id="str", + persisted_face_id="str", + body={"userData": "str"}, + ) + + # please add some check logic here by yourself + # ... 
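+        # The double "await" in test_large_face_list_begin_train above is
+        # intentional: awaiting begin_train yields an AsyncLROPoller, and
+        # awaiting its result() polls the training operation to completion.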
+ + @FaceAdministrationPreparer() + @recorded_by_proxy_async + async def test_large_face_list_get_faces(self, faceadministration_endpoint): + client = self.create_async_client(endpoint=faceadministration_endpoint) + response = await client.large_face_list.get_faces( + large_face_list_id="str", + ) + + # please add some check logic here by yourself + # ... diff --git a/sdk/face/azure-ai-vision-face/generated_tests/test_face_administration_large_person_group_operations.py b/sdk/face/azure-ai-vision-face/generated_tests/test_face_administration_large_person_group_operations.py new file mode 100644 index 000000000000..6f44a6c942db --- /dev/null +++ b/sdk/face/azure-ai-vision-face/generated_tests/test_face_administration_large_person_group_operations.py @@ -0,0 +1,220 @@ +# coding=utf-8 +# -------------------------------------------------------------------------- +# Copyright (c) Microsoft Corporation. All rights reserved. +# Licensed under the MIT License. See License.txt in the project root for license information. +# Code generated by Microsoft (R) Python Code Generator. +# Changes may cause incorrect behavior and will be lost if the code is regenerated. +# -------------------------------------------------------------------------- +import pytest +from devtools_testutils import recorded_by_proxy +from testpreparer import FaceAdministrationClientTestBase, FaceAdministrationPreparer + + +@pytest.mark.skip("you may need to update the auto-generated test case before run it") +class TestFaceAdministrationLargePersonGroupOperations(FaceAdministrationClientTestBase): + @FaceAdministrationPreparer() + @recorded_by_proxy + def test_large_person_group_create(self, faceadministration_endpoint): + client = self.create_client(endpoint=faceadministration_endpoint) + response = client.large_person_group.create( + large_person_group_id="str", + body={"name": "str", "recognitionModel": "str", "userData": "str"}, + name="str", + ) + + # please add some check logic here by yourself + # ... + + @FaceAdministrationPreparer() + @recorded_by_proxy + def test_large_person_group_delete(self, faceadministration_endpoint): + client = self.create_client(endpoint=faceadministration_endpoint) + response = client.large_person_group.delete( + large_person_group_id="str", + ) + + # please add some check logic here by yourself + # ... + + @FaceAdministrationPreparer() + @recorded_by_proxy + def test_large_person_group_get(self, faceadministration_endpoint): + client = self.create_client(endpoint=faceadministration_endpoint) + response = client.large_person_group.get( + large_person_group_id="str", + ) + + # please add some check logic here by yourself + # ... + + @FaceAdministrationPreparer() + @recorded_by_proxy + def test_large_person_group_update(self, faceadministration_endpoint): + client = self.create_client(endpoint=faceadministration_endpoint) + response = client.large_person_group.update( + large_person_group_id="str", + body={"name": "str", "userData": "str"}, + ) + + # please add some check logic here by yourself + # ... + + @FaceAdministrationPreparer() + @recorded_by_proxy + def test_large_person_group_get_large_person_groups(self, faceadministration_endpoint): + client = self.create_client(endpoint=faceadministration_endpoint) + response = client.large_person_group.get_large_person_groups() + + # please add some check logic here by yourself + # ... 
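+        # The list operation pages with "start"/"top" keyword arguments; a
+        # sketch of walking the whole collection (parameter names assumed
+        # from the v1.2-preview.1 list APIs, where "start" is the ID of the
+        # last group returned by the previous call):
+        #
+        #     groups = client.large_person_group.get_large_person_groups(top=100)
+        #     while groups:
+        #         start = groups[-1].large_person_group_id
+        #         groups = client.large_person_group.get_large_person_groups(start=start, top=100)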
+ + @FaceAdministrationPreparer() + @recorded_by_proxy + def test_large_person_group_get_training_status(self, faceadministration_endpoint): + client = self.create_client(endpoint=faceadministration_endpoint) + response = client.large_person_group.get_training_status( + large_person_group_id="str", + ) + + # please add some check logic here by yourself + # ... + + @FaceAdministrationPreparer() + @recorded_by_proxy + def test_large_person_group_begin_train(self, faceadministration_endpoint): + client = self.create_client(endpoint=faceadministration_endpoint) + response = client.large_person_group.begin_train( + large_person_group_id="str", + ).result() # call '.result()' to poll until service return final result + + # please add some check logic here by yourself + # ... + + @FaceAdministrationPreparer() + @recorded_by_proxy + def test_large_person_group_create_person(self, faceadministration_endpoint): + client = self.create_client(endpoint=faceadministration_endpoint) + response = client.large_person_group.create_person( + large_person_group_id="str", + body={"name": "str", "userData": "str"}, + name="str", + ) + + # please add some check logic here by yourself + # ... + + @FaceAdministrationPreparer() + @recorded_by_proxy + def test_large_person_group_delete_person(self, faceadministration_endpoint): + client = self.create_client(endpoint=faceadministration_endpoint) + response = client.large_person_group.delete_person( + large_person_group_id="str", + person_id="str", + ) + + # please add some check logic here by yourself + # ... + + @FaceAdministrationPreparer() + @recorded_by_proxy + def test_large_person_group_get_person(self, faceadministration_endpoint): + client = self.create_client(endpoint=faceadministration_endpoint) + response = client.large_person_group.get_person( + large_person_group_id="str", + person_id="str", + ) + + # please add some check logic here by yourself + # ... + + @FaceAdministrationPreparer() + @recorded_by_proxy + def test_large_person_group_update_person(self, faceadministration_endpoint): + client = self.create_client(endpoint=faceadministration_endpoint) + response = client.large_person_group.update_person( + large_person_group_id="str", + person_id="str", + body={"name": "str", "userData": "str"}, + ) + + # please add some check logic here by yourself + # ... + + @FaceAdministrationPreparer() + @recorded_by_proxy + def test_large_person_group_get_persons(self, faceadministration_endpoint): + client = self.create_client(endpoint=faceadministration_endpoint) + response = client.large_person_group.get_persons( + large_person_group_id="str", + ) + + # please add some check logic here by yourself + # ... + + @FaceAdministrationPreparer() + @recorded_by_proxy + def test_large_person_group_add_face_from_url(self, faceadministration_endpoint): + client = self.create_client(endpoint=faceadministration_endpoint) + response = client.large_person_group.add_face_from_url( + large_person_group_id="str", + person_id="str", + body={"url": "str"}, + url="str", + ) + + # please add some check logic here by yourself + # ... + + @FaceAdministrationPreparer() + @recorded_by_proxy + def test_large_person_group_add_face(self, faceadministration_endpoint): + client = self.create_client(endpoint=faceadministration_endpoint) + response = client.large_person_group.add_face( + large_person_group_id="str", + person_id="str", + image_content=bytes("bytes", encoding="utf-8"), + content_type="str", + ) + + # please add some check logic here by yourself + # ... 
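+        # A possible check for the add_face calls above: a successful add is
+        # expected to return the persisted face identifier (attribute name
+        # assumed from the AddFaceResult model):
+        #
+        #     assert response.persisted_face_id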
+ + @FaceAdministrationPreparer() + @recorded_by_proxy + def test_large_person_group_delete_face(self, faceadministration_endpoint): + client = self.create_client(endpoint=faceadministration_endpoint) + response = client.large_person_group.delete_face( + large_person_group_id="str", + person_id="str", + persisted_face_id="str", + ) + + # please add some check logic here by yourself + # ... + + @FaceAdministrationPreparer() + @recorded_by_proxy + def test_large_person_group_get_face(self, faceadministration_endpoint): + client = self.create_client(endpoint=faceadministration_endpoint) + response = client.large_person_group.get_face( + large_person_group_id="str", + person_id="str", + persisted_face_id="str", + ) + + # please add some check logic here by yourself + # ... + + @FaceAdministrationPreparer() + @recorded_by_proxy + def test_large_person_group_update_face(self, faceadministration_endpoint): + client = self.create_client(endpoint=faceadministration_endpoint) + response = client.large_person_group.update_face( + large_person_group_id="str", + person_id="str", + persisted_face_id="str", + body={"userData": "str"}, + ) + + # please add some check logic here by yourself + # ... diff --git a/sdk/face/azure-ai-vision-face/generated_tests/test_face_administration_large_person_group_operations_async.py b/sdk/face/azure-ai-vision-face/generated_tests/test_face_administration_large_person_group_operations_async.py new file mode 100644 index 000000000000..d91a4f7f5440 --- /dev/null +++ b/sdk/face/azure-ai-vision-face/generated_tests/test_face_administration_large_person_group_operations_async.py @@ -0,0 +1,223 @@ +# coding=utf-8 +# -------------------------------------------------------------------------- +# Copyright (c) Microsoft Corporation. All rights reserved. +# Licensed under the MIT License. See License.txt in the project root for license information. +# Code generated by Microsoft (R) Python Code Generator. +# Changes may cause incorrect behavior and will be lost if the code is regenerated. +# -------------------------------------------------------------------------- +import pytest +from devtools_testutils.aio import recorded_by_proxy_async +from testpreparer import FaceAdministrationPreparer +from testpreparer_async import FaceAdministrationClientTestBaseAsync + + +@pytest.mark.skip("you may need to update the auto-generated test case before run it") +class TestFaceAdministrationLargePersonGroupOperationsAsync(FaceAdministrationClientTestBaseAsync): + @FaceAdministrationPreparer() + @recorded_by_proxy_async + async def test_large_person_group_create(self, faceadministration_endpoint): + client = self.create_async_client(endpoint=faceadministration_endpoint) + response = await client.large_person_group.create( + large_person_group_id="str", + body={"name": "str", "recognitionModel": "str", "userData": "str"}, + name="str", + ) + + # please add some check logic here by yourself + # ... + + @FaceAdministrationPreparer() + @recorded_by_proxy_async + async def test_large_person_group_delete(self, faceadministration_endpoint): + client = self.create_async_client(endpoint=faceadministration_endpoint) + response = await client.large_person_group.delete( + large_person_group_id="str", + ) + + # please add some check logic here by yourself + # ... 
+ + @FaceAdministrationPreparer() + @recorded_by_proxy_async + async def test_large_person_group_get(self, faceadministration_endpoint): + client = self.create_async_client(endpoint=faceadministration_endpoint) + response = await client.large_person_group.get( + large_person_group_id="str", + ) + + # please add some check logic here by yourself + # ... + + @FaceAdministrationPreparer() + @recorded_by_proxy_async + async def test_large_person_group_update(self, faceadministration_endpoint): + client = self.create_async_client(endpoint=faceadministration_endpoint) + response = await client.large_person_group.update( + large_person_group_id="str", + body={"name": "str", "userData": "str"}, + ) + + # please add some check logic here by yourself + # ... + + @FaceAdministrationPreparer() + @recorded_by_proxy_async + async def test_large_person_group_get_large_person_groups(self, faceadministration_endpoint): + client = self.create_async_client(endpoint=faceadministration_endpoint) + response = await client.large_person_group.get_large_person_groups() + + # please add some check logic here by yourself + # ... + + @FaceAdministrationPreparer() + @recorded_by_proxy_async + async def test_large_person_group_get_training_status(self, faceadministration_endpoint): + client = self.create_async_client(endpoint=faceadministration_endpoint) + response = await client.large_person_group.get_training_status( + large_person_group_id="str", + ) + + # please add some check logic here by yourself + # ... + + @FaceAdministrationPreparer() + @recorded_by_proxy_async + async def test_large_person_group_begin_train(self, faceadministration_endpoint): + client = self.create_async_client(endpoint=faceadministration_endpoint) + response = await ( + await client.large_person_group.begin_train( + large_person_group_id="str", + ) + ).result() # call '.result()' to poll until service return final result + + # please add some check logic here by yourself + # ... + + @FaceAdministrationPreparer() + @recorded_by_proxy_async + async def test_large_person_group_create_person(self, faceadministration_endpoint): + client = self.create_async_client(endpoint=faceadministration_endpoint) + response = await client.large_person_group.create_person( + large_person_group_id="str", + body={"name": "str", "userData": "str"}, + name="str", + ) + + # please add some check logic here by yourself + # ... + + @FaceAdministrationPreparer() + @recorded_by_proxy_async + async def test_large_person_group_delete_person(self, faceadministration_endpoint): + client = self.create_async_client(endpoint=faceadministration_endpoint) + response = await client.large_person_group.delete_person( + large_person_group_id="str", + person_id="str", + ) + + # please add some check logic here by yourself + # ... + + @FaceAdministrationPreparer() + @recorded_by_proxy_async + async def test_large_person_group_get_person(self, faceadministration_endpoint): + client = self.create_async_client(endpoint=faceadministration_endpoint) + response = await client.large_person_group.get_person( + large_person_group_id="str", + person_id="str", + ) + + # please add some check logic here by yourself + # ... 
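Note the double `await` in the async `begin_train` test above: the first `await` starts the operation and yields an `AsyncLROPoller`, the second drives its `result()` coroutine to completion. Written out step by step (illustrative id, `client` an async `FaceAdministrationClient`):

```python
# First await: send the initial request and obtain the AsyncLROPoller.
poller = await client.large_person_group.begin_train(large_person_group_id="my-lpg")
# Second await: poll until the service reports the final training outcome.
await poller.result()
```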
+ + @FaceAdministrationPreparer() + @recorded_by_proxy_async + async def test_large_person_group_update_person(self, faceadministration_endpoint): + client = self.create_async_client(endpoint=faceadministration_endpoint) + response = await client.large_person_group.update_person( + large_person_group_id="str", + person_id="str", + body={"name": "str", "userData": "str"}, + ) + + # please add some check logic here by yourself + # ... + + @FaceAdministrationPreparer() + @recorded_by_proxy_async + async def test_large_person_group_get_persons(self, faceadministration_endpoint): + client = self.create_async_client(endpoint=faceadministration_endpoint) + response = await client.large_person_group.get_persons( + large_person_group_id="str", + ) + + # please add some check logic here by yourself + # ... + + @FaceAdministrationPreparer() + @recorded_by_proxy_async + async def test_large_person_group_add_face_from_url(self, faceadministration_endpoint): + client = self.create_async_client(endpoint=faceadministration_endpoint) + response = await client.large_person_group.add_face_from_url( + large_person_group_id="str", + person_id="str", + body={"url": "str"}, + url="str", + ) + + # please add some check logic here by yourself + # ... + + @FaceAdministrationPreparer() + @recorded_by_proxy_async + async def test_large_person_group_add_face(self, faceadministration_endpoint): + client = self.create_async_client(endpoint=faceadministration_endpoint) + response = await client.large_person_group.add_face( + large_person_group_id="str", + person_id="str", + image_content=bytes("bytes", encoding="utf-8"), + content_type="str", + ) + + # please add some check logic here by yourself + # ... + + @FaceAdministrationPreparer() + @recorded_by_proxy_async + async def test_large_person_group_delete_face(self, faceadministration_endpoint): + client = self.create_async_client(endpoint=faceadministration_endpoint) + response = await client.large_person_group.delete_face( + large_person_group_id="str", + person_id="str", + persisted_face_id="str", + ) + + # please add some check logic here by yourself + # ... + + @FaceAdministrationPreparer() + @recorded_by_proxy_async + async def test_large_person_group_get_face(self, faceadministration_endpoint): + client = self.create_async_client(endpoint=faceadministration_endpoint) + response = await client.large_person_group.get_face( + large_person_group_id="str", + person_id="str", + persisted_face_id="str", + ) + + # please add some check logic here by yourself + # ... + + @FaceAdministrationPreparer() + @recorded_by_proxy_async + async def test_large_person_group_update_face(self, faceadministration_endpoint): + client = self.create_async_client(endpoint=faceadministration_endpoint) + response = await client.large_person_group.update_face( + large_person_group_id="str", + person_id="str", + persisted_face_id="str", + body={"userData": "str"}, + ) + + # please add some check logic here by yourself + # ... diff --git a/sdk/face/azure-ai-vision-face/generated_tests/test_face_async.py b/sdk/face/azure-ai-vision-face/generated_tests/test_face_async.py new file mode 100644 index 000000000000..2d005977596f --- /dev/null +++ b/sdk/face/azure-ai-vision-face/generated_tests/test_face_async.py @@ -0,0 +1,97 @@ +# coding=utf-8 +# -------------------------------------------------------------------------- +# Copyright (c) Microsoft Corporation. All rights reserved. +# Licensed under the MIT License. See License.txt in the project root for license information. 
+# Code generated by Microsoft (R) Python Code Generator. +# Changes may cause incorrect behavior and will be lost if the code is regenerated. +# -------------------------------------------------------------------------- +import pytest +from devtools_testutils.aio import recorded_by_proxy_async +from testpreparer import FacePreparer +from testpreparer_async import FaceClientTestBaseAsync + + +@pytest.mark.skip("you may need to update the auto-generated test case before run it") +class TestFaceAsync(FaceClientTestBaseAsync): + @FacePreparer() + @recorded_by_proxy_async + async def test_find_similar(self, face_endpoint): + client = self.create_async_client(endpoint=face_endpoint) + response = await client.find_similar( + body={"faceId": "str", "faceIds": ["str"], "maxNumOfCandidatesReturned": 0, "mode": "str"}, + face_id="str", + face_ids=["str"], + ) + + # please add some check logic here by yourself + # ... + + @FacePreparer() + @recorded_by_proxy_async + async def test_verify_face_to_face(self, face_endpoint): + client = self.create_async_client(endpoint=face_endpoint) + response = await client.verify_face_to_face( + body={"faceId1": "str", "faceId2": "str"}, + face_id1="str", + face_id2="str", + ) + + # please add some check logic here by yourself + # ... + + @FacePreparer() + @recorded_by_proxy_async + async def test_group(self, face_endpoint): + client = self.create_async_client(endpoint=face_endpoint) + response = await client.group( + body={"faceIds": ["str"]}, + face_ids=["str"], + ) + + # please add some check logic here by yourself + # ... + + @FacePreparer() + @recorded_by_proxy_async + async def test_find_similar_from_large_face_list(self, face_endpoint): + client = self.create_async_client(endpoint=face_endpoint) + response = await client.find_similar_from_large_face_list( + body={"faceId": "str", "largeFaceListId": "str", "maxNumOfCandidatesReturned": 0, "mode": "str"}, + face_id="str", + large_face_list_id="str", + ) + + # please add some check logic here by yourself + # ... + + @FacePreparer() + @recorded_by_proxy_async + async def test_identify_from_large_person_group(self, face_endpoint): + client = self.create_async_client(endpoint=face_endpoint) + response = await client.identify_from_large_person_group( + body={ + "faceIds": ["str"], + "largePersonGroupId": "str", + "confidenceThreshold": 0.0, + "maxNumOfCandidatesReturned": 0, + }, + face_ids=["str"], + large_person_group_id="str", + ) + + # please add some check logic here by yourself + # ... + + @FacePreparer() + @recorded_by_proxy_async + async def test_verify_from_large_person_group(self, face_endpoint): + client = self.create_async_client(endpoint=face_endpoint) + response = await client.verify_from_large_person_group( + body={"faceId": "str", "largePersonGroupId": "str", "personId": "str"}, + face_id="str", + large_person_group_id="str", + person_id="str", + ) + + # please add some check logic here by yourself + # ... diff --git a/sdk/face/azure-ai-vision-face/generated_tests/test_face_session.py b/sdk/face/azure-ai-vision-face/generated_tests/test_face_session.py new file mode 100644 index 000000000000..72a4b251ed5c --- /dev/null +++ b/sdk/face/azure-ai-vision-face/generated_tests/test_face_session.py @@ -0,0 +1,139 @@ +# coding=utf-8 +# -------------------------------------------------------------------------- +# Copyright (c) Microsoft Corporation. All rights reserved. +# Licensed under the MIT License. See License.txt in the project root for license information. 
+# Code generated by Microsoft (R) Python Code Generator. +# Changes may cause incorrect behavior and will be lost if the code is regenerated. +# -------------------------------------------------------------------------- +import pytest +from devtools_testutils import recorded_by_proxy +from testpreparer import FaceSessionClientTestBase, FaceSessionPreparer + + +@pytest.mark.skip("you may need to update the auto-generated test case before run it") +class TestFaceSession(FaceSessionClientTestBase): + @FaceSessionPreparer() + @recorded_by_proxy + def test_create_liveness_session(self, facesession_endpoint): + client = self.create_client(endpoint=facesession_endpoint) + response = client.create_liveness_session( + body={ + "livenessOperationMode": "str", + "authTokenTimeToLiveInSeconds": 0, + "deviceCorrelationId": "str", + "deviceCorrelationIdSetInClient": bool, + "enableSessionImage": bool, + "livenessSingleModalModel": "str", + "sendResultsToClient": bool, + }, + ) + + # please add some check logic here by yourself + # ... + + @FaceSessionPreparer() + @recorded_by_proxy + def test_delete_liveness_session(self, facesession_endpoint): + client = self.create_client(endpoint=facesession_endpoint) + response = client.delete_liveness_session( + session_id="str", + ) + + # please add some check logic here by yourself + # ... + + @FaceSessionPreparer() + @recorded_by_proxy + def test_get_liveness_session_result(self, facesession_endpoint): + client = self.create_client(endpoint=facesession_endpoint) + response = client.get_liveness_session_result( + session_id="str", + ) + + # please add some check logic here by yourself + # ... + + @FaceSessionPreparer() + @recorded_by_proxy + def test_get_liveness_sessions(self, facesession_endpoint): + client = self.create_client(endpoint=facesession_endpoint) + response = client.get_liveness_sessions() + + # please add some check logic here by yourself + # ... + + @FaceSessionPreparer() + @recorded_by_proxy + def test_get_liveness_session_audit_entries(self, facesession_endpoint): + client = self.create_client(endpoint=facesession_endpoint) + response = client.get_liveness_session_audit_entries( + session_id="str", + ) + + # please add some check logic here by yourself + # ... + + @FaceSessionPreparer() + @recorded_by_proxy + def test_delete_liveness_with_verify_session(self, facesession_endpoint): + client = self.create_client(endpoint=facesession_endpoint) + response = client.delete_liveness_with_verify_session( + session_id="str", + ) + + # please add some check logic here by yourself + # ... + + @FaceSessionPreparer() + @recorded_by_proxy + def test_get_liveness_with_verify_session_result(self, facesession_endpoint): + client = self.create_client(endpoint=facesession_endpoint) + response = client.get_liveness_with_verify_session_result( + session_id="str", + ) + + # please add some check logic here by yourself + # ... + + @FaceSessionPreparer() + @recorded_by_proxy + def test_get_liveness_with_verify_sessions(self, facesession_endpoint): + client = self.create_client(endpoint=facesession_endpoint) + response = client.get_liveness_with_verify_sessions() + + # please add some check logic here by yourself + # ... + + @FaceSessionPreparer() + @recorded_by_proxy + def test_get_liveness_with_verify_session_audit_entries(self, facesession_endpoint): + client = self.create_client(endpoint=facesession_endpoint) + response = client.get_liveness_with_verify_session_audit_entries( + session_id="str", + ) + + # please add some check logic here by yourself + # ... 
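The generated session tests pass placeholder values throughout (note `bool`, the built-in type, standing in for real booleans). Using the model types that appear in the liveness samples later in this diff, a concrete `create_liveness_session` call might look like the sketch below; the correlation id is illustrative, `client` is a `FaceSessionClient`, and `enable_session_image` is one of the properties added in this release:

```python
import uuid

from azure.ai.vision.face.models import CreateLivenessSessionContent, LivenessOperationMode

# A plausible concrete payload; the field names mirror the keys in the generated test body.
created_session = client.create_liveness_session(
    CreateLivenessSessionContent(
        liveness_operation_mode=LivenessOperationMode.PASSIVE,
        device_correlation_id=str(uuid.uuid4()),  # illustrative correlation id
        enable_session_image=True,
    )
)
print(f"Session created, session id: {created_session.session_id}")
```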
+ + @FaceSessionPreparer() + @recorded_by_proxy + def test_detect_from_session_image(self, facesession_endpoint): + client = self.create_client(endpoint=facesession_endpoint) + response = client.detect_from_session_image( + body={"sessionImageId": "str"}, + session_image_id="str", + ) + + # please add some check logic here by yourself + # ... + + @FaceSessionPreparer() + @recorded_by_proxy + def test_get_session_image(self, facesession_endpoint): + client = self.create_client(endpoint=facesession_endpoint) + response = client.get_session_image( + session_image_id="str", + ) + + # please add some check logic here by yourself + # ... diff --git a/sdk/face/azure-ai-vision-face/generated_tests/test_face_session_async.py b/sdk/face/azure-ai-vision-face/generated_tests/test_face_session_async.py new file mode 100644 index 000000000000..305b3457df68 --- /dev/null +++ b/sdk/face/azure-ai-vision-face/generated_tests/test_face_session_async.py @@ -0,0 +1,140 @@ +# coding=utf-8 +# -------------------------------------------------------------------------- +# Copyright (c) Microsoft Corporation. All rights reserved. +# Licensed under the MIT License. See License.txt in the project root for license information. +# Code generated by Microsoft (R) Python Code Generator. +# Changes may cause incorrect behavior and will be lost if the code is regenerated. +# -------------------------------------------------------------------------- +import pytest +from devtools_testutils.aio import recorded_by_proxy_async +from testpreparer import FaceSessionPreparer +from testpreparer_async import FaceSessionClientTestBaseAsync + + +@pytest.mark.skip("you may need to update the auto-generated test case before run it") +class TestFaceSessionAsync(FaceSessionClientTestBaseAsync): + @FaceSessionPreparer() + @recorded_by_proxy_async + async def test_create_liveness_session(self, facesession_endpoint): + client = self.create_async_client(endpoint=facesession_endpoint) + response = await client.create_liveness_session( + body={ + "livenessOperationMode": "str", + "authTokenTimeToLiveInSeconds": 0, + "deviceCorrelationId": "str", + "deviceCorrelationIdSetInClient": bool, + "enableSessionImage": bool, + "livenessSingleModalModel": "str", + "sendResultsToClient": bool, + }, + ) + + # please add some check logic here by yourself + # ... + + @FaceSessionPreparer() + @recorded_by_proxy_async + async def test_delete_liveness_session(self, facesession_endpoint): + client = self.create_async_client(endpoint=facesession_endpoint) + response = await client.delete_liveness_session( + session_id="str", + ) + + # please add some check logic here by yourself + # ... + + @FaceSessionPreparer() + @recorded_by_proxy_async + async def test_get_liveness_session_result(self, facesession_endpoint): + client = self.create_async_client(endpoint=facesession_endpoint) + response = await client.get_liveness_session_result( + session_id="str", + ) + + # please add some check logic here by yourself + # ... + + @FaceSessionPreparer() + @recorded_by_proxy_async + async def test_get_liveness_sessions(self, facesession_endpoint): + client = self.create_async_client(endpoint=facesession_endpoint) + response = await client.get_liveness_sessions() + + # please add some check logic here by yourself + # ... 
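`get_session_image` and `detect_from_session_image`, exercised above, are the session-image operations new in this release. A hedged sketch of how they chain together, assuming `session_image_id` was surfaced by the result of a completed session that was created with `enable_session_image=True`, and that `client` is a `FaceSessionClient`:

```python
# Both calls key off the session image id alone; that the id comes from a
# completed session with enableSessionImage turned on is an assumption here.
image_bytes = client.get_session_image(session_image_id=session_image_id)

detect_result = client.detect_from_session_image(session_image_id=session_image_id)
for face in detect_result:
    print(face.as_dict())  # same face payload shape as the detect samples elsewhere in this diff
```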
+ + @FaceSessionPreparer() + @recorded_by_proxy_async + async def test_get_liveness_session_audit_entries(self, facesession_endpoint): + client = self.create_async_client(endpoint=facesession_endpoint) + response = await client.get_liveness_session_audit_entries( + session_id="str", + ) + + # please add some check logic here by yourself + # ... + + @FaceSessionPreparer() + @recorded_by_proxy_async + async def test_delete_liveness_with_verify_session(self, facesession_endpoint): + client = self.create_async_client(endpoint=facesession_endpoint) + response = await client.delete_liveness_with_verify_session( + session_id="str", + ) + + # please add some check logic here by yourself + # ... + + @FaceSessionPreparer() + @recorded_by_proxy_async + async def test_get_liveness_with_verify_session_result(self, facesession_endpoint): + client = self.create_async_client(endpoint=facesession_endpoint) + response = await client.get_liveness_with_verify_session_result( + session_id="str", + ) + + # please add some check logic here by yourself + # ... + + @FaceSessionPreparer() + @recorded_by_proxy_async + async def test_get_liveness_with_verify_sessions(self, facesession_endpoint): + client = self.create_async_client(endpoint=facesession_endpoint) + response = await client.get_liveness_with_verify_sessions() + + # please add some check logic here by yourself + # ... + + @FaceSessionPreparer() + @recorded_by_proxy_async + async def test_get_liveness_with_verify_session_audit_entries(self, facesession_endpoint): + client = self.create_async_client(endpoint=facesession_endpoint) + response = await client.get_liveness_with_verify_session_audit_entries( + session_id="str", + ) + + # please add some check logic here by yourself + # ... + + @FaceSessionPreparer() + @recorded_by_proxy_async + async def test_detect_from_session_image(self, facesession_endpoint): + client = self.create_async_client(endpoint=facesession_endpoint) + response = await client.detect_from_session_image( + body={"sessionImageId": "str"}, + session_image_id="str", + ) + + # please add some check logic here by yourself + # ... + + @FaceSessionPreparer() + @recorded_by_proxy_async + async def test_get_session_image(self, facesession_endpoint): + client = self.create_async_client(endpoint=facesession_endpoint) + response = await client.get_session_image( + session_image_id="str", + ) + + # please add some check logic here by yourself + # ... diff --git a/sdk/face/azure-ai-vision-face/generated_tests/testpreparer.py b/sdk/face/azure-ai-vision-face/generated_tests/testpreparer.py new file mode 100644 index 000000000000..eac137c4227c --- /dev/null +++ b/sdk/face/azure-ai-vision-face/generated_tests/testpreparer.py @@ -0,0 +1,56 @@ +# coding=utf-8 +# -------------------------------------------------------------------------- +# Copyright (c) Microsoft Corporation. All rights reserved. +# Licensed under the MIT License. See License.txt in the project root for license information. +# Code generated by Microsoft (R) Python Code Generator. +# Changes may cause incorrect behavior and will be lost if the code is regenerated. 
+# -------------------------------------------------------------------------- +from azure.ai.vision.face import FaceAdministrationClient, FaceClient, FaceSessionClient +from devtools_testutils import AzureRecordedTestCase, PowerShellPreparer +import functools + + +class FaceAdministrationClientTestBase(AzureRecordedTestCase): + + def create_client(self, endpoint): + credential = self.get_credential(FaceAdministrationClient) + return self.create_client_from_credential( + FaceAdministrationClient, + credential=credential, + endpoint=endpoint, + ) + + +FaceAdministrationPreparer = functools.partial( + PowerShellPreparer, "faceadministration", faceadministration_endpoint="https://fake_faceadministration_endpoint.com" +) + + +class FaceClientTestBase(AzureRecordedTestCase): + + def create_client(self, endpoint): + credential = self.get_credential(FaceClient) + return self.create_client_from_credential( + FaceClient, + credential=credential, + endpoint=endpoint, + ) + + +FacePreparer = functools.partial(PowerShellPreparer, "face", face_endpoint="https://fake_face_endpoint.com") + + +class FaceSessionClientTestBase(AzureRecordedTestCase): + + def create_client(self, endpoint): + credential = self.get_credential(FaceSessionClient) + return self.create_client_from_credential( + FaceSessionClient, + credential=credential, + endpoint=endpoint, + ) + + +FaceSessionPreparer = functools.partial( + PowerShellPreparer, "facesession", facesession_endpoint="https://fake_facesession_endpoint.com" +) diff --git a/sdk/face/azure-ai-vision-face/generated_tests/testpreparer_async.py b/sdk/face/azure-ai-vision-face/generated_tests/testpreparer_async.py new file mode 100644 index 000000000000..4ab6e112f20d --- /dev/null +++ b/sdk/face/azure-ai-vision-face/generated_tests/testpreparer_async.py @@ -0,0 +1,42 @@ +# coding=utf-8 +# -------------------------------------------------------------------------- +# Copyright (c) Microsoft Corporation. All rights reserved. +# Licensed under the MIT License. See License.txt in the project root for license information. +# Code generated by Microsoft (R) Python Code Generator. +# Changes may cause incorrect behavior and will be lost if the code is regenerated. 
+# -------------------------------------------------------------------------- +from azure.ai.vision.face.aio import FaceAdministrationClient, FaceClient, FaceSessionClient +from devtools_testutils import AzureRecordedTestCase + + +class FaceAdministrationClientTestBaseAsync(AzureRecordedTestCase): + + def create_async_client(self, endpoint): + credential = self.get_credential(FaceAdministrationClient, is_async=True) + return self.create_client_from_credential( + FaceAdministrationClient, + credential=credential, + endpoint=endpoint, + ) + + +class FaceClientTestBaseAsync(AzureRecordedTestCase): + + def create_async_client(self, endpoint): + credential = self.get_credential(FaceClient, is_async=True) + return self.create_client_from_credential( + FaceClient, + credential=credential, + endpoint=endpoint, + ) + + +class FaceSessionClientTestBaseAsync(AzureRecordedTestCase): + + def create_async_client(self, endpoint): + credential = self.get_credential(FaceSessionClient, is_async=True) + return self.create_client_from_credential( + FaceSessionClient, + credential=credential, + endpoint=endpoint, + ) diff --git a/sdk/face/azure-ai-vision-face/pyproject.toml b/sdk/face/azure-ai-vision-face/pyproject.toml index 0817f7c7a6c2..dc70880286ce 100644 --- a/sdk/face/azure-ai-vision-face/pyproject.toml +++ b/sdk/face/azure-ai-vision-face/pyproject.toml @@ -1,2 +1,4 @@ [tool.generate] autorest-post-process = true +[tool.azure-sdk-build] +verifytypes = false diff --git a/sdk/face/azure-ai-vision-face/samples/README.md b/sdk/face/azure-ai-vision-face/samples/README.md index 807080ce38fc..6d87062c6aff 100644 --- a/sdk/face/azure-ai-vision-face/samples/README.md +++ b/sdk/face/azure-ai-vision-face/samples/README.md @@ -34,6 +34,8 @@ Several Azure Face Python SDK samples are available to you in the SDK's GitHub r * From a faceId array * From a large face list +* [sample_verify_and_identify_from_large_person_group.py](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/face/azure-ai-vision-face/samples/sample_verify_and_identify_from_large_person_group.py) ([async version](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/face/azure-ai-vision-face/samples/sample_verify_and_identify_from_large_person_group_async.py)) - Examples for verifying and identifying faces from a large person group. 
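That new sample is the quickest reference for the two LargePersonGroup recognition calls; condensed to its essentials, and with illustrative ids (the group is assumed to exist and to have been trained already), it amounts to:

```python
from azure.core.credentials import AzureKeyCredential
from azure.ai.vision.face import FaceClient

# endpoint/key are assumed to be configured as in the other samples; face_id comes
# from a prior detect() call and person_id from a person registered in the group.
with FaceClient(endpoint=endpoint, credential=AzureKeyCredential(key)) as face_client:
    # One-to-one: does this detected face belong to the given person?
    verify_result = face_client.verify_from_large_person_group(
        face_id=face_id,
        large_person_group_id="my-lpg",  # illustrative, previously trained group
        person_id=person_id,
    )

    # One-to-many: which persons in the group could this face belong to?
    identify_results = face_client.identify_from_large_person_group(
        face_ids=[face_id],
        large_person_group_id="my-lpg",
    )
```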
+ ## Prerequisites * Python 3.8 or later is required to use this package * You must have an [Azure subscription](https://azure.microsoft.com/free/) and an [Face APIs account](https://learn.microsoft.com/azure/ai-services/computer-vision/overview-identity) diff --git a/sdk/face/azure-ai-vision-face/samples/sample_authentication.py b/sdk/face/azure-ai-vision-face/samples/sample_authentication.py index 54562a238a39..773f1341a1e4 100644 --- a/sdk/face/azure-ai-vision-face/samples/sample_authentication.py +++ b/sdk/face/azure-ai-vision-face/samples/sample_authentication.py @@ -44,12 +44,8 @@ class FaceAuthentication: def __init__(self): load_dotenv(find_dotenv()) - self.endpoint = os.getenv( - CONFIGURATION_NAME_FACE_API_ENDPOINT, DEFAULT_FACE_API_ENDPOINT - ) - self.key = os.getenv( - CONFIGURATION_NAME_FACE_API_ACCOUNT_KEY, DEFAULT_FACE_API_ACCOUNT_KEY - ) + self.endpoint = os.getenv(CONFIGURATION_NAME_FACE_API_ENDPOINT, DEFAULT_FACE_API_ENDPOINT) + self.key = os.getenv(CONFIGURATION_NAME_FACE_API_ACCOUNT_KEY, DEFAULT_FACE_API_ACCOUNT_KEY) self.logger = get_logger("sample_authentication") def authentication_by_api_key(self): @@ -58,14 +54,12 @@ def authentication_by_api_key(self): from azure.ai.vision.face.models import FaceDetectionModel, FaceRecognitionModel self.logger.info("Instantiate a FaceClient using an api key") - with FaceClient( - endpoint=self.endpoint, credential=AzureKeyCredential(self.key) - ) as face_client: + with FaceClient(endpoint=self.endpoint, credential=AzureKeyCredential(self.key)) as face_client: sample_file_path = helpers.get_image_path(TestImages.DEFAULT_IMAGE_FILE) result = face_client.detect( helpers.read_file_content(sample_file_path), - detection_model=FaceDetectionModel.DETECTION_03, - recognition_model=FaceRecognitionModel.RECOGNITION_04, + detection_model=FaceDetectionModel.DETECTION03, + recognition_model=FaceRecognitionModel.RECOGNITION04, return_face_id=False, ) @@ -80,14 +74,12 @@ def authentication_by_aad_credential(self): from azure.ai.vision.face.models import FaceDetectionModel, FaceRecognitionModel self.logger.info("Instantiate a FaceClient using a TokenCredential") - with FaceClient( - endpoint=self.endpoint, credential=DefaultAzureCredential() - ) as face_client: + with FaceClient(endpoint=self.endpoint, credential=DefaultAzureCredential()) as face_client: sample_file_path = helpers.get_image_path(TestImages.DEFAULT_IMAGE_FILE) result = face_client.detect( helpers.read_file_content(sample_file_path), - detection_model=FaceDetectionModel.DETECTION_03, - recognition_model=FaceRecognitionModel.RECOGNITION_04, + detection_model=FaceDetectionModel.DETECTION03, + recognition_model=FaceRecognitionModel.RECOGNITION04, return_face_id=False, ) diff --git a/sdk/face/azure-ai-vision-face/samples/sample_authentication_async.py b/sdk/face/azure-ai-vision-face/samples/sample_authentication_async.py index 2f9763fe21e9..df8834129423 100644 --- a/sdk/face/azure-ai-vision-face/samples/sample_authentication_async.py +++ b/sdk/face/azure-ai-vision-face/samples/sample_authentication_async.py @@ -45,12 +45,8 @@ class FaceAuthentication: def __init__(self): load_dotenv(find_dotenv()) - self.endpoint = os.getenv( - CONFIGURATION_NAME_FACE_API_ENDPOINT, DEFAULT_FACE_API_ENDPOINT - ) - self.key = os.getenv( - CONFIGURATION_NAME_FACE_API_ACCOUNT_KEY, DEFAULT_FACE_API_ACCOUNT_KEY - ) + self.endpoint = os.getenv(CONFIGURATION_NAME_FACE_API_ENDPOINT, DEFAULT_FACE_API_ENDPOINT) + self.key = os.getenv(CONFIGURATION_NAME_FACE_API_ACCOUNT_KEY, DEFAULT_FACE_API_ACCOUNT_KEY) 
self.logger = get_logger("sample_authentication_async") async def authentication_by_api_key(self): @@ -59,14 +55,12 @@ async def authentication_by_api_key(self): from azure.ai.vision.face.models import FaceDetectionModel, FaceRecognitionModel self.logger.info("Instantiate a FaceClient using an api key") - async with FaceClient( - endpoint=self.endpoint, credential=AzureKeyCredential(self.key) - ) as face_client: + async with FaceClient(endpoint=self.endpoint, credential=AzureKeyCredential(self.key)) as face_client: sample_file_path = helpers.get_image_path(TestImages.DEFAULT_IMAGE_FILE) result = await face_client.detect( helpers.read_file_content(sample_file_path), - detection_model=FaceDetectionModel.DETECTION_03, - recognition_model=FaceRecognitionModel.RECOGNITION_04, + detection_model=FaceDetectionModel.DETECTION03, + recognition_model=FaceRecognitionModel.RECOGNITION04, return_face_id=False, ) @@ -87,8 +81,8 @@ async def authentication_by_aad_credential(self): sample_file_path = helpers.get_image_path(TestImages.DEFAULT_IMAGE_FILE) result = await face_client.detect( helpers.read_file_content(sample_file_path), - detection_model=FaceDetectionModel.DETECTION_03, - recognition_model=FaceRecognitionModel.RECOGNITION_04, + detection_model=FaceDetectionModel.DETECTION03, + recognition_model=FaceRecognitionModel.RECOGNITION04, return_face_id=False, ) diff --git a/sdk/face/azure-ai-vision-face/samples/sample_face_detection.py b/sdk/face/azure-ai-vision-face/samples/sample_face_detection.py index 5f010bdaaa4e..5ade629d657f 100644 --- a/sdk/face/azure-ai-vision-face/samples/sample_face_detection.py +++ b/sdk/face/azure-ai-vision-face/samples/sample_face_detection.py @@ -36,12 +36,8 @@ class DetectFaces: def __init__(self): load_dotenv(find_dotenv()) - self.endpoint = os.getenv( - CONFIGURATION_NAME_FACE_API_ENDPOINT, DEFAULT_FACE_API_ENDPOINT - ) - self.key = os.getenv( - CONFIGURATION_NAME_FACE_API_ACCOUNT_KEY, DEFAULT_FACE_API_ACCOUNT_KEY - ) + self.endpoint = os.getenv(CONFIGURATION_NAME_FACE_API_ENDPOINT, DEFAULT_FACE_API_ENDPOINT) + self.key = os.getenv(CONFIGURATION_NAME_FACE_API_ACCOUNT_KEY, DEFAULT_FACE_API_ACCOUNT_KEY) self.logger = get_logger("sample_face_detection") def detect(self): @@ -54,14 +50,12 @@ def detect(self): FaceAttributeTypeRecognition04, ) - with FaceClient( - endpoint=self.endpoint, credential=AzureKeyCredential(self.key) - ) as face_client: + with FaceClient(endpoint=self.endpoint, credential=AzureKeyCredential(self.key)) as face_client: sample_file_path = helpers.get_image_path(TestImages.IMAGE_DETECTION_5) result = face_client.detect( helpers.read_file_content(sample_file_path), - detection_model=FaceDetectionModel.DETECTION_03, - recognition_model=FaceRecognitionModel.RECOGNITION_04, + detection_model=FaceDetectionModel.DETECTION03, + recognition_model=FaceRecognitionModel.RECOGNITION04, return_face_id=True, return_face_attributes=[ FaceAttributeTypeDetection03.BLUR, @@ -88,14 +82,12 @@ def detect_from_url(self): FaceAttributeTypeDetection01, ) - with FaceClient( - endpoint=self.endpoint, credential=AzureKeyCredential(self.key) - ) as face_client: + with FaceClient(endpoint=self.endpoint, credential=AzureKeyCredential(self.key)) as face_client: sample_url = TestImages.DEFAULT_IMAGE_URL result = face_client.detect_from_url( url=sample_url, - detection_model=FaceDetectionModel.DETECTION_01, - recognition_model=FaceRecognitionModel.RECOGNITION_04, + detection_model=FaceDetectionModel.DETECTION01, + recognition_model=FaceRecognitionModel.RECOGNITION04, 
return_face_id=False, return_face_attributes=[ FaceAttributeTypeDetection01.ACCESSORIES, diff --git a/sdk/face/azure-ai-vision-face/samples/sample_face_detection_async.py b/sdk/face/azure-ai-vision-face/samples/sample_face_detection_async.py index 71fe86f1f97a..47f17d5749b3 100644 --- a/sdk/face/azure-ai-vision-face/samples/sample_face_detection_async.py +++ b/sdk/face/azure-ai-vision-face/samples/sample_face_detection_async.py @@ -37,12 +37,8 @@ class DetectFaces: def __init__(self): load_dotenv(find_dotenv()) - self.endpoint = os.getenv( - CONFIGURATION_NAME_FACE_API_ENDPOINT, DEFAULT_FACE_API_ENDPOINT - ) - self.key = os.getenv( - CONFIGURATION_NAME_FACE_API_ACCOUNT_KEY, DEFAULT_FACE_API_ACCOUNT_KEY - ) + self.endpoint = os.getenv(CONFIGURATION_NAME_FACE_API_ENDPOINT, DEFAULT_FACE_API_ENDPOINT) + self.key = os.getenv(CONFIGURATION_NAME_FACE_API_ACCOUNT_KEY, DEFAULT_FACE_API_ACCOUNT_KEY) self.logger = get_logger("sample_face_detection_async") async def detect(self): @@ -55,14 +51,12 @@ async def detect(self): FaceAttributeTypeRecognition04, ) - async with FaceClient( - endpoint=self.endpoint, credential=AzureKeyCredential(self.key) - ) as face_client: + async with FaceClient(endpoint=self.endpoint, credential=AzureKeyCredential(self.key)) as face_client: sample_file_path = helpers.get_image_path(TestImages.IMAGE_DETECTION_5) result = await face_client.detect( helpers.read_file_content(sample_file_path), - detection_model=FaceDetectionModel.DETECTION_03, - recognition_model=FaceRecognitionModel.RECOGNITION_04, + detection_model=FaceDetectionModel.DETECTION03, + recognition_model=FaceRecognitionModel.RECOGNITION04, return_face_id=True, return_face_attributes=[ FaceAttributeTypeDetection03.BLUR, @@ -89,14 +83,12 @@ async def detect_from_url(self): FaceAttributeTypeDetection01, ) - async with FaceClient( - endpoint=self.endpoint, credential=AzureKeyCredential(self.key) - ) as face_client: + async with FaceClient(endpoint=self.endpoint, credential=AzureKeyCredential(self.key)) as face_client: sample_url = TestImages.DEFAULT_IMAGE_URL result = await face_client.detect_from_url( url=sample_url, - detection_model=FaceDetectionModel.DETECTION_01, - recognition_model=FaceRecognitionModel.RECOGNITION_04, + detection_model=FaceDetectionModel.DETECTION01, + recognition_model=FaceRecognitionModel.RECOGNITION04, return_face_id=False, return_face_attributes=[ FaceAttributeTypeDetection01.ACCESSORIES, diff --git a/sdk/face/azure-ai-vision-face/samples/sample_face_grouping.py b/sdk/face/azure-ai-vision-face/samples/sample_face_grouping.py index 83a88f0242e8..943164d31a7f 100644 --- a/sdk/face/azure-ai-vision-face/samples/sample_face_grouping.py +++ b/sdk/face/azure-ai-vision-face/samples/sample_face_grouping.py @@ -36,12 +36,8 @@ class GroupFaces: def __init__(self): load_dotenv(find_dotenv()) - self.endpoint = os.getenv( - CONFIGURATION_NAME_FACE_API_ENDPOINT, DEFAULT_FACE_API_ENDPOINT - ) - self.key = os.getenv( - CONFIGURATION_NAME_FACE_API_ACCOUNT_KEY, DEFAULT_FACE_API_ACCOUNT_KEY - ) + self.endpoint = os.getenv(CONFIGURATION_NAME_FACE_API_ENDPOINT, DEFAULT_FACE_API_ENDPOINT) + self.key = os.getenv(CONFIGURATION_NAME_FACE_API_ACCOUNT_KEY, DEFAULT_FACE_API_ACCOUNT_KEY) self.logger = get_logger("sample_face_grouping") def group(self): @@ -49,21 +45,17 @@ def group(self): from azure.ai.vision.face import FaceClient from azure.ai.vision.face.models import FaceDetectionModel, FaceRecognitionModel - with FaceClient( - endpoint=self.endpoint, credential=AzureKeyCredential(self.key) - ) as face_client: + with 
FaceClient(endpoint=self.endpoint, credential=AzureKeyCredential(self.key)) as face_client: sample_file_path = helpers.get_image_path(TestImages.IMAGE_NINE_FACES) detect_result = face_client.detect( helpers.read_file_content(sample_file_path), - detection_model=FaceDetectionModel.DETECTION_03, - recognition_model=FaceRecognitionModel.RECOGNITION_04, + detection_model=FaceDetectionModel.DETECTION03, + recognition_model=FaceRecognitionModel.RECOGNITION04, return_face_id=True, ) face_ids = [str(face.face_id) for face in detect_result] - self.logger.info( - f"Detect {len(face_ids)} faces from the file '{sample_file_path}': {face_ids}" - ) + self.logger.info(f"Detect {len(face_ids)} faces from the file '{sample_file_path}': {face_ids}") group_result = face_client.group(face_ids=face_ids) self.logger.info(f"Group result: {beautify_json(group_result.as_dict())}") diff --git a/sdk/face/azure-ai-vision-face/samples/sample_face_grouping_async.py b/sdk/face/azure-ai-vision-face/samples/sample_face_grouping_async.py index b7909a60aa78..e307e0d6186d 100644 --- a/sdk/face/azure-ai-vision-face/samples/sample_face_grouping_async.py +++ b/sdk/face/azure-ai-vision-face/samples/sample_face_grouping_async.py @@ -37,12 +37,8 @@ class GroupFaces: def __init__(self): load_dotenv(find_dotenv()) - self.endpoint = os.getenv( - CONFIGURATION_NAME_FACE_API_ENDPOINT, DEFAULT_FACE_API_ENDPOINT - ) - self.key = os.getenv( - CONFIGURATION_NAME_FACE_API_ACCOUNT_KEY, DEFAULT_FACE_API_ACCOUNT_KEY - ) + self.endpoint = os.getenv(CONFIGURATION_NAME_FACE_API_ENDPOINT, DEFAULT_FACE_API_ENDPOINT) + self.key = os.getenv(CONFIGURATION_NAME_FACE_API_ACCOUNT_KEY, DEFAULT_FACE_API_ACCOUNT_KEY) self.logger = get_logger("sample_face_grouping_async") async def group(self): @@ -50,21 +46,17 @@ async def group(self): from azure.ai.vision.face.aio import FaceClient from azure.ai.vision.face.models import FaceDetectionModel, FaceRecognitionModel - async with FaceClient( - endpoint=self.endpoint, credential=AzureKeyCredential(self.key) - ) as face_client: + async with FaceClient(endpoint=self.endpoint, credential=AzureKeyCredential(self.key)) as face_client: sample_file_path = helpers.get_image_path(TestImages.IMAGE_NINE_FACES) detect_result = await face_client.detect( helpers.read_file_content(sample_file_path), - detection_model=FaceDetectionModel.DETECTION_03, - recognition_model=FaceRecognitionModel.RECOGNITION_04, + detection_model=FaceDetectionModel.DETECTION03, + recognition_model=FaceRecognitionModel.RECOGNITION04, return_face_id=True, ) face_ids = [str(face.face_id) for face in detect_result] - self.logger.info( - f"Detect {len(face_ids)} faces from the file '{sample_file_path}': {face_ids}" - ) + self.logger.info(f"Detect {len(face_ids)} faces from the file '{sample_file_path}': {face_ids}") group_result = await face_client.group(face_ids=face_ids) self.logger.info(f"Group result: {beautify_json(group_result.as_dict())}") diff --git a/sdk/face/azure-ai-vision-face/samples/sample_face_liveness_detection.py b/sdk/face/azure-ai-vision-face/samples/sample_face_liveness_detection.py index 5901c6f8c11e..e633fdc51f72 100644 --- a/sdk/face/azure-ai-vision-face/samples/sample_face_liveness_detection.py +++ b/sdk/face/azure-ai-vision-face/samples/sample_face_liveness_detection.py @@ -41,12 +41,8 @@ class DetectLiveness: def __init__(self): load_dotenv(find_dotenv()) - self.endpoint = os.getenv( - CONFIGURATION_NAME_FACE_API_ENDPOINT, DEFAULT_FACE_API_ENDPOINT - ) - self.key = os.getenv( - CONFIGURATION_NAME_FACE_API_ACCOUNT_KEY, 
DEFAULT_FACE_API_ACCOUNT_KEY - ) + self.endpoint = os.getenv(CONFIGURATION_NAME_FACE_API_ENDPOINT, DEFAULT_FACE_API_ENDPOINT) + self.key = os.getenv(CONFIGURATION_NAME_FACE_API_ACCOUNT_KEY, DEFAULT_FACE_API_ACCOUNT_KEY) self.logger = get_logger("sample_face_liveness_detection") def wait_for_liveness_check_request(self): @@ -63,9 +59,7 @@ def wait_for_liveness_session_complete(self): "Please refer to https://learn.microsoft.com/azure/ai-services/computer-vision/tutorials/liveness" " and use the mobile client SDK to perform liveness detection on your mobile application." ) - input( - "Press any key to continue when you complete these steps to run sample to get session results ..." - ) + input("Press any key to continue when you complete these steps to run sample to get session results ...") pass def livenessSession(self): @@ -79,9 +73,7 @@ def livenessSession(self): LivenessOperationMode, ) - with FaceSessionClient( - endpoint=self.endpoint, credential=AzureKeyCredential(self.key) - ) as face_session_client: + with FaceSessionClient(endpoint=self.endpoint, credential=AzureKeyCredential(self.key)) as face_session_client: # 1. Wait for liveness check request self.wait_for_liveness_check_request() @@ -110,17 +102,13 @@ def livenessSession(self): # 8. Query for the liveness detection result as the session is completed. self.logger.info("Get the liveness detection result.") - liveness_result = face_session_client.get_liveness_session_result( - created_session.session_id - ) + liveness_result = face_session_client.get_liveness_session_result(created_session.session_id) self.logger.info(f"Result: {beautify_json(liveness_result.as_dict())}") # Furthermore, you can query all request and response for this sessions, and list all sessions you have by # calling `get_liveness_session_audit_entries` and `get_liveness_sessions`. self.logger.info("Get the audit entries of this session.") - audit_entries = face_session_client.get_liveness_session_audit_entries( - created_session.session_id - ) + audit_entries = face_session_client.get_liveness_session_audit_entries(created_session.session_id) for idx, entry in enumerate(audit_entries): self.logger.info(f"----- Audit entries: #{idx+1}-----") self.logger.info(f"Entry: {beautify_json(entry.as_dict())}") diff --git a/sdk/face/azure-ai-vision-face/samples/sample_face_liveness_detection_async.py b/sdk/face/azure-ai-vision-face/samples/sample_face_liveness_detection_async.py index 375d74ae20e8..cc8e0d301103 100644 --- a/sdk/face/azure-ai-vision-face/samples/sample_face_liveness_detection_async.py +++ b/sdk/face/azure-ai-vision-face/samples/sample_face_liveness_detection_async.py @@ -42,12 +42,8 @@ class DetectLiveness: def __init__(self): load_dotenv(find_dotenv()) - self.endpoint = os.getenv( - CONFIGURATION_NAME_FACE_API_ENDPOINT, DEFAULT_FACE_API_ENDPOINT - ) - self.key = os.getenv( - CONFIGURATION_NAME_FACE_API_ACCOUNT_KEY, DEFAULT_FACE_API_ACCOUNT_KEY - ) + self.endpoint = os.getenv(CONFIGURATION_NAME_FACE_API_ENDPOINT, DEFAULT_FACE_API_ENDPOINT) + self.key = os.getenv(CONFIGURATION_NAME_FACE_API_ACCOUNT_KEY, DEFAULT_FACE_API_ACCOUNT_KEY) self.logger = get_logger("sample_face_liveness_detection_async") async def wait_for_liveness_check_request(self): @@ -64,9 +60,7 @@ async def wait_for_liveness_session_complete(self): "Please refer to https://learn.microsoft.com/azure/ai-services/computer-vision/tutorials/liveness" " and use the mobile client SDK to perform liveness detection on your mobile application." 
) - input( - "Press any key to continue when you complete these steps to run sample to get session results ..." - ) + input("Press any key to continue when you complete these steps to run sample to get session results ...") pass async def livenessSession(self): @@ -111,19 +105,13 @@ async def livenessSession(self): # 8. Query for the liveness detection result as the session is completed. self.logger.info("Get the liveness detection result.") - liveness_result = await face_session_client.get_liveness_session_result( - created_session.session_id - ) + liveness_result = await face_session_client.get_liveness_session_result(created_session.session_id) self.logger.info(f"Result: {beautify_json(liveness_result.as_dict())}") # Furthermore, you can query all request and response for this sessions, and list all sessions you have by # calling `get_liveness_session_audit_entries` and `get_liveness_sessions`. self.logger.info("Get the audit entries of this session.") - audit_entries = ( - await face_session_client.get_liveness_session_audit_entries( - created_session.session_id - ) - ) + audit_entries = await face_session_client.get_liveness_session_audit_entries(created_session.session_id) for idx, entry in enumerate(audit_entries): self.logger.info(f"----- Audit entries: #{idx+1}-----") self.logger.info(f"Entry: {beautify_json(entry.as_dict())}") @@ -136,9 +124,7 @@ async def livenessSession(self): # Clean up: delete the session self.logger.info("Delete the session.") - await face_session_client.delete_liveness_session( - created_session.session_id - ) + await face_session_client.delete_liveness_session(created_session.session_id) async def main(): diff --git a/sdk/face/azure-ai-vision-face/samples/sample_face_liveness_detection_with_verification.py b/sdk/face/azure-ai-vision-face/samples/sample_face_liveness_detection_with_verification.py index c9634f61bf00..3b8396d31b51 100644 --- a/sdk/face/azure-ai-vision-face/samples/sample_face_liveness_detection_with_verification.py +++ b/sdk/face/azure-ai-vision-face/samples/sample_face_liveness_detection_with_verification.py @@ -44,12 +44,8 @@ class DetectLivenessWithVerify: def __init__(self): load_dotenv(find_dotenv()) - self.endpoint = os.getenv( - CONFIGURATION_NAME_FACE_API_ENDPOINT, DEFAULT_FACE_API_ENDPOINT - ) - self.key = os.getenv( - CONFIGURATION_NAME_FACE_API_ACCOUNT_KEY, DEFAULT_FACE_API_ACCOUNT_KEY - ) + self.endpoint = os.getenv(CONFIGURATION_NAME_FACE_API_ENDPOINT, DEFAULT_FACE_API_ENDPOINT) + self.key = os.getenv(CONFIGURATION_NAME_FACE_API_ACCOUNT_KEY, DEFAULT_FACE_API_ACCOUNT_KEY) self.logger = get_logger("sample_face_liveness_detection_with_verification") def wait_for_liveness_check_request(self): @@ -66,9 +62,7 @@ def wait_for_liveness_session_complete(self): "Please refer to https://learn.microsoft.com/azure/ai-services/computer-vision/tutorials/liveness" " and use the mobile client SDK to perform liveness detection on your mobile application." ) - input( - "Press any key to continue when you complete these steps to run sample to get session results ..." - ) + input("Press any key to continue when you complete these steps to run sample to get session results ...") pass def livenessSessionWithVerify(self): @@ -82,20 +76,14 @@ def livenessSessionWithVerify(self): LivenessOperationMode, ) - with FaceSessionClient( - endpoint=self.endpoint, credential=AzureKeyCredential(self.key) - ) as face_session_client: + with FaceSessionClient(endpoint=self.endpoint, credential=AzureKeyCredential(self.key)) as face_session_client: # 1. 
Wait for liveness check request self.wait_for_liveness_check_request() # 2. Create a session with verify image. - verify_image_file_path = helpers.get_image_path( - TestImages.DEFAULT_IMAGE_FILE - ) + verify_image_file_path = helpers.get_image_path(TestImages.DEFAULT_IMAGE_FILE) - self.logger.info( - "Create a new liveness with verify session with verify image." - ) + self.logger.info("Create a new liveness with verify session with verify image.") created_session = face_session_client.create_liveness_with_verify_session( CreateLivenessSessionContent( liveness_operation_mode=LivenessOperationMode.PASSIVE, @@ -120,20 +108,14 @@ def livenessSessionWithVerify(self): # 8. Query for the liveness detection result as the session is completed. self.logger.info("Get the liveness detection result.") - liveness_result = ( - face_session_client.get_liveness_with_verify_session_result( - created_session.session_id - ) - ) + liveness_result = face_session_client.get_liveness_with_verify_session_result(created_session.session_id) self.logger.info(f"Result: {beautify_json(liveness_result.as_dict())}") # Furthermore, you can query all request and response for this sessions, and list all sessions you have by # calling `get_liveness_session_audit_entries` and `get_liveness_sessions`. self.logger.info("Get the audit entries of this session.") - audit_entries = ( - face_session_client.get_liveness_with_verify_session_audit_entries( - created_session.session_id - ) + audit_entries = face_session_client.get_liveness_with_verify_session_audit_entries( + created_session.session_id ) for idx, entry in enumerate(audit_entries): self.logger.info(f"----- Audit entries: #{idx+1}-----") @@ -147,9 +129,7 @@ def livenessSessionWithVerify(self): # Clean up: Delete the session self.logger.info("Delete the session.") - face_session_client.delete_liveness_with_verify_session( - created_session.session_id - ) + face_session_client.delete_liveness_with_verify_session(created_session.session_id) if __name__ == "__main__": diff --git a/sdk/face/azure-ai-vision-face/samples/sample_face_liveness_detection_with_verification_async.py b/sdk/face/azure-ai-vision-face/samples/sample_face_liveness_detection_with_verification_async.py index 14376aca0fc4..00861e0a05f0 100644 --- a/sdk/face/azure-ai-vision-face/samples/sample_face_liveness_detection_with_verification_async.py +++ b/sdk/face/azure-ai-vision-face/samples/sample_face_liveness_detection_with_verification_async.py @@ -45,15 +45,9 @@ class DetectLivenessWithVerify: def __init__(self): load_dotenv(find_dotenv()) - self.endpoint = os.getenv( - CONFIGURATION_NAME_FACE_API_ENDPOINT, DEFAULT_FACE_API_ENDPOINT - ) - self.key = os.getenv( - CONFIGURATION_NAME_FACE_API_ACCOUNT_KEY, DEFAULT_FACE_API_ACCOUNT_KEY - ) - self.logger = get_logger( - "sample_face_liveness_detection_with_verification_async" - ) + self.endpoint = os.getenv(CONFIGURATION_NAME_FACE_API_ENDPOINT, DEFAULT_FACE_API_ENDPOINT) + self.key = os.getenv(CONFIGURATION_NAME_FACE_API_ACCOUNT_KEY, DEFAULT_FACE_API_ACCOUNT_KEY) + self.logger = get_logger("sample_face_liveness_detection_with_verification_async") async def wait_for_liveness_check_request(self): # The logic to wait for liveness check request from mobile application. @@ -69,9 +63,7 @@ async def wait_for_liveness_session_complete(self): "Please refer to https://learn.microsoft.com/azure/ai-services/computer-vision/tutorials/liveness" " and use the mobile client SDK to perform liveness detection on your mobile application." 
) - input( - "Press any key to continue when you complete these steps to run sample to get session results ..." - ) + input("Press any key to continue when you complete these steps to run sample to get session results ...") pass async def livenessSessionWithVerify(self): @@ -92,13 +84,9 @@ async def livenessSessionWithVerify(self): await self.wait_for_liveness_check_request() # 2. Create a session with verify image. - verify_image_file_path = helpers.get_image_path( - TestImages.DEFAULT_IMAGE_FILE - ) + verify_image_file_path = helpers.get_image_path(TestImages.DEFAULT_IMAGE_FILE) - self.logger.info( - "Create a new liveness with verify session with verify image." - ) + self.logger.info("Create a new liveness with verify session with verify image.") created_session = await face_session_client.create_liveness_with_verify_session( CreateLivenessSessionContent( liveness_operation_mode=LivenessOperationMode.PASSIVE, @@ -123,10 +111,8 @@ async def livenessSessionWithVerify(self): # 8. Query for the liveness detection result as the session is completed. self.logger.info("Get the liveness detection result.") - liveness_result = ( - await face_session_client.get_liveness_with_verify_session_result( - created_session.session_id - ) + liveness_result = await face_session_client.get_liveness_with_verify_session_result( + created_session.session_id ) self.logger.info(f"Result: {beautify_json(liveness_result.as_dict())}") @@ -148,9 +134,7 @@ async def livenessSessionWithVerify(self): # Clean up: Delete the session self.logger.info("Delete the session.") - await face_session_client.delete_liveness_with_verify_session( - created_session.session_id - ) + await face_session_client.delete_liveness_with_verify_session(created_session.session_id) async def main(): diff --git a/sdk/face/azure-ai-vision-face/samples/sample_find_similar_faces.py b/sdk/face/azure-ai-vision-face/samples/sample_find_similar_faces.py index cd46330c89df..f457f407fa73 100644 --- a/sdk/face/azure-ai-vision-face/samples/sample_find_similar_faces.py +++ b/sdk/face/azure-ai-vision-face/samples/sample_find_similar_faces.py @@ -36,12 +36,8 @@ class FindSimilarFaces: def __init__(self): load_dotenv(find_dotenv()) - self.endpoint = os.getenv( - CONFIGURATION_NAME_FACE_API_ENDPOINT, DEFAULT_FACE_API_ENDPOINT - ) - self.key = os.getenv( - CONFIGURATION_NAME_FACE_API_ACCOUNT_KEY, DEFAULT_FACE_API_ACCOUNT_KEY - ) + self.endpoint = os.getenv(CONFIGURATION_NAME_FACE_API_ENDPOINT, DEFAULT_FACE_API_ENDPOINT) + self.key = os.getenv(CONFIGURATION_NAME_FACE_API_ACCOUNT_KEY, DEFAULT_FACE_API_ACCOUNT_KEY) self.logger = get_logger("sample_findsimilar_faces") def find_similar_from_face_ids(self): @@ -49,50 +45,132 @@ def find_similar_from_face_ids(self): from azure.ai.vision.face import FaceClient from azure.ai.vision.face.models import FaceDetectionModel, FaceRecognitionModel - with FaceClient( - endpoint=self.endpoint, credential=AzureKeyCredential(self.key) - ) as face_client: + with FaceClient(endpoint=self.endpoint, credential=AzureKeyCredential(self.key)) as face_client: # Detect faces from 'IMAGE_NINE_FACES' nine_faces_file_path = helpers.get_image_path(TestImages.IMAGE_NINE_FACES) detect_result1 = face_client.detect( helpers.read_file_content(nine_faces_file_path), - detection_model=FaceDetectionModel.DETECTION_03, - recognition_model=FaceRecognitionModel.RECOGNITION_04, + detection_model=FaceDetectionModel.DETECTION03, + recognition_model=FaceRecognitionModel.RECOGNITION04, return_face_id=True, ) face_ids = [str(face.face_id) for face in 
detect_result1] - self.logger.info( - f"Detect {len(face_ids)} faces from the file '{nine_faces_file_path}': {face_ids}" - ) + self.logger.info(f"Detect {len(face_ids)} faces from the file '{nine_faces_file_path}': {face_ids}") # Detect face from 'IMAGE_FINDSIMILAR' - find_similar_file_path = helpers.get_image_path( - TestImages.IMAGE_FINDSIMILAR - ) + find_similar_file_path = helpers.get_image_path(TestImages.IMAGE_FINDSIMILAR) detect_result2 = face_client.detect( helpers.read_file_content(find_similar_file_path), - detection_model=FaceDetectionModel.DETECTION_03, - recognition_model=FaceRecognitionModel.RECOGNITION_04, + detection_model=FaceDetectionModel.DETECTION03, + recognition_model=FaceRecognitionModel.RECOGNITION04, return_face_id=True, ) assert len(detect_result2) == 1 face_id = str(detect_result2[0].face_id) - self.logger.info( - f"Detect 1 face from the file '{find_similar_file_path}': {face_id}" - ) + self.logger.info(f"Detect 1 face from the file '{find_similar_file_path}': {face_id}") # Call Find Similar # The default find similar mode is MATCH_PERSON - find_similar_result1 = face_client.find_similar( - face_id=face_id, face_ids=face_ids - ) + find_similar_result1 = face_client.find_similar(face_id=face_id, face_ids=face_ids) self.logger.info("Find Similar with matchPerson mode:") for r in find_similar_result1: self.logger.info(f"{beautify_json(r.as_dict())}") + def find_similar_from_large_face_list(self): + from azure.core.credentials import AzureKeyCredential + from azure.ai.vision.face import FaceAdministrationClient, FaceClient + from azure.ai.vision.face.models import ( + FaceDetectionModel, + FaceRecognitionModel, + FindSimilarMatchMode, + ) + + with FaceAdministrationClient( + endpoint=self.endpoint, credential=AzureKeyCredential(self.key) + ) as face_admin_client, FaceClient( + endpoint=self.endpoint, credential=AzureKeyCredential(self.key) + ) as face_client: + + large_face_list_id = "lfl01" + # Prepare a LargeFaceList which contains several faces. 
+ self.logger.info(f"Create a LargeFaceList, id = {large_face_list_id}") + face_admin_client.large_face_list.create( + large_face_list_id, + name="List of Face", + user_data="Large Face List for Test", + recognition_model=FaceRecognitionModel.RECOGNITION04, + ) + + # Add faces into the largeFaceList + self.logger.info(f"Add faces into the LargeFaceList {large_face_list_id}") + face_admin_client.large_face_list.add_face( + large_face_list_id, + helpers.read_file_content(helpers.get_image_path(TestImages.IMAGE_FAMILY_1_MOM_1)), + detection_model=FaceDetectionModel.DETECTION02, + user_data="Lady1-1", + ) + face_admin_client.large_face_list.add_face( + large_face_list_id, + helpers.read_file_content(helpers.get_image_path(TestImages.IMAGE_FAMILY_1_MOM_2)), + detection_model=FaceDetectionModel.DETECTION02, + user_data="Lady1-2", + ) + face_admin_client.large_face_list.add_face( + large_face_list_id, + helpers.read_file_content(helpers.get_image_path(TestImages.IMAGE_FAMILY_2_LADY_1)), + detection_model=FaceDetectionModel.DETECTION02, + user_data="Lady2-1", + ) + face_admin_client.large_face_list.add_face( + large_face_list_id, + helpers.read_file_content(helpers.get_image_path(TestImages.IMAGE_FAMILY_2_LADY_2)), + detection_model=FaceDetectionModel.DETECTION02, + user_data="Lady2-2", + ) + face_admin_client.large_face_list.add_face( + large_face_list_id, + helpers.read_file_content(helpers.get_image_path(TestImages.IMAGE_FAMILY_3_LADY_1)), + detection_model=FaceDetectionModel.DETECTION02, + user_data="Lady3-1", + ) + + # The LargeFaceList should be trained to make it ready for find similar operation. + self.logger.info(f"Train the LargeFaceList {large_face_list_id}, and wait until the operation completes.") + poller = face_admin_client.large_face_list.begin_train(large_face_list_id, polling_interval=30) + poller.wait() # Keep polling until the "Train" operation completes. 
+ + # Detect face from 'IMAGE_FINDSIMILAR' + find_similar_file_path = helpers.get_image_path(TestImages.IMAGE_FINDSIMILAR) + detect_result = face_client.detect( + helpers.read_file_content(find_similar_file_path), + detection_model=FaceDetectionModel.DETECTION03, + recognition_model=FaceRecognitionModel.RECOGNITION04, + return_face_id=True, + ) + + assert len(detect_result) == 1 + face_id = str(detect_result[0].face_id) + self.logger.info(f"Detect 1 face from the file '{find_similar_file_path}': {face_id}") + + # Call Find Similar + find_similar_result = face_client.find_similar_from_large_face_list( + face_id=face_id, + large_face_list_id=large_face_list_id, + max_num_of_candidates_returned=3, + mode=FindSimilarMatchMode.MATCH_FACE, + ) + self.logger.info("Find Similar with matchFace mode:") + for r in find_similar_result: + self.logger.info(f"{beautify_json(r.as_dict())}") + + # Clean up: Remove the LargeFaceList + self.logger.info(f"Remove the LargeFaceList {large_face_list_id}") + face_admin_client.large_face_list.delete(large_face_list_id) + if __name__ == "__main__": sample = FindSimilarFaces() sample.find_similar_from_face_ids() + sample.find_similar_from_large_face_list() diff --git a/sdk/face/azure-ai-vision-face/samples/sample_find_similar_faces_async.py b/sdk/face/azure-ai-vision-face/samples/sample_find_similar_faces_async.py index 579ea44cd78e..cb6002fe177e 100644 --- a/sdk/face/azure-ai-vision-face/samples/sample_find_similar_faces_async.py +++ b/sdk/face/azure-ai-vision-face/samples/sample_find_similar_faces_async.py @@ -37,12 +37,8 @@ class FindSimilarFaces: def __init__(self): load_dotenv(find_dotenv()) - self.endpoint = os.getenv( - CONFIGURATION_NAME_FACE_API_ENDPOINT, DEFAULT_FACE_API_ENDPOINT - ) - self.key = os.getenv( - CONFIGURATION_NAME_FACE_API_ACCOUNT_KEY, DEFAULT_FACE_API_ACCOUNT_KEY - ) + self.endpoint = os.getenv(CONFIGURATION_NAME_FACE_API_ENDPOINT, DEFAULT_FACE_API_ENDPOINT) + self.key = os.getenv(CONFIGURATION_NAME_FACE_API_ACCOUNT_KEY, DEFAULT_FACE_API_ACCOUNT_KEY) self.logger = get_logger("sample_findsimilar_faces_async") async def find_similar_from_face_ids(self): @@ -50,53 +46,135 @@ async def find_similar_from_face_ids(self): from azure.ai.vision.face.aio import FaceClient from azure.ai.vision.face.models import FaceDetectionModel, FaceRecognitionModel - async with FaceClient( - endpoint=self.endpoint, credential=AzureKeyCredential(self.key) - ) as face_client: + async with FaceClient(endpoint=self.endpoint, credential=AzureKeyCredential(self.key)) as face_client: # Detect faces from 'IMAGE_NINE_FACES' nine_faces_file_path = helpers.get_image_path(TestImages.IMAGE_NINE_FACES) detect_result1 = await face_client.detect( helpers.read_file_content(nine_faces_file_path), - detection_model=FaceDetectionModel.DETECTION_03, - recognition_model=FaceRecognitionModel.RECOGNITION_04, + detection_model=FaceDetectionModel.DETECTION03, + recognition_model=FaceRecognitionModel.RECOGNITION04, return_face_id=True, ) face_ids = [str(face.face_id) for face in detect_result1] - self.logger.info( - f"Detect {len(face_ids)} faces from the file '{nine_faces_file_path}': {face_ids}" - ) + self.logger.info(f"Detect {len(face_ids)} faces from the file '{nine_faces_file_path}': {face_ids}") # Detect face from 'IMAGE_FINDSIMILAR' - find_similar_file_path = helpers.get_image_path( - TestImages.IMAGE_FINDSIMILAR - ) + find_similar_file_path = helpers.get_image_path(TestImages.IMAGE_FINDSIMILAR) detect_result2 = await face_client.detect( 
helpers.read_file_content(find_similar_file_path), - detection_model=FaceDetectionModel.DETECTION_03, - recognition_model=FaceRecognitionModel.RECOGNITION_04, + detection_model=FaceDetectionModel.DETECTION03, + recognition_model=FaceRecognitionModel.RECOGNITION04, return_face_id=True, ) assert len(detect_result2) == 1 face_id = str(detect_result2[0].face_id) - self.logger.info( - f"Detect 1 face from the file '{find_similar_file_path}': {face_id}" - ) + self.logger.info(f"Detect 1 face from the file '{find_similar_file_path}': {face_id}") # Call Find Similar # The default find similar mode is MATCH_PERSON - find_similar_result1 = await face_client.find_similar( - face_id=face_id, face_ids=face_ids - ) + find_similar_result1 = await face_client.find_similar(face_id=face_id, face_ids=face_ids) self.logger.info("Find Similar with matchPerson mode:") for r in find_similar_result1: self.logger.info(f"{beautify_json(r.as_dict())}") + async def find_similar_from_large_face_list(self): + from azure.core.credentials import AzureKeyCredential + from azure.ai.vision.face.aio import FaceAdministrationClient, FaceClient + from azure.ai.vision.face.models import ( + FaceDetectionModel, + FaceRecognitionModel, + FindSimilarMatchMode, + ) + + async with FaceAdministrationClient( + endpoint=self.endpoint, credential=AzureKeyCredential(self.key) + ) as face_admin_client, FaceClient( + endpoint=self.endpoint, credential=AzureKeyCredential(self.key) + ) as face_client: + + large_face_list_id = "lfl01" + # Prepare a LargeFaceList which contains several faces. + self.logger.info(f"Create a LargeFaceList, id = {large_face_list_id}") + await face_admin_client.large_face_list.create( + large_face_list_id, + name="List of Face", + user_data="Large Face List for Test", + recognition_model=FaceRecognitionModel.RECOGNITION04, + ) + + # Add faces into the LargeFaceList + self.logger.info(f"Add faces into the LargeFaceList {large_face_list_id}") + await face_admin_client.large_face_list.add_face( + large_face_list_id, + helpers.read_file_content(helpers.get_image_path(TestImages.IMAGE_FAMILY_1_MOM_1)), + detection_model=FaceDetectionModel.DETECTION02, + user_data="Lady1-1", + ) + await face_admin_client.large_face_list.add_face( + large_face_list_id, + helpers.read_file_content(helpers.get_image_path(TestImages.IMAGE_FAMILY_1_MOM_2)), + detection_model=FaceDetectionModel.DETECTION02, + user_data="Lady1-2", + ) + await face_admin_client.large_face_list.add_face( + large_face_list_id, + helpers.read_file_content(helpers.get_image_path(TestImages.IMAGE_FAMILY_2_LADY_1)), + detection_model=FaceDetectionModel.DETECTION02, + user_data="Lady2-1", + ) + await face_admin_client.large_face_list.add_face( + large_face_list_id, + helpers.read_file_content(helpers.get_image_path(TestImages.IMAGE_FAMILY_2_LADY_2)), + detection_model=FaceDetectionModel.DETECTION02, + user_data="Lady2-2", + ) + await face_admin_client.large_face_list.add_face( + large_face_list_id, + helpers.read_file_content(helpers.get_image_path(TestImages.IMAGE_FAMILY_3_LADY_1)), + detection_model=FaceDetectionModel.DETECTION02, + user_data="Lady3-1", + ) + + # The LargeFaceList should be trained to make it ready for the find similar operation. + self.logger.info(f"Train the LargeFaceList {large_face_list_id}, and wait until the operation completes.") + poller = await face_admin_client.large_face_list.begin_train(large_face_list_id, polling_interval=30) + await poller.wait() # Keep polling until the "Train" operation completes. 
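The find-similar call that follows uses `FindSimilarMatchMode.MATCH_FACE`, while the earlier face-ID samples rely on the default `MATCH_PERSON`: matchPerson filters candidates through a same-person check before returning them, whereas matchFace skips that check and ranks purely by facial similarity, so it may surface look-alikes who are different people. A small sketch of the same query in the default mode, reusing `face_client`, `large_face_list_id`, and the `face_id` detected in the step below:

```python
# Same query in the default matchPerson mode, for comparison: candidates the
# service does not judge to be the same person are filtered out, so this can
# return fewer hits than matchFace against the identical face list.
match_person_result = await face_client.find_similar_from_large_face_list(
    face_id=face_id,
    large_face_list_id=large_face_list_id,
    max_num_of_candidates_returned=3,
    mode=FindSimilarMatchMode.MATCH_PERSON,
)
```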
+ + # Detect face from 'IMAGE_FINDSIMILAR' + find_similar_file_path = helpers.get_image_path(TestImages.IMAGE_FINDSIMILAR) + detect_result = await face_client.detect( + helpers.read_file_content(find_similar_file_path), + detection_model=FaceDetectionModel.DETECTION03, + recognition_model=FaceRecognitionModel.RECOGNITION04, + return_face_id=True, + ) + + assert len(detect_result) == 1 + face_id = str(detect_result[0].face_id) + self.logger.info(f"Detect 1 face from the file '{find_similar_file_path}': {face_id}") + + # Call Find Similar + find_similar_result = await face_client.find_similar_from_large_face_list( + face_id=face_id, + large_face_list_id=large_face_list_id, + max_num_of_candidates_returned=3, + mode=FindSimilarMatchMode.MATCH_FACE, + ) + self.logger.info("Find Similar with matchFace mode:") + for r in find_similar_result: + self.logger.info(f"{beautify_json(r.as_dict())}") + + # Clean up: Remove the LargeFaceList + self.logger.info(f"Remove the LargeFaceList {large_face_list_id}") + await face_admin_client.large_face_list.delete(large_face_list_id) + async def main(): sample = FindSimilarFaces() await sample.find_similar_from_face_ids() + await sample.find_similar_from_large_face_list() if __name__ == "__main__": diff --git a/sdk/face/azure-ai-vision-face/samples/sample_stateless_face_verification.py b/sdk/face/azure-ai-vision-face/samples/sample_stateless_face_verification.py index e0b275137b4f..598c9184d60c 100644 --- a/sdk/face/azure-ai-vision-face/samples/sample_stateless_face_verification.py +++ b/sdk/face/azure-ai-vision-face/samples/sample_stateless_face_verification.py @@ -36,12 +36,8 @@ class StatelessFaceVerification: def __init__(self): load_dotenv(find_dotenv()) - self.endpoint = os.getenv( - CONFIGURATION_NAME_FACE_API_ENDPOINT, DEFAULT_FACE_API_ENDPOINT - ) - self.key = os.getenv( - CONFIGURATION_NAME_FACE_API_ACCOUNT_KEY, DEFAULT_FACE_API_ACCOUNT_KEY - ) + self.endpoint = os.getenv(CONFIGURATION_NAME_FACE_API_ENDPOINT, DEFAULT_FACE_API_ENDPOINT) + self.key = os.getenv(CONFIGURATION_NAME_FACE_API_ACCOUNT_KEY, DEFAULT_FACE_API_ACCOUNT_KEY) self.logger = get_logger("sample_stateless_face_verification") def verify_face_to_face(self): @@ -49,39 +45,31 @@ def verify_face_to_face(self): from azure.ai.vision.face import FaceClient from azure.ai.vision.face.models import FaceDetectionModel, FaceRecognitionModel - with FaceClient( - endpoint=self.endpoint, credential=AzureKeyCredential(self.key) - ) as face_client: + with FaceClient(endpoint=self.endpoint, credential=AzureKeyCredential(self.key)) as face_client: dad_picture1 = helpers.get_image_path(TestImages.IMAGE_FAMILY_1_DAD_1) detect_result1 = face_client.detect( helpers.read_file_content(dad_picture1), - detection_model=FaceDetectionModel.DETECTION_03, - recognition_model=FaceRecognitionModel.RECOGNITION_04, + detection_model=FaceDetectionModel.DETECTION03, + recognition_model=FaceRecognitionModel.RECOGNITION04, return_face_id=True, ) assert len(detect_result1) == 1 dad_face_id1 = str(detect_result1[0].face_id) - self.logger.info( - f"Detect 1 face from the file '{dad_picture1}': {dad_face_id1}" - ) + self.logger.info(f"Detect 1 face from the file '{dad_picture1}': {dad_face_id1}") dad_picture2 = helpers.get_image_path(TestImages.IMAGE_FAMILY_1_DAD_2) detect_result2 = face_client.detect( helpers.read_file_content(dad_picture2), - detection_model=FaceDetectionModel.DETECTION_03, - recognition_model=FaceRecognitionModel.RECOGNITION_04, + detection_model=FaceDetectionModel.DETECTION03, + 
recognition_model=FaceRecognitionModel.RECOGNITION04, return_face_id=True, ) assert len(detect_result2) == 1 dad_face_id2 = str(detect_result2[0].face_id) - self.logger.info( - f"Detect 1 face from the file '{dad_picture2}': {dad_face_id2}" - ) + self.logger.info(f"Detect 1 face from the file '{dad_picture2}': {dad_face_id2}") # Call Verify to check if dad_face_id1 and dad_face_id2 belong to the same person. - verify_result1 = face_client.verify_face_to_face( - face_id1=dad_face_id1, face_id2=dad_face_id2 - ) + verify_result1 = face_client.verify_face_to_face(face_id1=dad_face_id1, face_id2=dad_face_id2) self.logger.info( f"Verify if the faces in '{TestImages.IMAGE_FAMILY_1_DAD_1}' and '{TestImages.IMAGE_FAMILY_1_DAD_2}'" f" belongs to the same person." @@ -91,20 +79,16 @@ def verify_face_to_face(self): man_picture = helpers.get_image_path(TestImages.IMAGE_FAMILY_3_Man_1) detect_result3 = face_client.detect( helpers.read_file_content(man_picture), - detection_model=FaceDetectionModel.DETECTION_03, - recognition_model=FaceRecognitionModel.RECOGNITION_04, + detection_model=FaceDetectionModel.DETECTION03, + recognition_model=FaceRecognitionModel.RECOGNITION04, return_face_id=True, ) assert len(detect_result3) == 1 man_face_id = str(detect_result3[0].face_id) - self.logger.info( - f"Detect 1 face from the file '{man_picture}': {man_face_id}" - ) + self.logger.info(f"Detect 1 face from the file '{man_picture}': {man_face_id}") # Call Verify to check if dad_face_id1 and man_face_id belong to the same person. - verify_result2 = face_client.verify_face_to_face( - face_id1=dad_face_id1, face_id2=man_face_id - ) + verify_result2 = face_client.verify_face_to_face(face_id1=dad_face_id1, face_id2=man_face_id) self.logger.info( f"Verify if the faces in '{TestImages.IMAGE_FAMILY_1_DAD_1}' and '{TestImages.IMAGE_FAMILY_3_Man_1}'" f" belongs to the same person." 
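Both verification samples log the raw service response. `verify_face_to_face` returns a result exposing `is_identical` (the service's own thresholded verdict) and a numeric `confidence`; callers with stricter requirements can compare the confidence directly. A hedged sketch of that pattern (the `is_same_person` helper and the 0.7 cut-off below are illustrative assumptions, not part of the samples):

```python
# Illustrative helper: accept a match only above a caller-chosen confidence,
# rather than relying solely on the service's default is_identical verdict.
def is_same_person(verify_result, threshold: float = 0.7) -> bool:
    return verify_result.confidence >= threshold

same = is_same_person(verify_result1)  # verify_result1 from the sample above
```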
diff --git a/sdk/face/azure-ai-vision-face/samples/sample_stateless_face_verification_async.py b/sdk/face/azure-ai-vision-face/samples/sample_stateless_face_verification_async.py index 07b9471b7d74..5a3bcccb34d2 100644 --- a/sdk/face/azure-ai-vision-face/samples/sample_stateless_face_verification_async.py +++ b/sdk/face/azure-ai-vision-face/samples/sample_stateless_face_verification_async.py @@ -37,12 +37,8 @@ class StatelessFaceVerification: def __init__(self): load_dotenv(find_dotenv()) - self.endpoint = os.getenv( - CONFIGURATION_NAME_FACE_API_ENDPOINT, DEFAULT_FACE_API_ENDPOINT - ) - self.key = os.getenv( - CONFIGURATION_NAME_FACE_API_ACCOUNT_KEY, DEFAULT_FACE_API_ACCOUNT_KEY - ) + self.endpoint = os.getenv(CONFIGURATION_NAME_FACE_API_ENDPOINT, DEFAULT_FACE_API_ENDPOINT) + self.key = os.getenv(CONFIGURATION_NAME_FACE_API_ACCOUNT_KEY, DEFAULT_FACE_API_ACCOUNT_KEY) self.logger = get_logger("sample_stateless_face_verification_async") async def verify_face_to_face(self): @@ -50,39 +46,31 @@ async def verify_face_to_face(self): from azure.ai.vision.face.aio import FaceClient from azure.ai.vision.face.models import FaceDetectionModel, FaceRecognitionModel - async with FaceClient( - endpoint=self.endpoint, credential=AzureKeyCredential(self.key) - ) as face_client: + async with FaceClient(endpoint=self.endpoint, credential=AzureKeyCredential(self.key)) as face_client: dad_picture1 = helpers.get_image_path(TestImages.IMAGE_FAMILY_1_DAD_1) detect_result1 = await face_client.detect( helpers.read_file_content(dad_picture1), - detection_model=FaceDetectionModel.DETECTION_03, - recognition_model=FaceRecognitionModel.RECOGNITION_04, + detection_model=FaceDetectionModel.DETECTION03, + recognition_model=FaceRecognitionModel.RECOGNITION04, return_face_id=True, ) assert len(detect_result1) == 1 dad_face_id1 = str(detect_result1[0].face_id) - self.logger.info( - f"Detect 1 face from the file '{dad_picture1}': {dad_face_id1}" - ) + self.logger.info(f"Detect 1 face from the file '{dad_picture1}': {dad_face_id1}") dad_picture2 = helpers.get_image_path(TestImages.IMAGE_FAMILY_1_DAD_2) detect_result2 = await face_client.detect( helpers.read_file_content(dad_picture2), - detection_model=FaceDetectionModel.DETECTION_03, - recognition_model=FaceRecognitionModel.RECOGNITION_04, + detection_model=FaceDetectionModel.DETECTION03, + recognition_model=FaceRecognitionModel.RECOGNITION04, return_face_id=True, ) assert len(detect_result2) == 1 dad_face_id2 = str(detect_result2[0].face_id) - self.logger.info( - f"Detect 1 face from the file '{dad_picture2}': {dad_face_id2}" - ) + self.logger.info(f"Detect 1 face from the file '{dad_picture2}': {dad_face_id2}") # Call Verify to check if dad_face_id1 and dad_face_id2 belong to the same person. - verify_result1 = await face_client.verify_face_to_face( - face_id1=dad_face_id1, face_id2=dad_face_id2 - ) + verify_result1 = await face_client.verify_face_to_face(face_id1=dad_face_id1, face_id2=dad_face_id2) self.logger.info( f"Verify if the faces in '{TestImages.IMAGE_FAMILY_1_DAD_1}' and '{TestImages.IMAGE_FAMILY_1_DAD_2}'" f" belongs to the same person." 
@@ -92,20 +80,16 @@ async def verify_face_to_face(self): man_picture = helpers.get_image_path(TestImages.IMAGE_FAMILY_3_Man_1) detect_result3 = await face_client.detect( helpers.read_file_content(man_picture), - detection_model=FaceDetectionModel.DETECTION_03, - recognition_model=FaceRecognitionModel.RECOGNITION_04, + detection_model=FaceDetectionModel.DETECTION03, + recognition_model=FaceRecognitionModel.RECOGNITION04, return_face_id=True, ) assert len(detect_result3) == 1 man_face_id = str(detect_result3[0].face_id) - self.logger.info( - f"Detect 1 face from the file '{man_picture}': {man_face_id}" - ) + self.logger.info(f"Detect 1 face from the file '{man_picture}': {man_face_id}") # Call Verify to check if dad_face_id1 and man_face_id belong to the same person. - verify_result2 = await face_client.verify_face_to_face( - face_id1=dad_face_id1, face_id2=man_face_id - ) + verify_result2 = await face_client.verify_face_to_face(face_id1=dad_face_id1, face_id2=man_face_id) self.logger.info( f"Verify if the faces in '{TestImages.IMAGE_FAMILY_1_DAD_1}' and '{TestImages.IMAGE_FAMILY_3_Man_1}'" f" belongs to the same person." diff --git a/sdk/face/azure-ai-vision-face/samples/sample_verify_and_identify_from_large_person_group.py b/sdk/face/azure-ai-vision-face/samples/sample_verify_and_identify_from_large_person_group.py new file mode 100644 index 000000000000..f762b5a3f7c8 --- /dev/null +++ b/sdk/face/azure-ai-vision-face/samples/sample_verify_and_identify_from_large_person_group.py @@ -0,0 +1,168 @@ +# coding: utf-8 + +# ------------------------------------------------------------------------- +# Copyright (c) Microsoft Corporation. All rights reserved. +# Licensed under the MIT License. See License.txt in the project root for +# license information. +# -------------------------------------------------------------------------- +""" +FILE: sample_verify_and_identify_from_large_person_group.py + +DESCRIPTION: + This sample demonstrates how to verify and identify faces from a large person group. + +USAGE: + python sample_verify_and_identify_from_large_person_group.py + + Set the environment variables with your own values before running this sample: + 1) AZURE_FACE_API_ENDPOINT - the endpoint to your Face resource. + 2) AZURE_FACE_API_ACCOUNT_KEY - your Face API key. 
+""" +import os + +from dotenv import find_dotenv, load_dotenv + +from shared.constants import ( + CONFIGURATION_NAME_FACE_API_ACCOUNT_KEY, + CONFIGURATION_NAME_FACE_API_ENDPOINT, + DEFAULT_FACE_API_ACCOUNT_KEY, + DEFAULT_FACE_API_ENDPOINT, + TestImages, +) +from shared import helpers +from shared.helpers import beautify_json, get_logger + + +class VerifyAndIdentifyFromLargePersonGroup: + def __init__(self): + load_dotenv(find_dotenv()) + self.endpoint = os.getenv(CONFIGURATION_NAME_FACE_API_ENDPOINT, DEFAULT_FACE_API_ENDPOINT) + self.key = os.getenv(CONFIGURATION_NAME_FACE_API_ACCOUNT_KEY, DEFAULT_FACE_API_ACCOUNT_KEY) + self.logger = get_logger("sample_verify_and_identify_from_large_person_group") + + def verify_and_identify_from_large_person_group(self): + from azure.core.credentials import AzureKeyCredential + from azure.ai.vision.face import FaceAdministrationClient, FaceClient + from azure.ai.vision.face.models import ( + FaceDetectionModel, + FaceRecognitionModel, + ) + + with FaceAdministrationClient( + endpoint=self.endpoint, credential=AzureKeyCredential(self.key) + ) as face_admin_client, FaceClient( + endpoint=self.endpoint, credential=AzureKeyCredential(self.key) + ) as face_client: + + large_person_group_id = "lpg_family1" + # Prepare a LargePersonGroup which contains several person objects. + self.logger.info(f"Create a LargePersonGroup, id = {large_person_group_id}") + face_admin_client.large_person_group.create( + large_person_group_id, + name="Family 1", + user_data="A sweet family", + recognition_model=FaceRecognitionModel.RECOGNITION04, + ) + + # Add person and faces into the LargePersonGroup + self.logger.info("Add person and faces into the LargePersonGroup") + + person1 = face_admin_client.large_person_group.create_person( + large_person_group_id, name="Bill", user_data="Dad" + ) + face_admin_client.large_person_group.add_face( + large_person_group_id, + person1.person_id, + helpers.read_file_content(helpers.get_image_path(TestImages.IMAGE_FAMILY_1_DAD_1)), + user_data="Dad-1", + detection_model=FaceDetectionModel.DETECTION03, + ) + face_admin_client.large_person_group.add_face( + large_person_group_id, + person1.person_id, + helpers.read_file_content(helpers.get_image_path(TestImages.IMAGE_FAMILY_1_DAD_2)), + user_data="Dad-2", + detection_model=FaceDetectionModel.DETECTION03, + ) + + person2 = face_admin_client.large_person_group.create_person( + large_person_group_id, name="Clare", user_data="Mom" + ) + face_admin_client.large_person_group.add_face( + large_person_group_id, + person2.person_id, + helpers.read_file_content(helpers.get_image_path(TestImages.IMAGE_FAMILY_1_MOM_1)), + user_data="Mom-1", + detection_model=FaceDetectionModel.DETECTION03, + ) + face_admin_client.large_person_group.add_face( + large_person_group_id, + person2.person_id, + helpers.read_file_content(helpers.get_image_path(TestImages.IMAGE_FAMILY_1_MOM_2)), + user_data="Mom-2", + detection_model=FaceDetectionModel.DETECTION03, + ) + + person3 = face_admin_client.large_person_group.create_person( + large_person_group_id, name="Ron", user_data="Son" + ) + face_admin_client.large_person_group.add_face( + large_person_group_id, + person3.person_id, + helpers.read_file_content(helpers.get_image_path(TestImages.IMAGE_FAMILY_1_SON_1)), + user_data="Son-1", + detection_model=FaceDetectionModel.DETECTION03, + ) + face_admin_client.large_person_group.add_face( + large_person_group_id, + person3.person_id, + helpers.read_file_content(helpers.get_image_path(TestImages.IMAGE_FAMILY_1_SON_2)), + 
user_data="Son-2", + detection_model=FaceDetectionModel.DETECTION03, + ) + + # Train the LargePersonGroup + self.logger.info("Train the LargePersonGroup") + poller = face_admin_client.large_person_group.begin_train(large_person_group_id, polling_interval=5) + poller.wait() + + # Detect face from 'DAD_3' + dad_3_image = helpers.get_image_path(TestImages.IMAGE_FAMILY_1_DAD_3) + detect_result = face_client.detect( + helpers.read_file_content(dad_3_image), + detection_model=FaceDetectionModel.DETECTION03, + recognition_model=FaceRecognitionModel.RECOGNITION04, + return_face_id=True, + ) + + assert len(detect_result) == 1 + face_id = str(detect_result[0].face_id) + + # Verify the face with the person in LargePersonGroup + self.logger.info("Verify the face with Bill") + verify_result = face_client.verify_from_large_person_group( + face_id=face_id, large_person_group_id=large_person_group_id, person_id=person1.person_id + ) + self.logger.info(beautify_json(verify_result.as_dict())) + + self.logger.info("Verify the face with Clare") + verify_result = face_client.verify_from_large_person_group( + face_id=face_id, large_person_group_id=large_person_group_id, person_id=person2.person_id + ) + self.logger.info(beautify_json(verify_result.as_dict())) + + # Identify the face from the LargePersonGroup + self.logger.info("Identify the face from the LargePersonGroup") + identify_result = face_client.identify_from_large_person_group( + face_ids=[face_id], large_person_group_id=large_person_group_id + ) + self.logger.info(beautify_json(identify_result[0].as_dict())) + + # Clean up: Remove the LargePersonGroup + self.logger.info(f"Remove the LargePersonGroup {large_person_group_id}") + face_admin_client.large_person_group.delete(large_person_group_id) + + +if __name__ == "__main__": + sample = VerifyAndIdentifyFromLargePersonGroup() + sample.verify_and_identify_from_large_person_group() diff --git a/sdk/face/azure-ai-vision-face/samples/sample_verify_and_identify_from_large_person_group_async.py b/sdk/face/azure-ai-vision-face/samples/sample_verify_and_identify_from_large_person_group_async.py new file mode 100644 index 000000000000..55d3e17b747e --- /dev/null +++ b/sdk/face/azure-ai-vision-face/samples/sample_verify_and_identify_from_large_person_group_async.py @@ -0,0 +1,173 @@ +# coding: utf-8 + +# ------------------------------------------------------------------------- +# Copyright (c) Microsoft Corporation. All rights reserved. +# Licensed under the MIT License. See License.txt in the project root for +# license information. +# -------------------------------------------------------------------------- +""" +FILE: sample_verify_and_identify_from_large_person_group_async.py + +DESCRIPTION: + This sample demonstrates how to verify and identify faces from a large person group. + +USAGE: + python sample_verify_and_identify_from_large_person_group_async.py + + Set the environment variables with your own values before running this sample: + 1) AZURE_FACE_API_ENDPOINT - the endpoint to your Face resource. + 2) AZURE_FACE_API_ACCOUNT_KEY - your Face API key. 
+""" +import asyncio +import os + +from dotenv import find_dotenv, load_dotenv + +from shared.constants import ( + CONFIGURATION_NAME_FACE_API_ACCOUNT_KEY, + CONFIGURATION_NAME_FACE_API_ENDPOINT, + DEFAULT_FACE_API_ACCOUNT_KEY, + DEFAULT_FACE_API_ENDPOINT, + TestImages, +) +from shared import helpers +from shared.helpers import beautify_json, get_logger + + +class VerifyAndIdentifyFromLargePersonGroup: + def __init__(self): + load_dotenv(find_dotenv()) + self.endpoint = os.getenv(CONFIGURATION_NAME_FACE_API_ENDPOINT, DEFAULT_FACE_API_ENDPOINT) + self.key = os.getenv(CONFIGURATION_NAME_FACE_API_ACCOUNT_KEY, DEFAULT_FACE_API_ACCOUNT_KEY) + self.logger = get_logger("sample_verify_and_identify_from_large_person_group_async") + + async def verify_and_identify_from_large_person_group(self): + from azure.core.credentials import AzureKeyCredential + from azure.ai.vision.face.aio import FaceAdministrationClient, FaceClient + from azure.ai.vision.face.models import ( + FaceDetectionModel, + FaceRecognitionModel, + ) + + async with FaceAdministrationClient( + endpoint=self.endpoint, credential=AzureKeyCredential(self.key) + ) as face_admin_client, FaceClient( + endpoint=self.endpoint, credential=AzureKeyCredential(self.key) + ) as face_client: + + large_person_group_id = "lpg_family1" + # Prepare a LargePersonGroup which contains several person objects. + self.logger.info(f"Create a LargePersonGroup, id = {large_person_group_id}") + await face_admin_client.large_person_group.create( + large_person_group_id, + name="Family 1", + user_data="A sweet family", + recognition_model=FaceRecognitionModel.RECOGNITION04, + ) + + # Add person and faces into the LargePersonGroup + self.logger.info("Add person and faces into the LargePersonGroup") + + person1 = await face_admin_client.large_person_group.create_person( + large_person_group_id, name="Bill", user_data="Dad" + ) + await face_admin_client.large_person_group.add_face( + large_person_group_id, + person1.person_id, + helpers.read_file_content(helpers.get_image_path(TestImages.IMAGE_FAMILY_1_DAD_1)), + user_data="Dad-1", + detection_model=FaceDetectionModel.DETECTION03, + ) + await face_admin_client.large_person_group.add_face( + large_person_group_id, + person1.person_id, + helpers.read_file_content(helpers.get_image_path(TestImages.IMAGE_FAMILY_1_DAD_2)), + user_data="Dad-2", + detection_model=FaceDetectionModel.DETECTION03, + ) + + person2 = await face_admin_client.large_person_group.create_person( + large_person_group_id, name="Clare", user_data="Mom" + ) + await face_admin_client.large_person_group.add_face( + large_person_group_id, + person2.person_id, + helpers.read_file_content(helpers.get_image_path(TestImages.IMAGE_FAMILY_1_MOM_1)), + user_data="Mom-1", + detection_model=FaceDetectionModel.DETECTION03, + ) + await face_admin_client.large_person_group.add_face( + large_person_group_id, + person2.person_id, + helpers.read_file_content(helpers.get_image_path(TestImages.IMAGE_FAMILY_1_MOM_2)), + user_data="Mom-2", + detection_model=FaceDetectionModel.DETECTION03, + ) + + person3 = await face_admin_client.large_person_group.create_person( + large_person_group_id, name="Ron", user_data="Son" + ) + await face_admin_client.large_person_group.add_face( + large_person_group_id, + person3.person_id, + helpers.read_file_content(helpers.get_image_path(TestImages.IMAGE_FAMILY_1_SON_1)), + user_data="Son-1", + detection_model=FaceDetectionModel.DETECTION03, + ) + await face_admin_client.large_person_group.add_face( + large_person_group_id, + person3.person_id, 
+ helpers.read_file_content(helpers.get_image_path(TestImages.IMAGE_FAMILY_1_SON_2)), + user_data="Son-2", + detection_model=FaceDetectionModel.DETECTION03, + ) + + # Train the LargePersonGroup + self.logger.info("Train the LargePersonGroup") + poller = await face_admin_client.large_person_group.begin_train(large_person_group_id, polling_interval=5) + await poller.wait() + + # Detect face from 'DAD_3' + dad_3_image = helpers.get_image_path(TestImages.IMAGE_FAMILY_1_DAD_3) + detect_result = await face_client.detect( + helpers.read_file_content(dad_3_image), + detection_model=FaceDetectionModel.DETECTION03, + recognition_model=FaceRecognitionModel.RECOGNITION04, + return_face_id=True, + ) + + assert len(detect_result) == 1 + face_id = str(detect_result[0].face_id) + + # Verify the face with the person in LargePersonGroup + self.logger.info("Verify the face with Bill") + verify_result = await face_client.verify_from_large_person_group( + face_id=face_id, large_person_group_id=large_person_group_id, person_id=person1.person_id + ) + self.logger.info(beautify_json(verify_result.as_dict())) + + self.logger.info("Verify the face with Clare") + verify_result = await face_client.verify_from_large_person_group( + face_id=face_id, large_person_group_id=large_person_group_id, person_id=person2.person_id + ) + self.logger.info(beautify_json(verify_result.as_dict())) + + # Identify the face from the LargePersonGroup + self.logger.info("Identify the face from the LargePersonGroup") + identify_result = await face_client.identify_from_large_person_group( + face_ids=[face_id], large_person_group_id=large_person_group_id + ) + self.logger.info(beautify_json(identify_result[0].as_dict())) + + # Clean up: Remove the LargePersonGroup + self.logger.info(f"Remove the LargePersonGroup {large_person_group_id}") + await face_admin_client.large_person_group.delete(large_person_group_id) + + +async def main(): + sample = VerifyAndIdentifyFromLargePersonGroup() + await sample.verify_and_identify_from_large_person_group() + + +if __name__ == "__main__": + asyncio.run(main()) diff --git a/sdk/face/azure-ai-vision-face/samples/shared/helpers.py b/sdk/face/azure-ai-vision-face/samples/shared/helpers.py index 40def24e7284..b3b6e1578271 100644 --- a/sdk/face/azure-ai-vision-face/samples/shared/helpers.py +++ b/sdk/face/azure-ai-vision-face/samples/shared/helpers.py @@ -21,11 +21,7 @@ def get_logger(name): # create console handler handler = logging.StreamHandler() - handler.setFormatter( - logging.Formatter( - fmt="%(asctime)s %(levelname)s %(message)s", datefmt="%Y-%m-%d %H:%M:%S" - ) - ) + handler.setFormatter(logging.Formatter(fmt="%(asctime)s %(levelname)s %(message)s", datefmt="%Y-%m-%d %H:%M:%S")) # create Logger logger = logging.getLogger(name) @@ -42,9 +38,7 @@ def beautify_json(obj: typing.Dict[str, typing.Any]): def get_image_path(image_file_name: str): from .constants import TestImages - return Path(__file__).resolve().parent / ( - TestImages.IMAGE_PARENT_FOLDER + "/" + image_file_name - ) + return Path(__file__).resolve().parent / (TestImages.IMAGE_PARENT_FOLDER + "/" + image_file_name) def read_file_content(file_path: Path): diff --git a/sdk/face/azure-ai-vision-face/tests/_shared/asserter.py b/sdk/face/azure-ai-vision-face/tests/_shared/asserter.py index 40a4d87f02bf..e0f5722d8c85 100644 --- a/sdk/face/azure-ai-vision-face/tests/_shared/asserter.py +++ b/sdk/face/azure-ai-vision-face/tests/_shared/asserter.py @@ -32,9 +32,7 @@ def _assert_liveness_with_verify_image_not_empty( verify_image: 
models.LivenessWithVerifyImage, ): _assert_face_rectangle_not_empty(verify_image.face_rectangle) - assert isinstance( - verify_image.quality_for_recognition, models.QualityForRecognition - ) + assert isinstance(verify_image.quality_for_recognition, models.QualityForRecognition) def _assert_liveness_with_verify_outputs_not_empty( @@ -45,9 +43,7 @@ def _assert_liveness_with_verify_outputs_not_empty( assert output.is_identical is not None -def _assert_liveness_response_body_not_empty( - body: models.LivenessResponseBody, is_liveness_with_verify: bool = True -): +def _assert_liveness_response_body_not_empty(body: models.LivenessResponseBody, is_liveness_with_verify: bool = True): assert body.liveness_decision in models.FaceLivenessDecision _assert_liveness_outputs_target_not_empty(body.target) assert body.model_version_used in models.LivenessModel @@ -67,9 +63,7 @@ def _assert_session_audit_entry_request_info_not_empty( def _assert_session_audit_entry_response_info_not_empty( response: models.AuditLivenessResponseInfo, is_liveness_with_verify: bool = True ): - _assert_liveness_response_body_not_empty( - response.body, is_liveness_with_verify=is_liveness_with_verify - ) + _assert_liveness_response_body_not_empty(response.body, is_liveness_with_verify=is_liveness_with_verify) assert response.status_code > 0 assert response.latency_in_milliseconds > 0 diff --git a/sdk/face/azure-ai-vision-face/tests/_shared/helpers.py b/sdk/face/azure-ai-vision-face/tests/_shared/helpers.py index 81da59c9513d..f9665f11b2be 100644 --- a/sdk/face/azure-ai-vision-face/tests/_shared/helpers.py +++ b/sdk/face/azure-ai-vision-face/tests/_shared/helpers.py @@ -24,9 +24,7 @@ def get_account_key(**kwargs): def get_image_path(image_file_name: str): from .constants import TestImages - return Path(__file__).resolve().parent / ( - TestImages.IMAGE_PARENT_FOLDER + "/" + image_file_name - ) + return Path(__file__).resolve().parent / (TestImages.IMAGE_PARENT_FOLDER + "/" + image_file_name) def read_file_content(file_path: Path): diff --git a/sdk/face/azure-ai-vision-face/tests/_shared/testcase.py b/sdk/face/azure-ai-vision-face/tests/_shared/testcase.py deleted file mode 100644 index b6853948d712..000000000000 --- a/sdk/face/azure-ai-vision-face/tests/_shared/testcase.py +++ /dev/null @@ -1,32 +0,0 @@ -# coding: utf-8 - -# ------------------------------------------------------------------------- -# Copyright (c) Microsoft Corporation. All rights reserved. -# Licensed under the MIT License. See License.txt in the project root for -# license information. 
-# -------------------------------------------------------------------------- -from azure.ai.vision.face import ( - FaceClient, - FaceSessionClient, -) -from azure.core.credentials import AzureKeyCredential - - -class FaceClientTestCase: - def _set_up(self, endpoint, account_key) -> None: - self._client = FaceClient( - endpoint=endpoint, credential=AzureKeyCredential(account_key) - ) - - def _tear_down(self) -> None: - self._client.close() - - -class FaceSessionClientTestCase: - def _set_up(self, endpoint, account_key) -> None: - self._client = FaceSessionClient( - endpoint=endpoint, credential=AzureKeyCredential(account_key) - ) - - def _tear_down(self) -> None: - self._client.close() diff --git a/sdk/face/azure-ai-vision-face/tests/preparers.py b/sdk/face/azure-ai-vision-face/tests/preparers.py index 95d84870cf6b..e60328f7866d 100644 --- a/sdk/face/azure-ai-vision-face/tests/preparers.py +++ b/sdk/face/azure-ai-vision-face/tests/preparers.py @@ -31,8 +31,7 @@ def create_resource(self, name, **kwargs): self._client = self._client_cls(endpoint, AzureKeyCredential(account_key)) env_name = ( self._client_kwargs["client_env_name"] - if self._client_kwargs is not None - and "client_env_name" in self._client_kwargs + if self._client_kwargs is not None and "client_env_name" in self._client_kwargs else "client" ) @@ -49,15 +48,18 @@ def remove_resource(self, name, **kwargs): EnvironmentVariableLoader, "face", azure_face_api_endpoint="https://fakeendpoint.cognitiveservices.azure.com", - azure_face_api_name="fakeaccountname", azure_face_api_account_key="fakeaccountkey", ) FaceClientPreparer = functools.partial(ClientPreparer, Client.FaceClient) +FaceAdministrationClientPreparer = functools.partial( + ClientPreparer, Client.FaceAdministrationClient, client_kwargs={"client_env_name": "administration_client"} +) FaceSessionClientPreparer = functools.partial(ClientPreparer, Client.FaceSessionClient) # Async client AsyncFaceClientPreparer = functools.partial(ClientPreparer, AsyncClient.FaceClient) -AsyncFaceSessionClientPreparer = functools.partial( - ClientPreparer, AsyncClient.FaceSessionClient +AsyncFaceAdministrationClientPreparer = functools.partial( + ClientPreparer, AsyncClient.FaceAdministrationClient, client_kwargs={"client_env_name": "administration_client"} ) +AsyncFaceSessionClientPreparer = functools.partial(ClientPreparer, AsyncClient.FaceSessionClient) diff --git a/sdk/face/azure-ai-vision-face/tests/test_authentication.py b/sdk/face/azure-ai-vision-face/tests/test_authentication.py index 041dc3d4fc73..c226b3c8a692 100644 --- a/sdk/face/azure-ai-vision-face/tests/test_authentication.py +++ b/sdk/face/azure-ai-vision-face/tests/test_authentication.py @@ -27,8 +27,8 @@ def test_face_client_api_key_authentication(self, client, **kwargs): sample_file_path = helpers.get_image_path(TestImages.IMAGE_DETECTION_1) result = client.detect( helpers.read_file_content(sample_file_path), - detection_model=FaceDetectionModel.DETECTION_03, - recognition_model=FaceRecognitionModel.RECOGNITION_04, + detection_model=FaceDetectionModel.DETECTION03, + recognition_model=FaceRecognitionModel.RECOGNITION04, return_face_id=False, return_face_attributes=[ FaceAttributeTypeDetection03.HEAD_POSE, diff --git a/sdk/face/azure-ai-vision-face/tests/test_authentication_async.py b/sdk/face/azure-ai-vision-face/tests/test_authentication_async.py index b71182b1f286..b33dfdb45752 100644 --- a/sdk/face/azure-ai-vision-face/tests/test_authentication_async.py +++ b/sdk/face/azure-ai-vision-face/tests/test_authentication_async.py 
@@ -28,8 +28,8 @@ async def test_face_client_api_key_authentication(self, client, **kwargs): sample_file_path = helpers.get_image_path(TestImages.IMAGE_DETECTION_1) result = await client.detect( helpers.read_file_content(sample_file_path), - detection_model=FaceDetectionModel.DETECTION_03, - recognition_model=FaceRecognitionModel.RECOGNITION_04, + detection_model=FaceDetectionModel.DETECTION03, + recognition_model=FaceRecognitionModel.RECOGNITION04, return_face_id=False, return_face_attributes=[ FaceAttributeTypeDetection03.HEAD_POSE, diff --git a/sdk/face/azure-ai-vision-face/tests/test_find_similar.py b/sdk/face/azure-ai-vision-face/tests/test_find_similar.py new file mode 100644 index 000000000000..1228ddd2132a --- /dev/null +++ b/sdk/face/azure-ai-vision-face/tests/test_find_similar.py @@ -0,0 +1,74 @@ +# coding: utf-8 + +# ------------------------------------------------------------------------- +# Copyright (c) Microsoft Corporation. All rights reserved. +# Licensed under the MIT License. See License.txt in the project root for +# license information. +# -------------------------------------------------------------------------- +from devtools_testutils import AzureRecordedTestCase, recorded_by_proxy + +from azure.ai.vision.face import FaceClient, FaceAdministrationClient +from azure.ai.vision.face.models import ( + FaceDetectionModel, + FaceRecognitionModel, +) + +from preparers import FaceClientPreparer, FacePreparer, FaceAdministrationClientPreparer +from _shared.constants import TestImages +from _shared import helpers + + +class TestFindSimilar(AzureRecordedTestCase): + test_images = [TestImages.IMAGE_FAMILY_1_MOM_1, TestImages.IMAGE_FAMILY_2_LADY_1] + list_id = "findsimilar" + + def _setup_faces(self, client: FaceClient): + face_ids = [] + for image in self.test_images: + image_path = helpers.get_image_path(image) + result = client.detect( + helpers.read_file_content(image_path), + detection_model=FaceDetectionModel.DETECTION03, + recognition_model=FaceRecognitionModel.RECOGNITION04, + return_face_id=True, + ) + face_ids.append(result[0].face_id) + return face_ids + + def _setup_large_face_list(self, client: FaceAdministrationClient): + operations = client.large_face_list + operations.create(self.list_id, name=self.list_id, recognition_model=FaceRecognitionModel.RECOGNITION04) + + persisted_face_ids = [] + for image in self.test_images: + image_path = helpers.get_image_path(image) + result = operations.add_face( + self.list_id, helpers.read_file_content(image_path), detection_model=FaceDetectionModel.DETECTION03 + ) + assert result.persisted_face_id + persisted_face_ids.append(result.persisted_face_id) + + poller = operations.begin_train(self.list_id, polling_interval=3) + poller.wait() + + return persisted_face_ids + + # TODO: Use fixtures to replace teardown methods + def _teardown_large_face_list(self, client: FaceAdministrationClient): + client.large_face_list.delete(self.list_id) + + @FacePreparer() + @FaceClientPreparer() + @FaceAdministrationClientPreparer() + @recorded_by_proxy + def test_find_similar_from_large_face_list( + self, client: FaceClient, administration_client: FaceAdministrationClient + ): + face_ids = self._setup_faces(client) + persisted_face_ids = self._setup_large_face_list(administration_client) + + similar_faces = client.find_similar_from_large_face_list(face_id=face_ids[0], large_face_list_id=self.list_id) + assert similar_faces[0].persisted_face_id == persisted_face_ids[0] + assert similar_faces[0].confidence > 0.9 + + 
self._teardown_large_face_list(administration_client) diff --git a/sdk/face/azure-ai-vision-face/tests/test_find_similar_async.py b/sdk/face/azure-ai-vision-face/tests/test_find_similar_async.py new file mode 100644 index 000000000000..8d2a006e8ecf --- /dev/null +++ b/sdk/face/azure-ai-vision-face/tests/test_find_similar_async.py @@ -0,0 +1,77 @@ +# coding: utf-8 + +# ------------------------------------------------------------------------- +# Copyright (c) Microsoft Corporation. All rights reserved. +# Licensed under the MIT License. See License.txt in the project root for +# license information. +# -------------------------------------------------------------------------- +from devtools_testutils import AzureRecordedTestCase +from devtools_testutils.aio import recorded_by_proxy_async + +from azure.ai.vision.face.aio import FaceClient, FaceAdministrationClient +from azure.ai.vision.face.models import ( + FaceDetectionModel, + FaceRecognitionModel, +) + +from preparers import AsyncFaceClientPreparer, FacePreparer, AsyncFaceAdministrationClientPreparer +from _shared.constants import TestImages +from _shared import helpers + + +class TestFindSimilarAsync(AzureRecordedTestCase): + test_images = [TestImages.IMAGE_FAMILY_1_MOM_1, TestImages.IMAGE_FAMILY_2_LADY_1] + list_id = "findsimilar" + + async def _setup_faces(self, client: FaceClient): + face_ids = [] + for image in self.test_images: + image_path = helpers.get_image_path(image) + result = await client.detect( + helpers.read_file_content(image_path), + detection_model=FaceDetectionModel.DETECTION03, + recognition_model=FaceRecognitionModel.RECOGNITION04, + return_face_id=True, + ) + face_ids.append(result[0].face_id) + return face_ids + + async def _setup_large_face_list(self, client: FaceAdministrationClient): + operations = client.large_face_list + await operations.create(self.list_id, name=self.list_id, recognition_model=FaceRecognitionModel.RECOGNITION04) + + persisted_face_ids = [] + for image in self.test_images: + image_path = helpers.get_image_path(image) + result = await operations.add_face( + self.list_id, helpers.read_file_content(image_path), detection_model=FaceDetectionModel.DETECTION03 + ) + assert result.persisted_face_id + persisted_face_ids.append(result.persisted_face_id) + + poller = await operations.begin_train(self.list_id, polling_interval=3) + await poller.wait() + + return persisted_face_ids + + # TODO: Use fixtures to replace teardown methods + async def _teardown_large_face_list(self, client: FaceAdministrationClient): + await client.large_face_list.delete(self.list_id) + + @FacePreparer() + @AsyncFaceClientPreparer() + @AsyncFaceAdministrationClientPreparer() + @recorded_by_proxy_async + async def test_find_similar_from_large_face_list( + self, client: FaceClient, administration_client: FaceAdministrationClient + ): + face_ids = await self._setup_faces(client) + persisted_face_ids = await self._setup_large_face_list(administration_client) + + similar_faces = await client.find_similar_from_large_face_list( + face_id=face_ids[0], large_face_list_id=self.list_id + ) + assert similar_faces[0].persisted_face_id == persisted_face_ids[0] + assert similar_faces[0].confidence > 0.9 + + await self._teardown_large_face_list(administration_client) diff --git a/sdk/face/azure-ai-vision-face/tests/test_identify.py b/sdk/face/azure-ai-vision-face/tests/test_identify.py new file mode 100644 index 000000000000..11d8edb85e4c --- /dev/null +++ b/sdk/face/azure-ai-vision-face/tests/test_identify.py @@ -0,0 +1,89 @@ +# coding: utf-8 
+ +# ------------------------------------------------------------------------- +# Copyright (c) Microsoft Corporation. All rights reserved. +# Licensed under the MIT License. See License.txt in the project root for +# license information. +# -------------------------------------------------------------------------- +from devtools_testutils import AzureRecordedTestCase, recorded_by_proxy + +from azure.ai.vision.face import FaceClient, FaceAdministrationClient +from azure.ai.vision.face.models import ( + FaceDetectionModel, + FaceRecognitionModel, +) + +from preparers import FaceClientPreparer, FacePreparer, FaceAdministrationClientPreparer +from _shared.constants import TestImages +from _shared import helpers + + +class TestIdentify(AzureRecordedTestCase): + test_images = [ + TestImages.IMAGE_FAMILY_1_DAD_1, + TestImages.IMAGE_FAMILY_1_DAUGHTER_1, + TestImages.IMAGE_FAMILY_1_MOM_1, + TestImages.IMAGE_FAMILY_1_SON_1, + ] + group_id = "identify" + + def _setup_faces(self, client: FaceClient): + face_ids = [] + image_path = helpers.get_image_path(TestImages.IMAGE_IDENTIFICATION1) + result = client.detect( + helpers.read_file_content(image_path), + detection_model=FaceDetectionModel.DETECTION03, + recognition_model=FaceRecognitionModel.RECOGNITION04, + return_face_id=True, + ) + for face in result: + face_ids.append(face.face_id) + return face_ids + + def _setup_group(self, operations): + operations.create(self.group_id, name=self.group_id, recognition_model=FaceRecognitionModel.RECOGNITION04) + + person_ids = [] + for image in self.test_images: + result = operations.create_person(self.group_id, name="test_person") + assert result.person_id + person_ids.append(result.person_id) + image_path = helpers.get_image_path(image) + operations.add_face( + self.group_id, + result.person_id, + helpers.read_file_content(image_path), + detection_model=FaceDetectionModel.DETECTION03, + ) + + poller = operations.begin_train(self.group_id) + poller.result() + + return person_ids + + def _teardown_group(self, operations): + operations.delete(self.group_id) + + @FacePreparer() + @FaceClientPreparer() + @FaceAdministrationClientPreparer() + @recorded_by_proxy + def test_identify_from_large_person_group( + self, client: FaceClient, administration_client: FaceAdministrationClient + ): + face_ids = self._setup_faces(client) + self._setup_group(administration_client.large_person_group) + + identify_result = client.identify_from_large_person_group( + face_ids=face_ids, large_person_group_id=self.group_id + ) + + assert len(identify_result) == len(face_ids) + for result in identify_result: + assert result.candidates is not None + assert result.face_id is not None + for candidate in result.candidates: + assert candidate.confidence is not None + assert candidate.person_id is not None + + self._teardown_group(administration_client.large_person_group) diff --git a/sdk/face/azure-ai-vision-face/tests/test_identify_async.py b/sdk/face/azure-ai-vision-face/tests/test_identify_async.py new file mode 100644 index 000000000000..de69d642d196 --- /dev/null +++ b/sdk/face/azure-ai-vision-face/tests/test_identify_async.py @@ -0,0 +1,90 @@ +# coding: utf-8 + +# ------------------------------------------------------------------------- +# Copyright (c) Microsoft Corporation. All rights reserved. +# Licensed under the MIT License. See License.txt in the project root for +# license information. 
+# -------------------------------------------------------------------------- +from devtools_testutils import AzureRecordedTestCase +from devtools_testutils.aio import recorded_by_proxy_async + +from azure.ai.vision.face.aio import FaceClient, FaceAdministrationClient +from azure.ai.vision.face.models import ( + FaceDetectionModel, + FaceRecognitionModel, +) + +from preparers import AsyncFaceClientPreparer, FacePreparer, AsyncFaceAdministrationClientPreparer +from _shared.constants import TestImages +from _shared import helpers + + +class TestIdentify(AzureRecordedTestCase): + test_images = [ + TestImages.IMAGE_FAMILY_1_DAD_1, + TestImages.IMAGE_FAMILY_1_DAUGHTER_1, + TestImages.IMAGE_FAMILY_1_MOM_1, + TestImages.IMAGE_FAMILY_1_SON_1, + ] + group_id = "identify" + + async def _setup_faces(self, client: FaceClient): + face_ids = [] + image_path = helpers.get_image_path(TestImages.IMAGE_IDENTIFICATION1) + result = await client.detect( + helpers.read_file_content(image_path), + detection_model=FaceDetectionModel.DETECTION03, + recognition_model=FaceRecognitionModel.RECOGNITION04, + return_face_id=True, + ) + for face in result: + face_ids.append(face.face_id) + return face_ids + + async def _setup_group(self, operations): + await operations.create(self.group_id, name=self.group_id, recognition_model=FaceRecognitionModel.RECOGNITION04) + + person_ids = [] + for image in self.test_images: + result = await operations.create_person(self.group_id, name="test_person") + assert result.person_id + person_ids.append(result.person_id) + image_path = helpers.get_image_path(image) + await operations.add_face( + self.group_id, + result.person_id, + helpers.read_file_content(image_path), + detection_model=FaceDetectionModel.DETECTION03, + ) + + poller = await operations.begin_train(self.group_id) + await poller.wait() + + return person_ids + + async def _teardown_group(self, operations): + await operations.delete(self.group_id) + + @FacePreparer() + @AsyncFaceClientPreparer() + @AsyncFaceAdministrationClientPreparer() + @recorded_by_proxy_async + async def test_identify_from_large_person_group( + self, client: FaceClient, administration_client: FaceAdministrationClient + ): + face_ids = await self._setup_faces(client) + await self._setup_group(administration_client.large_person_group) + + identify_result = await client.identify_from_large_person_group( + face_ids=face_ids, large_person_group_id=self.group_id + ) + + assert len(identify_result) == len(face_ids) + for result in identify_result: + assert result.candidates is not None + assert result.face_id is not None + for candidate in result.candidates: + assert candidate.confidence is not None + assert candidate.person_id is not None + + await self._teardown_group(administration_client.large_person_group) diff --git a/sdk/face/azure-ai-vision-face/tests/test_liveness_session.py b/sdk/face/azure-ai-vision-face/tests/test_liveness_session.py index 04d18fe37071..14399821f479 100644 --- a/sdk/face/azure-ai-vision-face/tests/test_liveness_session.py +++ b/sdk/face/azure-ai-vision-face/tests/test_liveness_session.py @@ -30,9 +30,7 @@ class TestLivenessSession(AzureRecordedTestCase): @recorded_by_proxy def test_create_session(self, client, **kwargs): variables = kwargs.pop("variables", {}) - recorded_device_correlation_id = variables.setdefault( - "deviceCorrelationId", str(uuid.uuid4()) - ) + recorded_device_correlation_id = variables.setdefault("deviceCorrelationId", str(uuid.uuid4())) # Test `create session` operation created_session = 
client.create_liveness_session( @@ -80,10 +78,7 @@ def test_list_sessions(self, client, **kwargs): created_session_dict[created_session.session_id] = dcid # Sort the dict by key because the `list sessions` operation returns sessions in ascending alphabetical order. - expected_dcid_queue = deque( - value - for _, value in sorted(created_session_dict.items(), key=lambda t: t[0]) - ) + expected_dcid_queue = deque(value for _, value in sorted(created_session_dict.items(), key=lambda t: t[0])) # Test `list sessions` operation result = client.get_liveness_sessions() @@ -107,9 +102,7 @@ def test_list_sessions(self, client, **kwargs): @recorded_by_proxy def test_get_session_result(self, client, **kwargs): variables = kwargs.pop("variables", {}) - recorded_session_id = variables.setdefault( - "sessionId", "5f8e0996-4ef0-4142-9b5d-e42fa5748a7e" - ) + recorded_session_id = variables.setdefault("sessionId", "5f8e0996-4ef0-4142-9b5d-e42fa5748a7e") session = client.get_liveness_session_result(recorded_session_id) assert session.created_date_time is not None @@ -133,9 +126,7 @@ def test_get_session_result(self, client, **kwargs): @recorded_by_proxy def test_get_session_audit_entries(self, client, **kwargs): variables = kwargs.pop("variables", {}) - recorded_session_id = variables.setdefault( - "sessionId", "5f8e0996-4ef0-4142-9b5d-e42fa5748a7e" - ) + recorded_session_id = variables.setdefault("sessionId", "5f8e0996-4ef0-4142-9b5d-e42fa5748a7e") entries = client.get_liveness_session_audit_entries(recorded_session_id) assert len(entries) == 2 @@ -153,9 +144,7 @@ def test_get_session_audit_entries(self, client, **kwargs): @recorded_by_proxy def test_delete_session(self, client, **kwargs): variables = kwargs.pop("variables", {}) - recorded_device_correlation_id = variables.setdefault( - "deviceCorrelationId", str(uuid.uuid4()) - ) + recorded_device_correlation_id = variables.setdefault("deviceCorrelationId", str(uuid.uuid4())) created_session = client.create_liveness_session( CreateLivenessSessionContent( diff --git a/sdk/face/azure-ai-vision-face/tests/test_liveness_session_async.py b/sdk/face/azure-ai-vision-face/tests/test_liveness_session_async.py index ac9846ff152b..ef7b5e5ef0e8 100644 --- a/sdk/face/azure-ai-vision-face/tests/test_liveness_session_async.py +++ b/sdk/face/azure-ai-vision-face/tests/test_liveness_session_async.py @@ -31,9 +31,7 @@ class TestLivenessSessionAsync(AzureRecordedTestCase): @recorded_by_proxy_async async def test_create_session(self, client, **kwargs): variables = kwargs.pop("variables", {}) - recorded_device_correlation_id = variables.setdefault( - "deviceCorrelationId", str(uuid.uuid4()) - ) + recorded_device_correlation_id = variables.setdefault("deviceCorrelationId", str(uuid.uuid4())) # Test `create session` operation created_session = await client.create_liveness_session( @@ -82,10 +80,7 @@ async def test_list_sessions(self, client, **kwargs): created_session_dict[created_session.session_id] = dcid # Sort the dict by key because the `list sessions` operation returns sessions in ascending alphabetical order. 
- expected_dcid_queue = deque( - value - for _, value in sorted(created_session_dict.items(), key=lambda t: t[0]) - ) + expected_dcid_queue = deque(value for _, value in sorted(created_session_dict.items(), key=lambda t: t[0])) # Test `list sessions` operation result = await client.get_liveness_sessions() @@ -110,9 +105,7 @@ async def test_list_sessions(self, client, **kwargs): @recorded_by_proxy_async async def test_get_session_result(self, client, **kwargs): variables = kwargs.pop("variables", {}) - recorded_session_id = variables.setdefault( - "sessionId", "5f8e0996-4ef0-4142-9b5d-e42fa5748a7e" - ) + recorded_session_id = variables.setdefault("sessionId", "5f8e0996-4ef0-4142-9b5d-e42fa5748a7e") session = await client.get_liveness_session_result(recorded_session_id) assert session.created_date_time is not None @@ -137,9 +130,7 @@ async def test_get_session_result(self, client, **kwargs): @recorded_by_proxy_async async def test_get_session_audit_entries(self, client, **kwargs): variables = kwargs.pop("variables", {}) - recorded_session_id = variables.setdefault( - "sessionId", "5f8e0996-4ef0-4142-9b5d-e42fa5748a7e" - ) + recorded_session_id = variables.setdefault("sessionId", "5f8e0996-4ef0-4142-9b5d-e42fa5748a7e") entries = await client.get_liveness_session_audit_entries(recorded_session_id) assert len(entries) == 2 @@ -158,9 +149,7 @@ async def test_get_session_audit_entries(self, client, **kwargs): @recorded_by_proxy_async async def test_delete_session(self, client, **kwargs): variables = kwargs.pop("variables", {}) - recorded_device_correlation_id = variables.setdefault( - "deviceCorrelationId", str(uuid.uuid4()) - ) + recorded_device_correlation_id = variables.setdefault("deviceCorrelationId", str(uuid.uuid4())) created_session = await client.create_liveness_session( CreateLivenessSessionContent( diff --git a/sdk/face/azure-ai-vision-face/tests/test_liveness_with_verify_session.py b/sdk/face/azure-ai-vision-face/tests/test_liveness_with_verify_session.py index 1d23311e4c86..7699f0ccfff1 100644 --- a/sdk/face/azure-ai-vision-face/tests/test_liveness_with_verify_session.py +++ b/sdk/face/azure-ai-vision-face/tests/test_liveness_with_verify_session.py @@ -33,9 +33,7 @@ class TestLivenessWithVerifySession(AzureRecordedTestCase): @recorded_by_proxy def test_create_session(self, client, **kwargs): variables = kwargs.pop("variables", {}) - recorded_device_correlation_id = variables.setdefault( - "deviceCorrelationId", str(uuid.uuid4()) - ) + recorded_device_correlation_id = variables.setdefault("deviceCorrelationId", str(uuid.uuid4())) # Test `create session` operation created_session = client.create_liveness_with_verify_session( @@ -64,9 +62,7 @@ def test_create_session(self, client, **kwargs): @recorded_by_proxy def test_create_session_with_verify_image(self, client, **kwargs): variables = kwargs.pop("variables", {}) - recorded_device_correlation_id = variables.setdefault( - "deviceCorrelationId", str(uuid.uuid4()) - ) + recorded_device_correlation_id = variables.setdefault("deviceCorrelationId", str(uuid.uuid4())) # verify_image sample_file_path = helpers.get_image_path(TestImages.IMAGE_DETECTION_1) @@ -121,10 +117,7 @@ def test_list_sessions(self, client, **kwargs): created_session_dict[created_session.session_id] = dcid # Sort the dict by key because the `list sessions` operation returns sessions in ascending alphabetical order. 
-        expected_dcid_queue = deque(
-            value
-            for _, value in sorted(created_session_dict.items(), key=lambda t: t[0])
-        )
+        expected_dcid_queue = deque(value for _, value in sorted(created_session_dict.items(), key=lambda t: t[0]))

         # Test `list sessions` operation
         result = client.get_liveness_with_verify_sessions()
@@ -148,9 +141,7 @@ def test_list_sessions(self, client, **kwargs):
     @recorded_by_proxy
     def test_get_session_result(self, client, **kwargs):
         variables = kwargs.pop("variables", {})
-        recorded_session_id = variables.setdefault(
-            "sessionId", "1b79f44d-d8e0-4652-8f2d-637c4205d854"
-        )
+        recorded_session_id = variables.setdefault("sessionId", "1b79f44d-d8e0-4652-8f2d-637c4205d854")

         session = client.get_liveness_with_verify_session_result(recorded_session_id)
         assert session.created_date_time is not None
@@ -174,13 +165,9 @@ def test_get_session_result(self, client, **kwargs):
     @recorded_by_proxy
     def test_get_session_audit_entries(self, client, **kwargs):
         variables = kwargs.pop("variables", {})
-        recorded_session_id = variables.setdefault(
-            "sessionId", "1b79f44d-d8e0-4652-8f2d-637c4205d854"
-        )
+        recorded_session_id = variables.setdefault("sessionId", "1b79f44d-d8e0-4652-8f2d-637c4205d854")

-        entries = client.get_liveness_with_verify_session_audit_entries(
-            recorded_session_id
-        )
+        entries = client.get_liveness_with_verify_session_audit_entries(recorded_session_id)
         assert len(entries) == 2
         for entry in entries:
             _assert_liveness_session_audit_entry_is_valid(
@@ -196,9 +183,7 @@ def test_get_session_audit_entries(self, client, **kwargs):
     @recorded_by_proxy
     def test_delete_session(self, client, **kwargs):
         variables = kwargs.pop("variables", {})
-        recorded_device_correlation_id = variables.setdefault(
-            "deviceCorrelationId", str(uuid.uuid4())
-        )
+        recorded_device_correlation_id = variables.setdefault("deviceCorrelationId", str(uuid.uuid4()))

         created_session = client.create_liveness_with_verify_session(
             CreateLivenessSessionContent(
diff --git a/sdk/face/azure-ai-vision-face/tests/test_liveness_with_verify_session_async.py b/sdk/face/azure-ai-vision-face/tests/test_liveness_with_verify_session_async.py
index 4e05b2a7a4cb..d3933ea9ef4a 100644
--- a/sdk/face/azure-ai-vision-face/tests/test_liveness_with_verify_session_async.py
+++ b/sdk/face/azure-ai-vision-face/tests/test_liveness_with_verify_session_async.py
@@ -34,9 +34,7 @@ class TestLivenessWithVerifySessionAsync(AzureRecordedTestCase):
     @recorded_by_proxy_async
     async def test_create_session(self, client, **kwargs):
         variables = kwargs.pop("variables", {})
-        recorded_device_correlation_id = variables.setdefault(
-            "deviceCorrelationId", str(uuid.uuid4())
-        )
+        recorded_device_correlation_id = variables.setdefault("deviceCorrelationId", str(uuid.uuid4()))

         # Test `create session` operation
         created_session = await client.create_liveness_with_verify_session(
@@ -66,9 +64,7 @@ async def test_create_session(self, client, **kwargs):
     @recorded_by_proxy_async
     async def test_create_session_with_verify_image(self, client, **kwargs):
         variables = kwargs.pop("variables", {})
-        recorded_device_correlation_id = variables.setdefault(
-            "deviceCorrelationId", str(uuid.uuid4())
-        )
+        recorded_device_correlation_id = variables.setdefault("deviceCorrelationId", str(uuid.uuid4()))

         # verify_image
         sample_file_path = helpers.get_image_path(TestImages.IMAGE_DETECTION_1)
@@ -124,10 +120,7 @@ async def test_list_sessions(self, client, **kwargs):
             created_session_dict[created_session.session_id] = dcid

         # Sort the dict by key because the `list sessions` operation returns sessions in ascending alphabetical order.
-        expected_dcid_queue = deque(
-            value
-            for _, value in sorted(created_session_dict.items(), key=lambda t: t[0])
-        )
+        expected_dcid_queue = deque(value for _, value in sorted(created_session_dict.items(), key=lambda t: t[0]))

         # Test `list sessions` operation
         result = await client.get_liveness_with_verify_sessions()
@@ -152,13 +145,9 @@ async def test_list_sessions(self, client, **kwargs):
     @recorded_by_proxy_async
     async def test_get_session_result(self, client, **kwargs):
         variables = kwargs.pop("variables", {})
-        recorded_session_id = variables.setdefault(
-            "sessionId", "1b79f44d-d8e0-4652-8f2d-637c4205d854"
-        )
+        recorded_session_id = variables.setdefault("sessionId", "1b79f44d-d8e0-4652-8f2d-637c4205d854")

-        session = await client.get_liveness_with_verify_session_result(
-            recorded_session_id
-        )
+        session = await client.get_liveness_with_verify_session_result(recorded_session_id)
         assert session.created_date_time is not None
         assert session.session_start_date_time is not None
         assert isinstance(session.session_expired, bool)
@@ -181,13 +170,9 @@ async def test_get_session_result(self, client, **kwargs):
     @recorded_by_proxy_async
     async def test_get_session_audit_entries(self, client, **kwargs):
         variables = kwargs.pop("variables", {})
-        recorded_session_id = variables.setdefault(
-            "sessionId", "1b79f44d-d8e0-4652-8f2d-637c4205d854"
-        )
+        recorded_session_id = variables.setdefault("sessionId", "1b79f44d-d8e0-4652-8f2d-637c4205d854")

-        entries = await client.get_liveness_with_verify_session_audit_entries(
-            recorded_session_id
-        )
+        entries = await client.get_liveness_with_verify_session_audit_entries(recorded_session_id)
         assert len(entries) == 2
         for entry in entries:
             _assert_liveness_session_audit_entry_is_valid(
@@ -204,9 +189,7 @@ async def test_get_session_audit_entries(self, client, **kwargs):
     @recorded_by_proxy_async
     async def test_delete_session(self, client, **kwargs):
         variables = kwargs.pop("variables", {})
-        recorded_device_correlation_id = variables.setdefault(
-            "deviceCorrelationId", str(uuid.uuid4())
-        )
+        recorded_device_correlation_id = variables.setdefault("deviceCorrelationId", str(uuid.uuid4()))

         created_session = await client.create_liveness_with_verify_session(
             CreateLivenessSessionContent(
diff --git a/sdk/face/azure-ai-vision-face/tsp-location.yaml b/sdk/face/azure-ai-vision-face/tsp-location.yaml
index 272d2f0e253b..ceff84974c6e 100644
--- a/sdk/face/azure-ai-vision-face/tsp-location.yaml
+++ b/sdk/face/azure-ai-vision-face/tsp-location.yaml
@@ -1,4 +1,4 @@
 directory: specification/ai/Face
-commit: 1d2253d1e221541cf05ae5d0dd95bd28c0846238
+commit: 4037b28c1014648f4cfa6f8c965e45f2476652e2
 repo: Azure/azure-rest-api-specs
 additionalDirectories: