diff --git a/.gitignore b/.gitignore index db72a2e..6905b46 100644 --- a/.gitignore +++ b/.gitignore @@ -8,3 +8,7 @@ docker/Dockerfile.env docker/final.env models samples/edgex_bridge/edgex/**/* +samples/kubernetes/values.yaml +samples/kubernetes/charts/ +samples/kubernetes/Chart.lock +samples/nginx/cert/* diff --git a/README.md b/README.md index a166b1c..807584c 100644 --- a/README.md +++ b/README.md @@ -36,7 +36,6 @@ The sample microservice includes five categories of media analytics pipelines. C | **[object_classification](pipelines/gstreamer/object_classification)** | As object_detection adding meta-data such as object subtype and color | **[object_tracking](pipelines/gstreamer/object_tracking)** | As object_classification adding tracking identifier to meta-data | **[audio_detection](pipelines/gstreamer/audio_detection)** | Analyze audio streams for events such as breaking glass or barking dogs. -| [Preview] **[action_recognition](pipelines/gstreamer/action_recognition/general/README.md)** | Classifies general purpose actions in input video such as tying a bow tie or shaking hands. 
# Getting Started @@ -123,7 +122,6 @@ In new shell run the following command: ```text - object_classification/vehicle_attributes - audio_detection/environment - - action_recognition/general - object_tracking/object_line_crossing - object_tracking/person_vehicle_bike - object_detection/object_zone_count @@ -221,7 +219,7 @@ Starting pipeline object_detection/person_vehicle_bike, instance = 8ad2c85af4bd4 ``` ```bash -./client/pipeline_client.sh status object_detection/person_vehicle_bike 8ad2c85a-f4bd473e8a693aff562be316 +./client/pipeline_client.sh status object_detection/person_vehicle_bike 8ad2c85af4bd473e8a693aff562be316 ``` ```text @@ -259,7 +257,7 @@ The error state covers a number of outcomes such as the request could not be sat ```text -Starting pipeline object_detection/person_vehicle_bike, instance = 2bb2d219-310a4ee881faf258fbcc4355 +Starting pipeline object_detection/person_vehicle_bike, instance = 2bb2d219310a4ee881faf258fbcc4355 ``` Note that the Pipeline Server does not report an error at this stage as it goes into `QUEUED` state before it realizes that the source is not providing media. @@ -278,7 +276,7 @@ ERROR (0fps) ## Change Pipeline and Source Media -With pipeline_client it is easy to customize service requests. Here will use a vehicle classification pipeline `object_classification/vehicle_attributes` with the Iot Devkit video `car-detection.mp4`. Note how pipeline_client now displays classification metadata including type and color of vehicle. +With pipeline_client it is easy to customize service requests. Here we will use a vehicle classification pipeline `object_classification/vehicle_attributes` with the IoT Devkit video `car-detection.mp4`. Note how pipeline_client now displays classification metadata including type and color of vehicle. 
```bash ./client/pipeline_client.sh run object_classification/vehicle_attributes https://github.com/intel-iot-devkit/sample-videos/blob/master/car-detection.mp4?raw=true diff --git a/client/README.md b/client/README.md index c2b2586..999d634 100644 --- a/client/README.md +++ b/client/README.md @@ -16,8 +16,6 @@ Listing models: ``` - object_classification/vehicle_attributes - - action_recognition/encoder - - action_recognition/decoder - emotion_recognition/1 - audio_detection/environment - object_detection/person_vehicle_bike @@ -31,7 +29,6 @@ Listing pipelines: ``` - object_classification/vehicle_attributes - - action_recognition/general - audio_detection/environment - object_tracking/person_vehicle_bike - object_tracking/object_line_crossing @@ -40,10 +37,21 @@ Listing pipelines: ``` ### Running Pipelines + +All examples (including samples) that produce `file` output assume you will have already started Pipeline Server using a volume mount to the destination path; e.g., the `/tmp` folder in our examples. + + ``` + ./docker/run.sh -v /tmp:/tmp + ``` + +> **Important**: While Pipeline Server does support overriding the runtime user, keep in mind that by default all examples are designed to permit _current user_ access to files exported by Pipeline Server. + pipeline_client can be used to send pipeline start requests using the `run` command. With the `run` command you will need to enter two additional arguments the `pipeline` (in the form of pipeline_name/pipeline_version) you wish to use and the `uri` pointing to the media of your choice. 
``` -./client/pipeline_client.sh run object_detection/person_vehicle_bike https://github.com/intel-iot-devkit/sample-videos/blob/master/person-bicycle-car-detection.mp4?raw=true +./client/pipeline_client.sh run object_detection/person_vehicle_bike \ + https://github.com/intel-iot-devkit/sample-videos/blob/master/person-bicycle-car-detection.mp4?raw=true ``` + If the pipeline request is successful, an instance id is created and pipeline_client will print the instance. More on `instance_id` below. Once pre-roll is completed and pipeline begins running, the output file is processed by pipeline_client and inference information is printed to the screen in the following format: `label (confidence) [top left width height] {meta-data}` At the end of the pipeline run, the average fps is printed as well. If you wish to stop the pipeline mid-run, `Ctrl+C` will signal the client to send a `stop` command to the service. Once the pipeline is stopped, pipeline_client will output the average fps. More on `stop` below @@ -72,25 +80,43 @@ avg_fps: 39.66 However, if there are errors during pipeline execution i.e GPU is specified as detection device but is not present, pipeline_client will terminate with an error message ``` Pipeline instance = -Error in pipeline, please check pipeline-server log messages + +``` + +If the server was started without mounting `/tmp` you will see the message: + +``` +No results will be displayed. Unable to read from file +``` + +If the server and client are not started by the same user you will see the message: +``` +Unable to delete destination metadata file /tmp/results.jsonl ``` ### Starting Pipelines The `run` command is helpful for quickly showing inference results but `run` blocks until completion. If you want to do your own processing and only want to kickoff a pipeline, this can be done with the `start` command. `start` arguments are the same as `run`, you'll need to provide the `pipeline` and `uri`. 
Run the following command: ``` -./client/pipeline_client.sh start object_detection/person_vehicle_bike https://github.com/intel-iot-devkit/sample-videos/blob/master/person-bicycle-car-detection.mp4?raw=true +./client/pipeline_client.sh start object_detection/person_vehicle_bike \ + https://github.com/intel-iot-devkit/sample-videos/blob/master/person-bicycle-car-detection.mp4?raw=true ``` + +The `start` and `run` commands in all client examples require the following to successfully output results: + 1. The server volume mounts the /tmp folder (i.e. `-v /tmp:/tmp`) + 2. Both the server and client are started by the same user + Similar to `run`, if the pipeline request is successful, an instance id is created and pipeline_client will print the instance. More on `instance_id` below. ``` Pipeline instance = ``` Errors during pipeline execution are not flagged as pipeline_client exits after receiving instance id for a successful request. However, both `start` and `run` will flag invalid requests, for example: ``` -./client/pipeline_client.sh start object_detection/person_vehicle_bke https://github.com/intel-iot-devkit/sample-videos/blob/master/person-bicycle-car-detection.mp4?raw=true +./client/pipeline_client.sh start object_detection/person_vehicle_bke \ + https://github.com/intel-iot-devkit/sample-videos/blob/master/person-bicycle-car-detection.mp4?raw=true ``` The pipeline name has a typo `object_detection/person_vehicle_bke` making it invalid, this results in the error message: ``` -"Invalid Pipeline or Version" +400 - "Invalid Pipeline or Version" ``` #### Instance ID @@ -113,11 +139,15 @@ Querying the current state of the pipeline is done using the `status` command al ``` ./client/pipeline_client.sh status object_detection/person_vehicle_bike 0fe8f408ea2441bca8161e1190eefc51 ``` -pipeline_client will print the status of `QUEUED`, `RUNNING`, `ABORTED`, `COMPLETED` or `ERROR` and also fps. 
+pipeline_client will print the status, one of `QUEUED`, `RUNNING`, `ABORTED` or `COMPLETED` along with the fps, or `ERROR` along with the error message. ``` RUNNING (30fps) ``` +``` + +ERROR (Not Found (404), URL: https://github.com/intel-iot-devkit/sample.mp4, Redirect to: (NULL)) +``` ### Waiting for a pipeline to finish If you wish to wait for a pipeline to finish running you can use the `wait` command along with the `pipeline` and `instance id`: @@ -131,14 +161,16 @@ Querying the current state of the pipeline is done using the `list-instances` command. This example starts two pipelines and then gets their status and request details. ``` -./client/pipeline_client.sh start object_detection/person_vehicle_bike https://github.com/intel-iot-devkit/sample-videos/blob/master/person-bicycle-car-detection.mp4?raw=true +./client/pipeline_client.sh start object_detection/person_vehicle_bike \ + https://github.com/intel-iot-devkit/sample-videos/blob/master/person-bicycle-car-detection.mp4?raw=true ``` Output: ``` Starting pipeline object_detection/person_vehicle_bike, instance = 94cf72b718184615bfc181c6589b240c ``` ``` -./client/pipeline_client.sh start object_classification/vehicle_attributes https://github.com/intel-iot-devkit/sample-videos/blob/master/car-detection.mp4?raw=true +./client/pipeline_client.sh start object_classification/vehicle_attributes \ + https://github.com/intel-iot-devkit/sample-videos/blob/master/car-detection.mp4?raw=true ``` Output: ``` @@ -194,7 +226,8 @@ This optional argument is meant to handle logging verbosity common across all co #### Start pipeline_client output will just be the pipeline instance. 
``` -./client/pipeline_client.sh --quiet start object_detection/person_vehicle_bike https://github.com/intel-iot-devkit/sample-videos/blob/master/person-bicycle-car-detection.mp4?raw=true +./client/pipeline_client.sh --quiet start object_detection/person_vehicle_bike \ + https://github.com/intel-iot-devkit/sample-videos/blob/master/person-bicycle-car-detection.mp4?raw=true ``` ``` @@ -203,7 +236,8 @@ pipeline_client output will just be the pipeline instance. #### Run pipeline_client output will be the pipeline instance followed by inference results. ``` -./client/pipeline_client.sh --quiet run object_detection/person_vehicle_bike https://github.com/intel-iot-devkit/sample-videos/blob/master/person-bicycle-car-detection.mp4?raw=true +./client/pipeline_client.sh --quiet run object_detection/person_vehicle_bike \ + https://github.com/intel-iot-devkit/sample-videos/blob/master/person-bicycle-car-detection.mp4?raw=true ``` ``` @@ -238,15 +272,26 @@ By default, pipeline_client uses a generic template for destination: ``` Destination configuration can be updated with `--destination`. This argument affects only metadata part of the destination. In the following example, passing in `--destination path /tmp/newfile.jsonl` will update the filepath for saving inference result. -> **Note**: You may need to volume mount this new location when running Pipeline Server. -``` -./client/pipeline_client.sh start object_detection/person_vehicle_bike https://github.com/intel-iot-devkit/sample-videos/blob/master/person-bicycle-car-detection.mp4?raw=true --destination path /tmp/newfile.jsonl -``` + +> **Note**: To access files exported by Pipeline Server, remember that you must _volume mount_ the destination path (e.g., the `/tmp` folder for our examples) when Pipeline Server is started. 
+ ``` + docker/run.sh -v /tmp:/tmp + ``` + + ``` + ./client/pipeline_client.sh start object_detection/person_vehicle_bike \ + https://github.com/intel-iot-devkit/sample-videos/blob/master/person-bicycle-car-detection.mp4?raw=true \ + --destination path /tmp/newfile.jsonl + ``` If other destination types are specified (e.g. `mqtt` or `kafka` ), the pipeline will try to publish to specified broker and pipeline_client will subscribe to it and display published metadata. Here is an mqtt example using a broker on localhost. ``` -docker run -rm --network=host -d eclipse-mosquitto:1.6 -./client/pipeline_client.sh run object_detection/person_vehicle_bike https://github.com/intel-iot-devkit/sample-videos/blob/master/person-bicycle-car-detection.mp4?raw=true --destination type mqtt --destination host localhost:1883 --destination topic pipeline-server +docker run --rm --network=host -d eclipse-mosquitto:1.6 + +./client/pipeline_client.sh run object_detection/person_vehicle_bike \ + https://github.com/intel-iot-devkit/sample-videos/blob/master/person-bicycle-car-detection.mp4?raw=true \ + --destination type mqtt --destination host localhost:1883 \ + --destination topic pipeline-server ``` ``` Starting pipeline object_detection/person_vehicle_bike, instance = @@ -270,7 +315,9 @@ For example, adding `--rtsp-path new_path` will able you to view the stream at ` #### --parameter By default, pipeline_client relies on pipeline parameter defaults. This can be updated with `--parameter` option. See [Defining Pipelines](../docs/defining_pipelines.md) to know how parameters are defined. 
The following example adds `--parameter detection-device GPU` ``` -./client/pipeline_client.sh start object_detection/person_vehicle_bike https://github.com/intel-iot-devkit/sample-videos/blob/master/person-bicycle-car-detection.mp4?raw=true --parameter detection-device GPU +./client/pipeline_client.sh start object_detection/person_vehicle_bike \ + https://github.com/intel-iot-devkit/sample-videos/blob/master/person-bicycle-car-detection.mp4?raw=true \ + --parameter detection-device GPU ``` #### --parameter-file @@ -287,26 +334,34 @@ A sample parameter file can look like ``` The above file, say /tmp/sample_parameters.json may be used as follows: ``` -./client/pipeline_client.sh start object_detection/person_vehicle_bike https://github.com/intel-iot-devkit/sample-videos/blob/master/person-bicycle-car-detection.mp4?raw=true --parameter-file /tmp/sample_parameters.json +./client/pipeline_client.sh start object_detection/person_vehicle_bike \ + https://github.com/intel-iot-devkit/sample-videos/blob/master/person-bicycle-car-detection.mp4?raw=true \ + --parameter-file /tmp/sample_parameters.json ``` #### --tag Specifies a key, value pair to update request with. This information is added to each frame's metadata. This example adds tags for direction and location of video capture ``` -./client/pipeline_client.sh start object_detection/person_vehicle_bike https://github.com/intel-iot-devkit/sample-videos/blob/master/person-bicycle-car-detection.mp4?raw=true --tag direction east --tag camera_location parking_lot +./client/pipeline_client.sh start object_detection/person_vehicle_bike \ + https://github.com/intel-iot-devkit/sample-videos/blob/master/person-bicycle-car-detection.mp4?raw=true \ + --tag direction east --tag camera_location parking_lot ``` #### --server-address This can be used with any command to specify a remote HTTP server address. Here we start a pipeline on remote server `http://remote-server.my-domain.com:8080`. 
``` -./client/pipeline_client.sh start object_detection/person_vehicle_bike https://github.com/intel-iot-devkit/sample-videos/blob/master/person-bicycle-car-detection.mp4?raw=true --tag direction east --server=address http://remote-server.my-domain.com:8080 +./client/pipeline_client.sh start object_detection/person_vehicle_bike \ + https://github.com/intel-iot-devkit/sample-videos/blob/master/person-bicycle-car-detection.mp4?raw=true \ + --tag direction east --server-address http://remote-server.my-domain.com:8080 ``` #### --status-only Use with `run` command to disable output of metadata and periodically display pipeline state and fps ``` -./client/pipeline_client.sh run object_detection/person_vehicle_bike https://github.com/intel-iot-devkit/sample-videos/blob/master/person-bicycle-car-detection.mp4?raw=true --tag direction east --status-only +./client/pipeline_client.sh run object_detection/person_vehicle_bike \ + https://github.com/intel-iot-devkit/sample-videos/blob/master/person-bicycle-car-detection.mp4?raw=true \ + --tag direction east --status-only ``` ``` Starting pipeline 0 @@ -326,7 +381,9 @@ Pipeline status @ 21s Takes an integer value that specifies the number of streams to start (default value is 1) using specified request. If number of streams is greater than one, "status only" display mode is used. 
``` -./client/pipeline_client.sh run object_detection/person_vehicle_bike https://github.com/intel-iot-devkit/sample-videos/blob/master/person-bicycle-car-detection.mp4?raw=true --status-only --number-of-streams 4 --server-address http://hbruce-desk2.jf.intel.com:8080 +./client/pipeline_client.sh run object_detection/person_vehicle_bike \ + https://github.com/intel-iot-devkit/sample-videos/blob/master/person-bicycle-car-detection.mp4?raw=true \ + --status-only --number-of-streams 4 --server-address http://hbruce-desk2.jf.intel.com:8080 ``` ``` Starting pipeline 0 @@ -385,14 +442,18 @@ A sample request file can look like ``` The above file, named for instance as /tmp/sample_request.json may be used as follows: ``` -./client/pipeline_client.sh start object_detection/person_vehicle_bike --request-file /tmp/sample_request.json +./client/pipeline_client.sh start object_detection/person_vehicle_bike \ + --request-file /tmp/sample_request.json ``` #### --show-request All pipeline_client commands can be used with the `--show-request` option which will print out the HTTP request and exit i.e it will not be sent to the Pipeline Server. 
This example shows the result of `--show-request` when the pipeline is started with options passed in ``` -./client/pipeline_client.sh start object_detection/person_vehicle_bike https://github.com/intel-iot-devkit/sample-videos/blob/master/person-bicycle-car-detection.mp4?raw=true --destination path /tmp/newfile.jsonl --parameter detection-device GPU --tag direction east --tag camera_location parking_lot --show-request +./client/pipeline_client.sh start object_detection/person_vehicle_bike \ + https://github.com/intel-iot-devkit/sample-videos/blob/master/person-bicycle-car-detection.mp4?raw=true \ + --destination path /tmp/newfile.jsonl \ + --parameter detection-device GPU --tag direction east --tag camera_location parking_lot --show-request ``` ``` @@ -415,7 +476,8 @@ As mentioned before, `--show-request` option which will print out the HTTP reque ##### Status ``` -./client/pipeline_client.sh status object_detection/person_vehicle_bike 94cf72b718184615bfc181c6589b240c --show-request +./client/pipeline_client.sh status object_detection/person_vehicle_bike \ + 94cf72b718184615bfc181c6589b240c --show-request ``` ``` @@ -423,7 +485,8 @@ GET http://localhost:8080/pipelines/object_detection/person_vehicle_bike/status/ ``` ##### Wait ``` -./client/pipeline_client.sh wait object_detection/person_vehicle_bike 94cf72b718184615bfc181c6589b240c --show-request +./client/pipeline_client.sh wait object_detection/person_vehicle_bike \ + 94cf72b718184615bfc181c6589b240c --show-request ``` ``` @@ -431,9 +494,52 @@ GET http://localhost:8080/pipelines/object_detection/person_vehicle_bike/status/ ``` ##### Stop ``` -./client/pipeline_client.sh stop object_detection/person_vehicle_bike 94cf72b718184615bfc181c6589b240c --show-request +./client/pipeline_client.sh stop object_detection/person_vehicle_bike \ + 94cf72b718184615bfc181c6589b240c --show-request ``` ``` DELETE http://localhost:8080/pipelines/object_detection/person_vehicle_bike/94cf72b718184615bfc181c6589b240c ``` + +### Using 
HTTPS with Pipeline Client + +To use pipeline_client with HTTPS, the request must provide an https address via `--server-address` and the server certificate via `--server-cert`. This is handled by `pipeline_client.sh`, which passes the certificate path to `pipeline_client.py` as an environment variable. Below is an example: + +#### --server-cert +Specifies the server certificate for HTTPS. The certificate is applied to each request sent over HTTPS. +This example makes pipeline_client.sh use HTTPS by setting `--server-address` and `--server-cert`. + +This sets the environment variables `ENV_CERT` and `REQUESTS_CA_BUNDLE` to accommodate self-signed certificates. These environment variables can be ignored if you are not using a self-signed certificate. + +```sh +$ client/pipeline_client.sh run object_detection/person_vehicle_bike \ + https://github.com/intel-iot-devkit/sample-videos/blob/master/person-bicycle-car-detection.mp4\?raw\=true \ + --server-address https://localhost:8443 --server-cert samples/nginx/cert/server.crt + +. +. +. + +Starting pipeline object_detection/person_vehicle_bike, instance = 1843e91040da11edbaf2b62e8c582e09 +Pipeline running - instance_id = 1843e91040da11edbaf2b62e8c582e09 +No results will be displayed. Unable to read from file /tmp/results.jsonl +avg_fps: 593.75 +Done +``` + +### Working with Kubernetes + +As the Kubernetes deployment runs its own MQTT broker inside the cluster, pipeline_client requires additional configuration. This is handled by `pipeline_client.sh`, which sets an environment variable for `pipeline_client.py` that overrides the MQTT broker address for the Kubernetes use case. + +#### --mqtt-cluster-broker +This argument is to be used together with an MQTT destination. It is helpful when your MQTT broker and Pipeline Server instance are on a separate network from your client machine. 
This happens in the Kubernetes deployment. Use this argument to have the client connect directly to the MQTT broker to get the output. It sets the environment variable `MQTT_CLUSTER_BROKER`, which overrides the MQTT broker destination that the client connects to. + +``` +./client/pipeline_client.sh run object_detection/person_vehicle_bike \ + https://lvamedia.blob.core.windows.net/public/homes_00425.mkv \ + --server-address http://remote-server.my-domain.com:8080 \ + --destination type mqtt --destination host mqtt-broker-address:1883 \ + --destination topic person-vehicle-bike \ + --mqtt-cluster-broker cluster-mqtt-broker-address:1883 +``` \ No newline at end of file diff --git a/client/arguments.py b/client/arguments.py index 5f0bd72..1349e51 100644 --- a/client/arguments.py +++ b/client/arguments.py @@ -1,35 +1,13 @@ ''' -* Copyright (C) 2019-2020 Intel Corporation. +* Copyright (C) 2019 Intel Corporation. * -* SPDX-License-Identifier: MIT License -* -***** -* -* MIT License -* -* Copyright (c) Microsoft Corporation. -* -* Permission is hereby granted, free of charge, to any person obtaining a copy -* of this software and associated documentation files (the "Software"), to deal -* in the Software without restriction, including without limitation the rights -* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell -* copies of the Software, and to permit persons to whom the Software is -* furnished to do so, subject to the following conditions: -* -* The above copyright notice and this permission notice shall be included in all -* copies or substantial portions of the Software. -* -* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR -* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, -* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE -* AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER -* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, -* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE -* SOFTWARE +* SPDX-License-Identifier: BSD-3-Clause ''' import sys import json import argparse +import os +from urllib.parse import urlparse import pipeline_client @@ -124,4 +102,7 @@ def parse_args(program_name="Pipeline Client"): if args.subparsers in ['start', 'run'] and not args.uri and not args.request_file: parser.error("at least one of uri or --request-file is required") + if urlparse(args.server_address).scheme == "https" and not os.environ.get("ENV_CERT"): + parser.error("the ENV_CERT environment variable must be set when using HTTPS") + return args diff --git a/client/pipeline_client.py b/client/pipeline_client.py index 248a729..554cc8c 100755 --- a/client/pipeline_client.py +++ b/client/pipeline_client.py @@ -5,16 +5,21 @@ * SPDX-License-Identifier: BSD-3-Clause ''' -from urllib.parse import urljoin +from urllib.parse import urljoin, urlparse import json import time import os import sys + from html.parser import HTMLParser import requests import results_watcher +# Used to work around the warning raised for self-signed certificates +import urllib3 from server.pipeline import Pipeline +urllib3.disable_warnings(urllib3.exceptions.SecurityWarning) + RESPONSE_SUCCESS = 200 TIMEOUT = 30 SLEEP_FOR_STATUS = 0.5 @@ -156,7 +161,10 @@ def wait(args): def status(args): pipeline_status = get_pipeline_status(args.server_address, args.instance, args.show_request) if pipeline_status is not None and "state" in pipeline_status: - print("{} ({}fps)".format(pipeline_status["state"], round(pipeline_status["avg_fps"]))) + if pipeline_status["state"] == "ERROR": + print("{} ({})".format(pipeline_status["state"], pipeline_status["message"])) + else: + print("{} ({}fps)".format(pipeline_status["state"], round(pipeline_status["avg_fps"]))) 
else: print("Unable to fetch status") @@ -171,14 +179,14 @@ def list_instances(args): statuses = get(url, args.show_request) for status in statuses: url = urljoin(args.server_address, "pipelines/{}".format(status["id"])) - response = requests.get(url, timeout=TIMEOUT) - request_status = json.loads(response.text) - response.close() + time.sleep(SLEEP_FOR_STATUS) + request_status = get(url, args.show_request) pipeline = request_status["request"]["pipeline"] print("{}: {}/{}".format(status["id"], pipeline["name"], pipeline["version"])) print("state: {}".format(status["state"])) print("fps: {:.2f}".format(status["avg_fps"])) - print("source: {}".format(json.dumps(request_status["request"]["source"], indent=4))) + if request_status["request"].get("source") is not None: + print("source: {}".format(json.dumps(request_status["request"]["source"], indent=4))) if request_status["request"].get("destination") is not None: print("destination: {}".format(json.dumps(request_status["request"]["destination"], indent=4))) if request_status["request"].get("parameters") is not None: @@ -266,25 +274,25 @@ def wait_for_pipeline_running(server_address, status = {"state" : "QUEUED"} timeout_count = 0 while status and not Pipeline.State[status["state"]] == Pipeline.State.RUNNING: + time.sleep(SLEEP_FOR_STATUS) status = get_pipeline_status(server_address, instance_id) if not status or Pipeline.State[status["state"]].stopped(): break - time.sleep(SLEEP_FOR_STATUS) timeout_count += 1 if timeout_count * SLEEP_FOR_STATUS >= timeout_sec: print("Timed out waiting for RUNNING status") break if not status or status["state"] == "ERROR": - raise ValueError("Error in pipeline, please check pipeline-server log messages") + raise ValueError(status["message"] if status else "Unable to fetch pipeline status") return Pipeline.State[status["state"]] == Pipeline.State.RUNNING def wait_for_pipeline_completion(server_address, instance_id): status = {"state" : "RUNNING"} while status and not Pipeline.State[status["state"]].stopped(): - status = 
get_pipeline_status(server_address, instance_id) time.sleep(SLEEP_FOR_STATUS) + status = get_pipeline_status(server_address, instance_id) if status and status["state"] == "ERROR": - raise ValueError("Error in pipeline, please check pipeline-server log messages") + raise ValueError(status["message"]) return status @@ -318,7 +326,7 @@ def wait_for_all_pipeline_completions(server_address, instance_ids, status_only= stopped = Pipeline.State[status["state"]].stopped() status_list.append(status) if status and status["state"] == "ERROR": - raise ValueError("Error in pipeline, please check pipeline-server log messages") + raise ValueError(status["message"]) return status_list def get_pipeline_status(server_address, instance_id, show_request=False): @@ -337,25 +345,34 @@ def _list(server_address, list_name, show_request=False): return print_list(response) +def https_request(url): + return urlparse(url).scheme == "https" + def post(url, body, show_request=False): try: if show_request: print('POST {}\nBody:{}'.format(url, body)) sys.exit(0) - launch_response = requests.post(url, json=body, timeout=TIMEOUT) + if https_request(url): + launch_response = requests.post(url, json=body, timeout=TIMEOUT, verify=os.environ["ENV_CERT"]) + else: + launch_response = requests.post(url, json=body, timeout=TIMEOUT) if launch_response.status_code == RESPONSE_SUCCESS: instance_id = json.loads(launch_response.text) return instance_id except requests.exceptions.ConnectionError as error: raise ConnectionError(SERVER_CONNECTION_FAILURE_MESSAGE) from error - raise RuntimeError(html_to_text(launch_response.text)) + raise RuntimeError("{} - {}".format(launch_response.status_code, html_to_text(launch_response.text))) def get(url, show_request=False): try: if show_request: print('GET {}'.format(url)) sys.exit(0) - status_response = requests.get(url, timeout=TIMEOUT) + if https_request(url): + status_response = requests.get(url, timeout=TIMEOUT, verify=os.environ["ENV_CERT"]) + else: + status_response 
= requests.get(url, timeout=TIMEOUT) if status_response.status_code == RESPONSE_SUCCESS: return json.loads(status_response.text) print("Got unsuccessful status code: {}".format(status_response.status_code)) @@ -369,7 +386,10 @@ def delete(url, show_request=False): if show_request: print('DELETE {}'.format(url)) sys.exit(0) - stop_response = requests.delete(url, timeout=TIMEOUT) + if https_request(url): + stop_response = requests.delete(url, timeout=TIMEOUT, verify=os.environ["ENV_CERT"]) + else: + stop_response = requests.delete(url, timeout=TIMEOUT) if stop_response.status_code != RESPONSE_SUCCESS: print(html_to_text(stop_response.text)) return stop_response.status_code diff --git a/client/pipeline_client.sh b/client/pipeline_client.sh index 0d222ca..0aaf079 100755 --- a/client/pipeline_client.sh +++ b/client/pipeline_client.sh @@ -9,8 +9,46 @@ VOLUME_MOUNT="-v /tmp:/tmp " IMAGE="dlstreamer-pipeline-server-gstreamer" PIPELINE_SERVER_ROOT=/home/pipeline-server ENTRYPOINT="python3" -ENTRYPOINT_ARGS="$PIPELINE_SERVER_ROOT/client $@" LOCAL_CLIENT_DIR=$(dirname $(readlink -f "$0")) ROOT_DIR=$(dirname $LOCAL_CLIENT_DIR) +ARGS= +ENV_CERT= +MQTT_CLUSTER_BROKER= -"$ROOT_DIR/docker/run.sh" $INTERACTIVE --name \"\" --network host --image $IMAGE $VOLUME_MOUNT --entrypoint $ENTRYPOINT --entrypoint-args "$ENTRYPOINT_ARGS" +error() { + printf '%s\n' "$1" >&2 + exit 1 +} + +while [[ "$#" -ge 0 ]]; do + case $1 in + --server-cert) + if [ "$2" ]; then + VOLUME_MOUNT="$VOLUME_MOUNT -v $2:/etc/ssl/certs/server.crt " + ENV_CERT=/etc/ssl/certs/server.crt + shift + else + error 'ERROR: "--server-cert" requires an argument.' + fi + ;; + --mqtt-cluster-broker) + if [ "$2" ]; then + MQTT_CLUSTER_BROKER=$2 + shift + else + error 'ERROR: "--mqtt-cluster-broker" requires an argument.' 
+ fi + ;; + *) + ARGS="${ARGS} ${1}" + ;; + esac + if [[ "$#" -eq 0 ]]; + then + break + fi + shift +done +ENTRYPOINT_ARGS="$PIPELINE_SERVER_ROOT/client $ARGS" + +"$ROOT_DIR/docker/run.sh" $INTERACTIVE --name \"\" --network host --image $IMAGE $VOLUME_MOUNT -e "ENV_CERT=${ENV_CERT}" -e "MQTT_CLUSTER_BROKER=${MQTT_CLUSTER_BROKER}" -e "REQUESTS_CA_BUNDLE=${ENV_CERT}" --entrypoint $ENTRYPOINT --entrypoint-args "$ENTRYPOINT_ARGS" diff --git a/client/results_watcher.py b/client/results_watcher.py index 071de22..5bd5e70 100755 --- a/client/results_watcher.py +++ b/client/results_watcher.py @@ -7,6 +7,7 @@ import json import time +import os import socket from threading import Thread, Event from abc import ABC, abstractmethod @@ -140,7 +141,11 @@ class MqttWatcher(ResultsWatcher): def __init__(self, destination): super().__init__() self._client = mqtt.Client("Intel(R) DL Streamer Results Watcher", userdata=destination) - broker_address = destination["host"].split(':') + if os.environ["MQTT_CLUSTER_BROKER"]: + mqtt_host = os.environ["MQTT_CLUSTER_BROKER"] + broker_address = mqtt_host.split(':') + else: + broker_address = destination["host"].split(':') self._host = broker_address[0] if len(broker_address) == 2: self._port = int(broker_address[1]) diff --git a/docker/build.sh b/docker/build.sh index 09d3ecd..928adf0 100755 --- a/docker/build.sh +++ b/docker/build.sh @@ -10,7 +10,7 @@ DOCKERFILE_DIR=$(dirname "$(readlink -f "$0")") SOURCE_DIR=$(dirname "$DOCKERFILE_DIR") BASE_IMAGE_FFMPEG="openvisualcloud/xeone3-ubuntu1804-analytics-ffmpeg:20.10" -BASE_IMAGE_GSTREAMER="intel/dlstreamer:2022.1.0-ubuntu20" +BASE_IMAGE_GSTREAMER="intel/dlstreamer:2022.2.0-ubuntu20-gpu815" BASE_IMAGE=${BASE_IMAGE:-""} BASE_BUILD_CONTEXT= @@ -36,7 +36,7 @@ BASE_BUILD_OPTIONS="--network=host " SUPPORTED_IMAGES=($BASE_IMAGE_GSTREAMER $BASE_IMAGE_FFMPEG) DEFAULT_OMZ_IMAGE_GSTREAMER="intel/dlstreamer" -DEFAULT_OMZ_VERSION_GSTREAMER="2022.1.0-ubuntu20-devel" 
+DEFAULT_OMZ_VERSION_GSTREAMER="2022.2.0-ubuntu20-gpu815-devel" DEFAULT_OMZ_IMAGE_FFMPEG="openvino/ubuntu18_data_dev" DEFAULT_OMZ_VERSION_FFMPEG="2021.2" FORCE_MODEL_DOWNLOAD= diff --git a/docker/run.sh b/docker/run.sh index b10f084..8fcca04 100755 --- a/docker/run.sh +++ b/docker/run.sh @@ -14,6 +14,7 @@ VOLUME_MOUNT= MODE=SERVICE PORTS= DEVICES= +GPU_DEVICE= DEFAULT_GSTREAMER_IMAGE="dlstreamer-pipeline-server-gstreamer" DEFAULT_FFMPEG_IMAGE="dlstreamer-pipeline-server-ffmpeg" ENTRYPOINT= @@ -27,6 +28,7 @@ USER_GROUPS= ENABLE_RTSP=${ENABLE_RTSP:-"false"} ENABLE_WEBRTC=${ENABLE_WEBRTC:-"false"} RTSP_PORT=8554 +HOST_NAME= SCRIPT_DIR=$(dirname "$(readlink -f "$0")") SOURCE_DIR=$(dirname $SCRIPT_DIR) @@ -44,6 +46,7 @@ show_options() { echo " Ports: '${PORTS}'" echo " Name: '${NAME}'" echo " Network: '${NETWORK}'" + echo " Hostname: '${HOST_NAME}'" echo " Entrypoint: '${ENTRYPOINT}'" echo " EntrypointArgs: '${ENTRYPOINT_ARGS}'" echo " User: '${USER}'" @@ -64,11 +67,14 @@ show_help() { echo " [--entrypoint-args additional parameters to pass to entrypoint in docker run]" echo " [-p additional ports to pass to docker run]" echo " [--network name network to pass to docker run]" + echo " [--hostname set hostname of the container to pass to docker run]" echo " [--user name of user to pass to docker run]" echo " [--group-add name of user group to pass to docker run]" echo " [--name container name to pass to docker run]" + echo " [--gpu-device select GPU device]" echo " [--device device to pass to docker run]" echo " [--enable-rtsp To enable rtsp re-streaming]" + echo " [--disable-http-port Specify to close web service port e.g. 
8080 in docker]" echo " [--rtsp-port Specify the port to use for rtsp re-streaming]" echo " [--enable-webrtc To enable WebRTC frame destination]" echo " [--dev run in developer mode]" @@ -82,14 +88,21 @@ error() { enable_hardware_access() { # GPU - if ls /dev/dri/render* 1> /dev/null 2>&1; then - echo "Found /dev/dri/render entry - enabling for GPU" - DEVICES+='--device /dev/dri ' - RENDER_GROUPS=$(stat -c '%g' /dev/dri/render*) - for group in $RENDER_GROUPS - do - USER_GROUPS+="--group-add $group " - done + if [ -z $GPU_DEVICE ]; then + if [ -e /dev/dri/renderD128 ] ; then + GPU_DEVICE="/dev/dri/renderD128" + echo "Found $GPU_DEVICE - enabling GPU" + fi + fi + if [ ! -z $GPU_DEVICE ]; then + if [ ! -e $GPU_DEVICE ]; then + echo GPU device $GPU_DEVICE not found - exiting + exit 1 + fi + DEVICES+="--device $GPU_DEVICE " + ENVIRONMENT+="-e GST_VAAPI_DRM_DEVICE=$GPU_DEVICE " + render_group=$(stat -c '%g' $GPU_DEVICE) + USER_GROUPS+="--group-add $render_group " fi # Intel(R) NCS2 @@ -167,6 +180,14 @@ while [[ "$#" -gt 0 ]]; do error 'ERROR: "--device" requires a non-empty option argument.' fi ;; + --gpu-device) + if [ "$2" ]; then + GPU_DEVICE=$2 + shift + else + error 'ERROR: "--gpu-device" requires a non-empty option argument.' + fi + ;; --privileged) PRIVILEGED="--privileged " ;; @@ -261,6 +282,17 @@ while [[ "$#" -gt 0 ]]; do error 'ERROR: "--rtsp-port" requires a non-empty option argument.' fi ;; + --hostname) + if [ "$2" ]; then + HOST_NAME="--hostname "$2 + shift + else + error 'ERROR: "--hostname" requires a non-empty option argument.' + fi + ;; + --disable-http-port) + MODE=DISABLE_HTTP_PORT + ;; --enable-rtsp) ENABLE_RTSP=true ;; @@ -324,6 +356,8 @@ elif [ "${MODE}" == "SERVICE" ]; then if [ -z "$PORTS" ]; then PORTS+="-p 8080:8080 " fi +elif [ "${MODE}" == "DISABLE_HTTP_PORT" ]; then + echo "HTTP Web Service port has been disabled on Docker!" 
else echo "Invalid Mode" show_help @@ -340,6 +374,10 @@ if [ "$ENABLE_WEBRTC" != "false" ]; then ENVIRONMENT+="-e ENABLE_WEBRTC=$ENABLE_WEBRTC " fi +if [[ ! -z "${MAX_BODY_SIZE}" ]]; then + ENVIRONMENT+="-e MAX_BODY_SIZE=$MAX_BODY_SIZE " +fi + if [ ! -z "$MODELS" ]; then VOLUME_MOUNT+="-v $MODELS:/home/pipeline-server/models " fi @@ -364,4 +402,4 @@ fi show_options # eval must be used to ensure the --device-cgroup-rule string is correctly parsed -eval "$RUN_PREFIX docker run $INTERACTIVE --rm $ENVIRONMENT $VOLUME_MOUNT $DEVICE_CGROUP_RULE $DEVICES $NETWORK $PORTS $ENTRYPOINT --name ${NAME} ${PRIVILEGED} ${USER} $USER_GROUPS $IMAGE ${ENTRYPOINT_ARGS}" +eval "$RUN_PREFIX docker run $INTERACTIVE --rm $ENVIRONMENT $VOLUME_MOUNT $DEVICE_CGROUP_RULE $DEVICES $NETWORK $HOST_NAME $PORTS $ENTRYPOINT --name ${NAME} ${PRIVILEGED} ${USER} $USER_GROUPS $IMAGE ${ENTRYPOINT_ARGS}" diff --git a/docs/building_pipeline_server.md b/docs/building_pipeline_server.md index f52f2b8..7495200 100644 --- a/docs/building_pipeline_server.md +++ b/docs/building_pipeline_server.md @@ -25,7 +25,7 @@ can be customized to meet an application's requirements. # Default Build Commands and Image Names | Command | Media Analytics Base Image | Image Name | Description | | --- | --- | --- | ---- | -| `./docker/build.sh`| **intel/dlstreamer:2022.1.0-ubuntu20** docker [image](https://hub.docker.com/r/intel/dlstreamer) |`dlstreamer-pipeline-server-gstreamer` | Intel(R) DL Streamer based microservice with default pipeline definitions and deep learning models. | +| `./docker/build.sh`| **intel/dlstreamer:2022.2.0-ubuntu20-gpu815** docker [image](https://hub.docker.com/r/intel/dlstreamer) |`dlstreamer-pipeline-server-gstreamer` | Intel(R) DL Streamer based microservice with default pipeline definitions and deep learning models. 
| | `./docker/build.sh --framework ffmpeg --open-model-zoo...`| **openvisualcloud/xeone3-ubuntu1804-analytics-ffmpeg:20.10** docker [image](https://hub.docker.com/r/openvisualcloud/xeon-ubuntu1804-analytics-ffmpeg) |`dlstreamer-pipeline-server-ffmpeg`| FFmpeg Video Analytics based microservice with default pipeline definitions and deep learning models. | ### Building with OpenVINO, Ubuntu 20.04 and Intel(R) DL Streamer Support **Example:** @@ -69,9 +69,9 @@ All validation is done in docker environment. Host built (aka "bare metal") conf | **Base Image** | **Framework** | **OpenVINO Version** | **Link** | **Default** | |---------------------|---------------|---------------|------------------------|-------------| -| OpenVINO 2021.4.2 ubuntu20_data_runtime | GStreamer | 2021.4.2 | [Docker Hub](https://hub.docker.com/r/openvino/ubuntu20_data_runtime) | N | -| Intel DL Streamer 2022.1.0-ubuntu20 | GStreamer | 2022.1.0 | [Docker Hub](https://hub.docker.com/r/intel/dlstreamer) | Y | +| Intel DL Streamer 2022.2.0-ubuntu20-gpu815 | GStreamer | 2022.2.0 | [Docker Hub](https://hub.docker.com/r/intel/dlstreamer) | Y | | Open Visual Cloud 20.10 xeone3-ubuntu1804-analytics-ffmpeg | FFmpeg | 2021.1 | [Docker Hub](https://hub.docker.com/r/openvisualcloud/xeone3-ubuntu1804-analytics-ffmpeg) | Y | +| Intel DL Streamer 2022.1.0-ubuntu20 | GStreamer | 2022.1.0 | [Docker Hub](https://hub.docker.com/r/intel/dlstreamer) | N | --- \* Other names and brands may be claimed as the property of others. diff --git a/docs/changing_object_detection_models.md b/docs/changing_object_detection_models.md index f100520..ca7bfd4 100644 --- a/docs/changing_object_detection_models.md +++ b/docs/changing_object_detection_models.md @@ -56,8 +56,6 @@ Use [pipeline_client](/client/README.md) to list the models. 
Check that `object_ - audio_detection/environment - face_detection_retail/1 - object_classification/vehicle_attributes - - action_recognition/encoder - - action_recognition/decoder - object_detection/person_vehicle_bike - emotion_recognition/1 ``` @@ -222,8 +220,6 @@ The `list-models` command now shows 8 models, including `object_detection/yolo-v - object_detection/person_vehicle_bike - object_classification/vehicle_attributes - audio_detection/environment - - action_recognition/encoder - - action_recognition/decoder - face_detection_retail/1 ``` The `list-pipelines` command shows `object_detection/yolo-v2-tiny-tf` @@ -240,7 +236,6 @@ The `list-pipelines` command shows `object_detection/yolo-v2-tiny-tf` - video_decode/app_dst - object_tracking/object_line_crossing - object_tracking/person_vehicle_bike - - action_recognition/general ``` @@ -287,8 +282,6 @@ Once started you can verify that the new model has been loaded. - object_detection/person_vehicle_bike - object_classification/vehicle_attributes - audio_detection/environment - - action_recognition/encoder - - action_recognition/decoder - face_detection_retail/1 ``` diff --git a/docs/customizing_pipeline_requests.md b/docs/customizing_pipeline_requests.md index de39188..5e0ac7b 100644 --- a/docs/customizing_pipeline_requests.md +++ b/docs/customizing_pipeline_requests.md @@ -114,7 +114,7 @@ curl localhost:8080/pipelines/object_detection/person_vehicle_bike -X POST -H \ ``` ### RTSP Source -RTSP streams from IP cameras can be referenced using the `rtsp` uri scheme. RTSP uris will normally be of the format `rtsp://:@:/` where `` and `password` are optional authentication credentials. +RTSP streams originating from IP cameras, DVRs, or similar sources can be referenced using the `rtsp` URI scheme. 
The request `source` object would be updated to: @@ -127,6 +127,11 @@ The request `source` object would be updated to: } ``` +#### RTSP Basic Authentication +Depending on the configuration of your media source, during development and troubleshooting you may issue Pipeline Server requests that include RTSP URIs formatted as `rtsp://<user>:<password>@<ip_address>:<port>/<server_url>` where `<user>` and `<password>` are authentication credentials needed to connect to the stream/device at `<ip_address>`. + +> **Warning**: Keep in mind that basic authentication does not provide a secure method to access source inputs and to verify visual, metadata, and logged outputs. For this reason basic authentication is not recommended for production deployments, please use with caution. + ### Web Camera Source Web cameras accessible through the `Video4Linux` api and device drivers are supported via `type=webcam`. `device` is the path of the `v4l2` device, typically `video`. diff --git a/docs/images/0031-https-k8s.png b/docs/images/0031-https-k8s.png new file mode 100644 index 0000000..6da98b3 Binary files /dev/null and b/docs/images/0031-https-k8s.png differ diff --git a/docs/images/k8s-arch-diag.png b/docs/images/k8s-arch-diag.png new file mode 100644 index 0000000..90c9ea9 Binary files /dev/null and b/docs/images/k8s-arch-diag.png differ diff --git a/docs/images/tls_demo.png b/docs/images/tls_demo.png new file mode 100644 index 0000000..40a181f Binary files /dev/null and b/docs/images/tls_demo.png differ diff --git a/docs/images/tls_nginx_curl_demo.png b/docs/images/tls_nginx_curl_demo.png new file mode 100644 index 0000000..6c103c5 Binary files /dev/null and b/docs/images/tls_nginx_curl_demo.png differ diff --git a/docs/images/tls_nginx_pipeline_server.png b/docs/images/tls_nginx_pipeline_server.png new file mode 100644 index 0000000..fe44c19 Binary files /dev/null and b/docs/images/tls_nginx_pipeline_server.png differ diff --git a/docs/images/webrtc-port-forwarding.png b/docs/images/webrtc-port-forwarding.png new file mode 100644 index
0000000..47d655c Binary files /dev/null and b/docs/images/webrtc-port-forwarding.png differ diff --git a/docs/restful_microservice_interfaces.md b/docs/restful_microservice_interfaces.md index 50e6a15..955729c 100644 --- a/docs/restful_microservice_interfaces.md +++ b/docs/restful_microservice_interfaces.md @@ -1,5 +1,7 @@ ## Microservice Endpoints +The REST API has a default maximum body size of 10KB; this can be changed by setting the environment variable MAX_BODY_SIZE to the desired size in bytes. + | Path | Description | |----|------| | [`GET` /models](#get-models) | Return supported models. | @@ -14,7 +16,7 @@ | [`DELETE` /pipelines/{instance_id}](#delete-pipelinesinstance_id) | Stops a running pipeline or cancels a queued pipeline. | | [`DELETE` /pipelines/{name}/{version}/{instance_id}](#delete-pipelinesnameversioninstance_id) | Stops a running pipeline or cancels a queued pipeline. | -The following endpoints are deprecated and will be removed by v1.0. +The following endpoints are deprecated and will be removed in a future release. | Path | Description | |----|------| | [`GET` /pipelines/{name}/{version}/{instance_id}](#get-pipelinesnameversioninstance_id) | Return pipeline instance summary. | @@ -96,6 +98,7 @@ Return supported pipelines "avg_fps": 8.932587737800183, "start_time": 1638179813.2005367, "elapsed_time": 72.43142008781433, + "message": "", "avg_pipeline_latency": 0.4533823041311556 }, { @@ -104,6 +107,7 @@ Return supported pipelines "avg_fps": 6.366260838099841, "start_time": 1638179886.3203313, "elapsed_time": 16.493194580078125, + "message": "", "avg_pipeline_latency": 0.6517487730298723 }, { @@ -111,7 +115,8 @@ Return supported pipelines "state": "ERROR", "avg_fps": 0, "start_time": null, - "elapsed_time": null + "elapsed_time": null, + "message": "Not Found (404), URL: https://github.com/intel-iot-devkit/sample.mp4, Redirect to: (NULL)" } ] ``` @@ -631,6 +636,7 @@ Return pipeline instance status.
"name": "object_detection", "start_time": 1640156425.2014737, "state": "RUNNING", + "message": "", "version": "person_vehicle_bike" } ``` @@ -737,6 +743,7 @@ Return pipeline instance status. "elapsed_time": 5, "id": 0, "state": "RUNNING", + "message": "", "avg_fps": 6.027456183070403 } ``` diff --git a/docs/run_script_reference.md b/docs/run_script_reference.md index ab7da0b..aad062f 100644 --- a/docs/run_script_reference.md +++ b/docs/run_script_reference.md @@ -65,7 +65,7 @@ This argument enables rtsp restreaming by setting `ENABLE_RTSP` environment vari This argument specifies the port to use for rtsp re-streaming. ### Enable WebRTC re-streaming (--enable-webrtc) -This argument enables webrtc restreaming by setting `ENABLE_WEBRTC` environment. Additional dependencies must be running as described [here](./samples/webrtc/README.md). +This argument enables webrtc restreaming by setting `ENABLE_WEBRTC` environment. Additional dependencies must be running as described [here](../samples/webrtc/README.md). 
### Developer Mode (--dev) This argument runs the image in `developer` mode which configures the environment as follows: diff --git a/docs/running_pipeline_server.md b/docs/running_pipeline_server.md index 8438dee..853bf6d 100644 --- a/docs/running_pipeline_server.md +++ b/docs/running_pipeline_server.md @@ -127,11 +127,29 @@ curl localhost:8080/pipelines/object_detection/person_vehicle_bike -X POST -H \ tail -f /tmp/results.txt ``` ``` -{"objects":[{"detection":{"bounding_box":{"x_max":0.0503933560103178,"x_min":0.0,"y_max":0.34233352541923523,"y_min":0.14351698756217957},"confidence":0.6430817246437073,"label":"vehicle","label_id":2},"h":86,"roi_type":"vehicle","w":39,"x":0,"y":62}],"resolution":{"height":432,"width":768},"source":"https://github.com/intel-iot-devkit/sample-videos/blob/master/person-bicycle-car-detection.mp4?raw=true","timestamp":49250000000} +{"objects":[{"detection":{"bounding_box":{"x_max":0.0503933560103178,"x_min":0.0,"y_max":0.34233352541923523,"y_min":0.14351698756217957},"confidence":0.6430817246437073,"label":"vehicle","label_id":2},"h":86,"roi_type":"vehicle","w":39,"x":0,"y":62}],"resolution":{"height":432,"width":768},"timestamp":49250000000} ``` Detection results are published to `/tmp/results.txt`.
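Since each line written to `/tmp/results.txt` is a complete JSON document, results can be post-processed with a few lines of Python. A minimal sketch, using an abbreviated version of the sample metadata line above (numeric values truncated for readability):

```python
import json

# One metadata line as published to /tmp/results.txt (values abbreviated).
line = (
    '{"objects":[{"detection":{"bounding_box":{"x_max":0.05,"x_min":0.0,'
    '"y_max":0.342,"y_min":0.144},"confidence":0.643,"label":"vehicle",'
    '"label_id":2},"h":86,"roi_type":"vehicle","w":39,"x":0,"y":62}],'
    '"resolution":{"height":432,"width":768},"timestamp":49250000000}'
)

result = json.loads(line)
for obj in result["objects"]:
    detection = obj["detection"]
    # x/y/w/h are pixel coordinates; bounding_box holds normalized ones.
    print("{} ({:.0%}) at x={} y={} w={} h={}".format(
        detection["label"], detection["confidence"],
        obj["x"], obj["y"], obj["w"], obj["h"]))
```

In practice the same loop would run over each line of `open("/tmp/results.txt")` as the pipeline appends to it.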
+### Emitting Source and Destination Details + +To add source details from your pipeline requests to metadata output and source and destination details to pipeline status, launch Pipeline Server with the `EMIT_SOURCE_AND_DESTINATION` flag: + +``` +docker/run.sh -v /tmp:/tmp -e EMIT_SOURCE_AND_DESTINATION=true +``` + +The source is then emitted in the detection result as shown below: + +``` +tail -f /tmp/results.txt +``` +``` +{"objects":[{"detection":{"bounding_box":{"x_max":0.0503933560103178,"x_min":0.0,"y_max":0.34233352541923523,"y_min":0.14351698756217957},"confidence":0.6430817246437073,"label":"vehicle","label_id":2},"h":86,"roi_type":"vehicle","w":39,"x":0,"y":62}],"resolution":{"height":432,"width":768},"source":"https://github.com/intel-iot-devkit/sample-videos/blob/master/person-bicycle-car-detection.mp4?raw=true","timestamp":49250000000} +``` +> **TIP:** To attach a friendly name rather than revealing source or destination details, we recommend use of [tags](https://github.com/intel-innersource/frameworks.ai.dlstreamer.pipeline-server/blob/main/docs/customizing_pipeline_requests.md#tags) when submitting a pipeline request. + ## Stopping the Microservice To stop the microservice use standard `docker stop` or `docker @@ -230,14 +248,19 @@ The following the table shows docker configuration and inference device name for |Accelerator| Device | Volume Mount(s) |CGroup Rule|Inference Device| |-----------|-------------|------------------- |-----------|----------------| -| GPU | /dev/dri | | | GPU | +| GPU | /dev/dri/renderDxxx | | | GPU | | Intel® NCS2 | | /dev/bus/usb |c 189:* rmw| MYRIAD | | HDDL-R | | /var/tmp, /dev/shm | | HDDL | > **Note:** Intel® NCS2 and HDDL-R accelerators are incompatible and cannot be used on the same system. ## GPU -The first time inference is run on a GPU there will be a 30s delay while OpenCL kernels are built for the specific device. 
To prevent the same delay from occurring on subsequent runs a [model instance id](docs/defining_pipelines.md#model-persistance-in-openvino-gstreamer-elements) can be specified in the request. +The first time inference is run on a GPU there will be a 30s delay while OpenCL kernels are built for the specific device. To prevent the same delay from occurring on subsequent runs a [model instance id](docs/defining_pipelines.md#model-persistance-in-openvino-gstreamer-elements) can be specified in the request. You can also set the `cl_cache_dir` environment variable to specify the location of the kernel cache so it can be re-used across sessions. + +If multiple GPUs are available, /dev/dri/renderD128 will be automatically selected. The environment variable [GST_VAAPI_DRM_DEVICE](https://gstreamer.freedesktop.org/documentation/vaapi/index.html?gi-language=python) will be set to the device path. A different device can be selected using the `--gpu-device` argument. +``` +--gpu-device /dev/dri/renderD129 +``` On Ubuntu20 and later hosts [extra configuration](https://github.com/openvinotoolkit/docker_ci/blob/master/configure_gpu_ubuntu20.md), not shown in the above table, is necessary to allow access to the GPU. The [docker/run.sh](../docker/run.sh) script takes care of this for you, but other deployments will have to be updated accordingly. @@ -296,5 +319,17 @@ pipeline-server@my-host:~$ python3 -m server By default, the running user's UID value determines user name inside the container. A UID of 1001 is assigned as `pipeline-server`. For other UIDs, you may see `I have no name!@my-host`. To run as another user, you can add `--user <name>` to the run command, e.g. to add pipeline-server by name use `--user pipeline-server` +# Disabling HTTP Port on Docker + +The run script includes a `--disable-http-port` flag which starts the container with no HTTP port published. This is useful when securing your deployment, for example when serving the API over HTTPS behind a reverse proxy.
+ +**Example:** + +The example below disables HTTP Port and connects the container into a bridged network for reverse proxy. + +``` +docker/run.sh --disable-http-port --network my_bridge +``` + --- \* Other names and brands may be claimed as the property of others. diff --git a/models_list/models.list.yml b/models_list/models.list.yml index cd0c9a5..01b1544 100644 --- a/models_list/models.list.yml +++ b/models_list/models.list.yml @@ -18,15 +18,6 @@ alias: face_detection_retail version: 1 precision: [FP16,FP32] -- model: action-recognition-0001-decoder - alias: action_recognition - version: decoder - precision: [FP16,FP32] - model-proc: action-recognition-0001.json -- model: action-recognition-0001-encoder - alias: action_recognition - version: encoder - precision: [FP16,FP32] - model: person-detection-retail-0013 alias: object_detection version: person diff --git a/pipelines/gstreamer/action_recognition/general/README.md b/pipelines/gstreamer/action_recognition/general/README.md deleted file mode 100644 index c796e37..0000000 --- a/pipelines/gstreamer/action_recognition/general/README.md +++ /dev/null @@ -1,108 +0,0 @@ -# Action Recognition Pipeline - -## Purpose - -This is a pipeline based on the [DLStreamer gvaactionrecognitionbin](https://github.com/openvinotoolkit/dlstreamer_gst/wiki/Action-Recognition) preview element and supports general purpose action recognition. - -## Description - -### Pipeline - -A detailed description can be found [here](https://github.com/openvinotoolkit/open_model_zoo/tree/master/demos/action_recognition_demo/python#how-it-works). 
- -### Models - -A composite model is used, consisting of: - -- [action-recognition-0001-encoder](https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/intel/action-recognition-0001/action-recognition-0001-encoder) -- [action-recognition-0001-decoder](https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/intel/action-recognition-0001/action-recognition-0001-decoder) - -These are based on (400 actions) models for [Kinetics-400 dataset](https://deepmind.com/research/open-source/kinetics). - -### Parameters - -The key parameters of [DLStreamer gvaactionrecognitionbin](https://github.com/openvinotoolkit/dlstreamer_gst/wiki/Action-Recognition) element are the model and device parameters for each of the encoder and decoder models. - -> Note: The inference devices are set to "CPU" by default in pipeline.json as default values in gvaactionrecognitionbin are empty strings. - -| Parameter | Definition | -|-----------|------------| -|enc-model| Path to encoder inference model network file | -|dec-model| Path to decoder inference model network file | -|enc-device| Encoder inference device i.e CPU/GPU | -|dec-device| Decoder inference device i.e CPU/GPU | - -### Template - -Template is outlined in pipeline.json as follows: -> Note : gvametaconvert requires setting "add-tensor-data=true" as the inference details (label, confidence) determined by gvaactionrecognitionbin is available only inside the tensor data - -```json -template : "uridecodebin name=source ! videoconvert ! video/x-raw,format=BGRx", -" ! gvaactionrecognitionbin enc-model={models[action_recognition][encoder][network]} dec-model={models[action_recognition][decoder][network]} model-proc={models[action_recognition][decode[proc]} name=action_recognition", -" ! gvametaconvert add-tensor-data=true name=metaconvert", -" ! gvametapublish name=destination", -" ! 
appsink name=appsink"] -``` - -## Output - -Below is a sample of the inference results i.e metadata (json format): - -```json -{ - "objects": [ - { - "h": 432, - "tensors": [ - { - "confidence": 0.005000564735382795, - "label": "surfing crowd", - "label_id": 336, - "layer_name": "data", - "layout": "ANY", - "name": "action", - "precision": "UNSPECIFIED" - } - ], - "w": 768, - "x": 0, - "y": 0 - } - ], - "resolution": { - "height": 432, - "width": 768 - }, - "source": "https://github.com/intel-iot-devkit/sample-videos/blob/master/person-bicycle-car-detection.mp4?raw=true", - "timestamp": 0 -} -``` - -The corresponding pipeline_client output resembles: - -```code - Timestamp --