diff --git a/docs/api/python/.buildinfo b/docs/api/python/.buildinfo index 78bf8190f3dd3..15aad18f58b64 100644 --- a/docs/api/python/.buildinfo +++ b/docs/api/python/.buildinfo @@ -1,4 +1,4 @@ # Sphinx build info version 1 # This file hashes the configuration used when building these files. When it is not found, a full rebuild will be done. -config: b6aa838178686f9785018d78eb6169ef +config: 175d339fa1d081233ede23b35ea4629c tags: 645f666f9bcd5a90fca523b33c5a78b7 diff --git a/docs/api/python/README.html b/docs/api/python/README.html deleted file mode 100644 index 3dd33a2be1fa1..0000000000000 --- a/docs/api/python/README.html +++ /dev/null @@ -1,170 +0,0 @@ - - - - - - - - ONNX Runtime — ONNX Runtime 1.7.0 documentation - - - - - - - - - - - - - - - - - - - - - - -
-
-
- - -
- -
-

ONNX Runtime

-

ONNX Runtime is a performance-focused scoring engine for Open Neural Network Exchange (ONNX) models. -For more information on ONNX Runtime, please see aka.ms/onnxruntime or the Github project.

- -
- - -
- -
-
- -
-
- - - - - - - \ No newline at end of file diff --git a/docs/api/python/api_summary.html b/docs/api/python/api_summary.html index dcdbb4979f48e..9c0f162c51f09 100644 --- a/docs/api/python/api_summary.html +++ b/docs/api/python/api_summary.html @@ -4,19 +4,22 @@ - - API Summary — ONNX Runtime 1.7.0 documentation - - + + + API — ONNX Runtime 1.14.0 documentation + + - - - - - + + + + + + + @@ -37,60 +40,125 @@
-
-

API Summary

-

Summary of public functions and classes exposed -in ONNX Runtime.

+
+

API

-
-

OrtValue

-

ONNX Runtime works with native Python data structures which are mapped into ONNX data formats : -Numpy arrays (tensors), dictionaries (maps), and a list of Numpy arrays (sequences). -The data backing these are on CPU.

-

ONNX Runtime supports a custom data structure that supports all ONNX data formats that allows users -to place the data backing these on a device, for example, on a CUDA supported device. This allows for -interesting IOBinding scenarios (discussed below). In addition, ONNX Runtime supports directly -working with OrtValue (s) while inferencing a model if provided as part of the input feed.

-

Below is an example showing creation of an OrtValue from a Numpy array while placing its backing memory -on a CUDA device:

-
#X is numpy array on cpu, create an OrtValue and place it on cuda device id = 0
-ortvalue = onnxruntime.OrtValue.ortvalue_from_numpy(X, 'cuda', 0)
-ortvalue.device_name()  # 'cuda'
-ortvalue.shape()  # shape of the numpy array X
-ortvalue.data_type()  # 'tensor(float)'
-ortvalue.is_tensor()  # 'True'
-np.array_equal(ortvalue.numpy(), X)  # 'True'
+
+

API Overview

+

ONNX Runtime loads and runs inference on a model in ONNX graph format, or ORT format (for memory and disk constrained environments).

+

The data consumed and produced by the model can be specified and accessed in the way that best matches your scenario.

+
+

Load and run a model

+

InferenceSession is the main class of ONNX Runtime. It is used to load and run an ONNX model, +as well as specify environment and application configuration options.

+
session = onnxruntime.InferenceSession('model.onnx')
 
-#ortvalue can be provided as part of the input feed to a model
-ses = onnxruntime.InferenceSession('model.onnx')
-res = sess.run(["Y"], {"X": ortvalue})
+outputs = session.run([output names], inputs)
 
+

ONNX and ORT format models consist of a graph of computations, modeled as operators, +and implemented as optimized operator kernels for different hardware targets. +ONNX Runtime orchestrates the execution of operator kernels via execution providers. +An execution provider contains the set of kernels for a specific execution target (CPU, GPU, IoT etc.). +Execution providers are configured using the providers parameter. Kernels from different execution +providers are chosen in the priority order given in the list of providers. In the example below, +if there is a kernel in the CUDA execution provider, ONNX Runtime executes that on the GPU; if not, +the kernel is executed on the CPU.

+
session = onnxruntime.InferenceSession(model,
+                                       providers=['CUDAExecutionProvider', 'CPUExecutionProvider'])
+
-
-

IOBinding

-

By default, ONNX Runtime always places input(s) and output(s) on CPU, which -is not optimal if the input or output is consumed and produced on a device -other than CPU because it introduces data copy between CPU and the device. -ONNX Runtime provides a feature, IO Binding, which addresses this issue by -enabling users to specify which device to place input(s) and output(s) on. -Here are scenarios to use this feature.

-

(In the following code snippets, model.onnx is the model to execute, -X is the input data to feed, and Y is the output data.)

-

Scenario 1:

+

The list of available execution providers can be found here: Execution Providers.

+

Since ONNX Runtime 1.10, you must explicitly specify the execution provider for your target. +Running on CPU is the only case in which the API allows the providers parameter to be omitted. +In the examples that follow, the CUDAExecutionProvider and CPUExecutionProvider are used, assuming the application is running on NVIDIA GPUs. +Replace these with the execution provider specific to your environment.

+
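If you are not sure which providers a given install supports, the package exposes onnxruntime.get_available_providers(); a minimal sketch (the model path is illustrative):
import onnxruntime

# Providers compiled into this build, already in default priority order
available = onnxruntime.get_available_providers()
print(available)  # e.g. ['CUDAExecutionProvider', 'CPUExecutionProvider'] on a GPU build

session = onnxruntime.InferenceSession('model.onnx', providers=available)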

You can supply other session configurations via the session options parameter. For example, to enable +profiling on the session:

+
options = onnxruntime.SessionOptions()
+options.enable_profiling = True
+session = onnxruntime.InferenceSession('model.onnx', sess_options=options, providers=['CUDAExecutionProvider', 'CPUExecutionProvider'])
+
+
+
+
+

Data inputs and outputs

+

The ONNX Runtime Inference Session consumes and produces data using its OrtValue class.

+
+

Data on CPU

+

On CPU (the default), OrtValues can be mapped to and from native Python data structures: numpy arrays, dictionaries and lists of +numpy arrays.

+
# X is numpy array on cpu
+ortvalue = onnxruntime.OrtValue.ortvalue_from_numpy(X)
+ortvalue.device_name()  # 'cpu'
+ortvalue.shape()        # shape of the numpy array X
+ortvalue.data_type()    # 'tensor(float)'
+ortvalue.is_tensor()    # 'True'
+np.array_equal(ortvalue.numpy(), X)  # 'True'
+
+# ortvalue can be provided as part of the input feed to a model
+session = onnxruntime.InferenceSession('model.onnx', providers=['CUDAExecutionProvider', 'CPUExecutionProvider'])
+results = session.run(["Y"], {"X": ortvalue})
+
+
+

By default, ONNX Runtime always places input(s) and output(s) on CPU. Having the data on CPU +may not be optimal if the input or output is consumed and produced on a device +other than CPU, because it introduces data copies between the CPU and the device.

+
+
+

Data on device

+

ONNX Runtime supports a custom data structure that covers all ONNX data formats and allows users +to place the data backing these on a device, for example, on a CUDA supported device. In ONNX Runtime, +this is called IOBinding.

+

To use the IOBinding feature, replace InferenceSession.run() with InferenceSession.run_with_iobinding().

A graph is executed on a device other than CPU, for instance CUDA. Users can -use IOBinding to put input on CUDA as the follows.

-
#X is numpy array on cpu
-session = onnxruntime.InferenceSession('model.onnx')
+use IOBinding to copy the data onto the GPU.

+
# X is numpy array on cpu
+session = onnxruntime.InferenceSession('model.onnx', providers=['CUDAExecutionProvider', 'CPUExecutionProvider'])
 io_binding = session.io_binding()
 # OnnxRuntime will copy the data over to the CUDA device if 'input' is consumed by nodes on the CUDA device
 io_binding.bind_cpu_input('input', X)
@@ -99,11 +167,10 @@ 

IOBindingY = io_binding.copy_outputs_to_cpu()[0]

-

Scenario 2:

The input data is on a device, users directly use the input. The output data is on CPU.

-
#X is numpy array on cpu
+
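A minimal sketch of this scenario, assuming X already resides on CUDA device 0 as an OrtValue and that 'input'/'output' stand in for your model's actual input and output names:
X_ortvalue = onnxruntime.OrtValue.ortvalue_from_numpy(X, 'cuda', 0)
session = onnxruntime.InferenceSession('model.onnx', providers=['CUDAExecutionProvider', 'CPUExecutionProvider'])
io_binding = session.io_binding()
# Bind the device-resident buffer directly; no host-to-device copy is made
io_binding.bind_input(name='input', device_type=X_ortvalue.device_name(), device_id=0, element_type=np.float32, shape=X_ortvalue.shape(), buffer_ptr=X_ortvalue.data_ptr())
# Binding an output by name only leaves it on CPU (the default)
io_binding.bind_output('output')
session.run_with_iobinding(io_binding)
Y = io_binding.copy_outputs_to_cpu()[0]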
-

Scenario 3:

The input data and output data are both on a device, users directly use the input and also place output on the device.

#X is numpy array on cpu
 X_ortvalue = onnxruntime.OrtValue.ortvalue_from_numpy(X, 'cuda', 0)
 Y_ortvalue = onnxruntime.OrtValue.ortvalue_from_shape_and_type([3, 2], np.float32, 'cuda', 0)  # Change the shape to the actual shape of the output being bound
-session = onnxruntime.InferenceSession('model.onnx')
+session = onnxruntime.InferenceSession('model.onnx', providers=['CUDAExecutionProvider', 'CPUExecutionProvider'])
 io_binding = session.io_binding()
 io_binding.bind_input(name='input', device_type=X_ortvalue.device_name(), device_id=0, element_type=np.float32, shape=X_ortvalue.shape(), buffer_ptr=X_ortvalue.data_ptr())
 io_binding.bind_output(name='output', device_type=Y_ortvalue.device_name(), device_id=0, element_type=np.float32, shape=Y_ortvalue.shape(), buffer_ptr=Y_ortvalue.data_ptr())
 session.run_with_iobinding(io_binding)
 
-

Scenario 4:

Users can request ONNX Runtime to allocate an output on a device, which is particularly useful for dynamically shaped outputs. The get_outputs() API gives access to the OrtValue(s) corresponding to the allocated output(s), so the memory allocated by ONNX Runtime for the output can be consumed directly as an OrtValue, as shown in the sketch below.
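A sketch of this scenario, again assuming CUDA and the illustrative names 'input'/'output':
X_ortvalue = onnxruntime.OrtValue.ortvalue_from_numpy(X, 'cuda', 0)
session = onnxruntime.InferenceSession('model.onnx', providers=['CUDAExecutionProvider', 'CPUExecutionProvider'])
io_binding = session.io_binding()
io_binding.bind_input(name='input', device_type=X_ortvalue.device_name(), device_id=0, element_type=np.float32, shape=X_ortvalue.shape(), buffer_ptr=X_ortvalue.data_ptr())
# No shape or buffer given: ONNX Runtime allocates the output on the device
io_binding.bind_output('output', 'cuda')
session.run_with_iobinding(io_binding)
ort_output = io_binding.get_outputs()[0]  # OrtValue backed by device memory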

-

Scenario 5:

+

In addition, ONNX Runtime supports working directly with OrtValue(s) while running inference on a model, if they are provided as part of the input feed.

Users can bind OrtValue (s) directly.

#X is numpy array on cpu
 #X is numpy array on cpu
 X_ortvalue = onnxruntime.OrtValue.ortvalue_from_numpy(X, 'cuda', 0)
 Y_ortvalue = onnxruntime.OrtValue.ortvalue_from_shape_and_type([3, 2], np.float32, 'cuda', 0)  # Change the shape to the actual shape of the output being bound
-session = onnxruntime.InferenceSession('model.onnx')
+session = onnxruntime.InferenceSession('model.onnx', providers=['CUDAExecutionProvider', 'CPUExecutionProvider'])
 io_binding = session.io_binding()
 io_binding.bind_ortvalue_input('input', X_ortvalue)
 io_binding.bind_ortvalue_output('output', Y_ortvalue)
 session.run_with_iobinding(io_binding)
 
+

You can also bind inputs and outputs directly to a PyTorch tensor.

+
# X is a PyTorch tensor on device
+session = onnxruntime.InferenceSession('model.onnx', providers=['CUDAExecutionProvider', 'CPUExecutionProvider'])
+binding = session.io_binding()
+
+X_tensor = X.contiguous()
+
+binding.bind_input(
+    name='X',
+    device_type='cuda',
+    device_id=0,
+    element_type=np.float32,
+    shape=tuple(X_tensor.shape),
+    buffer_ptr=X_tensor.data_ptr(),
+    )
+
+# Allocate the PyTorch tensor for the model output
+Y_shape = ... # You need to specify the output PyTorch tensor shape
+Y_tensor = torch.empty(Y_shape, dtype=torch.float32, device='cuda:0').contiguous()
+binding.bind_output(
+    name='Y',
+    device_type='cuda',
+    device_id=0,
+    element_type=np.float32,
+    shape=tuple(Y_tensor.shape),
+    buffer_ptr=Y_tensor.data_ptr(),
+)
+
+session.run_with_iobinding(binding)
+
+
+
+
+
+
+

API Details

+
+

InferenceSession

+
+
+class onnxruntime.InferenceSession(path_or_bytes, sess_options=None, providers=None, provider_options=None, **kwargs)[source]
+

This is the main class used to run a model.

+
+
Parameters
+
    +
  • path_or_bytes – filename or serialized ONNX or ORT format model in a byte string

  • +
  • sess_options – session options

  • +
  • providers – Optional sequence of providers in order of decreasing +precedence. Values can either be provider names or tuples of +(provider name, options dict). If not provided, then all available +providers are used with the default precedence.

  • +
  • provider_options – Optional sequence of options dicts corresponding +to the providers listed in ‘providers’.

  • +
+
+
+

The model type will be inferred unless explicitly set in the SessionOptions. +To explicitly set:

+
so = onnxruntime.SessionOptions()
+# so.add_session_config_entry('session.load_model_format', 'ONNX') or
+so.add_session_config_entry('session.load_model_format', 'ORT')
+
-
-

Device

-

The package is compiled for a specific device, GPU or CPU. -The CPU implementation includes optimizations -such as MKL (Math Kernel Libary). The following function -indicates the chosen option:

-
-
-onnxruntime.get_device()str
-

Return the device used to compute the prediction (CPU, MKL, …)

+

A file extension of ‘.ort’ will be inferred as an ORT format model. +All other filenames are assumed to be ONNX format models.

+

‘providers’ can contain either names or names and options. When any options +are given in ‘providers’, ‘provider_options’ should not be used.

+

The list of providers is ordered by precedence. For example +[‘CUDAExecutionProvider’, ‘CPUExecutionProvider’] +means execute a node using CUDAExecutionProvider +if capable, otherwise execute using CPUExecutionProvider.

+
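For instance, per-provider options can be passed inline as (provider name, options dict) tuples; a sketch, assuming a CUDA build (device_id is a CUDA provider option):
session = onnxruntime.InferenceSession(
    'model.onnx',
    providers=[('CUDAExecutionProvider', {'device_id': 0}), 'CPUExecutionProvider'])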
+
+disable_fallback()
+

Disable session.run() fallback mechanism.

-
-
-

Examples and datasets

-

The package contains a few models stored in ONNX format -used in the documentation. These don’t need to be downloaded -as they are installed with the package.

-
-
-onnxruntime.datasets.get_example(name)[source]
-

Retrieves the absolute file name of an example.

+
+
+enable_fallback()
+

Enable session.Run() fallback mechanism. If session.Run() fails due to an internal Execution Provider failure, +reset the Execution Providers enabled for this session. +If GPU is enabled, fall back to CUDAExecutionProvider; +otherwise fall back to CPUExecutionProvider.

-
-
-

Load and run a model

-

ONNX Runtime reads a model saved in ONNX format. -The main class InferenceSession wraps these functionalities -in a single place.

-
-
-class onnxruntime.ModelMetadata
-

Pre-defined and custom metadata about the model. -It is usually used to identify the model used to run the prediction and -facilitate the comparison.

-
-property custom_metadata_map
-

additional metadata

+
+end_profiling()
+

End profiling and return results in a file.

+

The results are stored in a file if the option +onnxruntime.SessionOptions.enable_profiling() is set.
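A sketch of the full profiling round trip (the model path, the input name 'X' and the array x are illustrative):
options = onnxruntime.SessionOptions()
options.enable_profiling = True
session = onnxruntime.InferenceSession('model.onnx', sess_options=options, providers=['CPUExecutionProvider'])
session.run(None, {'X': x})
prof_file = session.end_profiling()  # path of the generated trace file
print(prof_file)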

-
-property description
-

description of the model

+
+get_inputs()
+

Return the inputs metadata as a list of onnxruntime.NodeArg.
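For example, to inspect a model's signature via the returned NodeArg objects, given an existing InferenceSession named session:
for node_arg in session.get_inputs():
    print(node_arg.name, node_arg.type, node_arg.shape)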

-
-property domain
-

ONNX domain

+
+get_modelmeta()
+

Return the metadata. See onnxruntime.ModelMetadata.

-
-property graph_description
-

description of the graph hosted in the model

+
+get_outputs()
+

Return the outputs metadata as a list of onnxruntime.NodeArg.

-
-property graph_name
-

graph name

+
+get_overridable_initializers()
+

Return the inputs (including initializers) metadata as a list of onnxruntime.NodeArg.

-
-property producer_name
-

producer name

+
+get_profiling_start_time_ns()
+

Return the nanoseconds of profiling's start time, comparable to time.monotonic_ns() on Python 3.3 and later. +On some platforms, this timer may not be as precise as nanoseconds; +for instance, on Windows and macOS the precision is ~100ns.

-
-property version
-

version of the model

+
+get_provider_options()
+

Return registered execution providers’ configurations.

+
+
+get_providers()
+

Return list of registered execution providers.

-
-
-class onnxruntime.InferenceSession(path_or_bytes, sess_options=None, providers=None, provider_options=None)[source]
-

This is the main class used to run a model. The next release (ORT 1.10) will require explicitly setting the providers parameter if you want to use execution providers other than the default CPU provider (as opposed to the current behavior of providers getting set/registered by default based on the build flags) when instantiating InferenceSession.

+
+
+get_session_options()
+

Return the session options. See onnxruntime.SessionOptions.

-
-
-class onnxruntime.NodeArg
-

Node argument definition, for both input and output, -including arg name, arg type (contains both type and shape).

-
-property name
-

node name

+
+io_binding()
+

Return an onnxruntime.IOBinding object.

-
-property shape
-

node shape (assuming the node holds a tensor)

+
+run(output_names, input_feed, run_options=None)
+

Compute the predictions.

+
+
Parameters
+
    +
  • output_names – name of the outputs

  • +
  • input_feed – dictionary { input_name: input_value }

  • +
  • run_options – See onnxruntime.RunOptions.

  • +
+
+
Returns
+

list of results, every result is either a numpy array, +a sparse tensor, a list or a dictionary.

+
+
+
sess.run([output_name], {input_name: x})
+
+
-
-property type
-

node type

+
+run_with_iobinding(iobinding, run_options=None)
+

Compute the predictions.

+
+
Parameters
+
    +
  • iobinding – the iobinding object that has the graph inputs/outputs bound.

  • +
  • run_options – See onnxruntime.RunOptions.

  • +
+
+
+
+
+run_with_ort_values(output_names, input_dict_ort_values, run_options=None)
+

Compute the predictions.

+
+
Parameters
+
    +
  • output_names – name of the outputs

  • +
  • input_dict_ort_values – dictionary { input_name: input_ort_value } +See the OrtValue class for how to create an OrtValue +from a numpy array or SparseTensor

  • +
  • run_options – See onnxruntime.RunOptions.

  • +
+
+
Returns
+

an array of OrtValue

+
+
+
+
sess.run_with_ort_values([output_name], {input_name: x})
+
+
+
+
+set_providers(providers=None, provider_options=None)
+

Register the input list of execution providers. The underlying session is re-created.

+
+
Parameters
+
    +
  • providers – Optional sequence of providers in order of decreasing +precedence. Values can either be provider names or tuples of +(provider name, options dict). If not provided, then all available +providers are used with the default precedence.

  • +
  • provider_options – Optional sequence of options dicts corresponding +to the providers listed in ‘providers’.

  • +
+
+
+

‘providers’ can contain either names or names and options. When any options +are given in ‘providers’, ‘provider_options’ should not be used.

+

The list of providers is ordered by precedence. For example +[‘CUDAExecutionProvider’, ‘CPUExecutionProvider’] +means execute a node using CUDAExecutionProvider if capable, +otherwise execute using CPUExecutionProvider.

+
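A sketch of both calling styles (the option values are illustrative):
# Re-create the underlying session on CPU only
session.set_providers(['CPUExecutionProvider'])

# Names plus a parallel list of options dicts
session.set_providers(['CUDAExecutionProvider', 'CPUExecutionProvider'], [{'device_id': 0}, {}])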
+ +
+ +
+
+

Options

+
+

RunOptions

-
-class onnxruntime.RunOptions
+
+class onnxruntime.RunOptions(self: onnxruntime.capi.onnxruntime_pybind11_state.RunOptions)

Configuration information for a single Run.

-
-property log_severity_level
-

Log severity level for a particular Run() invocation. 0:Verbose, 1:Info, 2:Warning. 3:Error, 4:Fatal. Default is 2.

+
+add_run_config_entry(self: onnxruntime.capi.onnxruntime_pybind11_state.RunOptions, arg0: str, arg1: str) None
+

Set a single run configuration entry as a pair of strings.

-
-property log_verbosity_level
+
+get_run_config_entry(self: onnxruntime.capi.onnxruntime_pybind11_state.RunOptions, arg0: str) str
+

Get a single run configuration value using the given configuration key.

+
+ +
+
+property log_severity_level
+

Log severity level for a particular Run() invocation. 0:Verbose, 1:Info, 2:Warning. 3:Error, 4:Fatal. Default is 2.

+
+ +
+
+property log_verbosity_level

VLOG level if DEBUG build and run_log_severity_level is 0. Applies to a particular Run() invocation. Default is 0.

-
-
-property logid
+
+
+property logid

To identify logs generated by a particular Run() invocation.

-
-
-property only_execute_path_to_fetches
+
+
+property only_execute_path_to_fetches

Only execute the nodes needed by fetch list

-
-
-property terminate
+
+
+property terminate

Set to True to terminate any currently executing calls that are using this RunOptions instance. The individual calls will exit gracefully and return an error status.

+
+
+

SessionOptions

-
-class onnxruntime.SessionOptions
+
+class onnxruntime.SessionOptions(self: onnxruntime.capi.onnxruntime_pybind11_state.SessionOptions)

Configuration information for a session.

-
-add_free_dimension_override_by_denotation(self: onnxruntime.capi.onnxruntime_pybind11_state.SessionOptions, arg0: str, arg1: int)None
+
+add_external_initializers(self: onnxruntime.capi.onnxruntime_pybind11_state.SessionOptions, arg0: list, arg1: list) None
+
+ +
+
+add_free_dimension_override_by_denotation(self: onnxruntime.capi.onnxruntime_pybind11_state.SessionOptions, arg0: str, arg1: int) None

Specify the dimension size for each denotation associated with an input’s free dimension.

-
-add_free_dimension_override_by_name(self: onnxruntime.capi.onnxruntime_pybind11_state.SessionOptions, arg0: str, arg1: int)None
+
+add_free_dimension_override_by_name(self: onnxruntime.capi.onnxruntime_pybind11_state.SessionOptions, arg0: str, arg1: int) None

Specify values of named dimensions within model inputs.

-
-add_initializer(self: onnxruntime.capi.onnxruntime_pybind11_state.SessionOptions, arg0: str, arg1: object)None
+
+add_initializer(self: onnxruntime.capi.onnxruntime_pybind11_state.SessionOptions, arg0: str, arg1: object) None
-
-add_session_config_entry(self: onnxruntime.capi.onnxruntime_pybind11_state.SessionOptions, arg0: str, arg1: str)None
+
+add_session_config_entry(self: onnxruntime.capi.onnxruntime_pybind11_state.SessionOptions, arg0: str, arg1: str) None

Set a single session configuration entry as a pair of strings.

-
-
-property enable_cpu_mem_arena
+
+
+property enable_cpu_mem_arena

Enables the memory arena on CPU. Arena may pre-allocate memory for future usage. Set this option to false if you don’t want it. Default is True.

-
-
-property enable_mem_pattern
+
+
+property enable_mem_pattern

Enable the memory pattern optimization. Default is true.

-
-
-property enable_profiling
+
+
+property enable_mem_reuse
+

Enable the memory reuse optimization. Default is true.

+
+ +
+
+property enable_profiling

Enable profiling for this session. Default is false.

-
-
-property execution_mode
+
+
+property execution_mode

Sets the execution mode. Default is sequential.

-
-
-property execution_order
+
+
+property execution_order

Sets the execution order. Default is basic topological order.

-
-get_session_config_entry(self: onnxruntime.capi.onnxruntime_pybind11_state.SessionOptions, arg0: str)str
+
+get_session_config_entry(self: onnxruntime.capi.onnxruntime_pybind11_state.SessionOptions, arg0: str) str

Get a single session configuration value using the given configuration key.

-
-
-property graph_optimization_level
+
+
+property graph_optimization_level

Graph optimization level for this session.

-
-
-property inter_op_num_threads
+
+
+property inter_op_num_threads

Sets the number of threads used to parallelize the execution of the graph (across nodes). Default is 0 to let onnxruntime choose.

-
-
-property intra_op_num_threads
+
+
+property intra_op_num_threads

Sets the number of threads used to parallelize the execution within nodes. Default is 0 to let onnxruntime choose.

-
-
-property log_severity_level
+
+
+property log_severity_level

Log severity level. Applies to session load, initialization, etc. 0:Verbose, 1:Info, 2:Warning. 3:Error, 4:Fatal. Default is 2.

-
-
-property log_verbosity_level
+
+
+property log_verbosity_level

VLOG level if DEBUG build and session_log_severity_level is 0. Applies to session load, initialization, etc. Default is 0.

-
-
-property logid
+
+
+property logid

Logger id to use for session output.

-
-
-property optimized_model_filepath
-

File path to serialize optimized model to. +

+
+property optimized_model_filepath
+

File path to serialize optimized model to. Optimized model is not serialized unless optimized_model_filepath is set. -Serialized model format will default to ONNX unless:

-
-
    -
  • add_session_config_entry is used to set ‘session.save_model_format’ to ‘ORT’, or

  • -
  • there is no ‘session.save_model_format’ config entry and optimized_model_filepath ends in ‘.ort’ (case insensitive)

  • -
-
+Serialized model format will default to ONNX unless: +- add_session_config_entry is used to set ‘session.save_model_format’ to ‘ORT’, or +- there is no ‘session.save_model_format’ config entry and optimized_model_filepath ends in ‘.ort’ (case insensitive)

-
-
-property profile_file_prefix
+
+
+property profile_file_prefix

The prefix of the profile file. The current time will be appended to the file name.

-
-register_custom_ops_library(self: onnxruntime.capi.onnxruntime_pybind11_state.SessionOptions, arg0: str)None
+
+register_custom_ops_library(self: onnxruntime.capi.onnxruntime_pybind11_state.SessionOptions, arg0: str) None

Specify the path to the shared library containing the custom op kernels required to run a model.

-
-
-property use_deterministic_compute
+
+
+property use_deterministic_compute

Whether to use deterministic compute. Default is false.

+
+
+
+

Data

+
+

OrtValue

+
+
+class onnxruntime.OrtValue(ortvalue, numpy_obj=None)[source]
+

A data structure that supports all ONNX data formats (tensors and non-tensors) and allows users +to place the data backing these on a device, for example, on a CUDA supported device. +This class provides APIs to construct and deal with OrtValues.

+
+
+as_sparse_tensor()[source]
+

The function will return the SparseTensor contained in this OrtValue

+
+ +
+
+data_ptr()[source]
+

Returns the address of the first element in the OrtValue’s data buffer

+
+ +
+
+data_type()[source]
+

Returns the data type of the data in the OrtValue

+
+ +
+
+device_name()[source]
+

Returns the name of the device where the OrtValue’s data buffer resides e.g. cpu, cuda

+
+ +
+
+element_type()[source]
+

Returns the proto type of the data in the OrtValue +if the OrtValue is a tensor.

+
+ +
+
+has_value()[source]
+

Returns True if the OrtValue corresponding to an +optional type contains data, else returns False

+
+ +
+
+is_sparse_tensor()[source]
+

Returns True if the OrtValue contains a SparseTensor, else returns False

+
+ +
+
+is_tensor()[source]
+

Returns True if the OrtValue contains a Tensor, else returns False

+
+ +
+
+is_tensor_sequence()[source]
+

Returns True if the OrtValue contains a Tensor Sequence, else returns False

+
+ +
+
+numpy()[source]
+

Returns a Numpy object from the OrtValue. +Valid only for OrtValues holding Tensors. Throws for OrtValues holding non-Tensors. +Use accessors to gain a reference to non-Tensor objects such as SparseTensor

+
+ +
+
+static ort_value_from_sparse_tensor(sparse_tensor)[source]
+

The function will construct an OrtValue instance from a valid SparseTensor. +The new OrtValue instance will assume ownership of sparse_tensor

+
+ +
+
+static ortvalue_from_numpy(numpy_obj, device_type='cpu', device_id=0)[source]
+

Factory method to construct an OrtValue (which holds a Tensor) from a given Numpy object +A copy of the data in the Numpy object is held by the OrtValue only if the device is NOT cpu

+
+
Parameters
+
    +
  • numpy_obj – The Numpy object to construct the OrtValue from

  • +
  • device_type – e.g. cpu, cuda, cpu by default

  • +
  • device_id – device id, e.g. 0

  • +
+
+
+
+ +
+
+static ortvalue_from_shape_and_type(shape=None, element_type=None, device_type='cpu', device_id=0)[source]
+

Factory method to construct an OrtValue (which holds a Tensor) from given shape and element_type

+
+
Parameters
+
    +
  • shape – List of integers indicating the shape of the OrtValue

  • +
  • element_type – The data type of the elements in the OrtValue (numpy type)

  • +
  • device_type – e.g. cpu, cuda, cpu by default

  • +
  • device_id – device id, e.g. 0

  • +
+
+
+
+ +
+
+shape()[source]
+

Returns the shape of the data in the OrtValue

+
+ +
+
+update_inplace(np_arr)[source]
+

Update the OrtValue in place with a new Numpy array. The numpy contents +are copied over to the device memory backing the OrtValue. It can be used +to update the input values for an InferenceSession with CUDA graph +enabled or other scenarios where the OrtValue needs to be updated while +the memory address cannot be changed.
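A sketch of the intended use (x0 and x1 are illustrative numpy arrays of identical shape and dtype):
x_gpu = onnxruntime.OrtValue.ortvalue_from_numpy(x0, 'cuda', 0)
# ... bind x_gpu once, e.g. in a session captured with CUDA graphs ...
x_gpu.update_inplace(x1)  # copies x1 into the same device buffer; the address is unchanged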

+
+ +
+ +
+
+

SparseTensor

+
+
+class onnxruntime.SparseTensor(sparse_tensor)[source]
+

A data structure that projects the C++ SparseTensor object. +The class provides an API to work with the object. +Depending on the format, the class may hold more than one buffer.

+

Internal constructor

+
+
+as_blocksparse_view()[source]
+

The method will return the blocksparse representation of the sparse tensor, which enables +querying BlockSparse indices. If the instance does not contain the BlockSparse format, it throws. +You can query the indices as:

+
block_sparse_indices = sparse_tensor.as_blocksparse_view().indices()
+
-
-

Backend

+

which will return a numpy array that is backed by the native memory

+
+ +
+
+as_coo_view()[source]
+

The method will return the COO representation of the sparse tensor, which enables +querying COO indices. If the instance does not contain the COO format, it throws. +You can query the COO indices as:

+
coo_indices = sparse_tensor.as_coo_view().indices()
+
+
+

which will return a numpy array that is backed by the native memory.

+
+ +
+
+as_csrc_view()[source]
+

The method will return the CSR(C) representation of the sparse tensor, which enables +querying CSR(C) indices. If the instance does not contain the CSR(C) format, it throws. +You can query the indices as:

+
inner_indices = sparse_tensor.as_csrc_view().inner()
+outer_indices = sparse_tensor.as_csrc_view().outer()
+
+
+

returning numpy arrays backed by the native memory.

+
+ +
+
+data_type()[source]
+

Returns a string data type of the data in the OrtValue

+
+ +
+
+dense_shape()[source]
+

Returns a numpy array(int64) containing a dense shape of a sparse tensor

+
+ +
+
+device_name()[source]
+

Returns the name of the device where the SparseTensor data buffers reside e.g. cpu, cuda

+
+ +
+
+format()[source]
+

Returns an OrtSparseFormat enumeration

+
+ +
+
+static sparse_coo_from_numpy(dense_shape, values, coo_indices, ort_device)[source]
+

Factory method to construct a SparseTensor in COO format from given arguments

+
+
Parameters
+
    +
  • dense_shape – 1-D numpy array(int64) or a python list that contains the dense shape of the sparse tensor; +must be in CPU memory

  • +
  • values – a homogeneous, contiguous 1-D numpy array that contains the non-zero elements of the tensor.

  • +
  • coo_indices – contiguous numpy array(int64) that contains COO indices for the tensor. coo_indices may +have a 1-D shape when it contains a linear index of non-zero values, and its length must be equal to +that of the values. It can also be of 2-D shape, in which case it contains pairs of coordinates for +each of the nnz values, and its length must be exactly twice the values length.

  • +
  • ort_device

      +
    • describes the backing memory owned by the supplied numpy arrays. Only CPU memory is

    • +
    +

    supported for non-numeric data types.

    +

  • +
+
+
+

For primitive types, the method will map the values and coo_indices arrays into native memory and will use +them as backing storage. It will increment the reference count for the numpy arrays and decrement it +on GC. The buffers may reside in any storage, either CPU or GPU. +For strings and objects, it will create a copy of the arrays in CPU memory, as ORT does not support those +on other devices and their memory cannot be mapped.
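A minimal sketch on CPU (shapes and values are illustrative; the OrtDevice.make helper is assumed for constructing an ort_device):
import numpy as np
import onnxruntime

# 3x3 dense matrix with non-zeros at (0,0) and (2,1), i.e. linear indices 0 and 7
dense_shape = np.array([3, 3], dtype=np.int64)
values = np.array([1.0, 2.0], dtype=np.float32)
coo_indices = np.array([0, 7], dtype=np.int64)

cpu_device = onnxruntime.OrtDevice.make('cpu', 0)  # assumed helper; OrtDevice construction is otherwise internal
sparse = onnxruntime.SparseTensor.sparse_coo_from_numpy(dense_shape, values, coo_indices, cpu_device)
print(sparse.values(), sparse.dense_shape())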

+
+ +
+
+static sparse_csr_from_numpy(dense_shape, values, inner_indices, outer_indices, ort_device)[source]
+

Factory method to construct a SparseTensor in CSR format from given arguments

+
+
Parameters
+
    +
  • dense_shape – 1-D numpy array(int64) or a python list that contains a dense_shape of the +sparse tensor (rows, cols) must be on cpu memory

  • +
  • values – a contiguous, homogeneous 1-D numpy array that contains non-zero elements of the tensor +of a type.

  • +
  • inner_indices – contiguous 1-D numpy array(int64) that contains CSR inner indices for the tensor. +Its length must be equal to that of the values.

  • +
  • outer_indices – contiguous 1-D numpy array(int64) that contains CSR outer indices for the tensor. +Its length must be equal to the number of rows + 1.

  • +
  • ort_device

      +
    • describes the backing memory owned by the supplied numpy arrays. Only CPU memory is

    • +
    +

    supported for non-numeric data types.

    +

  • +
+
+
+

For primitive types, the method will map the values and indices arrays into native memory and will use them as +backing storage. It will increment the reference count and decrement the count when it is GCed. +The buffers may reside in any storage, either CPU or GPU. +For strings and objects, it will create a copy of the arrays in CPU memory, as ORT does not support those +on other devices and their memory cannot be mapped.
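The same illustrative 3x3 matrix in CSR form (reusing cpu_device from the COO sketch above):
dense_shape = np.array([3, 3], dtype=np.int64)
values = np.array([1.0, 2.0], dtype=np.float32)
inner_indices = np.array([0, 1], dtype=np.int64)        # column index of each value
outer_indices = np.array([0, 1, 1, 2], dtype=np.int64)  # one entry per row, plus one

sparse = onnxruntime.SparseTensor.sparse_csr_from_numpy(dense_shape, values, inner_indices, outer_indices, cpu_device)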

+
+ +
+
+to_cuda(ort_device)[source]
+

Returns a copy of this instance on the specified cuda device

+
+
Parameters
+

ort_device – with name ‘cuda’ and valid gpu device id

+
+
+

The method will throw if:

+
    +
  • this instance contains strings

  • +
  • this instance is already on GPU. Cross GPU copy is not supported

  • +
  • CUDA is not present in this build

  • +
  • if the specified device is not valid

  • +
+
+ +
+
+values()[source]
+

The method returns a numpy array that is backed by the native memory +if the data type is numeric. Otherwise, the returned numpy array contains +copies of the strings.

+
+ +
+ +
+
+
+

Devices

+
+

IOBinding

+
+
+class onnxruntime.IOBinding(session)[source]
+

This class provides an API to bind input/output to a specified device, e.g. GPU.

+
+
+bind_cpu_input(name, arr_on_cpu)[source]
+

Bind an input to an array on CPU.
  • name – input name
  • arr_on_cpu – input values as a python array on CPU

+
+ +
+
+bind_input(name, device_type, device_id, element_type, shape, buffer_ptr)[source]
+
+
Parameters
+
    +
  • name – input name

  • +
  • device_type – e.g. cpu, cuda

  • +
  • device_id – device id, e.g. 0

  • +
  • element_type – input element type

  • +
  • shape – input shape

  • +
  • buffer_ptr – memory pointer to input data

  • +
+
+
+
+ +
+
+bind_ortvalue_input(name, ortvalue)[source]
+
+
Parameters
+
    +
  • name – input name

  • +
  • ortvalue – OrtValue instance to bind

  • +
+
+
+
+ +
+
+bind_ortvalue_output(name, ortvalue)[source]
+
+
Parameters
+
    +
  • name – output name

  • +
  • ortvalue – OrtValue instance to bind

  • +
+
+
+
+ +
+
+bind_output(name, device_type='cpu', device_id=0, element_type=None, shape=None, buffer_ptr=None)[source]
+
+
Parameters
+
    +
  • name – output name

  • +
  • device_type – e.g. cpu, cuda, cpu by default

  • +
  • device_id – device id, e.g. 0

  • +
  • element_type – output element type

  • +
  • shape – output shape

  • +
  • buffer_ptr – memory pointer to output data

  • +
+
+
+
+ +
+
+copy_outputs_to_cpu()[source]
+

Copy output contents to CPU (if on another device). No-op if already on the CPU.

+
+ +
+
+get_outputs()[source]
+

Returns the output OrtValues from the Run() that preceded the call. +The data buffer of the obtained OrtValues may not reside in CPU memory

+
+ +
+ +
+
+

OrtDevice

+
+
+class onnxruntime.OrtDevice(c_ort_device)[source]
+

A data structure that exposes the underlying C++ OrtDevice

+

Internal constructor

+
+ +
+
+
+

Internal classes

+

These classes cannot be instantiated by users but they are returned +by methods or functions of this library.

+
+

ModelMetadata

+
+
+class onnxruntime.ModelMetadata
+

Pre-defined and custom metadata about the model. +It is usually used to identify the model used to run the prediction and +facilitate the comparison.

+
+
+property custom_metadata_map
+

additional metadata

+
+ +
+
+property description
+

description of the model

+
+ +
+
+property domain
+

ONNX domain

+
+ +
+
+property graph_description
+

description of the graph hosted in the model

+
+ +
+
+property graph_name
+

graph name

+
+ +
+
+property producer_name
+

producer name

+
+ +
+
+property version
+

version of the model

+
+ +
+ +
+
+

NodeArg

+
+
+class onnxruntime.NodeArg
+

Node argument definition, for both input and output, +including arg name, arg type (contains both type and shape).

+
+
+property name
+

node name

+
+ +
+
+property shape
+

node shape (assuming the node holds a tensor)

+
+ +
+
+property type
+

node type

+
+ +
+ +
+
+
+
+

Backend

In addition to the regular API which is optimized for performance and usability, ONNX Runtime also implements the -ONNX backend API +ONNX backend API for verification of ONNX specification conformance. The following functions are supported:

-
-onnxruntime.backend.is_compatible(model, device=None, **kwargs)
+
+onnxruntime.backend.is_compatible(model, device=None, **kwargs)

Return whether the model is compatible with the backend.

Parameters
@@ -464,8 +1164,8 @@

Backend -
-onnxruntime.backend.prepare(model, device=None, **kwargs)
+
+onnxruntime.backend.prepare(model, device=None, **kwargs)

Load the model and creates a onnxruntime.InferenceSession ready to be used as a backend.

@@ -486,8 +1186,8 @@

Backend -
-onnxruntime.backend.run(model, inputs, device=None, **kwargs)
+
+onnxruntime.backend.run(model, inputs, device=None, **kwargs)

Compute the prediction.

Parameters
@@ -508,26 +1208,77 @@

Backend -
-onnxruntime.backend.supports_device(device)
+
+onnxruntime.backend.supports_device(device)

Check whether the backend is compiled with particular device support. In particular it’s used in the testing suite.

-

-
+
+
+ +
+ -
-

ONNX Runtime Backend for ONNX

+
+

ONNX Runtime Backend for ONNX

ONNX Runtime extends the -onnx backend API +onnx backend API to run predictions using this runtime. Let’s use the API to compute the prediction of a simple logistic regression model.

import numpy as np
-from onnxruntime import datasets
-from onnxruntime.capi.onnxruntime_pybind11_state import InvalidArgument
-import onnxruntime.backend as backend
 from onnx import load
 
+import onnxruntime.backend as backend
+
+
+

The device depends on how the package was compiled, +GPU or CPU.

+
from onnxruntime import datasets, get_device
+from onnxruntime.capi.onnxruntime_pybind11_state import InvalidArgument
+
+device = get_device()
+
 name = datasets.get_example("logreg_iris.onnx")
 model = load(name)
 
-rep = backend.prepare(model, 'CPU')
+rep = backend.prepare(model, device)
 x = np.array([[-1.0, -2.0]], dtype=np.float32)
 try:
     label, proba = rep.run(x)
@@ -68,25 +78,14 @@
     print(e)
 
-

Out:

[ONNXRuntimeError] : 2 : INVALID_ARGUMENT : Got invalid dimensions for input: float_input for the following indices
  index: 0 Got: 1 Expected: 3
  Please fix either the inputs or the model.
 
-

The device depends on how the package was compiled, -GPU or CPU.

-
from onnxruntime import get_device
-print(get_device())
-
-
-

Out:

-
CPU
-
-

The backend can also directly load the model without using onnx.

-
rep = backend.prepare(name, 'CPU')
+
rep = backend.prepare(name, device)
 x = np.array([[-1.0, -2.0]], dtype=np.float32)
 try:
     label, proba = rep.run(x)
@@ -96,7 +95,6 @@
     print(e)
 
-

Out:

[ONNXRuntimeError] : 2 : INVALID_ARGUMENT : Got invalid dimensions for input: float_input for the following indices
  index: 0 Got: 1 Expected: 3
  Please fix either the inputs or the model.
@@ -105,8 +103,8 @@
 

The backend API is implemented by other frameworks and makes it easier to switch between multiple runtimes with the same API.

-

Total running time of the script: ( 0 minutes 0.019 seconds)

-

Gallery generated by Sphinx-Gallery

-
+
@@ -139,7 +137,7 @@

@@ -158,12 +156,12 @@
- + @@ -180,7 +178,7 @@

©2018-2021, Microsoft. | - Powered by Sphinx 3.5.1 + Powered by Sphinx 5.3.0 & Alabaster 0.7.12 | diff --git a/docs/api/python/auto_examples/plot_common_errors.html b/docs/api/python/auto_examples/plot_common_errors.html index b669a1cb59ac9..b5e0dfaa4b742 100644 --- a/docs/api/python/auto_examples/plot_common_errors.html +++ b/docs/api/python/auto_examples/plot_common_errors.html @@ -4,19 +4,22 @@ - - Common errors with onnxruntime — ONNX Runtime 1.7.0 documentation - - + + + Common errors with onnxruntime — ONNX Runtime 1.14.0 documentation + + - - - - - + + + + + + + @@ -42,8 +45,8 @@

Click here to download the full example code

-
-

Common errors with onnxruntime

+
+

Common errors with onnxruntime

This example looks into several common situations in which onnxruntime does not return the model prediction but raises an exception instead. @@ -51,13 +54,14 @@ Step 1: Train a model using your favorite framework, producing a logistic regression trained on the Iris dataset. The model takes a vector of dimension 2 and returns a class among three.

-
import onnxruntime as rt
+
import numpy
+
+import onnxruntime as rt
 from onnxruntime.capi.onnxruntime_pybind11_state import InvalidArgument
-import numpy
 from onnxruntime.datasets import get_example
 
 example2 = get_example("logreg_iris.onnx")
-sess = rt.InferenceSession(example2)
+sess = rt.InferenceSession(example2, providers=rt.get_available_providers())
 
 input_name = sess.get_inputs()[0].name
 output_name = sess.get_outputs()[0].name
@@ -74,7 +78,6 @@
     print("{0}: {1}".format(type(e), e))
 
-

Out:

Unexpected type
 <class 'onnxruntime.capi.onnxruntime_pybind11_state.InvalidArgument'>: [ONNXRuntimeError] : 2 : INVALID_ARGUMENT : Unexpected input data type. Actual: (tensor(double)) , expected: (tensor(float))
 
@@ -89,7 +92,6 @@ print("{0}: {1}".format(type(e), e))
-

Out:

Misspelled output name
 <class 'onnxruntime.capi.onnxruntime_pybind11_state.InvalidArgument'>: [ONNXRuntimeError] : 2 : INVALID_ARGUMENT : Invalid Output Name:misspelled
 
@@ -105,7 +107,6 @@ print(e)
-

Out:

All outputs
 [array([0, 0, 0], dtype=int64), [{0: 0.9505997896194458, 1: 0.027834143489599228, 2: 0.021566055715084076}, {0: 0.9974970817565918, 1: 5.6270167988259345e-05, 2: 0.0024466365575790405}, {0: 0.9997311234474182, 1: 1.787709464906584e-07, 2: 0.0002686927327886224}]]
 
@@ -119,7 +120,6 @@ print("{0}: {1}".format(type(e), e))
-

Out:

Misspelled input name
 <class 'onnxruntime.capi.onnxruntime_pybind11_state.InvalidArgument'>: [ONNXRuntimeError] : 2 : INVALID_ARGUMENT : Invalid Feed Input Name:misspelled
 
@@ -127,12 +127,12 @@

onnxruntime does not necessarily fail if the input dimension is a multiple of the expected input dimension.

for x in [
-        numpy.array([1.0, 2.0, 3.0, 4.0], dtype=numpy.float32),
-        numpy.array([[1.0, 2.0, 3.0, 4.0]], dtype=numpy.float32),
-        numpy.array([[1.0, 2.0], [3.0, 4.0]], dtype=numpy.float32),
-        numpy.array([1.0, 2.0, 3.0], dtype=numpy.float32),
-        numpy.array([[1.0, 2.0, 3.0]], dtype=numpy.float32),
-        ]:
+    numpy.array([1.0, 2.0, 3.0, 4.0], dtype=numpy.float32),
+    numpy.array([[1.0, 2.0, 3.0, 4.0]], dtype=numpy.float32),
+    numpy.array([[1.0, 2.0], [3.0, 4.0]], dtype=numpy.float32),
+    numpy.array([1.0, 2.0, 3.0], dtype=numpy.float32),
+    numpy.array([[1.0, 2.0, 3.0]], dtype=numpy.float32),
+]:
     try:
         r = sess.run([output_name], {input_name: x})
         print("Shape={0} and predicted labels={1}".format(x.shape, r))
@@ -140,12 +140,12 @@
         print("ERROR with Shape={0} - {1}".format(x.shape, e))
 
 for x in [
-        numpy.array([1.0, 2.0, 3.0, 4.0], dtype=numpy.float32),
-        numpy.array([[1.0, 2.0, 3.0, 4.0]], dtype=numpy.float32),
-        numpy.array([[1.0, 2.0], [3.0, 4.0]], dtype=numpy.float32),
-        numpy.array([1.0, 2.0, 3.0], dtype=numpy.float32),
-        numpy.array([[1.0, 2.0, 3.0]], dtype=numpy.float32),
-        ]:
+    numpy.array([1.0, 2.0, 3.0, 4.0], dtype=numpy.float32),
+    numpy.array([[1.0, 2.0, 3.0, 4.0]], dtype=numpy.float32),
+    numpy.array([[1.0, 2.0], [3.0, 4.0]], dtype=numpy.float32),
+    numpy.array([1.0, 2.0, 3.0], dtype=numpy.float32),
+    numpy.array([[1.0, 2.0, 3.0]], dtype=numpy.float32),
+]:
     try:
         r = sess.run(None, {input_name: x})
         print("Shape={0} and predicted probabilities={1}".format(x.shape, r[1]))
@@ -153,7 +153,6 @@
         print("ERROR with Shape={0} - {1}".format(x.shape, e))
 
-

Out:

ERROR with Shape=(4,) - [ONNXRuntimeError] : 2 : INVALID_ARGUMENT : Invalid rank for input: float_input Got: 1 Expected: 2 Please fix either the inputs or the model.
 ERROR with Shape=(1, 4) - [ONNXRuntimeError] : 2 : INVALID_ARGUMENT : Got invalid dimensions for input: float_input for the following indices
  index: 0 Got: 1 Expected: 3
@@ -185,10 +184,10 @@
 

It does not fail either if the number of dimensions is higher than expected, but it produces a warning.

for x in [
-        numpy.array([[[1.0, 2.0], [3.0, 4.0]]], dtype=numpy.float32),
-        numpy.array([[[1.0, 2.0, 3.0]]], dtype=numpy.float32),
-        numpy.array([[[1.0, 2.0]], [[3.0, 4.0]]], dtype=numpy.float32),
-        ]:
+    numpy.array([[[1.0, 2.0], [3.0, 4.0]]], dtype=numpy.float32),
+    numpy.array([[[1.0, 2.0, 3.0]]], dtype=numpy.float32),
+    numpy.array([[[1.0, 2.0]], [[3.0, 4.0]]], dtype=numpy.float32),
+]:
     try:
         r = sess.run([output_name], {input_name: x})
         print("Shape={0} and predicted labels={1}".format(x.shape, r))
@@ -196,14 +195,13 @@
         print("ERROR with Shape={0} - {1}".format(x.shape, e))
 
-

Out:

ERROR with Shape=(1, 2, 2) - [ONNXRuntimeError] : 2 : INVALID_ARGUMENT : Invalid rank for input: float_input Got: 3 Expected: 2 Please fix either the inputs or the model.
 ERROR with Shape=(1, 1, 3) - [ONNXRuntimeError] : 2 : INVALID_ARGUMENT : Invalid rank for input: float_input Got: 3 Expected: 2 Please fix either the inputs or the model.
 ERROR with Shape=(2, 1, 2) - [ONNXRuntimeError] : 2 : INVALID_ARGUMENT : Invalid rank for input: float_input Got: 3 Expected: 2 Please fix either the inputs or the model.
 
-

Total running time of the script: ( 0 minutes 0.018 seconds)

-

Gallery generated by Sphinx-Gallery

-
+
@@ -236,7 +234,7 @@

@@ -255,12 +253,12 @@
- + @@ -277,7 +275,7 @@

©2018-2021, Microsoft. | - Powered by Sphinx 3.5.1 + Powered by Sphinx 5.3.0 & Alabaster 0.7.12 | diff --git a/docs/api/python/auto_examples/plot_convert_pipeline_vectorizer.html b/docs/api/python/auto_examples/plot_convert_pipeline_vectorizer.html index 77501af9fcdca..042216a3563ee 100644 --- a/docs/api/python/auto_examples/plot_convert_pipeline_vectorizer.html +++ b/docs/api/python/auto_examples/plot_convert_pipeline_vectorizer.html @@ -4,19 +4,22 @@ - - Train, convert and predict with ONNX Runtime — ONNX Runtime 1.7.0 documentation - - + + + Train, convert and predict with ONNX Runtime — ONNX Runtime 1.14.0 documentation + + - - - - - + + + + + + + @@ -42,8 +45,8 @@

Click here to download the full example code

-
-

Train, convert and predict with ONNX Runtime

+
+

Train, convert and predict with ONNX Runtime

This example demonstrates an end to end scenario starting with the training of a scikit-learn pipeline which takes as inputs not a regular vector but a @@ -55,37 +58,75 @@

  • Conversion to ONNX format

  • -
    -

    Train a pipeline

    +
    +

    Train a pipeline

The first step consists in retrieving the Boston dataset.

    import pandas
     from sklearn.datasets import load_boston
    +
     boston = load_boston()
     X, y = boston.data, boston.target
     
     from sklearn.model_selection import train_test_split
    +
     X_train, X_test, y_train, y_test = train_test_split(X, y)
    -X_train_dict = pandas.DataFrame(X_train[:,1:]).T.to_dict().values()
    -X_test_dict = pandas.DataFrame(X_test[:,1:]).T.to_dict().values()
    +X_train_dict = pandas.DataFrame(X_train[:, 1:]).T.to_dict().values()
    +X_test_dict = pandas.DataFrame(X_test[:, 1:]).T.to_dict().values()
    +
    +
    +
    /home/runner/.local/lib/python3.8/site-packages/sklearn/utils/deprecation.py:87: FutureWarning: Function load_boston is deprecated; `load_boston` is deprecated in 1.0 and will be removed in 1.2.
    +
    +    The Boston housing prices dataset has an ethical problem. You can refer to
    +    the documentation of this function for further details.
    +
    +    The scikit-learn maintainers therefore strongly discourage the use of this
    +    dataset unless the purpose of the code is to study and educate about
    +    ethical issues in data science and machine learning.
    +
    +    In this special case, you can fetch the dataset from the original
    +    source::
    +
    +        import pandas as pd
    +        import numpy as np
    +
    +        data_url = "http://lib.stat.cmu.edu/datasets/boston"
    +        raw_df = pd.read_csv(data_url, sep="\s+", skiprows=22, header=None)
    +        data = np.hstack([raw_df.values[::2, :], raw_df.values[1::2, :2]])
    +        target = raw_df.values[1::2, 2]
    +
    +    Alternative datasets include the California housing dataset (i.e.
    +    :func:`~sklearn.datasets.fetch_california_housing`) and the Ames housing
    +    dataset. You can load the datasets as follows::
    +
    +        from sklearn.datasets import fetch_california_housing
    +        housing = fetch_california_housing()
    +
    +    for the California housing dataset and::
    +
    +        from sklearn.datasets import fetch_openml
    +        housing = fetch_openml(name="house_prices", as_frame=True)
    +
    +    for the Ames housing dataset.
    +  warnings.warn(msg, category=FutureWarning)
     

    We create a pipeline.

    -
    from sklearn.pipeline import make_pipeline
    -from sklearn.ensemble import GradientBoostingRegressor
    +
    from sklearn.ensemble import GradientBoostingRegressor
     from sklearn.feature_extraction import DictVectorizer
    -pipe = make_pipeline(
    -            DictVectorizer(sparse=False),
    -            GradientBoostingRegressor())
    +from sklearn.pipeline import make_pipeline
    +
    +pipe = make_pipeline(DictVectorizer(sparse=False), GradientBoostingRegressor())
     
     pipe.fit(X_train_dict, y_train)
     
    -

    Out:

    -
    Pipeline(steps=[('dictvectorizer', DictVectorizer(sparse=False)),
    -                ('gradientboostingregressor', GradientBoostingRegressor())])
    -
    +
    +
    Pipeline(steps=[('dictvectorizer', DictVectorizer(sparse=False)),
    +                ('gradientboostingregressor', GradientBoostingRegressor())])
    -

    We compute the prediction on the test set +
    +

    We compute the prediction on the test set and we show the confusion matrix.

    from sklearn.metrics import r2_score
     
    @@ -93,21 +134,20 @@ 

    Train a pipelineprint(r2_score(y_test, pred))

    -

    Out:

    -
    +
    +

    Conversion to ONNX format

    We use module sklearn-onnx to convert the model into ONNX format.

    from skl2onnx import convert_sklearn
    -from skl2onnx.common.data_types import FloatTensorType, Int64TensorType, DictionaryType, SequenceType
    +from skl2onnx.common.data_types import DictionaryType, FloatTensorType, Int64TensorType, SequenceType
     
     # initial_type = [('float_input', DictionaryType(Int64TensorType([1]), FloatTensorType([])))]
    -initial_type = [('float_input', DictionaryType(Int64TensorType([1]), FloatTensorType([])))]
    +initial_type = [("float_input", DictionaryType(Int64TensorType([1]), FloatTensorType([])))]
     onx = convert_sklearn(pipe, initial_types=initial_type)
     with open("pipeline_vectorize.onnx", "wb") as f:
         f.write(onx.SerializeToString())
    @@ -118,15 +158,15 @@ 

    Conversion to ONNX format
    import onnxruntime as rt
     from onnxruntime.capi.onnxruntime_pybind11_state import InvalidArgument
     
    -sess = rt.InferenceSession("pipeline_vectorize.onnx")
    +sess = rt.InferenceSession("pipeline_vectorize.onnx", providers=rt.get_available_providers())
     
     import numpy
    +
     inp, out = sess.get_inputs()[0], sess.get_outputs()[0]
     print("input name='{}' and shape={} and type={}".format(inp.name, inp.shape, inp.type))
     print("output name='{}' and shape={} and type={}".format(out.name, out.shape, out.type))
     

    -

    Out:

    -

    Out:

    [ONNXRuntimeError] : 2 : INVALID_ARGUMENT : Unexpected input data type. Actual: ((seq(map(int64,tensor(float))))) , expected: ((map(int64,tensor(float))))
     
    @@ -152,14 +191,13 @@

    Conversion to ONNX format
    print(r2_score(pred, pred_onx))
     

    -

    Out:

    -
    0.9999999999999528
    +
    0.9999999999999738
     

Very similar. ONNX Runtime uses floats instead of doubles, which explains the small discrepancies.

    -

    Total running time of the script: ( 0 minutes 1.592 seconds)

    -
    + +
    @@ -193,7 +231,7 @@

@@ -212,12 +250,12 @@
- + @@ -234,7 +272,7 @@

    ©2018-2021, Microsoft. | - Powered by Sphinx 3.5.1 + Powered by Sphinx 5.3.0 & Alabaster 0.7.12 | diff --git a/docs/api/python/auto_examples/plot_dl_keras.html b/docs/api/python/auto_examples/plot_dl_keras.html deleted file mode 100644 index 6b91709778485..0000000000000 --- a/docs/api/python/auto_examples/plot_dl_keras.html +++ /dev/null @@ -1,216 +0,0 @@ - - - - - - - - ONNX Runtime for Keras — ONNX Runtime 1.6.0 documentation - - - - - - - - - - - - - - - - - - - - - - -
    -
    -
    - - -
    - - -
    -

    ONNX Runtime for Keras

    -

    The following demonstrates how to compute the predictions -of a pretrained deep learning model obtained from -keras -with onnxruntime. The conversion requires -keras, -tensorflow, -keras-onnx, -onnxmltools -but then only onnxruntime is required -to compute the predictions.

    -
    import os
    -if not os.path.exists('dense121.onnx'):
    -    from keras.applications.densenet import DenseNet121
    -    model = DenseNet121(include_top=True, weights='imagenet')
    -
    -    from keras2onnx import convert_keras
    -    onx = convert_keras(model, 'dense121.onnx')
    -    with open("dense121.onnx", "wb") as f:
    -        f.write(onx.SerializeToString())
    -
    -
    -
    Traceback (most recent call last):
    -  File "C:\xadupre\microsoft_xadupre\onnxruntime\docs\python\examples\plot_dl_keras.py", line 28, in <module>
    -    onx = convert_keras(model, 'dense121.onnx')
    -  File "C:\xadupre\microsoft_xadupre\keras-onnx\keras2onnx\main.py", line 82, in convert_keras
    -    tf_graph = build_layer_output_from_model(model, output_dict, input_names,
    -  File "C:\xadupre\microsoft_xadupre\keras-onnx\keras2onnx\_parser_tf.py", line 308, in build_layer_output_from_model
    -    graph = model.outputs[0].graph
    -AttributeError: 'KerasTensor' object has no attribute 'graph'
    -
    -
    -

    Let’s load an image (source: wikipedia).

    -
    from keras.preprocessing.image import array_to_img, img_to_array, load_img
    -img = load_img('Sannosawa1.jpg')
    -ximg = img_to_array(img)
    -
    -import matplotlib.pyplot as plt
    -plt.imshow(ximg / 255)
    -plt.axis('off')
    -
    -
    -

    Let’s load the model with onnxruntime.

    -
    import onnxruntime as rt
    -from onnxruntime.capi.onnxruntime_pybind11_state import InvalidGraph
    -
    -try:
    -    sess = rt.InferenceSession('dense121.onnx')
    -    ok = True
    -except (InvalidGraph, TypeError, RuntimeError) as e:
    -    # Probably a mismatch between onnxruntime and onnx version.
    -    print(e)
    -    ok = False
    -
    -if ok:
    -    print("The model expects input shape:", sess.get_inputs()[0].shape)
    -    print("image shape:", ximg.shape)
    -
    -
    -

    Let’s resize the image.

    -
    if ok:
    -    from skimage.transform import resize
    -    import numpy
    -
    -    ximg224 = resize(ximg / 255, (224, 224, 3), anti_aliasing=True)
    -    ximg = ximg224[numpy.newaxis, :, :, :]
    -    ximg = ximg.astype(numpy.float32)
    -
    -    print("new shape:", ximg.shape)
    -
    -
    -

    Let’s compute the output.

    -
    if ok:
    -    input_name = sess.get_inputs()[0].name
    -    res = sess.run(None, {input_name: ximg})
    -    prob = res[0]
    -    print(prob.ravel()[:10])  # Too big to be displayed.
    -
    -
    -

    Let’s get more comprehensive results.

    -
    if ok:
    -    from keras.applications.densenet import decode_predictions
    -    decoded = decode_predictions(prob)
    -
    -    import pandas
    -    df = pandas.DataFrame(decoded[0], columns=["class_id", "name", "P"])
    -    print(df)
    -
    -
    -

    Total running time of the script: ( 0 minutes 6.417 seconds)

    - -

    Gallery generated by Sphinx-Gallery

    -
    - - -
    - -
    -
    - -
    -
    - - - - - - - \ No newline at end of file diff --git a/docs/api/python/auto_examples/plot_load_and_predict.html b/docs/api/python/auto_examples/plot_load_and_predict.html index 0d04cc3019c55..be16a4d7d60a1 100644 --- a/docs/api/python/auto_examples/plot_load_and_predict.html +++ b/docs/api/python/auto_examples/plot_load_and_predict.html @@ -4,19 +4,22 @@ - - Load and predict with ONNX Runtime and a very simple model — ONNX Runtime 1.7.0 documentation - - + + + Load and predict with ONNX Runtime and a very simple model — ONNX Runtime 1.14.0 documentation + + - - - - - + + + + + + + @@ -42,27 +45,21 @@

    Click here to download the full example code

    -
    -

    Load and predict with ONNX Runtime and a very simple model

    +
    +

    Load and predict with ONNX Runtime and a very simple model

    This example demonstrates how to load a model and compute the output for an input vector. It also shows how to retrieve the definition of its inputs and outputs.

    -
    import onnxruntime as rt
    -import numpy
    +
    import numpy
    +
    +import onnxruntime as rt
     from onnxruntime.datasets import get_example
     

    Let’s load a very simple model. -The model is available on github onnx…test_sigmoid. -Note: The next release (ORT 1.10) will require explicitly setting -the providers parameter if you want to use execution providers other -than the default CPU provider (as opposed to the current behavior of -providers getting set/registered by default based on the build flags) when -instantiating InferenceSession. Following code assumes NVIDIA GPU is available, -you can specify other execution providers or don't include providers parameter -to use default CPU provider.

    +The model is available on github onnx…test_sigmoid.

    example1 = get_example("sigmoid.onnx")
    -sess = rt.InferenceSession(example1, providers=["CUDAExecutionProvider"])
    +sess = rt.InferenceSession(example1, providers=rt.get_available_providers())
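
If a fixed provider priority is preferred over whatever happens to be available, the list can be spelled out explicitly. A minimal sketch, assuming a CUDA-enabled build of onnxruntime is installed:

# Kernels are chosen in the priority order given; CPU acts as the fallback.
sess_gpu = rt.InferenceSession(
    example1, providers=["CUDAExecutionProvider", "CPUExecutionProvider"]
)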
     

    Let’s see the input name and shape.

    @@ -74,7 +71,6 @@ print("input type", input_type)
    -

    Out:

    input name x
     input shape [3, 4, 5]
     input type tensor(float)
    @@ -89,7 +85,6 @@
     print("output type", output_type)
     
    -

    Out:

    output name y
     output shape [3, 4, 5]
     output type tensor(float)
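
The same metadata is exposed for every graph input and output, so a model with several of them can be inspected in one pass; a small sketch:

# Enumerate all inputs and outputs of the session.
for i in sess.get_inputs():
    print("input:", i.name, i.shape, i.type)
for o in sess.get_outputs():
    print("output:", o.name, o.shape, o.type)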
    @@ -97,32 +92,32 @@
     

    Let’s compute its outputs (or predictions if it is a machine learned model).

    import numpy.random
    -x = numpy.random.random((3,4,5))
    +
    +x = numpy.random.random((3, 4, 5))
     x = x.astype(numpy.float32)
     res = sess.run([output_name], {input_name: x})
     print(res)
     
    -

    Out:

    -
    [array([[[0.56617785, 0.551158  , 0.57431483, 0.62868774, 0.5294609 ],
    -        [0.6545371 , 0.64250827, 0.6819708 , 0.5105157 , 0.5584753 ],
    -        [0.66830933, 0.7094791 , 0.70664704, 0.6744693 , 0.7030401 ],
    -        [0.5395019 , 0.7210481 , 0.5845876 , 0.59664494, 0.6563896 ]],
    -
    -       [[0.71235013, 0.6528918 , 0.5907483 , 0.66855776, 0.61100346],
    -        [0.51468205, 0.60125333, 0.5410304 , 0.57149607, 0.56778824],
    -        [0.5155948 , 0.54921585, 0.5138594 , 0.7051111 , 0.62632954],
    -        [0.5651827 , 0.55247986, 0.6941072 , 0.50415695, 0.7062323 ]],
    -
    -       [[0.51758766, 0.67160237, 0.59442437, 0.5007695 , 0.56175166],
    -        [0.72844744, 0.5150477 , 0.5052765 , 0.5447472 , 0.7088654 ],
    -        [0.596162  , 0.5197903 , 0.6099661 , 0.724396  , 0.5885481 ],
    -        [0.6910895 , 0.53817046, 0.596786  , 0.6119356 , 0.5707261 ]]],
    +
    [array([[[0.72793055, 0.7276279 , 0.5402208 , 0.52651   , 0.6187782 ],
    +        [0.6260148 , 0.72244334, 0.5919314 , 0.51697546, 0.6279962 ],
    +        [0.5698271 , 0.5472146 , 0.5360143 , 0.6660787 , 0.5606206 ],
    +        [0.6521884 , 0.6324413 , 0.70974916, 0.6610228 , 0.51202065]],
    +
    +       [[0.50950056, 0.5815906 , 0.646681  , 0.5778367 , 0.71506053],
    +        [0.5555914 , 0.6803483 , 0.60739946, 0.6383947 , 0.53390664],
    +        [0.67096996, 0.60189897, 0.67437965, 0.50587314, 0.58212054],
    +        [0.6435938 , 0.5710239 , 0.5625591 , 0.6876829 , 0.52768266]],
    +
    +       [[0.5844447 , 0.55838233, 0.7304268 , 0.56796575, 0.52834886],
    +        [0.5544108 , 0.6697368 , 0.50538224, 0.6169538 , 0.5239782 ],
    +        [0.7309318 , 0.70566034, 0.6265241 , 0.5540862 , 0.6926802 ],
    +        [0.58989674, 0.71908456, 0.72978514, 0.63463324, 0.58517313]]],
           dtype=float32)]
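
Since this model computes an element-wise sigmoid, the result can be sanity-checked against a NumPy reimplementation; a quick sketch:

# 1 / (1 + exp(-x)) is the sigmoid the model is expected to compute.
expected = 1.0 / (1.0 + numpy.exp(-x))
print(numpy.allclose(res[0], expected, atol=1e-6))  # True when the outputs agree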
     
    -

    Total running time of the script: ( 0 minutes 0.012 seconds)

    -

    Gallery generated by Sphinx-Gallery

    -
    +
    @@ -155,7 +150,7 @@

    ONNX Runtime

    Navigation

    @@ -174,12 +169,12 @@

    Related Topics

    Quick search

    - + @@ -196,7 +191,7 @@

    Quick search

    ©2018-2021, Microsoft. | - Powered by Sphinx 3.5.1 + Powered by Sphinx 5.3.0 & Alabaster 0.7.12 | diff --git a/docs/api/python/auto_examples/plot_metadata.html b/docs/api/python/auto_examples/plot_metadata.html index 7c6d669165569..82218926e9394 100644 --- a/docs/api/python/auto_examples/plot_metadata.html +++ b/docs/api/python/auto_examples/plot_metadata.html @@ -4,19 +4,22 @@ - - Metadata — ONNX Runtime 1.7.0 documentation - - + + + Metadata — ONNX Runtime 1.14.0 documentation + + - - - - - + + + + + + + @@ -42,8 +45,8 @@

    Click here to download the full example code

    -
    -

    Metadata

    +
    +

    Metadata

    ONNX format contains metadata related to how the model was produced. It is useful when the model is deployed to production to keep track of which instance was used at a specific time. @@ -52,9 +55,11 @@ Let’s see how to do that with a simple logistic regression model trained with scikit-learn and converted with sklearn-onnx.

    from onnxruntime.datasets import get_example
    +
     example = get_example("logreg_iris.onnx")
     
     import onnx
    +
     model = onnx.load(example)
     
     print("doc_string={}".format(model.doc_string))
    @@ -66,7 +71,6 @@
     print("producer_version={}".format(model.producer_version))
     
    -

    Out:

    doc_string=
     domain=onnxml
     ir_version=3
    @@ -77,8 +81,9 @@
     

    With ONNX Runtime:

    -
    from onnxruntime import InferenceSession
    -sess = InferenceSession(example)
    +
    import onnxruntime as rt
    +
    +sess = rt.InferenceSession(example, providers=rt.get_available_providers())
     meta = sess.get_modelmeta()
     
     print("custom_metadata_map={}".format(meta.custom_metadata_map))
    @@ -89,7 +94,6 @@
     print("version={}".format(meta.version))
     
    -

    Out:

    custom_metadata_map={}
     description=
     domain=onnxml
    @@ -98,8 +102,8 @@
     version=0
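
One practical use of this metadata is to verify at load time that the deployed file is the one expected. A minimal sketch; the expected values below are hypothetical:

expected_domain, expected_version = "onnxml", 0  # hypothetical expectations
meta = sess.get_modelmeta()
if meta.domain != expected_domain or meta.version != expected_version:
    raise ValueError("unexpected model: domain=%r version=%r" % (meta.domain, meta.version))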
     
    -

    Total running time of the script: ( 0 minutes 0.006 seconds)

    -

    Gallery generated by Sphinx-Gallery

    -
    +
    @@ -132,7 +136,7 @@

    ONNX Runtime

    Navigation

    @@ -151,12 +155,12 @@

    Related Topics

    Quick search

    - + @@ -173,7 +177,7 @@

    Quick search

    ©2018-2021, Microsoft. | - Powered by Sphinx 3.5.1 + Powered by Sphinx 5.3.0 & Alabaster 0.7.12 | diff --git a/docs/api/python/auto_examples/plot_pipeline.html b/docs/api/python/auto_examples/plot_pipeline.html index 801f6fe9ef37d..03a57f7bd244f 100644 --- a/docs/api/python/auto_examples/plot_pipeline.html +++ b/docs/api/python/auto_examples/plot_pipeline.html @@ -4,19 +4,22 @@ - - Draw a pipeline — ONNX Runtime 1.7.0 documentation - - + + + Draw a pipeline — ONNX Runtime 1.14.0 documentation + + - - - - - + + + + + + + @@ -42,8 +45,8 @@

    Click here to download the full example code

    -
    -

    Draw a pipeline

    +
    +

    Draw a pipeline

    There is no way to look into a model stored in ONNX format other than inspecting its nodes with onnx. This example demonstrates how to draw a model and how to retrieve it in json format. @@ -55,19 +58,20 @@

  • Draw a model with ONNX

  • -
    -

    Retrieve a model in JSON format

    +
    +

    Retrieve a model in JSON format

    That’s the simplest way.

    from onnxruntime.datasets import get_example
    +
     example1 = get_example("mul_1.onnx")
     
     import onnx
    +
     model = onnx.load(example1)  # model is a ModelProto protobuf message
     
     print(model)
     
    -

    Out:

    ir_version: 3
     producer_name: "chenta"
     graph {
    @@ -130,49 +134,51 @@ 

    Retrieve a model in JSON format

    -
    -
    -

    Draw a model with ONNX

    -

    We use net_drawer.py + +

    +

    Draw a model with ONNX

    +

    We use net_drawer.py included in the onnx package. We use onnx to load the model in a different way than before.

    from onnx import ModelProto
    +
     model = ModelProto()
    -with open(example1, 'rb') as fid:
    +with open(example1, "rb") as fid:
         content = fid.read()
         model.ParseFromString(content)
     

    We convert it into a graph.

    -
    from onnx.tools.net_drawer import GetPydotGraph, GetOpNodeProducer
    -pydot_graph = GetPydotGraph(model.graph, name=model.graph.name, rankdir="LR",
    -                            node_producer=GetOpNodeProducer("docstring"))
    +
    from onnx.tools.net_drawer import GetOpNodeProducer, GetPydotGraph
    +
    +pydot_graph = GetPydotGraph(
    +    model.graph, name=model.graph.name, rankdir="LR", node_producer=GetOpNodeProducer("docstring")
    +)
     pydot_graph.write_dot("graph.dot")
     

    Then into an image

    import os
    -os.system('dot -O -Tpng graph.dot')
    +
    +os.system("dot -O -Tpng graph.dot")
     
    -

    Out:

    0
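
The call returns 0 only when the Graphviz dot executable is installed and on the PATH; a variant that fails with a clear message when it is missing (a sketch):

import shutil
import subprocess

if shutil.which("dot") is None:  # Graphviz is a separate system package
    raise RuntimeError("Graphviz 'dot' executable not found on PATH")
subprocess.run(["dot", "-O", "-Tpng", "graph.dot"], check=True)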
     

    Which we display…

    import matplotlib.pyplot as plt
    +
     image = plt.imread("graph.dot.png")
     plt.imshow(image)
     
    -plot pipeline -

    Out:

    -
    <matplotlib.image.AxesImage object at 0x7fd0a620e860>
    +plot pipeline
    <matplotlib.image.AxesImage object at 0x7efc140b5f10>
     
    -

    Total running time of the script: ( 0 minutes 0.244 seconds)

    - +
    +
    @@ -206,7 +212,7 @@

    ONNX Runtime

    Navigation

    @@ -225,12 +231,12 @@

    Related Topics

    Quick search

    - + @@ -247,7 +253,7 @@

    Quick search

    ©2018-2021, Microsoft. | - Powered by Sphinx 3.5.1 + Powered by Sphinx 5.3.0 & Alabaster 0.7.12 | diff --git a/docs/api/python/auto_examples/plot_profiling.html b/docs/api/python/auto_examples/plot_profiling.html index 058efa16252ea..b7e8c016e5315 100644 --- a/docs/api/python/auto_examples/plot_profiling.html +++ b/docs/api/python/auto_examples/plot_profiling.html @@ -4,19 +4,22 @@ - - Profile the execution of a simple model — ONNX Runtime 1.7.0 documentation - - + + + Profile the execution of a simple model — ONNX Runtime 1.14.0 documentation + + - - - - - + + + + + + + @@ -42,13 +45,14 @@

    Click here to download the full example code

    -
    -

    Profile the execution of a simple model

    +
    +

    Profile the execution of a simple model

    ONNX Runtime can profile the execution of the model. This example shows how to interpret the results.

    -
    import onnx
    +
    import numpy
    +import onnx
    +
     import onnxruntime as rt
    -import numpy
     from onnxruntime.datasets import get_example
     
     
    @@ -66,7 +70,7 @@
     
    example1 = get_example("mul_1.onnx")
     onnx_model = change_ir_version(example1)
     onnx_model_str = onnx_model.SerializeToString()
    -sess = rt.InferenceSession(onnx_model_str)
    +sess = rt.InferenceSession(onnx_model_str, providers=rt.get_available_providers())
     input_name = sess.get_inputs()[0].name
     
     x = numpy.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]], dtype=numpy.float32)
    @@ -74,7 +78,6 @@
     print(res)
     
    -

    Out:

    [array([[ 1.,  4.],
            [ 9., 16.],
            [25., 36.]], dtype=float32)]
    @@ -84,7 +87,7 @@
 We need to enable profiling before running the predictions.

    options = rt.SessionOptions()
     options.enable_profiling = True
    -sess_profile = rt.InferenceSession(onnx_model_str, options)
    +sess_profile = rt.InferenceSession(onnx_model_str, options, providers=rt.get_available_providers())
     input_name = sess.get_inputs()[0].name
     
     x = numpy.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]], dtype=numpy.float32)
    @@ -94,40 +97,40 @@
     print(prof_file)
     
    -

    Out:

    -
    onnxruntime_profile__2021-03-04_18-40-00.json
    +
    onnxruntime_profile__2022-11-01_10-29-57.json
     

    The results are stored in a file in JSON format. Let’s see what it contains.

    import json
    +
     with open(prof_file, "r") as f:
         sess_time = json.load(f)
     import pprint
    +
     pprint.pprint(sess_time)
     
    -

    Out:

    [{'args': {},
       'cat': 'Session',
    -  'dur': 101,
    +  'dur': 49,
       'name': 'model_loading_array',
       'ph': 'X',
    -  'pid': 77,
    -  'tid': 77,
    -  'ts': 3},
    +  'pid': 3109,
    +  'tid': 3109,
    +  'ts': 1},
      {'args': {},
       'cat': 'Session',
    -  'dur': 227,
    +  'dur': 236,
       'name': 'session_initialization',
       'ph': 'X',
    -  'pid': 77,
    -  'tid': 77,
    -  'ts': 134}]
    +  'pid': 3109,
    +  'tid': 3109,
    +  'ts': 59}]
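
Each entry is a trace event with a category (cat) and a duration in microseconds (dur), so the list can be aggregated directly. A short sketch, assuming pandas is installed:

import pandas

df = pandas.DataFrame(sess_time)
print(df.groupby("cat")["dur"].sum())  # total time spent per category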
     
    -

    Total running time of the script: ( 0 minutes 0.010 seconds)

    -

    Gallery generated by Sphinx-Gallery

    -
    +
    @@ -160,7 +163,7 @@

    ONNX Runtime

    Navigation

    @@ -179,12 +182,12 @@

    Related Topics

    Quick search

    - + @@ -201,7 +204,7 @@

    Quick search

    ©2018-2021, Microsoft. | - Powered by Sphinx 3.5.1 + Powered by Sphinx 5.3.0 & Alabaster 0.7.12 | diff --git a/docs/api/python/auto_examples/plot_train_convert_predict.html b/docs/api/python/auto_examples/plot_train_convert_predict.html index f5678cff23651..57efc6ae31c51 100644 --- a/docs/api/python/auto_examples/plot_train_convert_predict.html +++ b/docs/api/python/auto_examples/plot_train_convert_predict.html @@ -4,19 +4,22 @@ - - Train, convert and predict with ONNX Runtime — ONNX Runtime 1.7.0 documentation - - + + + Train, convert and predict with ONNX Runtime — ONNX Runtime 1.14.0 documentation + + - - - - - + + + + + + + @@ -41,8 +44,8 @@

    Click here to download the full example code

    -
    -

    Train, convert and predict with ONNX Runtime

    +
    +

    Train, convert and predict with ONNX Runtime

    This example demonstrates an end-to-end scenario, starting with the training of a machine learned model and ending with its use in its converted form.

    @@ -54,37 +57,31 @@
  • Benchmark with RandomForest

  • -
    -

    Train a logistic regression

    +
    +

    Train a logistic regression

    The first step consists of retrieving the iris dataset.

    from sklearn.datasets import load_iris
    +
     iris = load_iris()
     X, y = iris.data, iris.target
     
     from sklearn.model_selection import train_test_split
    +
     X_train, X_test, y_train, y_test = train_test_split(X, y)
     

    Then we fit a model.

    from sklearn.linear_model import LogisticRegression
    +
     clr = LogisticRegression()
     clr.fit(X_train, y_train)
     
    -

    Out:

    -
    /opt/miniconda/lib/python3.7/site-packages/sklearn/linear_model/_logistic.py:765: ConvergenceWarning: lbfgs failed to converge (status=1):
    -STOP: TOTAL NO. of ITERATIONS REACHED LIMIT.
    -
    -Increase the number of iterations (max_iter) or scale the data as shown in:
    -    https://scikit-learn.org/stable/modules/preprocessing.html
    -Please also refer to the documentation for alternative solver options:
    -    https://scikit-learn.org/stable/modules/linear_model.html#logistic-regression
    -  extra_warning_msg=_LOGISTIC_SOLVER_CONVERGENCE_MSG)
    -
    -LogisticRegression()
    -
    +
    +
    LogisticRegression()
    -

    We compute the prediction on the test set
    +

    We compute the prediction on the test set and we show the confusion matrix.

    from sklearn.metrics import confusion_matrix
     
    @@ -92,22 +89,21 @@ 

    print(confusion_matrix(y_test, pred))

    -

    Out:

    -
    +
    +

    Conversion to ONNX format

    We use module sklearn-onnx to convert the model into ONNX format.
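
For reference, the conversion code as it appears in the notebook version of this example:

from skl2onnx import convert_sklearn
from skl2onnx.common.data_types import FloatTensorType

initial_type = [("float_input", FloatTensorType([None, 4]))]
onx = convert_sklearn(clr, initial_types=initial_type)
with open("logreg_iris.onnx", "wb") as f:
    f.write(onx.SerializeToString())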

    -

    Out:

    -
    +
    +

    Probabilities

    Probabilities are needed to compute other relevant metrics such as the ROC Curve. Let’s see how to get them first with scikit-learn. @@ -156,10 +150,9 @@

    print(prob_sklearn[:3])

    -

    Out:

    -
    [[4.15987711e-03 8.54898335e-01 1.40941788e-01]
    - [9.44228260e-01 5.57707615e-02 9.78002823e-07]
    - [5.17324282e-02 8.88396143e-01 5.98714288e-02]]
    +
    [[2.11305775e-04 1.94108929e-01 8.05679765e-01]
    + [2.94214236e-03 7.96030729e-01 2.01027129e-01]
    + [6.94823786e-02 9.18372171e-01 1.21454503e-02]]
     

    And then with ONNX Runtime. @@ -168,18 +161,19 @@

    prob_rt = sess.run([prob_name], {input_name: X_test.astype(numpy.float32)})[0]

     import pprint
    +
     pprint.pprint(prob_rt[0:3])

    -

    Out:

    -
    [{0: 0.0041598789393901825, 1: 0.8548984527587891, 2: 0.14094172418117523},
    - {0: 0.9442282915115356, 1: 0.055770788341760635, 2: 9.78002844931325e-07},
    - {0: 0.05173242464661598, 1: 0.888396143913269, 2: 0.05987146869301796}]
    +
    [{0: 0.0002113060181727633, 1: 0.19410927593708038, 2: 0.805679440498352},
    + {0: 0.0029421390499919653, 1: 0.7960308194160461, 2: 0.2010270357131958},
    + {0: 0.06948233395814896, 1: 0.9183722734451294, 2: 0.012145438231527805}]
     

    Let’s benchmark.
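
The speed helper used by every benchmark below, reproduced from the notebook version of this example:

from timeit import Timer


def speed(inst, number=10, repeat=20):
    timer = Timer(inst, globals=globals())
    raw = numpy.array(timer.repeat(repeat, number=number))
    ave = raw.sum() / len(raw) / number
    mi, ma = raw.min() / number, raw.max() / number
    print("Average %1.3g min=%1.3g max=%1.3g" % (ave, mi, ma))
    return ave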

    -

    Out:

    Execution time for clr.predict
    -Average 8.56e-05 min=7.72e-05 max=0.000101
    +Average 4.08e-05 min=3.96e-05 max=4.77e-05
     Execution time for ONNX Runtime
    -Average 5.02e-05 min=4.61e-05 max=6.17e-05
    +Average 1.85e-05 min=1.77e-05 max=2.14e-05
     
    -5.0215695519000296e-05
    +1.8539305000189187e-05
     

    Let’s benchmark a scenario similar to what a webservice experiences: the model has to do one prediction at a time, as opposed to a batch of predictions. @@ -213,71 +207,77 @@

            n = nrow
        for i in range(0, n):
            im = i % nrow
    -       fct(X_test[im: im+1])
    +       fct(X_test[im : im + 1])
    +
    print("Execution time for clr.predict")
    speed("loop(X_test, clr.predict, 100)")
    +
    +
    def sess_predict(x):
        return sess.run([label_name], {input_name: x.astype(numpy.float32)})[0]
    +
    +
    print("Execution time for sess_predict")
    speed("loop(X_test, sess_predict, 100)")

    -

    Out:

    Execution time for clr.predict
    -Average 0.00723 min=0.00694 max=0.00837
    +Average 0.00381 min=0.00373 max=0.00389
     Execution time for sess_predict
    -Average 0.00156 min=0.00152 max=0.00166
    +Average 0.000817 min=0.000808 max=0.000836
     
    -0.0015644603711552918
    +0.0008169881699997461
     

    Let’s do the same for the probabilities.

    print("Execution time for predict_proba")
     speed("loop(X_test, clr.predict_proba, 100)")
     
    +
     def sess_predict_proba(x):
         return sess.run([prob_name], {input_name: x.astype(numpy.float32)})[0]
     
    +
     print("Execution time for sess_predict_proba")
     speed("loop(X_test, sess_predict_proba, 100)")
     
    -

    Out:

    Execution time for predict_proba
    -Average 0.0108 min=0.0104 max=0.0115
    +Average 0.00572 min=0.00558 max=0.00581
     Execution time for sess_predict_proba
    -Average 0.0017 min=0.00163 max=0.00188
    +Average 0.000836 min=0.000824 max=0.000864
     
    -0.0016972313076257703
    +0.0008363299950003977
     

    This second comparison is fairer, as ONNX Runtime, in this experiment, computes both the label and the probabilities in every case.

    - -
    -

    Benchmark with RandomForest

    + +
    +

    Benchmark with RandomForest

    We first train and save a model in ONNX format.

    from sklearn.ensemble import RandomForestClassifier
    +
     rf = RandomForestClassifier()
     rf.fit(X_train, y_train)
     
    -initial_type = [('float_input', FloatTensorType([1, 4]))]
    +initial_type = [("float_input", FloatTensorType([1, 4]))]
     onx = convert_sklearn(rf, initial_types=initial_type)
     with open("rf_iris.onnx", "wb") as f:
         f.write(onx.SerializeToString())
     

    We compare.

    -
    sess = rt.InferenceSession("rf_iris.onnx")
    +
    sess = rt.InferenceSession("rf_iris.onnx", providers=rt.get_available_providers())
    +
     
     def sess_predict_proba_rf(x):
         return sess.run([prob_name], {input_name: x.astype(numpy.float32)})[0]
     
    +
     print("Execution time for predict_proba")
     speed("loop(X_test, rf.predict_proba, 100)")
     
    @@ -285,13 +285,12 @@ 

    speed("loop(X_test, sess_predict_proba_rf, 100)")

    -

    Out:

    Execution time for predict_proba
    -Average 1.25 min=1.23 max=1.26
    +Average 0.65 min=0.641 max=0.67
     Execution time for sess_predict_proba
    -Average 0.00274 min=0.00245 max=0.00442
    +Average 0.00103 min=0.00101 max=0.00113
     
    -0.0027408220106735826
    +0.001025739425000154
     

    Let’s see with different numbers of trees.

    @@ -301,66 +300,66 @@

    print(n_trees)
        rf = RandomForestClassifier(n_estimators=n_trees)
        rf.fit(X_train, y_train)
    -   initial_type = [('float_input', FloatTensorType([1, 4]))]
    +   initial_type = [("float_input", FloatTensorType([1, 4]))]
        onx = convert_sklearn(rf, initial_types=initial_type)
        with open("rf_iris_%d.onnx" % n_trees, "wb") as f:
            f.write(onx.SerializeToString())
    -   sess = rt.InferenceSession("rf_iris_%d.onnx" % n_trees)
    +   sess = rt.InferenceSession("rf_iris_%d.onnx" % n_trees, providers=rt.get_available_providers())
    +
        def sess_predict_proba_loop(x):
            return sess.run([prob_name], {input_name: x.astype(numpy.float32)})[0]
    +
        tsk = speed("loop(X_test, rf.predict_proba, 100)", number=5, repeat=5)
        trt = speed("loop(X_test, sess_predict_proba_loop, 100)", number=5, repeat=5)
    -   measures.append({'n_trees': n_trees, 'sklearn': tsk, 'rt': trt})
    +   measures.append({"n_trees": n_trees, "sklearn": tsk, "rt": trt})

    from pandas import DataFrame
    +
    df = DataFrame(measures)
    ax = df.plot(x="n_trees", y="sklearn", label="scikit-learn", c="blue", logy=True)
    -df.plot(x="n_trees", y="rt", label="onnxruntime",
    -        ax=ax, c="green", logy=True)
    +df.plot(x="n_trees", y="rt", label="onnxruntime", ax=ax, c="green", logy=True)
    ax.set_xlabel("Number of trees")
    ax.set_ylabel("Prediction time (s)")
    ax.set_title("Speed comparison between scikit-learn and ONNX Runtime\nFor a random forest on Iris dataset")
    ax.legend()

    -Speed comparison between scikit-learn and ONNX Runtime For a random forest on Iris dataset -

    Out:

    -
    5
    -Average 0.11 min=0.107 max=0.118
    -Average 0.00169 min=0.00161 max=0.00177
    +Speed comparison between scikit-learn and ONNX Runtime For a random forest on Iris dataset
    5
    +Average 0.0465 min=0.0462 max=0.0469
    +Average 0.000802 min=0.000789 max=0.000834
     10
    -Average 0.167 min=0.165 max=0.169
    -Average 0.0017 min=0.00165 max=0.00179
    +Average 0.0788 min=0.0783 max=0.0799
    +Average 0.000815 min=0.000807 max=0.000837
     15
    -Average 0.228 min=0.227 max=0.231
    -Average 0.0017 min=0.00167 max=0.00172
    +Average 0.109 min=0.108 max=0.11
    +Average 0.00082 min=0.000809 max=0.000844
     20
    -Average 0.291 min=0.286 max=0.296
    -Average 0.00175 min=0.00173 max=0.00176
    +Average 0.142 min=0.141 max=0.143
    +Average 0.000847 min=0.00084 max=0.000869
     25
    -Average 0.346 min=0.342 max=0.35
    -Average 0.00174 min=0.00172 max=0.00181
    +Average 0.173 min=0.171 max=0.174
    +Average 0.000846 min=0.000833 max=0.000859
     30
    -Average 0.407 min=0.404 max=0.41
    -Average 0.0018 min=0.00174 max=0.0019
    +Average 0.204 min=0.202 max=0.205
    +Average 0.000877 min=0.000864 max=0.000921
     35
    -Average 0.463 min=0.459 max=0.467
    -Average 0.0018 min=0.00176 max=0.00187
    +Average 0.235 min=0.235 max=0.236
    +Average 0.000879 min=0.000871 max=0.000896
     40
    -Average 0.531 min=0.521 max=0.556
    -Average 0.0018 min=0.00179 max=0.00183
    +Average 0.268 min=0.267 max=0.269
    +Average 0.000898 min=0.000889 max=0.000923
     45
    -Average 0.582 min=0.577 max=0.597
    -Average 0.00187 min=0.00185 max=0.00189
    +Average 0.299 min=0.299 max=0.3
    +Average 0.000913 min=0.000904 max=0.000935
     50
    -Average 0.642 min=0.64 max=0.645
    -Average 0.00188 min=0.00185 max=0.00196
    +Average 0.33 min=0.33 max=0.331
    +Average 0.000916 min=0.000904 max=0.000945
     
    -<matplotlib.legend.Legend object at 0x7fca300dcda0>
    +<matplotlib.legend.Legend object at 0x7efbfea26bb0>
     
    -

    Total running time of the script: ( 5 minutes 52.417 seconds)

    -
    +
    +
    @@ -394,7 +393,7 @@

    ONNX Runtime

    Navigation

    @@ -412,12 +411,12 @@

    Related Topics

    Quick search

    - + @@ -434,7 +433,7 @@

    Quick search

    ©2018-2021, Microsoft. | - Powered by Sphinx 3.5.1 + Powered by Sphinx 5.3.0 & Alabaster 0.7.12 | diff --git a/docs/api/python/auto_examples/sg_execution_times.html b/docs/api/python/auto_examples/sg_execution_times.html index 28af50a188cec..73ae2b46801a8 100644 --- a/docs/api/python/auto_examples/sg_execution_times.html +++ b/docs/api/python/auto_examples/sg_execution_times.html @@ -4,19 +4,22 @@ - - Computation times — ONNX Runtime 1.7.0 documentation - - + + + Computation times — ONNX Runtime 1.14.0 documentation + + - - - - - + + + + + + + @@ -35,9 +38,9 @@
    -
    -

    Computation times

    -

    05:52.417 total execution time for auto_examples files:

    +
    +

    Computation times

    +

    03:01.751 total execution time for auto_examples files:

    @@ -46,40 +49,40 @@ - + - - + + - - + + - - + + - - + + - - + + - - + + - - + +

    Train, convert and predict with ONNX Runtime (plot_train_convert_predict.py)

    05:52.417

    03:00.633

    0.0 MB

    ONNX Runtime Backend for ONNX (plot_backend.py)

    00:00.000

    Train, convert and predict with ONNX Runtime (plot_convert_pipeline_vectorizer.py)

    00:00.883

    0.0 MB

    Common errors with onnxruntime (plot_common_errors.py)

    00:00.000

    Draw a pipeline (plot_pipeline.py)

    00:00.187

    0.0 MB

    Train, convert and predict with ONNX Runtime (plot_convert_pipeline_vectorizer.py)

    00:00.000

    Common errors with onnxruntime (plot_common_errors.py)

    00:00.017

    0.0 MB

    Load and predict with ONNX Runtime and a very simple model (plot_load_and_predict.py)

    00:00.000

    ONNX Runtime Backend for ONNX (plot_backend.py)

    00:00.013

    0.0 MB

    Metadata (plot_metadata.py)

    00:00.000

    Load and predict with ONNX Runtime and a very simple model (plot_load_and_predict.py)

    00:00.011

    0.0 MB

    Draw a pipeline (plot_pipeline.py)

    00:00.000

    Profile the execution of a simple model (plot_profiling.py)

    00:00.004

    0.0 MB

    Profile the execution of a simple model (plot_profiling.py)

    00:00.000

    Metadata (plot_metadata.py)

    00:00.003

    0.0 MB

    -
    +
    @@ -103,7 +106,7 @@

    ONNX Runtime

    Navigation

    @@ -118,12 +121,12 @@

    Related Topics

    Quick search

    - + @@ -140,7 +143,7 @@

    Quick search

    ©2018-2021, Microsoft. | - Powered by Sphinx 3.5.1 + Powered by Sphinx 5.3.0 & Alabaster 0.7.12 | diff --git a/docs/api/python/downloads/07fcc19ba03226cd3d83d4e40ec44385/auto_examples_python.zip b/docs/api/python/downloads/07fcc19ba03226cd3d83d4e40ec44385/auto_examples_python.zip index c1b5af3fb627f..17b5d2d19e3f2 100644 Binary files a/docs/api/python/downloads/07fcc19ba03226cd3d83d4e40ec44385/auto_examples_python.zip and b/docs/api/python/downloads/07fcc19ba03226cd3d83d4e40ec44385/auto_examples_python.zip differ diff --git a/docs/api/python/downloads/1680115d3d937dfbb2d86adb705d9c5d/plot_train_convert_predict.ipynb b/docs/api/python/downloads/1680115d3d937dfbb2d86adb705d9c5d/plot_train_convert_predict.ipynb index 54589b00af045..470c162b0ae63 100644 --- a/docs/api/python/downloads/1680115d3d937dfbb2d86adb705d9c5d/plot_train_convert_predict.ipynb +++ b/docs/api/python/downloads/1680115d3d937dfbb2d86adb705d9c5d/plot_train_convert_predict.ipynb @@ -26,7 +26,7 @@ }, "outputs": [], "source": [ - "from sklearn.datasets import load_iris\niris = load_iris()\nX, y = iris.data, iris.target\n\nfrom sklearn.model_selection import train_test_split\nX_train, X_test, y_train, y_test = train_test_split(X, y)" + "from sklearn.datasets import load_iris\n\niris = load_iris()\nX, y = iris.data, iris.target\n\nfrom sklearn.model_selection import train_test_split\n\nX_train, X_test, y_train, y_test = train_test_split(X, y)" ] }, { @@ -44,7 +44,7 @@ }, "outputs": [], "source": [ - "from sklearn.linear_model import LogisticRegression\nclr = LogisticRegression()\nclr.fit(X_train, y_train)" + "from sklearn.linear_model import LogisticRegression\n\nclr = LogisticRegression()\nclr.fit(X_train, y_train)" ] }, { @@ -69,7 +69,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "## Conversion to ONNX format\n\nWe use module \n`sklearn-onnx `_\nto convert the model into ONNX format.\n\n" + "## Conversion to ONNX format\n\nWe use module\n[sklearn-onnx](https://github.com/onnx/sklearn-onnx)\nto convert the model into ONNX format.\n\n" ] }, { @@ -80,7 +80,7 @@ }, "outputs": [], "source": [ - "from skl2onnx import convert_sklearn\nfrom skl2onnx.common.data_types import FloatTensorType\n\ninitial_type = [('float_input', FloatTensorType([None, 4]))]\nonx = convert_sklearn(clr, initial_types=initial_type)\nwith open(\"logreg_iris.onnx\", \"wb\") as f:\n f.write(onx.SerializeToString())" + "from skl2onnx import convert_sklearn\nfrom skl2onnx.common.data_types import FloatTensorType\n\ninitial_type = [(\"float_input\", FloatTensorType([None, 4]))]\nonx = convert_sklearn(clr, initial_types=initial_type)\nwith open(\"logreg_iris.onnx\", \"wb\") as f:\n f.write(onx.SerializeToString())" ] }, { @@ -98,7 +98,7 @@ }, "outputs": [], "source": [ - "import onnxruntime as rt\nsess = rt.InferenceSession(\"logreg_iris.onnx\")\n\nprint(\"input name='{}' and shape={}\".format(\n sess.get_inputs()[0].name, sess.get_inputs()[0].shape))\nprint(\"output name='{}' and shape={}\".format(\n sess.get_outputs()[0].name, sess.get_outputs()[0].shape))" + "import onnxruntime as rt\n\nsess = rt.InferenceSession(\"logreg_iris.onnx\", providers=rt.get_available_providers())\n\nprint(\"input name='{}' and shape={}\".format(sess.get_inputs()[0].name, sess.get_inputs()[0].shape))\nprint(\"output name='{}' and shape={}\".format(sess.get_outputs()[0].name, sess.get_outputs()[0].shape))" ] }, { @@ -116,7 +116,7 @@ }, "outputs": [], "source": [ - "input_name = sess.get_inputs()[0].name\nlabel_name = sess.get_outputs()[0].name\n\nimport numpy\npred_onx = 
sess.run([label_name], {input_name: X_test.astype(numpy.float32)})[0]\nprint(confusion_matrix(pred, pred_onx))" + "input_name = sess.get_inputs()[0].name\nlabel_name = sess.get_outputs()[0].name\n\nimport numpy\n\npred_onx = sess.run([label_name], {input_name: X_test.astype(numpy.float32)})[0]\nprint(confusion_matrix(pred, pred_onx))" ] }, { @@ -141,7 +141,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "And then with ONNX Runtime.\nThe probabilies appear to be \n\n" + "And then with ONNX Runtime.\nThe probabilies appear to be\n\n" ] }, { @@ -152,7 +152,7 @@ }, "outputs": [], "source": [ - "prob_name = sess.get_outputs()[1].name\nprob_rt = sess.run([prob_name], {input_name: X_test.astype(numpy.float32)})[0]\n\nimport pprint\npprint.pprint(prob_rt[0:3])" + "prob_name = sess.get_outputs()[1].name\nprob_rt = sess.run([prob_name], {input_name: X_test.astype(numpy.float32)})[0]\n\nimport pprint\n\npprint.pprint(prob_rt[0:3])" ] }, { @@ -170,7 +170,7 @@ }, "outputs": [], "source": [ - "from timeit import Timer\n\ndef speed(inst, number=10, repeat=20):\n timer = Timer(inst, globals=globals())\n raw = numpy.array(timer.repeat(repeat, number=number))\n ave = raw.sum() / len(raw) / number\n mi, ma = raw.min() / number, raw.max() / number\n print(\"Average %1.3g min=%1.3g max=%1.3g\" % (ave, mi, ma))\n return ave\n\nprint(\"Execution time for clr.predict\")\nspeed(\"clr.predict(X_test)\")\n\nprint(\"Execution time for ONNX Runtime\")\nspeed(\"sess.run([label_name], {input_name: X_test.astype(numpy.float32)})[0]\")" + "from timeit import Timer\n\n\ndef speed(inst, number=10, repeat=20):\n timer = Timer(inst, globals=globals())\n raw = numpy.array(timer.repeat(repeat, number=number))\n ave = raw.sum() / len(raw) / number\n mi, ma = raw.min() / number, raw.max() / number\n print(\"Average %1.3g min=%1.3g max=%1.3g\" % (ave, mi, ma))\n return ave\n\n\nprint(\"Execution time for clr.predict\")\nspeed(\"clr.predict(X_test)\")\n\nprint(\"Execution time for ONNX Runtime\")\nspeed(\"sess.run([label_name], {input_name: X_test.astype(numpy.float32)})[0]\")" ] }, { @@ -188,7 +188,7 @@ }, "outputs": [], "source": [ - "def loop(X_test, fct, n=None):\n nrow = X_test.shape[0]\n if n is None:\n n = nrow\n for i in range(0, n):\n im = i % nrow\n fct(X_test[im: im+1])\n\nprint(\"Execution time for clr.predict\")\nspeed(\"loop(X_test, clr.predict, 100)\")\n\ndef sess_predict(x):\n return sess.run([label_name], {input_name: x.astype(numpy.float32)})[0]\n\nprint(\"Execution time for sess_predict\")\nspeed(\"loop(X_test, sess_predict, 100)\")" + "def loop(X_test, fct, n=None):\n nrow = X_test.shape[0]\n if n is None:\n n = nrow\n for i in range(0, n):\n im = i % nrow\n fct(X_test[im : im + 1])\n\n\nprint(\"Execution time for clr.predict\")\nspeed(\"loop(X_test, clr.predict, 100)\")\n\n\ndef sess_predict(x):\n return sess.run([label_name], {input_name: x.astype(numpy.float32)})[0]\n\n\nprint(\"Execution time for sess_predict\")\nspeed(\"loop(X_test, sess_predict, 100)\")" ] }, { @@ -206,14 +206,14 @@ }, "outputs": [], "source": [ - "print(\"Execution time for predict_proba\")\nspeed(\"loop(X_test, clr.predict_proba, 100)\")\n\ndef sess_predict_proba(x):\n return sess.run([prob_name], {input_name: x.astype(numpy.float32)})[0]\n\nprint(\"Execution time for sess_predict_proba\")\nspeed(\"loop(X_test, sess_predict_proba, 100)\")" + "print(\"Execution time for predict_proba\")\nspeed(\"loop(X_test, clr.predict_proba, 100)\")\n\n\ndef sess_predict_proba(x):\n return sess.run([prob_name], {input_name: 
x.astype(numpy.float32)})[0]\n\n\nprint(\"Execution time for sess_predict_proba\")\nspeed(\"loop(X_test, sess_predict_proba, 100)\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ - "This second comparison is better as \nONNX Runtime, in this experience,\ncomputes the label and the probabilities\nin every case.\n\n" + "This second comparison is better as\nONNX Runtime, in this experience,\ncomputes the label and the probabilities\nin every case.\n\n" ] }, { @@ -231,7 +231,7 @@ }, "outputs": [], "source": [ - "from sklearn.ensemble import RandomForestClassifier\nrf = RandomForestClassifier()\nrf.fit(X_train, y_train)\n\ninitial_type = [('float_input', FloatTensorType([1, 4]))]\nonx = convert_sklearn(rf, initial_types=initial_type)\nwith open(\"rf_iris.onnx\", \"wb\") as f:\n f.write(onx.SerializeToString())" + "from sklearn.ensemble import RandomForestClassifier\n\nrf = RandomForestClassifier()\nrf.fit(X_train, y_train)\n\ninitial_type = [(\"float_input\", FloatTensorType([1, 4]))]\nonx = convert_sklearn(rf, initial_types=initial_type)\nwith open(\"rf_iris.onnx\", \"wb\") as f:\n f.write(onx.SerializeToString())" ] }, { @@ -249,7 +249,7 @@ }, "outputs": [], "source": [ - "sess = rt.InferenceSession(\"rf_iris.onnx\")\n\ndef sess_predict_proba_rf(x):\n return sess.run([prob_name], {input_name: x.astype(numpy.float32)})[0]\n\nprint(\"Execution time for predict_proba\")\nspeed(\"loop(X_test, rf.predict_proba, 100)\")\n\nprint(\"Execution time for sess_predict_proba\")\nspeed(\"loop(X_test, sess_predict_proba_rf, 100)\")" + "sess = rt.InferenceSession(\"rf_iris.onnx\", providers=rt.get_available_providers())\n\n\ndef sess_predict_proba_rf(x):\n return sess.run([prob_name], {input_name: x.astype(numpy.float32)})[0]\n\n\nprint(\"Execution time for predict_proba\")\nspeed(\"loop(X_test, rf.predict_proba, 100)\")\n\nprint(\"Execution time for sess_predict_proba\")\nspeed(\"loop(X_test, sess_predict_proba_rf, 100)\")" ] }, { @@ -267,7 +267,7 @@ }, "outputs": [], "source": [ - "measures = []\n\nfor n_trees in range(5, 51, 5): \n print(n_trees)\n rf = RandomForestClassifier(n_estimators=n_trees)\n rf.fit(X_train, y_train)\n initial_type = [('float_input', FloatTensorType([1, 4]))]\n onx = convert_sklearn(rf, initial_types=initial_type)\n with open(\"rf_iris_%d.onnx\" % n_trees, \"wb\") as f:\n f.write(onx.SerializeToString())\n sess = rt.InferenceSession(\"rf_iris_%d.onnx\" % n_trees)\n def sess_predict_proba_loop(x):\n return sess.run([prob_name], {input_name: x.astype(numpy.float32)})[0]\n tsk = speed(\"loop(X_test, rf.predict_proba, 100)\", number=5, repeat=5)\n trt = speed(\"loop(X_test, sess_predict_proba_loop, 100)\", number=5, repeat=5)\n measures.append({'n_trees': n_trees, 'sklearn': tsk, 'rt': trt})\n\nfrom pandas import DataFrame\ndf = DataFrame(measures)\nax = df.plot(x=\"n_trees\", y=\"sklearn\", label=\"scikit-learn\", c=\"blue\", logy=True)\ndf.plot(x=\"n_trees\", y=\"rt\", label=\"onnxruntime\",\n ax=ax, c=\"green\", logy=True)\nax.set_xlabel(\"Number of trees\")\nax.set_ylabel(\"Prediction time (s)\")\nax.set_title(\"Speed comparison between scikit-learn and ONNX Runtime\\nFor a random forest on Iris dataset\")\nax.legend()" + "measures = []\n\nfor n_trees in range(5, 51, 5):\n print(n_trees)\n rf = RandomForestClassifier(n_estimators=n_trees)\n rf.fit(X_train, y_train)\n initial_type = [(\"float_input\", FloatTensorType([1, 4]))]\n onx = convert_sklearn(rf, initial_types=initial_type)\n with open(\"rf_iris_%d.onnx\" % n_trees, \"wb\") as f:\n 
f.write(onx.SerializeToString())\n sess = rt.InferenceSession(\"rf_iris_%d.onnx\" % n_trees, providers=rt.get_available_providers())\n\n def sess_predict_proba_loop(x):\n return sess.run([prob_name], {input_name: x.astype(numpy.float32)})[0]\n\n tsk = speed(\"loop(X_test, rf.predict_proba, 100)\", number=5, repeat=5)\n trt = speed(\"loop(X_test, sess_predict_proba_loop, 100)\", number=5, repeat=5)\n measures.append({\"n_trees\": n_trees, \"sklearn\": tsk, \"rt\": trt})\n\nfrom pandas import DataFrame\n\ndf = DataFrame(measures)\nax = df.plot(x=\"n_trees\", y=\"sklearn\", label=\"scikit-learn\", c=\"blue\", logy=True)\ndf.plot(x=\"n_trees\", y=\"rt\", label=\"onnxruntime\", ax=ax, c=\"green\", logy=True)\nax.set_xlabel(\"Number of trees\")\nax.set_ylabel(\"Prediction time (s)\")\nax.set_title(\"Speed comparison between scikit-learn and ONNX Runtime\\nFor a random forest on Iris dataset\")\nax.legend()" ] } ], @@ -287,7 +287,7 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.7.0" + "version": "3.8.10" } }, "nbformat": 4, diff --git a/docs/api/python/downloads/1b60ac13d6a5a4b72d4e4d28d1544f8d/plot_common_errors.ipynb b/docs/api/python/downloads/1b60ac13d6a5a4b72d4e4d28d1544f8d/plot_common_errors.ipynb index b835e389d0e58..bba0472e83a8c 100644 --- a/docs/api/python/downloads/1b60ac13d6a5a4b72d4e4d28d1544f8d/plot_common_errors.ipynb +++ b/docs/api/python/downloads/1b60ac13d6a5a4b72d4e4d28d1544f8d/plot_common_errors.ipynb @@ -26,7 +26,7 @@ }, "outputs": [], "source": [ - "import onnxruntime as rt\nfrom onnxruntime.capi.onnxruntime_pybind11_state import InvalidArgument\nimport numpy\nfrom onnxruntime.datasets import get_example\n\nexample2 = get_example(\"logreg_iris.onnx\")\nsess = rt.InferenceSession(example2)\n\ninput_name = sess.get_inputs()[0].name\noutput_name = sess.get_outputs()[0].name" + "import numpy\n\nimport onnxruntime as rt\nfrom onnxruntime.capi.onnxruntime_pybind11_state import InvalidArgument\nfrom onnxruntime.datasets import get_example\n\nexample2 = get_example(\"logreg_iris.onnx\")\nsess = rt.InferenceSession(example2, providers=rt.get_available_providers())\n\ninput_name = sess.get_inputs()[0].name\noutput_name = sess.get_outputs()[0].name" ] }, { @@ -116,7 +116,7 @@ }, "outputs": [], "source": [ - "for x in [\n numpy.array([1.0, 2.0, 3.0, 4.0], dtype=numpy.float32),\n numpy.array([[1.0, 2.0, 3.0, 4.0]], dtype=numpy.float32),\n numpy.array([[1.0, 2.0], [3.0, 4.0]], dtype=numpy.float32),\n numpy.array([1.0, 2.0, 3.0], dtype=numpy.float32),\n numpy.array([[1.0, 2.0, 3.0]], dtype=numpy.float32),\n ]:\n try:\n r = sess.run([output_name], {input_name: x})\n print(\"Shape={0} and predicted labels={1}\".format(x.shape, r))\n except (RuntimeError, InvalidArgument) as e:\n print(\"ERROR with Shape={0} - {1}\".format(x.shape, e))\n\nfor x in [\n numpy.array([1.0, 2.0, 3.0, 4.0], dtype=numpy.float32),\n numpy.array([[1.0, 2.0, 3.0, 4.0]], dtype=numpy.float32),\n numpy.array([[1.0, 2.0], [3.0, 4.0]], dtype=numpy.float32),\n numpy.array([1.0, 2.0, 3.0], dtype=numpy.float32),\n numpy.array([[1.0, 2.0, 3.0]], dtype=numpy.float32),\n ]:\n try:\n r = sess.run(None, {input_name: x})\n print(\"Shape={0} and predicted probabilities={1}\".format(x.shape, r[1]))\n except (RuntimeError, InvalidArgument) as e:\n print(\"ERROR with Shape={0} - {1}\".format(x.shape, e))" + "for x in [\n numpy.array([1.0, 2.0, 3.0, 4.0], dtype=numpy.float32),\n numpy.array([[1.0, 2.0, 3.0, 4.0]], dtype=numpy.float32),\n numpy.array([[1.0, 2.0], [3.0, 4.0]], 
dtype=numpy.float32),\n numpy.array([1.0, 2.0, 3.0], dtype=numpy.float32),\n numpy.array([[1.0, 2.0, 3.0]], dtype=numpy.float32),\n]:\n try:\n r = sess.run([output_name], {input_name: x})\n print(\"Shape={0} and predicted labels={1}\".format(x.shape, r))\n except (RuntimeError, InvalidArgument) as e:\n print(\"ERROR with Shape={0} - {1}\".format(x.shape, e))\n\nfor x in [\n numpy.array([1.0, 2.0, 3.0, 4.0], dtype=numpy.float32),\n numpy.array([[1.0, 2.0, 3.0, 4.0]], dtype=numpy.float32),\n numpy.array([[1.0, 2.0], [3.0, 4.0]], dtype=numpy.float32),\n numpy.array([1.0, 2.0, 3.0], dtype=numpy.float32),\n numpy.array([[1.0, 2.0, 3.0]], dtype=numpy.float32),\n]:\n try:\n r = sess.run(None, {input_name: x})\n print(\"Shape={0} and predicted probabilities={1}\".format(x.shape, r[1]))\n except (RuntimeError, InvalidArgument) as e:\n print(\"ERROR with Shape={0} - {1}\".format(x.shape, e))" ] }, { @@ -134,7 +134,7 @@ }, "outputs": [], "source": [ - "for x in [\n numpy.array([[[1.0, 2.0], [3.0, 4.0]]], dtype=numpy.float32),\n numpy.array([[[1.0, 2.0, 3.0]]], dtype=numpy.float32),\n numpy.array([[[1.0, 2.0]], [[3.0, 4.0]]], dtype=numpy.float32),\n ]:\n try:\n r = sess.run([output_name], {input_name: x})\n print(\"Shape={0} and predicted labels={1}\".format(x.shape, r))\n except (RuntimeError, InvalidArgument) as e:\n print(\"ERROR with Shape={0} - {1}\".format(x.shape, e))" + "for x in [\n numpy.array([[[1.0, 2.0], [3.0, 4.0]]], dtype=numpy.float32),\n numpy.array([[[1.0, 2.0, 3.0]]], dtype=numpy.float32),\n numpy.array([[[1.0, 2.0]], [[3.0, 4.0]]], dtype=numpy.float32),\n]:\n try:\n r = sess.run([output_name], {input_name: x})\n print(\"Shape={0} and predicted labels={1}\".format(x.shape, r))\n except (RuntimeError, InvalidArgument) as e:\n print(\"ERROR with Shape={0} - {1}\".format(x.shape, e))" ] } ], @@ -154,7 +154,7 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.7.0" + "version": "3.8.10" } }, "nbformat": 4, diff --git a/docs/api/python/downloads/290d1103c4874727a37c05b400ffb83c/plot_load_and_predict.ipynb b/docs/api/python/downloads/290d1103c4874727a37c05b400ffb83c/plot_load_and_predict.ipynb index ebad45f91bf38..43634eed2d97d 100644 --- a/docs/api/python/downloads/290d1103c4874727a37c05b400ffb83c/plot_load_and_predict.ipynb +++ b/docs/api/python/downloads/290d1103c4874727a37c05b400ffb83c/plot_load_and_predict.ipynb @@ -26,14 +26,14 @@ }, "outputs": [], "source": [ - "import onnxruntime as rt\nimport numpy\nfrom onnxruntime.datasets import get_example" + "import numpy\n\nimport onnxruntime as rt\nfrom onnxruntime.datasets import get_example" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ - "Let's load a very simple model.\nThe model is available on github `onnx...test_sigmoid `_.\n\n" + "Let's load a very simple model.\nThe model is available on github [onnx...test_sigmoid](https://github.com/onnx/onnx/blob/main/onnx/backend/test/data/node/test_sigmoid).\n\n" ] }, { @@ -44,7 +44,7 @@ }, "outputs": [], "source": [ - "example1 = get_example(\"sigmoid.onnx\")\nsess = rt.InferenceSession(example1)" + "example1 = get_example(\"sigmoid.onnx\")\nsess = rt.InferenceSession(example1, providers=rt.get_available_providers())" ] }, { @@ -80,7 +80,7 @@ }, "outputs": [], "source": [ - "output_name = sess.get_outputs()[0].name\nprint(\"output name\", output_name) \noutput_shape = sess.get_outputs()[0].shape\nprint(\"output shape\", output_shape)\noutput_type = sess.get_outputs()[0].type\nprint(\"output type\", output_type)" + "output_name = 
sess.get_outputs()[0].name\nprint(\"output name\", output_name)\noutput_shape = sess.get_outputs()[0].shape\nprint(\"output shape\", output_shape)\noutput_type = sess.get_outputs()[0].type\nprint(\"output type\", output_type)" ] }, { @@ -98,7 +98,7 @@ }, "outputs": [], "source": [ - "import numpy.random\nx = numpy.random.random((3,4,5))\nx = x.astype(numpy.float32)\nres = sess.run([output_name], {input_name: x})\nprint(res)" + "import numpy.random\n\nx = numpy.random.random((3, 4, 5))\nx = x.astype(numpy.float32)\nres = sess.run([output_name], {input_name: x})\nprint(res)" ] } ], @@ -118,7 +118,7 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.7.0" + "version": "3.8.10" } }, "nbformat": 4, diff --git a/docs/api/python/downloads/2dbd202de70c8b6a394b8a1c7c1e12b8/plot_pipeline.ipynb b/docs/api/python/downloads/2dbd202de70c8b6a394b8a1c7c1e12b8/plot_pipeline.ipynb index ef872cbbcfdf6..9bdb7d4113ae9 100644 --- a/docs/api/python/downloads/2dbd202de70c8b6a394b8a1c7c1e12b8/plot_pipeline.ipynb +++ b/docs/api/python/downloads/2dbd202de70c8b6a394b8a1c7c1e12b8/plot_pipeline.ipynb @@ -15,7 +15,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "\n# Draw a pipeline\n\nThere is no other way to look into one model stored\nin ONNX format than looking into its node with \n*onnx*. This example demonstrates\nhow to draw a model and to retrieve it in *json*\nformat.\n\n## Retrieve a model in JSON format\n\nThat's the most simple way.\n" + "\n# Draw a pipeline\n\nThere is no other way to look into one model stored\nin ONNX format than looking into its node with\n*onnx*. This example demonstrates\nhow to draw a model and to retrieve it in *json*\nformat.\n\n## Retrieve a model in JSON format\n\nThat's the most simple way.\n" ] }, { @@ -26,14 +26,14 @@ }, "outputs": [], "source": [ - "from onnxruntime.datasets import get_example\nexample1 = get_example(\"mul_1.onnx\")\n\nimport onnx\nmodel = onnx.load(example1) # model is a ModelProto protobuf message\n\nprint(model)" + "from onnxruntime.datasets import get_example\n\nexample1 = get_example(\"mul_1.onnx\")\n\nimport onnx\n\nmodel = onnx.load(example1) # model is a ModelProto protobuf message\n\nprint(model)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ - "## Draw a model with ONNX\nWe use `net_drawer.py `_\nincluded in *onnx* package.\nWe use *onnx* to load the model\nin a different way than before.\n\n" + "## Draw a model with ONNX\nWe use [net_drawer.py](https://github.com/onnx/onnx/blob/main/onnx/tools/net_drawer.py)\nincluded in *onnx* package.\nWe use *onnx* to load the model\nin a different way than before.\n\n" ] }, { @@ -44,7 +44,7 @@ }, "outputs": [], "source": [ - "from onnx import ModelProto\nmodel = ModelProto()\nwith open(example1, 'rb') as fid:\n content = fid.read()\n model.ParseFromString(content)" + "from onnx import ModelProto\n\nmodel = ModelProto()\nwith open(example1, \"rb\") as fid:\n content = fid.read()\n model.ParseFromString(content)" ] }, { @@ -62,7 +62,7 @@ }, "outputs": [], "source": [ - "from onnx.tools.net_drawer import GetPydotGraph, GetOpNodeProducer\npydot_graph = GetPydotGraph(model.graph, name=model.graph.name, rankdir=\"LR\",\n node_producer=GetOpNodeProducer(\"docstring\"))\npydot_graph.write_dot(\"graph.dot\")" + "from onnx.tools.net_drawer import GetOpNodeProducer, GetPydotGraph\n\npydot_graph = GetPydotGraph(\n model.graph, name=model.graph.name, rankdir=\"LR\", node_producer=GetOpNodeProducer(\"docstring\")\n)\npydot_graph.write_dot(\"graph.dot\")" 
] }, { @@ -80,7 +80,7 @@ }, "outputs": [], "source": [ - "import os\nos.system('dot -O -Tpng graph.dot')" + "import os\n\nos.system(\"dot -O -Tpng graph.dot\")" ] }, { @@ -98,7 +98,7 @@ }, "outputs": [], "source": [ - "import matplotlib.pyplot as plt\nimage = plt.imread(\"graph.dot.png\")\nplt.imshow(image)" + "import matplotlib.pyplot as plt\n\nimage = plt.imread(\"graph.dot.png\")\nplt.imshow(image)" ] } ], @@ -118,7 +118,7 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.7.0" + "version": "3.8.10" } }, "nbformat": 4, diff --git a/docs/api/python/downloads/3a0c2ba94405e579c84b6f356705659d/plot_train_convert_predict.ipynb b/docs/api/python/downloads/3a0c2ba94405e579c84b6f356705659d/plot_train_convert_predict.ipynb deleted file mode 100644 index cfdd75b320cd2..0000000000000 --- a/docs/api/python/downloads/3a0c2ba94405e579c84b6f356705659d/plot_train_convert_predict.ipynb +++ /dev/null @@ -1,295 +0,0 @@ -{ - "cells": [ - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "collapsed": false - }, - "outputs": [], - "source": [ - "%matplotlib inline" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "\n\n# Train, convert and predict with ONNX Runtime\n\nThis example demonstrates an end to end scenario\nstarting with the training of a machine learned model\nto its use in its converted from.\n\n## Train a logistic regression\n\nThe first step consists in retrieving the iris datset.\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "collapsed": false - }, - "outputs": [], - "source": [ - "from sklearn.datasets import load_iris\niris = load_iris()\nX, y = iris.data, iris.target\n\nfrom sklearn.model_selection import train_test_split\nX_train, X_test, y_train, y_test = train_test_split(X, y)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "Then we fit a model.\n\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "collapsed": false - }, - "outputs": [], - "source": [ - "from sklearn.linear_model import LogisticRegression\nclr = LogisticRegression()\nclr.fit(X_train, y_train)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "We compute the prediction on the test set\nand we show the confusion matrix.\n\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "collapsed": false - }, - "outputs": [], - "source": [ - "from sklearn.metrics import confusion_matrix\n\npred = clr.predict(X_test)\nprint(confusion_matrix(y_test, pred))" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Conversion to ONNX format\n\nWe use module \n`sklearn-onnx `_\nto convert the model into ONNX format.\n\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "collapsed": false - }, - "outputs": [], - "source": [ - "from skl2onnx import convert_sklearn\nfrom skl2onnx.common.data_types import FloatTensorType\n\ninitial_type = [('float_input', FloatTensorType([None, 4]))]\nonx = convert_sklearn(clr, initial_types=initial_type)\nwith open(\"logreg_iris.onnx\", \"wb\") as f:\n f.write(onx.SerializeToString())" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "We load the model with ONNX Runtime and look at\nits input and output.\n\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "collapsed": false - }, - "outputs": [], - "source": [ - "import onnxruntime as rt\nsess = 
rt.InferenceSession(\"logreg_iris.onnx\")\n\nprint(\"input name='{}' and shape={}\".format(\n sess.get_inputs()[0].name, sess.get_inputs()[0].shape))\nprint(\"output name='{}' and shape={}\".format(\n sess.get_outputs()[0].name, sess.get_outputs()[0].shape))" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "We compute the predictions.\n\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "collapsed": false - }, - "outputs": [], - "source": [ - "input_name = sess.get_inputs()[0].name\nlabel_name = sess.get_outputs()[0].name\n\nimport numpy\npred_onx = sess.run([label_name], {input_name: X_test.astype(numpy.float32)})[0]\nprint(confusion_matrix(pred, pred_onx))" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "The prediction are perfectly identical.\n\n## Probabilities\n\nProbabilities are needed to compute other\nrelevant metrics such as the ROC Curve.\nLet's see how to get them first with\nscikit-learn.\n\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "collapsed": false - }, - "outputs": [], - "source": [ - "prob_sklearn = clr.predict_proba(X_test)\nprint(prob_sklearn[:3])" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "And then with ONNX Runtime.\nThe probabilies appear to be \n\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "collapsed": false - }, - "outputs": [], - "source": [ - "prob_name = sess.get_outputs()[1].name\nprob_rt = sess.run([prob_name], {input_name: X_test.astype(numpy.float32)})[0]\n\nimport pprint\npprint.pprint(prob_rt[0:3])" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "Let's benchmark.\n\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "collapsed": false - }, - "outputs": [], - "source": [ - "from timeit import Timer\n\ndef speed(inst, number=10, repeat=20):\n timer = Timer(inst, globals=globals())\n raw = numpy.array(timer.repeat(repeat, number=number))\n ave = raw.sum() / len(raw) / number\n mi, ma = raw.min() / number, raw.max() / number\n print(\"Average %1.3g min=%1.3g max=%1.3g\" % (ave, mi, ma))\n return ave\n\nprint(\"Execution time for clr.predict\")\nspeed(\"clr.predict(X_test)\")\n\nprint(\"Execution time for ONNX Runtime\")\nspeed(\"sess.run([label_name], {input_name: X_test.astype(numpy.float32)})[0]\")" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "Let's benchmark a scenario similar to what a webservice\nexperiences: the model has to do one prediction at a time\nas opposed to a batch of prediction.\n\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "collapsed": false - }, - "outputs": [], - "source": [ - "def loop(X_test, fct, n=None):\n nrow = X_test.shape[0]\n if n is None:\n n = nrow\n for i in range(0, n):\n im = i % nrow\n fct(X_test[im: im+1])\n\nprint(\"Execution time for clr.predict\")\nspeed(\"loop(X_test, clr.predict, 100)\")\n\ndef sess_predict(x):\n return sess.run([label_name], {input_name: x.astype(numpy.float32)})[0]\n\nprint(\"Execution time for sess_predict\")\nspeed(\"loop(X_test, sess_predict, 100)\")" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "Let's do the same for the probabilities.\n\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "collapsed": false - }, - "outputs": [], - "source": [ - "print(\"Execution time for predict_proba\")\nspeed(\"loop(X_test, 
clr.predict_proba, 100)\")\n\ndef sess_predict_proba(x):\n return sess.run([prob_name], {input_name: x.astype(numpy.float32)})[0]\n\nprint(\"Execution time for sess_predict_proba\")\nspeed(\"loop(X_test, sess_predict_proba, 100)\")" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "This second comparison is better as \nONNX Runtime, in this experience,\ncomputes the label and the probabilities\nin every case.\n\n" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Benchmark with RandomForest\n\nWe first train and save a model in ONNX format.\n\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "collapsed": false - }, - "outputs": [], - "source": [ - "from sklearn.ensemble import RandomForestClassifier\nrf = RandomForestClassifier()\nrf.fit(X_train, y_train)\n\ninitial_type = [('float_input', FloatTensorType([1, 4]))]\nonx = convert_sklearn(rf, initial_types=initial_type)\nwith open(\"rf_iris.onnx\", \"wb\") as f:\n f.write(onx.SerializeToString())" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "We compare.\n\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "collapsed": false - }, - "outputs": [], - "source": [ - "sess = rt.InferenceSession(\"rf_iris.onnx\")\n\ndef sess_predict_proba_rf(x):\n return sess.run([prob_name], {input_name: x.astype(numpy.float32)})[0]\n\nprint(\"Execution time for predict_proba\")\nspeed(\"loop(X_test, rf.predict_proba, 100)\")\n\nprint(\"Execution time for sess_predict_proba\")\nspeed(\"loop(X_test, sess_predict_proba_rf, 100)\")" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "Let's see with different number of trees.\n\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "collapsed": false - }, - "outputs": [], - "source": [ - "measures = []\n\nfor n_trees in range(5, 51, 5): \n print(n_trees)\n rf = RandomForestClassifier(n_estimators=n_trees)\n rf.fit(X_train, y_train)\n initial_type = [('float_input', FloatTensorType([1, 4]))]\n onx = convert_sklearn(rf, initial_types=initial_type)\n with open(\"rf_iris_%d.onnx\" % n_trees, \"wb\") as f:\n f.write(onx.SerializeToString())\n sess = rt.InferenceSession(\"rf_iris_%d.onnx\" % n_trees)\n def sess_predict_proba_loop(x):\n return sess.run([prob_name], {input_name: x.astype(numpy.float32)})[0]\n tsk = speed(\"loop(X_test, rf.predict_proba, 100)\", number=5, repeat=5)\n trt = speed(\"loop(X_test, sess_predict_proba_loop, 100)\", number=5, repeat=5)\n measures.append({'n_trees': n_trees, 'sklearn': tsk, 'rt': trt})\n\nfrom pandas import DataFrame\ndf = DataFrame(measures)\nax = df.plot(x=\"n_trees\", y=\"sklearn\", label=\"scikit-learn\", c=\"blue\", logy=True)\ndf.plot(x=\"n_trees\", y=\"rt\", label=\"onnxruntime\",\n ax=ax, c=\"green\", logy=True)\nax.set_xlabel(\"Number of trees\")\nax.set_ylabel(\"Prediction time (s)\")\nax.set_title(\"Speed comparison between scikit-learn and ONNX Runtime\\nFor a random forest on Iris dataset\")\nax.legend()" - ] - } - ], - "metadata": { - "kernelspec": { - "display_name": "Python 3", - "language": "python", - "name": "python3" - }, - "language_info": { - "codemirror_mode": { - "name": "ipython", - "version": 3 - }, - "file_extension": ".py", - "mimetype": "text/x-python", - "name": "python", - "nbconvert_exporter": "python", - "pygments_lexer": "ipython3", - "version": "3.7.2" - } - }, - "nbformat": 4, - "nbformat_minor": 0 -} \ No newline at end of file diff --git 
a/docs/api/python/downloads/3a2955e44bf8f95a0eee6e71695ad788/plot_common_errors.py b/docs/api/python/downloads/3a2955e44bf8f95a0eee6e71695ad788/plot_common_errors.py index 0d98e17c45dff..e4b2b1e05a3ec 100644 --- a/docs/api/python/downloads/3a2955e44bf8f95a0eee6e71695ad788/plot_common_errors.py +++ b/docs/api/python/downloads/3a2955e44bf8f95a0eee6e71695ad788/plot_common_errors.py @@ -15,13 +15,14 @@ trained on *Iris* datasets. The model takes a vector of dimension 2 and returns a class among three. """ +import numpy + import onnxruntime as rt from onnxruntime.capi.onnxruntime_pybind11_state import InvalidArgument -import numpy from onnxruntime.datasets import get_example example2 = get_example("logreg_iris.onnx") -sess = rt.InferenceSession(example2) +sess = rt.InferenceSession(example2, providers=rt.get_available_providers()) input_name = sess.get_inputs()[0].name output_name = sess.get_outputs()[0].name @@ -37,7 +38,7 @@ except Exception as e: print("Unexpected type") print("{0}: {1}".format(type(e), e)) - + ######################### # The model fails to return an output if the name # is misspelled. @@ -76,12 +77,12 @@ # dimension is a multiple of the expected input dimension. for x in [ - numpy.array([1.0, 2.0, 3.0, 4.0], dtype=numpy.float32), - numpy.array([[1.0, 2.0, 3.0, 4.0]], dtype=numpy.float32), - numpy.array([[1.0, 2.0], [3.0, 4.0]], dtype=numpy.float32), - numpy.array([1.0, 2.0, 3.0], dtype=numpy.float32), - numpy.array([[1.0, 2.0, 3.0]], dtype=numpy.float32), - ]: + numpy.array([1.0, 2.0, 3.0, 4.0], dtype=numpy.float32), + numpy.array([[1.0, 2.0, 3.0, 4.0]], dtype=numpy.float32), + numpy.array([[1.0, 2.0], [3.0, 4.0]], dtype=numpy.float32), + numpy.array([1.0, 2.0, 3.0], dtype=numpy.float32), + numpy.array([[1.0, 2.0, 3.0]], dtype=numpy.float32), +]: try: r = sess.run([output_name], {input_name: x}) print("Shape={0} and predicted labels={1}".format(x.shape, r)) @@ -89,12 +90,12 @@ print("ERROR with Shape={0} - {1}".format(x.shape, e)) for x in [ - numpy.array([1.0, 2.0, 3.0, 4.0], dtype=numpy.float32), - numpy.array([[1.0, 2.0, 3.0, 4.0]], dtype=numpy.float32), - numpy.array([[1.0, 2.0], [3.0, 4.0]], dtype=numpy.float32), - numpy.array([1.0, 2.0, 3.0], dtype=numpy.float32), - numpy.array([[1.0, 2.0, 3.0]], dtype=numpy.float32), - ]: + numpy.array([1.0, 2.0, 3.0, 4.0], dtype=numpy.float32), + numpy.array([[1.0, 2.0, 3.0, 4.0]], dtype=numpy.float32), + numpy.array([[1.0, 2.0], [3.0, 4.0]], dtype=numpy.float32), + numpy.array([1.0, 2.0, 3.0], dtype=numpy.float32), + numpy.array([[1.0, 2.0, 3.0]], dtype=numpy.float32), +]: try: r = sess.run(None, {input_name: x}) print("Shape={0} and predicted probabilities={1}".format(x.shape, r[1])) @@ -106,10 +107,10 @@ # is higher than expects but produces a warning. 
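As an aside to the loop that follows, these shape mismatches can also be caught before calling the model. This is a minimal sketch, not part of the original example: it assumes the sess created above and treats non-integer (None or symbolic) dimensions as wildcards.

def shape_matches(sess, name, x):
    # Compare an array's shape with the input shape the model declares;
    # dimensions that are not plain ints (None or symbolic) match anything.
    declared = {i.name: i.shape for i in sess.get_inputs()}[name]
    if len(declared) != x.ndim:
        return False
    return all(d == s for d, s in zip(declared, x.shape) if isinstance(d, int))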
for x in [ - numpy.array([[[1.0, 2.0], [3.0, 4.0]]], dtype=numpy.float32), - numpy.array([[[1.0, 2.0, 3.0]]], dtype=numpy.float32), - numpy.array([[[1.0, 2.0]], [[3.0, 4.0]]], dtype=numpy.float32), - ]: + numpy.array([[[1.0, 2.0], [3.0, 4.0]]], dtype=numpy.float32), + numpy.array([[[1.0, 2.0, 3.0]]], dtype=numpy.float32), + numpy.array([[[1.0, 2.0]], [[3.0, 4.0]]], dtype=numpy.float32), +]: try: r = sess.run([output_name], {input_name: x}) print("Shape={0} and predicted labels={1}".format(x.shape, r)) diff --git a/docs/api/python/downloads/3aca744422de94d33f9aaa3ce99633a9/plot_convert_pipeline_vectorizer.ipynb b/docs/api/python/downloads/3aca744422de94d33f9aaa3ce99633a9/plot_convert_pipeline_vectorizer.ipynb index ce8eb47ad4f6c..1859231ebd515 100644 --- a/docs/api/python/downloads/3aca744422de94d33f9aaa3ce99633a9/plot_convert_pipeline_vectorizer.ipynb +++ b/docs/api/python/downloads/3aca744422de94d33f9aaa3ce99633a9/plot_convert_pipeline_vectorizer.ipynb @@ -15,7 +15,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "\n# Train, convert and predict with ONNX Runtime\n\nThis example demonstrates an end to end scenario\nstarting with the training of a scikit-learn pipeline\nwhich takes as inputs not a regular vector but a\ndictionary ``{ int: float }`` as its first step is a\n`DictVectorizer `_.\n\n## Train a pipeline\n\nThe first step consists in retrieving the boston datset.\n" + "\n# Train, convert and predict with ONNX Runtime\n\nThis example demonstrates an end to end scenario\nstarting with the training of a scikit-learn pipeline\nwhich takes as inputs not a regular vector but a\ndictionary ``{ int: float }`` as its first step is a\n[DictVectorizer](http://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.DictVectorizer.html).\n\n## Train a pipeline\n\nThe first step consists in retrieving the boston datset.\n" ] }, { @@ -26,7 +26,7 @@ }, "outputs": [], "source": [ - "import pandas\nfrom sklearn.datasets import load_boston\nboston = load_boston()\nX, y = boston.data, boston.target\n\nfrom sklearn.model_selection import train_test_split\nX_train, X_test, y_train, y_test = train_test_split(X, y)\nX_train_dict = pandas.DataFrame(X_train[:,1:]).T.to_dict().values()\nX_test_dict = pandas.DataFrame(X_test[:,1:]).T.to_dict().values()" + "import pandas\nfrom sklearn.datasets import load_boston\n\nboston = load_boston()\nX, y = boston.data, boston.target\n\nfrom sklearn.model_selection import train_test_split\n\nX_train, X_test, y_train, y_test = train_test_split(X, y)\nX_train_dict = pandas.DataFrame(X_train[:, 1:]).T.to_dict().values()\nX_test_dict = pandas.DataFrame(X_test[:, 1:]).T.to_dict().values()" ] }, { @@ -44,7 +44,7 @@ }, "outputs": [], "source": [ - "from sklearn.pipeline import make_pipeline\nfrom sklearn.ensemble import GradientBoostingRegressor\nfrom sklearn.feature_extraction import DictVectorizer\npipe = make_pipeline(\n DictVectorizer(sparse=False),\n GradientBoostingRegressor())\n \npipe.fit(X_train_dict, y_train)" + "from sklearn.ensemble import GradientBoostingRegressor\nfrom sklearn.feature_extraction import DictVectorizer\nfrom sklearn.pipeline import make_pipeline\n\npipe = make_pipeline(DictVectorizer(sparse=False), GradientBoostingRegressor())\n\npipe.fit(X_train_dict, y_train)" ] }, { @@ -69,7 +69,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "## Conversion to ONNX format\n\nWe use module \n`sklearn-onnx `_\nto convert the model into ONNX format.\n\n" + "## Conversion to ONNX format\n\nWe use 
module\n[sklearn-onnx](https://github.com/onnx/sklearn-onnx)\nto convert the model into ONNX format.\n\n" ] }, { @@ -80,7 +80,7 @@ }, "outputs": [], "source": [ - "from skl2onnx import convert_sklearn\nfrom skl2onnx.common.data_types import FloatTensorType, Int64TensorType, DictionaryType, SequenceType\n\n# initial_type = [('float_input', DictionaryType(Int64TensorType([1]), FloatTensorType([])))]\ninitial_type = [('float_input', DictionaryType(Int64TensorType([1]), FloatTensorType([])))]\nonx = convert_sklearn(pipe, initial_types=initial_type)\nwith open(\"pipeline_vectorize.onnx\", \"wb\") as f:\n f.write(onx.SerializeToString())" + "from skl2onnx import convert_sklearn\nfrom skl2onnx.common.data_types import DictionaryType, FloatTensorType, Int64TensorType, SequenceType\n\n# initial_type = [('float_input', DictionaryType(Int64TensorType([1]), FloatTensorType([])))]\ninitial_type = [(\"float_input\", DictionaryType(Int64TensorType([1]), FloatTensorType([])))]\nonx = convert_sklearn(pipe, initial_types=initial_type)\nwith open(\"pipeline_vectorize.onnx\", \"wb\") as f:\n f.write(onx.SerializeToString())" ] }, { @@ -98,7 +98,7 @@ }, "outputs": [], "source": [ - "import onnxruntime as rt\nfrom onnxruntime.capi.onnxruntime_pybind11_state import InvalidArgument\n\nsess = rt.InferenceSession(\"pipeline_vectorize.onnx\")\n\nimport numpy\ninp, out = sess.get_inputs()[0], sess.get_outputs()[0]\nprint(\"input name='{}' and shape={} and type={}\".format(inp.name, inp.shape, inp.type))\nprint(\"output name='{}' and shape={} and type={}\".format(out.name, out.shape, out.type))" + "import onnxruntime as rt\nfrom onnxruntime.capi.onnxruntime_pybind11_state import InvalidArgument\n\nsess = rt.InferenceSession(\"pipeline_vectorize.onnx\", providers=rt.get_available_providers())\n\nimport numpy\n\ninp, out = sess.get_inputs()[0], sess.get_outputs()[0]\nprint(\"input name='{}' and shape={} and type={}\".format(inp.name, inp.shape, inp.type))\nprint(\"output name='{}' and shape={} and type={}\".format(out.name, out.shape, out.type))" ] }, { @@ -179,7 +179,7 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.7.0" + "version": "3.8.10" } }, "nbformat": 4, diff --git a/docs/api/python/downloads/3e23fa9ebb26f4728ee8426ed7da0f63/plot_backend.py b/docs/api/python/downloads/3e23fa9ebb26f4728ee8426ed7da0f63/plot_backend.py index 63d3cbce34d08..ecfc17175d11b 100644 --- a/docs/api/python/downloads/3e23fa9ebb26f4728ee8426ed7da0f63/plot_backend.py +++ b/docs/api/python/downloads/3e23fa9ebb26f4728ee8426ed7da0f63/plot_backend.py @@ -8,22 +8,29 @@ ONNX Runtime Backend for ONNX ============================= -*ONNX Runtime* extends the -`onnx backend API `_ +*ONNX Runtime* extends the +`onnx backend API `_ to run predictions using this runtime. Let's use the API to compute the prediction of a simple logistic regression model. """ import numpy as np -from onnxruntime import datasets -from onnxruntime.capi.onnxruntime_pybind11_state import InvalidArgument -import onnxruntime.backend as backend from onnx import load +import onnxruntime.backend as backend + +######################################## +# The device depends on how the package was compiled, +# GPU or CPU. 
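A hedged aside, not part of the example above: the ONNX backend interface that onnxruntime implements also exposes supports_device, which allows an explicit fallback when no GPU build is installed.

import onnxruntime.backend as backend

# Prefer CUDA when the installed build supports it, otherwise fall back to CPU;
# "name" is assumed to be the model path obtained from the datasets module above.
device = "CUDA" if backend.supports_device("CUDA") else "CPU"
rep = backend.prepare(name, device)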
+from onnxruntime import datasets, get_device +from onnxruntime.capi.onnxruntime_pybind11_state import InvalidArgument + +device = get_device() + name = datasets.get_example("logreg_iris.onnx") model = load(name) -rep = backend.prepare(model, 'CPU') +rep = backend.prepare(model, device) x = np.array([[-1.0, -2.0]], dtype=np.float32) try: label, proba = rep.run(x) @@ -32,17 +39,11 @@ except (RuntimeError, InvalidArgument) as e: print(e) -######################################## -# The device depends on how the package was compiled, -# GPU or CPU. -from onnxruntime import get_device -print(get_device()) - ######################################## # The backend can also directly load the model # without using *onnx*. -rep = backend.prepare(name, 'CPU') +rep = backend.prepare(name, device) x = np.array([[-1.0, -2.0]], dtype=np.float32) try: label, proba = rep.run(x) diff --git a/docs/api/python/downloads/4412ae3bda7068f45094acb5373c790e/plot_load_and_predict.py b/docs/api/python/downloads/4412ae3bda7068f45094acb5373c790e/plot_load_and_predict.py deleted file mode 100644 index feb369feb2e27..0000000000000 --- a/docs/api/python/downloads/4412ae3bda7068f45094acb5373c790e/plot_load_and_predict.py +++ /dev/null @@ -1,53 +0,0 @@ -# Copyright (c) Microsoft Corporation. All rights reserved. -# Licensed under the MIT License. - -""" -.. _l-example-simple-usage: - -Load and predict with ONNX Runtime and a very simple model -========================================================== - -This example demonstrates how to load a model and compute -the output for an input vector. It also shows how to -retrieve the definition of its inputs and outputs. -""" - -import onnxruntime as rt -import numpy -from onnxruntime.datasets import get_example - -######################### -# Let's load a very simple model. -# The model is available on github `onnx...test_sigmoid `_. - -example1 = get_example("sigmoid.onnx") -sess = rt.InferenceSession(example1) - -######################### -# Let's see the input name and shape. - -input_name = sess.get_inputs()[0].name -print("input name", input_name) -input_shape = sess.get_inputs()[0].shape -print("input shape", input_shape) -input_type = sess.get_inputs()[0].type -print("input type", input_type) - -######################### -# Let's see the output name and shape. - -output_name = sess.get_outputs()[0].name -print("output name", output_name) -output_shape = sess.get_outputs()[0].shape -print("output shape", output_shape) -output_type = sess.get_outputs()[0].type -print("output type", output_type) - -######################### -# Let's compute its outputs (or predictions if it is a machine learned model). - -import numpy.random -x = numpy.random.random((3,4,5)) -x = x.astype(numpy.float32) -res = sess.run([output_name], {input_name: x}) -print(res) diff --git a/docs/api/python/downloads/500c588edb4417e84924953b39d33cc6/plot_convert_pipeline_vectorizer.py b/docs/api/python/downloads/500c588edb4417e84924953b39d33cc6/plot_convert_pipeline_vectorizer.py deleted file mode 100644 index 0de0b30e28de0..0000000000000 --- a/docs/api/python/downloads/500c588edb4417e84924953b39d33cc6/plot_convert_pipeline_vectorizer.py +++ /dev/null @@ -1,103 +0,0 @@ -# Copyright (c) Microsoft Corporation. All rights reserved. -# Licensed under the MIT License. 
- -""" -Train, convert and predict with ONNX Runtime -============================================ - -This example demonstrates an end to end scenario -starting with the training of a scikit-learn pipeline -which takes as inputs not a regular vector but a -dictionary ``{ int: float }`` as its first step is a -`DictVectorizer `_. - -.. contents:: - :local: - -Train a pipeline -++++++++++++++++ - -The first step consists in retrieving the boston datset. -""" -import pandas -from sklearn.datasets import load_boston -boston = load_boston() -X, y = boston.data, boston.target - -from sklearn.model_selection import train_test_split -X_train, X_test, y_train, y_test = train_test_split(X, y) -X_train_dict = pandas.DataFrame(X_train[:,1:]).T.to_dict().values() -X_test_dict = pandas.DataFrame(X_test[:,1:]).T.to_dict().values() - -#################################### -# We create a pipeline. - -from sklearn.pipeline import make_pipeline -from sklearn.ensemble import GradientBoostingRegressor -from sklearn.feature_extraction import DictVectorizer -pipe = make_pipeline( - DictVectorizer(sparse=False), - GradientBoostingRegressor()) - -pipe.fit(X_train_dict, y_train) - -#################################### -# We compute the prediction on the test set -# and we show the confusion matrix. -from sklearn.metrics import r2_score - -pred = pipe.predict(X_test_dict) -print(r2_score(y_test, pred)) - -#################################### -# Conversion to ONNX format -# +++++++++++++++++++++++++ -# -# We use module -# `sklearn-onnx `_ -# to convert the model into ONNX format. - -from skl2onnx import convert_sklearn -from skl2onnx.common.data_types import FloatTensorType, Int64TensorType, DictionaryType, SequenceType - -# initial_type = [('float_input', DictionaryType(Int64TensorType([1]), FloatTensorType([])))] -initial_type = [('float_input', DictionaryType(Int64TensorType([1]), FloatTensorType([])))] -onx = convert_sklearn(pipe, initial_types=initial_type) -with open("pipeline_vectorize.onnx", "wb") as f: - f.write(onx.SerializeToString()) - -################################## -# We load the model with ONNX Runtime and look at -# its input and output. -import onnxruntime as rt -from onnxruntime.capi.onnxruntime_pybind11_state import InvalidArgument - -sess = rt.InferenceSession("pipeline_vectorize.onnx") - -import numpy -inp, out = sess.get_inputs()[0], sess.get_outputs()[0] -print("input name='{}' and shape={} and type={}".format(inp.name, inp.shape, inp.type)) -print("output name='{}' and shape={} and type={}".format(out.name, out.shape, out.type)) - -################################## -# We compute the predictions. -# We could do that in one call: - -try: - pred_onx = sess.run([out.name], {inp.name: X_test_dict})[0] -except (RuntimeError, InvalidArgument) as e: - print(e) - -############################# -# But it fails because, in case of a DictVectorizer, -# ONNX Runtime expects one observation at a time. -pred_onx = [sess.run([out.name], {inp.name: row})[0][0, 0] for row in X_test_dict] - -############################### -# We compare them to the model's ones. -print(r2_score(pred, pred_onx)) - -######################### -# Very similar. *ONNX Runtime* uses floats instead of doubles, -# that explains the small discrepencies. 
- diff --git a/docs/api/python/downloads/594159f58c2d97bee3f22ee659067d5a/plot_backend.ipynb b/docs/api/python/downloads/594159f58c2d97bee3f22ee659067d5a/plot_backend.ipynb deleted file mode 100644 index 42788feedebd7..0000000000000 --- a/docs/api/python/downloads/594159f58c2d97bee3f22ee659067d5a/plot_backend.ipynb +++ /dev/null @@ -1,97 +0,0 @@ -{ - "cells": [ - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "collapsed": false - }, - "outputs": [], - "source": [ - "%matplotlib inline" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "\n\n# ONNX Runtime Backend for ONNX\n\n*ONNX Runtime* extends the \n`onnx backend API `_\nto run predictions using this runtime.\nLet's use the API to compute the prediction\nof a simple logistic regression model.\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "collapsed": false - }, - "outputs": [], - "source": [ - "import numpy as np\nfrom onnxruntime import datasets\nfrom onnxruntime.capi.onnxruntime_pybind11_state import InvalidArgument\nimport onnxruntime.backend as backend\nfrom onnx import load\n\nname = datasets.get_example(\"logreg_iris.onnx\")\nmodel = load(name)\n\nrep = backend.prepare(model, 'CPU')\nx = np.array([[-1.0, -2.0]], dtype=np.float32)\ntry:\n label, proba = rep.run(x)\n print(\"label={}\".format(label))\n print(\"probabilities={}\".format(proba))\nexcept (RuntimeError, InvalidArgument) as e:\n print(e)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "The device depends on how the package was compiled,\nGPU or CPU.\n\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "collapsed": false - }, - "outputs": [], - "source": [ - "from onnxruntime import get_device\nprint(get_device())" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "The backend can also directly load the model\nwithout using *onnx*.\n\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "collapsed": false - }, - "outputs": [], - "source": [ - "rep = backend.prepare(name, 'CPU')\nx = np.array([[-1.0, -2.0]], dtype=np.float32)\ntry:\n label, proba = rep.run(x)\n print(\"label={}\".format(label))\n print(\"probabilities={}\".format(proba))\nexcept (RuntimeError, InvalidArgument) as e:\n print(e)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "The backend API is implemented by other frameworks\nand makes it easier to switch between multiple runtimes\nwith the same API.\n\n" - ] - } - ], - "metadata": { - "kernelspec": { - "display_name": "Python 3", - "language": "python", - "name": "python3" - }, - "language_info": { - "codemirror_mode": { - "name": "ipython", - "version": 3 - }, - "file_extension": ".py", - "mimetype": "text/x-python", - "name": "python", - "nbconvert_exporter": "python", - "pygments_lexer": "ipython3", - "version": "3.7.2" - } - }, - "nbformat": 4, - "nbformat_minor": 0 -} \ No newline at end of file diff --git a/docs/api/python/downloads/6f1e7a639e0699d6164445b55e6c116d/auto_examples_jupyter.zip b/docs/api/python/downloads/6f1e7a639e0699d6164445b55e6c116d/auto_examples_jupyter.zip index 9a50a7410fca6..4f237e9819ae9 100644 Binary files a/docs/api/python/downloads/6f1e7a639e0699d6164445b55e6c116d/auto_examples_jupyter.zip and b/docs/api/python/downloads/6f1e7a639e0699d6164445b55e6c116d/auto_examples_jupyter.zip differ diff --git a/docs/api/python/downloads/7c8424f45d0156abd4d0221c65601124/plot_load_and_predict.py 
b/docs/api/python/downloads/7c8424f45d0156abd4d0221c65601124/plot_load_and_predict.py index feb369feb2e27..09d7c9cdb4c88 100644 --- a/docs/api/python/downloads/7c8424f45d0156abd4d0221c65601124/plot_load_and_predict.py +++ b/docs/api/python/downloads/7c8424f45d0156abd4d0221c65601124/plot_load_and_predict.py @@ -12,16 +12,17 @@ retrieve the definition of its inputs and outputs. """ -import onnxruntime as rt import numpy + +import onnxruntime as rt from onnxruntime.datasets import get_example ######################### # Let's load a very simple model. -# The model is available on github `onnx...test_sigmoid `_. +# The model is available on github `onnx...test_sigmoid `_. example1 = get_example("sigmoid.onnx") -sess = rt.InferenceSession(example1) +sess = rt.InferenceSession(example1, providers=rt.get_available_providers()) ######################### # Let's see the input name and shape. @@ -37,7 +38,7 @@ # Let's see the output name and shape. output_name = sess.get_outputs()[0].name -print("output name", output_name) +print("output name", output_name) output_shape = sess.get_outputs()[0].shape print("output shape", output_shape) output_type = sess.get_outputs()[0].type @@ -47,7 +48,8 @@ # Let's compute its outputs (or predictions if it is a machine learned model). import numpy.random -x = numpy.random.random((3,4,5)) + +x = numpy.random.random((3, 4, 5)) x = x.astype(numpy.float32) res = sess.run([output_name], {input_name: x}) print(res) diff --git a/docs/api/python/downloads/7d69185b02f38811ad5d0593ec22c99d/plot_metadata.py b/docs/api/python/downloads/7d69185b02f38811ad5d0593ec22c99d/plot_metadata.py deleted file mode 100644 index df5d15276c634..0000000000000 --- a/docs/api/python/downloads/7d69185b02f38811ad5d0593ec22c99d/plot_metadata.py +++ /dev/null @@ -1,43 +0,0 @@ -# Copyright (c) Microsoft Corporation. All rights reserved. -# Licensed under the MIT License. - -""" -Metadata -======== - -ONNX format contains metadata related to how the -model was produced. It is useful when the model -is deployed to production to keep track of which -instance was used at a specific time. -Let's see how to do that with a simple -logistic regression model trained with -*scikit-learn* and converted with *sklearn-onnx*. 
-""" - -from onnxruntime.datasets import get_example -example = get_example("logreg_iris.onnx") - -import onnx -model = onnx.load(example) - -print("doc_string={}".format(model.doc_string)) -print("domain={}".format(model.domain)) -print("ir_version={}".format(model.ir_version)) -print("metadata_props={}".format(model.metadata_props)) -print("model_version={}".format(model.model_version)) -print("producer_name={}".format(model.producer_name)) -print("producer_version={}".format(model.producer_version)) - -############################# -# With *ONNX Runtime*: - -from onnxruntime import InferenceSession -sess = InferenceSession(example) -meta = sess.get_modelmeta() - -print("custom_metadata_map={}".format(meta.custom_metadata_map)) -print("description={}".format(meta.description)) -print("domain={}".format(meta.domain, meta.domain)) -print("graph_name={}".format(meta.graph_name)) -print("producer_name={}".format(meta.producer_name)) -print("version={}".format(meta.version)) diff --git a/docs/api/python/downloads/8547b931339c42011a7c05ff3b5f5373/plot_common_errors.py b/docs/api/python/downloads/8547b931339c42011a7c05ff3b5f5373/plot_common_errors.py deleted file mode 100644 index 0d98e17c45dff..0000000000000 --- a/docs/api/python/downloads/8547b931339c42011a7c05ff3b5f5373/plot_common_errors.py +++ /dev/null @@ -1,117 +0,0 @@ -# Copyright (c) Microsoft Corporation. All rights reserved. -# Licensed under the MIT License. - -""" -.. _l-example-common-error: - -Common errors with onnxruntime -============================== - -This example looks into several common situations -in which *onnxruntime* does not return the model -prediction but raises an exception instead. -It starts by loading the model trained in example -:ref:`l-logreg-example` which produced a logistic regression -trained on *Iris* datasets. The model takes -a vector of dimension 2 and returns a class among three. -""" -import onnxruntime as rt -from onnxruntime.capi.onnxruntime_pybind11_state import InvalidArgument -import numpy -from onnxruntime.datasets import get_example - -example2 = get_example("logreg_iris.onnx") -sess = rt.InferenceSession(example2) - -input_name = sess.get_inputs()[0].name -output_name = sess.get_outputs()[0].name - -############################# -# The first example fails due to *bad types*. -# *onnxruntime* only expects single floats (4 bytes) -# and cannot handle any other kind of floats. - -try: - x = numpy.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]], dtype=numpy.float64) - sess.run([output_name], {input_name: x}) -except Exception as e: - print("Unexpected type") - print("{0}: {1}".format(type(e), e)) - -######################### -# The model fails to return an output if the name -# is misspelled. - -try: - x = numpy.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]], dtype=numpy.float32) - sess.run(["misspelled"], {input_name: x}) -except Exception as e: - print("Misspelled output name") - print("{0}: {1}".format(type(e), e)) - -########################### -# The output name is optional, it can be replaced by *None* -# and *onnxruntime* will then return all the outputs. - -x = numpy.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]], dtype=numpy.float32) -try: - res = sess.run(None, {input_name: x}) - print("All outputs") - print(res) -except (RuntimeError, InvalidArgument) as e: - print(e) - -######################### -# The same goes if the input name is misspelled. 
- -try: - x = numpy.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]], dtype=numpy.float32) - sess.run([output_name], {"misspelled": x}) -except Exception as e: - print("Misspelled input name") - print("{0}: {1}".format(type(e), e)) - -######################### -# *onnxruntime* does not necessarily fail if the input -# dimension is a multiple of the expected input dimension. - -for x in [ - numpy.array([1.0, 2.0, 3.0, 4.0], dtype=numpy.float32), - numpy.array([[1.0, 2.0, 3.0, 4.0]], dtype=numpy.float32), - numpy.array([[1.0, 2.0], [3.0, 4.0]], dtype=numpy.float32), - numpy.array([1.0, 2.0, 3.0], dtype=numpy.float32), - numpy.array([[1.0, 2.0, 3.0]], dtype=numpy.float32), - ]: - try: - r = sess.run([output_name], {input_name: x}) - print("Shape={0} and predicted labels={1}".format(x.shape, r)) - except (RuntimeError, InvalidArgument) as e: - print("ERROR with Shape={0} - {1}".format(x.shape, e)) - -for x in [ - numpy.array([1.0, 2.0, 3.0, 4.0], dtype=numpy.float32), - numpy.array([[1.0, 2.0, 3.0, 4.0]], dtype=numpy.float32), - numpy.array([[1.0, 2.0], [3.0, 4.0]], dtype=numpy.float32), - numpy.array([1.0, 2.0, 3.0], dtype=numpy.float32), - numpy.array([[1.0, 2.0, 3.0]], dtype=numpy.float32), - ]: - try: - r = sess.run(None, {input_name: x}) - print("Shape={0} and predicted probabilities={1}".format(x.shape, r[1])) - except (RuntimeError, InvalidArgument) as e: - print("ERROR with Shape={0} - {1}".format(x.shape, e)) - -######################### -# It does not fail either if the number of dimension -# is higher than expects but produces a warning. - -for x in [ - numpy.array([[[1.0, 2.0], [3.0, 4.0]]], dtype=numpy.float32), - numpy.array([[[1.0, 2.0, 3.0]]], dtype=numpy.float32), - numpy.array([[[1.0, 2.0]], [[3.0, 4.0]]], dtype=numpy.float32), - ]: - try: - r = sess.run([output_name], {input_name: x}) - print("Shape={0} and predicted labels={1}".format(x.shape, r)) - except (RuntimeError, InvalidArgument) as e: - print("ERROR with Shape={0} - {1}".format(x.shape, e)) diff --git a/docs/api/python/downloads/8dd0c9d6c4ab396fa8f24f6321edc4b0/plot_common_errors.ipynb b/docs/api/python/downloads/8dd0c9d6c4ab396fa8f24f6321edc4b0/plot_common_errors.ipynb deleted file mode 100644 index 1dadcdcca3182..0000000000000 --- a/docs/api/python/downloads/8dd0c9d6c4ab396fa8f24f6321edc4b0/plot_common_errors.ipynb +++ /dev/null @@ -1,162 +0,0 @@ -{ - "cells": [ - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "collapsed": false - }, - "outputs": [], - "source": [ - "%matplotlib inline" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "\n\n# Common errors with onnxruntime\n\nThis example looks into several common situations\nin which *onnxruntime* does not return the model \nprediction but raises an exception instead.\nIt starts by loading the model trained in example\n`l-logreg-example` which produced a logistic regression\ntrained on *Iris* datasets. 
The model takes\na vector of dimension 2 and returns a class among three.\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "collapsed": false - }, - "outputs": [], - "source": [ - "import onnxruntime as rt\nfrom onnxruntime.capi.onnxruntime_pybind11_state import InvalidArgument\nimport numpy\nfrom onnxruntime.datasets import get_example\n\nexample2 = get_example(\"logreg_iris.onnx\")\nsess = rt.InferenceSession(example2)\n\ninput_name = sess.get_inputs()[0].name\noutput_name = sess.get_outputs()[0].name" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "The first example fails due to *bad types*.\n*onnxruntime* only expects single floats (4 bytes)\nand cannot handle any other kind of floats.\n\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "collapsed": false - }, - "outputs": [], - "source": [ - "try:\n x = numpy.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]], dtype=numpy.float64)\n sess.run([output_name], {input_name: x})\nexcept Exception as e:\n print(\"Unexpected type\")\n print(\"{0}: {1}\".format(type(e), e))" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "The model fails to return an output if the name\nis misspelled.\n\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "collapsed": false - }, - "outputs": [], - "source": [ - "try:\n x = numpy.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]], dtype=numpy.float32)\n sess.run([\"misspelled\"], {input_name: x})\nexcept Exception as e:\n print(\"Misspelled output name\")\n print(\"{0}: {1}\".format(type(e), e))" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "The output name is optional, it can be replaced by *None*\nand *onnxruntime* will then return all the outputs.\n\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "collapsed": false - }, - "outputs": [], - "source": [ - "x = numpy.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]], dtype=numpy.float32)\ntry:\n res = sess.run(None, {input_name: x})\n print(\"All outputs\")\n print(res)\nexcept (RuntimeError, InvalidArgument) as e:\n print(e)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "The same goes if the input name is misspelled.\n\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "collapsed": false - }, - "outputs": [], - "source": [ - "try:\n x = numpy.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]], dtype=numpy.float32)\n sess.run([output_name], {\"misspelled\": x})\nexcept Exception as e:\n print(\"Misspelled input name\")\n print(\"{0}: {1}\".format(type(e), e))" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "*onnxruntime* does not necessarily fail if the input\ndimension is a multiple of the expected input dimension.\n\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "collapsed": false - }, - "outputs": [], - "source": [ - "for x in [\n numpy.array([1.0, 2.0, 3.0, 4.0], dtype=numpy.float32),\n numpy.array([[1.0, 2.0, 3.0, 4.0]], dtype=numpy.float32),\n numpy.array([[1.0, 2.0], [3.0, 4.0]], dtype=numpy.float32),\n numpy.array([1.0, 2.0, 3.0], dtype=numpy.float32),\n numpy.array([[1.0, 2.0, 3.0]], dtype=numpy.float32),\n ]:\n try:\n r = sess.run([output_name], {input_name: x})\n print(\"Shape={0} and predicted labels={1}\".format(x.shape, r))\n except (RuntimeError, InvalidArgument) as e:\n print(\"ERROR with Shape={0} - {1}\".format(x.shape, e))\n\nfor x in 
[\n numpy.array([1.0, 2.0, 3.0, 4.0], dtype=numpy.float32),\n numpy.array([[1.0, 2.0, 3.0, 4.0]], dtype=numpy.float32),\n numpy.array([[1.0, 2.0], [3.0, 4.0]], dtype=numpy.float32),\n numpy.array([1.0, 2.0, 3.0], dtype=numpy.float32),\n numpy.array([[1.0, 2.0, 3.0]], dtype=numpy.float32),\n ]:\n try:\n r = sess.run(None, {input_name: x})\n print(\"Shape={0} and predicted probabilities={1}\".format(x.shape, r[1]))\n except (RuntimeError, InvalidArgument) as e:\n print(\"ERROR with Shape={0} - {1}\".format(x.shape, e))" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "It does not fail either if the number of dimension\nis higher than expects but produces a warning.\n\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "collapsed": false - }, - "outputs": [], - "source": [ - "for x in [\n numpy.array([[[1.0, 2.0], [3.0, 4.0]]], dtype=numpy.float32),\n numpy.array([[[1.0, 2.0, 3.0]]], dtype=numpy.float32),\n numpy.array([[[1.0, 2.0]], [[3.0, 4.0]]], dtype=numpy.float32),\n ]:\n try:\n r = sess.run([output_name], {input_name: x})\n print(\"Shape={0} and predicted labels={1}\".format(x.shape, r))\n except (RuntimeError, InvalidArgument) as e:\n print(\"ERROR with Shape={0} - {1}\".format(x.shape, e))" - ] - } - ], - "metadata": { - "kernelspec": { - "display_name": "Python 3", - "language": "python", - "name": "python3" - }, - "language_info": { - "codemirror_mode": { - "name": "ipython", - "version": 3 - }, - "file_extension": ".py", - "mimetype": "text/x-python", - "name": "python", - "nbconvert_exporter": "python", - "pygments_lexer": "ipython3", - "version": "3.7.2" - } - }, - "nbformat": 4, - "nbformat_minor": 0 -} \ No newline at end of file diff --git a/docs/api/python/downloads/932fe1ee7f48f55a6155d2f378bc85a0/plot_metadata.py b/docs/api/python/downloads/932fe1ee7f48f55a6155d2f378bc85a0/plot_metadata.py index df5d15276c634..c76f2e8d9fa7f 100644 --- a/docs/api/python/downloads/932fe1ee7f48f55a6155d2f378bc85a0/plot_metadata.py +++ b/docs/api/python/downloads/932fe1ee7f48f55a6155d2f378bc85a0/plot_metadata.py @@ -15,9 +15,11 @@ """ from onnxruntime.datasets import get_example + example = get_example("logreg_iris.onnx") import onnx + model = onnx.load(example) print("doc_string={}".format(model.doc_string)) @@ -31,8 +33,9 @@ ############################# # With *ONNX Runtime*: -from onnxruntime import InferenceSession -sess = InferenceSession(example) +import onnxruntime as rt + +sess = rt.InferenceSession(example, providers=rt.get_available_providers()) meta = sess.get_modelmeta() print("custom_metadata_map={}".format(meta.custom_metadata_map)) diff --git a/docs/api/python/downloads/940d20a5bdb46eb51fb878fbb3bd928d/plot_metadata.ipynb b/docs/api/python/downloads/940d20a5bdb46eb51fb878fbb3bd928d/plot_metadata.ipynb index 4246eaa7057a6..3ea14d7fe686e 100644 --- a/docs/api/python/downloads/940d20a5bdb46eb51fb878fbb3bd928d/plot_metadata.ipynb +++ b/docs/api/python/downloads/940d20a5bdb46eb51fb878fbb3bd928d/plot_metadata.ipynb @@ -26,7 +26,7 @@ }, "outputs": [], "source": [ - "from onnxruntime.datasets import get_example\nexample = get_example(\"logreg_iris.onnx\")\n\nimport onnx\nmodel = 
onnx.load(example)\n\nprint(\"doc_string={}\".format(model.doc_string))\nprint(\"domain={}\".format(model.domain))\nprint(\"ir_version={}\".format(model.ir_version))\nprint(\"metadata_props={}\".format(model.metadata_props))\nprint(\"model_version={}\".format(model.model_version))\nprint(\"producer_name={}\".format(model.producer_name))\nprint(\"producer_version={}\".format(model.producer_version))" + "from onnxruntime.datasets import get_example\n\nexample = get_example(\"logreg_iris.onnx\")\n\nimport onnx\n\nmodel = onnx.load(example)\n\nprint(\"doc_string={}\".format(model.doc_string))\nprint(\"domain={}\".format(model.domain))\nprint(\"ir_version={}\".format(model.ir_version))\nprint(\"metadata_props={}\".format(model.metadata_props))\nprint(\"model_version={}\".format(model.model_version))\nprint(\"producer_name={}\".format(model.producer_name))\nprint(\"producer_version={}\".format(model.producer_version))" ] }, { @@ -44,7 +44,7 @@ }, "outputs": [], "source": [ - "from onnxruntime import InferenceSession\nsess = InferenceSession(example)\nmeta = sess.get_modelmeta()\n\nprint(\"custom_metadata_map={}\".format(meta.custom_metadata_map))\nprint(\"description={}\".format(meta.description))\nprint(\"domain={}\".format(meta.domain, meta.domain))\nprint(\"graph_name={}\".format(meta.graph_name))\nprint(\"producer_name={}\".format(meta.producer_name))\nprint(\"version={}\".format(meta.version))" + "import onnxruntime as rt\n\nsess = rt.InferenceSession(example, providers=rt.get_available_providers())\nmeta = sess.get_modelmeta()\n\nprint(\"custom_metadata_map={}\".format(meta.custom_metadata_map))\nprint(\"description={}\".format(meta.description))\nprint(\"domain={}\".format(meta.domain, meta.domain))\nprint(\"graph_name={}\".format(meta.graph_name))\nprint(\"producer_name={}\".format(meta.producer_name))\nprint(\"version={}\".format(meta.version))" ] } ], @@ -64,7 +64,7 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.7.0" + "version": "3.8.10" } }, "nbformat": 4, diff --git a/docs/api/python/downloads/94ba4b1ad7abc78b59124c6a39f9a075/plot_pipeline.py b/docs/api/python/downloads/94ba4b1ad7abc78b59124c6a39f9a075/plot_pipeline.py deleted file mode 100644 index 0a002f6223e1b..0000000000000 --- a/docs/api/python/downloads/94ba4b1ad7abc78b59124c6a39f9a075/plot_pipeline.py +++ /dev/null @@ -1,69 +0,0 @@ -# Copyright (c) Microsoft Corporation. All rights reserved. -# Licensed under the MIT License. - -""" -Draw a pipeline -=============== - -There is no other way to look into one model stored -in ONNX format than looking into its node with -*onnx*. This example demonstrates -how to draw a model and to retrieve it in *json* -format. - -.. contents:: - :local: - -Retrieve a model in JSON format -+++++++++++++++++++++++++++++++ - -That's the most simple way. -""" - -from onnxruntime.datasets import get_example -example1 = get_example("mul_1.onnx") - -import onnx -model = onnx.load(example1) # model is a ModelProto protobuf message - -print(model) - - -################################# -# Draw a model with ONNX -# ++++++++++++++++++++++ -# We use `net_drawer.py `_ -# included in *onnx* package. -# We use *onnx* to load the model -# in a different way than before. - - -from onnx import ModelProto -model = ModelProto() -with open(example1, 'rb') as fid: - content = fid.read() - model.ParseFromString(content) - -################################### -# We convert it into a graph. 
-from onnx.tools.net_drawer import GetPydotGraph, GetOpNodeProducer -pydot_graph = GetPydotGraph(model.graph, name=model.graph.name, rankdir="LR", - node_producer=GetOpNodeProducer("docstring")) -pydot_graph.write_dot("graph.dot") - -####################################### -# Then into an image -import os -os.system('dot -O -Tpng graph.dot') - -################################ -# Which we display... -import matplotlib.pyplot as plt -image = plt.imread("graph.dot.png") -plt.imshow(image) - - - - - - diff --git a/docs/api/python/downloads/982a1f7abbb8ffc5d5e98b671c35e5aa/plot_convert_pipeline_vectorizer.py b/docs/api/python/downloads/982a1f7abbb8ffc5d5e98b671c35e5aa/plot_convert_pipeline_vectorizer.py index 0de0b30e28de0..3df6d6dfea9bf 100644 --- a/docs/api/python/downloads/982a1f7abbb8ffc5d5e98b671c35e5aa/plot_convert_pipeline_vectorizer.py +++ b/docs/api/python/downloads/982a1f7abbb8ffc5d5e98b671c35e5aa/plot_convert_pipeline_vectorizer.py @@ -21,24 +21,25 @@ """ import pandas from sklearn.datasets import load_boston + boston = load_boston() X, y = boston.data, boston.target from sklearn.model_selection import train_test_split + X_train, X_test, y_train, y_test = train_test_split(X, y) -X_train_dict = pandas.DataFrame(X_train[:,1:]).T.to_dict().values() -X_test_dict = pandas.DataFrame(X_test[:,1:]).T.to_dict().values() +X_train_dict = pandas.DataFrame(X_train[:, 1:]).T.to_dict().values() +X_test_dict = pandas.DataFrame(X_test[:, 1:]).T.to_dict().values() #################################### # We create a pipeline. -from sklearn.pipeline import make_pipeline from sklearn.ensemble import GradientBoostingRegressor from sklearn.feature_extraction import DictVectorizer -pipe = make_pipeline( - DictVectorizer(sparse=False), - GradientBoostingRegressor()) - +from sklearn.pipeline import make_pipeline + +pipe = make_pipeline(DictVectorizer(sparse=False), GradientBoostingRegressor()) + pipe.fit(X_train_dict, y_train) #################################### @@ -53,15 +54,15 @@ # Conversion to ONNX format # +++++++++++++++++++++++++ # -# We use module +# We use module # `sklearn-onnx `_ # to convert the model into ONNX format. from skl2onnx import convert_sklearn -from skl2onnx.common.data_types import FloatTensorType, Int64TensorType, DictionaryType, SequenceType +from skl2onnx.common.data_types import DictionaryType, FloatTensorType, Int64TensorType, SequenceType # initial_type = [('float_input', DictionaryType(Int64TensorType([1]), FloatTensorType([])))] -initial_type = [('float_input', DictionaryType(Int64TensorType([1]), FloatTensorType([])))] +initial_type = [("float_input", DictionaryType(Int64TensorType([1]), FloatTensorType([])))] onx = convert_sklearn(pipe, initial_types=initial_type) with open("pipeline_vectorize.onnx", "wb") as f: f.write(onx.SerializeToString()) @@ -72,9 +73,10 @@ import onnxruntime as rt from onnxruntime.capi.onnxruntime_pybind11_state import InvalidArgument -sess = rt.InferenceSession("pipeline_vectorize.onnx") +sess = rt.InferenceSession("pipeline_vectorize.onnx", providers=rt.get_available_providers()) import numpy + inp, out = sess.get_inputs()[0], sess.get_outputs()[0] print("input name='{}' and shape={} and type={}".format(inp.name, inp.shape, inp.type)) print("output name='{}' and shape={} and type={}".format(out.name, out.shape, out.type)) @@ -100,4 +102,3 @@ ######################### # Very similar. *ONNX Runtime* uses floats instead of doubles, # that explains the small discrepencies. 
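A side note on the drawing example above: pydot can invoke Graphviz itself instead of shelling out with os.system. This is a sketch, assuming the dot executable is on the PATH and reusing the pydot_graph built above:

# Same rendering as 'dot -O -Tpng graph.dot', without the shell call.
pydot_graph.write_png("graph.png")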
- diff --git a/docs/api/python/downloads/a8a3fe6bdcf0fdfbed17a5a0f2384419/plot_profiling.ipynb b/docs/api/python/downloads/a8a3fe6bdcf0fdfbed17a5a0f2384419/plot_profiling.ipynb deleted file mode 100644 index 575a43864bfb0..0000000000000 --- a/docs/api/python/downloads/a8a3fe6bdcf0fdfbed17a5a0f2384419/plot_profiling.ipynb +++ /dev/null @@ -1,108 +0,0 @@ -{ - "cells": [ - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "collapsed": false - }, - "outputs": [], - "source": [ - "%matplotlib inline" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "\n\n# Profile the execution of a simple model\n\n*ONNX Runtime* can profile the execution of the model.\nThis example shows how to interpret the results.\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "collapsed": false - }, - "outputs": [], - "source": [ - "import onnx\nimport onnxruntime as rt\nimport numpy\nfrom onnxruntime.datasets import get_example\n\n\ndef change_ir_version(filename, ir_version=6):\n \"onnxruntime==1.2.0 does not support opset <= 7 and ir_version > 6\"\n with open(filename, \"rb\") as f:\n model = onnx.load(f)\n model.ir_version = 6\n if model.opset_import[0].version <= 7:\n model.opset_import[0].version = 11\n return model" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "Let's load a very simple model and compute some prediction.\n\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "collapsed": false - }, - "outputs": [], - "source": [ - "example1 = get_example(\"mul_1.onnx\")\nonnx_model = change_ir_version(example1)\nonnx_model_str = onnx_model.SerializeToString()\nsess = rt.InferenceSession(onnx_model_str)\ninput_name = sess.get_inputs()[0].name\n\nx = numpy.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]], dtype=numpy.float32)\nres = sess.run(None, {input_name: x})\nprint(res)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "We need to enable to profiling\nbefore running the predictions.\n\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "collapsed": false - }, - "outputs": [], - "source": [ - "options = rt.SessionOptions()\noptions.enable_profiling = True\nsess_profile = rt.InferenceSession(onnx_model_str, options)\ninput_name = sess.get_inputs()[0].name\n\nx = numpy.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]], dtype=numpy.float32)\n\nsess.run(None, {input_name: x})\nprof_file = sess_profile.end_profiling()\nprint(prof_file)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "The results are stored un a file in JSON format.\nLet's see what it contains.\n\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "collapsed": false - }, - "outputs": [], - "source": [ - "import json\nwith open(prof_file, \"r\") as f:\n sess_time = json.load(f)\nimport pprint\npprint.pprint(sess_time)" - ] - } - ], - "metadata": { - "kernelspec": { - "display_name": "Python 3", - "language": "python", - "name": "python3" - }, - "language_info": { - "codemirror_mode": { - "name": "ipython", - "version": 3 - }, - "file_extension": ".py", - "mimetype": "text/x-python", - "name": "python", - "nbconvert_exporter": "python", - "pygments_lexer": "ipython3", - "version": "3.7.2" - } - }, - "nbformat": 4, - "nbformat_minor": 0 -} \ No newline at end of file diff --git a/docs/api/python/downloads/ae37e1cd2f800a09b44216a88ac39d30/plot_metadata.ipynb 
b/docs/api/python/downloads/ae37e1cd2f800a09b44216a88ac39d30/plot_metadata.ipynb deleted file mode 100644 index 0aa5560d09b44..0000000000000 --- a/docs/api/python/downloads/ae37e1cd2f800a09b44216a88ac39d30/plot_metadata.ipynb +++ /dev/null @@ -1,72 +0,0 @@ -{ - "cells": [ - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "collapsed": false - }, - "outputs": [], - "source": [ - "%matplotlib inline" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "\n# Metadata\n\nONNX format contains metadata related to how the\nmodel was produced. It is useful when the model\nis deployed to production to keep track of which\ninstance was used at a specific time.\nLet's see how to do that with a simple \nlogistic regression model trained with\n*scikit-learn* and converted with *sklearn-onnx*.\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "collapsed": false - }, - "outputs": [], - "source": [ - "from onnxruntime.datasets import get_example\nexample = get_example(\"logreg_iris.onnx\")\n\nimport onnx\nmodel = onnx.load(example)\n\nprint(\"doc_string={}\".format(model.doc_string))\nprint(\"domain={}\".format(model.domain))\nprint(\"ir_version={}\".format(model.ir_version))\nprint(\"metadata_props={}\".format(model.metadata_props))\nprint(\"model_version={}\".format(model.model_version))\nprint(\"producer_name={}\".format(model.producer_name))\nprint(\"producer_version={}\".format(model.producer_version))" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "With *ONNX Runtime*:\n\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "collapsed": false - }, - "outputs": [], - "source": [ - "from onnxruntime import InferenceSession\nsess = InferenceSession(example)\nmeta = sess.get_modelmeta()\n\nprint(\"custom_metadata_map={}\".format(meta.custom_metadata_map))\nprint(\"description={}\".format(meta.description))\nprint(\"domain={}\".format(meta.domain, meta.domain))\nprint(\"graph_name={}\".format(meta.graph_name))\nprint(\"producer_name={}\".format(meta.producer_name))\nprint(\"version={}\".format(meta.version))" - ] - } - ], - "metadata": { - "kernelspec": { - "display_name": "Python 3", - "language": "python", - "name": "python3" - }, - "language_info": { - "codemirror_mode": { - "name": "ipython", - "version": 3 - }, - "file_extension": ".py", - "mimetype": "text/x-python", - "name": "python", - "nbconvert_exporter": "python", - "pygments_lexer": "ipython3", - "version": "3.7.2" - } - }, - "nbformat": 4, - "nbformat_minor": 0 -} \ No newline at end of file diff --git a/docs/api/python/downloads/b81c8c31615f9400a26ee60f0641af3f/plot_dl_keras.py b/docs/api/python/downloads/b81c8c31615f9400a26ee60f0641af3f/plot_dl_keras.py deleted file mode 100644 index 949ee895e5912..0000000000000 --- a/docs/api/python/downloads/b81c8c31615f9400a26ee60f0641af3f/plot_dl_keras.py +++ /dev/null @@ -1,94 +0,0 @@ -# Copyright (c) Microsoft Corporation. All rights reserved. -# Licensed under the MIT License. - -""" - -.. _l-example-backend-api-tensorflow: - -ONNX Runtime for Keras -====================== - -The following demonstrates how to compute the predictions -of a pretrained deep learning model obtained from -`keras `_ -with *onnxruntime*. The conversion requires -`keras `_, -`tensorflow `_, -`keras-onnx `_, -`onnxmltools `_ -but then only *onnxruntime* is required -to compute the predictions. 
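Once the conversion above has produced dense121.onnx, a smoke test needs only onnxruntime and numpy. A sketch, assuming the conversion succeeded; the (1, 224, 224, 3) shape matches the resizing step shown further down:

import numpy
import onnxruntime as rt

sess = rt.InferenceSession("dense121.onnx", providers=rt.get_available_providers())
x = numpy.random.random((1, 224, 224, 3)).astype(numpy.float32)
# One dummy image through the network; for an ImageNet classifier the
# result should carry 1000 class scores per image.
print(sess.run(None, {sess.get_inputs()[0].name: x})[0].shape)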
-""" -import os -if not os.path.exists('dense121.onnx'): - from keras.applications.densenet import DenseNet121 - model = DenseNet121(include_top=True, weights='imagenet') - - from keras2onnx import convert_keras - onx = convert_keras(model, 'dense121.onnx') - with open("dense121.onnx", "wb") as f: - f.write(onx.SerializeToString()) - -################################## -# Let's load an image (source: wikipedia). - -from keras.preprocessing.image import array_to_img, img_to_array, load_img -img = load_img('Sannosawa1.jpg') -ximg = img_to_array(img) - -import matplotlib.pyplot as plt -plt.imshow(ximg / 255) -plt.axis('off') - -############################################# -# Let's load the model with onnxruntime. -import onnxruntime as rt -from onnxruntime.capi.onnxruntime_pybind11_state import InvalidGraph - -try: - sess = rt.InferenceSession('dense121.onnx') - ok = True -except (InvalidGraph, TypeError, RuntimeError) as e: - # Probably a mismatch between onnxruntime and onnx version. - print(e) - ok = False - -if ok: - print("The model expects input shape:", sess.get_inputs()[0].shape) - print("image shape:", ximg.shape) - -####################################### -# Let's resize the image. - -if ok: - from skimage.transform import resize - import numpy - - ximg224 = resize(ximg / 255, (224, 224, 3), anti_aliasing=True) - ximg = ximg224[numpy.newaxis, :, :, :] - ximg = ximg.astype(numpy.float32) - - print("new shape:", ximg.shape) - -################################## -# Let's compute the output. - -if ok: - input_name = sess.get_inputs()[0].name - res = sess.run(None, {input_name: ximg}) - prob = res[0] - print(prob.ravel()[:10]) # Too big to be displayed. - - -################################## -# Let's get more comprehensive results. - -if ok: - from keras.applications.densenet import decode_predictions - decoded = decode_predictions(prob) - - import pandas - df = pandas.DataFrame(decoded[0], columns=["class_id", "name", "P"]) - print(df) - - diff --git a/docs/api/python/downloads/be7aeaf8b9b95af92780b5d952019035/plot_convert_pipeline_vectorizer.ipynb b/docs/api/python/downloads/be7aeaf8b9b95af92780b5d952019035/plot_convert_pipeline_vectorizer.ipynb deleted file mode 100644 index 2b285b76adecd..0000000000000 --- a/docs/api/python/downloads/be7aeaf8b9b95af92780b5d952019035/plot_convert_pipeline_vectorizer.ipynb +++ /dev/null @@ -1,187 +0,0 @@ -{ - "cells": [ - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "collapsed": false - }, - "outputs": [], - "source": [ - "%matplotlib inline" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "\n# Train, convert and predict with ONNX Runtime\n\nThis example demonstrates an end to end scenario\nstarting with the training of a scikit-learn pipeline\nwhich takes as inputs not a regular vector but a\ndictionary ``{ int: float }`` as its first step is a\n`DictVectorizer `_.\n\n## Train a pipeline\n\nThe first step consists in retrieving the boston datset.\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "collapsed": false - }, - "outputs": [], - "source": [ - "import pandas\nfrom sklearn.datasets import load_boston\nboston = load_boston()\nX, y = boston.data, boston.target\n\nfrom sklearn.model_selection import train_test_split\nX_train, X_test, y_train, y_test = train_test_split(X, y)\nX_train_dict = pandas.DataFrame(X_train[:,1:]).T.to_dict().values()\nX_test_dict = pandas.DataFrame(X_test[:,1:]).T.to_dict().values()" - ] - }, - { - "cell_type": "markdown", - "metadata": 
{}, - "source": [ - "We create a pipeline.\n\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "collapsed": false - }, - "outputs": [], - "source": [ - "from sklearn.pipeline import make_pipeline\nfrom sklearn.ensemble import GradientBoostingRegressor\nfrom sklearn.feature_extraction import DictVectorizer\npipe = make_pipeline(\n DictVectorizer(sparse=False),\n GradientBoostingRegressor())\n \npipe.fit(X_train_dict, y_train)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "We compute the prediction on the test set\nand we show the confusion matrix.\n\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "collapsed": false - }, - "outputs": [], - "source": [ - "from sklearn.metrics import r2_score\n\npred = pipe.predict(X_test_dict)\nprint(r2_score(y_test, pred))" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Conversion to ONNX format\n\nWe use module \n`sklearn-onnx `_\nto convert the model into ONNX format.\n\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "collapsed": false - }, - "outputs": [], - "source": [ - "from skl2onnx import convert_sklearn\nfrom skl2onnx.common.data_types import FloatTensorType, Int64TensorType, DictionaryType, SequenceType\n\n# initial_type = [('float_input', DictionaryType(Int64TensorType([1]), FloatTensorType([])))]\ninitial_type = [('float_input', DictionaryType(Int64TensorType([1]), FloatTensorType([])))]\nonx = convert_sklearn(pipe, initial_types=initial_type)\nwith open(\"pipeline_vectorize.onnx\", \"wb\") as f:\n f.write(onx.SerializeToString())" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "We load the model with ONNX Runtime and look at\nits input and output.\n\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "collapsed": false - }, - "outputs": [], - "source": [ - "import onnxruntime as rt\nfrom onnxruntime.capi.onnxruntime_pybind11_state import InvalidArgument\n\nsess = rt.InferenceSession(\"pipeline_vectorize.onnx\")\n\nimport numpy\ninp, out = sess.get_inputs()[0], sess.get_outputs()[0]\nprint(\"input name='{}' and shape={} and type={}\".format(inp.name, inp.shape, inp.type))\nprint(\"output name='{}' and shape={} and type={}\".format(out.name, out.shape, out.type))" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "We compute the predictions.\nWe could do that in one call:\n\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "collapsed": false - }, - "outputs": [], - "source": [ - "try:\n pred_onx = sess.run([out.name], {inp.name: X_test_dict})[0]\nexcept (RuntimeError, InvalidArgument) as e:\n print(e)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "But it fails because, in case of a DictVectorizer,\nONNX Runtime expects one observation at a time.\n\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "collapsed": false - }, - "outputs": [], - "source": [ - "pred_onx = [sess.run([out.name], {inp.name: row})[0][0, 0] for row in X_test_dict]" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "We compare them to the model's ones.\n\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "collapsed": false - }, - "outputs": [], - "source": [ - "print(r2_score(pred, pred_onx))" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "Very similar. 
*ONNX Runtime* uses floats instead of doubles,\nthat explains the small discrepencies.\n\n" - ] - } - ], - "metadata": { - "kernelspec": { - "display_name": "Python 3", - "language": "python", - "name": "python3" - }, - "language_info": { - "codemirror_mode": { - "name": "ipython", - "version": 3 - }, - "file_extension": ".py", - "mimetype": "text/x-python", - "name": "python", - "nbconvert_exporter": "python", - "pygments_lexer": "ipython3", - "version": "3.7.2" - } - }, - "nbformat": 4, - "nbformat_minor": 0 -} \ No newline at end of file diff --git a/docs/api/python/downloads/c2f3ba3f66159c615632f39a9ad32dee/plot_pipeline.ipynb b/docs/api/python/downloads/c2f3ba3f66159c615632f39a9ad32dee/plot_pipeline.ipynb deleted file mode 100644 index 6ccf3e2972926..0000000000000 --- a/docs/api/python/downloads/c2f3ba3f66159c615632f39a9ad32dee/plot_pipeline.ipynb +++ /dev/null @@ -1,126 +0,0 @@ -{ - "cells": [ - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "collapsed": false - }, - "outputs": [], - "source": [ - "%matplotlib inline" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "\n# Draw a pipeline\n\nThere is no other way to look into one model stored\nin ONNX format than looking into its node with \n*onnx*. This example demonstrates\nhow to draw a model and to retrieve it in *json*\nformat.\n\n## Retrieve a model in JSON format\n\nThat's the most simple way.\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "collapsed": false - }, - "outputs": [], - "source": [ - "from onnxruntime.datasets import get_example\nexample1 = get_example(\"mul_1.onnx\")\n\nimport onnx\nmodel = onnx.load(example1) # model is a ModelProto protobuf message\n\nprint(model)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Draw a model with ONNX\nWe use `net_drawer.py `_\nincluded in *onnx* package.\nWe use *onnx* to load the model\nin a different way than before.\n\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "collapsed": false - }, - "outputs": [], - "source": [ - "from onnx import ModelProto\nmodel = ModelProto()\nwith open(example1, 'rb') as fid:\n content = fid.read()\n model.ParseFromString(content)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "We convert it into a graph.\n\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "collapsed": false - }, - "outputs": [], - "source": [ - "from onnx.tools.net_drawer import GetPydotGraph, GetOpNodeProducer\npydot_graph = GetPydotGraph(model.graph, name=model.graph.name, rankdir=\"LR\",\n node_producer=GetOpNodeProducer(\"docstring\"))\npydot_graph.write_dot(\"graph.dot\")" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "Then into an image\n\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "collapsed": false - }, - "outputs": [], - "source": [ - "import os\nos.system('dot -O -Tpng graph.dot')" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "Which we display...\n\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "collapsed": false - }, - "outputs": [], - "source": [ - "import matplotlib.pyplot as plt\nimage = plt.imread(\"graph.dot.png\")\nplt.imshow(image)" - ] - } - ], - "metadata": { - "kernelspec": { - "display_name": "Python 3", - "language": "python", - "name": "python3" - }, - "language_info": { - "codemirror_mode": { - "name": "ipython", - 
"version": 3 - }, - "file_extension": ".py", - "mimetype": "text/x-python", - "name": "python", - "nbconvert_exporter": "python", - "pygments_lexer": "ipython3", - "version": "3.7.2" - } - }, - "nbformat": 4, - "nbformat_minor": 0 -} \ No newline at end of file diff --git a/docs/api/python/downloads/c647c128e0cf2b3db04ce60b41ef1a14/plot_train_convert_predict.py b/docs/api/python/downloads/c647c128e0cf2b3db04ce60b41ef1a14/plot_train_convert_predict.py index 5b060c5f41ffe..b5033b503b3eb 100644 --- a/docs/api/python/downloads/c647c128e0cf2b3db04ce60b41ef1a14/plot_train_convert_predict.py +++ b/docs/api/python/downloads/c647c128e0cf2b3db04ce60b41ef1a14/plot_train_convert_predict.py @@ -22,16 +22,19 @@ """ from sklearn.datasets import load_iris + iris = load_iris() X, y = iris.data, iris.target from sklearn.model_selection import train_test_split + X_train, X_test, y_train, y_test = train_test_split(X, y) #################################### # Then we fit a model. from sklearn.linear_model import LogisticRegression + clr = LogisticRegression() clr.fit(X_train, y_train) @@ -47,14 +50,14 @@ # Conversion to ONNX format # +++++++++++++++++++++++++ # -# We use module +# We use module # `sklearn-onnx `_ # to convert the model into ONNX format. from skl2onnx import convert_sklearn from skl2onnx.common.data_types import FloatTensorType -initial_type = [('float_input', FloatTensorType([None, 4]))] +initial_type = [("float_input", FloatTensorType([None, 4]))] onx = convert_sklearn(clr, initial_types=initial_type) with open("logreg_iris.onnx", "wb") as f: f.write(onx.SerializeToString()) @@ -64,12 +67,11 @@ # its input and output. import onnxruntime as rt -sess = rt.InferenceSession("logreg_iris.onnx") -print("input name='{}' and shape={}".format( - sess.get_inputs()[0].name, sess.get_inputs()[0].shape)) -print("output name='{}' and shape={}".format( - sess.get_outputs()[0].name, sess.get_outputs()[0].shape)) +sess = rt.InferenceSession("logreg_iris.onnx", providers=rt.get_available_providers()) + +print("input name='{}' and shape={}".format(sess.get_inputs()[0].name, sess.get_inputs()[0].shape)) +print("output name='{}' and shape={}".format(sess.get_outputs()[0].name, sess.get_outputs()[0].shape)) ################################## # We compute the predictions. @@ -78,6 +80,7 @@ label_name = sess.get_outputs()[0].name import numpy + pred_onx = sess.run([label_name], {input_name: X_test.astype(numpy.float32)})[0] print(confusion_matrix(pred, pred_onx)) @@ -97,18 +100,20 @@ ############################# # And then with ONNX Runtime. -# The probabilies appear to be +# The probabilies appear to be prob_name = sess.get_outputs()[1].name prob_rt = sess.run([prob_name], {input_name: X_test.astype(numpy.float32)})[0] import pprint + pprint.pprint(prob_rt[0:3]) ############################### # Let's benchmark. from timeit import Timer + def speed(inst, number=10, repeat=20): timer = Timer(inst, globals=globals()) raw = numpy.array(timer.repeat(repeat, number=number)) @@ -117,6 +122,7 @@ def speed(inst, number=10, repeat=20): print("Average %1.3g min=%1.3g max=%1.3g" % (ave, mi, ma)) return ave + print("Execution time for clr.predict") speed("clr.predict(X_test)") @@ -128,20 +134,24 @@ def speed(inst, number=10, repeat=20): # experiences: the model has to do one prediction at a time # as opposed to a batch of prediction. 
+ def loop(X_test, fct, n=None): nrow = X_test.shape[0] if n is None: n = nrow for i in range(0, n): im = i % nrow - fct(X_test[im: im+1]) + fct(X_test[im : im + 1]) + print("Execution time for clr.predict") speed("loop(X_test, clr.predict, 100)") + def sess_predict(x): return sess.run([label_name], {input_name: x.astype(numpy.float32)})[0] + print("Execution time for sess_predict") speed("loop(X_test, sess_predict, 100)") @@ -151,14 +161,16 @@ def sess_predict(x): print("Execution time for predict_proba") speed("loop(X_test, clr.predict_proba, 100)") + def sess_predict_proba(x): return sess.run([prob_name], {input_name: x.astype(numpy.float32)})[0] + print("Execution time for sess_predict_proba") speed("loop(X_test, sess_predict_proba, 100)") ##################################### -# This second comparison is better as +# This second comparison is better as # ONNX Runtime, in this experience, # computes the label and the probabilities # in every case. @@ -169,10 +181,11 @@ def sess_predict_proba(x): # # We first train and save a model in ONNX format. from sklearn.ensemble import RandomForestClassifier + rf = RandomForestClassifier() rf.fit(X_train, y_train) -initial_type = [('float_input', FloatTensorType([1, 4]))] +initial_type = [("float_input", FloatTensorType([1, 4]))] onx = convert_sklearn(rf, initial_types=initial_type) with open("rf_iris.onnx", "wb") as f: f.write(onx.SerializeToString()) @@ -180,11 +193,13 @@ def sess_predict_proba(x): ################################### # We compare. -sess = rt.InferenceSession("rf_iris.onnx") +sess = rt.InferenceSession("rf_iris.onnx", providers=rt.get_available_providers()) + def sess_predict_proba_rf(x): return sess.run([prob_name], {input_name: x.astype(numpy.float32)})[0] + print("Execution time for predict_proba") speed("loop(X_test, rf.predict_proba, 100)") @@ -196,26 +211,28 @@ def sess_predict_proba_rf(x): measures = [] -for n_trees in range(5, 51, 5): +for n_trees in range(5, 51, 5): print(n_trees) rf = RandomForestClassifier(n_estimators=n_trees) rf.fit(X_train, y_train) - initial_type = [('float_input', FloatTensorType([1, 4]))] + initial_type = [("float_input", FloatTensorType([1, 4]))] onx = convert_sklearn(rf, initial_types=initial_type) with open("rf_iris_%d.onnx" % n_trees, "wb") as f: f.write(onx.SerializeToString()) - sess = rt.InferenceSession("rf_iris_%d.onnx" % n_trees) + sess = rt.InferenceSession("rf_iris_%d.onnx" % n_trees, providers=rt.get_available_providers()) + def sess_predict_proba_loop(x): return sess.run([prob_name], {input_name: x.astype(numpy.float32)})[0] + tsk = speed("loop(X_test, rf.predict_proba, 100)", number=5, repeat=5) trt = speed("loop(X_test, sess_predict_proba_loop, 100)", number=5, repeat=5) - measures.append({'n_trees': n_trees, 'sklearn': tsk, 'rt': trt}) + measures.append({"n_trees": n_trees, "sklearn": tsk, "rt": trt}) from pandas import DataFrame + df = DataFrame(measures) ax = df.plot(x="n_trees", y="sklearn", label="scikit-learn", c="blue", logy=True) -df.plot(x="n_trees", y="rt", label="onnxruntime", - ax=ax, c="green", logy=True) +df.plot(x="n_trees", y="rt", label="onnxruntime", ax=ax, c="green", logy=True) ax.set_xlabel("Number of trees") ax.set_ylabel("Prediction time (s)") ax.set_title("Speed comparison between scikit-learn and ONNX Runtime\nFor a random forest on Iris dataset") diff --git a/docs/api/python/downloads/c9f88a9294285c733dcce209fcc939de/plot_dl_keras.ipynb b/docs/api/python/downloads/c9f88a9294285c733dcce209fcc939de/plot_dl_keras.ipynb deleted file mode 100644 index 
a1af6a4f627f5..0000000000000 --- a/docs/api/python/downloads/c9f88a9294285c733dcce209fcc939de/plot_dl_keras.ipynb +++ /dev/null @@ -1,144 +0,0 @@ -{ - "cells": [ - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "collapsed": false - }, - "outputs": [], - "source": [ - "%matplotlib inline" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "\n\n# ONNX Runtime for Keras\n\nThe following demonstrates how to compute the predictions\nof a pretrained deep learning model obtained from \n`keras `_\nwith *onnxruntime*. The conversion requires\n`keras `_,\n`tensorflow `_,\n`keras-onnx `_,\n`onnxmltools `_\nbut then only *onnxruntime* is required\nto compute the predictions.\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "collapsed": false - }, - "outputs": [], - "source": [ - "import os\nif not os.path.exists('dense121.onnx'):\n from keras.applications.densenet import DenseNet121\n model = DenseNet121(include_top=True, weights='imagenet')\n\n from keras2onnx import convert_keras\n onx = convert_keras(model, 'dense121.onnx')\n with open(\"dense121.onnx\", \"wb\") as f:\n f.write(onx.SerializeToString())" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "Let's load an image (source: wikipedia).\n\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "collapsed": false - }, - "outputs": [], - "source": [ - "from keras.preprocessing.image import array_to_img, img_to_array, load_img\nimg = load_img('Sannosawa1.jpg')\nximg = img_to_array(img)\n\nimport matplotlib.pyplot as plt\nplt.imshow(ximg / 255)\nplt.axis('off')" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "Let's load the model with onnxruntime.\n\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "collapsed": false - }, - "outputs": [], - "source": [ - "import onnxruntime as rt\nfrom onnxruntime.capi.onnxruntime_pybind11_state import InvalidGraph\n\ntry:\n sess = rt.InferenceSession('dense121.onnx')\n ok = True\nexcept (InvalidGraph, TypeError, RuntimeError) as e:\n # Probably a mismatch between onnxruntime and onnx version.\n print(e)\n ok = False\n\nif ok:\n print(\"The model expects input shape:\", sess.get_inputs()[0].shape)\n print(\"image shape:\", ximg.shape)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "Let's resize the image.\n\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "collapsed": false - }, - "outputs": [], - "source": [ - "if ok:\n from skimage.transform import resize\n import numpy\n\n ximg224 = resize(ximg / 255, (224, 224, 3), anti_aliasing=True)\n ximg = ximg224[numpy.newaxis, :, :, :]\n ximg = ximg.astype(numpy.float32)\n\n print(\"new shape:\", ximg.shape)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "Let's compute the output.\n\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "collapsed": false - }, - "outputs": [], - "source": [ - "if ok:\n input_name = sess.get_inputs()[0].name\n res = sess.run(None, {input_name: ximg})\n prob = res[0]\n print(prob.ravel()[:10]) # Too big to be displayed." 
- ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "Let's get more comprehensive results.\n\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "collapsed": false - }, - "outputs": [], - "source": [ - "if ok:\n from keras.applications.densenet import decode_predictions\n decoded = decode_predictions(prob)\n\n import pandas\n df = pandas.DataFrame(decoded[0], columns=[\"class_id\", \"name\", \"P\"])\n print(df)" - ] - } - ], - "metadata": { - "kernelspec": { - "display_name": "Python 3", - "language": "python", - "name": "python3" - }, - "language_info": { - "codemirror_mode": { - "name": "ipython", - "version": 3 - }, - "file_extension": ".py", - "mimetype": "text/x-python", - "name": "python", - "nbconvert_exporter": "python", - "pygments_lexer": "ipython3", - "version": "3.8.7" - } - }, - "nbformat": 4, - "nbformat_minor": 0 -} \ No newline at end of file diff --git a/docs/api/python/downloads/ccbbfd4f60683438ab99a4c42d3fbbdf/plot_profiling.ipynb b/docs/api/python/downloads/ccbbfd4f60683438ab99a4c42d3fbbdf/plot_profiling.ipynb index e6d61fa627ead..18bf323925921 100644 --- a/docs/api/python/downloads/ccbbfd4f60683438ab99a4c42d3fbbdf/plot_profiling.ipynb +++ b/docs/api/python/downloads/ccbbfd4f60683438ab99a4c42d3fbbdf/plot_profiling.ipynb @@ -26,7 +26,7 @@ }, "outputs": [], "source": [ - "import onnx\nimport onnxruntime as rt\nimport numpy\nfrom onnxruntime.datasets import get_example\n\n\ndef change_ir_version(filename, ir_version=6):\n \"onnxruntime==1.2.0 does not support opset <= 7 and ir_version > 6\"\n with open(filename, \"rb\") as f:\n model = onnx.load(f)\n model.ir_version = 6\n if model.opset_import[0].version <= 7:\n model.opset_import[0].version = 11\n return model" + "import numpy\nimport onnx\n\nimport onnxruntime as rt\nfrom onnxruntime.datasets import get_example\n\n\ndef change_ir_version(filename, ir_version=6):\n \"onnxruntime==1.2.0 does not support opset <= 7 and ir_version > 6\"\n with open(filename, \"rb\") as f:\n model = onnx.load(f)\n model.ir_version = 6\n if model.opset_import[0].version <= 7:\n model.opset_import[0].version = 11\n return model" ] }, { @@ -44,7 +44,7 @@ }, "outputs": [], "source": [ - "example1 = get_example(\"mul_1.onnx\")\nonnx_model = change_ir_version(example1)\nonnx_model_str = onnx_model.SerializeToString()\nsess = rt.InferenceSession(onnx_model_str)\ninput_name = sess.get_inputs()[0].name\n\nx = numpy.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]], dtype=numpy.float32)\nres = sess.run(None, {input_name: x})\nprint(res)" + "example1 = get_example(\"mul_1.onnx\")\nonnx_model = change_ir_version(example1)\nonnx_model_str = onnx_model.SerializeToString()\nsess = rt.InferenceSession(onnx_model_str, providers=rt.get_available_providers())\ninput_name = sess.get_inputs()[0].name\n\nx = numpy.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]], dtype=numpy.float32)\nres = sess.run(None, {input_name: x})\nprint(res)" ] }, { @@ -62,7 +62,7 @@ }, "outputs": [], "source": [ - "options = rt.SessionOptions()\noptions.enable_profiling = True\nsess_profile = rt.InferenceSession(onnx_model_str, options)\ninput_name = sess.get_inputs()[0].name\n\nx = numpy.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]], dtype=numpy.float32)\n\nsess.run(None, {input_name: x})\nprof_file = sess_profile.end_profiling()\nprint(prof_file)" + "options = rt.SessionOptions()\noptions.enable_profiling = True\nsess_profile = rt.InferenceSession(onnx_model_str, options, providers=rt.get_available_providers())\ninput_name = 
sess.get_inputs()[0].name\n\nx = numpy.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]], dtype=numpy.float32)\n\nsess.run(None, {input_name: x})\nprof_file = sess_profile.end_profiling()\nprint(prof_file)" ] }, { @@ -80,7 +80,7 @@ }, "outputs": [], "source": [ - "import json\nwith open(prof_file, \"r\") as f:\n sess_time = json.load(f)\nimport pprint\npprint.pprint(sess_time)" + "import json\n\nwith open(prof_file, \"r\") as f:\n sess_time = json.load(f)\nimport pprint\n\npprint.pprint(sess_time)" ] } ], @@ -100,7 +100,7 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.7.0" + "version": "3.8.10" } }, "nbformat": 4, diff --git a/docs/api/python/downloads/cfe61aca1f0a89486c7024466ea500fd/plot_profiling.py b/docs/api/python/downloads/cfe61aca1f0a89486c7024466ea500fd/plot_profiling.py index f0ea727ede1b2..3236f954cc052 100644 --- a/docs/api/python/downloads/cfe61aca1f0a89486c7024466ea500fd/plot_profiling.py +++ b/docs/api/python/downloads/cfe61aca1f0a89486c7024466ea500fd/plot_profiling.py @@ -11,9 +11,10 @@ *ONNX Runtime* can profile the execution of the model. This example shows how to interpret the results. """ +import numpy import onnx + import onnxruntime as rt -import numpy from onnxruntime.datasets import get_example @@ -27,15 +28,13 @@ def change_ir_version(filename, ir_version=6): return model - - ######################### # Let's load a very simple model and compute a prediction. example1 = get_example("mul_1.onnx") onnx_model = change_ir_version(example1) onnx_model_str = onnx_model.SerializeToString() -sess = rt.InferenceSession(onnx_model_str) +sess = rt.InferenceSession(onnx_model_str, providers=rt.get_available_providers()) input_name = sess.get_inputs()[0].name x = numpy.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]], dtype=numpy.float32) @@ -48,7 +47,7 @@ def change_ir_version(filename, ir_version=6): options = rt.SessionOptions() options.enable_profiling = True -sess_profile = rt.InferenceSession(onnx_model_str, options) +sess_profile = rt.InferenceSession(onnx_model_str, options, providers=rt.get_available_providers()) input_name = sess.get_inputs()[0].name x = numpy.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]], dtype=numpy.float32) @@ -61,10 +60,9 @@ def change_ir_version(filename, ir_version=6): # The results are stored in a file in JSON format. # Let's see what it contains. import json + with open(prof_file, "r") as f: sess_time = json.load(f) import pprint -pprint.pprint(sess_time) - - +pprint.pprint(sess_time) diff --git a/docs/api/python/downloads/d436e9922b51a71358604ec00f09e7e4/plot_pipeline.py b/docs/api/python/downloads/d436e9922b51a71358604ec00f09e7e4/plot_pipeline.py index 0a002f6223e1b..05dcbdb25b7a6 100644 --- a/docs/api/python/downloads/d436e9922b51a71358604ec00f09e7e4/plot_pipeline.py +++ b/docs/api/python/downloads/d436e9922b51a71358604ec00f09e7e4/plot_pipeline.py @@ -6,7 +6,7 @@ =============== There is no other way to look into one model stored -in ONNX format than looking into its node with +in ONNX format than inspecting its nodes with *onnx*. This example demonstrates how to draw a model and to retrieve it in *json* format. @@ -21,49 +21,50 @@ """ from onnxruntime.datasets import get_example + example1 = get_example("mul_1.onnx") import onnx + model = onnx.load(example1) # model is a ModelProto protobuf message -print(model) +print(model) ################################# # Draw a model with ONNX # ++++++++++++++++++++++ -# We use `net_drawer.py `_ +# We use `net_drawer.py `_ # included in the *onnx* package. 
# We use *onnx* to load the model # in a different way than before. from onnx import ModelProto + model = ModelProto() -with open(example1, 'rb') as fid: +with open(example1, "rb") as fid: content = fid.read() model.ParseFromString(content) ################################### # We convert it into a graph. -from onnx.tools.net_drawer import GetPydotGraph, GetOpNodeProducer -pydot_graph = GetPydotGraph(model.graph, name=model.graph.name, rankdir="LR", - node_producer=GetOpNodeProducer("docstring")) +from onnx.tools.net_drawer import GetOpNodeProducer, GetPydotGraph + +pydot_graph = GetPydotGraph( + model.graph, name=model.graph.name, rankdir="LR", node_producer=GetOpNodeProducer("docstring") +) pydot_graph.write_dot("graph.dot") ####################################### # Then we convert it into an image. import os - -os.system('dot -O -Tpng graph.dot') + +os.system("dot -O -Tpng graph.dot") ################################ # Which we display... import matplotlib.pyplot as plt + image = plt.imread("graph.dot.png") plt.imshow(image) - - - - - - diff --git a/docs/api/python/downloads/d83345a79a181a29892287297803aeec/plot_train_convert_predict.py b/docs/api/python/downloads/d83345a79a181a29892287297803aeec/plot_train_convert_predict.py deleted file mode 100644 index 5b060c5f41ffe..0000000000000 --- a/docs/api/python/downloads/d83345a79a181a29892287297803aeec/plot_train_convert_predict.py +++ /dev/null @@ -1,222 +0,0 @@ -# Copyright (c) Microsoft Corporation. All rights reserved. -# Licensed under the MIT License. - -""" - -.. _l-logreg-example: - -Train, convert and predict with ONNX Runtime -============================================ - -This example demonstrates an end-to-end scenario -starting with the training of a machine learned model -to its use in its converted form. - -.. contents:: - :local: - -Train a logistic regression -+++++++++++++++++++++++++++ - -The first step consists in retrieving the iris dataset. -""" - -from sklearn.datasets import load_iris -iris = load_iris() -X, y = iris.data, iris.target - -from sklearn.model_selection import train_test_split -X_train, X_test, y_train, y_test = train_test_split(X, y) - -#################################### -# Then we fit a model. - -from sklearn.linear_model import LogisticRegression -clr = LogisticRegression() -clr.fit(X_train, y_train) - -#################################### -# We compute the predictions on the test set -# and we show the confusion matrix. -from sklearn.metrics import confusion_matrix - -pred = clr.predict(X_test) -print(confusion_matrix(y_test, pred)) - -#################################### -# Conversion to ONNX format -# +++++++++++++++++++++++++ -# -# We use the module -# `sklearn-onnx `_ -# to convert the model into ONNX format. - -from skl2onnx import convert_sklearn -from skl2onnx.common.data_types import FloatTensorType - -initial_type = [('float_input', FloatTensorType([None, 4]))] -onx = convert_sklearn(clr, initial_types=initial_type) -with open("logreg_iris.onnx", "wb") as f: - f.write(onx.SerializeToString()) - -################################## -# We load the model with ONNX Runtime and look at -# its input and output. - -import onnxruntime as rt -sess = rt.InferenceSession("logreg_iris.onnx") - -print("input name='{}' and shape={}".format( - sess.get_inputs()[0].name, sess.get_inputs()[0].shape)) -print("output name='{}' and shape={}".format( - sess.get_outputs()[0].name, sess.get_outputs()[0].shape)) - -################################## -# We compute the predictions.
- -input_name = sess.get_inputs()[0].name -label_name = sess.get_outputs()[0].name - -import numpy -pred_onx = sess.run([label_name], {input_name: X_test.astype(numpy.float32)})[0] -print(confusion_matrix(pred, pred_onx)) - -################################### -# The predictions are perfectly identical. -# -# Probabilities -# +++++++++++++ -# -# Probabilities are needed to compute other -# relevant metrics such as the ROC Curve. -# Let's see how to get them first with -# scikit-learn. - -prob_sklearn = clr.predict_proba(X_test) -print(prob_sklearn[:3]) - -############################# -# And then with ONNX Runtime. -# The probabilities appear to be - -prob_name = sess.get_outputs()[1].name -prob_rt = sess.run([prob_name], {input_name: X_test.astype(numpy.float32)})[0] - -import pprint -pprint.pprint(prob_rt[0:3]) - -############################### -# Let's benchmark. -from timeit import Timer - -def speed(inst, number=10, repeat=20): - timer = Timer(inst, globals=globals()) - raw = numpy.array(timer.repeat(repeat, number=number)) - ave = raw.sum() / len(raw) / number - mi, ma = raw.min() / number, raw.max() / number - print("Average %1.3g min=%1.3g max=%1.3g" % (ave, mi, ma)) - return ave - -print("Execution time for clr.predict") -speed("clr.predict(X_test)") - -print("Execution time for ONNX Runtime") -speed("sess.run([label_name], {input_name: X_test.astype(numpy.float32)})[0]") - -############################### -# Let's benchmark a scenario similar to what a webservice -# experiences: the model has to do one prediction at a time -# as opposed to a batch of predictions. - -def loop(X_test, fct, n=None): - nrow = X_test.shape[0] - if n is None: - n = nrow - for i in range(0, n): - im = i % nrow - fct(X_test[im: im+1]) - -print("Execution time for clr.predict") -speed("loop(X_test, clr.predict, 100)") - -def sess_predict(x): - return sess.run([label_name], {input_name: x.astype(numpy.float32)})[0] - -print("Execution time for sess_predict") -speed("loop(X_test, sess_predict, 100)") - -##################################### -# Let's do the same for the probabilities. - -print("Execution time for predict_proba") -speed("loop(X_test, clr.predict_proba, 100)") - -def sess_predict_proba(x): - return sess.run([prob_name], {input_name: x.astype(numpy.float32)})[0] - -print("Execution time for sess_predict_proba") -speed("loop(X_test, sess_predict_proba, 100)") - -##################################### -# This second comparison is better as -# ONNX Runtime, in this experiment, -# computes the label and the probabilities -# in every case. - -########################################## -# Benchmark with RandomForest -# +++++++++++++++++++++++++++ -# -# We first train and save a model in ONNX format. -from sklearn.ensemble import RandomForestClassifier -rf = RandomForestClassifier() -rf.fit(X_train, y_train) - -initial_type = [('float_input', FloatTensorType([1, 4]))] -onx = convert_sklearn(rf, initial_types=initial_type) -with open("rf_iris.onnx", "wb") as f: - f.write(onx.SerializeToString()) - -################################### -# We compare. - -sess = rt.InferenceSession("rf_iris.onnx") - -def sess_predict_proba_rf(x): - return sess.run([prob_name], {input_name: x.astype(numpy.float32)})[0] - -print("Execution time for predict_proba") -speed("loop(X_test, rf.predict_proba, 100)") - -print("Execution time for sess_predict_proba") -speed("loop(X_test, sess_predict_proba_rf, 100)") - -################################## -# Let's see with different numbers of trees.
- -measures = [] - -for n_trees in range(5, 51, 5): - print(n_trees) - rf = RandomForestClassifier(n_estimators=n_trees) - rf.fit(X_train, y_train) - initial_type = [('float_input', FloatTensorType([1, 4]))] - onx = convert_sklearn(rf, initial_types=initial_type) - with open("rf_iris_%d.onnx" % n_trees, "wb") as f: - f.write(onx.SerializeToString()) - sess = rt.InferenceSession("rf_iris_%d.onnx" % n_trees) - def sess_predict_proba_loop(x): - return sess.run([prob_name], {input_name: x.astype(numpy.float32)})[0] - tsk = speed("loop(X_test, rf.predict_proba, 100)", number=5, repeat=5) - trt = speed("loop(X_test, sess_predict_proba_loop, 100)", number=5, repeat=5) - measures.append({'n_trees': n_trees, 'sklearn': tsk, 'rt': trt}) - -from pandas import DataFrame -df = DataFrame(measures) -ax = df.plot(x="n_trees", y="sklearn", label="scikit-learn", c="blue", logy=True) -df.plot(x="n_trees", y="rt", label="onnxruntime", - ax=ax, c="green", logy=True) -ax.set_xlabel("Number of trees") -ax.set_ylabel("Prediction time (s)") -ax.set_title("Speed comparison between scikit-learn and ONNX Runtime\nFor a random forest on Iris dataset") -ax.legend() diff --git a/docs/api/python/downloads/dee2ae82948a521867a372a6b9515393/plot_load_and_predict.ipynb b/docs/api/python/downloads/dee2ae82948a521867a372a6b9515393/plot_load_and_predict.ipynb deleted file mode 100644 index 14032652289eb..0000000000000 --- a/docs/api/python/downloads/dee2ae82948a521867a372a6b9515393/plot_load_and_predict.ipynb +++ /dev/null @@ -1,126 +0,0 @@ -{ - "cells": [ - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "collapsed": false - }, - "outputs": [], - "source": [ - "%matplotlib inline" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "\n\n# Load and predict with ONNX Runtime and a very simple model\n\nThis example demonstrates how to load a model and compute\nthe output for an input vector.
It also shows how to\nretrieve the definition of its inputs and outputs.\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "collapsed": false - }, - "outputs": [], - "source": [ - "import onnxruntime as rt\nimport numpy\nfrom onnxruntime.datasets import get_example" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "Let's load a very simple model.\nThe model is available on github `onnx...test_sigmoid `_.\n\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "collapsed": false - }, - "outputs": [], - "source": [ - "example1 = get_example(\"sigmoid.onnx\")\nsess = rt.InferenceSession(example1)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "Let's see the input name and shape.\n\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "collapsed": false - }, - "outputs": [], - "source": [ - "input_name = sess.get_inputs()[0].name\nprint(\"input name\", input_name)\ninput_shape = sess.get_inputs()[0].shape\nprint(\"input shape\", input_shape)\ninput_type = sess.get_inputs()[0].type\nprint(\"input type\", input_type)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "Let's see the output name and shape.\n\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "collapsed": false - }, - "outputs": [], - "source": [ - "output_name = sess.get_outputs()[0].name\nprint(\"output name\", output_name) \noutput_shape = sess.get_outputs()[0].shape\nprint(\"output shape\", output_shape)\noutput_type = sess.get_outputs()[0].type\nprint(\"output type\", output_type)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "Let's compute its outputs (or predictions if it is a machine learned model).\n\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "collapsed": false - }, - "outputs": [], - "source": [ - "import numpy.random\nx = numpy.random.random((3,4,5))\nx = x.astype(numpy.float32)\nres = sess.run([output_name], {input_name: x})\nprint(res)" - ] - } - ], - "metadata": { - "kernelspec": { - "display_name": "Python 3", - "language": "python", - "name": "python3" - }, - "language_info": { - "codemirror_mode": { - "name": "ipython", - "version": 3 - }, - "file_extension": ".py", - "mimetype": "text/x-python", - "name": "python", - "nbconvert_exporter": "python", - "pygments_lexer": "ipython3", - "version": "3.7.2" - } - }, - "nbformat": 4, - "nbformat_minor": 0 -} \ No newline at end of file diff --git a/docs/api/python/downloads/df88a32237a9b3e764a8da54c1743145/plot_backend.py b/docs/api/python/downloads/df88a32237a9b3e764a8da54c1743145/plot_backend.py deleted file mode 100644 index 63d3cbce34d08..0000000000000 --- a/docs/api/python/downloads/df88a32237a9b3e764a8da54c1743145/plot_backend.py +++ /dev/null @@ -1,57 +0,0 @@ -# Copyright (c) Microsoft Corporation. All rights reserved. -# Licensed under the MIT License. - -""" - -.. _l-example-backend-api: - -ONNX Runtime Backend for ONNX -============================= - -*ONNX Runtime* extends the -`onnx backend API `_ -to run predictions using this runtime. -Let's use the API to compute the prediction -of a simple logistic regression model. 
-""" -import numpy as np -from onnxruntime import datasets -from onnxruntime.capi.onnxruntime_pybind11_state import InvalidArgument -import onnxruntime.backend as backend -from onnx import load - -name = datasets.get_example("logreg_iris.onnx") -model = load(name) - -rep = backend.prepare(model, 'CPU') -x = np.array([[-1.0, -2.0]], dtype=np.float32) -try: - label, proba = rep.run(x) - print("label={}".format(label)) - print("probabilities={}".format(proba)) -except (RuntimeError, InvalidArgument) as e: - print(e) - -######################################## -# The device depends on how the package was compiled, -# GPU or CPU. -from onnxruntime import get_device -print(get_device()) - -######################################## -# The backend can also directly load the model -# without using *onnx*. - -rep = backend.prepare(name, 'CPU') -x = np.array([[-1.0, -2.0]], dtype=np.float32) -try: - label, proba = rep.run(x) - print("label={}".format(label)) - print("probabilities={}".format(proba)) -except (RuntimeError, InvalidArgument) as e: - print(e) - -####################################### -# The backend API is implemented by other frameworks -# and makes it easier to switch between multiple runtimes -# with the same API. diff --git a/docs/api/python/downloads/e4ee07af6afb721729db3bf156693fa2/plot_profiling.py b/docs/api/python/downloads/e4ee07af6afb721729db3bf156693fa2/plot_profiling.py deleted file mode 100644 index f0ea727ede1b2..0000000000000 --- a/docs/api/python/downloads/e4ee07af6afb721729db3bf156693fa2/plot_profiling.py +++ /dev/null @@ -1,70 +0,0 @@ -# Copyright (c) Microsoft Corporation. All rights reserved. -# Licensed under the MIT License. - -""" - -.. _l-example-profiling: - -Profile the execution of a simple model -======================================= - -*ONNX Runtime* can profile the execution of the model. -This example shows how to interpret the results. -""" -import onnx -import onnxruntime as rt -import numpy -from onnxruntime.datasets import get_example - - -def change_ir_version(filename, ir_version=6): - "onnxruntime==1.2.0 does not support opset <= 7 and ir_version > 6" - with open(filename, "rb") as f: - model = onnx.load(f) - model.ir_version = 6 - if model.opset_import[0].version <= 7: - model.opset_import[0].version = 11 - return model - - - - -######################### -# Let's load a very simple model and compute some prediction. - -example1 = get_example("mul_1.onnx") -onnx_model = change_ir_version(example1) -onnx_model_str = onnx_model.SerializeToString() -sess = rt.InferenceSession(onnx_model_str) -input_name = sess.get_inputs()[0].name - -x = numpy.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]], dtype=numpy.float32) -res = sess.run(None, {input_name: x}) -print(res) - -######################### -# We need to enable to profiling -# before running the predictions. - -options = rt.SessionOptions() -options.enable_profiling = True -sess_profile = rt.InferenceSession(onnx_model_str, options) -input_name = sess.get_inputs()[0].name - -x = numpy.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]], dtype=numpy.float32) - -sess.run(None, {input_name: x}) -prof_file = sess_profile.end_profiling() -print(prof_file) - -########################### -# The results are stored un a file in JSON format. -# Let's see what it contains. 
-import json -with open(prof_file, "r") as f: - sess_time = json.load(f) -import pprint -pprint.pprint(sess_time) - - - diff --git a/docs/api/python/downloads/f8e5d8e309ca291f68bd029c26838ccc/plot_backend.ipynb b/docs/api/python/downloads/f8e5d8e309ca291f68bd029c26838ccc/plot_backend.ipynb index b4e02617aa957..ec784f28f4634 100644 --- a/docs/api/python/downloads/f8e5d8e309ca291f68bd029c26838ccc/plot_backend.ipynb +++ b/docs/api/python/downloads/f8e5d8e309ca291f68bd029c26838ccc/plot_backend.ipynb @@ -15,7 +15,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "\n\n# ONNX Runtime Backend for ONNX\n\n*ONNX Runtime* extends the \n`onnx backend API `_\nto run predictions using this runtime.\nLet's use the API to compute the prediction\nof a simple logistic regression model.\n" + "\n\n# ONNX Runtime Backend for ONNX\n\n*ONNX Runtime* extends the\n[onnx backend API](https://github.com/onnx/onnx/blob/main/docs/ImplementingAnOnnxBackend.md)\nto run predictions using this runtime.\nLet's use the API to compute the prediction\nof a simple logistic regression model.\n" ] }, { @@ -26,7 +26,7 @@ }, "outputs": [], "source": [ - "import numpy as np\nfrom onnxruntime import datasets\nfrom onnxruntime.capi.onnxruntime_pybind11_state import InvalidArgument\nimport onnxruntime.backend as backend\nfrom onnx import load\n\nname = datasets.get_example(\"logreg_iris.onnx\")\nmodel = load(name)\n\nrep = backend.prepare(model, 'CPU')\nx = np.array([[-1.0, -2.0]], dtype=np.float32)\ntry:\n label, proba = rep.run(x)\n print(\"label={}\".format(label))\n print(\"probabilities={}\".format(proba))\nexcept (RuntimeError, InvalidArgument) as e:\n print(e)" + "import numpy as np\nfrom onnx import load\n\nimport onnxruntime.backend as backend" ] }, { @@ -44,7 +44,7 @@ }, "outputs": [], "source": [ - "from onnxruntime import get_device\nprint(get_device())" + "from onnxruntime import datasets, get_device\nfrom onnxruntime.capi.onnxruntime_pybind11_state import InvalidArgument\n\ndevice = get_device()\n\nname = datasets.get_example(\"logreg_iris.onnx\")\nmodel = load(name)\n\nrep = backend.prepare(model, device)\nx = np.array([[-1.0, -2.0]], dtype=np.float32)\ntry:\n label, proba = rep.run(x)\n print(\"label={}\".format(label))\n print(\"probabilities={}\".format(proba))\nexcept (RuntimeError, InvalidArgument) as e:\n print(e)" ] }, { @@ -62,7 +62,7 @@ }, "outputs": [], "source": [ - "rep = backend.prepare(name, 'CPU')\nx = np.array([[-1.0, -2.0]], dtype=np.float32)\ntry:\n label, proba = rep.run(x)\n print(\"label={}\".format(label))\n print(\"probabilities={}\".format(proba))\nexcept (RuntimeError, InvalidArgument) as e:\n print(e)" + "rep = backend.prepare(name, device)\nx = np.array([[-1.0, -2.0]], dtype=np.float32)\ntry:\n label, proba = rep.run(x)\n print(\"label={}\".format(label))\n print(\"probabilities={}\".format(proba))\nexcept (RuntimeError, InvalidArgument) as e:\n print(e)" ] }, { @@ -89,7 +89,7 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.7.0" + "version": "3.8.10" } }, "nbformat": 4, diff --git a/docs/api/python/examples_md.html b/docs/api/python/examples_md.html index 9b711c0700a7c..a2c71d4494175 100644 --- a/docs/api/python/examples_md.html +++ b/docs/api/python/examples_md.html @@ -4,19 +4,22 @@ - - Gallery of examples — ONNX Runtime 1.7.0 documentation - - + + + Gallery of examples — ONNX Runtime 1.14.0 documentation + + - - - - - + + + + + + + @@ -58,7 +61,7 @@

    ONNX Runtime

    Navigation

    @@ -73,12 +76,12 @@

    Related Topics

    Quick search

    - + @@ -95,7 +98,7 @@

    Quick search

    ©2018-2021, Microsoft. | - Powered by Sphinx 3.5.1 + Powered by Sphinx 5.3.0 & Alabaster 0.7.12 | diff --git a/docs/api/python/genindex.html b/docs/api/python/genindex.html index f901bb353d7a3..9eeeacad1a0a0 100644 --- a/docs/api/python/genindex.html +++ b/docs/api/python/genindex.html @@ -5,18 +5,20 @@ - Index — ONNX Runtime 1.7.0 documentation - - + Index — ONNX Runtime 1.14.0 documentation + + - - - - - + + + + + + + @@ -40,10 +42,13 @@

    Index

    A + | B | C | D | E + | F | G + | H | I | L | M @@ -60,15 +65,45 @@


    A

    - + +
    + +

    B

    + + +
    @@ -76,7 +111,11 @@

    A

    C

    +
    @@ -84,11 +123,29 @@

    C

    D

    @@ -96,17 +153,33 @@

    D

    E

    +
    + +

    F

    + +
    @@ -114,19 +187,45 @@

    E

    G

    +
    + +

    H

    + +
    @@ -136,13 +235,23 @@

    I

    @@ -150,13 +259,13 @@

    I

    L

    @@ -224,9 +345,17 @@

    R

      -
    • run() (in module onnxruntime.backend) +
    • run_with_iobinding() (onnxruntime.InferenceSession method) +
    • +
    • run_with_ort_values() (onnxruntime.InferenceSession method)
    • RunOptions (class in onnxruntime)
    • @@ -237,10 +366,20 @@

      S

        -
      • shape() (onnxruntime.NodeArg property) +
      • sparse_coo_from_numpy() (onnxruntime.SparseTensor static method) +
      • +
      • sparse_csr_from_numpy() (onnxruntime.SparseTensor static method) +
      • +
      • SparseTensor (class in onnxruntime)
      • supports_device() (in module onnxruntime.backend)
      • @@ -250,11 +389,13 @@

        S

        T

        @@ -262,7 +403,11 @@

        T

        U

        +
        @@ -270,7 +415,11 @@

        U

        V

        +
        @@ -298,7 +447,7 @@



        @@ -313,12 +462,12 @@



        - + @@ -335,7 +484,7 @@


        ©2018-2021, Microsoft. | - Powered by Sphinx 3.5.1 + Powered by Sphinx 5.3.0 & Alabaster 0.7.12 diff --git a/docs/api/python/images/sphx_glr_plot_backend_thumb.png b/docs/api/python/images/sphx_glr_plot_backend_thumb.png index 233f8e605efca..8a5fed589d17f 100644 Binary files a/docs/api/python/images/sphx_glr_plot_backend_thumb.png and b/docs/api/python/images/sphx_glr_plot_backend_thumb.png differ diff --git a/docs/api/python/images/sphx_glr_plot_common_errors_thumb.png b/docs/api/python/images/sphx_glr_plot_common_errors_thumb.png index 233f8e605efca..8a5fed589d17f 100644 Binary files a/docs/api/python/images/sphx_glr_plot_common_errors_thumb.png and b/docs/api/python/images/sphx_glr_plot_common_errors_thumb.png differ diff --git a/docs/api/python/images/sphx_glr_plot_convert_pipeline_vectorizer_thumb.png b/docs/api/python/images/sphx_glr_plot_convert_pipeline_vectorizer_thumb.png index 233f8e605efca..8a5fed589d17f 100644 Binary files a/docs/api/python/images/sphx_glr_plot_convert_pipeline_vectorizer_thumb.png and b/docs/api/python/images/sphx_glr_plot_convert_pipeline_vectorizer_thumb.png differ diff --git a/docs/api/python/images/sphx_glr_plot_load_and_predict_thumb.png b/docs/api/python/images/sphx_glr_plot_load_and_predict_thumb.png index 233f8e605efca..8a5fed589d17f 100644 Binary files a/docs/api/python/images/sphx_glr_plot_load_and_predict_thumb.png and b/docs/api/python/images/sphx_glr_plot_load_and_predict_thumb.png differ diff --git a/docs/api/python/images/sphx_glr_plot_metadata_thumb.png b/docs/api/python/images/sphx_glr_plot_metadata_thumb.png index 233f8e605efca..8a5fed589d17f 100644 Binary files a/docs/api/python/images/sphx_glr_plot_metadata_thumb.png and b/docs/api/python/images/sphx_glr_plot_metadata_thumb.png differ diff --git a/docs/api/python/images/sphx_glr_plot_pipeline_001.png b/docs/api/python/images/sphx_glr_plot_pipeline_001.png index c91809a34eea5..f83137e62c9a7 100644 Binary files a/docs/api/python/images/sphx_glr_plot_pipeline_001.png and b/docs/api/python/images/sphx_glr_plot_pipeline_001.png differ diff --git a/docs/api/python/images/sphx_glr_plot_pipeline_thumb.png b/docs/api/python/images/sphx_glr_plot_pipeline_thumb.png index b2147e8559020..0a2315a0fb439 100644 Binary files a/docs/api/python/images/sphx_glr_plot_pipeline_thumb.png and b/docs/api/python/images/sphx_glr_plot_pipeline_thumb.png differ diff --git a/docs/api/python/images/sphx_glr_plot_profiling_thumb.png b/docs/api/python/images/sphx_glr_plot_profiling_thumb.png index 233f8e605efca..8a5fed589d17f 100644 Binary files a/docs/api/python/images/sphx_glr_plot_profiling_thumb.png and b/docs/api/python/images/sphx_glr_plot_profiling_thumb.png differ diff --git a/docs/api/python/images/sphx_glr_plot_train_convert_predict_001.png b/docs/api/python/images/sphx_glr_plot_train_convert_predict_001.png index 1bf4999d1cd48..6bbd00251ae43 100644 Binary files a/docs/api/python/images/sphx_glr_plot_train_convert_predict_001.png and b/docs/api/python/images/sphx_glr_plot_train_convert_predict_001.png differ diff --git a/docs/api/python/images/sphx_glr_plot_train_convert_predict_thumb.png b/docs/api/python/images/sphx_glr_plot_train_convert_predict_thumb.png index 358dfc6dd1e09..ee83cc8bc264d 100644 Binary files a/docs/api/python/images/sphx_glr_plot_train_convert_predict_thumb.png and b/docs/api/python/images/sphx_glr_plot_train_convert_predict_thumb.png differ diff --git a/docs/api/python/index.html b/docs/api/python/index.html index df5cd71a2e5f5..13a78c43a2779 100644 --- a/docs/api/python/index.html 
+++ b/docs/api/python/index.html @@ -4,19 +4,22 @@ - - Python Bindings for ONNX Runtime — ONNX Runtime 1.7.0 documentation - - + + + Python Bindings for ONNX Runtime — ONNX Runtime 1.14.0 documentation + + - - - - - + + + + + + + @@ -36,18 +39,18 @@
        -
        -

        Python Bindings for ONNX Runtime

        +
        +

        Python Bindings for ONNX Runtime

        ONNX Runtime is a performance-focused scoring engine for Open Neural Network Exchange (ONNX) models. For more information on ONNX Runtime, please see aka.ms/onnxruntime or the Github project.

        -
        +
        @@ -71,7 +74,7 @@



        @@ -87,12 +90,12 @@



        - + @@ -109,7 +112,7 @@


        ©2018-2021, Microsoft. | - Powered by Sphinx 3.5.1 + Powered by Sphinx 5.3.0 & Alabaster 0.7.12 | diff --git a/docs/api/python/modules/index.html b/docs/api/python/modules/index.html index 82fb75be7d382..a75254516ef90 100644 --- a/docs/api/python/modules/index.html +++ b/docs/api/python/modules/index.html @@ -5,18 +5,20 @@ - Overview: module code — ONNX Runtime 1.7.0 documentation - - + Overview: module code — ONNX Runtime 1.14.0 documentation + + - - - - - + + + + + + + @@ -38,7 +40,6 @@

        All modules for which code is available

        @@ -62,7 +63,7 @@



        @@ -77,12 +78,12 @@



        - + @@ -99,7 +100,7 @@


        ©2018-2021, Microsoft. | - Powered by Sphinx 3.5.1 + Powered by Sphinx 5.3.0 & Alabaster 0.7.12 diff --git a/docs/api/python/modules/onnxruntime/capi/onnxruntime_inference_collection.html b/docs/api/python/modules/onnxruntime/capi/onnxruntime_inference_collection.html index eb51908d06367..317685ebf07ec 100644 --- a/docs/api/python/modules/onnxruntime/capi/onnxruntime_inference_collection.html +++ b/docs/api/python/modules/onnxruntime/capi/onnxruntime_inference_collection.html @@ -5,18 +5,20 @@ - onnxruntime.capi.onnxruntime_inference_collection — ONNX Runtime 1.7.0 documentation - - + onnxruntime.capi.onnxruntime_inference_collection — ONNX Runtime 1.14.0 documentation + + - - - - - + + + + + + + @@ -48,14 +50,15 @@

        Source code for onnxruntime.capi.onnxruntime_inference_collection

        from onnxruntime.capi import _pybind_state as C -def get_ort_device_type(device): - device = device.lower() - if device == 'cuda': +def get_ort_device_type(device_type, device_index): + if device_type == "cuda": return C.OrtDevice.cuda() - elif device == 'cpu': + elif device_type == "cpu": return C.OrtDevice.cpu() + elif device_type == "ort": + return C.get_ort_device(device_index).device_type() else: - raise Exception('Unsupported device type: ' + device) + raise Exception("Unsupported device type: " + device_type) def check_and_normalize_provider_args(providers, provider_options, available_provider_names): @@ -88,8 +91,10 @@


        def set_provider_options(name, options): if name not in available_provider_names: - raise ValueError("Specified provider '{}' is unavailable. Available providers: '{}'".format( - name, ", ".join(available_provider_names))) + warnings.warn( + "Specified provider '{}' is not in available provider names." + "Available providers: '{}'".format(name, ", ".join(available_provider_names)) + ) if name in provider_name_to_options: warnings.warn("Duplicate provider '{}' encountered, ignoring.".format(name)) @@ -121,8 +126,12 @@
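# A hedged sketch (not part of this module) of the warning behavior above:
# asking for an execution provider that is missing from the current build no
# longer raises; it warns and falls through to whatever is available.
# "model.onnx" is a hypothetical path.
import onnxruntime as ort

sess = ort.InferenceSession(
    "model.onnx",
    providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
)
print(sess.get_providers())  # the providers that were actually registered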


        for provider in providers: if isinstance(provider, str): set_provider_options(provider, dict()) - elif isinstance(provider, tuple) and len(provider) == 2 and \ - isinstance(provider[0], str) and isinstance(provider[1], dict): + elif ( + isinstance(provider, tuple) + and len(provider) == 2 + and isinstance(provider[0], str) + and isinstance(provider[1], dict) + ): set_provider_options(provider[0], provider[1]) else: raise ValueError("'providers' values must be either strings or (string, dict) tuples.") @@ -134,6 +143,7 @@
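# Sketch of the two element forms accepted by the validation above: a bare
# provider name, or a (name, options dict) tuple. "device_id" is a documented
# CUDAExecutionProvider option; the model path is hypothetical.
import onnxruntime as ort

sess = ort.InferenceSession(
    "model.onnx",
    providers=[
        ("CUDAExecutionProvider", {"device_id": 0}),  # tuple form with options
        "CPUExecutionProvider",  # plain string form
    ],
)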


        """ This is the main class used to run a model. """ + def __init__(self): # self._sess is managed by the derived class and relies on bindings from C.InferenceSession @@ -176,19 +186,16 @@


        precedence. Values can either be provider names or tuples of (provider name, options dict). If not provided, then all available providers are used with the default precedence. - The next release (ORT 1.10) will require explicitly setting - this providers parameter if you want to use execution providers - other than the default CPU provider (as opposed to the current - behavior of providers getting set/registered by default based on the - build flags) when instantiating InferenceSession. :param provider_options: Optional sequence of options dicts corresponding to the providers listed in 'providers'. 'providers' can contain either names or names and options. When any options are given in 'providers', 'provider_options' should not be used. - The list of providers is ordered by precedence. For example ['CUDAExecutionProvider', 'CPUExecutionProvider'] - means execute a node using CUDAExecutionProvider if capable, otherwise execute using CPUExecutionProvider. + The list of providers is ordered by precedence. For example + `['CUDAExecutionProvider', 'CPUExecutionProvider']` + means execute a node using CUDAExecutionProvider if capable, + otherwise execute using CPUExecutionProvider. """ # recreate the underlying C.InferenceSession self._reset_session(providers, provider_options) @@ -215,6 +222,8 @@
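# Usage sketch for set_providers(), assuming 'sess' is an existing
# InferenceSession: the underlying session is re-created with the new list.
sess.set_providers(["CPUExecutionProvider"])  # names only
sess.set_providers(["CUDAExecutionProvider"], [{"device_id": 0}])  # names plus provider_options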


        :param output_names: name of the outputs :param input_feed: dictionary ``{ input_name: input_value }`` :param run_options: See :class:`onnxruntime.RunOptions`. + :return: list of results, every result is either a numpy array, + a sparse tensor, a list or a dictionary. :: @@ -240,6 +249,52 @@


        else: raise + def run_with_ort_values(self, output_names, input_dict_ort_values, run_options=None): + """ + Compute the predictions. + + :param output_names: name of the outputs + :param input_dict_ort_values: dictionary ``{ input_name: input_ort_value }`` + See ``OrtValue`` class how to create `OrtValue` + from numpy array or `SparseTensor` + :param run_options: See :class:`onnxruntime.RunOptions`. + :return: an array of `OrtValue` + + :: + + sess.run([output_name], {input_name: x}) + """ + + def invoke(sess, output_names, input_dict_ort_values, run_options): + input_dict = {} + for n, v in input_dict_ort_values.items(): + input_dict[n] = v._get_c_value() + result = sess.run_with_ort_values(input_dict, output_names, run_options) + if not isinstance(result, C.OrtValueVector): + raise TypeError("run_with_ort_values() must return a instance of type 'OrtValueVector'.") + ort_values = [OrtValue(v) for v in result] + return ort_values + + num_required_inputs = len(self._inputs_meta) + num_inputs = len(input_dict_ort_values) + # the graph may have optional inputs used to override initializers. allow for that. + if num_inputs < num_required_inputs: + raise ValueError("Model requires {} inputs. Input Feed contains {}".format(num_required_inputs, num_inputs)) + if not output_names: + output_names = [output.name for output in self._outputs_meta] + try: + return invoke(self._sess, output_names, input_dict_ort_values, run_options) + except C.EPFail as err: + if self._enable_fallback: + print("EP Error: {} using {}".format(str(err), self._providers)) + print("Falling back to {} and retrying.".format(self._fallback_providers)) + self.set_providers(self._fallback_providers) + # Fallback only once. + self.disable_fallback() + return invoke(self._sess, output_names, input_dict_ort_values, run_options) + else: + raise + def end_profiling(self): """ End profiling and return results in a file. @@ -264,10 +319,10 @@
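# Minimal run_with_ort_values() sketch: inputs are passed as OrtValue objects
# and OrtValue objects come back. The model path and the names "X"/"Y" are
# hypothetical.
import numpy as np

import onnxruntime as ort

sess = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
x_ort = ort.OrtValue.ortvalue_from_numpy(np.ones((3, 2), dtype=np.float32))
outs = sess.run_with_ort_values(["Y"], {"X": x_ort})
print(outs[0].numpy())  # copy the first output OrtValue back to numpy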


        def run_with_iobinding(self, iobinding, run_options=None): """ - Compute the predictions. + Compute the predictions. - :param iobinding: the iobinding object that has graph inputs/outputs bind. - :param run_options: See :class:`onnxruntime.RunOptions`. + :param iobinding: the iobinding object that has graph inputs/outputs bind. + :param run_options: See :class:`onnxruntime.RunOptions`. """ self._sess.run_with_iobinding(iobinding._iobinding, run_options) @@ -276,7 +331,8 @@


        """ This is the main class used to run a model. """ - def __init__(self, path_or_bytes, sess_options=None, providers=None, provider_options=None): + + def __init__(self, path_or_bytes, sess_options=None, providers=None, provider_options=None, **kwargs): """ :param path_or_bytes: filename or serialized ONNX or ORT format model in a byte string :param sess_options: session options @@ -289,9 +345,12 @@


        The model type will be inferred unless explicitly set in the SessionOptions. To explicitly set: - so = onnxruntime.SessionOptions() - so.add_session_config_entry('session.load_model_format', 'ONNX') or - so.add_session_config_entry('session.load_model_format', 'ORT') or + + :: + + so = onnxruntime.SessionOptions() + # so.add_session_config_entry('session.load_model_format', 'ONNX') or + so.add_session_config_entry('session.load_model_format', 'ORT') A file extension of '.ort' will be inferred as an ORT format model. All other filenames are assumed to be ONNX format models. @@ -299,8 +358,10 @@
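# Sketch: setting the model format explicitly through the session config key
# quoted in the docstring above, instead of relying on the '.ort' extension
# inference. The file name is hypothetical.
import onnxruntime as ort

so = ort.SessionOptions()
so.add_session_config_entry("session.load_model_format", "ORT")
sess = ort.InferenceSession("model.bin", sess_options=so, providers=["CPUExecutionProvider"])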


        'providers' can contain either names or names and options. When any options are given in 'providers', 'provider_options' should not be used. - The list of providers is ordered by precedence. For example ['CUDAExecutionProvider', 'CPUExecutionProvider'] - means execute a node using CUDAExecutionProvider if capable, otherwise execute using CPUExecutionProvider. + The list of providers is ordered by precedence. For example + `['CUDAExecutionProvider', 'CPUExecutionProvider']` + means execute a node using `CUDAExecutionProvider` + if capable, otherwise execute using `CPUExecutionProvider`. """ Session.__init__(self) @@ -317,13 +378,16 @@
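# Sketch of the 'path_or_bytes' parameter: a serialized model can be passed
# as a byte string instead of a file path ("model.onnx" is hypothetical).
import onnx

import onnxruntime as ort

model = onnx.load("model.onnx")
sess = ort.InferenceSession(model.SerializeToString(), providers=["CPUExecutionProvider"])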


        self._sess_options = sess_options self._sess_options_initial = sess_options self._enable_fallback = True - self._read_config_from_model = os.environ.get('ORT_LOAD_CONFIG_FROM_MODEL') == '1' + self._read_config_from_model = os.environ.get("ORT_LOAD_CONFIG_FROM_MODEL") == "1" + + # internal parameters that we don't expect to be used in general so aren't documented + disabled_optimizers = kwargs["disabled_optimizers"] if "disabled_optimizers" in kwargs else None try: - self._create_inference_session(providers, provider_options) - except RuntimeError: + self._create_inference_session(providers, provider_options, disabled_optimizers) + except ValueError: if self._enable_fallback: - print("EP Error using {}".format(self._providers)) + print("EP Error using {}".format(providers)) print("Falling back to {} and retrying.".format(self._fallback_providers)) self._create_inference_session(self._fallback_providers, None) # Fallback only once. @@ -331,19 +395,29 @@
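# Sketch of the environment toggle read above: opt in to loading session
# configuration embedded in the model, assuming the variable is set before
# the InferenceSession is constructed.
import os

os.environ["ORT_LOAD_CONFIG_FROM_MODEL"] = "1"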


        else: raise - def _create_inference_session(self, providers, provider_options): + def _create_inference_session(self, providers, provider_options, disabled_optimizers=None): available_providers = C.get_available_providers() - # validate providers and provider_options before other initialization - providers, provider_options = check_and_normalize_provider_args(providers, - provider_options, - available_providers) - # Tensorrt can fall back to CUDA. All others fall back to CPU. - if 'TensorrtExecutionProvider' in available_providers: - self._fallback_providers = ['CUDAExecutionProvider', 'CPUExecutionProvider'] + if "TensorrtExecutionProvider" in available_providers: + self._fallback_providers = ["CUDAExecutionProvider", "CPUExecutionProvider"] + elif "MIGraphXExecutionProvider" in available_providers: + self._fallback_providers = ["ROCMExecutionProvider", "CPUExecutionProvider"] else: - self._fallback_providers = ['CPUExecutionProvider'] + self._fallback_providers = ["CPUExecutionProvider"] + + # validate providers and provider_options before other initialization + providers, provider_options = check_and_normalize_provider_args( + providers, provider_options, available_providers + ) + if providers == [] and len(available_providers) > 1: + self.disable_fallback() + raise ValueError( + "This ORT build has {} enabled. ".format(available_providers) + + "Since ORT 1.9, you are required to explicitly set " + + "the providers parameter when instantiating InferenceSession. For example, " + "onnxruntime.InferenceSession(..., providers={}, ...)".format(available_providers) + ) session_options = self._sess_options if self._sess_options else C.get_default_session_options() if self._model_path: @@ -351,8 +425,14 @@
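# Since ORT 1.9, a build with more than one provider enabled requires an
# explicit 'providers' argument, as enforced by the ValueError above. A
# common way to keep the old behavior ("model.onnx" hypothetical):
import onnxruntime as ort

sess = ort.InferenceSession("model.onnx", providers=ort.get_available_providers())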


        else: sess = C.InferenceSession(session_options, self._model_bytes, False, self._read_config_from_model) + if disabled_optimizers is None: + disabled_optimizers = set() + elif not isinstance(disabled_optimizers, set): + # convert to set. assumes iterable + disabled_optimizers = set(disabled_optimizers) + # initialize the C++ InferenceSession - sess.initialize_session(providers, provider_options) + sess.initialize_session(providers, provider_options, disabled_optimizers) self._sess = sess self._sess_options = self._sess.session_options @@ -383,56 +463,75 @@


        self._create_inference_session(providers, provider_options)
        -class IOBinding: - ''' +
        [docs]class IOBinding: + """ This class provides API to bind input/output to a specified device, e.g. GPU. - ''' + """ + def __init__(self, session): self._iobinding = C.SessionIOBinding(session._sess) - self._numpy_obj_references = [] + self._numpy_obj_references = {} - def bind_cpu_input(self, name, arr_on_cpu): - ''' +
        [docs] def bind_cpu_input(self, name, arr_on_cpu): + """ bind an input to array on CPU :param name: input name :param arr_on_cpu: input values as a python array on CPU - ''' + """ # Hold a reference to the numpy object as the bound OrtValue is backed # directly by the data buffer of the numpy object and so the numpy object # must be around until this IOBinding instance is around - self._numpy_obj_references.append(arr_on_cpu) - self._iobinding.bind_input(name, arr_on_cpu) + self._numpy_obj_references[name] = arr_on_cpu + self._iobinding.bind_input(name, arr_on_cpu)
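# IOBinding sketch: bind a numpy array that lives on CPU. The reference kept
# in _numpy_obj_references above is the reason 'x' must stay alive until the
# run completes. Model and input name are hypothetical.
import numpy as np

import onnxruntime as ort

sess = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
io = sess.io_binding()
x = np.ones((3, 2), dtype=np.float32)
io.bind_cpu_input("X", x)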
        - def bind_input(self, name, device_type, device_id, element_type, shape, buffer_ptr): - ''' +
        [docs] def bind_input(self, name, device_type, device_id, element_type, shape, buffer_ptr): + """ :param name: input name :param device_type: e.g. cpu, cuda :param device_id: device id, e.g. 0 :param element_type: input element type :param shape: input shape :param buffer_ptr: memory pointer to input data - ''' - self._iobinding.bind_input(name, - C.OrtDevice(get_ort_device_type(device_type), C.OrtDevice.default_memory(), - device_id), - element_type, shape, buffer_ptr) - - def bind_ortvalue_input(self, name, ortvalue): - ''' + """ + self._iobinding.bind_input( + name, + C.OrtDevice( + get_ort_device_type(device_type, device_id), + C.OrtDevice.default_memory(), + device_id, + ), + element_type, + shape, + buffer_ptr, + )
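Continuing the sketch above, bind_input can consume a raw device buffer by pointer, for example one owned by an OrtValue placed on a CUDA device (a CUDA-enabled build and the input name 'X' are assumptions):

::

    x_gpu = onnxruntime.OrtValue.ortvalue_from_numpy(x, "cuda", 0)
    io_binding.bind_input(
        name="X",
        device_type="cuda",
        device_id=0,
        element_type=np.float32,
        shape=x_gpu.shape(),
        buffer_ptr=x_gpu.data_ptr(),  # raw pointer into the CUDA buffer
    )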
        + +
        [docs] def bind_ortvalue_input(self, name, ortvalue): + """ :param name: input name :param ortvalue: OrtValue instance to bind - ''' - self._iobinding.bind_ortvalue_input(name, ortvalue._ortvalue) - - def bind_output(self, name, device_type='cpu', device_id=0, element_type=None, shape=None, buffer_ptr=None): - ''' + """ + self._iobinding.bind_ortvalue_input(name, ortvalue._ortvalue)
        + + def synchronize_inputs(self): + self._iobinding.synchronize_inputs() + +
        [docs] def bind_output( + self, + name, + device_type="cpu", + device_id=0, + element_type=None, + shape=None, + buffer_ptr=None, + ): + """ :param name: output name :param device_type: e.g. cpu, cuda, cpu by default :param device_id: device id, e.g. 0 :param element_type: output element type :param shape: output shape :param buffer_ptr: memory pointer to output data - ''' + """ # Follow the `if` path when the user has not provided any pre-allocated buffer but still # would like to bind an output to a specific device (e.g. cuda). @@ -441,53 +540,67 @@

        Source code for onnxruntime.capi.onnxruntime_inference_collection

        # in which case ORT will allocate the memory for the user # (2) The output has a dynamic shape and hence the size of the buffer may not be fixed across runs if buffer_ptr is None: - self._iobinding.bind_output(name, - C.OrtDevice(get_ort_device_type(device_type), C.OrtDevice.default_memory(), - device_id)) + self._iobinding.bind_output( + name, + C.OrtDevice( + get_ort_device_type(device_type, device_id), + C.OrtDevice.default_memory(), + device_id, + ), + ) else: if element_type is None or shape is None: raise ValueError("`element_type` and `shape` are to be provided if pre-allocated memory is provided") - self._iobinding.bind_output(name, - C.OrtDevice(get_ort_device_type(device_type), C.OrtDevice.default_memory(), - device_id), - element_type, shape, buffer_ptr) - - def bind_ortvalue_output(self, name, ortvalue): - ''' + self._iobinding.bind_output( + name, + C.OrtDevice( + get_ort_device_type(device_type, device_id), + C.OrtDevice.default_memory(), + device_id, + ), + element_type, + shape, + buffer_ptr, + )
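As a sketch of the second path, a pre-allocated device buffer can be handed to the binding as the output; the output shape [3, 4] is an assumption about the model:

::

    y_gpu = onnxruntime.OrtValue.ortvalue_from_shape_and_type([3, 4], np.float32, "cuda", 0)
    io_binding.bind_output(
        name="Y",
        device_type="cuda",
        device_id=0,
        element_type=np.float32,
        shape=y_gpu.shape(),
        buffer_ptr=y_gpu.data_ptr(),
    )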
        + +
        [docs] def bind_ortvalue_output(self, name, ortvalue): + """ :param name: output name :param ortvalue: OrtValue instance to bind - ''' - self._iobinding.bind_ortvalue_output(name, ortvalue._ortvalue) + """ + self._iobinding.bind_ortvalue_output(name, ortvalue._ortvalue)
        - def get_outputs(self): - ''' + def synchronize_outputs(self): + self._iobinding.synchronize_outputs() + +
        [docs] def get_outputs(self): + """ Returns the output OrtValues from the Run() that preceded the call. The data buffer of the obtained OrtValues may not reside on CPU memory - ''' - returned_ortvalues = [] - - for ortvalue in self._iobinding.get_outputs(): - returned_ortvalues.append(OrtValue(ortvalue)) - - return returned_ortvalues + """ + outputs = self._iobinding.get_outputs() + if not isinstance(outputs, C.OrtValueVector): + raise TypeError("get_outputs() must return an instance of type 'OrtValueVector'.") + return [OrtValue(ortvalue) for ortvalue in outputs]
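For instance (a sketch; output position 0 is assumed):

::

    session.run_with_iobinding(io_binding)
    ort_outputs = io_binding.get_outputs()  # OrtValues, possibly still on the device
    y = ort_outputs[0].numpy()              # copies the tensor into a CPU numpy array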
        - def copy_outputs_to_cpu(self): - '''Copy output contents to CPU (if on another device). No-op if already on the CPU.''' - return self._iobinding.copy_outputs_to_cpu() +
        [docs] def copy_outputs_to_cpu(self): + """Copy output contents to CPU (if on another device). No-op if already on the CPU.""" + return self._iobinding.copy_outputs_to_cpu()
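Alternatively, all bound outputs can be pulled back to the CPU in one call:

::

    y = io_binding.copy_outputs_to_cpu()[0]  # numpy array, wherever the output lived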
        def clear_binding_inputs(self): self._iobinding.clear_binding_inputs() def clear_binding_outputs(self): - self._iobinding.clear_binding_outputs() + self._iobinding.clear_binding_outputs()
        -class OrtValue: - ''' +
        [docs]class OrtValue: + """ A data structure that supports all ONNX data formats (tensors and non-tensors) that allows users to place the data backing these on a device, for example, on a CUDA supported device. This class provides APIs to construct and deal with OrtValues. - ''' + """ + def __init__(self, ortvalue, numpy_obj=None): if isinstance(ortvalue, C.OrtValue): self._ortvalue = ortvalue @@ -496,75 +609,361 @@

        Source code for onnxruntime.capi.onnxruntime_inference_collection

        self._numpy_obj = numpy_obj else: # An end user won't hit this error - raise ValueError("`Provided ortvalue` needs to be of type " + - "`onnxruntime.capi.onnxruntime_pybind11_state.OrtValue`") + raise ValueError( + "`Provided ortvalue` needs to be of type " + "`onnxruntime.capi.onnxruntime_pybind11_state.OrtValue`" + ) - @staticmethod - def ortvalue_from_numpy(numpy_obj, device_type='cpu', device_id=0): - ''' + def _get_c_value(self): + return self._ortvalue + +
        [docs] @staticmethod + def ortvalue_from_numpy(numpy_obj, device_type="cpu", device_id=0): + """ Factory method to construct an OrtValue (which holds a Tensor) from a given Numpy object A copy of the data in the Numpy object is held by the OrtValue only if the device is NOT cpu + :param numpy_obj: The Numpy object to construct the OrtValue from :param device_type: e.g. cpu, cuda, cpu by default :param device_id: device id, e.g. 0 - ''' + """ # Hold a reference to the numpy object (if device_type is 'cpu') as the OrtValue # is backed directly by the data buffer of the numpy object and so the numpy object # must be around until this OrtValue instance is around - return OrtValue(C.OrtValue.ortvalue_from_numpy(numpy_obj, C.OrtDevice(get_ort_device_type(device_type), - C.OrtDevice.default_memory(), device_id)), numpy_obj if device_type.lower() == 'cpu' else None) - - @staticmethod - def ortvalue_from_shape_and_type(shape=None, element_type=None, device_type='cpu', device_id=0): - ''' + return OrtValue( + C.OrtValue.ortvalue_from_numpy( + numpy_obj, + C.OrtDevice( + get_ort_device_type(device_type, device_id), + C.OrtDevice.default_memory(), + device_id, + ), + ), + numpy_obj if device_type.lower() == "cpu" else None, + )
        + +
        [docs] @staticmethod + def ortvalue_from_shape_and_type(shape=None, element_type=None, device_type="cpu", device_id=0): + """ Factory method to construct an OrtValue (which holds a Tensor) from given shape and element_type + :param shape: List of integers indicating the shape of the OrtValue :param element_type: The data type of the elements in the OrtValue (numpy type) :param device_type: e.g. cpu, cuda, cpu by default :param device_id: device id, e.g. 0 - ''' + """ if shape is None or element_type is None: raise ValueError("`element_type` and `shape` are to be provided if pre-allocated memory is provided") - return OrtValue(C.OrtValue.ortvalue_from_shape_and_type(shape, element_type, - C.OrtDevice(get_ort_device_type(device_type), C.OrtDevice.default_memory(), device_id))) + return OrtValue( + C.OrtValue.ortvalue_from_shape_and_type( + shape, + element_type, + C.OrtDevice( + get_ort_device_type(device_type, device_id), + C.OrtDevice.default_memory(), + device_id, + ), + ) + )
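For example, a minimal sketch that pre-allocates an uninitialized 2x3 float tensor on CUDA device 0 (the shape and device are arbitrary assumptions):

::

    ort_y = onnxruntime.OrtValue.ortvalue_from_shape_and_type([2, 3], np.float32, "cuda", 0)
    ort_y.shape()        # [2, 3]
    ort_y.device_name()  # 'cuda'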
        + +
[docs] @staticmethod + def ort_value_from_sparse_tensor(sparse_tensor): + """ + The function will construct an OrtValue instance from a valid SparseTensor. + The new instance of OrtValue will assume the ownership of sparse_tensor. + """ + return OrtValue(C.OrtValue.ort_value_from_sparse_tensor(sparse_tensor._get_c_tensor()))
        + +
        [docs] def as_sparse_tensor(self): + """ + The function will return SparseTensor contained in this OrtValue + """ + return SparseTensor(self._ortvalue.as_sparse_tensor())
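A sketch of the round trip, given a SparseTensor instance st (constructed as in the SparseTensor factory examples further below):

::

    ort_val = onnxruntime.OrtValue.ort_value_from_sparse_tensor(st)
    ort_val.is_sparse_tensor()            # True
    st_again = ort_val.as_sparse_tensor()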
        - def data_ptr(self): - ''' +
        [docs] def data_ptr(self): + """ Returns the address of the first element in the OrtValue's data buffer - ''' - return self._ortvalue.data_ptr() + """ + return self._ortvalue.data_ptr()
        - def device_name(self): - ''' +
        [docs] def device_name(self): + """ Returns the name of the device where the OrtValue's data buffer resides e.g. cpu, cuda - ''' - return self._ortvalue.device_name().lower() + """ + return self._ortvalue.device_name().lower()
        - def shape(self): - ''' +
        [docs] def shape(self): + """ Returns the shape of the data in the OrtValue - ''' - return self._ortvalue.shape() + """ + return self._ortvalue.shape()
        - def data_type(self): - ''' +
        [docs] def data_type(self): + """ Returns the data type of the data in the OrtValue - ''' - return self._ortvalue.data_type() + """ + return self._ortvalue.data_type()
        + +
        [docs] def element_type(self): + """ + Returns the proto type of the data in the OrtValue + if the OrtValue is a tensor. + """ + return self._ortvalue.element_type()
        + +
        [docs] def has_value(self): + """ + Returns True if the OrtValue corresponding to an + optional type contains data, else returns False + """ + return self._ortvalue.has_value()
        + +
        [docs] def is_tensor(self): + """ + Returns True if the OrtValue contains a Tensor, else returns False + """ + return self._ortvalue.is_tensor()
        - def is_tensor(self): - ''' - Returns True if the OrtValue is a Tensor, else returns False - ''' - return self._ortvalue.is_tensor() +
        [docs] def is_sparse_tensor(self): + """ + Returns True if the OrtValue contains a SparseTensor, else returns False + """ + return self._ortvalue.is_sparse_tensor()
        - def numpy(self): - ''' +
        [docs] def is_tensor_sequence(self): + """ + Returns True if the OrtValue contains a Tensor Sequence, else returns False + """ + return self._ortvalue.is_tensor_sequence()
        + +
        [docs] def numpy(self): + """ Returns a Numpy object from the OrtValue. Valid only for OrtValues holding Tensors. Throws for OrtValues holding non-Tensors. - ''' - return self._ortvalue.numpy() + Use accessors to gain a reference to non-Tensor objects such as SparseTensor + """ + return self._ortvalue.numpy()
        + +
[docs] def update_inplace(self, np_arr): + """ + Update the OrtValue in place with a new Numpy array. The numpy contents + are copied over to the device memory backing the OrtValue. It can be used + to update the input values for an InferenceSession with CUDA graph + enabled or other scenarios where the OrtValue needs to be updated while + the memory address cannot be changed. + """ + self._ortvalue.update_inplace(np_arr)
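A minimal sketch, assuming a CUDA-enabled build; the 2x3 shape is arbitrary:

::

    ort_x = onnxruntime.OrtValue.ortvalue_from_numpy(np.zeros((2, 3), np.float32), "cuda", 0)
    # Refill the same device buffer with new host data; the buffer address is unchanged.
    ort_x.update_inplace(np.ones((2, 3), np.float32))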
        + + +
        [docs]class OrtDevice: + """ + A data structure that exposes the underlying C++ OrtDevice + """ + + def __init__(self, c_ort_device): + """ + Internal constructor + """ + if isinstance(c_ort_device, C.OrtDevice): + self._ort_device = c_ort_device + else: + raise ValueError( + "`Provided object` needs to be of type " + "`onnxruntime.capi.onnxruntime_pybind11_state.OrtDevice`" + ) + + def _get_c_device(self): + """ + Internal accessor to underlying object + """ + return self._ort_device + + @staticmethod + def make(ort_device_name, device_id): + return OrtDevice( + C.OrtDevice( + get_ort_device_type(ort_device_name, device_id), + C.OrtDevice.default_memory(), + device_id, + ) + ) + + def device_id(self): + return self._ort_device.device_id() + + def device_type(self): + return self._ort_device.device_type()
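For example (a sketch, assuming the wrapper is exposed at the package level as onnxruntime.OrtDevice):

::

    cpu_device = onnxruntime.OrtDevice.make("cpu", 0)
    cpu_device.device_id()    # 0
    cpu_device.device_type()  # integer device-type code from the C++ OrtDevice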
        + + +
[docs]class SparseTensor: + """ + A data structure that projects the C++ SparseTensor object. + The class provides an API to work with the object. + Depending on the format, the class may hold more than one buffer. + """ + + def __init__(self, sparse_tensor): + """ + Internal constructor + """ + if isinstance(sparse_tensor, C.SparseTensor): + self._tensor = sparse_tensor + else: + # An end user won't hit this error + raise ValueError( + "`Provided object` needs to be of type " + "`onnxruntime.capi.onnxruntime_pybind11_state.SparseTensor`" + ) + + def _get_c_tensor(self): + return self._tensor +
[docs] @staticmethod + def sparse_coo_from_numpy(dense_shape, values, coo_indices, ort_device): + """ + Factory method to construct a SparseTensor in COO format from given arguments + + :param dense_shape: 1-D numpy array(int64) or a python list that contains the dense_shape of the sparse tensor; + must be on cpu memory + :param values: a homogeneous, contiguous 1-D numpy array that contains non-zero elements of the tensor + of a single type. + :param coo_indices: contiguous numpy array(int64) that contains COO indices for the tensor. coo_indices may + have a 1-D shape when it contains a linear index of non-zero values and its length must be equal to + that of the values. It can also be of 2-D shape, in which case it contains pairs of coordinates for + each of the nnz values and its length must be exactly twice the length of the values. + :param ort_device: describes the backing memory owned by the supplied numpy arrays. Only CPU memory is + supported for non-numeric data types. + + For primitive types, the method will map values and coo_indices arrays into native memory and will use + them as backing storage. It will increment the reference count for numpy arrays and it will decrement it + on GC. The buffers may reside in any storage, either CPU or GPU. + For strings and objects, it will create a copy of the arrays in CPU memory as ORT does not support those + on other devices and their memory cannot be mapped. + """ + return SparseTensor( + C.SparseTensor.sparse_coo_from_numpy(dense_shape, values, coo_indices, ort_device._get_c_device()) + )
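A minimal sketch for a 3x3 matrix with two non-zeros (the concrete values are arbitrary, and onnxruntime.OrtDevice is assumed to be exposed at the package level):

::

    import numpy as np
    import onnxruntime

    dense_shape = np.array([3, 3], dtype=np.int64)
    values = np.array([1.0, 2.0], dtype=np.float32)
    coo_indices = np.array([1, 8], dtype=np.int64)  # linear indices of (0,1) and (2,2)
    cpu_device = onnxruntime.OrtDevice.make("cpu", 0)
    st = onnxruntime.SparseTensor.sparse_coo_from_numpy(dense_shape, values, coo_indices, cpu_device)
    st.values()  # array([1., 2.], dtype=float32)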
        + +
[docs] @staticmethod + def sparse_csr_from_numpy(dense_shape, values, inner_indices, outer_indices, ort_device): + """ + Factory method to construct a SparseTensor in CSR format from given arguments + + :param dense_shape: 1-D numpy array(int64) or a python list that contains the dense_shape of the + sparse tensor (rows, cols); must be on cpu memory + :param values: a contiguous, homogeneous 1-D numpy array that contains non-zero elements of the tensor + of a single type. + :param inner_indices: contiguous 1-D numpy array(int64) that contains CSR inner indices for the tensor. + Its length must be equal to that of the values. + :param outer_indices: contiguous 1-D numpy array(int64) that contains CSR outer indices for the tensor. + Its length must be equal to the number of rows + 1. + :param ort_device: describes the backing memory owned by the supplied numpy arrays. Only CPU memory is + supported for non-numeric data types. + + For primitive types, the method will map values and indices arrays into native memory and will use them as + backing storage. It will increment the reference count and it will decrement the count when it is GCed. + The buffers may reside in any storage, either CPU or GPU. + For strings and objects, it will create a copy of the arrays in CPU memory as ORT does not support those + on other devices and their memory cannot be mapped. + """ + return SparseTensor( + C.SparseTensor.sparse_csr_from_numpy( + dense_shape, + values, + inner_indices, + outer_indices, + ort_device._get_c_device(), + ) + )
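The same 3x3 matrix in CSR form, as a sketch; the inner indices are the column ids of the non-zeros and the outer array has rows + 1 entries:

::

    inner_indices = np.array([1, 2], dtype=np.int64)        # columns of the non-zeros
    outer_indices = np.array([0, 1, 1, 2], dtype=np.int64)  # row boundaries
    st_csr = onnxruntime.SparseTensor.sparse_csr_from_numpy(
        dense_shape, values, inner_indices, outer_indices, cpu_device
    )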
        + +
[docs] def values(self): + """ + The method returns a numpy array that is backed by the native memory + if the data type is numeric. Otherwise, the returned numpy array contains + copies of the strings. + """ + return self._tensor.values()
        + +
[docs] def as_coo_view(self): + """ + The method will return the COO representation of the sparse tensor which will enable + querying COO indices. If the instance did not contain COO format, it would throw. + You can query the COO indices as: + + :: + + coo_indices = sparse_tensor.as_coo_view().indices() + + which will return a numpy array that is backed by the native memory. + """ + return self._tensor.get_coo_data()
        + +
[docs] def as_csrc_view(self): + """ + The method will return the CSR(C) representation of the sparse tensor which will enable + querying CSR(C) indices. If the instance did not contain CSR(C) format, it would throw. + You can query indices as: + + :: + + inner_indices = sparse_tensor.as_csrc_view().inner() + outer_indices = sparse_tensor.as_csrc_view().outer() + + returning numpy arrays backed by the native memory. + """ + return self._tensor.get_csrc_data()
        + +
[docs] def as_blocksparse_view(self): + """ + The method will return the BlockSparse representation of the sparse tensor which will enable + querying BlockSparse indices. If the instance did not contain BlockSparse format, it would throw. + You can query the block sparse indices as: + + :: + + block_sparse_indices = sparse_tensor.as_blocksparse_view().indices() + + which will return a numpy array that is backed by the native memory. + """ + return self._tensor.get_blocksparse_data()
        + +
[docs] def to_cuda(self, ort_device): + """ + Returns a copy of this instance on the specified cuda device + + :param ort_device: an OrtDevice with name 'cuda' and a valid gpu device id + + The method will throw if: + + - this instance contains strings + - this instance is already on GPU. Cross GPU copy is not supported + - CUDA is not present in this build + - the specified device is not valid + """ + return SparseTensor(self._tensor.to_cuda(ort_device._get_c_device()))
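A sketch, assuming a CUDA-enabled build, device id 0, and the SparseTensor st from the earlier factory example:

::

    gpu_device = onnxruntime.OrtDevice.make("cuda", 0)
    st_gpu = st.to_cuda(gpu_device)
    st_gpu.device_name()  # 'cuda'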
        + +
[docs] def format(self): + """ + Returns an OrtSparseFormat enumeration + """ + return self._tensor.format
        + +
[docs] def dense_shape(self): + """ + Returns a numpy array(int64) containing the dense shape of the sparse tensor + """ + return self._tensor.dense_shape()
        + +
[docs] def data_type(self): + """ + Returns the string data type of the data in the SparseTensor + """ + return self._tensor.data_type()
        + +
        [docs] def device_name(self): + """ + Returns the name of the device where the SparseTensor data buffers reside e.g. cpu, cuda + """ + return self._tensor.device_name().lower()
        @@ -588,7 +987,7 @@

        ONNX Runtime

        Navigation

        @@ -605,12 +1004,12 @@

        Related Topics

        Quick search

        - + @@ -627,7 +1026,7 @@

        Quick search

        ©2018-2021, Microsoft. | - Powered by Sphinx 3.5.1 + Powered by Sphinx 5.3.0 & Alabaster 0.7.12
        diff --git a/docs/api/python/modules/onnxruntime/datasets.html b/docs/api/python/modules/onnxruntime/datasets.html deleted file mode 100644 index 5ccd25ab09dc2..0000000000000 --- a/docs/api/python/modules/onnxruntime/datasets.html +++ /dev/null @@ -1,127 +0,0 @@ - - - - - - - - onnxruntime.datasets — ONNX Runtime 1.7.0 documentation - - - - - - - - - - - - - - - - - - - - - - -
        -
        -
        - - -
        - -

        Source code for onnxruntime.datasets

        -# Copyright (c) Microsoft Corporation. All rights reserved.
        -# Licensed under the MIT License.
        -"""
        -Short examples used in the documentation.
        -"""
        -import os
        -
        -
        -
        [docs]def get_example(name): - """ - Retrieves the absolute file name of an example. - """ - this = os.path.abspath(os.path.dirname(__file__)) - full = os.path.join(this, name) - if not os.path.exists(full): - raise FileNotFoundError("Unable to find example '{0}'".format(name)) - return full
        -
        - -
        - -
        -
        - -
        -
        - - - - - - - \ No newline at end of file diff --git a/docs/api/python/objects.inv b/docs/api/python/objects.inv index dcfe5e81a2d19..5a1f17b5ce98c 100644 Binary files a/docs/api/python/objects.inv and b/docs/api/python/objects.inv differ diff --git a/docs/api/python/search.html b/docs/api/python/search.html index 03ed4fa1a1510..1aef7beed6ae1 100644 --- a/docs/api/python/search.html +++ b/docs/api/python/search.html @@ -5,19 +5,21 @@ - Search — ONNX Runtime 1.7.0 documentation - - + Search — ONNX Runtime 1.14.0 documentation + + - - - - + + + + - + + + @@ -42,26 +44,35 @@

        Search

        -
        - + + + +

        Searching for multiple words only shows matches that contain all words.

        + +
        - +
        + +
        +
        @@ -84,7 +95,7 @@

        ONNX Runtime

        Navigation

        @@ -111,7 +122,7 @@

        Related Topics

        ©2018-2021, Microsoft. | - Powered by Sphinx 3.5.1 + Powered by Sphinx 5.3.0 & Alabaster 0.7.12
        diff --git a/docs/api/python/searchindex.js b/docs/api/python/searchindex.js index 7565a42d573ff..492edcba429ce 100644 --- a/docs/api/python/searchindex.js +++ b/docs/api/python/searchindex.js @@ -1 +1 @@ -Search.setIndex({docnames:["README","api_summary","auto_examples/index","auto_examples/plot_backend","auto_examples/plot_common_errors","auto_examples/plot_convert_pipeline_vectorizer","auto_examples/plot_load_and_predict","auto_examples/plot_metadata","auto_examples/plot_pipeline","auto_examples/plot_profiling","auto_examples/plot_train_convert_predict","auto_examples/sg_execution_times","examples_md","index","tutorial"],envversion:{"sphinx.domains.c":2,"sphinx.domains.changeset":1,"sphinx.domains.citation":1,"sphinx.domains.cpp":3,"sphinx.domains.index":1,"sphinx.domains.javascript":2,"sphinx.domains.math":2,"sphinx.domains.python":2,"sphinx.domains.rst":2,"sphinx.domains.std":2,"sphinx.ext.intersphinx":1,"sphinx.ext.viewcode":1,sphinx:56},filenames:["README.rst","api_summary.rst","auto_examples/index.rst","auto_examples/plot_backend.rst","auto_examples/plot_common_errors.rst","auto_examples/plot_convert_pipeline_vectorizer.rst","auto_examples/plot_load_and_predict.rst","auto_examples/plot_metadata.rst","auto_examples/plot_pipeline.rst","auto_examples/plot_profiling.rst","auto_examples/plot_train_convert_predict.rst","auto_examples/sg_execution_times.rst","examples_md.rst","index.rst","tutorial.rst"],objects:{"onnxruntime.ModelMetadata":{custom_metadata_map:[1,1,1,""],description:[1,1,1,""],domain:[1,1,1,""],graph_description:[1,1,1,""],graph_name:[1,1,1,""],producer_name:[1,1,1,""],version:[1,1,1,""]},"onnxruntime.NodeArg":{name:[1,1,1,""],shape:[1,1,1,""],type:[1,1,1,""]},"onnxruntime.RunOptions":{log_severity_level:[1,1,1,""],log_verbosity_level:[1,1,1,""],logid:[1,1,1,""],only_execute_path_to_fetches:[1,1,1,""],terminate:[1,1,1,""]},"onnxruntime.SessionOptions":{add_free_dimension_override_by_denotation:[1,1,1,""],add_free_dimension_override_by_name:[1,1,1,""],add_initializer:[1,1,1,""],add_session_config_entry:[1,1,1,""],enable_cpu_mem_arena:[1,1,1,""],enable_mem_pattern:[1,1,1,""],enable_profiling:[1,1,1,""],execution_mode:[1,1,1,""],execution_order:[1,1,1,""],get_session_config_entry:[1,1,1,""],graph_optimization_level:[1,1,1,""],inter_op_num_threads:[1,1,1,""],intra_op_num_threads:[1,1,1,""],log_severity_level:[1,1,1,""],log_verbosity_level:[1,1,1,""],logid:[1,1,1,""],optimized_model_filepath:[1,1,1,""],profile_file_prefix:[1,1,1,""],register_custom_ops_library:[1,1,1,""],use_deterministic_compute:[1,1,1,""]},"onnxruntime.backend":{is_compatible:[1,2,1,""],prepare:[1,2,1,""],run:[1,2,1,""],supports_device:[1,2,1,""]},"onnxruntime.datasets":{get_example:[1,2,1,""]},onnxruntime:{InferenceSession:[1,0,1,""],ModelMetadata:[1,0,1,""],NodeArg:[1,0,1,""],RunOptions:[1,0,1,""],SessionOptions:[1,0,1,""],get_device:[1,2,1,""]}},objnames:{"0":["py","class","Python class"],"1":["py","method","Python method"],"2":["py","function","Python 
function"]},objtypes:{"0":"py:class","1":"py:method","2":"py:function"},terms:{"000":11,"000101":10,"0002686927327886224":4,"00152":10,"00156":10,"0015644603711552918":10,"00161":10,"00163":10,"00165":10,"00166":10,"00167":10,"00169":10,"0016972313076257703":10,"0017":10,"00172":10,"00173":10,"00174":10,"00175":10,"00176":10,"00177":10,"00179":10,"0018":10,"00181":10,"00183":10,"00185":10,"00187":10,"00188":10,"00189":10,"0019":10,"00196":10,"0024466365575790405":4,"00245":10,"00274":10,"0027408220106735826":10,"0041598789393901825":10,"00442":10,"006":7,"00694":10,"00723":10,"00837":10,"010":9,"0104":10,"0108":10,"0115":10,"0116":7,"012":6,"018":4,"019":3,"021566055715084076":4,"0215695519000296e":10,"027834143489599228":4,"02e":10,"04_18":9,"05173242464661598":10,"055770788341760635":10,"05987146869301796":10,"0x7fca300dcda0":10,"0x7fd0a620e860":8,"100":10,"101":9,"107":10,"118":10,"134":9,"14094172418117523":10,"15987711e":10,"165":10,"167":10,"169":10,"17324282e":10,"17e":10,"227":[9,10],"228":10,"231":10,"244":8,"286":10,"291":10,"296":10,"342":10,"346":10,"3c59201b940f410fa29dc71ea9d5767d":7,"404":10,"407":10,"40941788e":10,"417":[10,11],"44228260e":10,"459":10,"463":10,"467":10,"5007695":6,"50415695":6,"5052765":6,"5105157":6,"5138594":6,"51468205":6,"5150477":6,"5155948":6,"51758766":6,"5197903":6,"521":10,"5294609":6,"531":10,"53817046":6,"5395019":6,"5410304":6,"5447472":6,"54898335e":10,"54921585":6,"551158":6,"55247986":6,"556":10,"5584753":6,"56175166":6,"5651827":6,"56617785":6,"56778824":6,"56e":10,"5707261":6,"57149607":6,"57431483":6,"577":10,"57707615e":10,"582":10,"5845876":6,"5885481":6,"5907483":6,"592":5,"59442437":6,"596162":6,"59664494":6,"596786":6,"597":10,"60125333":6,"6099661":6,"61100346":6,"6119356":6,"61e":10,"62632954":6,"6270167988259345e":4,"62868774":6,"642":10,"64250827":6,"645":10,"6528918":6,"6545371":6,"6563896":6,"66830933":6,"66855776":6,"67160237":6,"6744693":6,"6819708":6,"6910895":6,"6941072":6,"7030401":6,"7051111":6,"7062323":6,"70664704":6,"7088654":6,"7094791":6,"71235013":6,"7210481":6,"724396":6,"72844744":6,"72e":10,"765":10,"78002823e":10,"78002844931325e":10,"787709464906584e":4,"848444978558249":5,"8548984527587891":10,"88396143e":10,"888396143913269":10,"9442282915115356":10,"9505997896194458":4,"98714288e":10,"9974970817565918":4,"9997311234474182":4,"9999999999999528":5,"boolean":1,"byte":[1,4],"case":[1,5,10],"class":[1,4],"default":1,"float":[1,4,5,6],"function":1,"import":[3,4,5,6,7,8,9,10,14],"int":[1,5],"public":1,"return":[1,4,9,10],"switch":3,"true":[1,9,10],"try":[3,4,5],"while":1,And:10,But:5,For:[0,13],That:8,The:[1,3,4,5,6,9,10,14],Then:[8,10],There:[8,14],These:1,With:7,_logist:10,_logistic_solver_convergence_msg:10,about:1,absolut:1,access:1,across:1,actual:[1,4,5],add_free_dimension_override_by_denot:1,add_free_dimension_override_by_nam:1,add_initi:1,add_session_config_entri:1,addit:1,address:1,aka:[0,13],all:[1,2,4],alloc:1,allow:1,also:[1,3,6,10],altern:10,alwai:1,among:4,ani:[1,4],api:[3,13],appear:10,append:[1,10],appli:1,applic:14,arena:1,arg0:1,arg1:1,arg:[1,9],argument:1,arrai:[1,3,4,6,9,10],array_equ:1,associ:1,assum:1,astyp:[6,10,14],auto_exampl:11,auto_examples_jupyt:2,auto_examples_python:2,avail:6,ave:10,averag:10,axesimag:8,back:1,backend:[2,11],bad:4,basic:1,batch:[10,14],becaus:[1,5],befor:[8,9],being:1,below:1,better:10,between:[1,3,10],bind:1,bind_cpu_input:1,bind_input:1,bind_ortvalue_input:1,bind_ortvalue_output:1,bind_output:1,blue:10,boston:5,both:1,bound:1,briefli:14,buffer_ptr:1,build:1,call:[1,5],
can:[1,3,4,9,14],cannot:4,capi:[1,3,4,5],cat:9,chang:[1,14],change_ir_vers:9,check:1,chenta:8,choos:1,chosen:1,click:[3,4,5,6,7,8,9,10],clr:[10,14],code:[1,2,3,4,5,6,7,8,9,10,14],com:0,common:[2,5,10,11,14],commonli:14,compar:[5,10],comparison:[1,10],compat:1,compil:[1,3],compos:14,comput:[1,3,5,6,9,10,14],config:1,configur:1,conform:1,confus:[5,10],confusion_matrix:10,consist:[5,10],consum:1,contain:[1,7,9],content:8,converg:10,convergencewarn:10,convert:[2,7,8,11],convert_sklearn:[5,10,14],copi:1,copy_outputs_to_cpu:1,correspond:1,could:5,cpu:[1,3,14],creat:[1,5,14],creation:1,cuda:1,current:1,curv:10,custom:1,custom_metadata_map:[1,7],data:[1,4,5,10,14],data_ptr:1,data_typ:[1,5,8,10,14],datafram:[5,10],dataset:[3,4,5,6,7,8,9,10,14],datset:[5,10],debug:1,def:[9,10],defin:[1,14],definit:[1,6],demonstr:[5,6,8,10],denot:1,depend:[1,3,14],deploi:7,describ:14,descript:[1,7],detail:14,determinist:1,devic:3,device_id:1,device_nam:1,device_typ:1,dictionari:[1,5],dictionarytyp:5,dictvector:5,differ:[8,10],dim:8,dim_valu:8,dimens:[1,3,4],directli:[1,3],discrep:5,discuss:1,displai:8,doc_str:7,docstr:8,document:[1,10],doe:[4,9],domain:[1,7,8],don:1,dot:8,doubl:[4,5],download:[1,2,3,4,5,6,7,8,9,10],draw:[2,11],dtype:[3,4,6,9],due:4,dur:9,dynam:1,each:1,easi:14,easier:3,either:[3,4],elem_typ:8,element_typ:1,enabl:[1,9],enable_cpu_mem_arena:1,enable_mem_pattern:1,enable_profil:[1,9],end:[1,5,10],end_profil:9,engin:[0,13],ensembl:[5,10],entri:1,error:[1,2,11],etc:1,everi:10,exampl:[3,4,5,6,7,8,9,10,13],example1:[6,8,9],example2:4,except:[3,4,5],exchang:[0,13],execut:[1,2,10,11],execution_mod:1,execution_ord:1,exit:1,expect:[3,4,5],experi:10,explain:5,expos:1,extend:3,extra_warning_msg:10,facilit:1,fail:[4,5,10],fals:[1,5],famou:14,fatal:1,favorit:4,fct:10,featur:1,feature_extract:5,feed:[1,4],fetch:1,few:1,fid:8,file:[1,9,11],filenam:[1,9],first:[4,5,10,14],fit:[5,10,14],fix:[3,4],float32:[1,3,4,6,9,10,14],float64:4,float_data:8,float_input:[3,4,5,10,14],floattensortyp:[5,10,14],focus:[0,13],follow:[1,3,4],forest:10,format:[1,3,4,7,9],framework:[3,4],free:1,from:[1,3,4,5,6,7,8,9,10,14],full:[3,4,5,6,7,8,9,10],futur:1,galleri:[3,4,5,6,7,8,9,10,13],gener:[1,2,3,4,5,6,7,8,9,10],get:[1,10,14],get_devic:[1,3],get_exampl:[1,3,4,6,7,8,9],get_input:[4,5,6,9,10,14],get_modelmeta:7,get_output:[1,4,5,6,10,14],get_session_config_entri:1,getopnodeproduc:8,getpydotgraph:8,github:[0,6,13],given:1,global:10,goe:4,got:[3,4],gpu:[1,3,14],gracefulli:1,gradientboostingregressor:5,graph:[1,8],graph_descript:1,graph_nam:[1,7],graph_optimization_level:1,green:10,handl:4,has:[1,10],here:[1,3,4,5,6,7,8,9,10,14],high:14,higher:4,hold:1,host:1,how:[3,6,7,8,9,10],html:10,http:[0,10],ident:10,identifi:1,imag:8,implement:[1,3],imread:8,imshow:8,includ:[1,8],increas:10,index:[3,4],indic:[1,3,4],individu:1,inferenc:1,inferencesess:[1,4,5,6,7,9,10,14],info:1,inform:[0,1,13],initi:[1,8],initial_typ:[5,10,14],inp:5,input:[1,3,4,5,6,8,10],input_nam:[4,6,9,10,14],input_shap:6,input_typ:6,insensit:1,inst:10,instal:1,instanc:[1,7],instead:[4,5],int64:[4,5],int64tensortyp:5,inter_op_num_thread:1,interest:1,interpret:9,intra_op_num_thread:1,introduc:1,invalid:[3,4],invalid_argu:[3,4,5],invalidargu:[3,4,5],invoc:1,io_bind:1,ipynb:[3,4,5,6,7,8,9,10],ir_vers:[7,8,9],iri:[4,10,14],is_compat:1,is_tensor:1,issu:1,iter:10,its:[1,5,6,8,10,14],json:9,jupyt:[2,3,4,5,6,7,8,9,10],keep:7,kei:1,kernel:1,kind:4,kwarg:1,label:[3,4,10],label_nam:[10,14],lbfg:10,learn:[5,6,7,10,14],legend:10,len:10,let:[1,3,6,7,9,10],level:[1,14],lib:10,libari:1,librari:1,limit:
10,linear_model:[10,14],list:[1,14],load:[2,3,4,5,7,8,9,10,11],load_boston:5,load_iri:[10,14],log:1,log_severity_level:1,log_verbosity_level:1,logger:1,logi:10,logid:1,logist:[3,4,7],logisticregress:[10,14],logreg_iri:[3,4,7,10,14],look:[4,5,8,10],loop:10,machin:[6,10,14],mai:1,main:1,make:3,make_pipelin:5,map:[1,5],math:1,matplotlib:[8,10],matrix:[5,10],max:10,max_it:10,mean:1,measur:10,memori:1,messag:8,meta:7,metadata:[1,2,11],metadata_prop:7,metric:[5,10],microsoft:0,min:10,miniconda:10,minut:[3,4,5,6,7,8,9,10],misspel:4,mkl:1,mode:1,model:[0,2,3,4,5,7,10,11,13],model_loading_arrai:9,model_select:[5,10,14],model_vers:7,modelmetadata:1,modelproto:[1,8],modul:[5,10],more:[0,13,14],most:8,mul:8,mul_1:[8,9],multipl:[3,4],n_estim:10,n_tree:10,name:[1,3,4,5,6,8,9,10,14],nativ:1,necessarili:4,need:[1,9,10],net_draw:8,network:[0,13],neural:[0,13],nfor:10,node:[1,8],node_produc:8,nodearg:1,none:[1,4,5,9,10,14],note:0,notebook:[2,3,4,5,6,7,8,9,10],nrow:10,number:[1,4,10],numpi:[1,3,4,5,6,9,10,14],object:[1,8,10],observ:5,one:[1,5,8,10,14],ones:5,onli:[1,4],only_execute_path_to_fetch:1,onnx:[1,2,4,7,9,11],onnx_model:9,onnx_model_str:9,onnxml:7,onnxmltool:[7,14],onnxruntim:[0,1,2,3,5,6,7,8,9,10,11,13,14],onnxruntime_profile__2021:9,onnxruntime_pybind11_st:[1,3,4,5],onnxruntimeerror:[3,4,5],onx:[5,10,14],op_typ:8,open:[0,5,8,9,10,13,14],oper:14,oppos:10,opset:9,opset_import:[8,9],opt:10,optim:[1,14],optimized_model_filepath:1,option:[1,4,9,10],order:1,org:10,ort:1,ort_output:1,ortvalue_from_numpi:1,ortvalue_from_shape_and_typ:1,other:[1,3,4,8,10,14],out:[3,4,5,6,7,8,9,10],output:[1,4,5,6,8,10,14],output_label:10,output_nam:[4,6],output_shap:6,output_typ:6,over:1,packag:[1,3,8,10],pair:1,panda:[5,10],parallel:1,paramet:1,parsefromstr:8,part:1,particular:1,particularli:1,path:1,path_or_byt:1,pattern:1,perfectli:10,perform:[0,1,13,14],pid:9,pipe:5,pipelin:[2,11,14],pipeline_vector:5,place:1,pleas:[0,3,4,10,13],plot:10,plot_backend:[3,11],plot_common_error:[4,11],plot_convert_pipeline_vector:[5,11],plot_load_and_predict:[6,11],plot_metadata:[7,11],plot_pipelin:[8,11],plot_profil:[9,11],plot_train_convert_predict:[10,11],plt:8,png:8,pprint:[9,10],pre:1,pred:[5,10],pred_onx:[5,10,14],predict:[1,2,3,4,9,11,14],predict_proba:10,prefix:1,prepar:[1,3],preprocess:10,print:[3,4,5,6,7,8,9,10,14],prob_nam:10,prob_rt:10,prob_sklearn:10,proba:3,probabili:10,probabl:[3,4],produc:[1,4,7],producer_nam:[1,7,8],producer_vers:7,product:7,prof_fil:9,profil:[1,2,11],profile_file_prefix:1,project:[0,13],properti:1,protobuf:8,provid:[1,14],provider_opt:1,put:1,pydot_graph:8,pyplot:8,python3:10,python:[1,2,3,4,5,6,7,8,9,10],r2_score:5,rais:4,random:[6,10],randomforestclassifi:10,rang:10,rank:4,rankdir:8,rather:14,raw:10,reach:10,read:[1,8],readi:1,refer:10,register_custom_ops_librari:1,regress:[3,4,7],regular:[1,5],relat:7,releas:0,relev:10,rep:3,repeat:10,replac:4,request:1,requir:1,res:[1,4,6,9],result:9,retriev:[1,5,6,10],rf_iri:10,rf_iris_:10,roc:10,row:5,run:[3,4,5,6,7,8,9,10],run_log_severity_level:1,run_with_iobind:1,runopt:1,runtim:[1,2,7,9,11],runtimeerror:[3,4,5],same:[3,4,10],save:[1,10],save_model_format:1,scale:10,scenario:[1,5,10,14],scikit:[5,7,10,14],score:[0,13],script:[3,4,5,6,7,8,9,10],second:[3,4,5,6,7,8,9,10],see:[0,1,6,7,9,10,13,14],self:1,seq:5,sequenc:1,sequencetyp:5,sequenti:1,serial:1,serializetostr:[5,9,10,14],servic:14,ses:1,sess:[1,4,5,6,7,9,10,14],sess_opt:1,sess_predict:10,sess_predict_proba:10,sess_predict_proba_loop:10,sess_predict_proba_rf:10,sess_profil:9,sess_tim:9,session:[1,9],session_ini
ti:9,session_log_severity_level:1,sessionopt:[1,9],set:[1,5,10,14],set_titl:10,set_xlabel:10,set_ylabel:10,sever:[1,4],shape:[1,4,5,6,8,10],share:1,show:[1,5,6,9,10],shown:10,sigmoid:6,similar:[5,10],simpl:[2,3,7,8,11],singl:[1,4],site:[10,14],situat:4,size:1,skl2onnx:[5,10,14],sklearn:[5,7,10,14],small:5,snippet:1,solver:10,some:9,sourc:[1,2,3,4,5,6,7,8,9,10],spars:5,specif:[1,7,14],specifi:[1,14],speed:10,sphinx:[2,3,4,5,6,7,8,9,10],stabl:10,start:[4,5,10],statu:[1,10],step:[4,5,10],stop:10,store:[1,8,9],str:1,string:1,structur:1,suit:1,sum:10,summari:13,support:[1,9],supports_devic:1,system:8,tag:0,take:[4,5],target:[5,10,14],tensor:[1,4,5,6],tensor_typ:8,termin:1,test:[1,5,8,10],test_sigmoid:6,than:[1,4,8,14],thei:1,them:[5,10],thi:[1,3,4,5,6,8,9,10,14],thread:1,three:4,thu:1,tid:9,time:[1,3,4,5,6,7,8,9,10],timeit:10,timer:10,to_dict:5,tool:[8,14],topolog:1,total:[3,4,5,6,7,8,9,10,11],tpng:8,track:7,train:[2,4,7,11],train_test_split:[5,10,14],tree:10,trt:10,tsk:10,tutori:13,type:[1,4,5,6,8],unexpect:[4,5],unless:1,unus:1,usabl:1,usag:1,use:[1,3,5,8,10,14],use_deterministic_comput:1,used:[1,7,14],useful:[1,7],user:1,uses:5,using:[1,3,4],usual:[1,14],valu:[1,5],variabl:5,vector:[4,5,6],verbos:1,veri:[2,5,9,11],verif:1,version:[1,7,8,9],vlog:1,wai:[8,14],want:1,warn:[1,4],webservic:10,what:[9,10],when:7,whether:1,which:[1,4,5,7,8,14],within:1,without:[3,14],work:1,wrap:1,write:[5,10,14],write_dot:8,x_ortvalu:1,x_test:[5,10,14],x_test_dict:5,x_train:[5,10,14],x_train_dict:5,y_ortvalu:1,y_test:[5,10,14],y_train:[5,10,14],you:[1,14],your:4,zip:2},titles:["ONNX Runtime","API Summary","Gallery of examples","ONNX Runtime Backend for ONNX","Common errors with onnxruntime","Train, convert and predict with ONNX Runtime","Load and predict with ONNX Runtime and a very simple model","Metadata","Draw a pipeline","Profile the execution of a simple model","Train, convert and predict with ONNX Runtime","Computation times","Gallery of examples","Python Bindings for ONNX Runtime","Tutorial"],titleterms:{"export":14,api:1,backend:[1,3],benchmark:10,bind:13,chang:0,common:4,comput:11,convers:[5,10],convert:[5,10,14],dataset:1,devic:1,draw:8,error:4,exampl:[1,2],execut:9,favorit:14,format:[5,8,10,14],framework:14,galleri:2,iobind:1,json:8,load:[1,6,14],logist:10,metadata:7,model:[1,6,8,9,14],onnx:[0,3,5,6,8,10,13,14],onnxruntim:4,ortvalu:1,pipelin:[5,8],predict:[5,6,10],probabl:10,profil:9,python:13,randomforest:10,regress:10,retriev:8,run:[1,14],runtim:[0,3,5,6,10,13,14],simpl:[6,9],step:14,summari:1,time:11,train:[5,10,14],tutori:14,using:14,veri:6,your:14}}) \ No newline at end of file +Search.setIndex({"docnames": ["api_summary", "auto_examples/index", "auto_examples/plot_backend", "auto_examples/plot_common_errors", "auto_examples/plot_convert_pipeline_vectorizer", "auto_examples/plot_load_and_predict", "auto_examples/plot_metadata", "auto_examples/plot_pipeline", "auto_examples/plot_profiling", "auto_examples/plot_train_convert_predict", "auto_examples/sg_execution_times", "examples_md", "index", "tutorial"], "filenames": ["api_summary.rst", "auto_examples/index.rst", "auto_examples/plot_backend.rst", "auto_examples/plot_common_errors.rst", "auto_examples/plot_convert_pipeline_vectorizer.rst", "auto_examples/plot_load_and_predict.rst", "auto_examples/plot_metadata.rst", "auto_examples/plot_pipeline.rst", "auto_examples/plot_profiling.rst", "auto_examples/plot_train_convert_predict.rst", "auto_examples/sg_execution_times.rst", "examples_md.rst", "index.rst", "tutorial.rst"], "titles": ["API", "Gallery of 
examples", "ONNX Runtime Backend for ONNX", "Common errors with onnxruntime", "Train, convert and predict with ONNX Runtime", "Load and predict with ONNX Runtime and a very simple model", "Metadata", "Draw a pipeline", "Profile the execution of a simple model", "Train, convert and predict with ONNX Runtime", "Computation times", "Gallery of examples", "Python Bindings for ONNX Runtime", "Tutorial"], "terms": {"onnx": [0, 1, 3, 6, 8, 10], "runtim": [0, 1, 6, 8, 10], "infer": 0, "graph": [0, 7], "format": [0, 2, 3, 6, 8], "ort": 0, "memori": 0, "disk": 0, "constrain": 0, "environ": [0, 4, 9], "The": [0, 2, 3, 4, 5, 8, 9, 13], "consum": 0, "produc": [0, 3, 6], "can": [0, 2, 3, 4, 8, 13], "specifi": [0, 13], "access": 0, "wai": [0, 7, 13], "best": 0, "match": 0, "your": [0, 3], "scenario": [0, 4, 9, 13], "i": [0, 2, 3, 4, 5, 6, 7, 9, 12, 13], "main": 0, "It": [0, 3, 5, 6, 13], "us": [0, 2, 3, 4, 6, 7, 9], "an": [0, 3, 4, 5, 7, 9, 13], "well": 0, "applic": [0, 13], "configur": 0, "session": [0, 8], "onnxruntim": [0, 1, 2, 4, 5, 6, 7, 8, 9, 10, 12, 13], "name": [0, 2, 3, 4, 5, 7, 8, 9, 13], "consist": [0, 4, 9], "comput": [0, 2, 4, 5, 8, 9, 13], "oper": [0, 13], "implement": [0, 2], "optim": [0, 13], "kernel": 0, "differ": [0, 7, 9], "hardwar": 0, "target": [0, 4, 9, 13], "orchestr": 0, "execut": [0, 1, 9, 10], "via": 0, "provid": [0, 3, 4, 5, 6, 8, 9, 13], "contain": [0, 6, 8], "set": [0, 4, 9, 13], "specif": [0, 6, 13], "gpu": [0, 2, 13], "iot": 0, "etc": 0, "ar": [0, 8, 9, 13], "paramet": 0, "from": [0, 2, 3, 4, 5, 6, 7, 8, 9, 13], "chosen": 0, "prioriti": 0, "order": 0, "given": 0, "list": [0, 13], "In": [0, 4, 9, 13], "exampl": [0, 2, 3, 4, 5, 6, 7, 8, 9, 12], "below": 0, "cuda": 0, "If": 0, "cudaexecutionprovid": 0, "cpuexecutionprovid": 0, "avail": [0, 5], "found": 0, "here": [0, 2, 3, 4, 5, 6, 7, 8, 9, 13], "sinc": 0, "1": [0, 2, 3, 4, 6, 7, 8, 9], "10": [0, 9], "you": [0, 4, 13], "must": 0, "explicitli": 0, "onli": [0, 3], "time": [0, 2, 3, 4, 5, 6, 7, 8, 9], "allow": 0, "explicit": 0, "follow": [0, 2, 3, 4], "assum": 0, "nvidia": 0, "replac": [0, 3], "suppli": 0, "other": [0, 2, 3, 7, 9, 13], "For": [0, 12], "enabl": [0, 8], "profil": [0, 1, 10], "enable_profil": [0, 8], "true": [0, 4, 8, 9], "sess_opt": 0, "its": [0, 4, 5, 7, 9, 13], "On": [0, 4, 9], "default": 0, "map": [0, 4], "nativ": 0, "python": [0, 1, 2, 3, 4, 5, 6, 7, 8, 9], "structur": 0, "numpi": [0, 2, 3, 4, 5, 8, 9, 13], "arrai": [0, 2, 3, 5, 8, 9], "dictionari": [0, 4], "x": [0, 2, 3, 4, 5, 7, 8, 9, 13], "ortvalue_from_numpi": 0, "device_nam": 0, "shape": [0, 3, 4, 5, 7, 9], "data_typ": [0, 4, 7, 9, 13], "tensor": [0, 3, 4, 5], "float": [0, 3, 4, 5], "is_tensor": 0, "np": [0, 2, 4], "array_equ": 0, "part": 0, "feed": [0, 3], "result": [0, 8], "y": [0, 4, 5, 7, 9, 13], "By": 0, "alwai": 0, "place": 0, "": [0, 2, 4, 5, 6, 7, 8, 9], "have": 0, "mai": 0, "than": [0, 3, 7, 13], "becaus": [0, 4], "introduc": 0, "copi": 0, "between": [0, 2, 9], "support": [0, 8], "custom": 0, "all": [0, 1, 3], "user": 0, "back": 0, "thi": [0, 2, 3, 4, 5, 7, 8, 9, 13], "call": [0, 4], "To": 0, "featur": 0, "run_with_iobind": 0, "A": 0, "instanc": [0, 6], "onto": 0, "io_bind": 0, "over": 0, "node": [0, 7], "bind_cpu_input": 0, "bind_output": 0, "copy_outputs_to_cpu": 0, "0": [0, 2, 3, 4, 5, 6, 7, 8, 9, 10, 13], "directli": [0, 2], "x_ortvalu": 0, "bind_input": 0, "device_typ": 0, "device_id": 0, "element_typ": 0, "float32": [0, 2, 3, 5, 8, 9, 13], "buffer_ptr": 0, "data_ptr": 0, "both": 0, "also": [0, 2, 5, 13], "y_ortvalu": 0, 
"ortvalue_from_shape_and_typ": 0, "3": [0, 2, 3, 5, 6, 7, 8, 9], "2": [0, 2, 3, 4, 6, 7, 8, 9], "chang": [0, 13], "actual": [0, 3, 4], "being": 0, "bound": 0, "request": 0, "alloc": 0, "particularli": 0, "dynam": 0, "get_output": [0, 3, 4, 5, 9, 13], "get": [0, 9, 13], "correspond": 0, "thu": 0, "bind": 0, "return": [0, 3, 8, 9], "which": [0, 3, 4, 6, 7, 13], "ha": [0, 4, 9], "ort_output": 0, "addit": 0, "work": 0, "while": 0, "inferenc": 0, "bind_ortvalue_input": 0, "bind_ortvalue_output": 0, "pytorch": 0, "x_tensor": 0, "contigu": 0, "tupl": 0, "y_shape": 0, "need": [0, 8, 9], "y_tensor": 0, "torch": 0, "empti": 0, "dtype": [0, 2, 3, 5, 8], "path_or_byt": 0, "none": [0, 3, 4, 8, 9, 13], "provider_opt": 0, "kwarg": 0, "sourc": [0, 1, 2, 3, 4, 5, 6, 7, 8, 9], "filenam": [0, 8], "serial": 0, "byte": [0, 3], "string": 0, "sequenc": 0, "decreas": 0, "preced": 0, "valu": [0, 4], "either": [0, 2, 3], "dict": 0, "type": [0, 3, 4, 5, 7], "unless": [0, 4], "so": 0, "add_session_config_entri": 0, "load_model_format": 0, "file": [0, 8, 10], "extens": 0, "when": [0, 6], "ani": [0, 3], "should": 0, "mean": 0, "capabl": 0, "otherwis": 0, "disable_fallback": 0, "disabl": 0, "fallback": 0, "mechan": 0, "enable_fallback": 0, "fail": [0, 3, 4, 13], "due": [0, 3], "failur": 0, "reset": 0, "fall": 0, "end_profil": [0, 8], "end": [0, 4, 9], "store": [0, 7, 8], "get_input": [0, 3, 4, 5, 8, 9, 13], "metadata": [0, 1, 10], "get_modelmeta": [0, 6], "see": [0, 5, 6, 8, 9, 12, 13], "get_overridable_initi": 0, "includ": [0, 4, 7], "initi": [0, 7], "get_profiling_start_time_n": 0, "nanosecond": 0, "start": [0, 3, 4, 9], "compar": [0, 4, 9], "monotonic_n": 0, "after": 0, "some": [0, 8], "platform": 0, "timer": [0, 9], "precis": 0, "window": 0, "maco": 0, "100n": 0, "get_provider_opt": 0, "regist": 0, "get_provid": 0, "get_session_opt": 0, "object": [0, 7, 9], "output_nam": [0, 3, 5], "input_fe": 0, "run_opt": 0, "predict": [0, 1, 2, 3, 8, 10, 13], "input_nam": [0, 3, 5, 8, 9, 13], "input_valu": 0, "everi": [0, 9], "spars": [0, 4], "sess": [0, 3, 4, 5, 6, 8, 9, 13], "run_with_ort_valu": 0, "input_dict_ort_valu": 0, "input_ort_valu": 0, "how": [0, 2, 5, 6, 7, 8, 9], "creat": [0, 4, 13], "set_provid": 0, "underli": 0, "re": [0, 3, 5, 8], "self": 0, "capi": [0, 2, 3, 4], "onnxruntime_pybind11_st": [0, 2, 3, 4], "inform": [0, 12], "singl": [0, 3], "add_run_config_entri": 0, "arg0": 0, "str": 0, "arg1": 0, "entri": 0, "pair": 0, "get_run_config_entri": 0, "kei": 0, "properti": 0, "log_severity_level": 0, "log": 0, "sever": [0, 3], "level": [0, 13], "particular": 0, "invoc": 0, "verbos": 0, "info": 0, "warn": [0, 3, 4], "error": [0, 1, 10], "4": [0, 3, 5, 7, 8, 9, 13], "fatal": 0, "log_verbosity_level": 0, "vlog": 0, "debug": 0, "build": 0, "run_log_severity_level": 0, "appli": 0, "logid": 0, "identifi": 0, "gener": [0, 1, 2, 3, 4, 5, 6, 7, 8, 9], "only_execute_path_to_fetch": 0, "fetch": [0, 4], "termin": 0, "current": 0, "individu": 0, "exit": 0, "gracefulli": 0, "statu": [0, 13], "add_external_initi": 0, "add_free_dimension_override_by_denot": 0, "int": [0, 4], "dimens": [0, 2, 3], "size": 0, "each": 0, "denot": 0, "associ": 0, "free": 0, "add_free_dimension_override_by_nam": 0, "within": 0, "add_initi": 0, "enable_cpu_mem_arena": 0, "arena": 0, "pre": 0, "futur": 0, "usag": 0, "fals": [0, 4], "don": 0, "t": [0, 4, 8], "want": 0, "enable_mem_pattern": 0, "pattern": 0, "enable_mem_reus": 0, "reus": 0, "execution_mod": 0, "mode": 0, "sequenti": 0, "execution_ord": 0, "basic": 0, "topolog": 0, "get_session_config_entri": 0, 
"graph_optimization_level": 0, "inter_op_num_thread": 0, "number": [0, 3, 9, 13], "thread": 0, "parallel": 0, "across": 0, "let": [0, 2, 5, 6, 8, 9], "choos": 0, "intra_op_num_thread": 0, "session_log_severity_level": 0, "logger": 0, "id": 0, "optimized_model_filepath": 0, "path": 0, "save_model_format": 0, "config": 0, "case": [0, 4, 9], "insensit": 0, "profile_file_prefix": 0, "prefix": 0, "append": [0, 9], "register_custom_ops_librari": 0, "share": 0, "librari": 0, "op": 0, "requir": 0, "use_deterministic_comput": 0, "whether": 0, "determinist": 0, "numpy_obj": 0, "non": 0, "construct": 0, "deal": 0, "as_sparse_tensor": 0, "function": [0, 4], "address": 0, "first": [0, 3, 4, 9, 13], "element": 0, "buffer": 0, "where": 0, "resid": 0, "e": [0, 2, 3, 4], "g": 0, "proto": 0, "has_valu": 0, "els": 0, "is_sparse_tensor": 0, "is_tensor_sequ": 0, "valid": 0, "hold": 0, "throw": 0, "accessor": 0, "gain": 0, "refer": [0, 4, 13], "static": 0, "ort_value_from_sparse_tensor": 0, "sparse_tensor": 0, "new": 0, "ownership": 0, "factori": 0, "method": 0, "held": 0, "NOT": 0, "integ": 0, "indic": [0, 2, 3], "update_inplac": 0, "np_arr": 0, "updat": 0, "content": [0, 7], "valuess": 0, "project": [0, 12], "c": [0, 9], "depend": [0, 2, 13], "more": [0, 12, 13], "one": [0, 4, 7, 9, 13], "constructor": 0, "as_blocksparse_view": 0, "coo": 0, "represent": [0, 4, 9], "queri": 0, "blockspars": 0, "did": 0, "would": 0, "block_sparse_indic": 0, "as_coo_view": 0, "coo_indic": 0, "as_csrc_view": 0, "csr": 0, "cr": 0, "dit": 0, "inner_ndic": 0, "inner": 0, "outer_ndic": 0, "outer": 0, "dense_shap": 0, "int64": [0, 3, 4], "dens": 0, "ortsparseformat": 0, "enumer": 0, "sparse_coo_from_numpi": 0, "ort_devic": 0, "argument": 0, "d": [0, 9], "homogen": 0, "zero": 0, "linear": 0, "index": [0, 2, 3], "length": 0, "equal": 0, "coordin": 0, "nnz": 0, "exactli": 0, "twice": 0, "describ": [0, 13], "own": 0, "nummpi": 0, "suppor": 0, "numer": 0, "primit": 0, "them": [0, 4, 9], "storag": 0, "increment": 0, "count": 0, "decrement": 0, "gc": 0, "doe": [0, 3, 8], "those": 0, "sparse_csr_from_numpi": 0, "inner_indic": 0, "outer_indic": 0, "row": [0, 4], "col": 0, "Its": 0, "gced": 0, "to_cuda": 0, "alreadi": 0, "cross": 0, "present": 0, "arr_on_cpu": 0, "param": 0, "pointer": 0, "anoth": 0, "No": 0, "obtain": 0, "c_ort_devic": 0, "expos": 0, "These": 0, "cannot": [0, 3], "instanti": 0, "thei": 0, "defin": [0, 13], "about": [0, 4], "usual": [0, 13], "facilit": 0, "comparison": [0, 9], "custom_metadata_map": [0, 6], "descript": [0, 6], "domain": [0, 6, 7], "graph_descript": 0, "host": 0, "graph_nam": [0, 6], "producer_nam": [0, 6, 7], "version": [0, 6, 7, 8], "definit": [0, 5], "arg": [0, 8], "regular": [0, 4], "perform": [0, 12, 13], "usabl": 0, "verif": 0, "conform": 0, "is_compat": 0, "compat": 0, "unus": 0, "ex": 0, "boolean": 0, "prepar": [0, 2], "readi": 0, "modelproto": [0, 7], "compil": [0, 2], "supports_devic": 0, "check": 0, "test": [0, 4, 7, 9], "suit": 0, "draw": [1, 10], "pipelin": [1, 10, 13], "load": [1, 2, 3, 4, 6, 7, 8, 9, 10], "veri": [1, 4, 8, 10], "simpl": [1, 2, 6, 7, 10], "model": [1, 2, 3, 4, 6, 9, 10, 12], "backend": [1, 10], "train": [1, 3, 6, 10], "convert": [1, 6, 7, 10], "common": [1, 4, 9, 10, 13], "download": [1, 2, 3, 4, 5, 6, 7, 8, 9], "code": [1, 2, 3, 4, 5, 6, 7, 8, 9, 13], "auto_examples_python": 1, "zip": 1, "jupyt": [1, 2, 3, 4, 5, 6, 7, 8, 9], "notebook": [1, 2, 3, 4, 5, 6, 7, 8, 9], "auto_examples_jupyt": 1, "sphinx": [1, 2, 3, 4, 5, 6, 7, 8, 9], "click": [2, 3, 4, 5, 6, 7, 8, 9], "full": [2, 3, 4, 
5, 6, 7, 8, 9], "extend": 2, "api": [2, 12], "run": [2, 3, 4, 5, 6, 7, 8, 9], "logist": [2, 3, 6, 13], "regress": [2, 3, 6, 13], "import": [2, 3, 4, 5, 6, 7, 8, 9, 13], "devic": 2, "packag": [2, 4, 7, 13], "wa": [2, 6], "cpu": [2, 13], "dataset": [2, 3, 4, 5, 6, 7, 8, 9, 13], "get_devic": 2, "invalidargu": [2, 3, 4], "get_exampl": [2, 3, 5, 6, 7, 8], "logreg_iri": [2, 3, 6, 9, 13], "rep": 2, "try": [2, 3, 4, 9], "label": [2, 3, 9], "proba": 2, "print": [2, 3, 4, 5, 6, 7, 8, 9, 13], "probabl": [2, 3], "except": [2, 3, 4], "runtimeerror": [2, 3, 4], "onnxruntimeerror": [2, 3, 4], "invalid_argu": [2, 3, 4], "got": [2, 3], "invalid": [2, 3], "input": [2, 3, 4, 5, 7, 9], "float_input": [2, 3, 4, 9, 13], "expect": [2, 3, 4], "pleas": [2, 3, 4, 9, 12, 13], "fix": [2, 3], "without": [2, 13], "framework": [2, 3], "make": 2, "easier": 2, "switch": 2, "multipl": [2, 3], "same": [2, 3, 9], "total": [2, 3, 4, 5, 6, 7, 8, 9, 10, 13], "script": [2, 3, 4, 5, 6, 7, 8, 9], "minut": [2, 3, 4, 5, 6, 7, 8, 9], "013": [2, 10], "second": [2, 3, 4, 5, 6, 7, 8, 9], "plot_backend": [2, 10], "py": [2, 3, 4, 5, 6, 7, 8, 9, 10, 13], "ipynb": [2, 3, 4, 5, 6, 7, 8, 9], "galleri": [2, 3, 4, 5, 6, 7, 8, 9, 12], "look": [3, 4, 7, 9], "situat": 3, "rais": 3, "instead": [3, 4], "step": [3, 4, 9], "favorit": 3, "iri": [3, 9, 13], "take": [3, 4], "vector": [3, 4, 5], "class": 3, "among": 3, "three": 3, "rt": [3, 4, 5, 6, 8, 9, 13], "example2": 3, "inferencesess": [3, 4, 5, 6, 8, 9, 13], "get_available_provid": [3, 4, 5, 6, 8, 9, 13], "bad": 3, "handl": 3, "kind": 3, "5": [3, 5, 7, 8, 9], "6": [3, 7, 8, 9], "float64": 3, "unexpect": [3, 4], "data": [3, 4, 9, 13], "doubl": [3, 4], "output": [3, 4, 5, 7, 9, 13], "misspel": 3, "option": [3, 8, 13], "9505997896194458": 3, "027834143489599228": 3, "021566055715084076": 3, "9974970817565918": 3, "6270167988259345e": 3, "05": [3, 9], "0024466365575790405": 3, "9997311234474182": 3, "787709464906584e": 3, "07": 3, "0002686927327886224": 3, "goe": 3, "necessarili": 3, "r": [3, 8], "rank": 3, "higher": 3, "017": [3, 10], "plot_common_error": [3, 10], "demonstr": [4, 5, 7, 9], "scikit": [4, 6, 9, 13], "learn": [4, 5, 6, 9, 13], "dictvector": 4, "retriev": [4, 5, 9], "boston": 4, "datset": [4, 9], "panda": [4, 9], "sklearn": [4, 6, 9, 13], "load_boston": 4, "model_select": [4, 9, 13], "train_test_split": [4, 9, 13], "x_train": [4, 9, 13], "x_test": [4, 9, 13], "y_train": [4, 9, 13], "y_test": [4, 9, 13], "x_train_dict": 4, "datafram": [4, 9], "to_dict": 4, "x_test_dict": 4, "home": [4, 13], "runner": [4, 13], "local": [4, 13], "lib": [4, 13], "python3": [4, 13], "8": [4, 9, 13], "site": [4, 13], "util": 4, "deprec": 4, "87": 4, "futurewarn": 4, "remov": 4, "hous": 4, "price": 4, "ethic": 4, "problem": 4, "document": [4, 13], "further": 4, "detail": [4, 13], "maintain": 4, "therefor": 4, "strongli": 4, "discourag": 4, "purpos": 4, "studi": 4, "educ": 4, "issu": 4, "scienc": 4, "machin": [4, 5, 9, 13], "special": 4, "origin": 4, "pd": 4, "data_url": 4, "http": [4, 13], "stat": 4, "cmu": 4, "edu": 4, "raw_df": 4, "read_csv": 4, "sep": 4, "skiprow": 4, "22": 4, "header": 4, "hstack": 4, "altern": [4, 13], "california": 4, "func": 4, "fetch_california_h": 4, "am": 4, "fetch_openml": 4, "house_pric": 4, "as_fram": 4, "msg": 4, "categori": 4, "we": [4, 7, 8, 9, 13], "ensembl": [4, 9], "gradientboostingregressor": 4, "feature_extract": 4, "make_pipelin": 4, "pipe": 4, "fit": [4, 9, 13], "x27": 4, "rerun": [4, 9], "cell": [4, 9], "show": [4, 5, 8, 9], "html": [4, 9, 13], "trust": [4, 9], "github": 
[4, 5, 9, 12], "unabl": [4, 9], "render": [4, 9], "page": [4, 9], "nbviewer": [4, 9], "org": [4, 9, 13], "pipelinepipelin": 4, "dictvectorizerdictvector": 4, "gradientboostingregressorgradientboostingregressor": 4, "confus": [4, 9], "matrix": [4, 9], "metric": [4, 9], "r2_score": 4, "pred": [4, 9], "8749080059406178": 4, "modul": [4, 9, 13], "skl2onnx": [4, 9, 13], "convert_sklearn": [4, 9, 13], "dictionarytyp": 4, "floattensortyp": [4, 9, 13], "int64tensortyp": 4, "sequencetyp": 4, "initial_typ": [4, 9, 13], "onx": [4, 9, 13], "open": [4, 7, 8, 9, 12, 13], "pipeline_vector": 4, "wb": [4, 9, 13], "f": [4, 8, 9, 13], "write": [4, 9, 13], "serializetostr": [4, 8, 9, 13], "inp": 4, "out": 4, "variabl": 4, "could": 4, "do": [4, 6, 9], "pred_onx": [4, 9, 13], "seq": 4, "But": 4, "observ": 4, "ones": 4, "9999999999999738": 4, "similar": [4, 9], "explain": 4, "small": 4, "discrep": 4, "883": [4, 10], "plot_convert_pipeline_vector": [4, 10], "test_sigmoid": 5, "example1": [5, 7, 8], "sigmoid": 5, "input_shap": 5, "input_typ": 5, "output_shap": 5, "output_typ": 5, "random": [5, 9], "astyp": [5, 9, 13], "72793055": 5, "7276279": 5, "5402208": 5, "52651": 5, "6187782": 5, "6260148": 5, "72244334": 5, "5919314": 5, "51697546": 5, "6279962": 5, "5698271": 5, "5472146": 5, "5360143": 5, "6660787": 5, "5606206": 5, "6521884": 5, "6324413": 5, "70974916": 5, "6610228": 5, "51202065": 5, "50950056": 5, "5815906": 5, "646681": 5, "5778367": 5, "71506053": 5, "5555914": 5, "6803483": 5, "60739946": 5, "6383947": 5, "53390664": 5, "67096996": 5, "60189897": 5, "67437965": 5, "50587314": 5, "58212054": 5, "6435938": 5, "5710239": 5, "5625591": 5, "6876829": 5, "52768266": 5, "5844447": 5, "55838233": 5, "7304268": 5, "56796575": 5, "52834886": 5, "5544108": 5, "6697368": 5, "50538224": 5, "6169538": 5, "5239782": 5, "7309318": 5, "70566034": 5, "6265241": 5, "5540862": 5, "6926802": 5, "58989674": 5, "71908456": 5, "72978514": 5, "63463324": 5, "58517313": 5, "011": [5, 10], "plot_load_and_predict": [5, 10], "relat": 6, "deploi": 6, "product": 6, "keep": 6, "track": 6, "doc_str": 6, "ir_vers": [6, 7, 8], "metadata_prop": 6, "model_vers": 6, "producer_vers": 6, "onnxml": 6, "onnxmltool": [6, 13], "0116": 6, "With": 6, "meta": 6, "3c59201b940f410fa29dc71ea9d5767d": 6, "003": [6, 10], "plot_metadata": [6, 10], "There": [7, 13], "That": 7, "most": 7, "mul_1": [7, 8], "protobuf": 7, "messag": 7, "chenta": 7, "w": 7, "op_typ": 7, "mul": 7, "dim": 7, "float_data": 7, "tensor_typ": 7, "elem_typ": 7, "dim_valu": 7, "opset_import": [7, 8], "7": [7, 8, 9], "net_draw": 7, "befor": [7, 8], "rb": [7, 8], "fid": 7, "read": 7, "parsefromstr": 7, "tool": [7, 13], "getopnodeproduc": 7, "getpydotgraph": 7, "pydot_graph": 7, "rankdir": 7, "lr": 7, "node_produc": 7, "docstr": 7, "write_dot": 7, "dot": 7, "Then": [7, 9], "imag": 7, "o": 7, "system": 7, "tpng": 7, "displai": 7, "matplotlib": [7, 9], "pyplot": 7, "plt": 7, "imread": 7, "png": 7, "imshow": 7, "axesimag": 7, "0x7efc140b5f10": 7, "187": [7, 10], "plot_pipelin": [7, 10], "interpret": 8, "def": [8, 9], "change_ir_vers": 8, "opset": 8, "11": [8, 9], "onnx_model": 8, "onnx_model_str": 8, "9": [8, 9], "16": 8, "25": [8, 9], "36": 8, "sessionopt": 8, "sess_profil": 8, "prof_fil": 8, "onnxruntime_profile__2022": 8, "01_10": 8, "29": 8, "57": 8, "json": 8, "un": 8, "what": [8, 9], "sess_tim": 8, "pprint": [8, 9], "cat": 8, "dur": 8, "49": 8, "model_loading_arrai": 8, "ph": 8, "pid": 8, "3109": 8, "tid": 8, "236": [8, 9], "session_initi": 8, "59": 8, "004": [8, 10], 
"plot_profil": [8, 10], "load_iri": [9, 13], "linear_model": [9, 13], "logisticregress": [9, 13], "clr": [9, 13], "logisticregressionlogisticregress": 9, "confusion_matrix": 9, "14": 9, "output_label": 9, "label_nam": [9, 13], "perfectli": 9, "ident": 9, "relev": 9, "roc": 9, "curv": 9, "prob_sklearn": 9, "predict_proba": 9, "11305775e": 9, "04": 9, "94108929e": 9, "01": [9, 10], "05679765e": 9, "94214236e": 9, "03": [9, 10], "96030729e": 9, "01027129e": 9, "94823786e": 9, "02": 9, "18372171e": 9, "21454503e": 9, "And": 9, "probabili": 9, "appear": 9, "prob_nam": 9, "prob_rt": 9, "0002113060181727633": 9, "19410927593708038": 9, "805679440498352": 9, "0029421390499919653": 9, "7960308194160461": 9, "2010270357131958": 9, "06948233395814896": 9, "9183722734451294": 9, "012145438231527805": 9, "timeit": 9, "speed": 9, "inst": 9, "repeat": 9, "20": 9, "global": 9, "raw": 9, "av": 9, "sum": 9, "len": 9, "mi": 9, "ma": 9, "min": 9, "max": 9, "averag": 9, "3g": 9, "08e": 9, "96e": 9, "77e": 9, "85e": 9, "14e": 9, "8539305000189187e": 9, "webservic": 9, "experi": 9, "oppos": 9, "batch": [9, 13], "loop": 9, "fct": 9, "n": 9, "nrow": 9, "rang": 9, "im": 9, "100": 9, "sess_predict": 9, "00381": 9, "00373": 9, "00389": 9, "000817": 9, "000808": 9, "000836": 9, "0008169881699997461": 9, "sess_predict_proba": 9, "00572": 9, "00558": 9, "00581": 9, "000824": 9, "000864": 9, "0008363299950003977": 9, "better": 9, "save": 9, "randomforestclassifi": 9, "rf": 9, "rf_iri": 9, "sess_predict_proba_rf": 9, "65": 9, "641": 9, "67": 9, "00103": 9, "00101": 9, "00113": 9, "001025739425000154": 9, "tree": 9, "measur": 9, "n_tree": 9, "51": 9, "n_estim": 9, "rf_iris_": 9, "sess_predict_proba_loop": 9, "tsk": 9, "trt": 9, "df": 9, "ax": 9, "plot": 9, "blue": 9, "logi": 9, "green": 9, "set_xlabel": 9, "set_ylabel": 9, "set_titl": 9, "nfor": 9, "forest": 9, "legend": 9, "0465": 9, "0462": 9, "0469": 9, "000802": 9, "000789": 9, "000834": 9, "0788": 9, "0783": 9, "0799": 9, "000815": 9, "000807": 9, "000837": 9, "15": 9, "109": 9, "108": 9, "00082": 9, "000809": 9, "000844": 9, "142": 9, "141": 9, "143": 9, "000847": 9, "00084": 9, "000869": 9, "173": 9, "171": 9, "174": 9, "000846": 9, "000833": 9, "000859": 9, "30": 9, "204": 9, "202": 9, "205": 9, "000877": 9, "000921": 9, "35": 9, "235": 9, "000879": 9, "000871": 9, "000896": 9, "40": 9, "268": 9, "267": 9, "269": 9, "000898": 9, "000889": 9, "000923": 9, "45": 9, "299": 9, "000913": 9, "000904": 9, "000935": 9, "50": 9, "33": 9, "331": 9, "000916": 9, "000945": 9, "0x7efbfea26bb0": 9, "633": [9, 10], "plot_train_convert_predict": [9, 10], "751": 10, "auto_exampl": 10, "00": 10, "mb": 10, "focus": 12, "score": 12, "engin": 12, "neural": 12, "network": 12, "exchang": 12, "aka": 12, "m": 12, "tutori": 12, "easi": 13, "high": 13, "rather": 13, "servic": 13, "At": 13, "briefli": 13, "ll": 13, "famou": 13, "_logist": 13, "444": 13, "convergencewarn": 13, "lbfg": 13, "converg": 13, "stop": 13, "NO": 13, "iter": 13, "reach": 13, "limit": 13, "increas": 13, "max_it": 13, "scale": 13, "shown": 13, "stabl": 13, "preprocess": 13, "solver": 13, "n_iter_i": 13, "_check_optimize_result": 13, "commonli": 13, "compos": 13}, "objects": {"onnxruntime": [[0, 0, 1, "", "IOBinding"], [0, 0, 1, "", "InferenceSession"], [0, 0, 1, "", "ModelMetadata"], [0, 0, 1, "", "NodeArg"], [0, 0, 1, "", "OrtDevice"], [0, 0, 1, "", "OrtValue"], [0, 0, 1, "", "RunOptions"], [0, 0, 1, "", "SessionOptions"], [0, 0, 1, "", "SparseTensor"]], "onnxruntime.IOBinding": [[0, 1, 1, "", "bind_cpu_input"], [0, 1, 
1, "", "bind_input"], [0, 1, 1, "", "bind_ortvalue_input"], [0, 1, 1, "", "bind_ortvalue_output"], [0, 1, 1, "", "bind_output"], [0, 1, 1, "", "copy_outputs_to_cpu"], [0, 1, 1, "", "get_outputs"]], "onnxruntime.InferenceSession": [[0, 1, 1, "", "disable_fallback"], [0, 1, 1, "", "enable_fallback"], [0, 1, 1, "", "end_profiling"], [0, 1, 1, "", "get_inputs"], [0, 1, 1, "", "get_modelmeta"], [0, 1, 1, "", "get_outputs"], [0, 1, 1, "", "get_overridable_initializers"], [0, 1, 1, "", "get_profiling_start_time_ns"], [0, 1, 1, "", "get_provider_options"], [0, 1, 1, "", "get_providers"], [0, 1, 1, "", "get_session_options"], [0, 1, 1, "", "io_binding"], [0, 1, 1, "", "run"], [0, 1, 1, "", "run_with_iobinding"], [0, 1, 1, "", "run_with_ort_values"], [0, 1, 1, "", "set_providers"]], "onnxruntime.ModelMetadata": [[0, 2, 1, "", "custom_metadata_map"], [0, 2, 1, "", "description"], [0, 2, 1, "", "domain"], [0, 2, 1, "", "graph_description"], [0, 2, 1, "", "graph_name"], [0, 2, 1, "", "producer_name"], [0, 2, 1, "", "version"]], "onnxruntime.NodeArg": [[0, 2, 1, "", "name"], [0, 2, 1, "", "shape"], [0, 2, 1, "", "type"]], "onnxruntime.OrtValue": [[0, 1, 1, "", "as_sparse_tensor"], [0, 1, 1, "", "data_ptr"], [0, 1, 1, "", "data_type"], [0, 1, 1, "", "device_name"], [0, 1, 1, "", "element_type"], [0, 1, 1, "", "has_value"], [0, 1, 1, "", "is_sparse_tensor"], [0, 1, 1, "", "is_tensor"], [0, 1, 1, "", "is_tensor_sequence"], [0, 1, 1, "", "numpy"], [0, 1, 1, "", "ort_value_from_sparse_tensor"], [0, 1, 1, "", "ortvalue_from_numpy"], [0, 1, 1, "", "ortvalue_from_shape_and_type"], [0, 1, 1, "", "shape"], [0, 1, 1, "", "update_inplace"]], "onnxruntime.RunOptions": [[0, 1, 1, "", "add_run_config_entry"], [0, 1, 1, "", "get_run_config_entry"], [0, 2, 1, "", "log_severity_level"], [0, 2, 1, "", "log_verbosity_level"], [0, 2, 1, "", "logid"], [0, 2, 1, "", "only_execute_path_to_fetches"], [0, 2, 1, "", "terminate"]], "onnxruntime.SessionOptions": [[0, 1, 1, "", "add_external_initializers"], [0, 1, 1, "", "add_free_dimension_override_by_denotation"], [0, 1, 1, "", "add_free_dimension_override_by_name"], [0, 1, 1, "", "add_initializer"], [0, 1, 1, "", "add_session_config_entry"], [0, 2, 1, "", "enable_cpu_mem_arena"], [0, 2, 1, "", "enable_mem_pattern"], [0, 2, 1, "", "enable_mem_reuse"], [0, 2, 1, "", "enable_profiling"], [0, 2, 1, "", "execution_mode"], [0, 2, 1, "", "execution_order"], [0, 1, 1, "", "get_session_config_entry"], [0, 2, 1, "", "graph_optimization_level"], [0, 2, 1, "", "inter_op_num_threads"], [0, 2, 1, "", "intra_op_num_threads"], [0, 2, 1, "", "log_severity_level"], [0, 2, 1, "", "log_verbosity_level"], [0, 2, 1, "", "logid"], [0, 2, 1, "", "optimized_model_filepath"], [0, 2, 1, "", "profile_file_prefix"], [0, 1, 1, "", "register_custom_ops_library"], [0, 2, 1, "", "use_deterministic_compute"]], "onnxruntime.SparseTensor": [[0, 1, 1, "", "as_blocksparse_view"], [0, 1, 1, "", "as_coo_view"], [0, 1, 1, "", "as_csrc_view"], [0, 1, 1, "", "data_type"], [0, 1, 1, "", "dense_shape"], [0, 1, 1, "", "device_name"], [0, 1, 1, "", "format"], [0, 1, 1, "", "sparse_coo_from_numpy"], [0, 1, 1, "", "sparse_csr_from_numpy"], [0, 1, 1, "", "to_cuda"], [0, 1, 1, "", "values"]], "onnxruntime.backend": [[0, 3, 1, "", "is_compatible"], [0, 3, 1, "", "prepare"], [0, 3, 1, "", "run"], [0, 3, 1, "", "supports_device"]]}, "objtypes": {"0": "py:class", "1": "py:method", "2": "py:property", "3": "py:function"}, "objnames": {"0": ["py", "class", "Python class"], "1": ["py", "method", "Python method"], "2": ["py", "property", 
"Python property"], "3": ["py", "function", "Python function"]}, "titleterms": {"api": 0, "overview": 0, "load": [0, 5, 13], "run": [0, 13], "model": [0, 5, 7, 8, 13], "data": 0, "input": 0, "output": 0, "cpu": 0, "devic": 0, "detail": 0, "inferencesess": 0, "option": 0, "runopt": 0, "sessionopt": 0, "ortvalu": 0, "sparsetensor": 0, "iobind": 0, "ortdevic": 0, "intern": 0, "class": 0, "modelmetadata": 0, "nodearg": 0, "backend": [0, 2], "galleri": 1, "exampl": 1, "onnx": [2, 4, 5, 7, 9, 12, 13], "runtim": [2, 4, 5, 9, 12, 13], "common": 3, "error": 3, "onnxruntim": 3, "train": [4, 9, 13], "convert": [4, 9, 13], "predict": [4, 5, 9], "pipelin": [4, 7], "convers": [4, 9], "format": [4, 7, 9, 13], "veri": 5, "simpl": [5, 8], "metadata": 6, "draw": 7, "retriev": 7, "json": 7, "profil": 8, "execut": 8, "logist": 9, "regress": 9, "probabl": 9, "benchmark": 9, "randomforest": 9, "comput": 10, "time": 10, "python": 12, "bind": 12, "tutori": 13, "step": 13, "1": 13, "us": 13, "your": 13, "favorit": 13, "framework": 13, "2": 13, "export": 13, "3": 13}, "envversion": {"sphinx.domains.c": 2, "sphinx.domains.changeset": 1, "sphinx.domains.citation": 1, "sphinx.domains.cpp": 8, "sphinx.domains.index": 1, "sphinx.domains.javascript": 2, "sphinx.domains.math": 2, "sphinx.domains.python": 3, "sphinx.domains.rst": 2, "sphinx.domains.std": 2, "sphinx.ext.intersphinx": 1, "sphinx.ext.viewcode": 1, "sphinx": 57}, "alltitles": {"API": [[0, "api"]], "API Overview": [[0, "api-overview"]], "Load and run a model": [[0, "load-and-run-a-model"]], "Data inputs and outputs": [[0, "data-inputs-and-outputs"]], "Data on CPU": [[0, "data-on-cpu"]], "Data on device": [[0, "data-on-device"]], "API Details": [[0, "api-details"]], "InferenceSession": [[0, "inferencesession"]], "Options": [[0, "options"]], "RunOptions": [[0, "runoptions"]], "SessionOptions": [[0, "sessionoptions"]], "Data": [[0, "data"]], "OrtValue": [[0, "ortvalue"]], "SparseTensor": [[0, "sparsetensor"]], "Devices": [[0, "devices"]], "IOBinding": [[0, "iobinding"]], "OrtDevice": [[0, "ortdevice"]], "Internal classes": [[0, "internal-classes"]], "ModelMetadata": [[0, "modelmetadata"]], "NodeArg": [[0, "nodearg"]], "Backend": [[0, "backend"]], "Gallery of examples": [[1, "gallery-of-examples"]], "ONNX Runtime Backend for ONNX": [[2, "onnx-runtime-backend-for-onnx"]], "Common errors with onnxruntime": [[3, "common-errors-with-onnxruntime"]], "Train, convert and predict with ONNX Runtime": [[4, "train-convert-and-predict-with-onnx-runtime"], [9, "train-convert-and-predict-with-onnx-runtime"]], "Train a pipeline": [[4, "train-a-pipeline"]], "Conversion to ONNX format": [[4, "conversion-to-onnx-format"], [9, "conversion-to-onnx-format"]], "Load and predict with ONNX Runtime and a very simple model": [[5, "load-and-predict-with-onnx-runtime-and-a-very-simple-model"]], "Metadata": [[6, "metadata"]], "Draw a pipeline": [[7, "draw-a-pipeline"]], "Retrieve a model in JSON format": [[7, "retrieve-a-model-in-json-format"]], "Draw a model with ONNX": [[7, "draw-a-model-with-onnx"]], "Profile the execution of a simple model": [[8, "profile-the-execution-of-a-simple-model"]], "Train a logistic regression": [[9, "train-a-logistic-regression"]], "Probabilities": [[9, "probabilities"]], "Benchmark with RandomForest": [[9, "benchmark-with-randomforest"]], "Computation times": [[10, "computation-times"]], "Python Bindings for ONNX Runtime": [[12, "python-bindings-for-onnx-runtime"]], "Tutorial": [[13, "tutorial"]], "Step 1: Train a model using your favorite framework": [[13, 
"step-1-train-a-model-using-your-favorite-framework"]], "Step 2: Convert or export the model into ONNX format": [[13, "step-2-convert-or-export-the-model-into-onnx-format"]], "Step 3: Load and run the model using ONNX Runtime": [[13, "step-3-load-and-run-the-model-using-onnx-runtime"]]}, "indexentries": {"iobinding (class in onnxruntime)": [[0, "onnxruntime.IOBinding"]], "inferencesession (class in onnxruntime)": [[0, "onnxruntime.InferenceSession"]], "modelmetadata (class in onnxruntime)": [[0, "onnxruntime.ModelMetadata"]], "nodearg (class in onnxruntime)": [[0, "onnxruntime.NodeArg"]], "ortdevice (class in onnxruntime)": [[0, "onnxruntime.OrtDevice"]], "ortvalue (class in onnxruntime)": [[0, "onnxruntime.OrtValue"]], "runoptions (class in onnxruntime)": [[0, "onnxruntime.RunOptions"]], "sessionoptions (class in onnxruntime)": [[0, "onnxruntime.SessionOptions"]], "sparsetensor (class in onnxruntime)": [[0, "onnxruntime.SparseTensor"]], "add_external_initializers() (onnxruntime.sessionoptions method)": [[0, "onnxruntime.SessionOptions.add_external_initializers"]], "add_free_dimension_override_by_denotation() (onnxruntime.sessionoptions method)": [[0, "onnxruntime.SessionOptions.add_free_dimension_override_by_denotation"]], "add_free_dimension_override_by_name() (onnxruntime.sessionoptions method)": [[0, "onnxruntime.SessionOptions.add_free_dimension_override_by_name"]], "add_initializer() (onnxruntime.sessionoptions method)": [[0, "onnxruntime.SessionOptions.add_initializer"]], "add_run_config_entry() (onnxruntime.runoptions method)": [[0, "onnxruntime.RunOptions.add_run_config_entry"]], "add_session_config_entry() (onnxruntime.sessionoptions method)": [[0, "onnxruntime.SessionOptions.add_session_config_entry"]], "as_blocksparse_view() (onnxruntime.sparsetensor method)": [[0, "onnxruntime.SparseTensor.as_blocksparse_view"]], "as_coo_view() (onnxruntime.sparsetensor method)": [[0, "onnxruntime.SparseTensor.as_coo_view"]], "as_csrc_view() (onnxruntime.sparsetensor method)": [[0, "onnxruntime.SparseTensor.as_csrc_view"]], "as_sparse_tensor() (onnxruntime.ortvalue method)": [[0, "onnxruntime.OrtValue.as_sparse_tensor"]], "bind_cpu_input() (onnxruntime.iobinding method)": [[0, "onnxruntime.IOBinding.bind_cpu_input"]], "bind_input() (onnxruntime.iobinding method)": [[0, "onnxruntime.IOBinding.bind_input"]], "bind_ortvalue_input() (onnxruntime.iobinding method)": [[0, "onnxruntime.IOBinding.bind_ortvalue_input"]], "bind_ortvalue_output() (onnxruntime.iobinding method)": [[0, "onnxruntime.IOBinding.bind_ortvalue_output"]], "bind_output() (onnxruntime.iobinding method)": [[0, "onnxruntime.IOBinding.bind_output"]], "copy_outputs_to_cpu() (onnxruntime.iobinding method)": [[0, "onnxruntime.IOBinding.copy_outputs_to_cpu"]], "custom_metadata_map (onnxruntime.modelmetadata property)": [[0, "onnxruntime.ModelMetadata.custom_metadata_map"]], "data_ptr() (onnxruntime.ortvalue method)": [[0, "onnxruntime.OrtValue.data_ptr"]], "data_type() (onnxruntime.ortvalue method)": [[0, "onnxruntime.OrtValue.data_type"]], "data_type() (onnxruntime.sparsetensor method)": [[0, "onnxruntime.SparseTensor.data_type"]], "dense_shape() (onnxruntime.sparsetensor method)": [[0, "onnxruntime.SparseTensor.dense_shape"]], "description (onnxruntime.modelmetadata property)": [[0, "onnxruntime.ModelMetadata.description"]], "device_name() (onnxruntime.ortvalue method)": [[0, "onnxruntime.OrtValue.device_name"]], "device_name() (onnxruntime.sparsetensor method)": [[0, "onnxruntime.SparseTensor.device_name"]], "disable_fallback() 
(onnxruntime.inferencesession method)": [[0, "onnxruntime.InferenceSession.disable_fallback"]], "domain (onnxruntime.modelmetadata property)": [[0, "onnxruntime.ModelMetadata.domain"]], "element_type() (onnxruntime.ortvalue method)": [[0, "onnxruntime.OrtValue.element_type"]], "enable_cpu_mem_arena (onnxruntime.sessionoptions property)": [[0, "onnxruntime.SessionOptions.enable_cpu_mem_arena"]], "enable_fallback() (onnxruntime.inferencesession method)": [[0, "onnxruntime.InferenceSession.enable_fallback"]], "enable_mem_pattern (onnxruntime.sessionoptions property)": [[0, "onnxruntime.SessionOptions.enable_mem_pattern"]], "enable_mem_reuse (onnxruntime.sessionoptions property)": [[0, "onnxruntime.SessionOptions.enable_mem_reuse"]], "enable_profiling (onnxruntime.sessionoptions property)": [[0, "onnxruntime.SessionOptions.enable_profiling"]], "end_profiling() (onnxruntime.inferencesession method)": [[0, "onnxruntime.InferenceSession.end_profiling"]], "execution_mode (onnxruntime.sessionoptions property)": [[0, "onnxruntime.SessionOptions.execution_mode"]], "execution_order (onnxruntime.sessionoptions property)": [[0, "onnxruntime.SessionOptions.execution_order"]], "format() (onnxruntime.sparsetensor method)": [[0, "onnxruntime.SparseTensor.format"]], "get_inputs() (onnxruntime.inferencesession method)": [[0, "onnxruntime.InferenceSession.get_inputs"]], "get_modelmeta() (onnxruntime.inferencesession method)": [[0, "onnxruntime.InferenceSession.get_modelmeta"]], "get_outputs() (onnxruntime.iobinding method)": [[0, "onnxruntime.IOBinding.get_outputs"]], "get_outputs() (onnxruntime.inferencesession method)": [[0, "onnxruntime.InferenceSession.get_outputs"]], "get_overridable_initializers() (onnxruntime.inferencesession method)": [[0, "onnxruntime.InferenceSession.get_overridable_initializers"]], "get_profiling_start_time_ns() (onnxruntime.inferencesession method)": [[0, "onnxruntime.InferenceSession.get_profiling_start_time_ns"]], "get_provider_options() (onnxruntime.inferencesession method)": [[0, "onnxruntime.InferenceSession.get_provider_options"]], "get_providers() (onnxruntime.inferencesession method)": [[0, "onnxruntime.InferenceSession.get_providers"]], "get_run_config_entry() (onnxruntime.runoptions method)": [[0, "onnxruntime.RunOptions.get_run_config_entry"]], "get_session_config_entry() (onnxruntime.sessionoptions method)": [[0, "onnxruntime.SessionOptions.get_session_config_entry"]], "get_session_options() (onnxruntime.inferencesession method)": [[0, "onnxruntime.InferenceSession.get_session_options"]], "graph_description (onnxruntime.modelmetadata property)": [[0, "onnxruntime.ModelMetadata.graph_description"]], "graph_name (onnxruntime.modelmetadata property)": [[0, "onnxruntime.ModelMetadata.graph_name"]], "graph_optimization_level (onnxruntime.sessionoptions property)": [[0, "onnxruntime.SessionOptions.graph_optimization_level"]], "has_value() (onnxruntime.ortvalue method)": [[0, "onnxruntime.OrtValue.has_value"]], "inter_op_num_threads (onnxruntime.sessionoptions property)": [[0, "onnxruntime.SessionOptions.inter_op_num_threads"]], "intra_op_num_threads (onnxruntime.sessionoptions property)": [[0, "onnxruntime.SessionOptions.intra_op_num_threads"]], "io_binding() (onnxruntime.inferencesession method)": [[0, "onnxruntime.InferenceSession.io_binding"]], "is_compatible() (in module onnxruntime.backend)": [[0, "onnxruntime.backend.is_compatible"]], "is_sparse_tensor() (onnxruntime.ortvalue method)": [[0, "onnxruntime.OrtValue.is_sparse_tensor"]], "is_tensor() (onnxruntime.ortvalue 
method)": [[0, "onnxruntime.OrtValue.is_tensor"]], "is_tensor_sequence() (onnxruntime.ortvalue method)": [[0, "onnxruntime.OrtValue.is_tensor_sequence"]], "log_severity_level (onnxruntime.runoptions property)": [[0, "onnxruntime.RunOptions.log_severity_level"]], "log_severity_level (onnxruntime.sessionoptions property)": [[0, "onnxruntime.SessionOptions.log_severity_level"]], "log_verbosity_level (onnxruntime.runoptions property)": [[0, "onnxruntime.RunOptions.log_verbosity_level"]], "log_verbosity_level (onnxruntime.sessionoptions property)": [[0, "onnxruntime.SessionOptions.log_verbosity_level"]], "logid (onnxruntime.runoptions property)": [[0, "onnxruntime.RunOptions.logid"]], "logid (onnxruntime.sessionoptions property)": [[0, "onnxruntime.SessionOptions.logid"]], "name (onnxruntime.nodearg property)": [[0, "onnxruntime.NodeArg.name"]], "numpy() (onnxruntime.ortvalue method)": [[0, "onnxruntime.OrtValue.numpy"]], "only_execute_path_to_fetches (onnxruntime.runoptions property)": [[0, "onnxruntime.RunOptions.only_execute_path_to_fetches"]], "optimized_model_filepath (onnxruntime.sessionoptions property)": [[0, "onnxruntime.SessionOptions.optimized_model_filepath"]], "ort_value_from_sparse_tensor() (onnxruntime.ortvalue static method)": [[0, "onnxruntime.OrtValue.ort_value_from_sparse_tensor"]], "ortvalue_from_numpy() (onnxruntime.ortvalue static method)": [[0, "onnxruntime.OrtValue.ortvalue_from_numpy"]], "ortvalue_from_shape_and_type() (onnxruntime.ortvalue static method)": [[0, "onnxruntime.OrtValue.ortvalue_from_shape_and_type"]], "prepare() (in module onnxruntime.backend)": [[0, "onnxruntime.backend.prepare"]], "producer_name (onnxruntime.modelmetadata property)": [[0, "onnxruntime.ModelMetadata.producer_name"]], "profile_file_prefix (onnxruntime.sessionoptions property)": [[0, "onnxruntime.SessionOptions.profile_file_prefix"]], "register_custom_ops_library() (onnxruntime.sessionoptions method)": [[0, "onnxruntime.SessionOptions.register_custom_ops_library"]], "run() (in module onnxruntime.backend)": [[0, "onnxruntime.backend.run"]], "run() (onnxruntime.inferencesession method)": [[0, "onnxruntime.InferenceSession.run"]], "run_with_iobinding() (onnxruntime.inferencesession method)": [[0, "onnxruntime.InferenceSession.run_with_iobinding"]], "run_with_ort_values() (onnxruntime.inferencesession method)": [[0, "onnxruntime.InferenceSession.run_with_ort_values"]], "set_providers() (onnxruntime.inferencesession method)": [[0, "onnxruntime.InferenceSession.set_providers"]], "shape (onnxruntime.nodearg property)": [[0, "onnxruntime.NodeArg.shape"]], "shape() (onnxruntime.ortvalue method)": [[0, "onnxruntime.OrtValue.shape"]], "sparse_coo_from_numpy() (onnxruntime.sparsetensor static method)": [[0, "onnxruntime.SparseTensor.sparse_coo_from_numpy"]], "sparse_csr_from_numpy() (onnxruntime.sparsetensor static method)": [[0, "onnxruntime.SparseTensor.sparse_csr_from_numpy"]], "supports_device() (in module onnxruntime.backend)": [[0, "onnxruntime.backend.supports_device"]], "terminate (onnxruntime.runoptions property)": [[0, "onnxruntime.RunOptions.terminate"]], "to_cuda() (onnxruntime.sparsetensor method)": [[0, "onnxruntime.SparseTensor.to_cuda"]], "type (onnxruntime.nodearg property)": [[0, "onnxruntime.NodeArg.type"]], "update_inplace() (onnxruntime.ortvalue method)": [[0, "onnxruntime.OrtValue.update_inplace"]], "use_deterministic_compute (onnxruntime.sessionoptions property)": [[0, "onnxruntime.SessionOptions.use_deterministic_compute"]], "values() (onnxruntime.sparsetensor method)": [[0, 
"onnxruntime.SparseTensor.values"]], "version (onnxruntime.modelmetadata property)": [[0, "onnxruntime.ModelMetadata.version"]]}}) \ No newline at end of file diff --git a/docs/api/python/sources/README.rst.txt b/docs/api/python/sources/README.rst.txt deleted file mode 100644 index 8d58e4bc0eab9..0000000000000 --- a/docs/api/python/sources/README.rst.txt +++ /dev/null @@ -1,75 +0,0 @@ -ONNX Runtime -============ - -ONNX Runtime is a performance-focused scoring engine for Open Neural Network Exchange (ONNX) models. -For more information on ONNX Runtime, please see `aka.ms/onnxruntime `_ or the `Github project `_. - - -Changes -------- - -1.7.0 -^^^^^ - -Release Notes : https://github.com/Microsoft/onnxruntime/releases/tag/v1.7.0 - -1.6.0 -^^^^^ - -Release Notes : https://github.com/Microsoft/onnxruntime/releases/tag/v1.6.0 - -1.5.3 -^^^^^ - -Release Notes : https://github.com/Microsoft/onnxruntime/releases/tag/v1.5.3 - -1.5.2 -^^^^^ - -Release Notes : https://github.com/Microsoft/onnxruntime/releases/tag/v1.5.2 - -1.5.1 -^^^^^ - -Release Notes : https://github.com/Microsoft/onnxruntime/releases/tag/v1.5.1 - - -1.4.0 -^^^^^ - -Release Notes : https://github.com/Microsoft/onnxruntime/releases/tag/v1.4.0 - -1.3.1 -^^^^^ - -Release Notes : https://github.com/Microsoft/onnxruntime/releases/tag/v1.3.1 - -1.3.0 -^^^^^ - -Release Notes : https://github.com/Microsoft/onnxruntime/releases/tag/v1.3.0 - -1.2.0 -^^^^^ - -Release Notes : https://github.com/Microsoft/onnxruntime/releases/tag/v1.2.0 - -1.1.0 -^^^^^ - -Release Notes : https://github.com/Microsoft/onnxruntime/releases/tag/v1.1.0 - -1.0.0 -^^^^^ - -Release Notes : https://github.com/Microsoft/onnxruntime/releases/tag/v1.0.0 - -0.5.0 -^^^^^ - -Release Notes : https://github.com/Microsoft/onnxruntime/releases/tag/v0.5.0 - -0.4.0 -^^^^^ - -Release Notes : https://github.com/Microsoft/onnxruntime/releases/tag/v0.4.0 diff --git a/docs/api/python/sources/api_summary.rst.txt b/docs/api/python/sources/api_summary.rst.txt index ce764dafce10d..4f82e1242d6de 100644 --- a/docs/api/python/sources/api_summary.rst.txt +++ b/docs/api/python/sources/api_summary.rst.txt @@ -1,88 +1,127 @@ -=========== -API Summary -=========== - -Summary of public functions and classes exposed -in *ONNX Runtime*. +=== +API +=== .. contents:: :local: -OrtValue -========= -*ONNX Runtime* works with native Python data structures which are mapped into ONNX data formats : -Numpy arrays (tensors), dictionaries (maps), and a list of Numpy arrays (sequences). -The data backing these are on CPU. +API Overview +============ -*ONNX Runtime* supports a custom data structure that supports all ONNX data formats that allows users -to place the data backing these on a device, for example, on a CUDA supported device. This allows for -interesting *IOBinding* scenarios (discussed below). In addition, *ONNX Runtime* supports directly -working with *OrtValue* (s) while inferencing a model if provided as part of the input feed. +*ONNX Runtime* loads and runs inference on a model in ONNX graph format, or ORT format (for memory and disk constrained environments). -Below is an example showing creation of an *OrtValue* from a Numpy array while placing its backing memory -on a CUDA device: +The data consumed and produced by the model can be specified and accessed in the way that best matches your scenario. + +Load and run a model +-------------------- + +InferenceSession is the main class of ONNX Runtime. 
It is used to load and run an ONNX model, +as well as specify environment and application configuration options. .. code-block:: python - #X is numpy array on cpu, create an OrtValue and place it on cuda device id = 0 - ortvalue = onnxruntime.OrtValue.ortvalue_from_numpy(X, 'cuda', 0) - ortvalue.device_name() # 'cuda' - ortvalue.shape() # shape of the numpy array X - ortvalue.data_type() # 'tensor(float)' - ortvalue.is_tensor() # 'True' + session = onnxruntime.InferenceSession('model.onnx') + + # output_names: list of outputs to compute (or None for all); inputs: dict mapping input names to numpy arrays + outputs = session.run(output_names, inputs) + +ONNX and ORT format models consist of a graph of computations, modeled as operators, +and implemented as optimized operator kernels for different hardware targets. +ONNX Runtime orchestrates the execution of operator kernels via `execution providers`. +An execution provider contains the set of kernels for a specific execution target (CPU, GPU, IoT etc.). +Execution providers are configured using the `providers` parameter. Kernels from different execution +providers are chosen in the priority order given in the list of providers. In the example below, +if there is a kernel in the CUDA execution provider, ONNX Runtime executes that on GPU. If not, +the kernel is executed on CPU. + +.. code-block:: python + + session = onnxruntime.InferenceSession(model, + providers=['CUDAExecutionProvider', 'CPUExecutionProvider']) + +The list of available execution providers can be found here: `Execution Providers `_. + +Since ONNX Runtime 1.10, you must explicitly specify the execution provider for your target. +Running on CPU is the only time the API allows no explicit setting of the `providers` parameter. +In the examples that follow, the `CUDAExecutionProvider` and `CPUExecutionProvider` are used, assuming the application is running on NVIDIA GPUs. +Replace these with the execution provider specific to your environment. + +You can supply other session configurations via the `session options` parameter. For example, to enable +profiling on the session: + +.. code-block:: python + + options = onnxruntime.SessionOptions() + options.enable_profiling = True + session = onnxruntime.InferenceSession('model.onnx', sess_options=options, providers=['CUDAExecutionProvider', 'CPUExecutionProvider']) + + +Data inputs and outputs +----------------------- + +The ONNX Runtime Inference Session consumes and produces data using its OrtValue class. + +Data on CPU +^^^^^^^^^^^ + +On CPU (the default), OrtValues can be mapped to and from native Python data structures: numpy arrays, dictionaries and lists of +numpy arrays. + +.. code-block:: python + + # X is numpy array on cpu + ortvalue = onnxruntime.OrtValue.ortvalue_from_numpy(X) + ortvalue.device_name() # 'cpu' + ortvalue.shape() # shape of the numpy array X + ortvalue.data_type() # 'tensor(float)' + ortvalue.is_tensor() # True np.array_equal(ortvalue.numpy(), X) # 'True' - #ortvalue can be provided as part of the input feed to a model - ses = onnxruntime.InferenceSession('model.onnx') - res = sess.run(["Y"], {"X": ortvalue}) + # ortvalue can be provided as part of the input feed to a model + session = onnxruntime.InferenceSession('model.onnx', providers=['CUDAExecutionProvider', 'CPUExecutionProvider']) + results = session.run(["Y"], {"X": ortvalue}) + +By default, *ONNX Runtime* always places input(s) and output(s) on CPU. Having the data on CPU +may not be optimal if the input or output is consumed and produced on a device +other than CPU because it introduces data copy between CPU and the device.
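Before binding anything to a device, it can help to confirm which execution providers the session actually selected and what inputs and outputs the model exposes. The sketch below is illustrative and only assumes a placeholder *model.onnx*; it uses the `get_providers`, `get_inputs` and `get_outputs` methods documented under `InferenceSession` later on this page.

.. code-block:: python

    import onnxruntime

    session = onnxruntime.InferenceSession(
        'model.onnx', providers=['CUDAExecutionProvider', 'CPUExecutionProvider'])

    # Providers actually in use, in priority order; ONNX Runtime falls back
    # to CPU when the CUDA provider is unavailable in this build/environment.
    print(session.get_providers())

    # NodeArg metadata (name, shape, type) for each model input and output.
    for node in session.get_inputs():
        print(node.name, node.shape, node.type)
    for node in session.get_outputs():
        print(node.name, node.shape, node.type)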
-IOBinding -========= -By default, *ONNX Runtime* always places input(s) and output(s) on CPU, which -is not optimal if the input or output is consumed and produced on a device -other than CPU because it introduces data copy between CPU and the device. -*ONNX Runtime* provides a feature, *IO Binding*, which addresses this issue by -enabling users to specify which device to place input(s) and output(s) on. -Here are scenarios to use this feature. +Data on device +^^^^^^^^^^^^^^ -(In the following code snippets, *model.onnx* is the model to execute, -*X* is the input data to feed, and *Y* is the output data.) +*ONNX Runtime* supports a custom data structure, covering all ONNX data formats, that allows users +to place the data backing it on a device, for example, on a CUDA supported device. In ONNX Runtime, +this is called `IOBinding`. -Scenario 1: +To use the `IOBinding` feature, replace `InferenceSession.run()` with `InferenceSession.run_with_iobinding()`. -A graph is executed on a device other than CPU, for instance CUDA. Users can -use IOBinding to put input on CUDA as the follows. +A graph is executed on a device other than CPU, for instance CUDA. Users can +use IOBinding to copy the data onto the GPU. .. code-block:: python - #X is numpy array on cpu - session = onnxruntime.InferenceSession('model.onnx') + # X is numpy array on cpu + session = onnxruntime.InferenceSession('model.onnx', providers=['CUDAExecutionProvider', 'CPUExecutionProvider']) io_binding = session.io_binding() - # OnnxRuntime will copy the data over to the CUDA device if 'input' is consumed by nodes on the CUDA device + # OnnxRuntime will copy the data over to the CUDA device if 'input' is consumed by nodes on the CUDA device io_binding.bind_cpu_input('input', X) io_binding.bind_output('output') session.run_with_iobinding(io_binding) Y = io_binding.copy_outputs_to_cpu()[0] -Scenario 2: - The input data is on a device, users directly use the input. The output data is on CPU. .. code-block:: python - #X is numpy array on cpu + # X is numpy array on cpu X_ortvalue = onnxruntime.OrtValue.ortvalue_from_numpy(X, 'cuda', 0) - session = onnxruntime.InferenceSession('model.onnx') + session = onnxruntime.InferenceSession('model.onnx', providers=['CUDAExecutionProvider', 'CPUExecutionProvider']) io_binding = session.io_binding() io_binding.bind_input(name='input', device_type=X_ortvalue.device_name(), device_id=0, element_type=np.float32, shape=X_ortvalue.shape(), buffer_ptr=X_ortvalue.data_ptr()) io_binding.bind_output('output') session.run_with_iobinding(io_binding) Y = io_binding.copy_outputs_to_cpu()[0]
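When explicit binding is not required, a device-resident *OrtValue* can also be passed straight into the input feed. The following is a minimal sketch under the same placeholder model and input/output names as above; `run_with_ort_values` returns the outputs as *OrtValue* (s) rather than numpy arrays.

.. code-block:: python

    # X is numpy array on cpu; copy its contents to CUDA device 0
    X_ortvalue = onnxruntime.OrtValue.ortvalue_from_numpy(X, 'cuda', 0)
    session = onnxruntime.InferenceSession(
        'model.onnx', providers=['CUDAExecutionProvider', 'CPUExecutionProvider'])

    outputs = session.run_with_ort_values(['output'], {'input': X_ortvalue})
    Y = outputs[0].numpy()  # materialize the first output as a numpy array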
-Scenario 3: - The input data and output data are both on a device, users directly use the input and also place output on the device. .. code-block:: python @@ -90,13 +129,12 @@ The input data and output data are both on a device, users directly use the inpu #X is numpy array on cpu X_ortvalue = onnxruntime.OrtValue.ortvalue_from_numpy(X, 'cuda', 0) Y_ortvalue = onnxruntime.OrtValue.ortvalue_from_shape_and_type([3, 2], np.float32, 'cuda', 0) # Change the shape to the actual shape of the output being bound - session = onnxruntime.InferenceSession('model.onnx') + session = onnxruntime.InferenceSession('model.onnx', providers=['CUDAExecutionProvider', 'CPUExecutionProvider']) io_binding = session.io_binding() io_binding.bind_input(name='input', device_type=X_ortvalue.device_name(), device_id=0, element_type=np.float32, shape=X_ortvalue.shape(), buffer_ptr=X_ortvalue.data_ptr()) io_binding.bind_output(name='output', device_type=Y_ortvalue.device_name(), device_id=0, element_type=np.float32, shape=Y_ortvalue.shape(), buffer_ptr=Y_ortvalue.data_ptr()) session.run_with_iobinding(io_binding) -Scenario 4: Users can request *ONNX Runtime* to allocate an output on a device. This is particularly useful for dynamic shaped outputs. Users can use the *get_outputs()* API to get access to the *OrtValue* (s) corresponding to the allocated output(s). Users can thus consume the *ONNX Runtime* allocated memory for the output as an OrtValue. @@ -106,7 +144,7 @@ Users can thus consume the *ONNX Runtime* allocated memory for the output as an #X is numpy array on cpu X_ortvalue = onnxruntime.OrtValue.ortvalue_from_numpy(X, 'cuda', 0) - session = onnxruntime.InferenceSession('model.onnx') + session = onnxruntime.InferenceSession('model.onnx', providers=['CUDAExecutionProvider', 'CPUExecutionProvider']) io_binding = session.io_binding() io_binding.bind_input(name='input', device_type=X_ortvalue.device_name(), device_id=0, element_type=np.float32, shape=X_ortvalue.shape(), buffer_ptr=X_ortvalue.data_ptr()) #Request ONNX Runtime to bind and allocate memory on CUDA for 'output' @@ -116,7 +154,7 @@ Users can thus consume the *ONNX Runtime* allocated memory for the output as an ort_output = io_binding.get_outputs()[0]
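Once the run completes, the allocated output can be inspected in place or copied back to the host. A short sketch, under the same assumptions as the snippet above; both calls appear in the `IOBinding` API documented below.

.. code-block:: python

    # Inspect the device-allocated output without copying it...
    ort_output = io_binding.get_outputs()[0]
    print(ort_output.device_name())  # 'cuda'
    print(ort_output.shape())

    # ...or copy every bound output back to a numpy array on CPU.
    Y = io_binding.copy_outputs_to_cpu()[0]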
-Scenario 5: +In addition, *ONNX Runtime* supports directly working with *OrtValue* (s) while inferencing a model if provided as part of the input feed. Users can bind *OrtValue* (s) directly. @@ -126,51 +164,119 @@ Users can bind *OrtValue* (s) directly. .. code-block:: python #X is numpy array on cpu X_ortvalue = onnxruntime.OrtValue.ortvalue_from_numpy(X, 'cuda', 0) Y_ortvalue = onnxruntime.OrtValue.ortvalue_from_shape_and_type([3, 2], np.float32, 'cuda', 0) # Change the shape to the actual shape of the output being bound - session = onnxruntime.InferenceSession('model.onnx') + session = onnxruntime.InferenceSession('model.onnx', providers=['CUDAExecutionProvider', 'CPUExecutionProvider']) io_binding = session.io_binding() io_binding.bind_ortvalue_input('input', X_ortvalue) io_binding.bind_ortvalue_output('output', Y_ortvalue) session.run_with_iobinding(io_binding) -Device -====== -The package is compiled for a specific device, GPU or CPU. -The CPU implementation includes optimizations -such as MKL (Math Kernel Libary). The following function -indicates the chosen option: +You can also bind inputs and outputs directly to a PyTorch tensor. -.. autofunction:: onnxruntime.get_device +.. code-block:: python -Examples and datasets -===================== + # X is a PyTorch tensor on device (assumes numpy as np and torch are imported) + session = onnxruntime.InferenceSession('model.onnx', providers=['CUDAExecutionProvider', 'CPUExecutionProvider']) + binding = session.io_binding() + + X_tensor = X.contiguous() + + binding.bind_input( + name='X', + device_type='cuda', + device_id=0, + element_type=np.float32, + shape=tuple(X_tensor.shape), + buffer_ptr=X_tensor.data_ptr(), + ) + + # Allocate the PyTorch tensor for the model output + Y_shape = ... # You need to specify the output PyTorch tensor shape + Y_tensor = torch.empty(Y_shape, dtype=torch.float32, device='cuda:0').contiguous() + binding.bind_output( + name='Y', + device_type='cuda', + device_id=0, + element_type=np.float32, + shape=tuple(Y_tensor.shape), + buffer_ptr=Y_tensor.data_ptr(), + ) + + session.run_with_iobinding(binding) + + +API Details +=========== -The package contains a few models stored in ONNX format -used in the documentation. These don't need to be downloaded -as they are installed with the package. -.. autofunction:: onnxruntime.datasets.get_example +InferenceSession +---------------- -Load and run a model -==================== +.. autoclass:: onnxruntime.InferenceSession + :members: + :inherited-members: -*ONNX Runtime* reads a model saved in ONNX format. -The main class *InferenceSession* wraps these functionalities -in a single place. +Options +------- -.. autoclass:: onnxruntime.ModelMetadata +RunOptions +^^^^^^^^^^ + +.. autoclass:: onnxruntime.RunOptions :members: -.. autoclass:: onnxruntime.InferenceSession +SessionOptions +^^^^^^^^^^^^^^ + +.. autoclass:: onnxruntime.SessionOptions :members: -.. autoclass:: onnxruntime.NodeArg +Data +---- + +OrtValue +^^^^^^^^ + +.. autoclass:: onnxruntime.OrtValue :members: -.. autoclass:: onnxruntime.RunOptions +SparseTensor +^^^^^^^^^^^^ + +.. autoclass:: onnxruntime.SparseTensor :members: -.. autoclass:: onnxruntime.SessionOptions +Devices +------- + +IOBinding +^^^^^^^^^ + +.. autoclass:: onnxruntime.IOBinding + :members: + +OrtDevice +^^^^^^^^^ + +.. autoclass:: onnxruntime.OrtDevice + :members: + +Internal classes +---------------- + +These classes cannot be instantiated by users but they are returned +by methods or functions of this library. + +ModelMetadata +^^^^^^^^^^^^^ + +.. autoclass:: onnxruntime.ModelMetadata + :members: + +NodeArg +^^^^^^^ + +.. autoclass:: onnxruntime.NodeArg :members: Backend @@ -178,7 +284,7 @@ Backend In addition to the regular API which is optimized for performance and usability, *ONNX Runtime* also implements the -`ONNX backend API `_ +`ONNX backend API `_ for verification of *ONNX* specification conformance. The following functions are supported: diff --git a/docs/api/python/sources/auto_examples/index.rst.txt b/docs/api/python/sources/auto_examples/index.rst.txt index 30fa6cc39cbc1..2c567b8f8546c 100644 --- a/docs/api/python/sources/auto_examples/index.rst.txt +++ b/docs/api/python/sources/auto_examples/index.rst.txt @@ -1,9 +1,5 @@ :orphan: - - -.. _sphx_glr_auto_examples: - Gallery of examples =================== @@ -12,194 +8,176 @@ Gallery of examples +.. raw:: html + +
        + + .. raw:: html
        .. only:: html - .. figure:: /auto_examples/images/thumb/sphx_glr_plot_pipeline_thumb.png - :alt: Draw a pipeline + .. image:: /auto_examples/images/thumb/sphx_glr_plot_pipeline_thumb.png + :alt: Draw a pipeline - :ref:`sphx_glr_auto_examples_plot_pipeline.py` + :ref:`sphx_glr_auto_examples_plot_pipeline.py` .. raw:: html +
        Draw a pipeline
        -.. toctree:: - :hidden: - - /auto_examples/plot_pipeline - .. raw:: html
        .. only:: html - .. figure:: /auto_examples/images/thumb/sphx_glr_plot_load_and_predict_thumb.png - :alt: Load and predict with ONNX Runtime and a very simple model + .. image:: /auto_examples/images/thumb/sphx_glr_plot_load_and_predict_thumb.png + :alt: Load and predict with ONNX Runtime and a very simple model - :ref:`sphx_glr_auto_examples_plot_load_and_predict.py` + :ref:`sphx_glr_auto_examples_plot_load_and_predict.py` .. raw:: html +
        Load and predict with ONNX Runtime and a very simple model
        -.. toctree:: - :hidden: - - /auto_examples/plot_load_and_predict - .. raw:: html -
        +
        .. only:: html - .. figure:: /auto_examples/images/thumb/sphx_glr_plot_backend_thumb.png - :alt: ONNX Runtime Backend for ONNX + .. image:: /auto_examples/images/thumb/sphx_glr_plot_backend_thumb.png + :alt: ONNX Runtime Backend for ONNX - :ref:`sphx_glr_auto_examples_plot_backend.py` + :ref:`sphx_glr_auto_examples_plot_backend.py` .. raw:: html +
        ONNX Runtime Backend for ONNX
        -.. toctree:: - :hidden: - - /auto_examples/plot_backend - .. raw:: html
        .. only:: html - .. figure:: /auto_examples/images/thumb/sphx_glr_plot_metadata_thumb.png - :alt: Metadata + .. image:: /auto_examples/images/thumb/sphx_glr_plot_metadata_thumb.png + :alt: Metadata - :ref:`sphx_glr_auto_examples_plot_metadata.py` + :ref:`sphx_glr_auto_examples_plot_metadata.py` .. raw:: html +
        Metadata
        -.. toctree:: - :hidden: - - /auto_examples/plot_metadata - .. raw:: html
        .. only:: html - .. figure:: /auto_examples/images/thumb/sphx_glr_plot_profiling_thumb.png - :alt: Profile the execution of a simple model + .. image:: /auto_examples/images/thumb/sphx_glr_plot_profiling_thumb.png + :alt: Profile the execution of a simple model - :ref:`sphx_glr_auto_examples_plot_profiling.py` + :ref:`sphx_glr_auto_examples_plot_profiling.py` .. raw:: html +
        Profile the execution of a simple model
        -.. toctree:: - :hidden: - - /auto_examples/plot_profiling - .. raw:: html
        .. only:: html - .. figure:: /auto_examples/images/thumb/sphx_glr_plot_convert_pipeline_vectorizer_thumb.png - :alt: Train, convert and predict with ONNX Runtime + .. image:: /auto_examples/images/thumb/sphx_glr_plot_convert_pipeline_vectorizer_thumb.png + :alt: Train, convert and predict with ONNX Runtime - :ref:`sphx_glr_auto_examples_plot_convert_pipeline_vectorizer.py` + :ref:`sphx_glr_auto_examples_plot_convert_pipeline_vectorizer.py` .. raw:: html +
        Train, convert and predict with ONNX Runtime
        -.. toctree:: - :hidden: - - /auto_examples/plot_convert_pipeline_vectorizer - .. raw:: html
        .. only:: html - .. figure:: /auto_examples/images/thumb/sphx_glr_plot_common_errors_thumb.png - :alt: Common errors with onnxruntime + .. image:: /auto_examples/images/thumb/sphx_glr_plot_common_errors_thumb.png + :alt: Common errors with onnxruntime - :ref:`sphx_glr_auto_examples_plot_common_errors.py` + :ref:`sphx_glr_auto_examples_plot_common_errors.py` .. raw:: html +
        Common errors with onnxruntime
        -.. toctree:: - :hidden: - - /auto_examples/plot_common_errors - .. raw:: html
        .. only:: html - .. figure:: /auto_examples/images/thumb/sphx_glr_plot_train_convert_predict_thumb.png - :alt: Train, convert and predict with ONNX Runtime + .. image:: /auto_examples/images/thumb/sphx_glr_plot_train_convert_predict_thumb.png + :alt: Train, convert and predict with ONNX Runtime - :ref:`sphx_glr_auto_examples_plot_train_convert_predict.py` + :ref:`sphx_glr_auto_examples_plot_train_convert_predict.py` .. raw:: html +
        Train, convert and predict with ONNX Runtime
        -.. toctree:: - :hidden: - - /auto_examples/plot_train_convert_predict .. raw:: html -
        - +
        -.. only :: html +.. toctree:: + :hidden: - .. container:: sphx-glr-footer - :class: sphx-glr-footer-gallery + /auto_examples/plot_pipeline + /auto_examples/plot_load_and_predict + /auto_examples/plot_backend + /auto_examples/plot_metadata + /auto_examples/plot_profiling + /auto_examples/plot_convert_pipeline_vectorizer + /auto_examples/plot_common_errors + /auto_examples/plot_train_convert_predict - .. container:: sphx-glr-download sphx-glr-download-python +.. only:: html - :download:`Download all examples in Python source code: auto_examples_python.zip ` + .. container:: sphx-glr-footer sphx-glr-footer-gallery + .. container:: sphx-glr-download sphx-glr-download-python + :download:`Download all examples in Python source code: auto_examples_python.zip ` - .. container:: sphx-glr-download sphx-glr-download-jupyter + .. container:: sphx-glr-download sphx-glr-download-jupyter - :download:`Download all examples in Jupyter notebooks: auto_examples_jupyter.zip ` + :download:`Download all examples in Jupyter notebooks: auto_examples_jupyter.zip ` .. only:: html diff --git a/docs/api/python/sources/auto_examples/plot_backend.rst.txt b/docs/api/python/sources/auto_examples/plot_backend.rst.txt index b2255d568439a..9f88b28060c68 100644 --- a/docs/api/python/sources/auto_examples/plot_backend.rst.txt +++ b/docs/api/python/sources/auto_examples/plot_backend.rst.txt @@ -23,89 +23,80 @@ ONNX Runtime Backend for ONNX ============================= -*ONNX Runtime* extends the -`onnx backend API `_ +*ONNX Runtime* extends the +`onnx backend API `_ to run predictions using this runtime. Let's use the API to compute the prediction of a simple logistic regression model. -.. GENERATED FROM PYTHON SOURCE LINES 17-35 +.. GENERATED FROM PYTHON SOURCE LINES 17-22 .. code-block:: default import numpy as np - from onnxruntime import datasets - from onnxruntime.capi.onnxruntime_pybind11_state import InvalidArgument - import onnxruntime.backend as backend from onnx import load - name = datasets.get_example("logreg_iris.onnx") - model = load(name) - - rep = backend.prepare(model, 'CPU') - x = np.array([[-1.0, -2.0]], dtype=np.float32) - try: - label, proba = rep.run(x) - print("label={}".format(label)) - print("probabilities={}".format(proba)) - except (RuntimeError, InvalidArgument) as e: - print(e) + import onnxruntime.backend as backend -.. rst-class:: sphx-glr-script-out - Out: - .. code-block:: none - [ONNXRuntimeError] : 2 : INVALID_ARGUMENT : Got invalid dimensions for input: float_input for the following indices - index: 0 Got: 1 Expected: 3 - Please fix either the inputs or the model. - - - - -.. GENERATED FROM PYTHON SOURCE LINES 36-38 +.. GENERATED FROM PYTHON SOURCE LINES 23-25 The device depends on how the package was compiled, GPU or CPU. -.. GENERATED FROM PYTHON SOURCE LINES 38-41 +.. GENERATED FROM PYTHON SOURCE LINES 25-42 .. code-block:: default - from onnxruntime import get_device - print(get_device()) + from onnxruntime import datasets, get_device + from onnxruntime.capi.onnxruntime_pybind11_state import InvalidArgument + device = get_device() + name = datasets.get_example("logreg_iris.onnx") + model = load(name) + + rep = backend.prepare(model, device) + x = np.array([[-1.0, -2.0]], dtype=np.float32) + try: + label, proba = rep.run(x) + print("label={}".format(label)) + print("probabilities={}".format(proba)) + except (RuntimeError, InvalidArgument) as e: + print(e) -.. rst-class:: sphx-glr-script-out - Out: + +.. rst-class:: sphx-glr-script-out .. 
code-block:: none - CPU + [ONNXRuntimeError] : 2 : INVALID_ARGUMENT : Got invalid dimensions for input: float_input for the following indices + index: 0 Got: 1 Expected: 3 + Please fix either the inputs or the model. -.. GENERATED FROM PYTHON SOURCE LINES 42-44 +.. GENERATED FROM PYTHON SOURCE LINES 43-45 The backend can also directly load the model without using *onnx*. -.. GENERATED FROM PYTHON SOURCE LINES 44-54 +.. GENERATED FROM PYTHON SOURCE LINES 45-55 .. code-block:: default - rep = backend.prepare(name, 'CPU') + rep = backend.prepare(name, device) x = np.array([[-1.0, -2.0]], dtype=np.float32) try: label, proba = rep.run(x) @@ -120,8 +111,6 @@ without using *onnx*. .. rst-class:: sphx-glr-script-out - Out: - .. code-block:: none [ONNXRuntimeError] : 2 : INVALID_ARGUMENT : Got invalid dimensions for input: float_input for the following indices @@ -131,7 +120,7 @@ without using *onnx*. -.. GENERATED FROM PYTHON SOURCE LINES 55-58 +.. GENERATED FROM PYTHON SOURCE LINES 56-59 The backend API is implemented by other frameworks and makes it easier to switch between multiple runtimes @@ -140,28 +129,23 @@ with the same API. .. rst-class:: sphx-glr-timing - **Total running time of the script:** ( 0 minutes 0.019 seconds) + **Total running time of the script:** ( 0 minutes 0.013 seconds) .. _sphx_glr_download_auto_examples_plot_backend.py: +.. only:: html -.. only :: html - - .. container:: sphx-glr-footer - :class: sphx-glr-footer-example - - - - .. container:: sphx-glr-download sphx-glr-download-python + .. container:: sphx-glr-footer sphx-glr-footer-example - :download:`Download Python source code: plot_backend.py ` + .. container:: sphx-glr-download sphx-glr-download-python + :download:`Download Python source code: plot_backend.py ` - .. container:: sphx-glr-download sphx-glr-download-jupyter + .. container:: sphx-glr-download sphx-glr-download-jupyter - :download:`Download Jupyter notebook: plot_backend.ipynb ` + :download:`Download Jupyter notebook: plot_backend.ipynb ` .. only:: html diff --git a/docs/api/python/sources/auto_examples/plot_common_errors.rst.txt b/docs/api/python/sources/auto_examples/plot_common_errors.rst.txt index bbe26555a3763..429de99eb7085 100644 --- a/docs/api/python/sources/auto_examples/plot_common_errors.rst.txt +++ b/docs/api/python/sources/auto_examples/plot_common_errors.rst.txt @@ -31,17 +31,18 @@ It starts by loading the model trained in example trained on *Iris* datasets. The model takes a vector of dimension 2 and returns a class among three. -.. GENERATED FROM PYTHON SOURCE LINES 18-29 +.. GENERATED FROM PYTHON SOURCE LINES 18-30 .. code-block:: default + import numpy + import onnxruntime as rt from onnxruntime.capi.onnxruntime_pybind11_state import InvalidArgument - import numpy from onnxruntime.datasets import get_example example2 = get_example("logreg_iris.onnx") - sess = rt.InferenceSession(example2) + sess = rt.InferenceSession(example2, providers=rt.get_available_providers()) input_name = sess.get_inputs()[0].name output_name = sess.get_outputs()[0].name @@ -53,13 +54,13 @@ a vector of dimension 2 and returns a class among three. -.. GENERATED FROM PYTHON SOURCE LINES 30-33 +.. GENERATED FROM PYTHON SOURCE LINES 31-34 The first example fails due to *bad types*. *onnxruntime* only expects single floats (4 bytes) and cannot handle any other kind of floats. -.. GENERATED FROM PYTHON SOURCE LINES 33-41 +.. GENERATED FROM PYTHON SOURCE LINES 34-42 .. code-block:: default @@ -70,14 +71,12 @@ and cannot handle any other kind of floats. 
except Exception as e: print("Unexpected type") print("{0}: {1}".format(type(e), e)) - -.. rst-class:: sphx-glr-script-out - Out: +.. rst-class:: sphx-glr-script-out .. code-block:: none @@ -87,12 +86,12 @@ and cannot handle any other kind of floats. -.. GENERATED FROM PYTHON SOURCE LINES 42-44 +.. GENERATED FROM PYTHON SOURCE LINES 43-45 The model fails to return an output if the name is misspelled. -.. GENERATED FROM PYTHON SOURCE LINES 44-52 +.. GENERATED FROM PYTHON SOURCE LINES 45-53 .. code-block:: default @@ -110,8 +109,6 @@ is misspelled. .. rst-class:: sphx-glr-script-out - Out: - .. code-block:: none Misspelled output name @@ -120,12 +117,12 @@ is misspelled. -.. GENERATED FROM PYTHON SOURCE LINES 53-55 +.. GENERATED FROM PYTHON SOURCE LINES 54-56 The output name is optional, it can be replaced by *None* and *onnxruntime* will then return all the outputs. -.. GENERATED FROM PYTHON SOURCE LINES 55-64 +.. GENERATED FROM PYTHON SOURCE LINES 56-65 .. code-block:: default @@ -144,8 +141,6 @@ and *onnxruntime* will then return all the outputs. .. rst-class:: sphx-glr-script-out - Out: - .. code-block:: none All outputs @@ -154,11 +149,11 @@ and *onnxruntime* will then return all the outputs. -.. GENERATED FROM PYTHON SOURCE LINES 65-66 +.. GENERATED FROM PYTHON SOURCE LINES 66-67 The same goes if the input name is misspelled. -.. GENERATED FROM PYTHON SOURCE LINES 66-74 +.. GENERATED FROM PYTHON SOURCE LINES 67-75 .. code-block:: default @@ -176,8 +171,6 @@ The same goes if the input name is misspelled. .. rst-class:: sphx-glr-script-out - Out: - .. code-block:: none Misspelled input name @@ -186,23 +179,23 @@ The same goes if the input name is misspelled. -.. GENERATED FROM PYTHON SOURCE LINES 75-77 +.. GENERATED FROM PYTHON SOURCE LINES 76-78 *onnxruntime* does not necessarily fail if the input dimension is a multiple of the expected input dimension. -.. GENERATED FROM PYTHON SOURCE LINES 77-104 +.. GENERATED FROM PYTHON SOURCE LINES 78-105 .. code-block:: default for x in [ - numpy.array([1.0, 2.0, 3.0, 4.0], dtype=numpy.float32), - numpy.array([[1.0, 2.0, 3.0, 4.0]], dtype=numpy.float32), - numpy.array([[1.0, 2.0], [3.0, 4.0]], dtype=numpy.float32), - numpy.array([1.0, 2.0, 3.0], dtype=numpy.float32), - numpy.array([[1.0, 2.0, 3.0]], dtype=numpy.float32), - ]: + numpy.array([1.0, 2.0, 3.0, 4.0], dtype=numpy.float32), + numpy.array([[1.0, 2.0, 3.0, 4.0]], dtype=numpy.float32), + numpy.array([[1.0, 2.0], [3.0, 4.0]], dtype=numpy.float32), + numpy.array([1.0, 2.0, 3.0], dtype=numpy.float32), + numpy.array([[1.0, 2.0, 3.0]], dtype=numpy.float32), + ]: try: r = sess.run([output_name], {input_name: x}) print("Shape={0} and predicted labels={1}".format(x.shape, r)) @@ -210,12 +203,12 @@ dimension is a multiple of the expected input dimension. 
print("ERROR with Shape={0} - {1}".format(x.shape, e)) for x in [ - numpy.array([1.0, 2.0, 3.0, 4.0], dtype=numpy.float32), - numpy.array([[1.0, 2.0, 3.0, 4.0]], dtype=numpy.float32), - numpy.array([[1.0, 2.0], [3.0, 4.0]], dtype=numpy.float32), - numpy.array([1.0, 2.0, 3.0], dtype=numpy.float32), - numpy.array([[1.0, 2.0, 3.0]], dtype=numpy.float32), - ]: + numpy.array([1.0, 2.0, 3.0, 4.0], dtype=numpy.float32), + numpy.array([[1.0, 2.0, 3.0, 4.0]], dtype=numpy.float32), + numpy.array([[1.0, 2.0], [3.0, 4.0]], dtype=numpy.float32), + numpy.array([1.0, 2.0, 3.0], dtype=numpy.float32), + numpy.array([[1.0, 2.0, 3.0]], dtype=numpy.float32), + ]: try: r = sess.run(None, {input_name: x}) print("Shape={0} and predicted probabilities={1}".format(x.shape, r[1])) @@ -228,8 +221,6 @@ dimension is a multiple of the expected input dimension. .. rst-class:: sphx-glr-script-out - Out: - .. code-block:: none ERROR with Shape=(4,) - [ONNXRuntimeError] : 2 : INVALID_ARGUMENT : Invalid rank for input: float_input Got: 1 Expected: 2 Please fix either the inputs or the model. @@ -262,21 +253,21 @@ dimension is a multiple of the expected input dimension. -.. GENERATED FROM PYTHON SOURCE LINES 105-107 +.. GENERATED FROM PYTHON SOURCE LINES 106-108 It does not fail either if the number of dimension is higher than expects but produces a warning. -.. GENERATED FROM PYTHON SOURCE LINES 107-118 +.. GENERATED FROM PYTHON SOURCE LINES 108-119 .. code-block:: default for x in [ - numpy.array([[[1.0, 2.0], [3.0, 4.0]]], dtype=numpy.float32), - numpy.array([[[1.0, 2.0, 3.0]]], dtype=numpy.float32), - numpy.array([[[1.0, 2.0]], [[3.0, 4.0]]], dtype=numpy.float32), - ]: + numpy.array([[[1.0, 2.0], [3.0, 4.0]]], dtype=numpy.float32), + numpy.array([[[1.0, 2.0, 3.0]]], dtype=numpy.float32), + numpy.array([[[1.0, 2.0]], [[3.0, 4.0]]], dtype=numpy.float32), + ]: try: r = sess.run([output_name], {input_name: x}) print("Shape={0} and predicted labels={1}".format(x.shape, r)) @@ -288,8 +279,6 @@ is higher than expects but produces a warning. .. rst-class:: sphx-glr-script-out - Out: - .. code-block:: none ERROR with Shape=(1, 2, 2) - [ONNXRuntimeError] : 2 : INVALID_ARGUMENT : Invalid rank for input: float_input Got: 3 Expected: 2 Please fix either the inputs or the model. @@ -302,28 +291,23 @@ is higher than expects but produces a warning. .. rst-class:: sphx-glr-timing - **Total running time of the script:** ( 0 minutes 0.018 seconds) + **Total running time of the script:** ( 0 minutes 0.017 seconds) .. _sphx_glr_download_auto_examples_plot_common_errors.py: +.. only:: html -.. only :: html - - .. container:: sphx-glr-footer - :class: sphx-glr-footer-example - - - - .. container:: sphx-glr-download sphx-glr-download-python + .. container:: sphx-glr-footer sphx-glr-footer-example - :download:`Download Python source code: plot_common_errors.py ` + .. container:: sphx-glr-download sphx-glr-download-python + :download:`Download Python source code: plot_common_errors.py ` - .. container:: sphx-glr-download sphx-glr-download-jupyter + .. container:: sphx-glr-download sphx-glr-download-jupyter - :download:`Download Jupyter notebook: plot_common_errors.ipynb ` + :download:`Download Jupyter notebook: plot_common_errors.ipynb ` .. 
.. only:: html diff --git a/docs/api/python/sources/auto_examples/plot_convert_pipeline_vectorizer.rst.txt b/docs/api/python/sources/auto_examples/plot_convert_pipeline_vectorizer.rst.txt index 75c95d8f6449d..2dadf6be748bc 100644 --- a/docs/api/python/sources/auto_examples/plot_convert_pipeline_vectorizer.rst.txt +++ b/docs/api/python/sources/auto_examples/plot_convert_pipeline_vectorizer.rst.txt @@ -35,67 +35,106 @@ Train a pipeline The first step consists in retrieving the boston dataset. -.. GENERATED FROM PYTHON SOURCE LINES 22-32 +.. GENERATED FROM PYTHON SOURCE LINES 22-34 .. code-block:: default import pandas from sklearn.datasets import load_boston + boston = load_boston() X, y = boston.data, boston.target from sklearn.model_selection import train_test_split + X_train, X_test, y_train, y_test = train_test_split(X, y) - X_train_dict = pandas.DataFrame(X_train[:,1:]).T.to_dict().values() - X_test_dict = pandas.DataFrame(X_test[:,1:]).T.to_dict().values() + X_train_dict = pandas.DataFrame(X_train[:, 1:]).T.to_dict().values() + X_test_dict = pandas.DataFrame(X_test[:, 1:]).T.to_dict().values() + + + + +.. rst-class:: sphx-glr-script-out + + .. code-block:: none + /home/runner/.local/lib/python3.8/site-packages/sklearn/utils/deprecation.py:87: FutureWarning: Function load_boston is deprecated; `load_boston` is deprecated in 1.0 and will be removed in 1.2. + The Boston housing prices dataset has an ethical problem. You can refer to + the documentation of this function for further details. + The scikit-learn maintainers therefore strongly discourage the use of this + dataset unless the purpose of the code is to study and educate about + ethical issues in data science and machine learning. + In this special case, you can fetch the dataset from the original + source:: + import pandas as pd + import numpy as np + data_url = "http://lib.stat.cmu.edu/datasets/boston" + raw_df = pd.read_csv(data_url, sep="\s+", skiprows=22, header=None) + data = np.hstack([raw_df.values[::2, :], raw_df.values[1::2, :2]]) + target = raw_df.values[1::2, 2] + Alternative datasets include the California housing dataset (i.e. + :func:`~sklearn.datasets.fetch_california_housing`) and the Ames housing + dataset. You can load the datasets as follows:: -.. GENERATED FROM PYTHON SOURCE LINES 33-34 + from sklearn.datasets import fetch_california_housing + housing = fetch_california_housing() + + for the California housing dataset and:: + + from sklearn.datasets import fetch_openml + housing = fetch_openml(name="house_prices", as_frame=True) + + for the Ames housing dataset. + warnings.warn(msg, category=FutureWarning) + + + + +.. GENERATED FROM PYTHON SOURCE LINES 35-36 We create a pipeline. -.. GENERATED FROM PYTHON SOURCE LINES 34-44 +.. GENERATED FROM PYTHON SOURCE LINES 36-45 .. code-block:: default - from sklearn.pipeline import make_pipeline from sklearn.ensemble import GradientBoostingRegressor from sklearn.feature_extraction import DictVectorizer - pipe = make_pipeline( - DictVectorizer(sparse=False), - GradientBoostingRegressor()) - - pipe.fit(X_train_dict, y_train) + from sklearn.pipeline import make_pipeline + pipe = make_pipeline(DictVectorizer(sparse=False), GradientBoostingRegressor()) + pipe.fit(X_train_dict, y_train) -.. rst-class:: sphx-glr-script-out - Out: - - .. code-block:: none - Pipeline(steps=[('dictvectorizer', DictVectorizer(sparse=False)), - ('gradientboostingregressor', GradientBoostingRegressor())]) +.. raw:: html
        +
        Pipeline(steps=[('dictvectorizer', DictVectorizer(sparse=False)),
        +                    ('gradientboostingregressor', GradientBoostingRegressor())])
        +
        +
        +
        - -.. GENERATED FROM PYTHON SOURCE LINES 45-47 +.. GENERATED FROM PYTHON SOURCE LINES 46-48 We compute the prediction on the test set and we show the confusion matrix. -.. GENERATED FROM PYTHON SOURCE LINES 47-52 +.. GENERATED FROM PYTHON SOURCE LINES 48-53 .. code-block:: default @@ -110,34 +149,32 @@ and we show the confusion matrix. .. rst-class:: sphx-glr-script-out - Out: - .. code-block:: none - 0.848444978558249 + 0.8749080059406178 -.. GENERATED FROM PYTHON SOURCE LINES 53-59 +.. GENERATED FROM PYTHON SOURCE LINES 54-60 Conversion to ONNX format +++++++++++++++++++++++++ -We use module +We use module `sklearn-onnx `_ to convert the model into ONNX format. -.. GENERATED FROM PYTHON SOURCE LINES 59-69 +.. GENERATED FROM PYTHON SOURCE LINES 60-70 .. code-block:: default from skl2onnx import convert_sklearn - from skl2onnx.common.data_types import FloatTensorType, Int64TensorType, DictionaryType, SequenceType + from skl2onnx.common.data_types import DictionaryType, FloatTensorType, Int64TensorType, SequenceType # initial_type = [('float_input', DictionaryType(Int64TensorType([1]), FloatTensorType([])))] - initial_type = [('float_input', DictionaryType(Int64TensorType([1]), FloatTensorType([])))] + initial_type = [("float_input", DictionaryType(Int64TensorType([1]), FloatTensorType([])))] onx = convert_sklearn(pipe, initial_types=initial_type) with open("pipeline_vectorize.onnx", "wb") as f: f.write(onx.SerializeToString()) @@ -149,21 +186,22 @@ to convert the model into ONNX format. -.. GENERATED FROM PYTHON SOURCE LINES 70-72 +.. GENERATED FROM PYTHON SOURCE LINES 71-73 We load the model with ONNX Runtime and look at its input and output. -.. GENERATED FROM PYTHON SOURCE LINES 72-82 +.. GENERATED FROM PYTHON SOURCE LINES 73-84 .. code-block:: default import onnxruntime as rt from onnxruntime.capi.onnxruntime_pybind11_state import InvalidArgument - sess = rt.InferenceSession("pipeline_vectorize.onnx") + sess = rt.InferenceSession("pipeline_vectorize.onnx", providers=rt.get_available_providers()) import numpy + inp, out = sess.get_inputs()[0], sess.get_outputs()[0] print("input name='{}' and shape={} and type={}".format(inp.name, inp.shape, inp.type)) print("output name='{}' and shape={} and type={}".format(out.name, out.shape, out.type)) @@ -174,8 +212,6 @@ its input and output. .. rst-class:: sphx-glr-script-out - Out: - .. code-block:: none input name='float_input' and shape=[] and type=map(int64,tensor(float)) @@ -184,12 +220,12 @@ its input and output. -.. GENERATED FROM PYTHON SOURCE LINES 83-85 +.. GENERATED FROM PYTHON SOURCE LINES 85-87 We compute the predictions. We could do that in one call: -.. GENERATED FROM PYTHON SOURCE LINES 85-91 +.. GENERATED FROM PYTHON SOURCE LINES 87-93 .. code-block:: default @@ -205,8 +241,6 @@ We could do that in one call: .. rst-class:: sphx-glr-script-out - Out: - .. code-block:: none [ONNXRuntimeError] : 2 : INVALID_ARGUMENT : Unexpected input data type. Actual: ((seq(map(int64,tensor(float))))) , expected: ((map(int64,tensor(float)))) @@ -214,12 +248,12 @@ We could do that in one call: -.. GENERATED FROM PYTHON SOURCE LINES 92-94 +.. GENERATED FROM PYTHON SOURCE LINES 94-96 But it fails because, in case of a DictVectorizer, ONNX Runtime expects one observation at a time. -.. GENERATED FROM PYTHON SOURCE LINES 94-96 +.. GENERATED FROM PYTHON SOURCE LINES 96-98 .. code-block:: default @@ -232,11 +266,11 @@ ONNX Runtime expects one observation at a time. -.. GENERATED FROM PYTHON SOURCE LINES 97-98 +.. 
-.. GENERATED FROM PYTHON SOURCE LINES 97-98 +.. GENERATED FROM PYTHON SOURCE LINES 99-100 We compare them to the model's ones. -.. GENERATED FROM PYTHON SOURCE LINES 98-100 +.. GENERATED FROM PYTHON SOURCE LINES 100-102 .. code-block:: default @@ -248,16 +282,14 @@ We compare them to the model's ones. .. rst-class:: sphx-glr-script-out - Out: - .. code-block:: none - 0.9999999999999528 + 0.9999999999999738 -.. GENERATED FROM PYTHON SOURCE LINES 101-103 +.. GENERATED FROM PYTHON SOURCE LINES 103-105 Very similar. *ONNX Runtime* uses floats instead of doubles, that explains the small discrepancies. @@ -265,28 +297,23 @@ that explains the small discrepancies. .. rst-class:: sphx-glr-timing - **Total running time of the script:** ( 0 minutes 1.592 seconds) + **Total running time of the script:** ( 0 minutes 0.883 seconds) .. _sphx_glr_download_auto_examples_plot_convert_pipeline_vectorizer.py: +.. only:: html -.. only :: html - - .. container:: sphx-glr-footer - :class: sphx-glr-footer-example - - - - .. container:: sphx-glr-download sphx-glr-download-python + .. container:: sphx-glr-footer sphx-glr-footer-example - :download:`Download Python source code: plot_convert_pipeline_vectorizer.py ` + .. container:: sphx-glr-download sphx-glr-download-python + :download:`Download Python source code: plot_convert_pipeline_vectorizer.py ` - .. container:: sphx-glr-download sphx-glr-download-jupyter + .. container:: sphx-glr-download sphx-glr-download-jupyter - :download:`Download Jupyter notebook: plot_convert_pipeline_vectorizer.ipynb ` + :download:`Download Jupyter notebook: plot_convert_pipeline_vectorizer.ipynb ` .. only:: html diff --git a/docs/api/python/sources/auto_examples/plot_dl_keras.rst.txt b/docs/api/python/sources/auto_examples/plot_dl_keras.rst.txt deleted file mode 100644 index 8e759fea5393e..0000000000000 --- a/docs/api/python/sources/auto_examples/plot_dl_keras.rst.txt +++ /dev/null @@ -1,198 +0,0 @@ - -.. DO NOT EDIT. -.. THIS FILE WAS AUTOMATICALLY GENERATED BY SPHINX-GALLERY. -.. TO MAKE CHANGES, EDIT THE SOURCE PYTHON FILE: -.. "auto_examples\plot_dl_keras.py" -.. LINE NUMBERS ARE GIVEN BELOW. - -.. only:: html - - .. note:: - :class: sphx-glr-download-link-note - - Click :ref:`here ` - to download the full example code - -.. rst-class:: sphx-glr-example-title - -.. _sphx_glr_auto_examples_plot_dl_keras.py: - - -.. _l-example-backend-api-tensorflow: - -ONNX Runtime for Keras -====================== - -The following demonstrates how to compute the predictions -of a pretrained deep learning model obtained from -`keras `_ -with *onnxruntime*. The conversion requires -`keras `_, -`tensorflow `_, -`keras-onnx `_, -`onnxmltools `_ -but then only *onnxruntime* is required -to compute the predictions. - -.. GENERATED FROM PYTHON SOURCE LINES 22-32 - -.. code-block:: default - - import os - if not os.path.exists('dense121.onnx'): - from keras.applications.densenet import DenseNet121 - model = DenseNet121(include_top=True, weights='imagenet') - - from keras2onnx import convert_keras - onx = convert_keras(model, 'dense121.onnx') - with open("dense121.onnx", "wb") as f: - f.write(onx.SerializeToString()) - - - -.. rst-class:: sphx-glr-script-out - -.. 
code-block:: pytb - - Traceback (most recent call last): - File "C:\xadupre\microsoft_xadupre\onnxruntime\docs\python\examples\plot_dl_keras.py", line 28, in - onx = convert_keras(model, 'dense121.onnx') - File "C:\xadupre\microsoft_xadupre\keras-onnx\keras2onnx\main.py", line 82, in convert_keras - tf_graph = build_layer_output_from_model(model, output_dict, input_names, - File "C:\xadupre\microsoft_xadupre\keras-onnx\keras2onnx\_parser_tf.py", line 308, in build_layer_output_from_model - graph = model.outputs[0].graph - AttributeError: 'KerasTensor' object has no attribute 'graph' - - - - -.. GENERATED FROM PYTHON SOURCE LINES 33-34 - -Let's load an image (source: wikipedia). - -.. GENERATED FROM PYTHON SOURCE LINES 34-43 - -.. code-block:: default - - - from keras.preprocessing.image import array_to_img, img_to_array, load_img - img = load_img('Sannosawa1.jpg') - ximg = img_to_array(img) - - import matplotlib.pyplot as plt - plt.imshow(ximg / 255) - plt.axis('off') - - -.. GENERATED FROM PYTHON SOURCE LINES 44-45 - -Let's load the model with onnxruntime. - -.. GENERATED FROM PYTHON SOURCE LINES 45-60 - -.. code-block:: default - - import onnxruntime as rt - from onnxruntime.capi.onnxruntime_pybind11_state import InvalidGraph - - try: - sess = rt.InferenceSession('dense121.onnx') - ok = True - except (InvalidGraph, TypeError, RuntimeError) as e: - # Probably a mismatch between onnxruntime and onnx version. - print(e) - ok = False - - if ok: - print("The model expects input shape:", sess.get_inputs()[0].shape) - print("image shape:", ximg.shape) - - -.. GENERATED FROM PYTHON SOURCE LINES 61-62 - -Let's resize the image. - -.. GENERATED FROM PYTHON SOURCE LINES 62-73 - -.. code-block:: default - - - if ok: - from skimage.transform import resize - import numpy - - ximg224 = resize(ximg / 255, (224, 224, 3), anti_aliasing=True) - ximg = ximg224[numpy.newaxis, :, :, :] - ximg = ximg.astype(numpy.float32) - - print("new shape:", ximg.shape) - - -.. GENERATED FROM PYTHON SOURCE LINES 74-75 - -Let's compute the output. - -.. GENERATED FROM PYTHON SOURCE LINES 75-83 - -.. code-block:: default - - - if ok: - input_name = sess.get_inputs()[0].name - res = sess.run(None, {input_name: ximg}) - prob = res[0] - print(prob.ravel()[:10]) # Too big to be displayed. - - - -.. GENERATED FROM PYTHON SOURCE LINES 84-85 - -Let's get more comprehensive results. - -.. GENERATED FROM PYTHON SOURCE LINES 85-95 - -.. code-block:: default - - - if ok: - from keras.applications.densenet import decode_predictions - decoded = decode_predictions(prob) - - import pandas - df = pandas.DataFrame(decoded[0], columns=["class_id", "name", "P"]) - print(df) - - - - -.. rst-class:: sphx-glr-timing - - **Total running time of the script:** ( 0 minutes 6.417 seconds) - - -.. _sphx_glr_download_auto_examples_plot_dl_keras.py: - - -.. only :: html - - .. container:: sphx-glr-footer - :class: sphx-glr-footer-example - - - - .. container:: sphx-glr-download sphx-glr-download-python - - :download:`Download Python source code: plot_dl_keras.py ` - - - - .. container:: sphx-glr-download sphx-glr-download-jupyter - - :download:`Download Jupyter notebook: plot_dl_keras.ipynb ` - - -.. only:: html - - .. 
rst-class:: sphx-glr-signature - - `Gallery generated by Sphinx-Gallery `_ diff --git a/docs/api/python/sources/auto_examples/plot_load_and_predict.rst.txt b/docs/api/python/sources/auto_examples/plot_load_and_predict.rst.txt index eee7c4064f758..54002b75c521d 100644 --- a/docs/api/python/sources/auto_examples/plot_load_and_predict.rst.txt +++ b/docs/api/python/sources/auto_examples/plot_load_and_predict.rst.txt @@ -27,13 +27,14 @@ This example demonstrates how to load a model and compute the output for an input vector. It also shows how to retrieve the definition of its inputs and outputs. -.. GENERATED FROM PYTHON SOURCE LINES 14-19 +.. GENERATED FROM PYTHON SOURCE LINES 14-20 .. code-block:: default - import onnxruntime as rt import numpy + + import onnxruntime as rt from onnxruntime.datasets import get_example @@ -43,18 +44,18 @@ retrieve the definition of its inputs and outputs. -.. GENERATED FROM PYTHON SOURCE LINES 20-22 +.. GENERATED FROM PYTHON SOURCE LINES 21-23 Let's load a very simple model. -The model is available on github `onnx...test_sigmoid `_. +The model is available on github `onnx...test_sigmoid `_. -.. GENERATED FROM PYTHON SOURCE LINES 22-26 +.. GENERATED FROM PYTHON SOURCE LINES 23-27 .. code-block:: default example1 = get_example("sigmoid.onnx") - sess = rt.InferenceSession(example1) + sess = rt.InferenceSession(example1, providers=rt.get_available_providers()) @@ -63,11 +64,11 @@ The model is available on github `onnx...test_sigmoid ` + .. container:: sphx-glr-download sphx-glr-download-python + :download:`Download Python source code: plot_load_and_predict.py ` - .. container:: sphx-glr-download sphx-glr-download-jupyter + .. container:: sphx-glr-download sphx-glr-download-jupyter - :download:`Download Jupyter notebook: plot_load_and_predict.ipynb ` + :download:`Download Jupyter notebook: plot_load_and_predict.ipynb ` .. only:: html diff --git a/docs/api/python/sources/auto_examples/plot_metadata.rst.txt b/docs/api/python/sources/auto_examples/plot_metadata.rst.txt index 25ad363d9ff3f..52e8201fc7122 100644 --- a/docs/api/python/sources/auto_examples/plot_metadata.rst.txt +++ b/docs/api/python/sources/auto_examples/plot_metadata.rst.txt @@ -29,15 +29,17 @@ Let's see how to do that with a simple logistic regression model trained with *scikit-learn* and converted with *sklearn-onnx*. -.. GENERATED FROM PYTHON SOURCE LINES 16-31 +.. GENERATED FROM PYTHON SOURCE LINES 16-33 .. code-block:: default from onnxruntime.datasets import get_example + example = get_example("logreg_iris.onnx") import onnx + model = onnx.load(example) print("doc_string={}".format(model.doc_string)) @@ -54,8 +56,6 @@ logistic regression model trained with .. rst-class:: sphx-glr-script-out - Out: - .. code-block:: none doc_string= @@ -69,17 +69,18 @@ logistic regression model trained with -.. GENERATED FROM PYTHON SOURCE LINES 32-33 +.. GENERATED FROM PYTHON SOURCE LINES 34-35 With *ONNX Runtime*: -.. GENERATED FROM PYTHON SOURCE LINES 33-44 +.. GENERATED FROM PYTHON SOURCE LINES 35-47 .. code-block:: default - from onnxruntime import InferenceSession - sess = InferenceSession(example) + import onnxruntime as rt + + sess = rt.InferenceSession(example, providers=rt.get_available_providers()) meta = sess.get_modelmeta() print("custom_metadata_map={}".format(meta.custom_metadata_map)) @@ -94,8 +95,6 @@ With *ONNX Runtime*: .. rst-class:: sphx-glr-script-out - Out: - .. code-block:: none custom_metadata_map={} @@ -111,28 +110,23 @@ With *ONNX Runtime*: .. 
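Besides ``custom_metadata_map``, the ``ModelMetadata`` object returned by ``get_modelmeta()`` exposes a few more read-only fields. A short sketch, reusing ``sess`` from the example (attribute names as found in onnxruntime's Python binding; the printed values depend on the converter that produced the file):

.. code-block:: python

    meta = sess.get_modelmeta()
    # these fields travel inside the ONNX file itself
    print(meta.producer_name)
    print(meta.graph_name)
    print(meta.domain)
    print(meta.description)
    print(meta.version)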
.. rst-class:: sphx-glr-timing - **Total running time of the script:** ( 0 minutes 0.006 seconds) + **Total running time of the script:** ( 0 minutes 0.003 seconds) .. _sphx_glr_download_auto_examples_plot_metadata.py: +.. only:: html -.. only :: html - - .. container:: sphx-glr-footer - :class: sphx-glr-footer-example - - - - .. container:: sphx-glr-download sphx-glr-download-python + .. container:: sphx-glr-footer sphx-glr-footer-example - :download:`Download Python source code: plot_metadata.py ` + .. container:: sphx-glr-download sphx-glr-download-python + :download:`Download Python source code: plot_metadata.py ` - .. container:: sphx-glr-download sphx-glr-download-jupyter + .. container:: sphx-glr-download sphx-glr-download-jupyter - :download:`Download Jupyter notebook: plot_metadata.ipynb ` + :download:`Download Jupyter notebook: plot_metadata.ipynb ` .. only:: html diff --git a/docs/api/python/sources/auto_examples/plot_pipeline.rst.txt b/docs/api/python/sources/auto_examples/plot_pipeline.rst.txt index 052d278a49d9c..c95b30316886b 100644 --- a/docs/api/python/sources/auto_examples/plot_pipeline.rst.txt +++ b/docs/api/python/sources/auto_examples/plot_pipeline.rst.txt @@ -22,7 +22,7 @@ Draw a pipeline =============== There is no other way to look into one model stored -in ONNX format than looking into its node with +in ONNX format than looking into its nodes with *onnx*. This example demonstrates how to draw a model and to retrieve it in *json* format. @@ -35,18 +35,20 @@ Retrieve a model in JSON format That's the simplest way. -.. GENERATED FROM PYTHON SOURCE LINES 22-32 +.. GENERATED FROM PYTHON SOURCE LINES 22-34 .. code-block:: default from onnxruntime.datasets import get_example + example1 = get_example("mul_1.onnx") import onnx + model = onnx.load(example1) # model is a ModelProto protobuf message - print(model) + print(model) @@ -55,8 +57,6 @@ That's the simplest way. .. rst-class:: sphx-glr-script-out - Out: - .. code-block:: none ir_version: 3 @@ -124,24 +124,25 @@ That's the simplest way. -.. GENERATED FROM PYTHON SOURCE LINES 33-39 +.. GENERATED FROM PYTHON SOURCE LINES 35-41 Draw a model with ONNX ++++++++++++++++++++++ -We use `net_drawer.py `_ +We use `net_drawer.py `_ included in the *onnx* package. We use *onnx* to load the model in a different way than before. -.. GENERATED FROM PYTHON SOURCE LINES 39-47 +.. GENERATED FROM PYTHON SOURCE LINES 41-50 .. code-block:: default from onnx import ModelProto + model = ModelProto() - with open(example1, 'rb') as fid: + with open(example1, "rb") as fid: content = fid.read() model.ParseFromString(content) @@ -152,17 +153,19 @@ in a different way than before. -.. GENERATED FROM PYTHON SOURCE LINES 48-49 +.. GENERATED FROM PYTHON SOURCE LINES 51-52 We convert it into a graph. -.. GENERATED FROM PYTHON SOURCE LINES 49-54 +.. GENERATED FROM PYTHON SOURCE LINES 52-59 .. code-block:: default - from onnx.tools.net_drawer import GetPydotGraph, GetOpNodeProducer - pydot_graph = GetPydotGraph(model.graph, name=model.graph.name, rankdir="LR", - node_producer=GetOpNodeProducer("docstring")) + from onnx.tools.net_drawer import GetOpNodeProducer, GetPydotGraph + + pydot_graph = GetPydotGraph( + model.graph, name=model.graph.name, rankdir="LR", node_producer=GetOpNodeProducer("docstring") + ) pydot_graph.write_dot("graph.dot") @@ -172,24 +175,23 @@ We convert it into a graph. -.. GENERATED FROM PYTHON SOURCE LINES 55-56 +.. GENERATED FROM PYTHON SOURCE LINES 60-61 Then into an image -.. GENERATED FROM PYTHON SOURCE LINES 56-59 +.. 
GENERATED FROM PYTHON SOURCE LINES 61-65 .. code-block:: default import os - os.system('dot -O -Tpng graph.dot') + os.system("dot -O -Tpng graph.dot") -.. rst-class:: sphx-glr-script-out - Out: +.. rst-class:: sphx-glr-script-out .. code-block:: none @@ -198,67 +200,56 @@ Then into an image -.. GENERATED FROM PYTHON SOURCE LINES 60-61 +.. GENERATED FROM PYTHON SOURCE LINES 66-67 Which we display... -.. GENERATED FROM PYTHON SOURCE LINES 61-70 +.. GENERATED FROM PYTHON SOURCE LINES 67-71 .. code-block:: default import matplotlib.pyplot as plt + image = plt.imread("graph.dot.png") plt.imshow(image) - - - - - - -.. image:: /auto_examples/images/sphx_glr_plot_pipeline_001.png - :alt: plot pipeline - :class: sphx-glr-single-img +.. image-sg:: /auto_examples/images/sphx_glr_plot_pipeline_001.png + :alt: plot pipeline + :srcset: /auto_examples/images/sphx_glr_plot_pipeline_001.png + :class: sphx-glr-single-img .. rst-class:: sphx-glr-script-out - Out: - .. code-block:: none - + .. rst-class:: sphx-glr-timing - **Total running time of the script:** ( 0 minutes 0.244 seconds) + **Total running time of the script:** ( 0 minutes 0.187 seconds) .. _sphx_glr_download_auto_examples_plot_pipeline.py: +.. only:: html -.. only :: html - - .. container:: sphx-glr-footer - :class: sphx-glr-footer-example - - - - .. container:: sphx-glr-download sphx-glr-download-python + .. container:: sphx-glr-footer sphx-glr-footer-example - :download:`Download Python source code: plot_pipeline.py ` + .. container:: sphx-glr-download sphx-glr-download-python + :download:`Download Python source code: plot_pipeline.py ` - .. container:: sphx-glr-download sphx-glr-download-jupyter + .. container:: sphx-glr-download sphx-glr-download-jupyter - :download:`Download Jupyter notebook: plot_pipeline.ipynb ` + :download:`Download Jupyter notebook: plot_pipeline.ipynb ` .. only:: html diff --git a/docs/api/python/sources/auto_examples/plot_profiling.rst.txt b/docs/api/python/sources/auto_examples/plot_profiling.rst.txt index c45a358030010..f85849b062b6b 100644 --- a/docs/api/python/sources/auto_examples/plot_profiling.rst.txt +++ b/docs/api/python/sources/auto_examples/plot_profiling.rst.txt @@ -26,13 +26,14 @@ Profile the execution of a simple model *ONNX Runtime* can profile the execution of the model. This example shows how to interpret the results. -.. GENERATED FROM PYTHON SOURCE LINES 14-32 +.. GENERATED FROM PYTHON SOURCE LINES 14-31 .. code-block:: default + import numpy import onnx + import onnxruntime as rt - import numpy from onnxruntime.datasets import get_example @@ -53,13 +54,11 @@ This example shows how to interpret the results. - - -.. GENERATED FROM PYTHON SOURCE LINES 33-34 +.. GENERATED FROM PYTHON SOURCE LINES 32-33 Let's load a very simple model and compute some prediction. -.. GENERATED FROM PYTHON SOURCE LINES 34-45 +.. GENERATED FROM PYTHON SOURCE LINES 33-44 .. code-block:: default @@ -67,7 +66,7 @@ Let's load a very simple model and compute some prediction. example1 = get_example("mul_1.onnx") onnx_model = change_ir_version(example1) onnx_model_str = onnx_model.SerializeToString() - sess = rt.InferenceSession(onnx_model_str) + sess = rt.InferenceSession(onnx_model_str, providers=rt.get_available_providers()) input_name = sess.get_inputs()[0].name x = numpy.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]], dtype=numpy.float32) @@ -80,8 +79,6 @@ Let's load a very simple model and compute some prediction. .. rst-class:: sphx-glr-script-out - Out: - .. 
code-block:: none [array([[ 1., 4.], @@ -91,19 +88,19 @@ Let's load a very simple model and compute some prediction. -.. GENERATED FROM PYTHON SOURCE LINES 46-48 +.. GENERATED FROM PYTHON SOURCE LINES 45-47 We need to enable profiling before running the predictions. -.. GENERATED FROM PYTHON SOURCE LINES 48-60 +.. GENERATED FROM PYTHON SOURCE LINES 47-59 .. code-block:: default options = rt.SessionOptions() options.enable_profiling = True - sess_profile = rt.InferenceSession(onnx_model_str, options) + sess_profile = rt.InferenceSession(onnx_model_str, options, providers=rt.get_available_providers()) input_name = sess.get_inputs()[0].name x = numpy.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]], dtype=numpy.float32) @@ -118,58 +115,53 @@ before running the predictions. .. rst-class:: sphx-glr-script-out - Out: - .. code-block:: none - onnxruntime_profile__2021-03-04_18-40-00.json + onnxruntime_profile__2022-11-01_10-29-57.json -.. GENERATED FROM PYTHON SOURCE LINES 61-63 +.. GENERATED FROM PYTHON SOURCE LINES 60-62 The results are stored in a file in JSON format. Let's see what it contains. -.. GENERATED FROM PYTHON SOURCE LINES 63-71 +.. GENERATED FROM PYTHON SOURCE LINES 62-69 .. code-block:: default import json + with open(prof_file, "r") as f: sess_time = json.load(f) import pprint - pprint.pprint(sess_time) - - + pprint.pprint(sess_time) .. rst-class:: sphx-glr-script-out - Out: - .. code-block:: none [{'args': {}, 'cat': 'Session', - 'dur': 101, + 'dur': 49, 'name': 'model_loading_array', 'ph': 'X', - 'pid': 77, - 'tid': 77, - 'ts': 3}, + 'pid': 3109, + 'tid': 3109, + 'ts': 1}, {'args': {}, 'cat': 'Session', - 'dur': 227, + 'dur': 236, 'name': 'session_initialization', 'ph': 'X', - 'pid': 77, - 'tid': 77, - 'ts': 134}] + 'pid': 3109, + 'tid': 3109, + 'ts': 59}]
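The trace uses the Chrome tracing event format, so besides pretty-printing it can be opened in a browser's tracing viewer or aggregated programmatically. A minimal sketch, reusing ``prof_file`` from above:

.. code-block:: python

    import json

    with open(prof_file, "r") as f:
        events = json.load(f)

    # 'dur' is reported in microseconds; sum it per event category
    totals = {}
    for event in events:
        totals[event["cat"]] = totals.get(event["cat"], 0) + event["dur"]
    print(totals)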
@@ -177,28 +169,23 @@ Let's see what it contains. .. rst-class:: sphx-glr-timing - **Total running time of the script:** ( 0 minutes 0.010 seconds) + **Total running time of the script:** ( 0 minutes 0.004 seconds) .. _sphx_glr_download_auto_examples_plot_profiling.py: +.. only:: html -.. only :: html - - .. container:: sphx-glr-footer - :class: sphx-glr-footer-example - - - - .. container:: sphx-glr-download sphx-glr-download-python + .. container:: sphx-glr-footer sphx-glr-footer-example - :download:`Download Python source code: plot_profiling.py ` + .. container:: sphx-glr-download sphx-glr-download-python + :download:`Download Python source code: plot_profiling.py ` - .. container:: sphx-glr-download sphx-glr-download-jupyter + .. container:: sphx-glr-download sphx-glr-download-jupyter - :download:`Download Jupyter notebook: plot_profiling.ipynb ` + :download:`Download Jupyter notebook: plot_profiling.ipynb ` .. only:: html diff --git a/docs/api/python/sources/auto_examples/plot_train_convert_predict.rst.txt b/docs/api/python/sources/auto_examples/plot_train_convert_predict.rst.txt index 354281587abe5..d4698de656337 100644 --- a/docs/api/python/sources/auto_examples/plot_train_convert_predict.rst.txt +++ b/docs/api/python/sources/auto_examples/plot_train_convert_predict.rst.txt @@ -35,16 +35,18 @@ Train a logistic regression The first step consists in retrieving the iris dataset. -.. GENERATED FROM PYTHON SOURCE LINES 23-31 +.. GENERATED FROM PYTHON SOURCE LINES 23-33 .. code-block:: default from sklearn.datasets import load_iris + iris = load_iris() X, y = iris.data, iris.target from sklearn.model_selection import train_test_split + X_train, X_test, y_train, y_test = train_test_split(X, y) @@ -54,16 +56,17 @@ The first step consists in retrieving the iris dataset. -.. GENERATED FROM PYTHON SOURCE LINES 32-33 +.. GENERATED FROM PYTHON SOURCE LINES 34-35 Then we fit a model. -.. GENERATED FROM PYTHON SOURCE LINES 33-38 +.. GENERATED FROM PYTHON SOURCE LINES 35-41 .. code-block:: default from sklearn.linear_model import LogisticRegression + clr = LogisticRegression() clr.fit(X_train, y_train) @@ -71,31 +74,21 @@ Then we fit a model. -.. rst-class:: sphx-glr-script-out - - Out: - .. code-block:: none +.. raw:: html - /opt/miniconda/lib/python3.7/site-packages/sklearn/linear_model/_logistic.py:765: ConvergenceWarning: lbfgs failed to converge (status=1): - STOP: TOTAL NO. of ITERATIONS REACHED LIMIT.
+    LogisticRegression()
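Fitting can emit a scikit-learn ``ConvergenceWarning`` here (as the removed output below shows). The warning's own advice translates into either more lbfgs iterations or scaled inputs; a hedged sketch:

.. code-block:: python

    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    # either give lbfgs more iterations than the default 100...
    clr = LogisticRegression(max_iter=1000)
    # ...or scale the features first, as the warning suggests
    clr = make_pipeline(StandardScaler(), LogisticRegression())
    clr.fit(X_train, y_train)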
- Increase the number of iterations (max_iter) or scale the data as shown in: - https://scikit-learn.org/stable/modules/preprocessing.html - Please also refer to the documentation for alternative solver options: - https://scikit-learn.org/stable/modules/linear_model.html#logistic-regression - extra_warning_msg=_LOGISTIC_SOLVER_CONVERGENCE_MSG) - - LogisticRegression() - - - -.. GENERATED FROM PYTHON SOURCE LINES 39-41 +.. GENERATED FROM PYTHON SOURCE LINES 42-44 We compute the predictions on the test set and we show the confusion matrix. -.. GENERATED FROM PYTHON SOURCE LINES 41-46 +.. GENERATED FROM PYTHON SOURCE LINES 44-49 .. code-block:: default @@ -110,27 +103,25 @@ and we show the confusion matrix. .. rst-class:: sphx-glr-script-out - Out: - .. code-block:: none - [[10 0 0] - [ 0 14 0] - [ 0 1 13]] + [[14 0 0] + [ 0 10 0] + [ 0 0 14]] -.. GENERATED FROM PYTHON SOURCE LINES 47-53 +.. GENERATED FROM PYTHON SOURCE LINES 50-56 Conversion to ONNX format +++++++++++++++++++++++++ -We use module +We use the module `sklearn-onnx `_ to convert the model into ONNX format. -.. GENERATED FROM PYTHON SOURCE LINES 53-62 +.. GENERATED FROM PYTHON SOURCE LINES 56-65 .. code-block:: default from skl2onnx import convert_sklearn from skl2onnx.common.data_types import FloatTensorType - initial_type = [('float_input', FloatTensorType([None, 4]))] + initial_type = [("float_input", FloatTensorType([None, 4]))] onx = convert_sklearn(clr, initial_types=initial_type) with open("logreg_iris.onnx", "wb") as f: f.write(onx.SerializeToString()) @@ -150,31 +141,28 @@ to convert the model into ONNX format. -.. GENERATED FROM PYTHON SOURCE LINES 63-65 +.. GENERATED FROM PYTHON SOURCE LINES 66-68 We load the model with ONNX Runtime and look at its input and output. -.. GENERATED FROM PYTHON SOURCE LINES 65-74 +.. GENERATED FROM PYTHON SOURCE LINES 68-76 .. code-block:: default import onnxruntime as rt - sess = rt.InferenceSession("logreg_iris.onnx") - print("input name='{}' and shape={}".format( - sess.get_inputs()[0].name, sess.get_inputs()[0].shape)) - print("output name='{}' and shape={}".format( - sess.get_outputs()[0].name, sess.get_outputs()[0].shape)) + sess = rt.InferenceSession("logreg_iris.onnx", providers=rt.get_available_providers()) + print("input name='{}' and shape={}".format(sess.get_inputs()[0].name, sess.get_inputs()[0].shape)) + print("output name='{}' and shape={}".format(sess.get_outputs()[0].name, sess.get_outputs()[0].shape)) -.. rst-class:: sphx-glr-script-out - Out: +.. rst-class:: sphx-glr-script-out .. code-block:: none @@ -184,11 +172,11 @@ its input and output. -.. GENERATED FROM PYTHON SOURCE LINES 75-76 +.. GENERATED FROM PYTHON SOURCE LINES 77-78 We compute the predictions. -.. GENERATED FROM PYTHON SOURCE LINES 76-84 +.. GENERATED FROM PYTHON SOURCE LINES 78-87 .. code-block:: default @@ -197,6 +185,7 @@ We compute the predictions. label_name = sess.get_outputs()[0].name import numpy + pred_onx = sess.run([label_name], {input_name: X_test.astype(numpy.float32)})[0] print(confusion_matrix(pred, pred_onx)) @@ -206,18 +195,16 @@ We compute the predictions. .. rst-class:: sphx-glr-script-out - Out: - .. code-block:: none - [[10 0 0] - [ 0 15 0] - [ 0 0 13]] + [[14 0 0] + [ 0 10 0] + [ 0 0 14]] -.. GENERATED FROM PYTHON SOURCE LINES 85-94 +.. GENERATED FROM PYTHON SOURCE LINES 88-97 The predictions are perfectly identical. @@ -229,7 +216,7 @@ relevant metrics such as the ROC Curve.
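Those probabilities are exactly what a metric like the ROC Curve consumes. A minimal sketch for this three-class problem with scikit-learn's scorer (the ONNX Runtime output shown next could be stacked into the same matrix):

.. code-block:: python

    from sklearn.metrics import roc_auc_score

    # (n_samples, 3) array of class probabilities for the iris classes
    prob = clr.predict_proba(X_test)
    # one-vs-rest ROC AUC over the three classes
    print(roc_auc_score(y_test, prob, multi_class="ovr"))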
Let's see how to get them first with scikit-learn. -.. GENERATED FROM PYTHON SOURCE LINES 94-98 +.. GENERATED FROM PYTHON SOURCE LINES 97-101 .. code-block:: default @@ -243,23 +230,21 @@ scikit-learn. .. rst-class:: sphx-glr-script-out - Out: - .. code-block:: none - [[4.15987711e-03 8.54898335e-01 1.40941788e-01] - [9.44228260e-01 5.57707615e-02 9.78002823e-07] - [5.17324282e-02 8.88396143e-01 5.98714288e-02]] + [[2.11305775e-04 1.94108929e-01 8.05679765e-01] + [2.94214236e-03 7.96030729e-01 2.01027129e-01] + [6.94823786e-02 9.18372171e-01 1.21454503e-02]] -.. GENERATED FROM PYTHON SOURCE LINES 99-101 +.. GENERATED FROM PYTHON SOURCE LINES 102-104 And then with ONNX Runtime. -The probabilies appear to be +The probabilities appear to be -.. GENERATED FROM PYTHON SOURCE LINES 101-108 +.. GENERATED FROM PYTHON SOURCE LINES 104-112 .. code-block:: default @@ -268,6 +253,7 @@ The probabilities appear to be prob_rt = sess.run([prob_name], {input_name: X_test.astype(numpy.float32)})[0] import pprint + pprint.pprint(prob_rt[0:3]) @@ -276,27 +262,26 @@ The probabilities appear to be .. rst-class:: sphx-glr-script-out - Out: - .. code-block:: none - [{0: 0.0041598789393901825, 1: 0.8548984527587891, 2: 0.14094172418117523}, - {0: 0.9442282915115356, 1: 0.055770788341760635, 2: 9.78002844931325e-07}, - {0: 0.05173242464661598, 1: 0.888396143913269, 2: 0.05987146869301796}] + [{0: 0.0002113060181727633, 1: 0.19410927593708038, 2: 0.805679440498352}, + {0: 0.0029421390499919653, 1: 0.7960308194160461, 2: 0.2010270357131958}, + {0: 0.06948233395814896, 1: 0.9183722734451294, 2: 0.012145438231527805}] -.. GENERATED FROM PYTHON SOURCE LINES 109-110 +.. GENERATED FROM PYTHON SOURCE LINES 113-114 Let's benchmark. -.. GENERATED FROM PYTHON SOURCE LINES 110-126 +.. GENERATED FROM PYTHON SOURCE LINES 114-132 .. code-block:: default from timeit import Timer + def speed(inst, number=10, repeat=20): timer = Timer(inst, globals=globals()) raw = numpy.array(timer.repeat(repeat, number=number)) @@ -305,6 +290,7 @@ Let's benchmark. print("Average %1.3g min=%1.3g max=%1.3g" % (ave, mi, ma)) return ave + print("Execution time for clr.predict") speed("clr.predict(X_test)") @@ -317,44 +303,46 @@ Let's benchmark. .. rst-class:: sphx-glr-script-out - Out: - .. code-block:: none Execution time for clr.predict - Average 8.56e-05 min=7.72e-05 max=0.000101 + Average 4.08e-05 min=3.96e-05 max=4.77e-05 Execution time for ONNX Runtime - Average 5.02e-05 min=4.61e-05 max=6.17e-05 + Average 1.85e-05 min=1.77e-05 max=2.14e-05 - 5.0215695519000296e-05 + 1.8539305000189187e-05 -.. GENERATED FROM PYTHON SOURCE LINES 127-130 +.. GENERATED FROM PYTHON SOURCE LINES 133-136 Let's benchmark a scenario similar to what a webservice experiences: the model has to do one prediction at a time as opposed to a batch of predictions. -.. GENERATED FROM PYTHON SOURCE LINES 130-148 +.. GENERATED FROM PYTHON SOURCE LINES 136-158 .. code-block:: default + def loop(X_test, fct, n=None): nrow = X_test.shape[0] if n is None: n = nrow for i in range(0, n): im = i % nrow - fct(X_test[im: im+1]) + fct(X_test[im : im + 1]) + print("Execution time for clr.predict") speed("loop(X_test, clr.predict, 100)") + def sess_predict(x): return sess.run([label_name], {input_name: x.astype(numpy.float32)})[0] + print("Execution time for sess_predict") speed("loop(X_test, sess_predict, 100)") @@ -364,24 +352,22 @@ as opposed to a batch of predictions. .. rst-class:: sphx-glr-script-out - Out: - .. 
code-block:: none Execution time for clr.predict - Average 0.00723 min=0.00694 max=0.00837 + Average 0.00381 min=0.00373 max=0.00389 Execution time for sess_predict - Average 0.00156 min=0.00152 max=0.00166 + Average 0.000817 min=0.000808 max=0.000836 - 0.0015644603711552918 + 0.0008169881699997461 -.. GENERATED FROM PYTHON SOURCE LINES 149-150 +.. GENERATED FROM PYTHON SOURCE LINES 159-160 Let's do the same for the probabilities. -.. GENERATED FROM PYTHON SOURCE LINES 150-160 +.. GENERATED FROM PYTHON SOURCE LINES 160-172 .. code-block:: default @@ -389,9 +375,11 @@ Let's do the same for the probabilities. print("Execution time for predict_proba") speed("loop(X_test, clr.predict_proba, 100)") + def sess_predict_proba(x): return sess.run([prob_name], {input_name: x.astype(numpy.float32)})[0] + print("Execution time for sess_predict_proba") speed("loop(X_test, sess_predict_proba, 100)") @@ -401,42 +389,41 @@ Let's do the same for the probabilities. .. rst-class:: sphx-glr-script-out - Out: - .. code-block:: none Execution time for predict_proba - Average 0.0108 min=0.0104 max=0.0115 + Average 0.00572 min=0.00558 max=0.00581 Execution time for sess_predict_proba - Average 0.0017 min=0.00163 max=0.00188 + Average 0.000836 min=0.000824 max=0.000864 - 0.0016972313076257703 + 0.0008363299950003977 -.. GENERATED FROM PYTHON SOURCE LINES 161-165 +.. GENERATED FROM PYTHON SOURCE LINES 173-177 -This second comparison is better as +This second comparison is better as ONNX Runtime, in this experiment, computes the label and the probabilities in every case. -.. GENERATED FROM PYTHON SOURCE LINES 167-171 +.. GENERATED FROM PYTHON SOURCE LINES 179-183 Benchmark with RandomForest +++++++++++++++++++++++++++ We first train and save a model in ONNX format. -.. GENERATED FROM PYTHON SOURCE LINES 171-180 +.. GENERATED FROM PYTHON SOURCE LINES 183-193 .. code-block:: default from sklearn.ensemble import RandomForestClassifier + rf = RandomForestClassifier() rf.fit(X_train, y_train) - initial_type = [('float_input', FloatTensorType([1, 4]))] + initial_type = [("float_input", FloatTensorType([1, 4]))] onx = convert_sklearn(rf, initial_types=initial_type) with open("rf_iris.onnx", "wb") as f: f.write(onx.SerializeToString()) @@ -448,20 +435,22 @@ We first train and save a model in ONNX format. -.. GENERATED FROM PYTHON SOURCE LINES 181-182 +.. GENERATED FROM PYTHON SOURCE LINES 194-195 We compare. -.. GENERATED FROM PYTHON SOURCE LINES 182-194 +.. GENERATED FROM PYTHON SOURCE LINES 195-209 .. code-block:: default - sess = rt.InferenceSession("rf_iris.onnx") + sess = rt.InferenceSession("rf_iris.onnx", providers=rt.get_available_providers()) + def sess_predict_proba_rf(x): return sess.run([prob_name], {input_name: x.astype(numpy.float32)})[0] + print("Execution time for predict_proba") speed("loop(X_test, rf.predict_proba, 100)") @@ -474,50 +463,50 @@ We compare. .. rst-class:: sphx-glr-script-out - Out: - .. code-block:: none Execution time for predict_proba - Average 1.25 min=1.23 max=1.26 + Average 0.65 min=0.641 max=0.67 Execution time for sess_predict_proba - Average 0.00274 min=0.00245 max=0.00442 + Average 0.00103 min=0.00101 max=0.00113 - 0.0027408220106735826 + 0.001025739425000154 -.. GENERATED FROM PYTHON SOURCE LINES 195-196 +.. GENERATED FROM PYTHON SOURCE LINES 210-211 Let's see with different numbers of trees. -.. GENERATED FROM PYTHON SOURCE LINES 196-223 +.. GENERATED FROM PYTHON SOURCE LINES 211-240 .. 
code-block:: default measures = [] - for n_trees in range(5, 51, 5): + for n_trees in range(5, 51, 5): print(n_trees) rf = RandomForestClassifier(n_estimators=n_trees) rf.fit(X_train, y_train) - initial_type = [('float_input', FloatTensorType([1, 4]))] + initial_type = [("float_input", FloatTensorType([1, 4]))] onx = convert_sklearn(rf, initial_types=initial_type) with open("rf_iris_%d.onnx" % n_trees, "wb") as f: f.write(onx.SerializeToString()) - sess = rt.InferenceSession("rf_iris_%d.onnx" % n_trees) + sess = rt.InferenceSession("rf_iris_%d.onnx" % n_trees, providers=rt.get_available_providers()) + def sess_predict_proba_loop(x): return sess.run([prob_name], {input_name: x.astype(numpy.float32)})[0] + tsk = speed("loop(X_test, rf.predict_proba, 100)", number=5, repeat=5) trt = speed("loop(X_test, sess_predict_proba_loop, 100)", number=5, repeat=5) - measures.append({'n_trees': n_trees, 'sklearn': tsk, 'rt': trt}) + measures.append({"n_trees": n_trees, "sklearn": tsk, "rt": trt}) from pandas import DataFrame + df = DataFrame(measures) ax = df.plot(x="n_trees", y="sklearn", label="scikit-learn", c="blue", logy=True) - df.plot(x="n_trees", y="rt", label="onnxruntime", - ax=ax, c="green", logy=True) + df.plot(x="n_trees", y="rt", label="onnxruntime", ax=ax, c="green", logy=True) ax.set_xlabel("Number of trees") ax.set_ylabel("Prediction time (s)") ax.set_title("Speed comparison between scikit-learn and ONNX Runtime\nFor a random forest on Iris dataset") @@ -525,77 +514,71 @@ Let's see with different number of trees. -.. image:: /auto_examples/images/sphx_glr_plot_train_convert_predict_001.png - :alt: Speed comparison between scikit-learn and ONNX Runtime For a random forest on Iris dataset - :class: sphx-glr-single-img +.. image-sg:: /auto_examples/images/sphx_glr_plot_train_convert_predict_001.png + :alt: Speed comparison between scikit-learn and ONNX Runtime For a random forest on Iris dataset + :srcset: /auto_examples/images/sphx_glr_plot_train_convert_predict_001.png + :class: sphx-glr-single-img .. rst-class:: sphx-glr-script-out - Out: - .. 
code-block:: none 5 - Average 0.11 min=0.107 max=0.118 - Average 0.00169 min=0.00161 max=0.00177 + Average 0.0465 min=0.0462 max=0.0469 + Average 0.000802 min=0.000789 max=0.000834 10 - Average 0.167 min=0.165 max=0.169 - Average 0.0017 min=0.00165 max=0.00179 + Average 0.0788 min=0.0783 max=0.0799 + Average 0.000815 min=0.000807 max=0.000837 15 - Average 0.228 min=0.227 max=0.231 - Average 0.0017 min=0.00167 max=0.00172 + Average 0.109 min=0.108 max=0.11 + Average 0.00082 min=0.000809 max=0.000844 20 - Average 0.291 min=0.286 max=0.296 - Average 0.00175 min=0.00173 max=0.00176 + Average 0.142 min=0.141 max=0.143 + Average 0.000847 min=0.00084 max=0.000869 25 - Average 0.346 min=0.342 max=0.35 - Average 0.00174 min=0.00172 max=0.00181 + Average 0.173 min=0.171 max=0.174 + Average 0.000846 min=0.000833 max=0.000859 30 - Average 0.407 min=0.404 max=0.41 - Average 0.0018 min=0.00174 max=0.0019 + Average 0.204 min=0.202 max=0.205 + Average 0.000877 min=0.000864 max=0.000921 35 - Average 0.463 min=0.459 max=0.467 - Average 0.0018 min=0.00176 max=0.00187 + Average 0.235 min=0.235 max=0.236 + Average 0.000879 min=0.000871 max=0.000896 40 - Average 0.531 min=0.521 max=0.556 - Average 0.0018 min=0.00179 max=0.00183 + Average 0.268 min=0.267 max=0.269 + Average 0.000898 min=0.000889 max=0.000923 45 - Average 0.582 min=0.577 max=0.597 - Average 0.00187 min=0.00185 max=0.00189 + Average 0.299 min=0.299 max=0.3 + Average 0.000913 min=0.000904 max=0.000935 50 - Average 0.642 min=0.64 max=0.645 - Average 0.00188 min=0.00185 max=0.00196 + Average 0.33 min=0.33 max=0.331 + Average 0.000916 min=0.000904 max=0.000945 - + .. rst-class:: sphx-glr-timing - **Total running time of the script:** ( 5 minutes 52.417 seconds) + **Total running time of the script:** ( 3 minutes 0.633 seconds) .. _sphx_glr_download_auto_examples_plot_train_convert_predict.py: +.. only:: html -.. only :: html - - .. container:: sphx-glr-footer - :class: sphx-glr-footer-example - - - - .. container:: sphx-glr-download sphx-glr-download-python + .. container:: sphx-glr-footer sphx-glr-footer-example - :download:`Download Python source code: plot_train_convert_predict.py ` + .. container:: sphx-glr-download sphx-glr-download-python + :download:`Download Python source code: plot_train_convert_predict.py ` - .. container:: sphx-glr-download sphx-glr-download-jupyter + .. container:: sphx-glr-download sphx-glr-download-jupyter - :download:`Download Jupyter notebook: plot_train_convert_predict.ipynb ` + :download:`Download Jupyter notebook: plot_train_convert_predict.ipynb ` .. 
only:: html diff --git a/docs/api/python/sources/auto_examples/sg_execution_times.rst.txt b/docs/api/python/sources/auto_examples/sg_execution_times.rst.txt index a4f6acba3905f..026b4546486bd 100644 --- a/docs/api/python/sources/auto_examples/sg_execution_times.rst.txt +++ b/docs/api/python/sources/auto_examples/sg_execution_times.rst.txt @@ -5,22 +5,22 @@ Computation times ================= -**05:52.417** total execution time for **auto_examples** files: +**03:01.751** total execution time for **auto_examples** files: +-------------------------------------------------------------------------------------------------------------+-----------+--------+ -| :ref:`sphx_glr_auto_examples_plot_train_convert_predict.py` (``plot_train_convert_predict.py``) | 05:52.417 | 0.0 MB | +| :ref:`sphx_glr_auto_examples_plot_train_convert_predict.py` (``plot_train_convert_predict.py``) | 03:00.633 | 0.0 MB | +-------------------------------------------------------------------------------------------------------------+-----------+--------+ -| :ref:`sphx_glr_auto_examples_plot_backend.py` (``plot_backend.py``) | 00:00.000 | 0.0 MB | +| :ref:`sphx_glr_auto_examples_plot_convert_pipeline_vectorizer.py` (``plot_convert_pipeline_vectorizer.py``) | 00:00.883 | 0.0 MB | +-------------------------------------------------------------------------------------------------------------+-----------+--------+ -| :ref:`sphx_glr_auto_examples_plot_common_errors.py` (``plot_common_errors.py``) | 00:00.000 | 0.0 MB | +| :ref:`sphx_glr_auto_examples_plot_pipeline.py` (``plot_pipeline.py``) | 00:00.187 | 0.0 MB | +-------------------------------------------------------------------------------------------------------------+-----------+--------+ -| :ref:`sphx_glr_auto_examples_plot_convert_pipeline_vectorizer.py` (``plot_convert_pipeline_vectorizer.py``) | 00:00.000 | 0.0 MB | +| :ref:`sphx_glr_auto_examples_plot_common_errors.py` (``plot_common_errors.py``) | 00:00.017 | 0.0 MB | +-------------------------------------------------------------------------------------------------------------+-----------+--------+ -| :ref:`sphx_glr_auto_examples_plot_load_and_predict.py` (``plot_load_and_predict.py``) | 00:00.000 | 0.0 MB | +| :ref:`sphx_glr_auto_examples_plot_backend.py` (``plot_backend.py``) | 00:00.013 | 0.0 MB | +-------------------------------------------------------------------------------------------------------------+-----------+--------+ -| :ref:`sphx_glr_auto_examples_plot_metadata.py` (``plot_metadata.py``) | 00:00.000 | 0.0 MB | +| :ref:`sphx_glr_auto_examples_plot_load_and_predict.py` (``plot_load_and_predict.py``) | 00:00.011 | 0.0 MB | +-------------------------------------------------------------------------------------------------------------+-----------+--------+ -| :ref:`sphx_glr_auto_examples_plot_pipeline.py` (``plot_pipeline.py``) | 00:00.000 | 0.0 MB | +| :ref:`sphx_glr_auto_examples_plot_profiling.py` (``plot_profiling.py``) | 00:00.004 | 0.0 MB | +-------------------------------------------------------------------------------------------------------------+-----------+--------+ -| :ref:`sphx_glr_auto_examples_plot_profiling.py` (``plot_profiling.py``) | 00:00.000 | 0.0 MB | +| :ref:`sphx_glr_auto_examples_plot_metadata.py` (``plot_metadata.py``) | 00:00.003 | 0.0 MB | +-------------------------------------------------------------------------------------------------------------+-----------+--------+ diff --git a/docs/api/python/sources/examples_md.rst.txt b/docs/api/python/sources/examples_md.rst.txt index 
2c0ca6b4e019f..b3426e824efd5 100644 --- a/docs/api/python/sources/examples_md.rst.txt +++ b/docs/api/python/sources/examples_md.rst.txt @@ -5,7 +5,7 @@ Gallery of examples =================== - The first series of examples briefly goes into the main + This series of examples briefly goes into the main feature *ONNX Runtime* implements. Each of them run in a few seconds and relies on machine learned models trained with `scikit-learn `_. @@ -22,14 +22,3 @@ auto_examples/plot_convert_pipeline_vectorizer auto_examples/plot_metadata auto_examples/plot_profiling - - The second series is about deep learning. - Once converted to *ONNX*, the predictions can be - computed with *onnxruntime* without having any - dependencies on the framework used to train the model. - - .. toctree:: - :maxdepth: 1 - :caption: Contents: - - auto_examples/plot_dl_keras diff --git a/docs/api/python/sources/tutorial.rst.txt b/docs/api/python/sources/tutorial.rst.txt index d00a378cfeedc..fccca9cbd1451 100644 --- a/docs/api/python/sources/tutorial.rst.txt +++ b/docs/api/python/sources/tutorial.rst.txt @@ -82,7 +82,7 @@ for this machine learning model. import numpy import onnxruntime as rt - sess = rt.InferenceSession("logreg_iris.onnx") + sess = rt.InferenceSession("logreg_iris.onnx", providers=rt.get_available_providers()) input_name = sess.get_inputs()[0].name pred_onx = sess.run(None, {input_name: X_test.astype(numpy.float32)})[0] print(pred_onx) @@ -97,7 +97,7 @@ by specifying its name into a list. import numpy import onnxruntime as rt - sess = rt.InferenceSession("logreg_iris.onnx") + sess = rt.InferenceSession("logreg_iris.onnx", providers=rt.get_available_providers()) input_name = sess.get_inputs()[0].name label_name = sess.get_outputs()[0].name pred_onx = sess.run([label_name], {input_name: X_test.astype(numpy.float32)})[0] diff --git a/docs/api/python/static/_sphinx_javascript_frameworks_compat.js b/docs/api/python/static/_sphinx_javascript_frameworks_compat.js new file mode 100644 index 0000000000000..8549469dc29fa --- /dev/null +++ b/docs/api/python/static/_sphinx_javascript_frameworks_compat.js @@ -0,0 +1,134 @@ +/* + * _sphinx_javascript_frameworks_compat.js + * ~~~~~~~~~~ + * + * Compatability shim for jQuery and underscores.js. + * + * WILL BE REMOVED IN Sphinx 6.0 + * xref RemovedInSphinx60Warning + * + */ + +/** + * select a different prefix for underscore + */ +$u = _.noConflict(); + + +/** + * small helper function to urldecode strings + * + * See https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/decodeURIComponent#Decoding_query_parameters_from_a_URL + */ +jQuery.urldecode = function(x) { + if (!x) { + return x + } + return decodeURIComponent(x.replace(/\+/g, ' ')); +}; + +/** + * small helper function to urlencode strings + */ +jQuery.urlencode = encodeURIComponent; + +/** + * This function returns the parsed url parameters of the + * current request. Multiple values per key are supported, + * it will always return arrays of strings for the value parts. 
+ */ +jQuery.getQueryParameters = function(s) { + if (typeof s === 'undefined') + s = document.location.search; + var parts = s.substr(s.indexOf('?') + 1).split('&'); + var result = {}; + for (var i = 0; i < parts.length; i++) { + var tmp = parts[i].split('=', 2); + var key = jQuery.urldecode(tmp[0]); + var value = jQuery.urldecode(tmp[1]); + if (key in result) + result[key].push(value); + else + result[key] = [value]; + } + return result; +}; + +/** + * highlight a given string on a jquery object by wrapping it in + * span elements with the given class name. + */ +jQuery.fn.highlightText = function(text, className) { + function highlight(node, addItems) { + if (node.nodeType === 3) { + var val = node.nodeValue; + var pos = val.toLowerCase().indexOf(text); + if (pos >= 0 && + !jQuery(node.parentNode).hasClass(className) && + !jQuery(node.parentNode).hasClass("nohighlight")) { + var span; + var isInSVG = jQuery(node).closest("body, svg, foreignObject").is("svg"); + if (isInSVG) { + span = document.createElementNS("http://www.w3.org/2000/svg", "tspan"); + } else { + span = document.createElement("span"); + span.className = className; + } + span.appendChild(document.createTextNode(val.substr(pos, text.length))); + node.parentNode.insertBefore(span, node.parentNode.insertBefore( + document.createTextNode(val.substr(pos + text.length)), + node.nextSibling)); + node.nodeValue = val.substr(0, pos); + if (isInSVG) { + var rect = document.createElementNS("http://www.w3.org/2000/svg", "rect"); + var bbox = node.parentElement.getBBox(); + rect.x.baseVal.value = bbox.x; + rect.y.baseVal.value = bbox.y; + rect.width.baseVal.value = bbox.width; + rect.height.baseVal.value = bbox.height; + rect.setAttribute('class', className); + addItems.push({ + "parent": node.parentNode, + "target": rect}); + } + } + } + else if (!jQuery(node).is("button, select, textarea")) { + jQuery.each(node.childNodes, function() { + highlight(this, addItems); + }); + } + } + var addItems = []; + var result = this.each(function() { + highlight(this, addItems); + }); + for (var i = 0; i < addItems.length; ++i) { + jQuery(addItems[i].parent).before(addItems[i].target); + } + return result; +}; + +/* + * backward compatibility for jQuery.browser + * This will be supported until firefox bug is fixed. + */ +if (!jQuery.browser) { + jQuery.uaMatch = function(ua) { + ua = ua.toLowerCase(); + + var match = /(chrome)[ \/]([\w.]+)/.exec(ua) || + /(webkit)[ \/]([\w.]+)/.exec(ua) || + /(opera)(?:.*version|)[ \/]([\w.]+)/.exec(ua) || + /(msie) ([\w.]+)/.exec(ua) || + ua.indexOf("compatible") < 0 && /(mozilla)(?:.*? rv:([\w.]+)|)/.exec(ua) || + []; + + return { + browser: match[ 1 ] || "", + version: match[ 2 ] || "0" + }; + }; + jQuery.browser = {}; + jQuery.browser[jQuery.uaMatch(navigator.userAgent).browser] = true; +} diff --git a/docs/api/python/static/basic.css b/docs/api/python/static/basic.css index be19270e4a241..eeb0519a69bac 100644 --- a/docs/api/python/static/basic.css +++ b/docs/api/python/static/basic.css @@ -4,7 +4,7 @@ * * Sphinx stylesheet -- basic theme. * - * :copyright: Copyright 2007-2021 by the Sphinx team, see AUTHORS. + * :copyright: Copyright 2007-2022 by the Sphinx team, see AUTHORS. * :license: BSD, see LICENSE for details. 
* */ @@ -130,7 +130,7 @@ ul.search li a { font-weight: bold; } -ul.search li div.context { +ul.search li p.context { color: #888; margin: 2px 0 0 30px; text-align: left; @@ -222,7 +222,7 @@ table.modindextable td { /* -- general body styles --------------------------------------------------- */ div.body { - min-width: 450px; + min-width: 360px; max-width: 800px; } @@ -236,7 +236,6 @@ div.body p, div.body dd, div.body li, div.body blockquote { a.headerlink { visibility: hidden; } - a.brackets:before, span.brackets > a:before{ content: "["; @@ -247,6 +246,7 @@ span.brackets > a:after { content: "]"; } + h1:hover > a.headerlink, h2:hover > a.headerlink, h3:hover > a.headerlink, @@ -277,25 +277,25 @@ p.rubric { font-weight: bold; } -img.align-left, .figure.align-left, object.align-left { +img.align-left, figure.align-left, .figure.align-left, object.align-left { clear: left; float: left; margin-right: 1em; } -img.align-right, .figure.align-right, object.align-right { +img.align-right, figure.align-right, .figure.align-right, object.align-right { clear: right; float: right; margin-left: 1em; } -img.align-center, .figure.align-center, object.align-center { +img.align-center, figure.align-center, .figure.align-center, object.align-center { display: block; margin-left: auto; margin-right: auto; } -img.align-default, .figure.align-default { +img.align-default, figure.align-default, .figure.align-default { display: block; margin-left: auto; margin-right: auto; @@ -319,7 +319,8 @@ img.align-default, .figure.align-default { /* -- sidebars -------------------------------------------------------------- */ -div.sidebar { +div.sidebar, +aside.sidebar { margin: 0 0 0.5em 1em; border: 1px solid #ddb; padding: 7px; @@ -333,13 +334,11 @@ div.sidebar { p.sidebar-title { font-weight: bold; } - div.admonition, div.topic, blockquote { clear: left; } /* -- topics ---------------------------------------------------------------- */ - div.topic { border: 1px solid #ccc; padding: 7px; @@ -377,12 +376,14 @@ div.body p.centered { /* -- content of sidebars/topics/admonitions -------------------------------- */ div.sidebar > :last-child, +aside.sidebar > :last-child, div.topic > :last-child, div.admonition > :last-child { margin-bottom: 0; } div.sidebar::after, +aside.sidebar::after, div.topic::after, div.admonition::after, blockquote::after { @@ -425,10 +426,6 @@ table.docutils td, table.docutils th { border-bottom: 1px solid #aaa; } -table.footnote td, table.footnote th { - border: 0 !important; -} - th { text-align: left; padding-right: 5px; @@ -455,20 +452,22 @@ td > :last-child { /* -- figures --------------------------------------------------------------- */ -div.figure { +div.figure, figure { margin: 0.5em; padding: 0.5em; } -div.figure p.caption { +div.figure p.caption, figcaption { padding: 0.3em; } -div.figure p.caption span.caption-number { +div.figure p.caption span.caption-number, +figcaption span.caption-number { font-style: italic; } -div.figure p.caption span.caption-text { +div.figure p.caption span.caption-text, +figcaption span.caption-text { } /* -- field list styles ----------------------------------------------------- */ @@ -503,6 +502,63 @@ table.hlist td { vertical-align: top; } +/* -- object description styles --------------------------------------------- */ + +.sig { + font-family: 'Consolas', 'Menlo', 'DejaVu Sans Mono', 'Bitstream Vera Sans Mono', monospace; +} + +.sig-name, code.descname { + background-color: transparent; + font-weight: bold; +} + +.sig-name { + font-size: 1.1em; +} + 
+code.descname { + font-size: 1.2em; +} + +.sig-prename, code.descclassname { + background-color: transparent; +} + +.optional { + font-size: 1.3em; +} + +.sig-paren { + font-size: larger; +} + +.sig-param.n { + font-style: italic; +} + +/* C++ specific styling */ + +.sig-inline.c-texpr, +.sig-inline.cpp-texpr { + font-family: unset; +} + +.sig.c .k, .sig.c .kt, +.sig.cpp .k, .sig.cpp .kt { + color: #0033B3; +} + +.sig.c .m, +.sig.cpp .m { + color: #1750EB; +} + +.sig.c .s, .sig.c .sc, +.sig.cpp .s, .sig.cpp .sc { + color: #067D17; +} + /* -- other body styles ----------------------------------------------------- */ @@ -552,7 +608,6 @@ ol.simple p, ul.simple p { margin-bottom: 0; } - dl.footnote > dt, dl.citation > dt { float: left; @@ -581,11 +636,11 @@ dl.field-list > dt { padding-left: 0.5em; padding-right: 5px; } - dl.field-list > dt:after { content: ":"; } + dl.field-list > dd { padding-left: 0.5em; margin-top: 0em; @@ -629,14 +684,6 @@ dl.glossary dt { font-size: 1.1em; } -.optional { - font-size: 1.3em; -} - -.sig-paren { - font-size: larger; -} - .versionmodified { font-style: italic; } @@ -677,8 +724,9 @@ dl.glossary dt { .classifier:before { font-style: normal; - margin: 0.5em; + margin: 0 0.5em; content: ":"; + display: inline-block; } abbr, acronym { @@ -702,6 +750,7 @@ span.pre { -ms-hyphens: none; -webkit-hyphens: none; hyphens: none; + white-space: nowrap; } div[class*="highlight-"] { @@ -765,8 +814,12 @@ div.code-block-caption code { table.highlighttable td.linenos, span.linenos, -div.doctest > div.highlight span.gp { /* gp: Generic.Prompt */ - user-select: none; +div.highlight span.gp { /* gp: Generic.Prompt */ + user-select: none; + -webkit-user-select: text; /* Safari fallback only */ + -webkit-user-select: none; /* Chrome/Safari */ + -moz-user-select: none; /* Firefox */ + -ms-user-select: none; /* IE10+ */ } div.code-block-caption span.caption-number { @@ -781,16 +834,6 @@ div.literal-block-wrapper { margin: 1em 0; } -code.descname { - background-color: transparent; - font-weight: bold; - font-size: 1.2em; -} - -code.descclassname { - background-color: transparent; -} - code.xref, a code { background-color: transparent; font-weight: bold; diff --git a/docs/api/python/static/doctools.js b/docs/api/python/static/doctools.js index 61ac9d266f95d..527b876ca636d 100644 --- a/docs/api/python/static/doctools.js +++ b/docs/api/python/static/doctools.js @@ -2,320 +2,155 @@ * doctools.js * ~~~~~~~~~~~ * - * Sphinx JavaScript utilities for all documentation. + * Base JavaScript utilities for all Sphinx HTML documentation. * - * :copyright: Copyright 2007-2021 by the Sphinx team, see AUTHORS. + * :copyright: Copyright 2007-2022 by the Sphinx team, see AUTHORS. * :license: BSD, see LICENSE for details. 
* */ - -/** - * select a different prefix for underscore - */ -$u = _.noConflict(); - -/** - * make the code below compatible with browsers without - * an installed firebug like debugger -if (!window.console || !console.firebug) { - var names = ["log", "debug", "info", "warn", "error", "assert", "dir", - "dirxml", "group", "groupEnd", "time", "timeEnd", "count", "trace", - "profile", "profileEnd"]; - window.console = {}; - for (var i = 0; i < names.length; ++i) - window.console[names[i]] = function() {}; -} - */ - -/** - * small helper function to urldecode strings - * - * See https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/decodeURIComponent#Decoding_query_parameters_from_a_URL - */ -jQuery.urldecode = function(x) { - if (!x) { - return x - } - return decodeURIComponent(x.replace(/\+/g, ' ')); -}; - -/** - * small helper function to urlencode strings - */ -jQuery.urlencode = encodeURIComponent; - -/** - * This function returns the parsed url parameters of the - * current request. Multiple values per key are supported, - * it will always return arrays of strings for the value parts. - */ -jQuery.getQueryParameters = function(s) { - if (typeof s === 'undefined') - s = document.location.search; - var parts = s.substr(s.indexOf('?') + 1).split('&'); - var result = {}; - for (var i = 0; i < parts.length; i++) { - var tmp = parts[i].split('=', 2); - var key = jQuery.urldecode(tmp[0]); - var value = jQuery.urldecode(tmp[1]); - if (key in result) - result[key].push(value); - else - result[key] = [value]; +"use strict"; + +const BLACKLISTED_KEY_CONTROL_ELEMENTS = new Set([ + "TEXTAREA", + "INPUT", + "SELECT", + "BUTTON", +]); + +const _ready = (callback) => { + if (document.readyState !== "loading") { + callback(); + } else { + document.addEventListener("DOMContentLoaded", callback); } - return result; }; -/** - * highlight a given string on a jquery object by wrapping it in - * span elements with the given class name. 
- */ -jQuery.fn.highlightText = function(text, className) { - function highlight(node, addItems) { - if (node.nodeType === 3) { - var val = node.nodeValue; - var pos = val.toLowerCase().indexOf(text); - if (pos >= 0 && - !jQuery(node.parentNode).hasClass(className) && - !jQuery(node.parentNode).hasClass("nohighlight")) { - var span; - var isInSVG = jQuery(node).closest("body, svg, foreignObject").is("svg"); - if (isInSVG) { - span = document.createElementNS("http://www.w3.org/2000/svg", "tspan"); - } else { - span = document.createElement("span"); - span.className = className; - } - span.appendChild(document.createTextNode(val.substr(pos, text.length))); - node.parentNode.insertBefore(span, node.parentNode.insertBefore( - document.createTextNode(val.substr(pos + text.length)), - node.nextSibling)); - node.nodeValue = val.substr(0, pos); - if (isInSVG) { - var rect = document.createElementNS("http://www.w3.org/2000/svg", "rect"); - var bbox = node.parentElement.getBBox(); - rect.x.baseVal.value = bbox.x; - rect.y.baseVal.value = bbox.y; - rect.width.baseVal.value = bbox.width; - rect.height.baseVal.value = bbox.height; - rect.setAttribute('class', className); - addItems.push({ - "parent": node.parentNode, - "target": rect}); - } - } - } - else if (!jQuery(node).is("button, select, textarea")) { - jQuery.each(node.childNodes, function() { - highlight(this, addItems); - }); - } - } - var addItems = []; - var result = this.each(function() { - highlight(this, addItems); - }); - for (var i = 0; i < addItems.length; ++i) { - jQuery(addItems[i].parent).before(addItems[i].target); - } - return result; -}; - -/* - * backward compatibility for jQuery.browser - * This will be supported until firefox bug is fixed. - */ -if (!jQuery.browser) { - jQuery.uaMatch = function(ua) { - ua = ua.toLowerCase(); - - var match = /(chrome)[ \/]([\w.]+)/.exec(ua) || - /(webkit)[ \/]([\w.]+)/.exec(ua) || - /(opera)(?:.*version|)[ \/]([\w.]+)/.exec(ua) || - /(msie) ([\w.]+)/.exec(ua) || - ua.indexOf("compatible") < 0 && /(mozilla)(?:.*? rv:([\w.]+)|)/.exec(ua) || - []; - - return { - browser: match[ 1 ] || "", - version: match[ 2 ] || "0" - }; - }; - jQuery.browser = {}; - jQuery.browser[jQuery.uaMatch(navigator.userAgent).browser] = true; -} - /** * Small JavaScript module for the documentation. */ -var Documentation = { - - init : function() { - this.fixFirefoxAnchorBug(); - this.highlightSearchWords(); - this.initIndexTable(); - if (DOCUMENTATION_OPTIONS.NAVIGATION_WITH_KEYS) { - this.initOnKeyListeners(); - } +const Documentation = { + init: () => { + Documentation.initDomainIndexTable(); + Documentation.initOnKeyListeners(); }, /** * i18n support */ - TRANSLATIONS : {}, - PLURAL_EXPR : function(n) { return n === 1 ? 0 : 1; }, - LOCALE : 'unknown', + TRANSLATIONS: {}, + PLURAL_EXPR: (n) => (n === 1 ? 0 : 1), + LOCALE: "unknown", // gettext and ngettext don't access this so that the functions // can safely bound to a different name (_ = Documentation.gettext) - gettext : function(string) { - var translated = Documentation.TRANSLATIONS[string]; - if (typeof translated === 'undefined') - return string; - return (typeof translated === 'string') ? 
translated : translated[0];
+  gettext: (string) => {
+    const translated = Documentation.TRANSLATIONS[string];
+    switch (typeof translated) {
+      case "undefined":
+        return string; // no translation
+      case "string":
+        return translated; // translation exists
+      default:
+        return translated[0]; // (singular, plural) translation tuple exists
+    }
   },
 
-  ngettext : function(singular, plural, n) {
-    var translated = Documentation.TRANSLATIONS[singular];
-    if (typeof translated === 'undefined')
-      return (n == 1) ? singular : plural;
-    return translated[Documentation.PLURALEXPR(n)];
+  ngettext: (singular, plural, n) => {
+    const translated = Documentation.TRANSLATIONS[singular];
+    if (typeof translated !== "undefined")
+      return translated[Documentation.PLURAL_EXPR(n)];
+    return n === 1 ? singular : plural;
   },
 
-  addTranslations : function(catalog) {
-    for (var key in catalog.messages)
-      this.TRANSLATIONS[key] = catalog.messages[key];
-    this.PLURAL_EXPR = new Function('n', 'return +(' + catalog.plural_expr + ')');
-    this.LOCALE = catalog.locale;
+  addTranslations: (catalog) => {
+    Object.assign(Documentation.TRANSLATIONS, catalog.messages);
+    Documentation.PLURAL_EXPR = new Function(
+      "n",
+      `return (${catalog.plural_expr})`
+    );
+    Documentation.LOCALE = catalog.locale;
   },
 
   /**
-   * add context elements like header anchor links
+   * helper function to focus on search bar
    */
-  addContextElements : function() {
-    $('div[id] > :header:first').each(function() {
-      $('<a class="headerlink">\u00B6</a>').
-      attr('href', '#' + this.id).
-      attr('title', _('Permalink to this headline')).
-      appendTo(this);
-    });
-    $('dt[id]').each(function() {
-      $('<a class="headerlink">\u00B6</a>').
-      attr('href', '#' + this.id).
-      attr('title', _('Permalink to this definition')).
-      appendTo(this);
-    });
+  focusSearchBar: () => {
+    document.querySelectorAll("input[name=q]")[0]?.focus();
   },
 
   /**
-   * workaround a firefox stupidity
-   * see: https://bugzilla.mozilla.org/show_bug.cgi?id=645075
+   * Initialise the domain index toggle buttons
    */
-  fixFirefoxAnchorBug : function() {
-    if (document.location.hash && $.browser.mozilla)
-      window.setTimeout(function() {
-        document.location.href += '';
-      }, 10);
-  },
-
-  /**
-   * highlight the search words provided in the url in the text
-   */
-  highlightSearchWords : function() {
-    var params = $.getQueryParameters();
-    var terms = (params.highlight) ? 
params.highlight[0].split(/\s+/) : [];
-    if (terms.length) {
-      var body = $('div.body');
-      if (!body.length) {
-        body = $('body');
+  initDomainIndexTable: () => {
+    const toggler = (el) => {
+      const idNumber = el.id.substr(7);
+      const toggledRows = document.querySelectorAll(`tr.cg-${idNumber}`);
+      if (el.src.substr(-9) === "minus.png") {
+        el.src = `${el.src.substr(0, el.src.length - 9)}plus.png`;
+        toggledRows.forEach((el) => (el.style.display = "none"));
+      } else {
+        el.src = `${el.src.substr(0, el.src.length - 8)}minus.png`;
+        toggledRows.forEach((el) => (el.style.display = ""));
       }
-      window.setTimeout(function() {
-        $.each(terms, function() {
-          body.highlightText(this.toLowerCase(), 'highlighted');
-        });
-      }, 10);
-      $('<p class="highlight-link"><a href="javascript:Documentation.' +
-        'hideSearchWords()">' + _('Hide Search Matches') + '</a></p>')
-          .appendTo($('#searchbox'));
-    }
-  },
-
-  /**
-   * init the domain index toggle buttons
-   */
-  initIndexTable : function() {
-    var togglers = $('img.toggler').click(function() {
-      var src = $(this).attr('src');
-      var idnum = $(this).attr('id').substr(7);
-      $('tr.cg-' + idnum).toggle();
-      if (src.substr(-9) === 'minus.png')
-        $(this).attr('src', src.substr(0, src.length-9) + 'plus.png');
-      else
-        $(this).attr('src', src.substr(0, src.length-8) + 'minus.png');
-    }).css('display', '');
-    if (DOCUMENTATION_OPTIONS.COLLAPSE_INDEX) {
-      togglers.click();
-    }
-  },
-
-  /**
-   * helper function to hide the search marks again
-   */
-  hideSearchWords : function() {
-    $('#searchbox .highlight-link').fadeOut(300);
-    $('span.highlighted').removeClass('highlighted');
-  },
-
-  /**
-   * make the url absolute
-   */
-  makeURL : function(relativeURL) {
-    return DOCUMENTATION_OPTIONS.URL_ROOT + '/' + relativeURL;
-  },
+  };
 
-  /**
-   * get the current relative url
-   */
-  getCurrentURL : function() {
-    var path = document.location.pathname;
-    var parts = path.split(/\//);
-    $.each(DOCUMENTATION_OPTIONS.URL_ROOT.split(/\//), function() {
-      if (this === '..')
-        parts.pop();
-    });
-    var url = parts.join('/');
-    return path.substring(url.lastIndexOf('/') + 1, path.length - 1);
+    const togglerElements = document.querySelectorAll("img.toggler");
+    togglerElements.forEach((el) =>
+      el.addEventListener("click", (event) => toggler(event.currentTarget))
+    );
+    togglerElements.forEach((el) => (el.style.display = ""));
+    if (DOCUMENTATION_OPTIONS.COLLAPSE_INDEX) togglerElements.forEach(toggler);
   },
 
-  initOnKeyListeners: function() {
-    $(document).keydown(function(event) {
-      var activeElementType = document.activeElement.tagName;
-      // don't navigate when in search box, textarea, dropdown or button
-      if (activeElementType !== 'TEXTAREA' && activeElementType !== 'INPUT' && activeElementType !== 'SELECT'
-          && activeElementType !== 'BUTTON' && !event.altKey && !event.ctrlKey && !event.metaKey
-          && !event.shiftKey) {
-        switch (event.keyCode) {
-          case 37: // left
-            var prevHref = $('link[rel="prev"]').prop('href');
-            if (prevHref) {
-              window.location.href = prevHref;
-              return false;
+  initOnKeyListeners: () => {
+    // only install a listener if it is really needed
+    if (
+      !DOCUMENTATION_OPTIONS.NAVIGATION_WITH_KEYS &&
+      !DOCUMENTATION_OPTIONS.ENABLE_SEARCH_SHORTCUTS
+    )
+      return;
+
+    document.addEventListener("keydown", (event) => {
+      // bail for input elements
+      if (BLACKLISTED_KEY_CONTROL_ELEMENTS.has(document.activeElement.tagName)) return;
+      // bail with special keys
+      if (event.altKey || event.ctrlKey || event.metaKey) return;
+
+      if (!event.shiftKey) {
+        switch (event.key) {
+          case "ArrowLeft":
+            if (!DOCUMENTATION_OPTIONS.NAVIGATION_WITH_KEYS) break;
+
+            const prevLink = 
document.querySelector('link[rel="prev"]'); + if (prevLink && prevLink.href) { + window.location.href = prevLink.href; + event.preventDefault(); } - case 39: // right - var nextHref = $('link[rel="next"]').prop('href'); - if (nextHref) { - window.location.href = nextHref; - return false; + break; + case "ArrowRight": + if (!DOCUMENTATION_OPTIONS.NAVIGATION_WITH_KEYS) break; + + const nextLink = document.querySelector('link[rel="next"]'); + if (nextLink && nextLink.href) { + window.location.href = nextLink.href; + event.preventDefault(); } + break; } } + + // some keyboard layouts may need Shift to get / + switch (event.key) { + case "/": + if (!DOCUMENTATION_OPTIONS.ENABLE_SEARCH_SHORTCUTS) break; + Documentation.focusSearchBar(); + event.preventDefault(); + } }); - } + }, }; // quick alias for translations -_ = Documentation.gettext; +const _ = Documentation.gettext; -$(document).ready(function() { - Documentation.init(); -}); +_ready(Documentation.init); diff --git a/docs/api/python/static/documentation_options.js b/docs/api/python/static/documentation_options.js index a752120bb7d79..96fcefe3ee0ff 100644 --- a/docs/api/python/static/documentation_options.js +++ b/docs/api/python/static/documentation_options.js @@ -1,6 +1,6 @@ var DOCUMENTATION_OPTIONS = { URL_ROOT: document.getElementById("documentation_options").getAttribute('data-url_root'), - VERSION: '1.7.0', + VERSION: '1.14.0', LANGUAGE: 'en', COLLAPSE_INDEX: false, BUILDER: 'html', @@ -8,5 +8,7 @@ var DOCUMENTATION_OPTIONS = { LINK_SUFFIX: '.html', HAS_SOURCE: true, SOURCELINK_SUFFIX: '.txt', - NAVIGATION_WITH_KEYS: false + NAVIGATION_WITH_KEYS: false, + SHOW_SEARCH_SUMMARY: true, + ENABLE_SEARCH_SHORTCUTS: true, }; \ No newline at end of file diff --git a/docs/api/python/static/gallery.css b/docs/api/python/static/gallery.css deleted file mode 100644 index 848774f4c5494..0000000000000 --- a/docs/api/python/static/gallery.css +++ /dev/null @@ -1,204 +0,0 @@ -/* -Sphinx-Gallery has compatible CSS to fix default sphinx themes -Tested for Sphinx 1.3.1 for all themes: default, alabaster, sphinxdoc, -scrolls, agogo, traditional, nature, haiku, pyramid -Tested for Read the Docs theme 0.1.7 */ -.sphx-glr-thumbcontainer { - background: #fff; - border: solid #fff 1px; - -moz-border-radius: 5px; - -webkit-border-radius: 5px; - border-radius: 5px; - box-shadow: none; - float: left; - margin: 5px; - min-height: 230px; - padding-top: 5px; - position: relative; -} -.sphx-glr-thumbcontainer:hover { - border: solid #b4ddfc 1px; - box-shadow: 0 0 15px rgba(142, 176, 202, 0.5); -} -.sphx-glr-thumbcontainer a.internal { - bottom: 0; - display: block; - left: 0; - padding: 150px 10px 0; - position: absolute; - right: 0; - top: 0; -} -/* Next one is to avoid Sphinx traditional theme to cover all the -thumbnail with its default link Background color */ -.sphx-glr-thumbcontainer a.internal:hover { - background-color: transparent; -} - -.sphx-glr-thumbcontainer p { - margin: 0 0 .1em 0; -} -.sphx-glr-thumbcontainer .figure { - margin: 10px; - width: 160px; -} -.sphx-glr-thumbcontainer img { - display: inline; - max-height: 112px; - max-width: 160px; -} -.sphx-glr-thumbcontainer[tooltip]:hover:after { - background: rgba(0, 0, 0, 0.8); - -webkit-border-radius: 5px; - -moz-border-radius: 5px; - border-radius: 5px; - color: #fff; - content: attr(tooltip); - left: 95%; - padding: 5px 15px; - position: absolute; - z-index: 98; - width: 220px; - bottom: 52%; -} -.sphx-glr-thumbcontainer[tooltip]:hover:before { - border: solid; - border-color: #333 transparent; 
- border-width: 18px 0 0 20px; - bottom: 58%; - content: ''; - left: 85%; - position: absolute; - z-index: 99; -} - -.sphx-glr-script-out { - color: #888; - margin: 0; -} -p.sphx-glr-script-out { - padding-top: 0.7em; -} -.sphx-glr-script-out .highlight { - background-color: transparent; - margin-left: 2.5em; - margin-top: -2.1em; -} -.sphx-glr-script-out .highlight pre { - background-color: #fafae2; - border: 0; - max-height: 30em; - overflow: auto; - padding-left: 1ex; - margin: 0px; - word-break: break-word; -} -.sphx-glr-script-out + p { - margin-top: 1.8em; -} -blockquote.sphx-glr-script-out { - margin-left: 0pt; -} -.sphx-glr-script-out.highlight-pytb .highlight pre { - color: #000; - background-color: #ffe4e4; - border: 1px solid #f66; - margin-top: 10px; - padding: 7px; -} - -div.sphx-glr-footer { - text-align: center; -} - -div.sphx-glr-download { - margin: 1em auto; - vertical-align: middle; -} - -div.sphx-glr-download a { - background-color: #ffc; - background-image: linear-gradient(to bottom, #FFC, #d5d57e); - border-radius: 4px; - border: 1px solid #c2c22d; - color: #000; - display: inline-block; - font-weight: bold; - padding: 1ex; - text-align: center; -} - -div.sphx-glr-download code.download { - display: inline-block; - white-space: normal; - word-break: normal; - overflow-wrap: break-word; - /* border and background are given by the enclosing 'a' */ - border: none; - background: none; -} - -div.sphx-glr-download a:hover { - box-shadow: inset 0 1px 0 rgba(255,255,255,.1), 0 1px 5px rgba(0,0,0,.25); - text-decoration: none; - background-image: none; - background-color: #d5d57e; -} - -.sphx-glr-example-title > :target::before { - display: block; - content: ""; - margin-top: -50px; - height: 50px; - visibility: hidden; -} - -ul.sphx-glr-horizontal { - list-style: none; - padding: 0; -} -ul.sphx-glr-horizontal li { - display: inline; -} -ul.sphx-glr-horizontal img { - height: auto !important; -} - -.sphx-glr-single-img { - margin: auto; - display: block; - max-width: 100%; -} - -.sphx-glr-multi-img { - max-width: 42%; - height: auto; -} - -div.sphx-glr-animation { - margin: auto; - display: block; - max-width: 100%; -} -div.sphx-glr-animation .animation{ - display: block; -} - -p.sphx-glr-signature a.reference.external { - -moz-border-radius: 5px; - -webkit-border-radius: 5px; - border-radius: 5px; - padding: 3px; - font-size: 75%; - text-align: right; - margin-left: auto; - display: table; -} - -.sphx-glr-clear{ - clear: both; -} - -a.sphx-glr-backref-instance { - text-decoration: none; -} diff --git a/docs/api/python/static/graphviz.css b/docs/api/python/static/graphviz.css index b340734c742f5..19e7afd385bed 100644 --- a/docs/api/python/static/graphviz.css +++ b/docs/api/python/static/graphviz.css @@ -4,7 +4,7 @@ * * Sphinx stylesheet -- graphviz extension. * - * :copyright: Copyright 2007-2021 by the Sphinx team, see AUTHORS. + * :copyright: Copyright 2007-2022 by the Sphinx team, see AUTHORS. * :license: BSD, see LICENSE for details. * */ diff --git a/docs/api/python/static/jquery-3.5.1.js b/docs/api/python/static/jquery-3.6.0.js similarity index 98% rename from docs/api/python/static/jquery-3.5.1.js rename to docs/api/python/static/jquery-3.6.0.js index 50937333b99a5..fc6c299b73e79 100644 --- a/docs/api/python/static/jquery-3.5.1.js +++ b/docs/api/python/static/jquery-3.6.0.js @@ -1,15 +1,15 @@ /*! 
- * jQuery JavaScript Library v3.5.1 + * jQuery JavaScript Library v3.6.0 * https://jquery.com/ * * Includes Sizzle.js * https://sizzlejs.com/ * - * Copyright JS Foundation and other contributors + * Copyright OpenJS Foundation and other contributors * Released under the MIT license * https://jquery.org/license * - * Date: 2020-05-04T22:49Z + * Date: 2021-03-02T17:08Z */ ( function( global, factory ) { @@ -76,12 +76,16 @@ var support = {}; var isFunction = function isFunction( obj ) { - // Support: Chrome <=57, Firefox <=52 - // In some browsers, typeof returns "function" for HTML elements - // (i.e., `typeof document.createElement( "object" ) === "function"`). - // We don't want to classify *any* DOM node as a function. - return typeof obj === "function" && typeof obj.nodeType !== "number"; - }; + // Support: Chrome <=57, Firefox <=52 + // In some browsers, typeof returns "function" for HTML elements + // (i.e., `typeof document.createElement( "object" ) === "function"`). + // We don't want to classify *any* DOM node as a function. + // Support: QtWeb <=3.8.5, WebKit <=534.34, wkhtmltopdf tool <=0.12.5 + // Plus for old WebKit, typeof returns "function" for HTML collections + // (e.g., `typeof document.getElementsByTagName("div") === "function"`). (gh-4756) + return typeof obj === "function" && typeof obj.nodeType !== "number" && + typeof obj.item !== "function"; + }; var isWindow = function isWindow( obj ) { @@ -147,7 +151,7 @@ function toType( obj ) { var - version = "3.5.1", + version = "3.6.0", // Define a local copy of jQuery jQuery = function( selector, context ) { @@ -401,7 +405,7 @@ jQuery.extend( { if ( isArrayLike( Object( arr ) ) ) { jQuery.merge( ret, typeof arr === "string" ? - [ arr ] : arr + [ arr ] : arr ); } else { push.call( ret, arr ); @@ -496,9 +500,9 @@ if ( typeof Symbol === "function" ) { // Populate the class2type map jQuery.each( "Boolean Number String Function Array Date RegExp Object Error Symbol".split( " " ), -function( _i, name ) { - class2type[ "[object " + name + "]" ] = name.toLowerCase(); -} ); + function( _i, name ) { + class2type[ "[object " + name + "]" ] = name.toLowerCase(); + } ); function isArrayLike( obj ) { @@ -518,14 +522,14 @@ function isArrayLike( obj ) { } var Sizzle = /*! 
- * Sizzle CSS Selector Engine v2.3.5 + * Sizzle CSS Selector Engine v2.3.6 * https://sizzlejs.com/ * * Copyright JS Foundation and other contributors * Released under the MIT license * https://js.foundation/ * - * Date: 2020-03-14 + * Date: 2021-02-16 */ ( function( window ) { var i, @@ -1108,8 +1112,8 @@ support = Sizzle.support = {}; * @returns {Boolean} True iff elem is a non-HTML XML node */ isXML = Sizzle.isXML = function( elem ) { - var namespace = elem.namespaceURI, - docElem = ( elem.ownerDocument || elem ).documentElement; + var namespace = elem && elem.namespaceURI, + docElem = elem && ( elem.ownerDocument || elem ).documentElement; // Support: IE <=8 // Assume HTML when documentElement doesn't yet exist, such as inside loading iframes @@ -3024,9 +3028,9 @@ var rneedsContext = jQuery.expr.match.needsContext; function nodeName( elem, name ) { - return elem.nodeName && elem.nodeName.toLowerCase() === name.toLowerCase(); + return elem.nodeName && elem.nodeName.toLowerCase() === name.toLowerCase(); -}; +} var rsingleTag = ( /^<([a-z][^\/\0>:\x20\t\r\n\f]*)[\x20\t\r\n\f]*\/?>(?:<\/\1>|)$/i ); @@ -3997,8 +4001,8 @@ jQuery.extend( { resolveContexts = Array( i ), resolveValues = slice.call( arguments ), - // the master Deferred - master = jQuery.Deferred(), + // the primary Deferred + primary = jQuery.Deferred(), // subordinate callback factory updateFunc = function( i ) { @@ -4006,30 +4010,30 @@ jQuery.extend( { resolveContexts[ i ] = this; resolveValues[ i ] = arguments.length > 1 ? slice.call( arguments ) : value; if ( !( --remaining ) ) { - master.resolveWith( resolveContexts, resolveValues ); + primary.resolveWith( resolveContexts, resolveValues ); } }; }; // Single- and empty arguments are adopted like Promise.resolve if ( remaining <= 1 ) { - adoptValue( singleValue, master.done( updateFunc( i ) ).resolve, master.reject, + adoptValue( singleValue, primary.done( updateFunc( i ) ).resolve, primary.reject, !remaining ); // Use .then() to unwrap secondary thenables (cf. gh-3000) - if ( master.state() === "pending" || + if ( primary.state() === "pending" || isFunction( resolveValues[ i ] && resolveValues[ i ].then ) ) { - return master.then(); + return primary.then(); } } // Multiple arguments are aggregated like Promise.all array elements while ( i-- ) { - adoptValue( resolveValues[ i ], updateFunc( i ), master.reject ); + adoptValue( resolveValues[ i ], updateFunc( i ), primary.reject ); } - return master.promise(); + return primary.promise(); } } ); @@ -4180,8 +4184,8 @@ var access = function( elems, fn, key, value, chainable, emptyGet, raw ) { for ( ; i < len; i++ ) { fn( elems[ i ], key, raw ? - value : - value.call( elems[ i ], i, fn( elems[ i ], key ) ) + value : + value.call( elems[ i ], i, fn( elems[ i ], key ) ) ); } } @@ -5089,10 +5093,7 @@ function buildFragment( elems, context, scripts, selection, ignored ) { } -var - rkeyEvent = /^key/, - rmouseEvent = /^(?:mouse|pointer|contextmenu|drag|drop)|click/, - rtypenamespace = /^([^.]*)(?:\.(.+)|)/; +var rtypenamespace = /^([^.]*)(?:\.(.+)|)/; function returnTrue() { return true; @@ -5387,8 +5388,8 @@ jQuery.event = { event = jQuery.event.fix( nativeEvent ), handlers = ( - dataPriv.get( this, "events" ) || Object.create( null ) - )[ event.type ] || [], + dataPriv.get( this, "events" ) || Object.create( null ) + )[ event.type ] || [], special = jQuery.event.special[ event.type ] || {}; // Use the fix-ed jQuery.Event rather than the (read-only) native event @@ -5512,12 +5513,12 @@ jQuery.event = { get: isFunction( hook ) ? 
function() { if ( this.originalEvent ) { - return hook( this.originalEvent ); + return hook( this.originalEvent ); } } : function() { if ( this.originalEvent ) { - return this.originalEvent[ name ]; + return this.originalEvent[ name ]; } }, @@ -5656,7 +5657,13 @@ function leverageNative( el, type, expectSync ) { // Cancel the outer synthetic event event.stopImmediatePropagation(); event.preventDefault(); - return result.value; + + // Support: Chrome 86+ + // In Chrome, if an element having a focusout handler is blurred by + // clicking outside of it, it invokes the handler synchronously. If + // that handler calls `.remove()` on the element, the data is cleared, + // leaving `result` undefined. We need to guard against this. + return result && result.value; } // If this is an inner synthetic event for an event with a bubbling surrogate @@ -5821,34 +5828,7 @@ jQuery.each( { targetTouches: true, toElement: true, touches: true, - - which: function( event ) { - var button = event.button; - - // Add which for key events - if ( event.which == null && rkeyEvent.test( event.type ) ) { - return event.charCode != null ? event.charCode : event.keyCode; - } - - // Add which for click: 1 === left; 2 === middle; 3 === right - if ( !event.which && button !== undefined && rmouseEvent.test( event.type ) ) { - if ( button & 1 ) { - return 1; - } - - if ( button & 2 ) { - return 3; - } - - if ( button & 4 ) { - return 2; - } - - return 0; - } - - return event.which; - } + which: true }, jQuery.event.addProp ); jQuery.each( { focus: "focusin", blur: "focusout" }, function( type, delegateType ) { @@ -5874,6 +5854,12 @@ jQuery.each( { focus: "focusin", blur: "focusout" }, function( type, delegateTyp return true; }, + // Suppress native focus or blur as it's already being fired + // in leverageNative. + _default: function() { + return true; + }, + delegateType: delegateType }; } ); @@ -6541,6 +6527,10 @@ var rboxStyle = new RegExp( cssExpand.join( "|" ), "i" ); // set in CSS while `offset*` properties report correct values. // Behavior in IE 9 is more subtle than in newer versions & it passes // some versions of this test; make sure not to make it pass there! + // + // Support: Firefox 70+ + // Only Firefox includes border widths + // in computed dimensions. (gh-4529) reliableTrDimensions: function() { var table, tr, trChild, trStyle; if ( reliableTrDimensionsVal == null ) { @@ -6548,17 +6538,32 @@ var rboxStyle = new RegExp( cssExpand.join( "|" ), "i" ); tr = document.createElement( "tr" ); trChild = document.createElement( "div" ); - table.style.cssText = "position:absolute;left:-11111px"; + table.style.cssText = "position:absolute;left:-11111px;border-collapse:separate"; + tr.style.cssText = "border:1px solid"; + + // Support: Chrome 86+ + // Height set through cssText does not get applied. + // Computed height then comes back as 0. tr.style.height = "1px"; trChild.style.height = "9px"; + // Support: Android 8 Chrome 86+ + // In our bodyBackground.html iframe, + // display for all div elements is set to "inline", + // which causes a problem only in Android 8 Chrome 86. + // Ensuring the div is display: block + // gets around this issue. 
+ trChild.style.display = "block"; + documentElement .appendChild( table ) .appendChild( tr ) .appendChild( trChild ); trStyle = window.getComputedStyle( tr ); - reliableTrDimensionsVal = parseInt( trStyle.height ) > 3; + reliableTrDimensionsVal = ( parseInt( trStyle.height, 10 ) + + parseInt( trStyle.borderTopWidth, 10 ) + + parseInt( trStyle.borderBottomWidth, 10 ) ) === tr.offsetHeight; documentElement.removeChild( table ); } @@ -7022,10 +7027,10 @@ jQuery.each( [ "height", "width" ], function( _i, dimension ) { // Running getBoundingClientRect on a disconnected node // in IE throws an error. ( !elem.getClientRects().length || !elem.getBoundingClientRect().width ) ? - swap( elem, cssShow, function() { - return getWidthOrHeight( elem, dimension, extra ); - } ) : - getWidthOrHeight( elem, dimension, extra ); + swap( elem, cssShow, function() { + return getWidthOrHeight( elem, dimension, extra ); + } ) : + getWidthOrHeight( elem, dimension, extra ); } }, @@ -7084,7 +7089,7 @@ jQuery.cssHooks.marginLeft = addGetHookIf( support.reliableMarginLeft, swap( elem, { marginLeft: 0 }, function() { return elem.getBoundingClientRect().left; } ) - ) + "px"; + ) + "px"; } } ); @@ -7223,7 +7228,7 @@ Tween.propHooks = { if ( jQuery.fx.step[ tween.prop ] ) { jQuery.fx.step[ tween.prop ]( tween ); } else if ( tween.elem.nodeType === 1 && ( - jQuery.cssHooks[ tween.prop ] || + jQuery.cssHooks[ tween.prop ] || tween.elem.style[ finalPropName( tween.prop ) ] != null ) ) { jQuery.style( tween.elem, tween.prop, tween.now + tween.unit ); } else { @@ -7468,7 +7473,7 @@ function defaultPrefilter( elem, props, opts ) { anim.done( function() { - /* eslint-enable no-loop-func */ + /* eslint-enable no-loop-func */ // The final step of a "hide" animation is actually hiding the element if ( !hidden ) { @@ -7588,7 +7593,7 @@ function Animation( elem, properties, options ) { tweens: [], createTween: function( prop, end ) { var tween = jQuery.Tween( elem, animation.opts, prop, end, - animation.opts.specialEasing[ prop ] || animation.opts.easing ); + animation.opts.specialEasing[ prop ] || animation.opts.easing ); animation.tweens.push( tween ); return tween; }, @@ -7761,7 +7766,8 @@ jQuery.fn.extend( { anim.stop( true ); } }; - doAnimation.finish = doAnimation; + + doAnimation.finish = doAnimation; return empty || optall.queue === false ? this.each( doAnimation ) : @@ -8401,8 +8407,8 @@ jQuery.fn.extend( { if ( this.setAttribute ) { this.setAttribute( "class", className || value === false ? - "" : - dataPriv.get( this, "__className__" ) || "" + "" : + dataPriv.get( this, "__className__" ) || "" ); } } @@ -8417,7 +8423,7 @@ jQuery.fn.extend( { while ( ( elem = this[ i++ ] ) ) { if ( elem.nodeType === 1 && ( " " + stripAndCollapse( getClass( elem ) ) + " " ).indexOf( className ) > -1 ) { - return true; + return true; } } @@ -8707,9 +8713,7 @@ jQuery.extend( jQuery.event, { special.bindType || type; // jQuery handler - handle = ( - dataPriv.get( cur, "events" ) || Object.create( null ) - )[ event.type ] && + handle = ( dataPriv.get( cur, "events" ) || Object.create( null ) )[ event.type ] && dataPriv.get( cur, "handle" ); if ( handle ) { handle.apply( cur, data ); @@ -8856,7 +8860,7 @@ var rquery = ( /\?/ ); // Cross-browser xml parsing jQuery.parseXML = function( data ) { - var xml; + var xml, parserErrorElem; if ( !data || typeof data !== "string" ) { return null; } @@ -8865,12 +8869,17 @@ jQuery.parseXML = function( data ) { // IE throws on parseFromString with invalid input. 
try { xml = ( new window.DOMParser() ).parseFromString( data, "text/xml" ); - } catch ( e ) { - xml = undefined; - } + } catch ( e ) {} - if ( !xml || xml.getElementsByTagName( "parsererror" ).length ) { - jQuery.error( "Invalid XML: " + data ); + parserErrorElem = xml && xml.getElementsByTagName( "parsererror" )[ 0 ]; + if ( !xml || parserErrorElem ) { + jQuery.error( "Invalid XML: " + ( + parserErrorElem ? + jQuery.map( parserErrorElem.childNodes, function( el ) { + return el.textContent; + } ).join( "\n" ) : + data + ) ); } return xml; }; @@ -8971,16 +8980,14 @@ jQuery.fn.extend( { // Can add propHook for "elements" to filter or add form elements var elements = jQuery.prop( this, "elements" ); return elements ? jQuery.makeArray( elements ) : this; - } ) - .filter( function() { + } ).filter( function() { var type = this.type; // Use .is( ":disabled" ) so that fieldset[disabled] works return this.name && !jQuery( this ).is( ":disabled" ) && rsubmittable.test( this.nodeName ) && !rsubmitterTypes.test( type ) && ( this.checked || !rcheckableType.test( type ) ); - } ) - .map( function( _i, elem ) { + } ).map( function( _i, elem ) { var val = jQuery( this ).val(); if ( val == null ) { @@ -9033,7 +9040,8 @@ var // Anchor tag for parsing the document origin originAnchor = document.createElement( "a" ); - originAnchor.href = location.href; + +originAnchor.href = location.href; // Base "constructor" for jQuery.ajaxPrefilter and jQuery.ajaxTransport function addToPrefiltersOrTransports( structure ) { @@ -9414,8 +9422,8 @@ jQuery.extend( { // Context for global events is callbackContext if it is a DOM node or jQuery collection globalEventContext = s.context && ( callbackContext.nodeType || callbackContext.jquery ) ? - jQuery( callbackContext ) : - jQuery.event, + jQuery( callbackContext ) : + jQuery.event, // Deferreds deferred = jQuery.Deferred(), @@ -9727,8 +9735,10 @@ jQuery.extend( { response = ajaxHandleResponses( s, jqXHR, responses ); } - // Use a noop converter for missing script - if ( !isSuccess && jQuery.inArray( "script", s.dataTypes ) > -1 ) { + // Use a noop converter for missing script but not if jsonp + if ( !isSuccess && + jQuery.inArray( "script", s.dataTypes ) > -1 && + jQuery.inArray( "json", s.dataTypes ) < 0 ) { s.converters[ "text script" ] = function() {}; } @@ -10466,12 +10476,6 @@ jQuery.offset = { options.using.call( elem, props ); } else { - if ( typeof props.top === "number" ) { - props.top += "px"; - } - if ( typeof props.left === "number" ) { - props.left += "px"; - } curElem.css( props ); } } @@ -10640,8 +10644,11 @@ jQuery.each( [ "top", "left" ], function( _i, prop ) { // Create innerHeight, innerWidth, height, width, outerHeight and outerWidth methods jQuery.each( { Height: "height", Width: "width" }, function( name, type ) { - jQuery.each( { padding: "inner" + name, content: type, "": "outer" + name }, - function( defaultExtra, funcName ) { + jQuery.each( { + padding: "inner" + name, + content: type, + "": "outer" + name + }, function( defaultExtra, funcName ) { // Margin is only for outerHeight, outerWidth jQuery.fn[ funcName ] = function( margin, value ) { @@ -10726,7 +10733,8 @@ jQuery.fn.extend( { } } ); -jQuery.each( ( "blur focus focusin focusout resize scroll click dblclick " + +jQuery.each( + ( "blur focus focusin focusout resize scroll click dblclick " + "mousedown mouseup mousemove mouseover mouseout mouseenter mouseleave " + "change select submit keydown keypress keyup contextmenu" ).split( " " ), function( _i, name ) { @@ -10737,7 +10745,8 @@ 
jQuery.each( ( "blur focus focusin focusout resize scroll click dblclick " + this.on( name, null, data, fn ) : this.trigger( name ); }; - } ); + } +); diff --git a/docs/api/python/static/jquery.js b/docs/api/python/static/jquery.js index b0614034ad3a9..c4c6022f2982e 100644 --- a/docs/api/python/static/jquery.js +++ b/docs/api/python/static/jquery.js @@ -1,2 +1,2 @@ -/*! jQuery v3.5.1 | (c) JS Foundation and other contributors | jquery.org/license */ -!function(e,t){"use strict";"object"==typeof module&&"object"==typeof module.exports?module.exports=e.document?t(e,!0):function(e){if(!e.document)throw new Error("jQuery requires a window with a document");return t(e)}:t(e)}("undefined"!=typeof window?window:this,function(C,e){"use strict";var t=[],r=Object.getPrototypeOf,s=t.slice,g=t.flat?function(e){return t.flat.call(e)}:function(e){return t.concat.apply([],e)},u=t.push,i=t.indexOf,n={},o=n.toString,v=n.hasOwnProperty,a=v.toString,l=a.call(Object),y={},m=function(e){return"function"==typeof e&&"number"!=typeof e.nodeType},x=function(e){return null!=e&&e===e.window},E=C.document,c={type:!0,src:!0,nonce:!0,noModule:!0};function b(e,t,n){var r,i,o=(n=n||E).createElement("script");if(o.text=e,t)for(r in c)(i=t[r]||t.getAttribute&&t.getAttribute(r))&&o.setAttribute(r,i);n.head.appendChild(o).parentNode.removeChild(o)}function w(e){return null==e?e+"":"object"==typeof e||"function"==typeof e?n[o.call(e)]||"object":typeof e}var f="3.5.1",S=function(e,t){return new S.fn.init(e,t)};function p(e){var t=!!e&&"length"in e&&e.length,n=w(e);return!m(e)&&!x(e)&&("array"===n||0===t||"number"==typeof t&&0+~]|"+M+")"+M+"*"),U=new RegExp(M+"|>"),X=new RegExp(F),V=new RegExp("^"+I+"$"),G={ID:new RegExp("^#("+I+")"),CLASS:new RegExp("^\\.("+I+")"),TAG:new RegExp("^("+I+"|[*])"),ATTR:new RegExp("^"+W),PSEUDO:new RegExp("^"+F),CHILD:new RegExp("^:(only|first|last|nth|nth-last)-(child|of-type)(?:\\("+M+"*(even|odd|(([+-]|)(\\d*)n|)"+M+"*(?:([+-]|)"+M+"*(\\d+)|))"+M+"*\\)|)","i"),bool:new RegExp("^(?:"+R+")$","i"),needsContext:new RegExp("^"+M+"*[>+~]|:(even|odd|eq|gt|lt|nth|first|last)(?:\\("+M+"*((?:-\\d)?\\d*)"+M+"*\\)|)(?=[^-]|$)","i")},Y=/HTML$/i,Q=/^(?:input|select|textarea|button)$/i,J=/^h\d$/i,K=/^[^{]+\{\s*\[native \w/,Z=/^(?:#([\w-]+)|(\w+)|\.([\w-]+))$/,ee=/[+~]/,te=new RegExp("\\\\[\\da-fA-F]{1,6}"+M+"?|\\\\([^\\r\\n\\f])","g"),ne=function(e,t){var n="0x"+e.slice(1)-65536;return t||(n<0?String.fromCharCode(n+65536):String.fromCharCode(n>>10|55296,1023&n|56320))},re=/([\0-\x1f\x7f]|^-?\d)|^-$|[^\0-\x1f\x7f-\uFFFF\w-]/g,ie=function(e,t){return t?"\0"===e?"\ufffd":e.slice(0,-1)+"\\"+e.charCodeAt(e.length-1).toString(16)+" ":"\\"+e},oe=function(){T()},ae=be(function(e){return!0===e.disabled&&"fieldset"===e.nodeName.toLowerCase()},{dir:"parentNode",next:"legend"});try{H.apply(t=O.call(p.childNodes),p.childNodes),t[p.childNodes.length].nodeType}catch(e){H={apply:t.length?function(e,t){L.apply(e,O.call(t))}:function(e,t){var n=e.length,r=0;while(e[n++]=t[r++]);e.length=n-1}}}function se(t,e,n,r){var i,o,a,s,u,l,c,f=e&&e.ownerDocument,p=e?e.nodeType:9;if(n=n||[],"string"!=typeof t||!t||1!==p&&9!==p&&11!==p)return n;if(!r&&(T(e),e=e||C,E)){if(11!==p&&(u=Z.exec(t)))if(i=u[1]){if(9===p){if(!(a=e.getElementById(i)))return n;if(a.id===i)return n.push(a),n}else if(f&&(a=f.getElementById(i))&&y(e,a)&&a.id===i)return n.push(a),n}else{if(u[2])return H.apply(n,e.getElementsByTagName(t)),n;if((i=u[3])&&d.getElementsByClassName&&e.getElementsByClassName)return H.apply(n,e.getElementsByClassName(i)),n}if(d.qsa&&!N[t+" 
"]&&(!v||!v.test(t))&&(1!==p||"object"!==e.nodeName.toLowerCase())){if(c=t,f=e,1===p&&(U.test(t)||z.test(t))){(f=ee.test(t)&&ye(e.parentNode)||e)===e&&d.scope||((s=e.getAttribute("id"))?s=s.replace(re,ie):e.setAttribute("id",s=S)),o=(l=h(t)).length;while(o--)l[o]=(s?"#"+s:":scope")+" "+xe(l[o]);c=l.join(",")}try{return H.apply(n,f.querySelectorAll(c)),n}catch(e){N(t,!0)}finally{s===S&&e.removeAttribute("id")}}}return g(t.replace($,"$1"),e,n,r)}function ue(){var r=[];return function e(t,n){return r.push(t+" ")>b.cacheLength&&delete e[r.shift()],e[t+" "]=n}}function le(e){return e[S]=!0,e}function ce(e){var t=C.createElement("fieldset");try{return!!e(t)}catch(e){return!1}finally{t.parentNode&&t.parentNode.removeChild(t),t=null}}function fe(e,t){var n=e.split("|"),r=n.length;while(r--)b.attrHandle[n[r]]=t}function pe(e,t){var n=t&&e,r=n&&1===e.nodeType&&1===t.nodeType&&e.sourceIndex-t.sourceIndex;if(r)return r;if(n)while(n=n.nextSibling)if(n===t)return-1;return e?1:-1}function de(t){return function(e){return"input"===e.nodeName.toLowerCase()&&e.type===t}}function he(n){return function(e){var t=e.nodeName.toLowerCase();return("input"===t||"button"===t)&&e.type===n}}function ge(t){return function(e){return"form"in e?e.parentNode&&!1===e.disabled?"label"in e?"label"in e.parentNode?e.parentNode.disabled===t:e.disabled===t:e.isDisabled===t||e.isDisabled!==!t&&ae(e)===t:e.disabled===t:"label"in e&&e.disabled===t}}function ve(a){return le(function(o){return o=+o,le(function(e,t){var n,r=a([],e.length,o),i=r.length;while(i--)e[n=r[i]]&&(e[n]=!(t[n]=e[n]))})})}function ye(e){return e&&"undefined"!=typeof e.getElementsByTagName&&e}for(e in d=se.support={},i=se.isXML=function(e){var t=e.namespaceURI,n=(e.ownerDocument||e).documentElement;return!Y.test(t||n&&n.nodeName||"HTML")},T=se.setDocument=function(e){var t,n,r=e?e.ownerDocument||e:p;return r!=C&&9===r.nodeType&&r.documentElement&&(a=(C=r).documentElement,E=!i(C),p!=C&&(n=C.defaultView)&&n.top!==n&&(n.addEventListener?n.addEventListener("unload",oe,!1):n.attachEvent&&n.attachEvent("onunload",oe)),d.scope=ce(function(e){return a.appendChild(e).appendChild(C.createElement("div")),"undefined"!=typeof e.querySelectorAll&&!e.querySelectorAll(":scope fieldset div").length}),d.attributes=ce(function(e){return e.className="i",!e.getAttribute("className")}),d.getElementsByTagName=ce(function(e){return e.appendChild(C.createComment("")),!e.getElementsByTagName("*").length}),d.getElementsByClassName=K.test(C.getElementsByClassName),d.getById=ce(function(e){return a.appendChild(e).id=S,!C.getElementsByName||!C.getElementsByName(S).length}),d.getById?(b.filter.ID=function(e){var t=e.replace(te,ne);return function(e){return e.getAttribute("id")===t}},b.find.ID=function(e,t){if("undefined"!=typeof t.getElementById&&E){var n=t.getElementById(e);return n?[n]:[]}}):(b.filter.ID=function(e){var n=e.replace(te,ne);return function(e){var t="undefined"!=typeof e.getAttributeNode&&e.getAttributeNode("id");return t&&t.value===n}},b.find.ID=function(e,t){if("undefined"!=typeof t.getElementById&&E){var n,r,i,o=t.getElementById(e);if(o){if((n=o.getAttributeNode("id"))&&n.value===e)return[o];i=t.getElementsByName(e),r=0;while(o=i[r++])if((n=o.getAttributeNode("id"))&&n.value===e)return[o]}return[]}}),b.find.TAG=d.getElementsByTagName?function(e,t){return"undefined"!=typeof t.getElementsByTagName?t.getElementsByTagName(e):d.qsa?t.querySelectorAll(e):void 0}:function(e,t){var n,r=[],i=0,o=t.getElementsByTagName(e);if("*"===e){while(n=o[i++])1===n.nodeType&&r.push(n);return 
r}return o},b.find.CLASS=d.getElementsByClassName&&function(e,t){if("undefined"!=typeof t.getElementsByClassName&&E)return t.getElementsByClassName(e)},s=[],v=[],(d.qsa=K.test(C.querySelectorAll))&&(ce(function(e){var t;a.appendChild(e).innerHTML="",e.querySelectorAll("[msallowcapture^='']").length&&v.push("[*^$]="+M+"*(?:''|\"\")"),e.querySelectorAll("[selected]").length||v.push("\\["+M+"*(?:value|"+R+")"),e.querySelectorAll("[id~="+S+"-]").length||v.push("~="),(t=C.createElement("input")).setAttribute("name",""),e.appendChild(t),e.querySelectorAll("[name='']").length||v.push("\\["+M+"*name"+M+"*="+M+"*(?:''|\"\")"),e.querySelectorAll(":checked").length||v.push(":checked"),e.querySelectorAll("a#"+S+"+*").length||v.push(".#.+[+~]"),e.querySelectorAll("\\\f"),v.push("[\\r\\n\\f]")}),ce(function(e){e.innerHTML="";var t=C.createElement("input");t.setAttribute("type","hidden"),e.appendChild(t).setAttribute("name","D"),e.querySelectorAll("[name=d]").length&&v.push("name"+M+"*[*^$|!~]?="),2!==e.querySelectorAll(":enabled").length&&v.push(":enabled",":disabled"),a.appendChild(e).disabled=!0,2!==e.querySelectorAll(":disabled").length&&v.push(":enabled",":disabled"),e.querySelectorAll("*,:x"),v.push(",.*:")})),(d.matchesSelector=K.test(c=a.matches||a.webkitMatchesSelector||a.mozMatchesSelector||a.oMatchesSelector||a.msMatchesSelector))&&ce(function(e){d.disconnectedMatch=c.call(e,"*"),c.call(e,"[s!='']:x"),s.push("!=",F)}),v=v.length&&new RegExp(v.join("|")),s=s.length&&new RegExp(s.join("|")),t=K.test(a.compareDocumentPosition),y=t||K.test(a.contains)?function(e,t){var n=9===e.nodeType?e.documentElement:e,r=t&&t.parentNode;return e===r||!(!r||1!==r.nodeType||!(n.contains?n.contains(r):e.compareDocumentPosition&&16&e.compareDocumentPosition(r)))}:function(e,t){if(t)while(t=t.parentNode)if(t===e)return!0;return!1},D=t?function(e,t){if(e===t)return l=!0,0;var n=!e.compareDocumentPosition-!t.compareDocumentPosition;return n||(1&(n=(e.ownerDocument||e)==(t.ownerDocument||t)?e.compareDocumentPosition(t):1)||!d.sortDetached&&t.compareDocumentPosition(e)===n?e==C||e.ownerDocument==p&&y(p,e)?-1:t==C||t.ownerDocument==p&&y(p,t)?1:u?P(u,e)-P(u,t):0:4&n?-1:1)}:function(e,t){if(e===t)return l=!0,0;var n,r=0,i=e.parentNode,o=t.parentNode,a=[e],s=[t];if(!i||!o)return e==C?-1:t==C?1:i?-1:o?1:u?P(u,e)-P(u,t):0;if(i===o)return pe(e,t);n=e;while(n=n.parentNode)a.unshift(n);n=t;while(n=n.parentNode)s.unshift(n);while(a[r]===s[r])r++;return r?pe(a[r],s[r]):a[r]==p?-1:s[r]==p?1:0}),C},se.matches=function(e,t){return se(e,null,null,t)},se.matchesSelector=function(e,t){if(T(e),d.matchesSelector&&E&&!N[t+" "]&&(!s||!s.test(t))&&(!v||!v.test(t)))try{var n=c.call(e,t);if(n||d.disconnectedMatch||e.document&&11!==e.document.nodeType)return n}catch(e){N(t,!0)}return 0":{dir:"parentNode",first:!0}," ":{dir:"parentNode"},"+":{dir:"previousSibling",first:!0},"~":{dir:"previousSibling"}},preFilter:{ATTR:function(e){return e[1]=e[1].replace(te,ne),e[3]=(e[3]||e[4]||e[5]||"").replace(te,ne),"~="===e[2]&&(e[3]=" "+e[3]+" "),e.slice(0,4)},CHILD:function(e){return e[1]=e[1].toLowerCase(),"nth"===e[1].slice(0,3)?(e[3]||se.error(e[0]),e[4]=+(e[4]?e[5]+(e[6]||1):2*("even"===e[3]||"odd"===e[3])),e[5]=+(e[7]+e[8]||"odd"===e[3])):e[3]&&se.error(e[0]),e},PSEUDO:function(e){var t,n=!e[6]&&e[2];return G.CHILD.test(e[0])?null:(e[3]?e[2]=e[4]||e[5]||"":n&&X.test(n)&&(t=h(n,!0))&&(t=n.indexOf(")",n.length-t)-n.length)&&(e[0]=e[0].slice(0,t),e[2]=n.slice(0,t)),e.slice(0,3))}},filter:{TAG:function(e){var 
t=e.replace(te,ne).toLowerCase();return"*"===e?function(){return!0}:function(e){return e.nodeName&&e.nodeName.toLowerCase()===t}},CLASS:function(e){var t=m[e+" "];return t||(t=new RegExp("(^|"+M+")"+e+"("+M+"|$)"))&&m(e,function(e){return t.test("string"==typeof e.className&&e.className||"undefined"!=typeof e.getAttribute&&e.getAttribute("class")||"")})},ATTR:function(n,r,i){return function(e){var t=se.attr(e,n);return null==t?"!="===r:!r||(t+="","="===r?t===i:"!="===r?t!==i:"^="===r?i&&0===t.indexOf(i):"*="===r?i&&-1:\x20\t\r\n\f]*)[\x20\t\r\n\f]*\/?>(?:<\/\1>|)$/i;function D(e,n,r){return m(n)?S.grep(e,function(e,t){return!!n.call(e,t,e)!==r}):n.nodeType?S.grep(e,function(e){return e===n!==r}):"string"!=typeof n?S.grep(e,function(e){return-1)[^>]*|#([\w-]+))$/;(S.fn.init=function(e,t,n){var r,i;if(!e)return this;if(n=n||j,"string"==typeof e){if(!(r="<"===e[0]&&">"===e[e.length-1]&&3<=e.length?[null,e,null]:q.exec(e))||!r[1]&&t)return!t||t.jquery?(t||n).find(e):this.constructor(t).find(e);if(r[1]){if(t=t instanceof S?t[0]:t,S.merge(this,S.parseHTML(r[1],t&&t.nodeType?t.ownerDocument||t:E,!0)),N.test(r[1])&&S.isPlainObject(t))for(r in t)m(this[r])?this[r](t[r]):this.attr(r,t[r]);return this}return(i=E.getElementById(r[2]))&&(this[0]=i,this.length=1),this}return e.nodeType?(this[0]=e,this.length=1,this):m(e)?void 0!==n.ready?n.ready(e):e(S):S.makeArray(e,this)}).prototype=S.fn,j=S(E);var L=/^(?:parents|prev(?:Until|All))/,H={children:!0,contents:!0,next:!0,prev:!0};function O(e,t){while((e=e[t])&&1!==e.nodeType);return e}S.fn.extend({has:function(e){var t=S(e,this),n=t.length;return this.filter(function(){for(var e=0;e\x20\t\r\n\f]*)/i,he=/^$|^module$|\/(?:java|ecma)script/i;ce=E.createDocumentFragment().appendChild(E.createElement("div")),(fe=E.createElement("input")).setAttribute("type","radio"),fe.setAttribute("checked","checked"),fe.setAttribute("name","t"),ce.appendChild(fe),y.checkClone=ce.cloneNode(!0).cloneNode(!0).lastChild.checked,ce.innerHTML="",y.noCloneChecked=!!ce.cloneNode(!0).lastChild.defaultValue,ce.innerHTML="",y.option=!!ce.lastChild;var ge={thead:[1,"","
        "],col:[2,"","
        "],tr:[2,"","
        "],td:[3,"","
        "],_default:[0,"",""]};function ve(e,t){var n;return n="undefined"!=typeof e.getElementsByTagName?e.getElementsByTagName(t||"*"):"undefined"!=typeof e.querySelectorAll?e.querySelectorAll(t||"*"):[],void 0===t||t&&A(e,t)?S.merge([e],n):n}function ye(e,t){for(var n=0,r=e.length;n",""]);var me=/<|&#?\w+;/;function xe(e,t,n,r,i){for(var o,a,s,u,l,c,f=t.createDocumentFragment(),p=[],d=0,h=e.length;d\s*$/g;function qe(e,t){return A(e,"table")&&A(11!==t.nodeType?t:t.firstChild,"tr")&&S(e).children("tbody")[0]||e}function Le(e){return e.type=(null!==e.getAttribute("type"))+"/"+e.type,e}function He(e){return"true/"===(e.type||"").slice(0,5)?e.type=e.type.slice(5):e.removeAttribute("type"),e}function Oe(e,t){var n,r,i,o,a,s;if(1===t.nodeType){if(Y.hasData(e)&&(s=Y.get(e).events))for(i in Y.remove(t,"handle events"),s)for(n=0,r=s[i].length;n").attr(n.scriptAttrs||{}).prop({charset:n.scriptCharset,src:n.url}).on("load error",i=function(e){r.remove(),i=null,e&&t("error"===e.type?404:200,e.type)}),E.head.appendChild(r[0])},abort:function(){i&&i()}}});var Ut,Xt=[],Vt=/(=)\?(?=&|$)|\?\?/;S.ajaxSetup({jsonp:"callback",jsonpCallback:function(){var e=Xt.pop()||S.expando+"_"+Ct.guid++;return this[e]=!0,e}}),S.ajaxPrefilter("json jsonp",function(e,t,n){var r,i,o,a=!1!==e.jsonp&&(Vt.test(e.url)?"url":"string"==typeof e.data&&0===(e.contentType||"").indexOf("application/x-www-form-urlencoded")&&Vt.test(e.data)&&"data");if(a||"jsonp"===e.dataTypes[0])return r=e.jsonpCallback=m(e.jsonpCallback)?e.jsonpCallback():e.jsonpCallback,a?e[a]=e[a].replace(Vt,"$1"+r):!1!==e.jsonp&&(e.url+=(Et.test(e.url)?"&":"?")+e.jsonp+"="+r),e.converters["script json"]=function(){return o||S.error(r+" was not called"),o[0]},e.dataTypes[0]="json",i=C[r],C[r]=function(){o=arguments},n.always(function(){void 0===i?S(C).removeProp(r):C[r]=i,e[r]&&(e.jsonpCallback=t.jsonpCallback,Xt.push(r)),o&&m(i)&&i(o[0]),o=i=void 0}),"script"}),y.createHTMLDocument=((Ut=E.implementation.createHTMLDocument("").body).innerHTML="
        ",2===Ut.childNodes.length),S.parseHTML=function(e,t,n){return"string"!=typeof e?[]:("boolean"==typeof t&&(n=t,t=!1),t||(y.createHTMLDocument?((r=(t=E.implementation.createHTMLDocument("")).createElement("base")).href=E.location.href,t.head.appendChild(r)):t=E),o=!n&&[],(i=N.exec(e))?[t.createElement(i[1])]:(i=xe([e],t,o),o&&o.length&&S(o).remove(),S.merge([],i.childNodes)));var r,i,o},S.fn.load=function(e,t,n){var r,i,o,a=this,s=e.indexOf(" ");return-1").append(S.parseHTML(e)).find(r):e)}).always(n&&function(e,t){a.each(function(){n.apply(this,o||[e.responseText,t,e])})}),this},S.expr.pseudos.animated=function(t){return S.grep(S.timers,function(e){return t===e.elem}).length},S.offset={setOffset:function(e,t,n){var r,i,o,a,s,u,l=S.css(e,"position"),c=S(e),f={};"static"===l&&(e.style.position="relative"),s=c.offset(),o=S.css(e,"top"),u=S.css(e,"left"),("absolute"===l||"fixed"===l)&&-1<(o+u).indexOf("auto")?(a=(r=c.position()).top,i=r.left):(a=parseFloat(o)||0,i=parseFloat(u)||0),m(t)&&(t=t.call(e,n,S.extend({},s))),null!=t.top&&(f.top=t.top-s.top+a),null!=t.left&&(f.left=t.left-s.left+i),"using"in t?t.using.call(e,f):("number"==typeof f.top&&(f.top+="px"),"number"==typeof f.left&&(f.left+="px"),c.css(f))}},S.fn.extend({offset:function(t){if(arguments.length)return void 0===t?this:this.each(function(e){S.offset.setOffset(this,t,e)});var e,n,r=this[0];return r?r.getClientRects().length?(e=r.getBoundingClientRect(),n=r.ownerDocument.defaultView,{top:e.top+n.pageYOffset,left:e.left+n.pageXOffset}):{top:0,left:0}:void 0},position:function(){if(this[0]){var e,t,n,r=this[0],i={top:0,left:0};if("fixed"===S.css(r,"position"))t=r.getBoundingClientRect();else{t=this.offset(),n=r.ownerDocument,e=r.offsetParent||n.documentElement;while(e&&(e===n.body||e===n.documentElement)&&"static"===S.css(e,"position"))e=e.parentNode;e&&e!==r&&1===e.nodeType&&((i=S(e).offset()).top+=S.css(e,"borderTopWidth",!0),i.left+=S.css(e,"borderLeftWidth",!0))}return{top:t.top-i.top-S.css(r,"marginTop",!0),left:t.left-i.left-S.css(r,"marginLeft",!0)}}},offsetParent:function(){return this.map(function(){var e=this.offsetParent;while(e&&"static"===S.css(e,"position"))e=e.offsetParent;return e||re})}}),S.each({scrollLeft:"pageXOffset",scrollTop:"pageYOffset"},function(t,i){var o="pageYOffset"===i;S.fn[t]=function(e){return $(this,function(e,t,n){var r;if(x(e)?r=e:9===e.nodeType&&(r=e.defaultView),void 0===n)return r?r[i]:e[t];r?r.scrollTo(o?r.pageXOffset:n,o?n:r.pageYOffset):e[t]=n},t,e,arguments.length)}}),S.each(["top","left"],function(e,n){S.cssHooks[n]=$e(y.pixelPosition,function(e,t){if(t)return t=Be(e,n),Me.test(t)?S(e).position()[n]+"px":t})}),S.each({Height:"height",Width:"width"},function(a,s){S.each({padding:"inner"+a,content:s,"":"outer"+a},function(r,o){S.fn[o]=function(e,t){var n=arguments.length&&(r||"boolean"!=typeof e),i=r||(!0===e||!0===t?"margin":"border");return $(this,function(e,t,n){var r;return x(e)?0===o.indexOf("outer")?e["inner"+a]:e.document.documentElement["client"+a]:9===e.nodeType?(r=e.documentElement,Math.max(e.body["scroll"+a],r["scroll"+a],e.body["offset"+a],r["offset"+a],r["client"+a])):void 0===n?S.css(e,t,i):S.style(e,t,n,i)},s,n?e:void 0,n)}})}),S.each(["ajaxStart","ajaxStop","ajaxComplete","ajaxError","ajaxSuccess","ajaxSend"],function(e,t){S.fn[t]=function(e){return this.on(t,e)}}),S.fn.extend({bind:function(e,t,n){return this.on(e,null,t,n)},unbind:function(e,t){return this.off(e,null,t)},delegate:function(e,t,n,r){return this.on(t,e,n,r)},undelegate:function(e,t,n){return 
1===arguments.length?this.off(e,"**"):this.off(t,e||"**",n)},hover:function(e,t){return this.mouseenter(e).mouseleave(t||e)}}),S.each("blur focus focusin focusout resize scroll click dblclick mousedown mouseup mousemove mouseover mouseout mouseenter mouseleave change select submit keydown keypress keyup contextmenu".split(" "),function(e,n){S.fn[n]=function(e,t){return 0+~]|"+M+")"+M+"*"),U=new RegExp(M+"|>"),X=new RegExp(F),V=new RegExp("^"+I+"$"),G={ID:new RegExp("^#("+I+")"),CLASS:new RegExp("^\\.("+I+")"),TAG:new RegExp("^("+I+"|[*])"),ATTR:new RegExp("^"+W),PSEUDO:new RegExp("^"+F),CHILD:new RegExp("^:(only|first|last|nth|nth-last)-(child|of-type)(?:\\("+M+"*(even|odd|(([+-]|)(\\d*)n|)"+M+"*(?:([+-]|)"+M+"*(\\d+)|))"+M+"*\\)|)","i"),bool:new RegExp("^(?:"+R+")$","i"),needsContext:new RegExp("^"+M+"*[>+~]|:(even|odd|eq|gt|lt|nth|first|last)(?:\\("+M+"*((?:-\\d)?\\d*)"+M+"*\\)|)(?=[^-]|$)","i")},Y=/HTML$/i,Q=/^(?:input|select|textarea|button)$/i,J=/^h\d$/i,K=/^[^{]+\{\s*\[native \w/,Z=/^(?:#([\w-]+)|(\w+)|\.([\w-]+))$/,ee=/[+~]/,te=new RegExp("\\\\[\\da-fA-F]{1,6}"+M+"?|\\\\([^\\r\\n\\f])","g"),ne=function(e,t){var n="0x"+e.slice(1)-65536;return t||(n<0?String.fromCharCode(n+65536):String.fromCharCode(n>>10|55296,1023&n|56320))},re=/([\0-\x1f\x7f]|^-?\d)|^-$|[^\0-\x1f\x7f-\uFFFF\w-]/g,ie=function(e,t){return t?"\0"===e?"\ufffd":e.slice(0,-1)+"\\"+e.charCodeAt(e.length-1).toString(16)+" ":"\\"+e},oe=function(){T()},ae=be(function(e){return!0===e.disabled&&"fieldset"===e.nodeName.toLowerCase()},{dir:"parentNode",next:"legend"});try{H.apply(t=O.call(p.childNodes),p.childNodes),t[p.childNodes.length].nodeType}catch(e){H={apply:t.length?function(e,t){L.apply(e,O.call(t))}:function(e,t){var n=e.length,r=0;while(e[n++]=t[r++]);e.length=n-1}}}function se(t,e,n,r){var i,o,a,s,u,l,c,f=e&&e.ownerDocument,p=e?e.nodeType:9;if(n=n||[],"string"!=typeof t||!t||1!==p&&9!==p&&11!==p)return n;if(!r&&(T(e),e=e||C,E)){if(11!==p&&(u=Z.exec(t)))if(i=u[1]){if(9===p){if(!(a=e.getElementById(i)))return n;if(a.id===i)return n.push(a),n}else if(f&&(a=f.getElementById(i))&&y(e,a)&&a.id===i)return n.push(a),n}else{if(u[2])return H.apply(n,e.getElementsByTagName(t)),n;if((i=u[3])&&d.getElementsByClassName&&e.getElementsByClassName)return H.apply(n,e.getElementsByClassName(i)),n}if(d.qsa&&!N[t+" "]&&(!v||!v.test(t))&&(1!==p||"object"!==e.nodeName.toLowerCase())){if(c=t,f=e,1===p&&(U.test(t)||z.test(t))){(f=ee.test(t)&&ye(e.parentNode)||e)===e&&d.scope||((s=e.getAttribute("id"))?s=s.replace(re,ie):e.setAttribute("id",s=S)),o=(l=h(t)).length;while(o--)l[o]=(s?"#"+s:":scope")+" "+xe(l[o]);c=l.join(",")}try{return H.apply(n,f.querySelectorAll(c)),n}catch(e){N(t,!0)}finally{s===S&&e.removeAttribute("id")}}}return g(t.replace($,"$1"),e,n,r)}function ue(){var r=[];return function e(t,n){return r.push(t+" ")>b.cacheLength&&delete e[r.shift()],e[t+" "]=n}}function le(e){return e[S]=!0,e}function ce(e){var t=C.createElement("fieldset");try{return!!e(t)}catch(e){return!1}finally{t.parentNode&&t.parentNode.removeChild(t),t=null}}function fe(e,t){var n=e.split("|"),r=n.length;while(r--)b.attrHandle[n[r]]=t}function pe(e,t){var n=t&&e,r=n&&1===e.nodeType&&1===t.nodeType&&e.sourceIndex-t.sourceIndex;if(r)return r;if(n)while(n=n.nextSibling)if(n===t)return-1;return e?1:-1}function de(t){return function(e){return"input"===e.nodeName.toLowerCase()&&e.type===t}}function he(n){return function(e){var t=e.nodeName.toLowerCase();return("input"===t||"button"===t)&&e.type===n}}function ge(t){return function(e){return"form"in 
[diff of the vendored, minified jQuery/Sizzle bundle omitted: machine-generated code (selector engine, core, events, ajax/jsonp, offset and dimension helpers) whose string literals were mangled by HTML extraction; nothing in it is hand-authored in this change]
[end of the minified jQuery diff, the searchtools.js diff header, and the opening of its first hunk were lost to extraction; the surviving lines of the commented-out `Scorer.score` example follow]
+      const [docname, title, anchor, descr, score, filename] = result
+      return score
     },
     */

@@ -28,9 +30,11 @@ if (!Scorer) {
     // or matches in the last dotted part of the object name
     objPartialMatch: 6,
     // Additive scores depending on the priority of the object
-    objPrio: {0: 15, // used to be importantResults
-              1: 5, // used to be objectResults
-              2: -5}, // used to be unimportantResults
+    objPrio: {
+      0: 15, // used to be importantResults
+      1: 5, // used to be objectResults
+      2: -5, // used to be unimportantResults
+    },
     // Used when the priority is not in the mapping.
     objPrioDefault: 0,

@@ -39,452 +43,495 @@ if (!Scorer) {
     partialTitle: 7,
     // query found in terms
     term: 5,
-    partialTerm: 2
+    partialTerm: 2,
   };
 }

-if (!splitQuery) {
-  function splitQuery(query) {
-    return query.split(/\s+/);
+const _removeChildren = (element) => {
+  while (element && element.lastChild) element.removeChild(element.lastChild);
+};
+
+/**
+ * See https://developer.mozilla.org/en-US/docs/Web/JavaScript/Guide/Regular_Expressions#escaping
+ */
+const _escapeRegExp = (string) =>
+  string.replace(/[.*+\-?^${}()|[\]\\]/g, "\\$&"); // $& means the whole matched string
+
+const _displayItem = (item, searchTerms) => {
+  const docBuilder = DOCUMENTATION_OPTIONS.BUILDER;
+  const docUrlRoot = DOCUMENTATION_OPTIONS.URL_ROOT;
+  const docFileSuffix = DOCUMENTATION_OPTIONS.FILE_SUFFIX;
+  const docLinkSuffix = DOCUMENTATION_OPTIONS.LINK_SUFFIX;
+  const showSearchSummary = DOCUMENTATION_OPTIONS.SHOW_SEARCH_SUMMARY;
+
+  const [docName, title, anchor, descr, score, _filename] = item;
+
+  let listItem = document.createElement("li");
+  let requestUrl;
+  let linkUrl;
+  if (docBuilder === "dirhtml") {
+    // dirhtml builder
+    let dirname = docName + "/";
+    if (dirname.match(/\/index\/$/))
+      dirname = dirname.substring(0, dirname.length - 6);
+    else if (dirname === "index/") dirname = "";
+    requestUrl = docUrlRoot + dirname;
+    linkUrl = requestUrl;
+  } else {
+    // normal html builders
+    requestUrl = docUrlRoot + docName + docFileSuffix;
+    linkUrl = docName + docLinkSuffix;
+  }
+  let linkEl = listItem.appendChild(document.createElement("a"));
+  linkEl.href = linkUrl + anchor;
+  linkEl.dataset.score = score;
+  linkEl.innerHTML = title;
+  if (descr)
+    listItem.appendChild(document.createElement("span")).innerHTML =
+      " (" + descr + ")";
+  else if (showSearchSummary)
+    fetch(requestUrl)
+      .then((responseData) => responseData.text())
+      .then((data) => {
+        if (data)
+          listItem.appendChild(
+            Search.makeSearchSummary(data, searchTerms)
+          );
+      });
+  Search.output.appendChild(listItem);
+};
+const _finishSearch = (resultCount) => {
+  Search.stopPulse();
+  Search.title.innerText = _("Search Results");
+  if (!resultCount)
+    Search.status.innerText = Documentation.gettext(
+      "Your search did not match any documents. Please make sure that all words are spelled correctly and that you've selected enough categories."
+    );
+  else
+    Search.status.innerText = _(
+      `Search finished, found ${resultCount} page(s) matching the search query.`
+    );
+};
+const _displayNextItem = (
+  results,
+  resultCount,
+  searchTerms
+) => {
+  // results left, load the summary and display it
+  // this is intended to be dynamic (don't sub resultsCount)
+  if (results.length) {
+    _displayItem(results.pop(), searchTerms);
+    setTimeout(
+      () => _displayNextItem(results, resultCount, searchTerms),
+      5
+    );
+  }
+  // search finished, update title and status message
+  else _finishSearch(resultCount);
+};
+
+/**
+ * Default splitQuery function. Can be overridden in ``sphinx.search`` with a
+ * custom function per language.
+ *
+ * The regular expression works by splitting the string on consecutive characters
+ * that are not Unicode letters, numbers, underscores, or emoji characters.
+ * This is the same as ``\W+`` in Python, preserving the surrogate pair area.
+ */
+if (typeof splitQuery === "undefined") {
+  var splitQuery = (query) => query
+    .split(/[^\p{Letter}\p{Number}_\p{Emoji_Presentation}]+/gu)
+    .filter(term => term) // remove remaining empty strings
 }

 /**
  * Search Module
  */
-var Search = {
-
-  _index : null,
-  _queued_query : null,
-  _pulse_status : -1,
-
-  htmlToText : function(htmlString) {
-    var virtualDocument = document.implementation.createHTMLDocument('virtual');
-    var htmlElement = $(htmlString, virtualDocument);
-    htmlElement.find('.headerlink').remove();
-    docContent = htmlElement.find('[role=main]')[0];
-    if(docContent === undefined) {
-      console.warn("Content block not found. Sphinx search tries to obtain it " +
-        "via '[role=main]'. Could you check your theme or template.");
-      return "";
-    }
-    return docContent.textContent || docContent.innerText;
+const Search = {
+  _index: null,
+  _queued_query: null,
+  _pulse_status: -1,
+
+  htmlToText: (htmlString) => {
+    const htmlElement = new DOMParser().parseFromString(htmlString, 'text/html');
+    htmlElement.querySelectorAll(".headerlink").forEach((el) => { el.remove() });
+    const docContent = htmlElement.querySelector('[role="main"]');
+    if (docContent !== undefined) return docContent.textContent;
+    console.warn(
+      "Content block not found. Sphinx search tries to obtain it via '[role=main]'. Could you check your theme or template."
+    );
+    return "";
   },

-  init : function() {
-    var params = $.getQueryParameters();
-    if (params.q) {
-      var query = params.q[0];
-      $('input[name="q"]')[0].value = query;
-      this.performSearch(query);
-    }
+  init: () => {
+    const query = new URLSearchParams(window.location.search).get("q");
+    document
+      .querySelectorAll('input[name="q"]')
+      .forEach((el) => (el.value = query));
+    if (query) Search.performSearch(query);
   },

-  loadIndex : function(url) {
-    $.ajax({type: "GET", url: url, data: null,
-            dataType: "script", cache: true,
-            complete: function(jqxhr, textstatus) {
-              if (textstatus != "success") {
-                document.getElementById("searchindexloader").src = url;
-              }
-            }});
-  },
+  loadIndex: (url) =>
+    (document.body.appendChild(document.createElement("script")).src = url),

-  setIndex : function(index) {
-    var q;
-    this._index = index;
-    if ((q = this._queued_query) !== null) {
-      this._queued_query = null;
-      Search.query(q);
+  setIndex: (index) => {
+    Search._index = index;
+    if (Search._queued_query !== null) {
+      const query = Search._queued_query;
+      Search._queued_query = null;
+      Search.query(query);
     }
   },

-  hasIndex : function() {
-    return this._index !== null;
-  },
+  hasIndex: () => Search._index !== null,

-  deferQuery : function(query) {
-    this._queued_query = query;
-  },
+  deferQuery: (query) => (Search._queued_query = query),

-  stopPulse : function() {
-    this._pulse_status = 0;
-  },
+  stopPulse: () => (Search._pulse_status = -1),

-  startPulse : function() {
-    if (this._pulse_status >= 0)
-      return;
-    function pulse() {
-      var i;
+  startPulse: () => {
+    if (Search._pulse_status >= 0) return;
+
+    const pulse = () => {
       Search._pulse_status = (Search._pulse_status + 1) % 4;
-      var dotString = '';
-      for (i = 0; i < Search._pulse_status; i++)
-        dotString += '.';
-      Search.dots.text(dotString);
-      if (Search._pulse_status > -1)
-        window.setTimeout(pulse, 500);
-    }
+      Search.dots.innerText = ".".repeat(Search._pulse_status);
+      if (Search._pulse_status >= 0) window.setTimeout(pulse, 500);
+    };
     pulse();
   },

   /**
    * perform a search for something (or wait until index is loaded)
    */
-  performSearch : function(query) {
+  performSearch: (query) => {
     // create the required interface elements
-    this.out = $('#search-results');
-    this.title = $('<h2>' + _('Searching') + '</h2>').appendTo(this.out);
-    this.dots = $('<span></span>').appendTo(this.title);
-    this.status = $('<p class="search-summary">&nbsp;</p>').appendTo(this.out);
-    this.output = $('
[section truncated mid-line in the source diff]
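For reference, the new `splitQuery` introduced in the hunk above replaces the old whitespace-only split with a Unicode-aware tokenizer. A minimal sketch of the behavioral difference, runnable in any modern browser console or Node; the sample queries are illustrative and not taken from the Sphinx sources:

    // Old behaviour: tokens are separated by whitespace only,
    // so punctuation stays glued to the surrounding words and
    // leading/trailing whitespace produces empty tokens.
    const splitQueryOld = (query) => query.split(/\s+/);

    // New behaviour: any run of characters that is not a Unicode
    // letter, number, underscore, or emoji separates tokens.
    const splitQueryNew = (query) =>
      query
        .split(/[^\p{Letter}\p{Number}_\p{Emoji_Presentation}]+/gu)
        .filter((term) => term); // drop empty strings left at the edges

    console.log(splitQueryOld("onnxruntime.InferenceSession run"));
    // ["onnxruntime.InferenceSession", "run"] - the dotted name is one token
    console.log(splitQueryNew("onnxruntime.InferenceSession run"));
    // ["onnxruntime", "InferenceSession", "run"] - the dot now splits

    console.log(splitQueryOld("  C++ lambdas!  "));
    // ["", "C++", "lambdas!", ""] - empty tokens and trailing punctuation
    console.log(splitQueryNew("  C++ lambdas!  "));
    // ["C", "lambdas"] - punctuation stripped, empties filtered out

Because the regular expression uses the `u` flag and Unicode property escapes, characters outside the Basic Multilingual Plane (the "surrogate pair area" mentioned in the comment) are treated as whole characters rather than being split in half, which the old `\s+` split did not guarantee for matching purposes.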