Inclusive terminology update #1750

Merged (10 commits) on Sep 30, 2022
42 changes: 21 additions & 21 deletions docs/ImportONNXDefs.md
# Overview <a name="overview"></a>
ONNX-MLIR defines an ONNX dialect to represent the operations specified by ONNX. The ONNX dialect is created with the MLIR TableGen tool. The definition of each operation is transferred from ONNX automatically with a Python script,
[utils/gen_onnx_mlir.py](../utils/gen_onnx_mlir.py).
This script retrieves the operation definitions from the
ONNX package to generate ONNXOps.td.inc for the dialect TableGen and OpBuilderTable.inc for the
ONNX model importer in ONNX-MLIR.
The following sections describe how to use gen_onnx_mlir.py to add an operation.

# Add an Operation <a name="add_operation"></a>
To generate an operation for the ONNX dialect, add the operation to the dictionary
`version_dict` in gen_onnx_mlir.py.
The key of this dictionary is the operation name and the value is the list of
opsets supported for this operation. Usually only the top opset version of the operation (in onnx-mlir/third_party/onnx) is supported. Details about versioning can be found in the [version section](#version).
With this entry, the script will generate the operation definition for the ONNX dialect.
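As a concrete sketch, an entry looks like this (the `ReduceSum` entry is quoted later in this document; the `LeakyRelu` entry is an illustrative assumption):

```python
# Sketch of the version dictionary in utils/gen_onnx_mlir.py:
# op name -> list of supported opsets, newest first.
version_dict = {
    'LeakyRelu': [16],      # illustrative: only the top opset listed
    'ReduceSum': [13, 11],  # two supported opsets
}
```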

* By default, all operations have the shape inference interface and the `NoSideEffect` trait.
* If an operation has `ResultTypeInferenceOpInterface`, add it to the dictionary `OpsWithResultTypeInference`.
This interface infers the type of the result tensor, not its shape.
* If an operation has a subgraph, it will have the interface `HasOnnxSubgraphOpInterface`.
This attribute is inferred from the ONNX operation definition.
* You can define helper functions for an operation with the dictionary `OpsWithHelpers`.
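As a sketch, these customization points are plain Python containers in gen_onnx_mlir.py; only the two names come from the text above, and the member entries shown are illustrative assumptions:

```python
# Ops whose generated definition declares ResultTypeInferenceOpInterface
# (members are illustrative).
OpsWithResultTypeInference = ["Constant", "Cast"]

# Map from op name to extra C++ helper declarations emitted into the op's
# generated definition (both the key and the helper body are assumptions).
OpsWithHelpers = {
    "Loop": """
    mlir::Operation::result_range v_final();
    """,
}
```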

## Add canonicalization interface
If a transformation should be applied locally to an operation across passes, the canonicalization
interface can be used for this transformation. To enable canonicalization for an operation,
add the name of the operation to the list `OpsWithCanonicalizer`; the operation
will then have `hasCanonicalizer = 1;` in its definition.
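A minimal sketch (only the list name `OpsWithCanonicalizer` comes from the text; the member ops are illustrative assumptions):

```python
# Ops listed here get `hasCanonicalizer = 1;` in their generated TableGen
# definition (members shown are illustrative).
OpsWithCanonicalizer = ['Add', 'Identity', 'Cast', 'Transpose']
```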

## Customize builder
The default builders for an operation require the type of the results as a parameter. However, the type
of the results can often be inferred, so a customized builder may be useful to simplify the code. Based on the
type inference used, there are two kinds of builders: unranked type and broadcast type. To enable the
special builder for an operation, add its name to `custom_builder_unranked_ops_list`
or `custom_builder_broadcast_ops_list`, respectively.
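A sketch of the two lists (only the list names come from the text above; the member ops are illustrative assumptions):

```python
# Ops receiving the unranked-type special builder (illustrative members).
custom_builder_unranked_ops_list = ['Abs', 'Exp', 'ReduceLogSum']

# Ops receiving the broadcast-type special builder (illustrative members).
custom_builder_broadcast_ops_list = ['Add', 'And', 'Div']
```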

## Customize verifier
The operation description for an operation lists the allowed types of each input/output and
attribute. TableGen will generate a default verifier to check the IR for the allowed types.
If an operation has extra constraints, a customized verifier should be defined to enhance error detection.
For example, two inputs of an operation may be required to have the same element type or the same rank.
Such information can be found in the ONNX operation definition, but cannot be expressed with the dialect definition.
The best place to test for these constraints is a verifier. To add the customized-verifier interface to an operation, locate the array below in `gen_onnx_mlir.py` and add your operation to it.
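The array is a plain Python list in the script; the name `OpsWithVerifier` and the members shown are assumptions to verify against gen_onnx_mlir.py:

```python
# Assumed name and illustrative members; ops listed here get a declared
# customized verifier in their generated definition.
OpsWithVerifier = ['AveragePool', 'Conv', 'InstanceNormalization']
```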
# Build
In your build directory, execute the following command.
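A minimal sketch of that command, assuming the `OMONNXOpsIncTranslation` make target (an assumption to verify against your ONNX-MLIR build tree):

```shell
# Assumed target name: regenerates ONNXOps.td.inc and OpBuilderTable.inc
# and copies them into the src directory.
make OMONNXOpsIncTranslation
```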
This command will generate those two files (src/Dialect/ONNX/ONNXOps.td.inc and
OpBuilderTable.inc) and copy them to the right place in the src directory.
If you modified gen_onnx_mlir.py, you need to check in the two generated files too. They are treated
as source files in the ONNX-MLIR build so that users of ONNX-MLIR do not need to install a particular
version of ONNX. Do not modify these files directly.
You can also run the script directly with the files generated in the utils directory: `python ../utils/gen_onnx_mlir.py`.

## Update the documentation

When adding a new op version or making changes to the ONNX version, we would like to also reflect these changes in the ONNX documentation of our supported operations. While the latest [ONNX specs](https://github.com/onnx/onnx/blob/main/docs/Operators.md) are always available, the specs that we support often lag behind, and we additionally support older versions under the versioned names mentioned in the previous section.
There is a convenient command to update the documentation for both the ONNX and Krnl dialects.
The same command should be used when adding operations/making changes to the Krnl dialect.
# Operation Version <a name="version"></a>
The ONNX-MLIR project started when ONNX was at version 1.7.0 and is not intended to be backward compatible. We rely on the ONNX version converter to convert a model to the version that ONNX-MLIR supports. As ONNX evolves, ONNX-MLIR tries to follow but may be behind the latest version.

## Version of Operations
As stated previously, we try to support the latest version of ONNX operations. The version of each operation currently supported is recorded in [utils/gen_onnx_mlir.py](../utils/gen_onnx_mlir.py). This mechanism provides some stability across versions. To check for changes in version, run gen_onnx_mlir.py with the flag `--check_operation_version`; the changes will be reported. To move to a newer version, manually update the version dictionary in the script.

## Support Multiple versions
To support multiple versions of an op, the selected versions should be added to the version dictionary in [utils/gen_onnx_mlir.py](../utils/gen_onnx_mlir.py). For example, two versions (opsets), 11 and 13, are supported for ReduceSum. The corresponding entry in version_dict is `'ReduceSum': [13, 11]`.

In the ONNX dialect, the op for the top version has no version in its name, while the other versions have names followed by 'V' and the version number. For example, ReduceSum of opset 13 is `ONNXReduceSumOp`, while ReduceSum of opset 11 is `ONNXReduceSumV11Op`. Since most ONNX ops are compatible when upgraded to a higher version, we can keep the name of the operation in the dialect and just update version_dict in gen_onnx_mlir.py without touching the code in ONNX-MLIR.

When a model is imported, the highest supported version that is not higher than the model's opset is used. For the example of ReduceSum, if the model opset is 12, `ONNXReduceSumV11Op` is chosen.
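The selection rule above can be sketched as follows (the real logic lives in the ONNX-MLIR importer, not in this hypothetical helper):

```python
def select_op_version(supported_opsets, model_opset):
    """Pick the highest supported opset that does not exceed the model's opset."""
    candidates = [v for v in supported_opsets if v <= model_opset]
    return max(candidates) if candidates else None

# ReduceSum supports opsets [13, 11]; a model at opset 12 maps to version 11,
# i.e. ONNXReduceSumV11Op.
print(select_op_version([13, 11], 12))  # prints 11
```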

## Migrating
To migrate to a new ONNX version, first upgrade third_party/onnx and your installation of ONNX.
Then run gen_onnx_mlir.py with the flag `--check_operation_version`. The top version for all
operations will be output as a new `version_dict`.
If the interface of an operation remains the same (per the ONNX change document), you can
just use the new version.
If the interface does change, insert the new version as the first entry in the version list.
All the corresponding existing code then has to be changed. For example, when ReduceSum
moved from version 11 to 13, `ONNXReduceSumOp` was first replaced with `ONNXReduceSumV11Op`;
the code for version 13 then uses `ONNXReduceSumOp`.
The reason for this design is that most ONNX changes do not change the interface. We do not
want to burden developers with remembering which version of an operation is used unless absolutely
necessary.
It is not always necessary to keep the code for an older version, which may be rewritten into the new
operation. In that case, we just need the dialect definition, but not the code for inference or
lowering.
2 changes: 1 addition & 1 deletion third_party/benchmark
Submodule benchmark updated 55 files
2 changes: 1 addition & 1 deletion third_party/pybind11
Submodule pybind11 updated 185 files
2 changes: 1 addition & 1 deletion third_party/rapidcheck
Submodule rapidcheck updated 38 files