Commit

project name change from "Intel(R) Low Precision Optimization Tool" to "Intel(R) Neural Compressor"
ftian1 committed Oct 1, 2021
1 parent 5540ad5 commit 306e97d
Showing 945 changed files with 2,819 additions and 2,807 deletions.
95 changes: 50 additions & 45 deletions README.md

Large diffs are not rendered by default.

12 changes: 6 additions & 6 deletions api-documentation/apis.rst
@@ -1,20 +1,20 @@
APIs
####

-.. automodule:: lpot.benchmark
+.. automodule:: neural_compressor.benchmark
   :members:

-.. autoclass:: lpot.benchmark.Benchmark
+.. autoclass:: neural_compressor.benchmark.Benchmark
   :members:

-.. automodule:: lpot.objective
+.. automodule:: neural_compressor.objective
   :members:

-.. automodule:: lpot.pruning
+.. automodule:: neural_compressor.pruning
   :members:

-.. automodule:: lpot.quantization
+.. automodule:: neural_compressor.quantization
   :members:

-.. automodule:: lpot.version
+.. automodule:: neural_compressor.version
   :members:
22 changes: 11 additions & 11 deletions conf.py
@@ -17,20 +17,20 @@
sys.path.insert(0, os.path.abspath('.'))
import importlib.util
moduleName = 'version'
-modulePath = os.getcwd() + '/lpot/version.py'
+modulePath = os.getcwd() + '/neural_compressor/version.py'
spec = importlib.util.spec_from_file_location(moduleName,modulePath)
-LPOTversion = importlib.util.module_from_spec(spec)
-spec.loader.exec_module(LPOTversion)
+NCversion = importlib.util.module_from_spec(spec)
+spec.loader.exec_module(NCversion)


# -- Project information -----------------------------------------------------

-project = 'Intel® Low Precision Optimization Tool'
-copyright = '2021, Intel® Low Precision Optimization Tool'
-author = 'Intel® LPOT developers'
+project = 'Intel® Neural Compressor'
+copyright = '2021, Intel® Neural Compressor'
+author = 'Intel® Neural Compressor developers'

# The short X.Y version
-version = LPOTversion.__version__
+version = NCversion.__version__
# The full version, including alpha/beta/rc tags
release = ''

@@ -137,7 +137,7 @@
# (source start file, target name, title,
# author, documentclass [howto, manual, or own class]).
latex_documents = [
-(master_doc, 'ProjectnameIntelLowPrecisionOptimizationTool.tex', '\\textgreater{} Project name: Intel® Low Precision Optimization Tool Documentation',
+(master_doc, 'ProjectnameIntelLowPrecisionOptimizationTool.tex', '\\textgreater{} Project name: Intel® Neural Compressor Documentation',
'Various', 'manual'),
]

@@ -147,7 +147,7 @@
# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
man_pages = [
-(master_doc, 'projectnameintellowprecisionoptimizationtool', '> Project name: Intel® Low Precision Optimization Tool Documentation',
+(master_doc, 'projectnameintellowprecisionoptimizationtool', '> Project name: Intel® Neural Compressor Documentation',
[author], 1)
]

@@ -158,7 +158,7 @@
# (source start file, target name, title, author,
# dir menu entry, description, category)
texinfo_documents = [
-(master_doc, 'ProjectnameIntelLowPrecisionOptimizationTool', '> Project name: Intel® Low Precision Optimization Tool Documentation',
+(master_doc, 'ProjectnameIntelLowPrecisionOptimizationTool', '> Project name: Intel® Neural Compressor Documentation',
author, 'ProjectnameIntelLowPrecisionOptimizationTool', 'One line description of project.',
'Miscellaneous'),
]
@@ -171,7 +171,7 @@ def setup(app):
sphinx_md_useGitHubURL = True
baseBranch = "master"
commitSHA = getenv('GITHUB_SHA')
-githubBaseURL = 'https://github.com/' + (getenv('GITHUB_REPOSITORY') or 'intel/lpot') + '/'
+githubBaseURL = 'https://github.com/' + (getenv('GITHUB_REPOSITORY') or 'intel/neural-compressor') + '/'
githubFileURL = githubBaseURL + "blob/"
githubDirURL = githubBaseURL + "tree/"
if commitSHA:
10 changes: 5 additions & 5 deletions contributions.md
@@ -1,8 +1,8 @@
Contribution Guidelines
=======================

-If you have improvements to Intel® Low Precision Optimization Tool, send your pull requests for
-[review](https://github.com/intel/lpot/pulls). If you are new to Github, view the pull request
+If you have improvements to Intel® Neural Compressor, send your pull requests for
+[review](https://github.com/intel/neural-compressor/pulls). If you are new to Github, view the pull request
[How To](https://help.github.com/articles/using-pull-requests/).


@@ -12,8 +12,8 @@ Before sending your pull requests, follow the information below:
- Changes are consistent with the Python [Coding Style](https://github.com/google/styleguide/blob/gh-pages/pyguide.md).
- Use pylint to check your Python code.
- Use flake8 and autopep8 to make Python code clean.
-- Add unit tests in [Unit Tests](https://github.com/intel/lpot/tree/master/test) to cover the code you would like to contribute.
-- Run [Unit Tests](https://github.com/intel/lpot/tree/master/test).
+- Add unit tests in [Unit Tests](https://github.com/intel/neural-compressor/tree/master/test) to cover the code you would like to contribute.
+- Run [Unit Tests](https://github.com/intel/neural-compressor/tree/master/test).

## Pull Request Template

@@ -43,7 +43,7 @@ Provide the development or test environment info.
## Support

Submit your questions, feature requests, and bug reports to the
-[GitHub issues](https://github.com/intel/lpot/issues) page. You may also reach out to lpot[email protected].
+[GitHub issues](https://github.com/intel/neural-compressor/issues) page. You may also reach out to [Maintainers](neural_compressor[email protected]).

## Contributor Covenant Code of Conduct

4 changes: 2 additions & 2 deletions docs/QAT.md
@@ -96,8 +96,8 @@ More on quantization-aware training:
* We can simulate the accuracy of a quantized model in floating points since we are using fake-quantization to model the numerics of actual quantized arithmetic.
* We can easily mimic post-training quantization.

-Intel® Low Precision Optimization Tool can support QAT calibration for
-PyTorch models. Refer to the [QAT model](https://github.com/intel/lpot/tree/master/examples/pytorch/eager/image_recognition/imagenet/cpu/qat/README.md) for step-by-step tuning.
+Intel® Neural Compressor can support QAT calibration for
+PyTorch models. Refer to the [QAT model](https://github.com/intel/neural-compressor/tree/master/examples/pytorch/eager/image_recognition/imagenet/cpu/qat/README.md) for step-by-step tuning.

### Example
View a [QAT example of PyTorch resnet50](/examples/pytorch/eager/image_recognition/imagenet/cpu/qat).
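For orientation, a minimal eager-mode PyTorch QAT loop looks roughly like the sketch below; the model and qconfig choice are illustrative assumptions, not the linked example's exact code:

```python
import torch
import torchvision

# Hedged QAT sketch: fake-quant modules are inserted, the model is
# fine-tuned, then observers are folded into real int8 ops.
model = torchvision.models.resnet50()
model.train()
model.qconfig = torch.quantization.get_default_qat_qconfig('fbgemm')
torch.quantization.prepare_qat(model, inplace=True)  # insert fake-quant observers

# ... fine-tune for a few epochs here so the observers calibrate ...

model.eval()
int8_model = torch.quantization.convert(model)  # produce the int8 model
```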
26 changes: 13 additions & 13 deletions docs/adaptor.md
@@ -3,15 +3,15 @@ Adaptor

## Introduction

-Intel® Low Precision Optimization Tool (LPOT) built the low-precision inference
-solution on popular Deep Learning frameworks such as TensorFlow, PyTorch,
-MXNet, and ONNX Runtime. The adaptor layer is the bridge between the LPOT
+Intel® Neural Compressor builds the low-precision inference
+solution on popular deep learning frameworks such as TensorFlow, PyTorch,
+MXNet, and ONNX Runtime. The adaptor layer is the bridge between the
tuning strategy and vanilla framework quantization APIs.

## Adaptor Design

-LPOT supports a new adaptor extension by
-implementing a subclass of the `Adaptor` class in the lpot.adaptor package
+Neural Compressor supports a new adaptor extension by
+implementing a subclass of the `Adaptor` class in the neural_compressor.adaptor package
and registering this strategy by the `adaptor_registry` decorator.

For example, a user can implement an `Abc` adaptor like below:
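The example body is collapsed in this diff view; as a hedged sketch of its shape (the import path and the abbreviated method signatures are assumptions based on the package named above):

```python
from neural_compressor.adaptor.adaptor import Adaptor, adaptor_registry

@adaptor_registry
class AbcAdaptor(Adaptor):
    """Sketch of a custom framework adaptor; signatures are abbreviated assumptions."""

    def __init__(self, framework_specific_info):
        super().__init__(framework_specific_info)

    def quantize(self, tune_cfg, model, dataloader, q_func=None):
        # Apply the strategy's tuning config via the framework's native
        # quantization APIs and return the quantized model.
        ...

    def evaluate(self, model, dataloader, postprocess=None, metric=None):
        # Run inference and return the metric the tuning strategy optimizes.
        ...
```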
@@ -46,21 +46,21 @@ class AbcAdaptor(Adaptor):
#### Background

Besides the adaptor API, we also introduced the Query API which describes the
-behavior of a specific framework. With this API, LPOT can easily query the
+behavior of a specific framework. With this API, Neural Compressor can easily query the
following information on the current runtime framework.

* The runtime version information.
* The Quantizable ops type.
* The supported sequence of each quantizable op.
* The instance of each sequence.

-In the past, the above information was generally defined and hidden in every corner of the code which made effective maintenance difficult. With the Query API, we only need to create one unified yaml file and call the corresponding API to get the information. For example, the [tensorflow.yaml](../lpot/adaptor/tensorflow.yaml) keeps the current Tensorflow framework ability. We recommend that the end user not make modifications if requirements are not clear.
+In the past, the above information was generally defined and hidden in every corner of the code which made effective maintenance difficult. With the Query API, we only need to create one unified yaml file and call the corresponding API to get the information. For example, the [tensorflow.yaml](../neural_compressor/adaptor/tensorflow.yaml) keeps the current Tensorflow framework ability. We recommend that the end user not make modifications if requirements are not clear.

#### Unify Config Introduction

Below is a fragment of the Tensorflow configuration file.

-* **precisions** field defines the supported precision for LPOT.
+* **precisions** field defines the supported precision for Neural Compressor.
- valid_mixed_precision enumerates all supported precision combinations for specific scenario. For example, if one hardware doesn't support bf16, it should be `int8 + fp32`.
* **ops** field defines the valid OP type list for each precision.
* **capabilities** field focuses on the quantization ability of specific ops such as granularity, scheme, and algorithm. The activation assumes the same data type for both input and output activation by default based on op semantics defined by frameworks.
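The fragment itself is collapsed in this view; a hedged reconstruction of its overall shape, using only the field names described in the bullets above (the op names and values are illustrative, not the actual tensorflow.yaml contents):

```yaml
precisions:
  names: int8, bf16, fp32
  valid_mixed_precision: ['int8 + fp32']

ops:
  int8: ['Conv2D', 'MatMul']
  bf16: ['Conv2D']
  fp32: ['*']

capabilities:
  int8:
    Conv2D:
      weight:
        granularity: ['per_channel', 'per_tensor']
        scheme: ['sym']
        algorithm: ['minmax']
      activation:
        granularity: ['per_tensor']
        scheme: ['asym', 'sym']
        algorithm: ['minmax', 'kl']
```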
@@ -193,13 +193,13 @@ Below is a fragment of the Tensorflow configuration file.
```
#### Query API Introduction

-The abstract class `QueryBackendCapability` is defined in [query.py](../lpot/adaptor/query.py#L21). Each framework should inherit it and implement the member function if needed. Refer to Tensorflow implementation [TensorflowQuery](../lpot/adaptor/tensorflow.py#L628).
+The abstract class `QueryBackendCapability` is defined in [query.py](../neural_compressor/adaptor/query.py#L21). Each framework should inherit it and implement the member function if needed. Refer to Tensorflow implementation [TensorflowQuery](../neural_compressor/adaptor/tensorflow.py#L628).
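In outline, a subclass follows the pattern below; this is a sketch only, and the member-function names are assumptions for illustration rather than the abstract class's actual interface:

```python
import yaml

from neural_compressor.adaptor.query import QueryBackendCapability

class ExampleQuery(QueryBackendCapability):
    """Illustrative subclass; method names here are assumptions."""

    def __init__(self, cfg_path):
        super().__init__()
        with open(cfg_path) as f:
            self.cur_config = yaml.safe_load(f)

    def get_version(self):
        # Runtime version information (first bullet above).
        return self.cur_config['version']

    def get_quantizable_op_types(self):
        # Quantizable op types per precision (second bullet above).
        return self.cur_config['ops']
```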


## Customize a New Framework Backend

Look at onnxruntime as an example. ONNX Runtime is a backend proposed by Microsoft, and is based on the MLAS kernel by default.
-Onnxruntime already has [quantization tools](https://github.com/microsoft/onnxruntime/tree/master/onnxruntime/python/tools/quantization), so the question becomes how to integrate onnxruntime quantization tools into LPOT.
+Onnxruntime already has [quantization tools](https://github.com/microsoft/onnxruntime/tree/master/onnxruntime/python/tools/quantization), so the question becomes how to integrate onnxruntime quantization tools into Neural Compressor.

1. Capability

@@ -214,7 +214,7 @@
* &1.8 nodes_to_quantize, nodes_to_exclude
* op_types_to_quantize

-We can pass a tune capability to LPOT such as:
+We can pass a tune capability to Neural Compressor such as:

```yaml
{'optypewise': {'conv':
  ...
```
@@ -243,7 +243,7 @@

2. Parse tune config

-LPOT can generate a tune config from your tune capability such as the
+Neural Compressor can generate a tune config from your tune capability such as the
following:

```yaml
...
```
@@ -286,4 +286,4 @@
4. Do quantization

-This part depends on your backend implementations. Refer to [onnxruntime](../lpot/adaptor/onnxrt.py) as an example.
+This part depends on your backend implementations. Refer to [onnxruntime](../neural_compressor/adaptor/onnxrt.py) as an example.
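For a concrete flavor of this step with the onnxruntime backend, a hedged sketch using onnxruntime's public quantization API (the wrapper function and its wiring are illustrative, not the adaptor's actual code):

```python
from onnxruntime.quantization import quantize_static, QuantType

def do_quantization(model_input, model_output, data_reader):
    # Delegate the int8 conversion to onnxruntime's own tooling; the types
    # below would come from the tune config parsed in step 2.
    quantize_static(
        model_input,    # path to the fp32 ONNX model
        model_output,   # path for the quantized model
        data_reader,    # a CalibrationDataReader supplying calibration batches
        activation_type=QuantType.QUInt8,
        weight_type=QuantType.QInt8,
    )
```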