diff --git a/schema/index.html b/schema/index.html
index 1b99937a5b3..16f8a6e5bc8 100644
--- a/schema/index.html
+++ b/schema/index.html
@@ -287,4 +287,4 @@
 "UNet/ModuleList[down_path]/UNetConvBlock[4]/Sequential[block]/Conv2d[0]" ]
"UNet/ModuleList\\[up_path\\].*"
-

Type: boolean Default: true

If set to True, then a RuntimeError will be raised if the names of the ignored/target scopes do not match the names of the scopes in the model graph.

Type: number

PyTorch only - Used to increase/decrease gradients for compression algorithms' parameters. The gradients will be multiplied by the specified value. If unspecified, the gradients will not be adjusted.
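As a hedged illustration of the two options above, the fragment below is a Python dict mirroring the JSON config file. The key names "validate_scopes" and "compression_lr_multiplier" are inferred from the descriptions and may differ in the actual schema.

# Sketch only: key names are inferred from the descriptions above,
# not taken verbatim from the schema.
nncf_config_fragment = {
    # Do not raise a RuntimeError when an ignored/target scope name is not
    # found in the model graph (the default, true, enforces strict matching).
    "validate_scopes": False,
    # PyTorch only: multiply the gradients of compression parameters by 10.
    "compression_lr_multiplier": 10,
}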

array_of_objects_version

Type: array
No Additional Items

Each item of this array must be:


Type: object

Applies quantization on top of the input model, simulating future low-precision execution specifics, and selects the quantization layout and parameters to strive for the best possible quantized model accuracy and performance.
See Quantization.md and the rest of this schema for more details and parameters.

Same definition as compression_oneOf_i0_oneOf_i0
Type: object

Applies filter pruning during training of the model to effectively remove entire sub-dimensions of tensors in the original model from computation and therefore increase performance.
See Pruning.md and the rest of this schema for more details and parameters.

Same definition as compression_oneOf_i0_oneOf_i1
Type: object

Applies sparsity on top of the current model. Each weight tensor value will be either kept as-is, or set to 0 based on its magnitude. For large sparsity levels, this will improve performance on hardware that can profit from it. See Sparsity.md and the rest of this schema for more details and parameters.

Same definition as compression_oneOf_i0_oneOf_i2
Type: object

Applies sparsity on top of the current model. Each weight tensor value will be either kept as-is, or set to 0 based on its importance as determined by the regularization-based sparsity algorithm. For large sparsity levels, this will improve performance on hardware that can profit from it. See Sparsity.md and the rest of this schema for more details and parameters.

Same definition as compression_oneOf_i0_oneOf_i3
Type: object

This algorithm is only useful in combination with other compression algorithms; it improves the end accuracy of the corresponding algorithm by calculating a knowledge distillation loss between the compressed model currently being trained and its original, uncompressed counterpart. See KnowledgeDistillation.md and the rest of this schema for more details and parameters.

Same definition as compression_oneOf_i0_oneOf_i4
Type: object

This algorithm takes no additional parameters and is used when you want to load a checkpoint trained with another sparsity algorithm and do other compression without changing the sparsity mask.

Same definition as compression_oneOf_i0_oneOf_i5
Type: object

Applies quantization on top of the input model, simulating future low-precision execution specifics, and selects the quantization layout and parameters to strive for the best possible quantized model accuracy and performance.
See Quantization.md and the rest of this schema for more details and parameters.

Same definition as compression_oneOf_i0_oneOf_i6
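
The array-of-objects form above allows several compression algorithms to be stacked in one config. The sketch below shows the general shape as a Python dict mirroring the JSON config; the "algorithm" key and the identifiers "quantization" and "filter_pruning" are assumptions based on the algorithm descriptions, so consult Quantization.md and Pruning.md for the exact names.

# Sketch of the "compression" section given as an array of algorithm objects.
compression_section = {
    "compression": [
        {"algorithm": "quantization"},    # see Quantization.md
        {"algorithm": "filter_pruning"},  # see Pruning.md
    ]
}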


Options for the execution of the NNCF-powered 'Accuracy Aware' training pipeline. The 'mode' property selects the accuracy-aware training mode and determines which further parameters are available.

early_exit

Type: object

Early exit mode schema. See EarlyExitTraining.md for more general info on this mode.

No Additional Properties

Type: const
Specific value: "early_exit"


No Additional Properties

Type: object

The following properties are required:

  • maximal_relative_accuracy_degradation
Type: object

The following properties are required:

  • maximal_absolute_accuracy_degradation

Type: number

Maximum allowed accuracy degradation of the model, in percent, relative to the original model accuracy.

Type: number

Maximum allowed accuracy degradation of the model, in the absolute units of the original model's metric.

Type: number Default: 10000

The maximal total fine-tuning epoch count. If the accuracy criteria are not reached during fine-tuning, the most accurate model encountered will be returned.
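
A minimal sketch of an early-exit accuracy-aware training section follows, written as a Python dict mirroring the JSON config. The mode value "early_exit" and the required property "maximal_relative_accuracy_degradation" come from the schema above; the surrounding key names ("accuracy_aware_training", "params", "maximal_total_epochs") are assumptions and may differ.

accuracy_aware_early_exit = {
    "accuracy_aware_training": {
        "mode": "early_exit",
        "params": {
            # Exit as soon as accuracy is within 1% (relative) of the original model.
            "maximal_relative_accuracy_degradation": 1.0,
            # Hard cap on the number of fine-tuning epochs (schema default: 10000).
            "maximal_total_epochs": 100,
        },
    }
}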

adaptive_compression_level

Type: object

Adaptive compression level training mode schema. See AdaptiveCompressionLevelTraining.md for more general info on this mode.

No Additional Properties

Type: const
Specific value: "adaptive_compression_level"


No Additional Properties

Type: object

The following properties are required:

  • maximal_relative_accuracy_degradation
Type: object

The following properties are required:

  • maximal_absolute_accuracy_degradation

Type: number

Maximally allowed accuracy degradation of the model in percent relative to the original model accuracy.

Type: number

Maximally allowed accuracy degradation of the model in units of absolute metric of the original model.

Type: number Default: 5

Number of epochs to fine-tune during the initial training phase of the adaptive compression training loop.

Type: number Default: 0.1

Initial value of the compression rate change step used in the compression rate increase/decrease phase of the compression training loop.

Type: number Default: 0.5

Factor used to reduce the compression rate change step in the adaptive compression training loop.

Type: number Default: 0.5

Factor used to reduce the learning rate after the compression rate change step is reduced.

Type: number Default: 0.025

The minimal compression rate change step value after which the training loop is terminated.

Type: number Default: 3

The number of epochs to fine-tune the model for a given compression rate after the initial training phase of the training loop.

Type: number Default: 10000

The maximal total fine-tuning epoch count. If the epoch counter reaches this number, the fine-tuning process will stop and the model with the largest compression rate will be returned.
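
For the adaptive compression level mode, a sketch with the default values listed above is given below, again as a Python dict mirroring the JSON config. Only "adaptive_compression_level" and "maximal_relative_accuracy_degradation" appear verbatim in this schema text; the remaining key names are assumptions chosen to match the parameter descriptions and may differ in the actual schema.

accuracy_aware_adaptive = {
    "accuracy_aware_training": {
        "mode": "adaptive_compression_level",
        "params": {
            "maximal_relative_accuracy_degradation": 1.0,
            "initial_training_phase_epochs": 5,             # default: 5
            "initial_compression_rate_step": 0.1,           # default: 0.1
            "compression_rate_step_reduction_factor": 0.5,  # default: 0.5
            "lr_reduction_factor": 0.5,                     # default: 0.5
            "minimal_compression_rate_step": 0.025,         # default: 0.025
            "patience_epochs": 3,                           # default: 3
            "maximal_total_epochs": 10000,                  # default: 10000
        },
    }
}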

Type: number

PyTorch only - Used to increase/decrease gradients for compression algorithms' parameters. The gradients will be multiplied by the specified value. If unspecified, the gradients will not be adjusted.

Type: boolean

[Deprecated] Whether to enable strict input tensor shape matching when building the internal graph representation of the model. Set this to false if your model inputs have any variable dimension other than the 0-th (batch) dimension, or if any non-batch dimension of the intermediate tensors in your model execution flow depends on the input dimension; otherwise, the compression will most likely fail.

Type: string

Log directory for NNCF-specific logging outputs.
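
Finally, the remaining root-level options described above could be set as in the sketch below; the key names "compression_lr_multiplier" and "log_dir" are likewise inferred from the descriptions rather than taken verbatim from the schema.

root_level_options = {
    "compression_lr_multiplier": 1.0,  # leave compression-parameter gradients unscaled
    "log_dir": "./nncf_logs",          # directory for NNCF-specific logging outputs
}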
