coremltools 4.0b2
Pre-release
What's New
- Improved documentation available at http://coremltools.readme.io.
- New converter path to directly convert PyTorch models without going through ONNX.
- Enhanced TensorFlow 2 conversion: now includes support for dynamic control flow and LSTM layers, as well as several popular models and architectures, including Transformers such as GPT and BERT variants.
- New unified conversion API `ct.convert()` for converting PyTorch and TensorFlow (including `tf.keras`) models (see the conversion sketch after this list).
- New Model Intermediate Language (MIL) builder library to either build neural network models directly or implement composite operations (see the MIL sketch after this list).
- New utilities to configure inputs while converting from PyTorch and TensorFlow, using `ct.convert()` with `ct.ImageType()`, `ct.ClassifierConfig()`, etc. See details: https://coremltools.readme.io/docs/neural-network-conversion
- The onnx-coreml converter is now moved under coremltools and can be accessed as `ct.converters.onnx.convert()`.
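A minimal conversion sketch using the unified API. The pretrained `tf.keras` MobileNetV2 model and the `labels.txt` class-label file are placeholder choices, not part of this release note:

```python
import coremltools as ct
import tensorflow as tf

# Any tf.keras model works with the unified API; MobileNetV2 is just a placeholder.
keras_model = tf.keras.applications.MobileNetV2()

# Convert with ct.convert(), declaring the input as an image and attaching
# class labels so the resulting Core ML model is a classifier.
mlmodel = ct.convert(
    keras_model,
    inputs=[ct.ImageType()],
    classifier_config=ct.ClassifierConfig("labels.txt"),  # hypothetical labels file
)
mlmodel.save("MobileNetV2.mlmodel")
```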
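A small MIL sketch, building a toy program directly from MIL ops. The op choices and input shape are illustrative, not taken from the release note:

```python
from coremltools.converters.mil import Builder as mb

# Define a MIL program on a (1, 100, 100, 3) input using a few MIL ops.
@mb.program(input_specs=[mb.TensorSpec(shape=(1, 100, 100, 3))])
def prog(x):
    x = mb.relu(x=x, name="relu")
    x = mb.transpose(x=x, perm=[0, 3, 1, 2], name="transpose")
    x = mb.reduce_mean(x=x, axes=[2, 3], keep_dims=False, name="reduce_mean")
    return x

print(prog)  # prints the MIL program in text form
```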
Deprecations
- Deprecated the following:
  - The `NeuralNetworkShaper` class.
  - `get_allowed_shape_ranges()`.
  - `can_allow_multiple_input_shapes()`.
  - `visualize_spec()` method of the `MLModel` class.
  - `quantize_spec_weights()`; use the `quantize_weights()` method instead.
  - `get_custom_layer_names()`, `replace_custom_layer_name()`, `has_custom_layer()`; these are now internal methods.
- Added deprecation warnings for the following, which will be deprecated in the next major release:
  - `convert_neural_network_weights_to_fp16()` and `convert_neural_network_spec_weights_to_fp16()`. Instead use the `quantize_weights()` method (see the sketch after this list). See https://coremltools.readme.io/docs/quantization for details.
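A sketch of the replacement path, assuming an existing full-precision model file named `MyModel.mlmodel` (the file name is hypothetical):

```python
import coremltools as ct
from coremltools.models.neural_network import quantization_utils

# Load an existing full-precision neural network model (path is hypothetical).
model = ct.models.MLModel("MyModel.mlmodel")

# quantize_weights() is the supported replacement for the deprecated
# fp16 conversion helpers; nbits=16 gives float16 weights.
model_fp16 = quantization_utils.quantize_weights(model, nbits=16)
model_fp16.save("MyModel_fp16.mlmodel")
```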
Known Issues
- The latest version of PyTorch tested to work with the converter is torch 1.5.0.
- TensorFlow 2 model conversion is supported only for models with a single concrete function.
- Conversion for TensorFlow and PyTorch models with quantized weights is currently not supported.
- `coremltools.utils.rename_feature` does not work correctly when renaming the output feature of a neural network classifier model (see the sketch after this list).
- The `leaky_relu` layer is not yet added to the PyTorch converter, although it is supported in MIL and in the TensorFlow converters.
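For reference, a sketch of the typical `rename_feature` usage that does work (renaming an input feature); the model path and feature names are hypothetical:

```python
import coremltools as ct
from coremltools.utils import rename_feature

# Load a model and rename one of its input features (names are hypothetical).
model = ct.models.MLModel("MyClassifier.mlmodel")
spec = model.get_spec()
rename_feature(spec, "input_1", "image", rename_inputs=True, rename_outputs=False)

# Rebuild and save the model from the modified spec.
model = ct.models.MLModel(spec)
model.save("MyClassifier_renamed.mlmodel")
```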