Status of the support for the ONNX model zoo #128
My finding:

I hit the same assertion as @doru1004:

```
$ ./bin/onnx-mlir --EmitLib ./bertsquad-8.onnx
./bin/onnx-mlir: /home/xxxxx/miniconda3/lib/libtinfo.so.6: no version information available (required by ./bin/onnx-mlir)
onnx-mlir: /home/xxxxx/llvm-project/mlir/lib/IR/Value.cpp:22: mlir::Value::Value(mlir::Operation*, unsigned int): Assertion `op->getNumResults() > resultNo && "invalid result number"' failed.
[1] 848382 abort (core dumped)  ./bin/onnx-mlir --EmitLib ./bertsquad-8.onnx

$ ./bin/onnx-mlir --EmitLib ./bertsquad-10.onnx
./bin/onnx-mlir: /home/xxxxx/miniconda3/lib/libtinfo.so.6: no version information available (required by ./bin/onnx-mlir)
onnx-mlir: /home/xxxxx/llvm-project/mlir/lib/IR/Value.cpp:22: mlir::Value::Value(mlir::Operation*, unsigned int): Assertion `op->getNumResults() > resultNo && "invalid result number"' failed.
[1] 856164 abort (core dumped)  ./bin/onnx-mlir --EmitLib ./bertsquad-10.onnx

$ ./bin/onnx-mlir --EmitLib ./gpt2-10.onnx
./bin/onnx-mlir: /home/xxxxx/miniconda3/lib/libtinfo.so.6: no version information available (required by ./bin/onnx-mlir)
onnx-mlir: /home/xxxxx/llvm-project/mlir/lib/IR/Value.cpp:22: mlir::Value::Value(mlir::Operation*, unsigned int): Assertion `op->getNumResults() > resultNo && "invalid result number"' failed.
[1] 856207 abort (core dumped)  ./bin/onnx-mlir --EmitLib ./gpt2-10.onnx
```
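For reference, batch runs like the ones above can be scripted. The sketch below is hypothetical (the `compile_model` helper and its classification rule are mine, not part of onnx-mlir), but it distinguishes an ordinary compiler error (non-zero exit) from a crash such as the aborted assertion in these logs (death by signal, which `subprocess` reports as a negative return code on POSIX):

```python
import subprocess

def compile_model(compiler, model, emit="--EmitLib", timeout=600):
    """Run one model through the compiler and classify the outcome.

    On POSIX, subprocess reports death by signal (e.g. SIGABRT from a
    failed assertion) as a negative return code; a positive code is an
    ordinary compiler error.
    """
    try:
        proc = subprocess.run([compiler, emit, str(model)],
                              capture_output=True, timeout=timeout)
    except subprocess.TimeoutExpired:
        return "timeout", None
    if proc.returncode == 0:
        return "ok", 0
    kind = "crash" if proc.returncode < 0 else "error"
    return kind, proc.returncode
```

For the SIGABRT above, `compile_model("./bin/onnx-mlir", "./bertsquad-8.onnx")` would come back as `("crash", -6)`.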
ResNet breaks on the shape inference pass. I notice that the output in basic MLIR differs from what it should be. There is a node in the graph called
ResNet50-v1 is also not working.
@tjingrant I think we are running a version of ResNet as part of the test suite; is that different from the one above?
@Xatter and @doru1004, it appears that the ResNet version included in the tests is different from the one in the ONNX model zoo or the onnx repo. The version used by the tests is downloaded from a separate location, defined in the test files, and that downloaded model works OK. But when I try to EmitONNXIR for the implementations of ResNet v1 and v2 available in the onnx repo, I get the following (different) errors:

Using the following versions of onnx-mlir, llvm-project, and protobuf:

```
git clone https://github.com/llvm/llvm-project.git
cd llvm-project && git checkout 91671e13efbc5dbd17b832d7973401350d0a6ee6 && cd ..
git clone --recursive https://github.com/onnx/onnx-mlir.git
cd onnx-mlir && git checkout --recurse-submodules 75930ffbcf14cfbaccd8417c47c3598f56342926 && cd ..
git clone https://github.com/protocolbuffers/protobuf.git
cd protobuf && git checkout --recurse-submodules d16bf914bc5ba569d2b70376051d15f68ce4322d && cd ..
```
I wrote a script to get ONNX model zoo coverage status, and executed ONNX-MLIR twice with different versions of the docker image onnxmlirczar/onnx-mlir-build:x86.

ONNX-MLIR compiling target: ONNX Model Zoo. 17 of 118 ONNX models can be compiled by onnx-mlir successfully.

ONNX files compiled successfully with onnx-mlir:

Failed reason groups: I took errors with the "error:" prefix as expected errors, and the "onnx-mlir:" prefix as MLIR assertion failures.

Failed sources: I also categorized the errors, in a very rough way, by source.

For more details, see the attached PDF report.
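The grouping rule described above can be sketched as follows (the helper names are mine, not the script's): failure lines starting with `error:` count as expected front-end errors, lines starting with `onnx-mlir:` as internal assertion failures, and everything else falls through to a catch-all bucket.

```python
from collections import Counter

def categorize(line):
    """Bucket one stderr line by its prefix, as in the report above."""
    line = line.strip()
    if line.startswith("error:"):
        return "expected error"
    if line.startswith("onnx-mlir:"):
        return "assertion failure"
    return "other"

def failure_groups(lines):
    """Count failures per bucket across many models."""
    return Counter(categorize(l) for l in lines)
```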
From the latest build, it seems 40 models are supported now. Successfully compiled models:

```
./models/vision/body_analysis/emotion_ferplus/model/emotion-ferplus-7.onnx
./models/vision/body_analysis/emotion_ferplus/model/emotion-ferplus-8.onnx
./models/vision/classification/mnist/model/mnist-7.onnx
./models/vision/classification/mnist/model/mnist-8.onnx
./models/vision/classification/mobilenet/model/mobilenetv2-7.onnx
./models/vision/classification/resnet/model/resnet101-v1-7.onnx
./models/vision/classification/resnet/model/resnet101-v2-7.onnx
./models/vision/classification/resnet/model/resnet152-v1-7.onnx
./models/vision/classification/resnet/model/resnet152-v2-7.onnx
./models/vision/classification/resnet/model/resnet18-v1-7.onnx
./models/vision/classification/resnet/model/resnet18-v2-7.onnx
./models/vision/classification/resnet/model/resnet34-v1-7.onnx
./models/vision/classification/resnet/model/resnet34-v2-7.onnx
./models/vision/classification/resnet/model/resnet50-caffe2-v1-6.onnx
./models/vision/classification/resnet/model/resnet50-caffe2-v1-7.onnx
./models/vision/classification/resnet/model/resnet50-caffe2-v1-8.onnx
./models/vision/classification/resnet/model/resnet50-caffe2-v1-9.onnx
./models/vision/classification/resnet/model/resnet50-v1-7.onnx
./models/vision/classification/resnet/model/resnet50-v2-7.onnx
./models/vision/classification/shufflenet/model/shufflenet-6.onnx
./models/vision/classification/shufflenet/model/shufflenet-7.onnx
./models/vision/classification/shufflenet/model/shufflenet-8.onnx
./models/vision/classification/shufflenet/model/shufflenet-9.onnx
./models/vision/classification/shufflenet/model/shufflenet-v2-10.onnx
./models/vision/classification/squeezenet/model/squeezenet1.0-3.onnx
./models/vision/classification/squeezenet/model/squeezenet1.0-6.onnx
./models/vision/classification/squeezenet/model/squeezenet1.0-7.onnx
./models/vision/classification/squeezenet/model/squeezenet1.0-8.onnx
./models/vision/classification/squeezenet/model/squeezenet1.0-9.onnx
./models/vision/classification/squeezenet/model/squeezenet1.1-7.onnx
./models/vision/classification/vgg/model/vgg16-7.onnx
./models/vision/classification/vgg/model/vgg16-bn-7.onnx
./models/vision/classification/vgg/model/vgg19-7.onnx
./models/vision/classification/vgg/model/vgg19-bn-7.onnx
./models/vision/classification/vgg/model/vgg19-caffe2-6.onnx
./models/vision/classification/vgg/model/vgg19-caffe2-7.onnx
./models/vision/classification/vgg/model/vgg19-caffe2-8.onnx
./models/vision/classification/vgg/model/vgg19-caffe2-9.onnx
./models/vision/object_detection_segmentation/duc/model/ResNet101-DUC-7.onnx
./models/vision/object_detection_segmentation/yolov2-coco/model/yolov2-coco-9.onnx
```
Update: MLIR now has
Update: as of Feb 20th, 77 models can be compiled.
Any update?
FYI, I wrote a Python script to examine the current status; the result is below. I will report the status monthly. (@AlexandreEichenberger @chentong319 I added error messages for the cases where compilation failed.)

ONNX-MLIR supports 90 ONNX ops: ['abs', 'acos', 'acosh', 'add', 'and', 'argmax', 'asin', 'asinh', 'atan', 'atanh', 'averagepool', 'batchnormalization', 'cast', 'ceil', 'clip', 'concat', 'constant', 'constantofshape', 'conv', 'cos', 'div', 'dropout', 'elu', 'erf', 'exp', 'flatten', 'floor', 'gather', 'gemm', 'globalaveragepool', 'globalmaxpool', 'gru', 'hardsigmoid', 'identity', 'leakyrelu', 'less', 'log', 'logsoftmax', 'loop', 'lrn', 'lstm', 'matmul', 'max', 'maxpool', 'min', 'mul', 'neg', 'or', 'pad', 'pow', 'prelu', 'range', 'reciprocal', 'reducel1', 'reducel2', 'reducelogsum', 'reducelogsumexp', 'reducemax', 'reducemean', 'reducemin', 'reduceprod', 'reducesum', 'reducesumsquare', 'relu', 'reshape', 'resize', 'rnn', 'scan', 'selu', 'shape', 'sigmoid', 'sign', 'sin', 'sinh', 'size', 'slice', 'softmax', 'softplus', 'softsign', 'split', 'sqrt', 'squeeze', 'sub', 'sum', 'tan', 'tanh', 'tile', 'transpose', 'unsqueeze', 'xor']

There are 128 models in the ONNX model zoo.

ONNX models and their ops: looks like ONNX-MLIR supports 103 models, of which 83 can actually be compiled. Count of the number of models in which each op is used (sorted in decreasing order):
ALEX: I modified the text manually to add the newly supported ops and to comment on deprecated ops.
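The per-op count above can be reproduced from a mapping of model name to the set of ops it uses; the data layout and helper names below are assumptions for illustration, not the actual script's format:

```python
from collections import Counter

def op_usage(model_ops):
    """Count in how many models each op appears, in decreasing order."""
    counts = Counter()
    for ops in model_ops.values():
        counts.update(set(ops))       # each op counted once per model
    return counts.most_common()

def compilable(model_ops, supported):
    """Models all of whose ops are in the supported op set."""
    supported = set(supported)
    return [m for m, ops in model_ops.items() if set(ops) <= supported]
```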
Big thanks @tungld
Just checked some old models to see why Gemm failed. Actually, these models seem incorrect: for example, the output of MaxPool (4-D tensors) was passed to Gemm, which supports only 2-D inputs, so Gemm failed. Looking at onnx/models, these old models will be removed by this PR: onnx/models#389.
New update: 101 models can be compiled now (it was 83 in the previous update). Of the 17 models that fail to compile, 12 are deprecated (using opset <= 3). Some models need to be compiled with

ONNX-MLIR supports 102 ONNX ops: ['abs', 'acos', 'acosh', 'add', 'and', 'argmax', 'asin', 'asinh', 'atan', 'atanh', 'averagepool', 'batchnormalization', 'cast', 'ceil', 'clip', 'concat', 'constant', 'constantofshape', 'conv', 'cos', 'div', 'dropout', 'elu', 'equal', 'erf', 'exp', 'flatten', 'floor', 'gather', 'gemm', 'globalaveragepool', 'globalmaxpool', 'greater', 'greaterorequal', 'gru', 'hardsigmoid', 'identity', 'instancenormalization', 'leakyrelu', 'less', 'lessorequal', 'log', 'logsoftmax', 'loop', 'lrn', 'lstm', 'matmul', 'max', 'maxpool', 'mean', 'min', 'mod', 'mul', 'neg', 'nonzero', 'not', 'or', 'pad', 'pow', 'prelu', 'range', 'reciprocal', 'reducel1', 'reducel2', 'reducelogsum', 'reducelogsumexp', 'reducemax', 'reducemean', 'reducemin', 'reduceprod', 'reducesum', 'reducesumsquare', 'relu', 'reshape', 'resize', 'rnn', 'round', 'scan', 'selu', 'shape', 'sigmoid', 'sign', 'sin', 'sinh', 'size', 'slice', 'softmax', 'softplus', 'softsign', 'split', 'sqrt', 'squeeze', 'sub', 'sum', 'tan', 'tanh', 'tile', 'transpose', 'unsqueeze', 'upsample', 'where', 'xor']

There are 128 models in the ONNX model zoo, of which 12 are deprecated (using very old opsets, e.g. <= 3); see onnx/models#389 for the list of deprecated models.

ONNX models and their ops: looks like ONNX-MLIR supports 118 models, of which 101 can actually be compiled and 17 failed to compile (12 of them deprecated). Count of the number of models in which each op is used (sorted in decreasing order):
@tungld great progress, thanks to all for adding operations. It might be interesting to remove the deprecated ones altogether. How many non-deprecated benchmarks are there in total, and how did you decide which ones are deprecated?
There are 128 models in total, of which 12 are deprecated, so 116 models are non-deprecated. I considered the 9 models in onnx/models#389 deprecated, plus 3 models using opset 3 that I examined myself and found to use a very old opset for the BatchNormalization op.
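The opset check can be sketched as follows, working over a name-to-opset mapping (the helper and the cutoff parameter are illustrative; with the `onnx` package the opset of a model file can be read from `onnx.load(path).opset_import`):

```python
def split_by_opset(model_opsets, min_opset=4):
    """Partition models into (current, deprecated) by opset version;
    models with opset <= 3 predate stable definitions of ops like
    BatchNormalization and are treated as deprecated here."""
    current, deprecated = [], []
    for name, opset in model_opsets.items():
        (current if opset >= min_opset else deprecated).append(name)
    return current, deprecated
```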
Updated status for onnx-mlir, Oct. 21:

ONNX-MLIR supports 105 ONNX ops: ['abs', 'acos', 'acosh', 'add', 'and', 'argmax', 'asin', 'asinh', 'atan', 'atanh', 'averagepool', 'batchnormalization', 'cast', 'ceil', 'clip', 'concat', 'constant', 'constantofshape', 'conv', 'cos', 'cumsum', 'div', 'dropout', 'elu', 'equal', 'erf', 'exp', 'expand', 'flatten', 'floor', 'gather', 'gemm', 'globalaveragepool', 'globalmaxpool', 'greater', 'greaterorequal', 'gru', 'hardsigmoid', 'identity', 'instancenormalization', 'leakyrelu', 'less', 'lessorequal', 'log', 'logsoftmax', 'loop', 'lrn', 'lstm', 'matmul', 'max', 'maxpool', 'mean', 'min', 'mod', 'mul', 'neg', 'nonzero', 'not', 'onehot', 'or', 'pad', 'pow', 'prelu', 'range', 'reciprocal', 'reducel1', 'reducel2', 'reducelogsum', 'reducelogsumexp', 'reducemax', 'reducemean', 'reducemin', 'reduceprod', 'reducesum', 'reducesumsquare', 'relu', 'reshape', 'resize', 'rnn', 'round', 'scan', 'selu', 'shape', 'sigmoid', 'sign', 'sin', 'sinh', 'size', 'slice', 'softmax', 'softplus', 'softsign', 'split', 'sqrt', 'squeeze', 'sub', 'sum', 'tan', 'tanh', 'tile', 'transpose', 'unsqueeze', 'upsample', 'where', 'xor']

Checking 116 out of 128 models in the ONNX model zoo (12 models are excluded since they use very old opsets, e.g. <= 3): looks like ONNX-MLIR supports 109 models, of which 103 can actually be compiled and 6 failed to compile.

ONNX models and their ops: count of the number of models in which each op is used (sorted in decreasing order):
Tung: Update (Oct. 27): newly supported ops: Compress, Hardmax, NonMaxSuppression. tiny-yolov3-11 can be compiled.
Summary: We examined 116 out of the 128 models in the ONNX model zoo (12 models are excluded because they use quite old opsets, e.g. <= 3). Out of the 116 models:
ONNX models and their ops: looks like ONNX-MLIR supports 112 models, of which 107 can actually be compiled and 5 failed to compile. Count of the number of models in which each op is used (sorted in decreasing order):
Updated on April 28, 2022: this is the first time we check end-to-end runs for all models in the model zoo. Before this, we only checked the compilation phase. Tested 165 models in the model zoo in which 102 models run correctly (meaning correct inference results). 165 models tested: gpt2-10, gpt2-lm-head-10, bidaf-9, t5-decoder-with-lm-head-12, t5-encoder-12, bertsquad-12-int8, bertsquad-12, bertsquad-8, bertsquad-10, roberta-sequence-classification-9, roberta-base-11, super-resolution-10, arcfaceresnet100-8, emotion-ferplus-8, emotion-ferplus-2, emotion-ferplus-7, inception-v1-7, inception-v1-6, inception-v1-9, inception-v1-12-int8, inception-v1-3, inception-v1-12, inception-v1-8, googlenet-3, googlenet-9, googlenet-12-int8, googlenet-12, googlenet-8, googlenet-6, googlenet-7, inception-v2-8, inception-v2-6, inception-v2-9, inception-v2-3, inception-v2-7, mnist-7, mnist-8, mnist-1, rcnn-ilsvrc13-9, rcnn-ilsvrc13-7, rcnn-ilsvrc13-8, rcnn-ilsvrc13-3, rcnn-ilsvrc13-6, zfnet512-6, zfnet512-7, zfnet512-12, zfnet512-3, zfnet512-8, zfnet512-9, zfnet512-12-int8, caffenet-12-int8, caffenet-9, caffenet-12, caffenet-7, caffenet-6, caffenet-8, caffenet-3, mobilenetv2-12, mobilenetv2-7, mobilenetv2-12-int8, squeezenet1.1-7, squeezenet1.0-6, squeezenet1.0-12-int8, squeezenet1.0-7, squeezenet1.0-8, squeezenet1.0-3, squeezenet1.0-9, squeezenet1.0-12, densenet-8, densenet-9, densenet-3, densenet-6, densenet-7, resnet50-v1-7, resnet101-v1-7, resnet50-caffe2-v1-9, resnet50-caffe2-v1-3, resnet34-v1-7, resnet50-caffe2-v1-6, resnet50-caffe2-v1-7, resnet152-v2-7, resnet50-caffe2-v1-8, resnet18-v1-7, resnet18-v2-7, resnet34-v2-7, resnet50-v1-12-int8, resnet50-v2-7, resnet101-v2-7, resnet152-v1-7, resnet50-v1-12, efficientnet-lite4-11-int8, efficientnet-lite4-11, bvlcalexnet-9, bvlcalexnet-7, bvlcalexnet-12, bvlcalexnet-6, bvlcalexnet-3, bvlcalexnet-12-int8, bvlcalexnet-8, vgg16-12-int8, vgg19-bn-7, vgg19-caffe2-3, vgg19-caffe2-7, vgg19-7, vgg16-7, vgg16-bn-7, vgg19-caffe2-9, vgg16-12, 
vgg19-caffe2-6, vgg19-caffe2-8, shufflenet-3, shufflenet-v2-10, shufflenet-v2-12, shufflenet-9, shufflenet-6, shufflenet-v2-12-int8, shufflenet-7, shufflenet-8, yolov3-10, FasterRCNN-12-int8, FasterRCNN-12, FasterRCNN-10, fcn-resnet50-12, fcn-resnet50-11, fcn-resnet101-11, fcn-resnet50-12-int8, yolov4, ssd-12, ssd-12-int8, ssd-10, ResNet101-DUC-7, retinanet-9, tinyyolov2-7, tinyyolov2-8, ssd_mobilenet_v1_10, ssd_mobilenet_v1_12, ssd_mobilenet_v1_12-int8, MaskRCNN-10, tiny-yolov3-11, udnie-9, pointilism-9, mosaic-9, udnie-8, candy-8, pointilism-8, rain-princess-8, rain-princess-9, mosaic-8, candy-9 102 models passed: gpt2-10, gpt2-lm-head-10, t5-decoder-with-lm-head-12, t5-encoder-12, bertsquad-12, bertsquad-10, roberta-sequence-classification-9, roberta-base-11, super-resolution-10, arcfaceresnet100-8, emotion-ferplus-8, emotion-ferplus-7, inception-v1-7, inception-v1-6, inception-v1-9, inception-v1-12, inception-v1-8, googlenet-3, googlenet-9, googlenet-12, googlenet-8, googlenet-6, googlenet-7, inception-v2-8, inception-v2-9, inception-v2-7, mnist-7, mnist-8, rcnn-ilsvrc13-9, rcnn-ilsvrc13-7, rcnn-ilsvrc13-8, rcnn-ilsvrc13-6, zfnet512-7, zfnet512-12, zfnet512-8, zfnet512-9, caffenet-9, caffenet-12, caffenet-7, caffenet-6, caffenet-8, mobilenetv2-7, squeezenet1.1-7, squeezenet1.0-6, squeezenet1.0-7, squeezenet1.0-8, squeezenet1.0-3, squeezenet1.0-9, squeezenet1.0-12, densenet-8, densenet-9, densenet-6, densenet-7, resnet50-v1-7, resnet101-v1-7, resnet50-caffe2-v1-9, resnet34-v1-7, resnet50-caffe2-v1-6, resnet50-caffe2-v1-7, resnet152-v2-7, resnet50-caffe2-v1-8, resnet18-v1-7, resnet18-v2-7, resnet34-v2-7, resnet50-v2-7, resnet101-v2-7, resnet152-v1-7, resnet50-v1-12, efficientnet-lite4-11, bvlcalexnet-9, bvlcalexnet-7, bvlcalexnet-12, bvlcalexnet-6, bvlcalexnet-8, vgg19-caffe2-7, vgg16-bn-7, vgg19-caffe2-9, vgg16-12, vgg19-caffe2-6, vgg19-caffe2-8, shufflenet-v2-10, shufflenet-v2-12, shufflenet-9, shufflenet-6, shufflenet-7, shufflenet-8, yolov3-10, yolov4, 
retinanet-9, tinyyolov2-7, tinyyolov2-8, tiny-yolov3-11, udnie-9, pointilism-9, mosaic-9, udnie-8, candy-8, pointilism-8, rain-princess-8, rain-princess-9, mosaic-8, candy-9
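"Run correctly" here means matching reference inference results within a tolerance. The checker below mirrors that idea with stdlib tools only; the tolerance values and helper name are illustrative, and the actual script presumably uses something like `numpy.allclose` on the output tensors:

```python
import math

def outputs_match(got, expected, rel_tol=1e-5, abs_tol=1e-8):
    """Compare flattened inference outputs element-wise; a model passes
    end-to-end only if every element is within tolerance of the
    reference. (math.isclose uses a symmetric relative tolerance,
    slightly different from numpy.allclose's asymmetric one.)"""
    if len(got) != len(expected):
        return False
    return all(math.isclose(g, e, rel_tol=rel_tol, abs_tol=abs_tol)
               for g, e in zip(got, expected))
```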
@tungld can we filter out the models that are too old? Presumably some of the models that don't compile also fail because they use data types we don't handle? Ideally, we would have a way to label each benchmark (e.g. opset, use of fp16, ...) so that we can pull a set that has, or lacks, certain characteristics on a per-test-machine-architecture basis.
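The labeling idea could look like the sketch below; the tag names ('int8', 'fp16', 'old-opset') and the helper are hypothetical, not an existing onnx-mlir facility:

```python
def select_models(models, exclude=frozenset()):
    """Keep models whose tag set shares nothing with the excluded tags;
    `models` maps model name -> set of characteristic tags."""
    exclude = frozenset(exclude)
    return [name for name, tags in models.items() if not (set(tags) & exclude)]
```

A test machine without quantization support could then run `select_models(zoo, {"int8"})`, while a full run would pass an empty exclusion set.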
Results when filtering out old models and int models: There are 155 models in the ONNX model zoo where 31 models are not checked because of old opsets or quantization. 124 models tested: mnist-7, bvlcalexnet-9, caffenet-8, mosaic-9, yolov3-12, squeezenet1.0-12, vgg16-12, bvlcalexnet-8, bertsquad-12, MaskRCNN-12, udnie-8, inception-v2-8, shufflenet-7, zfnet512-7, googlenet-7, resnet101-v1-7, ssd_mobilenet_v1_10, densenet-12, arcfaceresnet100-8, MaskRCNN-10, rcnn-ilsvrc13-7, roberta-base-11, candy-8, resnet18-v2-7, emotion-ferplus-8, tiny-yolov3-11, pointilism-9, googlenet-9, resnet50-v2-7, inception-v1-8, shufflenet-6, tinyyolov2-7, ResNet101-DUC-7, caffenet-9, t5-encoder-12, t5-decoder-with-lm-head-12, squeezenet1.0-8, inception-v1-12, fcn-resnet50-12, inception-v1-6, ssd_mobilenet_v1_12, 102 models passed: mnist-7, bvlcalexnet-9, caffenet-8, mosaic-9, yolov3-12, squeezenet1.0-12, vgg16-12, bvlcalexnet-8, bertsquad-12, udnie-8, shufflenet-7, inception-v2-8, zfnet512-7, googlenet-7, resnet101-v1-7, densenet-12, arcfaceresnet100-8, rcnn-ilsvrc13-7, roberta-base-11, candy-8, resnet18-v2-7, emotion-ferplus-8, tiny-yolov3-11, pointilism-9, googlenet-9, resnet50-v2-7, inception-v1-8, shufflenet-6, tinyyolov2-7, caffenet-9, squeezenet1.0-8, inception-v1-12, inception-v1-6, inception-v1-7, resnet18-v1-7, gpt2-10, rain-princess-8, resnet50-v1-7, squeezenet1.0-6, resnet34-v2-7, resnet50-caffe2-v1-7, vgg16-bn-7, efficientnet-lite4-11, mnist-8, zfnet512-9, bertsquad-10, yolov3-10, inception-v1-9, shufflenet-v2-12, resnet50-caffe2-v1-8, resnet101-v2-7, rcnn-ilsvrc13-8, tinyyolov2-8, resnet152-v1-7, bvlcalexnet-7, squeezenet1.0-7, bvlcalexnet-6, resnet34-v1-7, gpt2-lm-head-10, densenet-8, resnet50-caffe2-v1-9, emotion-ferplus-7, mosaic-8, shufflenet-9, inception-v2-7, rain-princess-9, googlenet-6, googlenet-8, caffenet-7, resnet50-v1-12, retinanet-9, super-resolution-10, roberta-sequence-classification-9, vgg19-caffe2-8, zfnet512-8, zfnet512-12, udnie-9, googlenet-12, 
mobilenetv2-7, squeezenet1.0-9, shufflenet-8, googlenet-3, yolov4, rcnn-ilsvrc13-9, densenet-9, vgg19-caffe2-6, resnet50-caffe2-v1-6, vgg19-caffe2-9, squeezenet1.0-3, bvlcalexnet-12, inception-v2-9, caffenet-6, pointilism-8, densenet-6, shufflenet-v2-10, vgg19-caffe2-7, rcnn-ilsvrc13-6, resnet152-v2-7, squeezenet1.1-7, densenet-7, candy-9, caffenet-12

22 models failed: fcn-resnet50-12, ssd_mobilenet_v1_12, bidaf-9, fcn-resnet101-11, FasterRCNN-10, zfnet512-6, ssd-12, MaskRCNN-12, ssd-10, ssd_mobilenet_v1_10, vgg16-7, MaskRCNN-10, mobilenetv2-12, inception-v2-6, FasterRCNN-12, vgg19-bn-7, ResNet101-DUC-7, bertsquad-8, vgg19-7, fcn-resnet50-11, t5-encoder-12, t5-decoder-with-lm-head-12
For some of the failures, like T5, I can help move them over to successful for our users. Once I find some time, I'm going to make a data-prep script based on the onnxt5 benchmark notebook to give the community good data prepared by onnxruntime.
Great. Thanks! I am closing this issue because a live status is now visible on the homepage of https://github.com/onnx/onnx-mlir.
This issue is meant as an ongoing discussion about the onnx-mlir coverage of the ONNX model zoo and any other models of interest. Some of the models we have tried, and the issues found, are listed below.

Supported:
- [x] MNIST
- [x] ResNet

In progress:
- [ ] ShuffleNet: slight result inconsistency being investigated, but all operations are supported

Missing ops:
- [ ] DenseNet: missing GlobalAveragePool operation
- [ ] AlexNet: missing LRN operation
- [ ] SqueezeNet: missing Dropout operation
- [ ] CaffeNet: missing LRN operation

Errors:
bertsquad8:
bidaf: