whisper-base pytorch model support #269

Open
AmosLewis opened this issue Jun 21, 2024 · 0 comments

Compiling the whisper-base PyTorch model through the ONNX path fails in iree-compile with a 'linalg.generic' shape-inference error. Versions:

iree-compiler 20240620.930
iree-runtime 20240620.930

Reproduction steps:

```shell
python runmodel.py --torchmlirimport fximport --todtype default --mode onnx --outfileprefix whisper-base 1> model-run.log 2>&1
iree-import-onnx whisper-base.default.onnx -o whisper-base.default.pytorch.linalg.mlir 1> onnx-import.log 2>&1
iree-compile --iree-input-demote-i64-to-i32 --iree-hal-target-backends=llvm-cpu whisper-base.default.pytorch.linalg.mlir > whisper-base.default.vmfb 2> iree-compile.log
```
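The dynamic `[?,?,10,10]` operand in the failing Add comes straight from the exported ONNX graph. One way to confirm that is to run ONNX shape inference and dump the input shapes of the Add nodes (a sketch using the onnx Python package; the `dims` helper and the Add filter are my additions, not part of the test suite):

```python
import onnx

# Load the model produced by runmodel.py and run ONNX shape inference so
# that intermediate tensors get value_info entries with their shapes.
model = onnx.load("whisper-base.default.onnx")
inferred = onnx.shape_inference.infer_shapes(model)

def dims(name):
    """Return a tensor's shape as dim_param strings / dim_value ints, if known."""
    infos = list(inferred.graph.value_info) + list(inferred.graph.input) + list(inferred.graph.output)
    for vi in infos:
        if vi.name == name:
            return [d.dim_param or d.dim_value for d in vi.type.tensor_type.shape.dim]
    return None  # e.g. initializers have no value_info entry

# Print the input shapes of every Add node; the failing one pairs a static
# [1, 8, 10, 10] tensor with a partially dynamic [?, ?, 10, 10] mask.
for node in inferred.graph.node:
    if node.op_type == "Add":
        print(node.name, [dims(i) for i in node.input])
```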
iree-compile fails with:

```
whisper-base.default.pytorch.linalg.mlir:139:12: error: 'linalg.generic' op inferred input/output operand #1 has shape's dimension #1 to be 8, but found 1
    %135 = torch.operator "onnx.Add"(%134, %106) : (!torch.vtensor<[1,8,10,10],f32>, !torch.vtensor<[?,?,10,10],f32>) -> !torch.vtensor<[?,8,10,10],f32>
           ^
whisper-base.default.pytorch.linalg.mlir:139:12: note: see current operation:
%397 = "linalg.generic"(%395, %318, %396) <{indexing_maps = [affine_map<(d0, d1, d2, d3) -> (0, d1, d2, d3)>, affine_map<(d0, d1, d2, d3) -> (d0, d1, d2, d3)>, affine_map<(d0, d1, d2, d3) -> (d0, d1, d2, d3)>], iterator_types = [#linalg.iterator_type<parallel>, #linalg.iterator_type<parallel>, #linalg.iterator_type<parallel>, #linalg.iterator_type<parallel>], operandSegmentSizes = array<i32: 2, 1>}> ({
^bb0(%arg1182: f32, %arg1183: f32, %arg1184: f32):
  %1727 = "arith.addf"(%arg1182, %arg1183) <{fastmath = #arith.fastmath<none>}> : (f32, f32) -> f32
  "linalg.yield"(%1727) : (f32) -> ()
}) : (tensor<1x8x10x10xf32>, tensor<1x1x10x10xf32>, tensor<1x8x10x10xf32>) -> tensor<1x8x10x10xf32>
```
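My reading of the IR: operand #1 is the `tensor<1x1x10x10xf32>` attention mask, and its indexing map is the identity map rather than a broadcast map, so 'linalg.generic' infers dimension #1 to be 8 from the other operands and rejects the mask's size-1 dimension. The same add broadcasts fine at the PyTorch level (a minimal sketch of the shapes involved; the variable names are mine):

```python
import torch

scores = torch.randn(1, 8, 10, 10)  # attention scores: [batch, heads, q_len, k_len]
mask = torch.randn(1, 1, 10, 10)    # what the dynamic [?, ?, 10, 10] operand resolves to

# PyTorch broadcasts the size-1 head dimension (1 -> 8). The lowered
# linalg.generic instead uses an identity indexing map for the mask, so the
# implicit broadcast is lost and shape inference fails at compile time.
out = scores + mask
print(out.shape)  # torch.Size([1, 8, 10, 10])
```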

The failure also reproduces in the e2eshark test run:

```shell
python ./run.py --tolerance 0.001 0.001 --cachedir ./huggingface_cache -f pytorch -g models --mode onnx --report --torchtolinalg -j 12
```

```
Starting e2eshark tests. Using 12 processes
Cache Directory: /home/chi/src/SHARK-TestSuite/e2eshark/huggingface_cache
Tolerance for comparing floating point (atol, rtol) = (0.001, 0.001)
Note: No Torch MLIR build provided using --torchmlirbuild. iree-import-onnx will be used to convert onnx to torch onnx mlir
IREE BUILD: IREE in PATH /home/chi/src/SHARK-TestSuite/e2eshark/e2e_venv/bin will be used
Test run directory: /home/chi/src/SHARK-TestSuite/e2eshark/test-run
Framework:pytorch mode=onnx backend=llvm-cpu runfrom=model-run runupto=inference
Test list: ['pytorch/models/gpt2', 'pytorch/models/opt-125M', 'pytorch/models/bart-large', 'pytorch/models/whisper-base', 'pytorch/models/beit-base-patch16-224-pt22k-ft22k', 'pytorch/models/deit-small-distilled-patch16-224', 'pytorch/models/resnet50', 'pytorch/models/llama2-7b-GPTQ', 'pytorch/models/mobilebert-uncased', 'pytorch/models/vicuna-13b-v1.3', 'pytorch/models/phi-1_5', 'pytorch/models/opt-350m', 'pytorch/models/vit-base-patch16-224', 'pytorch/models/phi-2', 'pytorch/models/dlrm', 'pytorch/models/gpt2-xl', 'pytorch/models/opt-125m-gptq', 'pytorch/models/whisper-medium', 'pytorch/models/stablelm-3b-4e1t', 'pytorch/models/opt-1.3b', 'pytorch/models/whisper-small', 'pytorch/models/t5-large', 'pytorch/models/gemma-7b', 'pytorch/models/miniLM-L12-H384-uncased', 'pytorch/models/mit-b0', 'pytorch/models/bert-large-uncased', 'pytorch/models/t5-base', 'pytorch/models/llama2-7b-hf', 'pytorch/models/bge-base-en-v1.5']
Test pytorch/models/resnet50 passed
Test pytorch/models/deit-small-distilled-patch16-224 passed
Test pytorch/models/whisper-base failed [iree-compile]
```