[CPU] Fixed BF16 Matmul inference precision
dmitry-gorokhov committed Feb 21, 2024
1 parent 2fe53b1 · commit eb634c4
Showing 1 changed file with 1 addition and 1 deletion.
@@ -46,7 +46,7 @@ static const TypeMapping dnnlFCTypeMapping {
 {{_f32 | _bf16 | _f16, _any, _any, _i8 | _u8}, pt(bypass(), bypass(), use<0>(), use<0>())},
 // compresses float weights which do not match input data precision
 {{_f32, _half_float, _any, _any | _any}, pt(bypass(), bypass(), use<0>(), use<0>())},
-{{_bf16, _f16, _any, _any | _any}, pt(bypass(), bypass(), use<0>(), use<0>())},
+{{_bf16, _f16 | _f32, _any, _any | _any}, pt(bypass(), bypass(), use<0>(), use<0>())},
 {{_f16, _bf16, _any, _any | _any}, pt(bypass(), bypass(), use<0>(), use<0>())},
 // quantization configuration (@todo more strict requrements for output precision?)
 {{_u8 | _i8, _i8, _any, _any}, pt(bypass(), bypass(), bypass(), use<3>())},
