[GPU] Fix wrong dynamic convolution format selection for SD1.5 on dpas platforms (#24082)

_Dynamic convolutions with explicit padding run in the planar bfyx format in
clDNN. However, this limitation applies only to clDNN, not to oneDNN. Because
of this unexpected format selection for dynamic convolutions, SD1.5 on
dpas-capable platforms always ran through clDNN, which caused poor
performance._

### Tickets:
 - *138632*

---------

Signed-off-by: hyunback <[email protected]>
hyunback authored Apr 22, 2024
1 parent 8bf2ee9 commit f043f38
Showing 1 changed file with 7 additions and 5 deletions.
12 changes: 7 additions & 5 deletions src/plugins/intel_gpu/src/graph/layout_optimizer.cpp
@@ -1140,9 +1140,14 @@ format layout_optimizer::get_expected_format(convolution_node const& node) {
         return format::adjust_to_rank(format::bfyx, output_layout.get_partial_shape().size());
     }
 
-    // Use planar bfyx format for dynamic convolutions with explicit padding
-    if (node.is_dynamic() && output_layout.get_partial_shape().size() == 4 && node.use_explicit_padding() && !i8_u8_input)
+    bool onednn_valid_post_ops = get_post_ops_count(node) <= 32;
+    bool use_onednn_impls = _optimization_attributes.use_onednn_impls && input_layout.data_type != data_types::f32;
+
+    // Use planar bfyx format for dynamic convolutions with explicit padding in clDNN
+    if (node.is_dynamic() && output_layout.get_partial_shape().size() == 4 && node.use_explicit_padding() && !i8_u8_input &&
+        !(use_onednn_impls && onednn_valid_post_ops)) {
         return format::bfyx;
+    }
 
     if (input_layout.is_dynamic() || output_layout.is_dynamic()) {
         if (input_layout.get_partial_shape().size() <= 4)
@@ -1154,9 +1159,6 @@ format layout_optimizer::get_expected_format(convolution_node const& node) {
 
     const float cond_denom = _total_conv > 0 ? 1.0f / static_cast<float>(_total_conv) : 1.0f;
 
-    bool onednn_valid_post_ops = get_post_ops_count(node) <= 32;
-    bool use_onednn_impls = _optimization_attributes.use_onednn_impls && input_layout.data_type != data_types::f32;
-
     if (use_onednn_impls && onednn_valid_post_ops && node.get_preferred_output_fmt() != format::any) {
         expected_format = node.get_preferred_output_fmt();
     } else {
