New Gather op reference implementation. #3633
Conversation
Force-pushed from 2bf197d to 9947bbe
Force-pushed from 9947bbe to 59455a3
@vladislav-volkov, please check.
Several resolved review threads on ngraph/core/reference/include/ngraph/runtime/reference/gather.hpp (outdated).
Create span.hpp for a common implementation of span.
@vladislav-volkov, can you look at the review once again?
Resolved review thread on ngraph/core/reference/include/ngraph/runtime/reference/gather.hpp (outdated).
const auto reminder_part_shape = span(params_shape).subspan(axis + 1);
const auto found_out_shape =
    join(params_axes_part, span(indices_shape), reminder_part_shape);
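For context, Gather's output shape concatenates params_shape[0:axis], indices_shape, and params_shape[axis+1:], which is what the join here computes. A minimal sketch of that shape computation using plain std::vector (the span/join helpers are the PR's own, so this hypothetical gather_out_shape avoids them):

```cpp
#include <cstddef>
#include <vector>

using Shape = std::vector<std::size_t>;

// Output shape of Gather: params[0:axis] + indices_shape + params[axis+1:].
Shape gather_out_shape(const Shape& params, const Shape& indices, std::size_t axis)
{
    Shape out(params.begin(), params.begin() + axis);
    out.insert(out.end(), indices.begin(), indices.end());
    out.insert(out.end(), params.begin() + axis + 1, params.end());
    return out;
}
```

For example, gathering with indices of shape {5} along axis 1 of a {2, 3, 4} tensor yields {2, 5, 4}.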
We could extend the span class to aggregate multiple containers and spans into one superspan. :D
Sure, we can think about a Superspan or a Multispan. But following std::span, a Span points to a contiguous part of memory.
/// @brief Span should mimic std::span
template <typename Element>
class Span
This is not a change request, but I should mention that this class can operate only with vectors and arrays. We cannot use an arbitrary random-access iterator, because the container must occupy contiguous memory (like std::vector or a C array).
That is the idea: to work on a contiguous part of memory. But the idea of creating some Multispan is worth considering.
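As a reference point, here is a minimal sketch of such a std::span-like view over contiguous memory (illustrative only; the PR's actual span.hpp has a richer interface):

```cpp
#include <cstddef>

// Minimal non-owning view over a contiguous range, in the spirit of std::span.
template <typename Element>
class Span
{
public:
    Span(Element* data, std::size_t size) : m_data{data}, m_size{size} {}

    Element* begin() const { return m_data; }
    Element* end() const { return m_data + m_size; }
    std::size_t size() const { return m_size; }
    Element& operator[](std::size_t i) const { return m_data[i]; }

    // View of the elements starting at offset; contiguity is preserved.
    Span subspan(std::size_t offset) const
    {
        return offset < m_size ? Span{m_data + offset, m_size - offset}
                               : Span{m_data + m_size, 0};
    }

private:
    Element* m_data;
    std::size_t m_size;
};
```

Because it stores only a raw pointer and a size, any contiguous container (std::vector, a C array) can back it, which is exactly the constraint noted in this thread.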
Resolved review thread on ngraph/core/reference/include/ngraph/runtime/reference/gather.hpp (outdated).
const auto copy_size = shape_size(reminder_part_shape);
const size_t copy_round_in_batch =
    indices_shape.empty() ? 1 : shape_size(indices_shape) / indices_shape.back();
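To make the chunked-copy arithmetic concrete: each gathered index copies one contiguous block of copy_size = shape_size(params_shape[axis+1:]) elements, repeated once per leading (pre-axis) coordinate. A hedged, self-contained sketch of such a reference loop over flat buffers (not the PR's actual code; gather and shape_size here are local stand-ins):

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <functional>
#include <numeric>
#include <vector>

using Shape = std::vector<std::size_t>;

// Product of dimensions; the empty product is 1.
std::size_t shape_size(const Shape& s)
{
    return std::accumulate(
        s.begin(), s.end(), std::size_t{1}, std::multiplies<std::size_t>{});
}

// Reference Gather over flat buffers:
// out[p, i, s] = params[p, indices[i], s], with p ranging over
// params_shape[0:axis] and s over params_shape[axis+1:];
// each (p, i) pair copies exactly one contiguous chunk.
template <typename T>
void gather(const T* params, const int64_t* indices, T* out,
            const Shape& params_shape, std::size_t indices_count, std::size_t axis)
{
    const std::size_t outer =
        shape_size(Shape(params_shape.begin(), params_shape.begin() + axis));
    const std::size_t axis_dim = params_shape[axis];
    const std::size_t copy_size =
        shape_size(Shape(params_shape.begin() + axis + 1, params_shape.end()));

    for (std::size_t p = 0; p < outer; ++p)
    {
        for (std::size_t i = 0; i < indices_count; ++i)
        {
            const T* src = params + (p * axis_dim + indices[i]) * copy_size;
            T* dst = out + (p * indices_count + i) * copy_size;
            std::copy(src, src + copy_size, dst);
        }
    }
}
```

Gathering rows {2, 0} from a {3, 2} tensor, for instance, copies two chunks of copy_size == 2 each.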
What if indices_shape.back() is equal to 0?
I'll add a check here. What's the general approach for a shape with a 0 on one of the axes? How should we handle this, given that shape_size will return 0?
In general we should be ready to get a 0 in a dimension. At the moment I think we can throw an exception or return an error... But as far as I know, 0 is a valid value in TensorFlow models, and I think in the future we should support working with such dimensions.
@lazarevevgeny Could you advise or add something?
Yes, it is possible that the dimension will be equal to 0 (meaning an empty tensor). In this case the corresponding output dimension should also be 0. It would be great if we could handle this case in this PR, but I don't want to insist on it, since 0-dimension tensors are not yet supported and we don't have models with such a case now.
To clarify: is it possible to have a tensor with rank > 1 and one of axes[i] == 0, and does this mean an 'empty tensor'? How is a tensor with rank == 0 distinguished from a tensor with rank > 0 and some axes[i] == 0?
Yes, it is possible. Consider a model where you calculate bounding boxes for objects and then filter them by some probability threshold. There could be a situation when all the boxes have a probability lower than the threshold, so the dimension size will be 0. Rank == 0 means that the number of dimensions is 0, so the tensor is a scalar and its shape is []. An empty tensor is a tensor of arbitrary rank N, but with one of the dimensions equal to 0; for example, a possible shape of an empty tensor is [100, 0, 123, 454].
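To illustrate the distinction numerically: the element count is the product of the dimensions, and the empty product is 1, so a scalar (shape []) holds one element while any shape containing a 0 describes an empty tensor. A small sketch (element_count is a hypothetical stand-in, not the actual ngraph shape_size API):

```cpp
#include <cstddef>
#include <functional>
#include <numeric>
#include <vector>

// Number of elements a shape describes: the product of its dimensions.
// The empty product is 1, so a rank-0 (scalar) shape holds one element,
// while any 0-sized dimension makes the whole tensor empty.
std::size_t element_count(const std::vector<std::size_t>& shape)
{
    return std::accumulate(shape.begin(), shape.end(), std::size_t{1},
                           std::multiplies<std::size_t>{});
}
```

So [] gives 1, [100, 0, 123, 454] gives 0, and [2, 3, 4] gives 24.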
- Does 'empty tensor' mean 'no data' (for now, shape_size for the shape of an empty tensor returns 0)?
- In the Gather op (and probably many other ops), if the output tensor is an empty tensor, does the operator have nothing to do?
- Yes
- Yes
Resolved review thread on ngraph/core/reference/include/ngraph/runtime/reference/gather.hpp (outdated).
* New Gather op reference implementation.
* Unify span implementation for gather and gather_nd. Create span.hpp for common implementation of span.
* Move span to utils directory.
* Address review comments.
* update span
* Address PR comments.

Co-authored-by: Patryk Elszkowski <[email protected]>
Ticket: 37440