Fixing #321 issue #341

Merged: 4 commits into nv-legate:branch-22.05 on May 11, 2022
Conversation

@ipdemes (Contributor) commented on May 10, 2022:

  • Adding dispatch logic for cunumeric-specific types
  • Adding logic for the copy operation on cunumeric types (PointN)
  • Adding logic in advanced indexing for copies when the indirect field has shape (1,)

This PR takes a different approach to fixing issue #321.

@ipdemes requested a review from magnatelee on May 10, 2022 03:49
def _convert_future_to_store(self, a):
    store = self.context.create_store(
        a.dtype,
        shape=(1,),
Contributor:

Is there a reason you don't pass a.shape here? I suspect doing this could potentially require broadcasting later if the shape was not (1,) but some other degenerate case (e.g., (1,1,)).

Contributor Author (ipdemes):

I assumed that the shape of a Future is always (1,). I will change it to a.shape.

index_array = self._convert_future_to_store(index_array)
result_store = self.context.create_store(
    self.dtype,
    shape=(1,),
Contributor:

Same question here: why can't we just use index_array.shape?

dim1, dim2, std::forward<Functor>(f), std::forward<Fnargs>(args)...);
}

// template <typename Functor, typename... Fnargs>
Contributor:

Remove these comments.

      return f.template operator()<CuNumericTypeCodes::CUNUMERIC_TYPE_POINT4, DIM>(
        std::forward<Fnargs>(args)...);
    }
    case CuNumericTypeCodes::CUNUMERIC_TYPE_POINT5: {
Contributor:

I think this dispatch should abide by LEGION_MAX_DIM and avoid instantiating types that will never arise for a given LEGION_MAX_DIM. I guess you could check code <= MAX_TYPE_NUMBER early on so that you can fall back to the core's dispatch more quickly.
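
For reference, here is a minimal, self-contained sketch of that pattern. All names and numeric values below (the CuNumericTypeCodes enumerators, MAX_TYPE_NUMBER, cunumeric_type_dispatch, PrintPointType) are hypothetical stand-ins, not the actual cunumeric declarations: the dispatcher bails out to the core's dispatch as soon as it sees a core type code, and each Point<N> case is guarded by LEGION_MAX_DIM so cases that can never arise are never instantiated.

```cpp
#include <cstdio>
#include <utility>

#ifndef LEGION_MAX_DIM
#define LEGION_MAX_DIM 3
#endif

// Hypothetical stand-ins for the real type-code enums and boundaries.
enum class CuNumericTypeCodes : int {
  CUNUMERIC_TYPE_POINT1 = 1001,
  CUNUMERIC_TYPE_POINT2 = 1002,
  CUNUMERIC_TYPE_POINT3 = 1003,
};
constexpr int MAX_TYPE_NUMBER = 1000;  // hypothetical: largest core type code

template <typename Functor, typename... Fnargs>
void cunumeric_type_dispatch(int code, Functor f, Fnargs&&... args)
{
  if (code <= MAX_TYPE_NUMBER) {
    // Core Legate type: fall back to the core's dispatch right away
    // (represented here by a message only).
    std::printf("falling back to the core dispatch for code %d\n", code);
    return;
  }
  switch (static_cast<CuNumericTypeCodes>(code)) {
    case CuNumericTypeCodes::CUNUMERIC_TYPE_POINT1:
      f.template operator()<CuNumericTypeCodes::CUNUMERIC_TYPE_POINT1>(
        std::forward<Fnargs>(args)...);
      return;
#if LEGION_MAX_DIM >= 2
    case CuNumericTypeCodes::CUNUMERIC_TYPE_POINT2:
      f.template operator()<CuNumericTypeCodes::CUNUMERIC_TYPE_POINT2>(
        std::forward<Fnargs>(args)...);
      return;
#endif
#if LEGION_MAX_DIM >= 3
    case CuNumericTypeCodes::CUNUMERIC_TYPE_POINT3:
      f.template operator()<CuNumericTypeCodes::CUNUMERIC_TYPE_POINT3>(
        std::forward<Fnargs>(args)...);
      return;
#endif
    default:
      break;  // Point<N> codes beyond LEGION_MAX_DIM are never instantiated
  }
}

// Example functor that just reports which Point<N> code was selected.
struct PrintPointType {
  template <CuNumericTypeCodes CODE>
  void operator()() const
  {
    std::printf("dispatched cunumeric type code %d\n", static_cast<int>(CODE));
  }
};

int main()
{
  cunumeric_type_dispatch(1002, PrintPointType{});  // Point<2> case
  cunumeric_type_dispatch(3, PrintPointType{});     // core type: early fallback
}
```
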

#endif
#if LEGION_MAX_DIM >= 2
    case 2: {
      return inner_type_dispatch_fn<2>{}(code, f, std::forward<Fnargs>(args)...);
Contributor:

Why is this not cunumeric::inner_type_dispatch_fn? (Same question for the other cases.)

Contributor Author (ipdemes):

Yes, it should have cunumeric:: in front.

}

template <typename Functor, typename... Fnargs>
constexpr decltype(auto) double_dispatch(int dim1, int dim2, Functor f, Fnargs&&... args)
Contributor:

Where are you using this?

Contributor Author (ipdemes):

I am not using it yet, but I might in the future. Would you recommend removing currently unused functions?

Contributor:

What use case do you have in mind?

// }

template <typename TypeCode, typename Functor, typename... Fnargs>
constexpr decltype(auto) type_dispatch(TypeCode code, Functor f, Fnargs&&... args)
Contributor:

Is this really used in the code? If so, it has the same issue as the inner dispatch of not abiding by LEGION_MAX_DIM.

@@ -64,6 +69,32 @@ struct UnaryOpImpl {
  {
    assert(false);
  }

  template <CuNumericTypeCodes CODE, int DIM>
  void operator()(UnaryOpArgs& args) const
Contributor:

Can we move this to a separate functor instead of adding it to the existing one? I want us to avoid any accidental instantiations of this template.

Contributor Author (ipdemes):

Sounds good
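
A rough, self-contained sketch of that split (the functor and type-code names below are simplified stand-ins, not the real UnaryOpImpl / UnaryCopyImpl declarations): keeping the Point-specific copy path in its own functor means a dispatcher that instantiates one functor's operator() for every code it handles can never pull in the copy overloads by accident.

```cpp
#include <cstdio>

// A generic dispatcher that instantiates the functor's operator() for every
// type code it can see; anything added to that functor gets instantiated for
// every op routed through this dispatcher.
template <typename Functor>
void dispatch_all_codes(const Functor& f)
{
  f.template operator()<11>();    // e.g. a core float64 code (hypothetical value)
  f.template operator()<1002>();  // e.g. a cunumeric Point<2> code (hypothetical value)
}

// Keeping only the standard element types here...
struct UnaryOpImplSketch {
  template <int CODE>
  void operator()() const { std::printf("generic op, code=%d\n", CODE); }
};

// ...and the Point-specific copy path in its own functor means the copy
// overload is only instantiated where it is explicitly dispatched.
struct UnaryCopyImplSketch {
  template <int CODE>
  void operator()() const { std::printf("point copy, code=%d\n", CODE); }
};

int main()
{
  dispatch_all_codes(UnaryOpImplSketch{});    // never touches the copy functor
  dispatch_all_codes(UnaryCopyImplSketch{});  // copy path instantiated only here
}
```
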

@@ -72,7 +103,7 @@ struct UnaryOpDispatch {
   void operator()(UnaryOpArgs& args) const
   {
     auto dim = std::max(args.in.dim(), 1);
-    double_dispatch(dim, args.in.code(), UnaryOpImpl<KIND, OP_CODE>{}, args);
+    cunumeric::double_dispatch(dim, args.in.code(), UnaryOpImpl<KIND, OP_CODE>{}, args);
Contributor:

I think you can do better than this by checking the op code from the outside and choosing a point copy functor if the op code was COPY. Then, you don't even need to do this custom double dispatch for all the other ops.
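
A self-contained sketch of that suggestion, with simplified stand-in names and signatures rather than the actual cunumeric sources: test the op code once, outside the dispatch, so only COPY takes the Point-aware path and every other unary op keeps using the core dispatch.

```cpp
#include <cstdio>

enum class UnaryOpCode { ABSOLUTE, COPY, NEGATIVE };

// Stand-ins for the two dispatch paths.
void core_double_dispatch(int dim, int code)
{
  std::printf("core dispatch: dim=%d code=%d\n", dim, code);
}

void point_aware_copy_dispatch(int dim, int code)
{
  std::printf("point-aware copy dispatch: dim=%d code=%d\n", dim, code);
}

template <UnaryOpCode OP_CODE>
void dispatch_unary_op(int dim, int code)
{
  if constexpr (OP_CODE == UnaryOpCode::COPY) {
    // Only copies can encounter cunumeric-specific Point<N> element types.
    point_aware_copy_dispatch(dim, code);
  } else {
    core_double_dispatch(dim, code);
  }
}

int main()
{
  dispatch_unary_op<UnaryOpCode::COPY>(2, 1002);    // takes the Point-aware path
  dispatch_unary_op<UnaryOpCode::ABSOLUTE>(2, 11);  // stays on the core path
}
```
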

*
*/

#pragma once
Contributor:

If this is used only in the unary op, I'd rather have it there than here. This hasn't become a general utility yet.


namespace cunumeric {

template <CuNumericTypeCodes CODE>
Contributor:

Same here: if you don't use this anywhere other than the unary op, keep it there.

@magnatelee self-requested a review on May 11, 2022 03:27
@ipdemes merged commit 26a9951 into nv-legate:branch-22.05 on May 11, 2022
@manopapad (Contributor):

@ipdemes I have added some comments above. I suggest creating a new PR to address them.

@manopapad (Contributor):

@ipdemes One more thing to note: Clang is throwing some warnings such as:

./cunumeric/unary/unary_op.h:80:12: warning: comparison of different enumeration types in switch statement ('legate_core_type_code_t' and 'CuNumericTypeCodes') [-Wenum-compare-switch]
      case CuNumericTypeCodes::CUNUMERIC_TYPE_POINT4: {
           ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
./cunumeric/unary/unary_op.h:131:14: note: in instantiation of function template specialization 'cunumeric::inner_type_dispatch_fn<1>::operator()<legate_core_type_code_t, cunumeric::UnaryCopyImpl<cunumeric::VariantKind::CPU>, cunumeric::UnaryOpArgs &>'
      requested here
      return cunumeric::inner_type_dispatch_fn<1>{}(code, f, std::forward<Fnargs>(args)...);
             ^
./cunumeric/unary/unary_op_template.inl:169:18: note: in instantiation of function template specialization 'cunumeric::double_dispatch<legate_core_type_code_t, cunumeric::UnaryCopyImpl<cunumeric::VariantKind::CPU>, cunumeric::UnaryOpArgs &>' requested
      here
      cunumeric::double_dispatch(dim, args.in.code(), UnaryCopyImpl<KIND>{}, args);
                 ^
./cunumeric/unary/unary_op_util.h:84:25: note: in instantiation of function template specialization 'cunumeric::UnaryOpDispatch<cunumeric::VariantKind::CPU>::operator()<cunumeric::UnaryOpCode::ABSOLUTE>' requested here
      return f.template operator()<UnaryOpCode::ABSOLUTE>(std::forward<Fnargs>(args)...);
                        ^
cunumeric/unary/unary_op.cc:106:3: note: in instantiation of function template specialization 'cunumeric::unary_op_template<cunumeric::VariantKind::CPU>' requested here
  unary_op_template<VariantKind::CPU>(context);
  ^

and

./cunumeric/unary/unary_op.h:68:12: warning: case value not in enumerated type 'legate_core_type_code_t' [-Wswitch]
      case CuNumericTypeCodes::CUNUMERIC_TYPE_POINT2: {
           ^
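
One possible way to silence these warnings, sketched here with simplified stand-in enums rather than the real legate_core_type_code_t and CuNumericTypeCodes definitions, is to switch on a common integer type instead of mixing two enumeration types in a single switch statement:

```cpp
#include <cstdio>

// Simplified stand-ins for the two enums that the warning complains about.
enum legate_core_type_code_t { LEGATE_INT32 = 0, LEGATE_FLOAT64 = 1 };
enum class CuNumericTypeCodes : int { CUNUMERIC_TYPE_POINT2 = 1002 };

void dispatch(int code)  // accept the code as a plain int
{
  switch (code) {
    case LEGATE_INT32:
      std::printf("core int32 case\n");
      break;
    case static_cast<int>(CuNumericTypeCodes::CUNUMERIC_TYPE_POINT2):
      std::printf("cunumeric Point<2> case\n");
      break;
    default:
      std::printf("unhandled code %d\n", code);
      break;
  }
}

int main()
{
  dispatch(LEGATE_INT32);
  dispatch(static_cast<int>(CuNumericTypeCodes::CUNUMERIC_TYPE_POINT2));
}
```
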

@ipdemes (Contributor Author) commented on May 12, 2022:

Thank you! I will try to fix it today.

@ipdemes deleted the cunumeric_321 branch on May 26, 2022 20:20
manopapad added a commit to manopapad/cunumeric that referenced this pull request on Sep 7, 2022