Numpy compatible dtype inference for tvm.convert and tvm.const #3861

Merged: 11 commits into apache:master on Sep 9, 2019

Conversation

sxjscience (Member)

Thanks for contributing to TVM! Please refer to the contribution guidelines at https://docs.tvm.ai/contribute/ for useful information and tips. After the pull request is submitted, please request code reviews from Reviewers.

@sxjscience (Member, Author)

I find that a large portion of the existing tests relies on the assumption that tvm.convert(1) will have dtype=int32. So I'm going to convert int to int32 and float to float32. Would it be better to ensure full numpy compatibility later? @yzhliu

@sxjscience changed the title from "Numpy compatible dtype inference for scalar" to "Numpy compatible dtype inference for tvm.convert and tvm.const" on Aug 30, 2019
@sxjscience (Member, Author)

@tqchen Currently, I convert int to int32 and float to float32, which is incompatible with Python's defaults but should be more user-friendly, since int32 is the more common loop-variable type in C++. Also, all numpy scalars, like np.int32 and np.float32, keep their own dtype after conversion.
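For concreteness, here is a small usage sketch of the rule described above, assuming the 2019-era Python API; the expected dtypes are taken from this discussion rather than from a spec:

```python
import numpy as np
import tvm

# Plain Python scalars map to the 32-bit types that are more common in DL workloads.
assert tvm.const(1).dtype == 'int32'      # Python int -> int32, not int64
assert tvm.const(1.0).dtype == 'float32'  # Python float -> float32, not float64

# NumPy scalars keep their own dtype.
assert tvm.const(np.float64(1)).dtype == 'float64'
assert tvm.const(np.int32(1)).dtype == 'int32'
```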

@yzhliu (Member) commented Aug 30, 2019

I think it is good.

@junrushao (Member)

Wait a moment... my question is, why bother? It seems to me that one can always set the dtype manually without issue...

@sxjscience (Member, Author)

@junrushao1994 tvm.convert(np.float64(1)) will not return the correct dtype, which adds an additional burden on the user.

@junrushao (Member)

I see. Is there a more general approach to dealing with this issue?

@sxjscience (Member, Author)

@junrushao1994 I think for const and convert the current approach should be general enough. It mainly deals with the situation where the dtype of the input is not known beforehand and we need to infer the dtype based on the data.
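For reference, a minimal sketch of the kind of inference helper being described; the function name and exact branches are illustrative, not necessarily the code merged in this PR:

```python
import numpy as np

def _scalar_dtype_inference(value):
    """Illustrative scalar dtype inference (hypothetical helper)."""
    if hasattr(value, 'dtype'):
        # NumPy scalars (np.int8(1), np.float64(1), ...) carry their dtype with them.
        return str(value.dtype)
    if isinstance(value, bool):
        return 'bool'            # checked before int, since bool is a subclass of int
    if isinstance(value, float):
        return 'float32'         # intentionally not Python's float64; see discussion above
    if isinstance(value, int):
        return 'int32'
    raise ValueError("cannot infer dtype for value of type %s" % type(value))
```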

@junrushao (Member)

Sounds good :-)

@junrushao (Member) commented Sep 1, 2019

Just took a look at your errors with TFLite,

======================================================================
ERROR: Pooling
----------------------------------------------------------------------

Traceback (most recent call last):
  File "/usr/local/lib/python3.6/dist-packages/nose/case.py", line 198, in runTest
    self.test(*self.arg)
  File "/workspace/tests/python/frontend/tflite/test_forward.py", line 276, in test_forward_pooling
    strides=[1, 1])
  File "/workspace/tests/python/frontend/tflite/test_forward.py", line 250, in _test_pooling
    _test_pooling_iteration(input_shape, **kwargs)
  File "/workspace/tests/python/frontend/tflite/test_forward.py", line 246, in _test_pooling_iteration
    compare_tflite_with_tvm(x,'Placeholder:0', [in_data], [out])
  File "/workspace/tests/python/frontend/tflite/test_forward.py", line 149, in compare_tflite_with_tvm
    num_output=len(out_names), out_names=out_names)
  File "/workspace/tests/python/frontend/tflite/test_forward.py", line 75, in run_tvm_graph
    graph, lib, params = relay.build(mod, target, params=params)
  File "/workspace/python/tvm/relay/build_module.py", line 207, in build
    graph_json, mod, params = bld_mod.build(func, target, target_host, params)
  File "/workspace/python/tvm/relay/build_module.py", line 108, in build
    self._build(func, target, target_host)
  File "tvm/_ffi/_cython/./function.pxi", line 310, in tvm._ffi._cy3.core.FunctionBase.__call__
  File "tvm/_ffi/_cython/./function.pxi", line 245, in tvm._ffi._cy3.core.FuncCall
  File "tvm/_ffi/_cython/./function.pxi", line 234, in tvm._ffi._cy3.core.FuncCall3
  File "tvm/_ffi/_cython/./base.pxi", line 171, in tvm._ffi._cy3.core.CALL

TypeError: Traceback (most recent call last):
  [bt] (8) /workspace/build/libtvm.so(tvm::relay::ScheduleGetter::VisitExpr_(tvm::relay::CallNode const*)+0x9ce) [0x7f3b7b29db4e]
  [bt] (7) /workspace/build/libtvm.so(std::_Function_handler<void (tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*), void tvm::runtime::TypedPackedFunc<tvm::Array<tvm::Tensor, void> (tvm::Attrs const&, tvm::Array<tvm::Tensor, void> const&, tvm::relay::Type const&, tvm::Target const&)>::AssignTypedLambda<tvm::Array<tvm::Tensor, void> (*)(tvm::Attrs const&, tvm::Array<tvm::Tensor, void> const&, tvm::relay::Type const&, tvm::Target const&)>(tvm::Array<tvm::Tensor, void> (*)(tvm::Attrs const&, tvm::Array<tvm::Tensor, void> const&, tvm::relay::Type const&, tvm::Target const&))::{lambda(tvm::runtime::TVMArgs const&, tvm::runtime::TVMRetValue*)#1}>::_M_invoke(std::_Any_data const&, tvm::runtime::TVMArgs&&, tvm::runtime::TVMRetValue*&&)+0xea) [0x7f3b7b01d5fa]
  [bt] (6) /workspace/build/libtvm.so(tvm::Array<tvm::Tensor, void> tvm::relay::Pool2DCompute<tvm::relay::AvgPool2DAttrs, (topi::nn::PoolType)0>(tvm::Attrs const&, tvm::Array<tvm::Tensor, void> const&, tvm::relay::Type const&, tvm::Target const&)+0x5f7) [0x7f3b7b0ddc37]
  [bt] (5) /workspace/build/libtvm.so(topi::nn::pool_impl(tvm::Tensor const&, tvm::Array<tvm::Expr, void> const&, tvm::Array<tvm::Expr, void> const&, tvm::Array<tvm::Expr, void> const&, topi::nn::PoolType, bool, unsigned long, unsigned long, bool)+0x1517) [0x7f3b7b0d09e7]
  [bt] (4) /workspace/build/libtvm.so(tvm::compute(tvm::Array<tvm::Expr, void>, std::function<tvm::Expr (tvm::Array<tvm::Var, void> const&)>, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, tvm::Map<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, tvm::NodeRef, void, void>)+0x4de) [0x7f3b7afbb8de]
  [bt] (3) /workspace/build/libtvm.so(std::_Function_handler<tvm::Expr (tvm::Array<tvm::Var, void> const&), topi::nn::pool_impl(tvm::Tensor const&, tvm::Array<tvm::Expr, void> const&, tvm::Array<tvm::Expr, void> const&, tvm::Array<tvm::Expr, void> const&, topi::nn::PoolType, bool, unsigned long, unsigned long, bool)::{lambda(tvm::Array<tvm::Var, void> const&)#3}>::_M_invoke(std::_Any_data const&, tvm::Array<tvm::Var, void> const&)+0x20) [0x7f3b7b0c8460]
  [bt] (2) /workspace/build/libtvm.so(topi::nn::pool_impl(tvm::Tensor const&, tvm::Array<tvm::Expr, void> const&, tvm::Array<tvm::Expr, void> const&, tvm::Array<tvm::Expr, void> const&, topi::nn::PoolType, bool, unsigned long, unsigned long, bool)::{lambda(tvm::Array<tvm::Var, void> const&)#3}::operator()(tvm::Array<tvm::Var, void> const&) const+0x50d) [0x7f3b7b0c7bcd]
  [bt] (1) /workspace/build/libtvm.so(tvm::ir::BinaryOpNode<tvm::ir::Min>::make(tvm::Expr, tvm::Expr)+0xeb) [0x7f3b7acca3cb]
  [bt] (0) /workspace/build/libtvm.so(dmlc::LogMessageFatal::~LogMessageFatal()+0x32) [0x7f3b7ac8e462]
  File "/workspace/include/tvm/ir.h", line 134
TypeError: Check failed: a.type() == b.type(): mismatched types

The stack trace here indicates that in the implementation of pooling in TOPI, there are some type mismatches (I would assume something like i32 vs i64). Could you fix this?

@junrushao (Member) left a comment

Did another round of review. Some nits and one issue:

I suggest not making tvm.const's dtype optional. Rather, let's do a stricter check to confirm that the dtype is correct.

@@ -86,7 +103,7 @@ def const(value, dtype=None):
value : int or float
The input value

-    dtype : str
+    dtype : str or None, optional
Member:

Not sure which one would be better

Suggested change
dtype : str or None, optional
dtype : Optional[str]

    elif isinstance(value, float):
        # We intentionally convert the float to float32 since it's more common in DL.
        dtype = 'float32'
    elif isinstance(value, int):
Member:

Would you like to check for overflow?
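If such a check were added, it could look roughly like the sketch below; the helper name and error message are hypothetical, and the PR may handle overflow differently:

```python
def _check_int32_range(value):
    """Hypothetical guard: reject Python ints that do not fit in the inferred int32."""
    if not -2**31 <= value < 2**31:
        raise ValueError(
            "value %d overflows int32; please pass an explicit dtype such as 'int64'" % value)
```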

@@ -73,22 +74,24 @@ def max_value(dtype):
return _api_internal._max_value(dtype)


-def const(value, dtype):
+def const(value, dtype=None):
Member:

Hey, why make it optional? As a compiler, we should be more careful about which type we feed into the IR framework.

@junrushao (Member)

Okay, I chatted with @sxjscience offline, and we agreed that we should make the behavior of tvm.const and tvm.convert consistent.

for (size_t i = 0; i < t->shape.size(); ++i) {
  if (i >= pad_before.size()) {
    output_shape.push_back(t->shape[i]);
  } else {
    output_shape.push_back(
        tvm::ir::Simplify(t->shape[i] + pad_before[i] + pad_after[i]));
Member Author:

I find that there are some data type mismatch problems in the implementation of TOPI. For example, here, pad_before and pad_after can be int64 while t->shape[i] is int32. Due to the automatic type conversion logic in https://github.com/dmlc/tvm/blob/e8c6adc6fb700c6dd181b5b3f059c135d5fee6d5/src/lang/expr_operator.cc#L39-L50, t->shape[i] + pad_before[i] + pad_after[i] will have dtype=int64. This causes output_shape to contain a mix of int32 and int64 entries, which leads to errors in the generated LLVM code.
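To make the promotion concrete, here is a Python-level illustration, assuming the 2019-era API and the conversion behavior described by the linked code; the TOPI fix in this PR casts the offending operands back to Int(32), as the diffs below show:

```python
import tvm

extent = tvm.const(8, 'int32')   # plays the role of t->shape[i]
pad = tvm.const(1, 'int64')      # plays the role of pad_before[i] / pad_after[i]

# Mixing int32 and int64 operands promotes the sum to int64, so one entry of
# output_shape ends up int64 while the untouched entries stay int32.
padded = extent + pad
print(padded.dtype)              # expected: 'int64'
```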

-    out_shape.push_back(shape[0]);
-    out_shape.push_back(shape[1]);
+    out_shape.push_back(cast(Int(32), shape[0]));
+    out_shape.push_back(cast(Int(32), shape[1]));
Member:

So these shapes could be int64 when passed in?

Member Author:

Yes, they can be int64 because we only constrain them to be Expr.

-  auto out_height = output_size[0];
-  auto out_width = output_size[1];
+  auto out_height = cast(Int(32), output_size[0]);
+  auto out_width = cast(Int(32), output_size[1]);
Member:

FYI @anijain2305: is this good for quantization?

Contributor:

Yes, this is good. We can cast back to x->dtype if it is not FP32 just before the divide factor.

@yzhliu (Member) commented Sep 9, 2019

Thanks @sxjscience @junrushao1994 @anijain2305

@yzhliu merged commit 63a91eb into apache:master on Sep 9, 2019
wweic pushed a commit to wweic/tvm that referenced this pull request on Sep 16, 2019
…pache#3861)

* numpy compatible type inference
* update
* try to fix
* fix
* try to fix
* fix lint
* Update nn.h
* cast to int32
* try to fix
* fix again
* retrigger ci
wweic pushed a commit to neo-ai/tvm that referenced this pull request on Sep 16, 2019
…pache#3861)

* numpy compatible type inference
* update
* try to fix
* fix
* try to fix
* fix lint
* Update nn.h
* cast to int32
* try to fix
* fix again
* retrigger ci