[RELAY] BiasAdd, MLP, Resnet testing #1969
Conversation
@@ -275,6 +296,9 @@ def __repr__(self):
        return ("TupleWrapper(" + self.tuple_value.__repr__() +
                ", " + str(self.size) + ")")

    def astype(self, _):
        raise TypeError("astype cannot be used on tuple")
tuple -> TupleWrapper?
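If that suggestion were applied, the override might read as follows (hypothetical wording, not the merged text):

```python
def astype(self, _):
    raise TypeError("astype cannot be used on TupleWrapper")
```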
python/tvm/relay/expr.py
Outdated
Returns
-------
result : relay.Expr
tvm.relay.Expr
bias : relay.Expr
    The bias to be added.

axis : int, optional
Optional[int]
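For context, the docstring under review appears to belong to `bias_add`; a minimal usage sketch (shapes and variable names are illustrative) looks like this:

```python
from tvm import relay

# NCHW input and a per-channel bias; axis=1 selects the channel dimension.
data = relay.var("data", shape=(1, 64, 32, 32))
bias = relay.var("bias", shape=(64,))
out = relay.nn.bias_add(data, bias, axis=1)
```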
factor_type : str, optional
    Can be ``'avg'``, ``'in'``, or ``'out'``.

magnitude : float, optional
Optional[float]
rnd_type : str, optional
    Random generator type, can be ``'gaussian'`` or ``'uniform'``.

factor_type : str, optional
Optional[str]
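Applied consistently, the reviewers' type suggestions would make the initializer's parameter section read roughly like this (a hypothetical rewrite in the suggested style, not the merged text):

```python
class Xavier:
    """Xavier initializer (hypothetical docstring in the suggested style).

    Parameters
    ----------
    rnd_type : Optional[str]
        Random generator type; can be 'gaussian' or 'uniform'.
    factor_type : Optional[str]
        Can be 'avg', 'in', or 'out'.
    magnitude : Optional[float]
        Scale of the random number range.
    """
```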
python/tvm/relay/testing/init.py
Outdated
Parameters
----------
net : relay.Function
tvm.relay.Function
Parameters
----------
data : relay.Expr
tvm.relay.Expr
data : relay.Expr
    The input expression.

weight : relay.Expr
tvm.relay.Expr
image_shape : tuple, optional
    The input image shape.

dtype : str, optional
Optional[str]
python/tvm/relay/testing/resnet.py
Outdated
    Bottleneck channels factor with regard to num_filter
stride : tuple
    Stride used in convolution
dim_match : Boolean
bool
include/tvm/relay/pass.h
Outdated
@@ -102,35 +102,26 @@ bool AlphaEqual(const Type& t1, const Type& t2);
 */
bool WellFormed(const Expr& e);

-/*! \brief Get free variables from expression e.
+/*! \brief Get free Vars from expression expr in PostDFS order.
Is "expression expr" a typo?
Looks good to me 👍, removed similar code from the RTS PR.
Thanks @jroesch @MarisaKirisame for reviews, this is now merged
    raise ValueError("Incorrect factor type")
# Hack for mobilenet, because there is less connectivity
if "depthwise" in name:
    factor = 3 * 3
Should we delete this?
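For context, the hack sits inside the Xavier-style weight initializer. Roughly, the logic is as follows; this is a sketch reconstructed from the diff, and the function name, defaults, and the uniform draw are assumptions:

```python
import numpy as np

def xavier_init(arr, name, factor_type="avg", magnitude=3.0):
    # Fan-in/fan-out from the weight shape, e.g. (out_c, in_c, kh, kw).
    shape = arr.shape
    hw_scale = float(np.prod(shape[2:])) if len(shape) > 2 else 1.0
    fan_in, fan_out = shape[1] * hw_scale, shape[0] * hw_scale
    if factor_type == "avg":
        factor = (fan_in + fan_out) / 2.0
    elif factor_type == "in":
        factor = fan_in
    elif factor_type == "out":
        factor = fan_out
    else:
        raise ValueError("Incorrect factor type")
    # The hack under discussion: depthwise kernels have shape (C, 1, 3, 3),
    # so there is less connectivity than the fan computation assumes; pin
    # the factor to the 3x3 kernel size instead.
    if "depthwise" in name:
        factor = 3 * 3
    scale = np.sqrt(magnitude / factor)
    arr[:] = np.random.uniform(-scale, scale, size=shape)
```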
                      strides=stride,
                      padding=(0, 0),
                      name=name + '_conv1')
bn2 = layers.batch_norm_infer(data=conv1, epsilon=2e-5, name=name + '_bn2')
These APIs are not as consistent as the old ones in nnvm. Some start with `layers.`, some start with `relay.`, and others start with `relay.nn.`. Some end with `_infer`, while the others do not. We should provide a more consistent API for front-end users. In this front-end API, we can hide all variable declarations and things like `bias_add`.
True, so far the layers API is a simple extension to the original API, and these wrappers are for automatic variable creation. As an IR and compiler, we don't need a deep layered API, but I agree that we can use a better API (like Gluon) if we want to present a frontend.
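For illustration, the mixture of styles being discussed looks roughly like this (call signatures are assumptions based on the diff, not verified against the merged code):

```python
from tvm import relay
from tvm.relay.testing import layers

data = relay.var("data", shape=(1, 3, 224, 224))

# Wrapper style: auto-creates the weight variable, takes a name,
# and may end in _infer.
conv1 = layers.conv2d(data=data, channels=64, kernel_size=(3, 3),
                      padding=(1, 1), name="conv1")
bn1 = layers.batch_norm_infer(data=conv1, epsilon=2e-5, name="bn1")

# Bare-op style: no name parameter, no implicit variables.
act1 = relay.nn.relu(bn1)
```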
I am looking at porting mobilenet; one difference is that before, it seems we named relu ops in the graph, but now `relay.relu` does not take a name parameter. Not sure if this is a big deal (it seems conv2d and batch norm have wrappers that take a name parameter).
The name parameter is used to construct the weight's name. For now, it is not a big problem on our end.
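In other words, the wrappers use `name` only to label the auto-created parameters, roughly like this (a sketch of the behavior described above, not the exact merged code):

```python
from tvm import relay

def conv2d(data, weight=None, name="", **kwargs):
    # `name` feeds the weight variable's name; the op node itself is unnamed.
    if weight is None:
        weight = relay.var(name + "_weight")
    return relay.nn.conv2d(data, weight, **kwargs)
```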
bool BatchNormRel(const Array<Type>& types,
                  int num_inputs,
                  const Attrs& attrs,
                  const TypeReporter& reporter) {
  CHECK_EQ(types.size(), 6);
  const auto* data = types[0].as<TensorTypeNode>();
  if (data == nullptr) return false;
  if (data->shape.size() == 0) return false;
Doesn't this mean that we get an out-of-bounds access if the axis is -1 and the data shape is of rank 0? In that case, `axis` would be equal to `data->shape.size() - 1`, or -1, so on line 453, `axis_size` would be `data->shape[-1]`. (Sorry for not reviewing before it was merged.)
This is a good catch. I think it is better to directly do a boundary check (after `axis` is transformed back by `axis + data->shape.size()`) to avoid this problem.
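In Python-flavoured pseudocode, the suggested fix amounts to the following (a sketch of the logic, not the C++ that was committed):

```python
def check_axis(axis, rank):
    # Normalize a negative axis first, then bounds-check the result, so a
    # bad axis (or rank-0 data) is rejected explicitly rather than only by
    # a special case on the rank.
    if axis < 0:
        axis += rank
    if axis < 0 or axis >= rank:
        raise ValueError("axis out of bounds for data of rank %d" % rank)
    return axis
```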
Add initial version of evaluator and tests
WIP Work towards simple examples in the evaluator
Requires implementation of lowering ops and monomorph
Evaluator now works on simple cases
Restore Function case in Evaluator
WIP Fix rebase issues
working towards working version
RTS is now working again
RTS can add numbers now
Fix some rebase issues
Fix up tests post rebase
WIP Issue type checking MLP
Remove dead file
Clean up evaluator
Remove accidental change
Reset changes from apache#1962
Rename Evaluator
A little clean up
WIP Clean up tests
WIP
WIP
Repair from rebase and refactor to not use MM
Remove testing code which is now in apache#1969
WIP
* [RELAY] BiasAdd, MLP, Resnet testing
* fix review comments
This PR recovers the testing API to construct the standard workloads in Relay. It contains the following changes:
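As a usage sketch, constructing one of the recovered workloads might look like this (the exact `get_workload` signature at the time of this PR is an assumption):

```python
from tvm.relay.testing import resnet

# Build a ResNet-18 Relay function plus randomly initialized parameters.
net, params = resnet.get_workload(num_layers=18, batch_size=1,
                                  image_shape=(3, 224, 224))
print(net)  # the Relay IR of the network
```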