
[RELAY] BiasAdd, MLP, Resnet testing #1969

Merged · 2 commits merged into apache:master on Oct 24, 2018
Conversation

@tqchen (Member) commented Oct 24, 2018

This PR recovers the testing API for constructing common workloads in Relay. It contains the following changes:

  • Refactor ir_pass.free_vars to return vars in post-DFS order @MarisaKirisame
    • This allows us to use it directly to list unbound vars, which corresponds to the old nnvm.symbol.list_arguments
  • Enable batch_norm to infer the types of gamma, beta, and moving_mean @slyubomirsky
  • Disable generic Node inheritance for TupleWrapper
    • This is because the batch_norm API now returns a tuple, which differs from the traditional batch_norm inference API
    • This is a reasonable change, as it makes batch_norm functional and opens the door to more optimizations
    • The decision is to let the system throw eagerly when a TupleWrapper is mistakenly passed into an operator
    • For cases where a tuple is needed, astuple must be called explicitly
  • Introduce relay.testing, which now includes an MLP and a ResNet (a usage sketch follows below)
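
A minimal usage sketch of the recovered testing API, assuming the relay.testing module layout and the get_workload entry point of this era (argument names are illustrative):

```python
from tvm import relay
from tvm.relay import testing

# Build an MLP workload; get_workload returns the network function
# together with a dict of initialized parameters.
net, params = testing.mlp.get_workload(batch_size=1)

# free_vars now returns unbound vars in post-DFS order, playing the
# role of the old nnvm.symbol.list_arguments.
args = relay.ir_pass.free_vars(net.body)
print([v.name_hint for v in args])
```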

@tqchen (Member, Author) commented Oct 24, 2018

cc @joshpoll @jroesch @yzhliu @zhiics @eqy @merrymercy @Huyuwei please review

@@ -275,6 +296,9 @@ def __repr__(self):
return ("TupleWrapper(" + self.tuple_value.__repr__() +
", " + self.size + ")")

def astype(self, _):
raise TypeError("astype cannot be used on tuple")
Contributor: tuple -> tuplewrapper?
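
For context, a minimal sketch of the eager-throw behavior described in the PR summary, assuming batch_norm returns a TupleWrapper over (output, new moving_mean, new moving_var); call patterns are illustrative:

```python
from tvm import relay

data = relay.var("data", shape=(1, 16, 32, 32))
gamma, beta = relay.var("gamma"), relay.var("beta")
mean, var = relay.var("mean"), relay.var("var")

# batch_norm returns a TupleWrapper rather than a plain Expr.
bn = relay.nn.batch_norm(data, gamma, beta, mean, var)

# Passing the wrapper straight into an operator throws eagerly;
# index it, or call astuple() explicitly when a tuple is needed.
out = relay.nn.relu(bn[0])
tup = bn.astuple()
```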


Returns
-------
result : relay.Expr
Contributor: tvm.relay.Expr

bias : relay.Expr
The bias to be added.

axis : int, optional
Contributor: Optional[int]

factor_type: str, optional
Can be ``'avg'``, ``'in'``, or ``'out'``.

magnitude: float, optional
Contributor: Optional[float]

rnd_type: str, optional
Random generator type, can be ``'gaussian'`` or ``'uniform'``.

factor_type: str, optional
Contributor: Optional[str]


Parameters
----------
net : relay.Function
Contributor: tvm.relay.Function


Parameters
----------
data : relay.Expr
Contributor: tvm.relay.Expr

data : relay.Expr
The input expression.

weight : relay.Expr
Contributor: tvm.relay.Expr

image_shape : tuple, optional
The input image shape

dtype : str, optional
Contributor: Optional[str]

Bottle neck channels factor with regard to num_filter
stride : tuple
Stride used in convolution
dim_match : Boolean
Contributor: bool
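
Taken together, the reviewer's suggestions amount to using fully qualified names and Optional[...] annotations in the NumPy-style docstrings. A hypothetical rendering of the convention for the bias_add operator introduced in this PR:

```python
def bias_add(data, bias, axis=1):
    """Add a bias to the data along an axis.

    Parameters
    ----------
    data : tvm.relay.Expr
        The input expression.
    bias : tvm.relay.Expr
        The bias to be added.
    axis : Optional[int]
        The axis along which the bias is added.

    Returns
    -------
    result : tvm.relay.Expr
        The computed result.
    """
```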

@@ -102,35 +102,26 @@ bool AlphaEqual(const Type& t1, const Type& t2);
*/
bool WellFormed(const Expr& e);

/*! \brief Get free variables from expression e.
/*! \brief Get free Vars from expression expr in PostDFS order.
Member: Is "expression expr" a typo?

@jroesch (Member) commented Oct 24, 2018

Looks good to me 👍, removed similar code from the RTS PR.

@tqchen tqchen merged commit c76fce9 into apache:master Oct 24, 2018
@tqchen (Member, Author) commented Oct 24, 2018

Thanks @jroesch @MarisaKirisame for the reviews; this is now merged.

raise ValueError("Incorrect factor type")
# Hack for mobilenet, because there is less connectivity
if "depthwise" in name:
factor = 3 * 3
Member: Should we delete this?
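
For context, a minimal sketch of the Xavier-style scale computation that this depthwise hack patches, based on the factor_type and magnitude parameters documented above (function name and the OIHW shape convention are assumptions):

```python
import numpy as np

def xavier_scale(shape, name, factor_type="avg", magnitude=3.0):
    # fan_in/fan_out from a weight shape laid out as (out_c, in_c, kh, kw).
    hw = np.prod(shape[2:]) if len(shape) > 2 else 1
    fan_in, fan_out = shape[1] * hw, shape[0] * hw
    if factor_type == "avg":
        factor = (fan_in + fan_out) / 2.0
    elif factor_type == "in":
        factor = fan_in
    elif factor_type == "out":
        factor = fan_out
    else:
        raise ValueError("Incorrect factor type")
    # Hack for mobilenet: depthwise conv has less connectivity,
    # so use the kernel area instead of the computed factor.
    if "depthwise" in name:
        factor = 3 * 3
    return np.sqrt(magnitude / factor)
```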

strides=stride,
padding=(0, 0),
name=name + '_conv1')
bn2 = layers.batch_norm_infer(data=conv1, epsilon=2e-5, name=name + '_bn2')
Member: These APIs are not as consistent as the old ones in nnvm. Some start with layers., some start with relay., and others start with relay.nn. Some end with _infer, while others do not.

We should provide a more consistent API for front-end users. In such a front-end API, we can hide all variable declarations and things like bias_add.

Member (Author): True. So far the layers API is a simple extension to the original API, and these wrappers exist for automatic variable creation. As an IR and compiler, we don't need a deeply layered API, but I agree that we could use a better API (like Gluon) if we want to present a frontend.

@eqy (Contributor) commented Oct 24, 2018: I am looking at porting mobilenet; one difference is that it seems we previously named relu ops in the graph, but now relay.relu does not take a name parameter. Not sure if this is a big deal (it seems conv2d and batch norm have wrappers that take a name parameter).

Member (Author): The name parameter is used to construct the weights' names. For now, it is not a big problem on our end.
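
The layers wrappers mentioned above exist largely for this automatic variable creation. A minimal sketch of the pattern (hypothetical, not the exact helper in relay.testing.layers):

```python
from tvm import relay

def conv2d(data, name, **kwargs):
    # Auto-create the weight variable from the layer name, so the
    # caller never declares it; its type is filled in by inference.
    weight = relay.var(name + "_weight")
    return relay.nn.conv2d(data, weight, **kwargs)
```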

bool BatchNormRel(const Array<Type>& types,
int num_inputs,
const Attrs& attrs,
const TypeReporter& reporter) {
CHECK_EQ(types.size(), 6);
const auto* data = types[0].as<TensorTypeNode>();
if (data == nullptr) return false;
if (data->shape.size() == 0) return false;
Contributor: Doesn't this mean that we get an out-of-bounds access if the axis is -1 and the data shape is of rank 0? In that case, axis would be equal to data->shape.size() - 1, i.e. -1, so on line 453 axis_size would be data->shape[-1]. (Sorry for not reviewing before it was merged.)

Member (Author): This is a good catch. I think it is better to directly do a bounds check (after axis is transformed back by axis + data->shape.size()) to avoid this problem.
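
A minimal sketch of the suggested fix, written in Python for brevity (the actual check would live in the C++ BatchNormRel): normalize a negative axis, then bounds-check it before indexing the shape.

```python
def normalize_axis(axis: int, rank: int) -> int:
    # Transform a negative axis back into the [0, rank) range,
    # then reject anything still out of bounds.
    if axis < 0:
        axis += rank
    if not 0 <= axis < rank:
        raise ValueError(f"axis {axis} out of bounds for rank {rank}")
    return axis

# e.g. normalize_axis(-1, 4) == 3; normalize_axis(-1, 0) raises.
```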

jroesch added a commit to jroesch/tvm that referenced this pull request Oct 25, 2018
Add initial version of evaluator and tests

WIP

Work towards simple examples in the evaluator

Requires implementation of lowering ops and monomorph

Evaluator now works on simple cases

Restore Function case in Evaluator

WIP

Fix rebase issues

working towards working version

RTS is now working again

RTS can add numbers now

Fix some rebase issues

Fix up tests post rebase

WIP

Issue type checking MLP

Remove dead file

Clean up evaluator

Remove accidental change

Reset changes from apache#1962

Rename Evaluator

A little clean up

WIP

Clean up tests

WIP

WIP

Repair from rebase and refactor to not use MM

Remove testing code which is now in apache#1969

WIP
FrozenGene pushed a commit to FrozenGene/tvm that referenced this pull request Dec 27, 2018
* [RELAY] BiasAdd, MLP, Resnet testing

* fix review comments
wweic pushed a commit to neo-ai/tvm that referenced this pull request Feb 20, 2019
* [RELAY] BiasAdd, MLP, Resnet testing

* fix review comments