
[Relay][Frontend][TensorFlow] Support BatchMatMul with input dimensions larger than 3 #3732

Merged
tqchen merged 7 commits into apache:master from the soiferj/batchmatmul4d branch on Aug 14, 2019

Conversation

soiferj
Contributor

@soiferj soiferj commented Aug 8, 2019

This change adds support for BatchMatMul with n-dimensional inputs, for example a batch matrix multiplication of inputs with shapes (2, 3, 4, 5) and (2, 3, 5, 4).
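The conversion works by reshaping: the outer dimensions are flattened into a single batch dimension, the existing 3-D batch_matmul is applied, and the result is reshaped back to the original outer dimensions. A minimal NumPy sketch of that idea (the function name and shapes below are illustrative only, not the actual frontend code):

import numpy as np

def batch_matmul_nd(x, y):
    # Flatten all outer dimensions into one batch dimension, run an
    # ordinary 3-D batch matmul, then restore the outer dimensions.
    outer_dims = x.shape[:-2]
    num_outer_elts = int(np.prod(outer_dims))
    x3 = x.reshape(num_outer_elts, x.shape[-2], x.shape[-1])
    y3 = y.reshape(num_outer_elts, y.shape[-2], y.shape[-1])
    out3 = np.matmul(x3, y3)  # shape (batch, M, N)
    return out3.reshape(*outer_dims, x.shape[-2], y.shape[-1])

a = np.random.rand(2, 3, 4, 5)
b = np.random.rand(2, 3, 5, 4)
assert batch_matmul_nd(a, b).shape == (2, 3, 4, 4)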

@soiferj soiferj closed this Aug 8, 2019
@soiferj soiferj reopened this Aug 8, 2019
@soiferj soiferj force-pushed the soiferj/batchmatmul4d branch from 2f6066c to 89a9a7b on August 8, 2019 05:55
@soiferj soiferj marked this pull request as ready for review August 8, 2019 06:15
@soiferj
Contributor Author

soiferj commented Aug 8, 2019

@srkreddy1238 and @alexeyr would you mind taking a look?

@soiferj soiferj changed the title from "[Relay][TensorFlow] Support BatchMatMul with input dimensions larger than 3" to "[Relay][Frontend][TensorFlow] Support BatchMatMul with input dimensions larger than 3" Aug 8, 2019
# Compute the flattened batch size from all of the outer (batch) dimensions.
outer_dims = [orig_shape_x[i] for i in range(0, len(orig_shape_x) - 2)]
num_outer_elts = 1
for outer_dim in outer_dims:
    num_outer_elts *= outer_dim
Contributor

Shouldn't this just call np.prod?
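That is, the loop could presumably be collapsed into a single call, along the lines of (illustrative only; the exact updated code may differ):

import numpy as np

num_outer_elts = int(np.prod(orig_shape_x[:-2]))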

Contributor Author

Updated

_test_batch_matmul((2, 3, 4, 5, 6), (2, 3, 4, 6, 5), 'int32')
_test_batch_matmul((2, 3, 4, 5, 6), (2, 3, 4, 6, 5), 'float32', True, True)
_test_batch_matmul((2, 3, 4, 5, 6), (2, 3, 4, 5, 6), 'int32', True, False)
_test_batch_matmul((2, 3, 4, 5, 6), (2, 3, 4, 5, 6), 'float32', False, True)
Contributor

I'd prefer to have tests with different numbers of outer dimensions.
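For example, cases along these lines (shapes chosen purely for illustration) would cover one, two, and three outer dimensions:

_test_batch_matmul((3, 4, 5), (3, 5, 4), 'float32')
_test_batch_matmul((2, 3, 4, 5), (2, 3, 5, 4), 'float32')
_test_batch_matmul((1, 2, 3, 4, 5), (1, 2, 3, 5, 4), 'float32')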

Contributor Author

Updated

@MarisaKirisame
Contributor

@soiferj how hard would it be to make dense/batch_matmul support more dimensions instead?

@soiferj
Contributor Author

soiferj commented Aug 10, 2019

From looking around the code, it seems like a good amount of work. All of the computes would have to change, and we would probably want to change all of the schedules as well. We would also have to find and remove all of the asserts that verify the number of dimensions. Is this the preferred solution? I don't think the reshapes will have much overhead.

@MarisaKirisame
Contributor

I see. Can you make this function universal then (move it into relay/op so other people can use it as well)? I need this function badly.

@soiferj
Contributor Author

soiferj commented Aug 10, 2019

Sure! I can work on that.

@soiferj
Contributor Author

soiferj commented Aug 10, 2019

@alexeyr what do you think of the proposed change? Should we make this functionality a little more generic? If so, where do you think it should go?

Edit: never mind, I’m not going to make this change. Feel free to review as-is.

@soiferj
Contributor Author

soiferj commented Aug 11, 2019

@MarisaKirisame would you mind explaining what you'd like the interface to look like? Since I'm not familiar with how all of the frontends work, I could add a method called ConvertTo3d; converting back to the original dimensions would then be handled by the frontend (since it's just one function call).
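Purely as an illustration of that proposed interface (nothing like this was ultimately added in this PR), such a helper might look roughly like:

def convert_to_3d(shape):
    # Hypothetical helper, not part of this PR: collapse all but the last two
    # dimensions into a single batch dimension, and also return the outer dims
    # so the caller can reshape the result back afterwards.
    outer_dims = list(shape[:-2])
    num_outer_elts = 1
    for d in outer_dims:
        num_outer_elts *= d
    return [num_outer_elts, shape[-2], shape[-1]], outer_dims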

@MarisaKirisame
Contributor

@soiferj never mind, I don't need it anymore; this looks good to me.

@alexeyr
Contributor

alexeyr commented Aug 12, 2019

@soiferj I'd reply that I can't evaluate at the moment anyway :) But it looks good to me.

@MarisaKirisame MarisaKirisame left a comment
Contributor

LGTM

@soiferj
Contributor Author

soiferj commented Aug 13, 2019

@tqchen @tmoreau89 would one of you be able to merge?

@tqchen tqchen merged commit 83bef9f into apache:master Aug 14, 2019
@tqchen
Member

tqchen commented Aug 14, 2019

Thanks @soiferj @alexeyr @MarisaKirisame !

wweic pushed a commit to neo-ai/tvm that referenced this pull request Aug 16, 2019
[Relay][Frontend][TensorFlow] Support BatchMatMul with input dimensions larger than 3 (apache#3732)

* Support BatchMatMul with shapes greater than length 3
* Fixes
* Add tests
* Remove dependency on Python3
* Clean up
* Merge with master
* Resolve comments
@soiferj soiferj deleted the soiferj/batchmatmul4d branch August 22, 2019 16:31
anijain2305 pushed a commit to anijain2305/tvm that referenced this pull request Aug 22, 2019
wweic pushed a commit to neo-ai/tvm that referenced this pull request Sep 6, 2019