
Add sparse attention example #36

Merged
jeffra merged 2 commits into master on Sep 2, 2020
Conversation

jeffra (Contributor) commented Sep 2, 2020

No description provided.

arashashari and others added 2 commits September 1, 2020 19:25
* update bing_bert example to use sparse transformer

* Updated the BertSparseSelfAttention example based on the ST updates

* updated bing_bert example based on final updates for Sparse Attention; also added un/pad of Bert layer input (see the padding sketch after this list)

* updated based on Tunji's comment: added a separate script for SA

* fixed a typo

* added an exception when both transformer kernel and SA are set together (see the sketch after this list).
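
Block-sparse attention kernels operate on fixed-size blocks, so the Bert layer input has to be padded up to a multiple of the block size before the sparse layers and unpadded afterwards. Below is a minimal sketch of what that un/pad step might look like; the function names and the 2-D mask shape are assumptions for illustration, not the PR's actual code:

```python
import torch
import torch.nn.functional as F

def pad_to_block_size(hidden_states, attention_mask, block_size):
    """Pad the sequence dimension to a multiple of the sparse-attention
    block size; returns padded tensors and the number of padded tokens.

    hidden_states: (batch, seq_len, hidden); attention_mask: (batch, seq_len).
    """
    seq_len = hidden_states.size(1)
    pad_len = (block_size - seq_len % block_size) % block_size
    if pad_len > 0:
        # Append zero vectors at the end of the sequence dimension.
        hidden_states = F.pad(hidden_states, (0, 0, 0, pad_len))
        # Extend the mask with zeros so padded positions are ignored.
        attention_mask = F.pad(attention_mask, (0, pad_len), value=0)
    return hidden_states, attention_mask, pad_len

def unpad_from_block_size(hidden_states, pad_len):
    """Drop the padding added before the sparse-attention layers."""
    if pad_len > 0:
        hidden_states = hidden_states[:, :hidden_states.size(1) - pad_len, :]
    return hidden_states
```

The round trip is transparent to the rest of the model: pad before the encoder, run the sparse layers on the block-aligned input, then unpad so downstream heads see the original sequence length.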
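The last commit guards against enabling DeepSpeed's dense transformer kernel and sparse attention at the same time, since both replace the same self-attention module. A minimal sketch of such a check, assuming argparse-style flags named `deepspeed_transformer_kernel` and `deepspeed_sparse_attention` (the exact flag names in the example are an assumption here):

```python
def validate_attention_config(args):
    # The transformer kernel and sparse attention both swap out BERT's
    # self-attention implementation, so enabling both is contradictory.
    if args.deepspeed_transformer_kernel and args.deepspeed_sparse_attention:
        raise ValueError(
            "Transformer kernel and sparse attention cannot be enabled "
            "together; choose only one of the two options.")
```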
jeffra merged commit 896831c into master on Sep 2, 2020