- Implemented a deep learning transformer model using multi-head self-attention, positional encoding, and feed-forward networks (a minimal sketch of these components appears after this list).
- Trained the model on the Tiny Shakespeare dataset, roughly 40,000 lines drawn from various plays, to learn to model and generate Shakespeare-like text.
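The repository's own source isn't reproduced on this page, so the following is a minimal PyTorch sketch of the components named above: positional encoding (sinusoidal here, as one common choice) and a single decoder-style block combining masked multi-head self-attention with a feed-forward network. The module names and all hyperparameters (`d_model=128`, `n_heads=4`, `d_ff=512`) are illustrative assumptions, not taken from this repository.

```python
import math
import torch
import torch.nn as nn

class PositionalEncoding(nn.Module):
    """Sinusoidal positional encoding added to token embeddings (illustrative choice)."""
    def __init__(self, d_model, max_len=1024):
        super().__init__()
        pos = torch.arange(max_len).unsqueeze(1)  # (max_len, 1)
        div = torch.exp(torch.arange(0, d_model, 2) * (-math.log(10000.0) / d_model))
        pe = torch.zeros(max_len, d_model)
        pe[:, 0::2] = torch.sin(pos * div)
        pe[:, 1::2] = torch.cos(pos * div)
        self.register_buffer("pe", pe)

    def forward(self, x):  # x: (batch, seq, d_model)
        return x + self.pe[: x.size(1)]

class TransformerBlock(nn.Module):
    """One decoder-style block: masked multi-head self-attention + feed-forward."""
    def __init__(self, d_model=128, n_heads=4, d_ff=512):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ff = nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(), nn.Linear(d_ff, d_model))
        self.ln1 = nn.LayerNorm(d_model)
        self.ln2 = nn.LayerNorm(d_model)

    def forward(self, x):
        seq_len = x.size(1)
        # Causal mask: True entries are blocked, so each position only sees earlier ones.
        mask = torch.triu(torch.ones(seq_len, seq_len, dtype=torch.bool, device=x.device), diagonal=1)
        attn_out, _ = self.attn(x, x, x, attn_mask=mask)
        x = self.ln1(x + attn_out)      # residual connection + layer norm
        x = self.ln2(x + self.ff(x))
        return x

# Quick shape check with dummy data:
x = torch.randn(2, 16, 128)             # (batch, seq_len, d_model)
out = TransformerBlock()(PositionalEncoding(128)(x))
print(out.shape)                        # torch.Size([2, 16, 128])
```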
About
Deep learning transformer built from scratch to learn and generate new Shakespeare-like text.
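Generating text from such a model is typically autoregressive: sample one token at a time and feed it back in. A minimal sampling loop, assuming a hypothetical `model` that maps token ids of shape `(batch, seq)` to logits of shape `(batch, seq, vocab)` and a context window of `block_size` tokens, might look like:

```python
import torch

@torch.no_grad()
def generate(model, idx, max_new_tokens, block_size=256):
    """Autoregressive sampling sketch: append one sampled token per step.

    idx: (batch, seq) tensor of token ids used as the prompt.
    """
    for _ in range(max_new_tokens):
        logits = model(idx[:, -block_size:])             # (batch, seq, vocab)
        probs = torch.softmax(logits[:, -1, :], dim=-1)  # distribution over next token
        next_id = torch.multinomial(probs, num_samples=1)
        idx = torch.cat([idx, next_id], dim=1)
    return idx
```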