sjd9021/Attention-Transformer

Deep learning transformer built from scratch to learn and generate new Shakespeare-like text.

  • Implemented a deep learning transformer from scratch using multi-head self-attention, positional encoding, and position-wise feed-forward networks; a minimal sketch of these pieces follows this list.
  • Trained the model on the Tiny Shakespeare dataset, about 40,000 lines drawn from a selection of Shakespeare's plays, to learn to generate new Shakespeare-like text.
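As a rough illustration of those pieces, here is a minimal PyTorch sketch of a causal multi-head self-attention layer, a transformer block with a feed-forward sub-layer, and a small decoder-only model that adds positional information to token embeddings. Every name and hyperparameter below (MultiHeadSelfAttention, Block, n_embd, and so on) is an assumption for illustration rather than code from this repository, and a learned positional embedding stands in for a sinusoidal positional encoding.

```python
# Illustrative sketch only; module names and hyperparameters are assumptions,
# not taken from this repository.
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiHeadSelfAttention(nn.Module):
    """Causal multi-head self-attention: each position attends to itself and the past."""

    def __init__(self, n_embd, n_head, block_size):
        super().__init__()
        assert n_embd % n_head == 0
        self.n_head = n_head
        self.qkv = nn.Linear(n_embd, 3 * n_embd)  # joint query/key/value projection
        self.proj = nn.Linear(n_embd, n_embd)     # recombine the concatenated heads
        # Lower-triangular mask so position t cannot attend to positions > t.
        self.register_buffer("mask", torch.tril(torch.ones(block_size, block_size)))

    def forward(self, x):
        B, T, C = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        # Reshape to (B, n_head, T, head_dim) so the heads attend independently.
        q = q.view(B, T, self.n_head, C // self.n_head).transpose(1, 2)
        k = k.view(B, T, self.n_head, C // self.n_head).transpose(1, 2)
        v = v.view(B, T, self.n_head, C // self.n_head).transpose(1, 2)
        att = (q @ k.transpose(-2, -1)) / math.sqrt(k.size(-1))  # scaled dot-product
        att = att.masked_fill(self.mask[:T, :T] == 0, float("-inf"))
        att = F.softmax(att, dim=-1)
        out = (att @ v).transpose(1, 2).contiguous().view(B, T, C)
        return self.proj(out)

class Block(nn.Module):
    """Transformer block: attention and a feed-forward net, each wrapped in a residual."""

    def __init__(self, n_embd, n_head, block_size):
        super().__init__()
        self.ln1 = nn.LayerNorm(n_embd)
        self.attn = MultiHeadSelfAttention(n_embd, n_head, block_size)
        self.ln2 = nn.LayerNorm(n_embd)
        self.ffwd = nn.Sequential(
            nn.Linear(n_embd, 4 * n_embd), nn.ReLU(), nn.Linear(4 * n_embd, n_embd)
        )

    def forward(self, x):
        x = x + self.attn(self.ln1(x))  # residual connection around attention
        x = x + self.ffwd(self.ln2(x))  # residual connection around feed-forward
        return x

class CharTransformer(nn.Module):
    """Tiny decoder-only language model over a character vocabulary."""

    def __init__(self, vocab_size, n_embd=64, n_head=4, n_layer=2, block_size=128):
        super().__init__()
        self.tok_emb = nn.Embedding(vocab_size, n_embd)
        self.pos_emb = nn.Embedding(block_size, n_embd)  # learned positional encoding
        self.blocks = nn.ModuleList(
            [Block(n_embd, n_head, block_size) for _ in range(n_layer)]
        )
        self.ln_f = nn.LayerNorm(n_embd)
        self.head = nn.Linear(n_embd, vocab_size)  # logits over the next character

    def forward(self, idx):
        B, T = idx.shape
        pos = torch.arange(T, device=idx.device)
        x = self.tok_emb(idx) + self.pos_emb(pos)  # token identity + position
        for block in self.blocks:
            x = block(x)
        return self.head(self.ln_f(x))
```

The pre-norm layout above (LayerNorm before each sub-layer) is one common choice; the original transformer paper applied the norm after each residual connection instead.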

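The Tiny Shakespeare corpus is a single plain-text file, and character-level tokenization is the usual choice for it. The sketch below shows one common way to cut it into next-character prediction batches; the filename input.txt, block_size, and batch_size are assumptions, not details from this repository.

```python
# Hypothetical data preparation; "input.txt" and the batch shapes are assumptions.
import torch

text = open("input.txt", encoding="utf-8").read()
chars = sorted(set(text))                     # character vocabulary
stoi = {ch: i for i, ch in enumerate(chars)}  # char -> integer id
data = torch.tensor([stoi[ch] for ch in text], dtype=torch.long)

def get_batch(data, block_size=128, batch_size=32):
    """Sample random windows; targets are the inputs shifted one character right."""
    ix = torch.randint(len(data) - block_size - 1, (batch_size,))
    x = torch.stack([data[i : i + block_size] for i in ix])
    y = torch.stack([data[i + 1 : i + block_size + 1] for i in ix])
    return x, y

xb, yb = get_batch(data)  # two (32, 128) long tensors for next-character training
```

A training step would feed xb through the model and minimize the cross-entropy between the resulting logits and yb.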