
Pre-Training Llama 3.1 on AWS Trainium using Ray and PyTorch Lightning #725

Open: wants to merge 4 commits into base: main
Conversation

sindhupalakodety
Contributor

What does this PR do?

An example combining Ray, PyTorch Lightning (PTL), and AWS Neuron for pre-training the Llama 3.1 model on Trn1 instances. This example was requested by multiple customers.

The integration of Ray, PyTorch Lightning (PTL), and AWS Neuron brings together PTL's intuitive model development API, Ray Train's robust distributed computing for seamless scaling across multiple nodes, and AWS Neuron's hardware optimization for Trainium. Together, they significantly simplify the setup and management of distributed training environments for large-scale AI projects, particularly computationally intensive workloads such as pre-training large language models.

Motivation

Issue: #724

More

  • [x] Yes, I have tested the PR using my local account setup (Provide any test evidence report under Additional Notes)
  • [x] Mandatory for new blueprints. Yes, I have added an example to support my blueprint PR
  • [x] Mandatory for new blueprints. Yes, I have updated the website/docs or website/blog section for this feature
  • Yes, I ran pre-commit run -a with this PR. Link for installing pre-commit locally

For Moderators

  • [x] E2E test successfully completed before merge?

Additional Notes

We tested this example against a customer use case and demoed the solution to the customer, who was impressed with the results.
