Overview
Our current AI subnet text-to-image (T2I) and image-to-image (I2I) pipelines only support inference on base models, but we know there's more potential to unlock. Fine-tuning methods like LoRA and Dreambooth allow researchers to build on these base models and adapt them for specific use cases. With our team focused on core development improvements for the main subnet, we're turning to the builders in the community to take on the exciting challenge of adding support for LoRA to the I2I and T2I pipelines 🔧.
Why LoRA first? Compared to Dreambooth, LoRA is more suited for efficient domain adaptation, while Dreambooth excels in highly specific personalization tasks. By integrating LoRA, we aim to provide developers with more flexible models and give orchestrators a more diverse range of income opportunities, as LoRA can be dynamically downloaded and layered on top of existing base models advertised by orchestrators.
This is a significant improvement that benefits both developers and orchestrators. Are you up for the challenge? If you're ready to take on this important task, let us know below and we'll assign it to you. Let's enhance our pipelines together and push the boundaries of what's possible! 🌟
Required Skillset
Bounty Level: advanced
Bounty Requirements
To successfully complete this bounty, the participant should:
Dynamic Integration: Design a method to dynamically download and load LoRA weights onto existing base models (see the sketch after this list).
Efficiency: Keep the download and loading process efficient to minimize performance overhead.
Abstraction: Abstract this process away from the orchestrators as much as possible, ensuring ease of use and minimal disruption.
Implementation: Deliver a fully functional end-to-end implementation of this pipeline on the Go-Livepeer side.
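For orientation, the sketch below shows one way LoRA weights could be dynamically fetched and layered onto a base model inside the Python runner, using Hugging Face diffusers. This is a minimal sketch, not the required implementation: it assumes a recent diffusers release with the PEFT backend available, and the base model and adapter repository IDs are illustrative examples only.

```python
# Minimal sketch, assuming diffusers with the PEFT backend is available in the runner.
# The base model and LoRA repository IDs below are illustrative, not requirements.
import torch
from diffusers import DiffusionPipeline

# Load the base model that the orchestrator already advertises.
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# load_lora_weights downloads the adapter from the Hugging Face Hub (or reads a
# local path) and layers it on top of the base model without modifying it.
pipe.load_lora_weights("nerijs/pixel-art-xl", adapter_name="pixel-art")

# Activate the adapter at a chosen strength; adapters can later be swapped or
# removed between requests via set_adapters / unload_lora_weights.
pipe.set_adapters(["pixel-art"], adapter_weights=[0.8])

image = pipe(prompt="isometric pixel-art castle").images[0]
```

Caching the downloaded adapter files and reusing the loaded base pipeline between requests is one way to keep per-request overhead low, in line with the efficiency requirement above.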
This bounty does NOT include:
Implementation of other fine-tuned models like Dreambooth.
Testing requirements for the resulting pull request, as tests are not yet implemented.
Implementation Tips
Utilize Developer Documentation: Check out our developer documentation for the worker and runner. These resources provide valuable tips for speeding up your development process by mocking pipelines and enabling direct debugging.
Generate OpenAPI Spec: Run the runner/gen_openapi.py script to generate the updated OpenAPI spec (a hypothetical schema sketch follows this list).
Generate Go-Livepeer Bindings: In the main repository folder, run the make command to generate the necessary go-livepeer bindings, ensuring your implementation works seamlessly with the go-livepeer repository.
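To illustrate why these generation steps matter, here is a hypothetical sketch of how the runner's request model might be extended with an optional LoRA parameter. The class and field names are assumptions for illustration, not the actual ai-worker schema; whatever shape you choose, changing the request model is what makes regenerating the OpenAPI spec and the go-livepeer bindings necessary.

```python
# Hypothetical illustration only: class and field names are assumptions, not the
# existing ai-worker schema.
from typing import Optional
from pydantic import BaseModel, Field

class TextToImageParams(BaseModel):
    prompt: str
    model_id: str = ""
    # A JSON-encoded mapping of LoRA repository IDs to strengths, for example
    # '{"nerijs/pixel-art-xl": 0.8}'. Optional, so existing requests keep working.
    loras: Optional[str] = Field(default=None)
```

Keeping any new parameter optional preserves backward compatibility for orchestrators that have not downloaded any LoRA weights.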
How to Apply
Express Your Interest: Comment on this issue to indicate your interest and explain why you're the ideal candidate for the task.
Wait for Review: Our team will review expressions of interest and select the best candidate.
Get Assigned: If selected, we'll assign the GitHub issue to you.
Start Working: Dive into your task! If you need assistance or guidance, comment on the issue or join the discussions in the #🛋│developer-lounge channel on our Discord server.
Submit Your Work: Create a pull request in the relevant repository and request a review.
Notify Us: Comment on this GitHub issue when your pull request is ready for review.
Receive Your Bounty: We'll arrange the bounty payment once your pull request is approved.
Gain Recognition: Your valuable contributions will be showcased in our project's changelog.
Thank you for your interest in contributing to our project 💛!
Warning
Please wait for the issue to be assigned to you before starting work. To prevent duplication of effort, submissions for unassigned issues will not be accepted.
@stronk-dev, as discussed, I've assigned this bounty to you since you've already done most of the preliminary work. Thanks again for taking this on and relieving the workload for my team 🙏🏻. If you have any questions or need further clarification, please don't hesitate to reach out.
rickstaa changed the title from "Add Lora Support to T2I and Image to Image pipelines [70 LPT]" to "Add Lora Support to T2I and I2I pipelines [70 LPT]" on Jul 12, 2024.