Overview
Since its introduction, the stable-fast optimization has enhanced the image-to-video pipeline by providing a 50% speed-up. Its main drawback is that it relies on dynamic pre-tracing to optimize requests, which makes the first request noticeably slower. However, if that first request uses the largest supported image size, all smaller image sizes are pre-traced along with it.
This optimization was initially not added to the text-to-image, image-to-image, and upscale pipelines because they supported batch requests at the time, which would have required pre-tracing every batch size. Since then, we've moved from batch requests to sequential requests to mitigate memory issues and improve performance. Consequently, pre-tracing for these pipelines has become feasible again 🚀.
We are seeking a dedicated community member to apply the existing code from this pull request to the text-to-image, image-to-image, and upscale pipelines. Completing this bounty will significantly enhance network performance by enabling orchestrators to achieve a 50% speed-up in these pipelines ⚡🙏.
Bounty Requirements
For each pipeline (text-to-image, image-to-image, and upscale), ensure that when orchestrators have the SFAST setting enabled, a request at the largest supported image size is pre-traced during startup.
Gate this behind SFAST_WARMUP=true so that orchestrators that opt in get the quickest possible response times.
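To make the requirement concrete, here is a minimal, hypothetical sketch for the text-to-image case. It is not the repository's actual code: load_optimized_pipeline, sfast_warmup, MAX_WIDTH, and MAX_HEIGHT are illustrative names, and the stable-fast import path may differ depending on the pinned version; the existing image-to-video warmup and the earlier pull request remain the authoritative reference.

```python
import logging
import os

import torch
from diffusers import AutoPipelineForText2Image

# stable-fast's compiler entry point; the exact module path depends on the
# stable-fast version pinned by the project (older releases exposed it as
# sfast.compilers.stable_diffusion_pipeline_compiler).
from sfast.compilers.diffusion_pipeline_compiler import CompilationConfig, compile

logger = logging.getLogger(__name__)

# Placeholder for the largest image size the orchestrator serves; the real
# value should match the pipeline's configured maximum.
MAX_WIDTH, MAX_HEIGHT = 1024, 1024


def sfast_warmup(pipeline) -> None:
    """Run one dummy request at the largest supported resolution so stable-fast
    pre-traces the graph during startup; smaller sizes reuse the traced graph."""
    logger.info("SFAST_WARMUP enabled, pre-tracing at %dx%d", MAX_WIDTH, MAX_HEIGHT)
    pipeline(
        prompt="warmup",
        width=MAX_WIDTH,
        height=MAX_HEIGHT,
        num_inference_steps=1,  # a single step is enough to trigger tracing
    )


def load_optimized_pipeline(model_id: str):
    """Load a text-to-image pipeline and honor the SFAST / SFAST_WARMUP settings."""
    pipeline = AutoPipelineForText2Image.from_pretrained(
        model_id, torch_dtype=torch.float16
    ).to("cuda")

    if os.environ.get("SFAST", "").strip().lower() == "true":
        pipeline = compile(pipeline, CompilationConfig.Default())
        if os.environ.get("SFAST_WARMUP", "").strip().lower() == "true":
            sfast_warmup(pipeline)
    return pipeline
```

For the image-to-image and upscale pipelines, the dummy request would additionally need a placeholder input image at the maximum supported resolution. On the orchestrator side, enabling the feature then simply means starting the runner with both SFAST=true and SFAST_WARMUP=true.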
Performance Comparison Table:
Create a comparison table for each pipeline. The table should include:
- Pre-trace time
- Resulting inference time
- Comparison with the non-optimized time
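For reference, the requested table could be laid out along these lines (the timings have to be measured on your own hardware; the column names are only a suggestion):

| Pipeline       | Pre-trace time | Inference time (stable-fast) | Inference time (non-optimized) |
| -------------- | -------------- | ---------------------------- | ------------------------------ |
| text-to-image  |                |                              |                                |
| image-to-image |                |                              |                                |
| upscale        |                |                              |                                |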
Implementation Tips
Utilize Earlier Work: Don't forget to review the code already provided by @rickstaa in an earlier pull request and the pre-tracing code already present in the image-to-video pipeline. Both can provide valuable insights and a foundation for your work.
How to Apply
Express Your Interest: Comment on this issue to indicate your interest and explain why you're the ideal candidate for the task.
Wait for Review: Our team will review expressions of interest and select the best candidate.
Get Assigned: If selected, we'll assign the GitHub issue to you.
Start Working: Dive into your task! If you need assistance or guidance, comment on the issue or join the discussions in the #🛋│developer-lounge channel on our Discord server.
Submit Your Work: Create a pull request in the relevant repository and request a review.
Notify Us: Comment on this GitHub issue when your pull request is ready for review.
Receive Your Bounty: We'll arrange the bounty payment once your pull request is approved.
Gain Recognition: Your valuable contributions will be showcased in our project's changelog.
Thank you for your interest in contributing to our project 💛!
Warning
Please wait for the issue to be assigned to you before starting work. To prevent duplication of effort, submissions for unassigned issues will not be accepted.