
Make responses start faster by removing unnecessary cleanup calls #6625

Merged: 2 commits into dev on Jan 1, 2025

Conversation

oobabooga (Owner)

The clear_torch_cache() function takes about 0.08 seconds to run because it includes a call to gc.collect(). Previously, this function was called twice before each generation to address memory leaks in Transformers during text streaming.

Changes made:

  1. Removed all clear_torch_cache() calls for loaders other than Transformers, saving approximately 0.2 seconds per generation and making replies start faster both in the UI and the API.
  2. Reduced the calls to clear_torch_cache() for Transformers from two to one, cutting the time spent on this function by half.
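For context, a cache-clearing helper of this kind typically pairs a full Python garbage collection with a release of PyTorch's cached GPU memory, with `gc.collect()` accounting for most of the cost described above. A minimal sketch (hypothetical; the project's actual implementation may differ, and the `torch` calls are guarded so the snippet runs without a GPU):

```python
import gc
import time

def clear_torch_cache():
    """Sketch of a cache-clearing helper (hypothetical, not the project's exact code)."""
    gc.collect()  # full garbage collection pass; the dominant cost (~0.08 s per call in this PR)
    try:
        import torch
        if torch.cuda.is_available():
            torch.cuda.empty_cache()  # return cached allocator memory to the GPU driver
    except ImportError:
        pass  # torch not installed; nothing GPU-side to clear

# Rough timing of a single call, mirroring the measurement described above
start = time.perf_counter()
clear_torch_cache()
elapsed = time.perf_counter() - start
print(f"clear_torch_cache took {elapsed:.4f} s")
```

Dropping the second of two such calls per generation, as this PR does, roughly halves that fixed pre-generation overhead for the Transformers loader and removes it entirely for loaders that never needed it.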

@oobabooga oobabooga merged commit 7b88724 into dev Jan 1, 2025
@oobabooga oobabooga deleted the faster-reply branch January 5, 2025 14:59
jfmherokiller pushed a commit to jfmherokiller/text-generation-webui that referenced this pull request Jan 15, 2025