forked from mlc-ai/mlc-llm
Commit
This commit does not belong to any branch on this repository, and may belong to a fork outside of the repository.
* add new model for evaluating logits over multiple queries using KV cache
* add test
* clean
* Only the number of past tokens is needed
* fix build
* fix
* correctly handle num_past_tokens > sliding_window case
* wip
* black
* wip
* wip
* remove cancel callback in eviction
* Create MultiQueryDecodeRequest
* only the number of past tokens is needed
* wip
* wip
* wip
* fix
* wip
* wip
* wip
* wip
* working?
* remove debug print
* multi gpu works
* fixed sliding window logic
* remove debug print
* clean and fix
* mypy
* generate signature update
* more
* fix mypy
* fix
* fix
* mypy fix
* refactor
* fix
* rename
* Disallow preempting when a request has generated more than max_num_batched_tokens
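For context, the core of this commit is a request type that evaluates logits for several query tokens in a single decode step against an existing KV cache, tracking only how many past tokens the cache already holds and clamping that count when it exceeds the sliding window. The sketch below is a minimal, hypothetical illustration: `MultiQueryDecodeRequest` is the name used in the commit, but its fields, the `effective_past_tokens` helper, and the model call are assumptions for illustration, not the actual mlc-llm serve API.

```python
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class MultiQueryDecodeRequest:
    """Hypothetical request: score several query tokens against an existing
    KV cache. Only the *number* of past tokens is stored; the cached
    keys/values themselves live in the KV cache."""
    sequence_id: int
    query_token_ids: List[int]   # tokens whose logits are wanted in one pass
    num_past_tokens: int         # how many tokens are already in the cache


def effective_past_tokens(num_past_tokens: int, sliding_window: Optional[int]) -> int:
    """Clamp the visible history when a sliding-window model has already
    evicted older entries (the num_past_tokens > sliding_window case)."""
    if sliding_window is None:
        return num_past_tokens
    return min(num_past_tokens, sliding_window)


def evaluate_multi_query_logits(model, kv_cache,
                                request: MultiQueryDecodeRequest,
                                sliding_window: Optional[int] = None):
    """Run one forward pass over all query tokens, attending to the
    (possibly window-limited) cached context, and return per-token logits."""
    past = effective_past_tokens(request.num_past_tokens, sliding_window)
    # Hypothetical model call: positions continue from the cached context.
    positions = list(range(past, past + len(request.query_token_ids)))
    return model.forward(request.query_token_ids, positions, kv_cache)
```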
Showing 7 changed files with 498 additions and 185 deletions.