
“Deploying the 2B model with TensorRT-LLM & Triton Server” — is this deployment a synchronous interface? It seems it cannot be called asynchronously? #132

Open
dongteng opened this issue Mar 19, 2024 · 1 comment

Comments

@dongteng
Concurrent requests block each other: the answer for one request only comes out after the previous request has finished.
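The behavior described above can be reproduced with a minimal, self-contained sketch. The `fake_infer` function below is a hypothetical stand-in for the deployed endpoint, not the real Triton client API: a lock simulates a server that handles one request at a time, so two requests issued concurrently still complete back-to-back.

```python
import threading
import time
from concurrent.futures import ThreadPoolExecutor

# Stand-in for the deployed model endpoint (hypothetical, not the Triton API).
# The lock simulates a server that processes one request at a time.
_server_lock = threading.Lock()

def fake_infer(prompt: str) -> str:
    with _server_lock:       # concurrent requests queue up here
        time.sleep(0.2)      # pretend this is generation latency
        return f"answer for {prompt!r}"

start = time.monotonic()
with ThreadPoolExecutor(max_workers=2) as pool:
    results = list(pool.map(fake_infer, ["q1", "q2"]))
elapsed = time.monotonic() - start

# With serialized handling, total time is roughly 2x the per-request
# latency, even though the client fired both requests concurrently.
print(results, round(elapsed, 1))
```

Issuing the requests from an async client does not help here: if the server serializes execution, the second response is still delayed until the first finishes.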

@zhaoxudong01
Collaborator

In-flight batching is not supported yet; we will keep updating the project.
