When I ran generation repeatedly with the following code, an error occurred:
```python
from vllm import LLM, SamplingParams

llm = LLM("EleutherAI/polyglot-ko-12.8b", tensor_parallel_size=1, seed=42)
sampling_params = SamplingParams(max_tokens=200)

while True:
    text = input("질문을 입력해주세요: ")  # "Please enter your question: "
    # Build the prompt in polyglot-ko's "### 질문:/### 답변:" (Question/Answer) format.
    formatted_input = "### 질문:" + text + "\n\n### 답변:"
    data = llm.generate(prompts=formatted_input, sampling_params=sampling_params)
    texts = [output.text for output in data[0].outputs]
    print(texts)
```
I changed the code to read the vocabulary size from the tokenizer instead of from config.json, but I'm still getting the error.
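For reference, the mismatch can be checked directly. This is a minimal sketch of what I mean (the `AutoConfig` comparison is my illustration, not code from the failing run):

```python
from transformers import AutoConfig, AutoTokenizer

model_id = "EleutherAI/polyglot-ko-12.8b"
config = AutoConfig.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# config.json can report a padded embedding size that is larger than the
# number of tokens the tokenizer actually knows; a mismatch like this is
# one possible cause of decode errors during generation.
print("config.json vocab_size:", config.vocab_size)
print("tokenizer vocab size:  ", len(tokenizer))
```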
Is there a way to resolve this error?