Why do some models get a '500 internal server error'? #2068
Unanswered · Firmopython asked this question in Q&A
Replies: 0 comments
Hi everyone,
I am using privategpt with the Gradio UI and I have experimented with running different models via Ollama. So far the ones that seem to work fine are llama3.1 and mistral, but when I tried mistral-nemo and gemma2, both returned a "500 internal server error" when I asked questions with the LLM Chat (no context from the files).
I guess the error might be related to /api/chat.. however, please treat me like the most ignorant person ever when giving your answers, since I am not exactly an expert at coding but rather an amateur who uses tools like privategpt mostly for work and research.
I will attach here the server.log from Ollama when I got the error last time with gemma2.
server.log
(again, with the mistral model or with llama3.1 I get no errors and everything runs as it should)
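In case it helps narrow things down, here is a minimal sketch of how one could query Ollama's /api/chat endpoint directly, bypassing privategpt, to see whether the 500 comes from Ollama itself. It assumes Ollama is running on its default address (http://localhost:11434) and that the model name matches what `ollama list` shows; adjust both if your setup differs.

```python
import json
import urllib.request
import urllib.error

# Assumed default Ollama address; change if your server runs elsewhere.
OLLAMA_URL = "http://localhost:11434/api/chat"

def build_payload(model, prompt):
    """Build the JSON body that /api/chat expects (non-streaming)."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,
    }

def ask(model, prompt):
    """Send one chat message and print the HTTP status plus a snippet of the reply."""
    data = json.dumps(build_payload(model, prompt)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=data, headers={"Content-Type": "application/json"}
    )
    try:
        with urllib.request.urlopen(req) as resp:
            body = json.load(resp)
            print(resp.status, body["message"]["content"][:80])
    except urllib.error.HTTPError as e:
        # A 500 here would mean the error originates in Ollama, not privategpt.
        print(e.code, e.read().decode()[:200])

if __name__ == "__main__":
    ask("gemma2", "Hello")
```

If the same model returns a 500 here too, the problem is on the Ollama side (the server.log should then show the underlying cause); if it answers fine, the issue is more likely in how privategpt calls the endpoint.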
Thanks in advance!!