
"No model selected" error when using "Custom API" #931

Open

davedawkins opened this issue Dec 18, 2024 · 12 comments

@davedawkins

Obsidian 1.7.7
Smart Connections 2.3.45
MacOS Sequoia

[screenshot]

Custom API settings

[screenshot]

@davedawkins (Author)

I did try making sure the OpenAI settings had a model selected first, even though that shouldn't be related.

@brianpetro (Owner)

@davedawkins does the validation warning persist when you hit the "+" (plus) button to create a new chat?

Do any errors occur when you try to submit a chat message?

Thanks for the screenshots and your help in solving this 🌴

@davedawkins (Author) commented Dec 18, 2024

Yes, each new chat window gives the error.

If I then go on to try and enter a request:

[screenshot]

@davedawkins (Author)

However, there is no evidence in the DevTools Network tab that any connection is attempted.

@davedawkins (Author)

[screenshot]

@brianpetro (Owner)

@davedawkins thanks for the screenshots.

The lack of requests in the Network tab is probably because the plugin uses Obsidian internals to make the request, and those requests don't show up in DevTools.

It might be worth trying the "Enable CORS" setting (currently toggled off) in the LM Studio settings.

If that doesn't work, I'll probably need to get an instance running locally myself to debug.

🌴
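
For reference, here is a minimal sketch of why the request may not appear in DevTools. Obsidian plugins commonly use Obsidian's `requestUrl` helper, which routes HTTP through the app's native layer instead of the browser's `fetch`, so requests never show up in the Network tab and aren't subject to the renderer's CORS policy. The endpoint, model name, and function below are illustrative, not Smart Connections' actual code:

```ts
import { requestUrl } from "obsidian";

// Illustrative only: requests made via requestUrl go through Obsidian's
// native HTTP layer, so they never appear in the DevTools Network tab
// and bypass the renderer's CORS checks.
async function chatCompletion(prompt: string): Promise<unknown> {
  const res = await requestUrl({
    url: "http://localhost:1234/v1/chat/completions", // LM Studio's default local endpoint (assumed)
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "meta-llama-3.1-8b-instruct",
      messages: [{ role: "user", content: prompt }],
    }),
  });
  return res.json;
}
```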

@brianpetro (Owner)

@davedawkins I added an LM Studio adapter in the latest version. This way you don't have to configure the custom API endpoint. It also automatically imports available models. Let me know how it works 🌴
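
For anyone curious how automatic model import can work: LM Studio exposes an OpenAI-compatible `GET /v1/models` endpoint that lists the currently loaded models. A rough sketch, assuming LM Studio's local server is running on its default port (1234); the function name is illustrative:

```ts
// Sketch: fetch the list of loaded models from LM Studio's
// OpenAI-compatible API on its default port.
async function listLmStudioModels(): Promise<string[]> {
  const res = await fetch("http://localhost:1234/v1/models");
  if (!res.ok) throw new Error(`LM Studio returned ${res.status}`);
  const data = await res.json();
  // OpenAI-style response shape: { data: [{ id: "model-name", ... }, ...] }
  return data.data.map((m: { id: string }) => m.id);
}
```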

@davedawkins (Author) commented Dec 19, 2024 via email

@davedawkins (Author)

YES!!
The LM Studio adapter in the new version works (with CORS toggled on). Thank you. I will find a way to donate.

@brianpetro (Owner)

@davedawkins happy to hear that it's working 😊

Which model are you using? I was having trouble triggering the lookup action on the models I was testing (llama-3.2-3b, Qwen-15b).
🌴

@davedawkins (Author)

Model: meta-llama-3.1-8b-instruct

I had to tell it to summarize my test page before it would answer questions about a subject I introduced in that page. Until then it would give me unrelated answers. I thought it was because I wasn't prompting correctly.

I'm still learning about LLMs, and my understanding of how embeddings work is minimal. My guess is that the embeddings aren't being "sent" to LM Studio. If that's right, I'm using a local "embedding" model in Smart Connections and should use a "normal" LLM in LM Studio.

See screenshots below

My Chat Log

[screenshot]

Configuration for embeddings

[screenshot]

Configuration for Smart Chat

[screenshot]

Configuration for LM Studio

[screenshot]
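
For context, that guess matches the usual retrieval-augmented flow: the embedding model runs locally to rank notes by similarity to the question, and only the text of the top-matching notes is sent to the chat model in LM Studio; the embedding vectors themselves never leave the plugin. A simplified sketch of the idea, with all names illustrative rather than Smart Connections' real internals:

```ts
// Illustrative RAG flow: embeddings stay local; only retrieved note text
// is sent to the chat model. None of these names are real plugin APIs.
type Note = { path: string; text: string; vec: number[] };

function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

async function answerFromNotes(
  question: string,
  notes: Note[],
  embedLocally: (text: string) => Promise<number[]>, // local embedding model
  chat: (prompt: string) => Promise<string>,         // chat model in LM Studio
): Promise<string> {
  const qVec = await embedLocally(question);
  const top = notes
    .map((n) => ({ n, score: cosineSimilarity(qVec, n.vec) }))
    .sort((a, b) => b.score - a.score)
    .slice(0, 5);                                    // five most relevant notes
  const context = top.map(({ n }) => `# ${n.path}\n${n.text}`).join("\n\n");
  // Only the retrieved text (never the vectors) is sent to LM Studio.
  return chat(`Answer using these notes:\n\n${context}\n\nQuestion: ${question}`);
}
```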

davedawkins reopened this Dec 19, 2024
@brianpetro (Owner)

@davedawkins thanks for the follow-up.

The chat should've triggered lookup after the first "based on my notes" message. This is the issue I was referring to before.

The lookup tool is included in the request sent to LM Studio, along with both a parameter (tool_choice) that's supposed to force using the tool and a system prompt that further requests tool usage (built specifically for the adapter, based on a similar implementation for Ollama), yet the model still fails to call the tool properly.

This probably has to do with the model not natively supporting tool use. I tried finding models in LM Studio that explicitly support tool use, but it wasn't very straightforward (I couldn't find any without referencing outside resources).

Mentioning notes specifically in messages, like in the second message in the screenshots, should always work because no tool calling is required.

If you can find a local model that "natively" (this is the language used in the LM Studio docs) supports tools, then it might be more likely to call the tool as expected.

Besides that, there would need to be special logic added for non-tool-calling models. This is something I decided against adding for the time being, since tool calling is becoming more ubiquitous 🌴
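
For readers unfamiliar with forced tool calls: in the OpenAI-compatible request format that LM Studio accepts, the tool definition and the forcing parameter look roughly like this. The "lookup" schema below is a guess for illustration, not Smart Connections' actual tool definition:

```ts
// Rough shape of an OpenAI-compatible request that tries to force a tool call.
// Whether the model honors tool_choice depends on whether it natively
// supports tool use. The "lookup" schema here is illustrative.
const body = {
  model: "meta-llama-3.1-8b-instruct",
  messages: [
    { role: "system", content: "Use the lookup tool to retrieve relevant notes before answering." },
    { role: "user", content: "Based on my notes, what is ...?" },
  ],
  tools: [
    {
      type: "function",
      function: {
        name: "lookup",
        description: "Semantic search over the user's notes",
        parameters: {
          type: "object",
          properties: { query: { type: "string" } },
          required: ["query"],
        },
      },
    },
  ],
  // Forces the model to call "lookup" -- if it supports tool calling at all.
  tool_choice: { type: "function", function: { name: "lookup" } },
};
```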
