
[Feature Request] Add support for LiteLLM #2690

Open
tan-yong-sheng opened this issue Jul 20, 2024 · 1 comment
Labels: enhancement (New feature or request)

@tan-yong-sheng

Is your feature request related to a problem? Please describe

Hi admin,

I originally commented at Filimoa/open-parse#10 (comment), but I have since realized this is a better place for the suggestion:

I would like to suggest adding support for LiteLLM.

LiteLLM is an open-source project that unifies the API calls of 100+ LLMs (including Anthropic, Cohere, Ollama, and others) behind an OpenAI-compatible format: https://github.com/BerriAI/litellm

I believe integrating LiteLLM would be a fantastic enhancement, because users could switch to their preferred embedding model API for semantic processing instead of being limited to OpenAI's. Thanks.
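For instance, switching the embedding backend could then look like this minimal sketch (assuming the litellm Python package; the model string and key are illustrative placeholders taken from LiteLLM's embedding docs, not from this project):

import os
from litellm import embedding

os.environ["OPENAI_API_KEY"] = "your-openai-key"  # placeholder

# The same call shape works for other providers by changing the model string,
# e.g. "cohere/embed-english-v3.0" for Cohere.
response = embedding(
    model="text-embedding-ada-002",
    input=["good morning from litellm"],
)
print(response)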

Describe the solution you'd like

For example, if they use the litellm Python client without self-hosting the LiteLLM proxy, their code could look like this (which is very consistent with the OpenAI Python client format):

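(The original screenshot is not preserved; below is a minimal sketch reconstructed from the LiteLLM README, with illustrative keys and model names.)

from litellm import completion
import os

# Provider keys are read from the environment (placeholders here)
os.environ["OPENAI_API_KEY"] = "your-openai-key"
os.environ["COHERE_API_KEY"] = "your-cohere-key"

messages = [{"role": "user", "content": "Hello, how are you?"}]

# OpenAI call
response = completion(model="gpt-3.5-turbo", messages=messages)

# Cohere call -- same function, only the model string changes
response = completion(model="command-nightly", messages=messages)
print(response)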

Reference: https://github.com/BerriAI/litellm

If someone self-hosts the LiteLLM proxy, they can call LLM APIs in an OpenAI-compatible format via the litellm proxy; the code could look as follows:

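(That screenshot is not preserved either; as a stand-in, here is a sketch based on the azure_ai reference linked below, showing provider-specific params such as max_tokens and temperature passed through LiteLLM. The model name and environment variable names follow that page and are illustrative.)

import os
import litellm

os.environ["AZURE_AI_API_KEY"] = "your-azure-ai-key"        # placeholder
os.environ["AZURE_AI_API_BASE"] = "your-azure-ai-endpoint"  # placeholder

# Provider-specific params are passed straight through to the provider
response = litellm.completion(
    model="azure_ai/command-r-plus",
    messages=[{"role": "user", "content": "What is the meaning of life?"}],
    max_tokens=20,
    temperature=0.5,
)
print(response)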

Reference: https://litellm.vercel.app/docs/providers/azure_ai#passing-additional-params---max_tokens-temperature

As you can see, someone who self-hosts the litellm proxy only needs to change the OpenAI client's base URL; all the other code stays the same as with OpenAI:

Reference: https://litellm.vercel.app/docs/proxy/user_keys
import openai
client = openai.OpenAI(
    api_key="anything",
    base_url="http://0.0.0.0:4000"
)

# request sent to model set on litellm proxy, `litellm --model`
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {
            "role": "user",
            "content": "this is a test request, write a short poem"
        }
    ],
    extra_body={ # pass in any provider-specific param, if not supported by openai, https://docs.litellm.ai/docs/completion/input#provider-specific-params
        "metadata": { # 👈 use for logging additional params (e.g. to langfuse)
            "generation_name": "ishaan-generation-openai-client",
            "generation_id": "openai-client-gen-id22",
            "trace_id": "openai-client-trace-id22",
            "trace_user_id": "openai-client-user-id2"
        }
    }
)

print(response)

There are also quite a few projects that use LiteLLM to call models from different providers: https://litellm.vercel.app/docs/project

I hope you will consider this. Thanks.

Related component

Libraries

Describe alternatives you've considered

No response

Additional context

No response

@tan-yong-sheng tan-yong-sheng added enhancement New feature or request untriaged labels Jul 20, 2024
@dblock dblock transferred this issue from opensearch-project/OpenSearch Jul 20, 2024
@b4sjoo b4sjoo moved this to Backlog in ml-commons projects Jul 30, 2024
@b4sjoo b4sjoo moved this from Backlog to Untriaged in ml-commons projects Jul 30, 2024
@b4sjoo b4sjoo moved this from Untriaged to Backlog in ml-commons projects Jul 30, 2024
dblock (Member) commented Aug 12, 2024

[Catch All Triage - 1, 2, 3]

@dblock dblock removed the untriaged label Aug 12, 2024