
Adding Unify AI Support #1446

Open

paras55 wants to merge 3 commits into main
Conversation

@paras55 commented Aug 12, 2024

Features

  • Adds Unify AI support, enabling use of all AI models with a single API key

Feature Docs

Influence

Result

Other

@codecov-commenter commented

⚠️ Please install the Codecov GitHub app to ensure uploads and comments are reliably processed by Codecov.

Codecov Report

Attention: Patch coverage is 1.49254% with 66 lines in your changes missing coverage. Please review.

Project coverage is 62.35%. Comparing base (ab846f6) to head (25bcaa2).

| Files | Patch % | Lines |
| --- | --- | --- |
| metagpt/provider/unify.py | 0.00% | 66 Missing ⚠️ |


Additional details and impacted files
```
@@            Coverage Diff             @@
##             main    #1446      +/-   ##
==========================================
- Coverage   62.59%   62.35%   -0.24%     
==========================================
  Files         287      288       +1     
  Lines       17589    17656      +67     
==========================================
+ Hits        11009    11010       +1     
- Misses       6580     6646      +66     
```


@paras55 (Author) commented Aug 23, 2024

@geekan, can you please approve this PR? I added support for Unify AI, which lets MetaGPT users access all open-source and closed-source LLM models with just one API key.

@geekan geekan requested a review from better629 October 11, 2024 09:17
```yaml
api_type: "unify"
model: "llama-3-8b-chat@together-ai"  # get a list of models here: https://docs.unify.ai/python/utils#list-models
base_url: "https://api.unify.ai/v0"
api_key: "Enter your Unify API key here"  # get your API key from https://console.unify.ai
```

Collaborator review comment:

Keep a placeholder value such as YOUR_API_KEY here.
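The reviewer's suggestion (shipping a placeholder like YOUR_API_KEY) pairs naturally with a fail-fast startup check. A minimal sketch, assuming the config has been loaded into a dict shaped like the YAML above; the helper name is hypothetical:

```python
PLACEHOLDERS = {"YOUR_API_KEY", "Enter your Unify API key here"}

def check_api_key(config: dict) -> None:
    """Raise early if the config still holds a placeholder API key.

    Hypothetical helper: `config` mirrors the YAML fragment above as a dict.
    """
    key = config.get("api_key")
    if not key or key in PLACEHOLDERS:
        raise ValueError(
            "api_key is unset or still a placeholder; "
            "get a key from https://console.unify.ai"
        )

check_api_key({"api_key": "sk-example-key"})  # passes silently
```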

```python
self.model = self.config.model
self.client = Unify(
    api_key=self.config.api_key,
    endpoint=f"{self.config.model}@{self.config.provider}",
```

Collaborator review comment:

There is no `provider` field in LLMConfig.
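Since LLMConfig has no `provider` field, one option is to derive the provider from the configured model string itself (Unify endpoints take the form `model@provider`, as in the YAML example). A sketch with a hypothetical helper:

```python
def split_endpoint(model: str) -> tuple[str, str]:
    """Split a Unify endpoint string "model@provider" into its parts.

    Hypothetical helper: avoids needing a `provider` field on LLMConfig
    by reusing the provider suffix already present in the model string.
    """
    if "@" not in model:
        raise ValueError(f"expected 'model@provider', got {model!r}")
    name, _, provider = model.partition("@")
    return name, provider

print(split_endpoint("llama-3-8b-chat@together-ai"))
```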

Collaborator review comment:

Suggest adding only an async client.
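An async-only provider, as the reviewer suggests, would create a single async client and expose only coroutine methods. A sketch of the shape, using a stand-in stub in place of the real Unify async client (the stub and class names here are hypothetical, not the unifyai package API):

```python
import asyncio

class AsyncUnifyStub:
    """Hypothetical stand-in for Unify's async client."""
    def __init__(self, api_key: str, endpoint: str):
        self.endpoint = endpoint

    async def generate(self, messages):
        # A real client would call the Unify API; the stub just echoes.
        return f"echo from {self.endpoint}"

class UnifyLLM:
    def __init__(self, api_key: str, endpoint: str):
        # Async-only: no blocking sync client is created at all.
        self.async_client = AsyncUnifyStub(api_key=api_key, endpoint=endpoint)

    async def acompletion(self, messages):
        return await self.async_client.generate(messages)

reply = asyncio.run(UnifyLLM("key", "llama-3-8b-chat@together-ai").acompletion([]))
print(reply)
```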

```python
def _const_kwargs(self, messages: list[dict], stream: bool = False) -> dict:
    return {
        "messages": messages,
        "max_tokens": self.config.max_token,
        "stream": stream,
    }
```
```python
        return resp
    return resp.choices[0].message.content if resp.choices else ""

def _update_costs(self, usage: dict):
```

Collaborator review comment:

No need to add `_update_costs`; it is already implemented in BaseLLM.

```python
async def _achat_completion(self, messages: list[dict], timeout=USE_CONFIG_TIMEOUT) -> ChatCompletion:
    try:
        response = await self.async_client.generate(
            messages=messages,
```

Collaborator review comment:

`_const_kwargs` is defined but not used here?
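Routing the call through `_const_kwargs`, as the reviewer implies, keeps request defaults in one place. A runnable sketch with a stub async client (the stub and the `max_token` value are stand-ins, not MetaGPT internals):

```python
import asyncio

class _StubAsyncClient:
    """Hypothetical stand-in for the Unify async client."""
    async def generate(self, **kwargs):
        return kwargs  # echo the kwargs so we can inspect them

class UnifyLLM:
    max_token = 512  # placeholder for self.config.max_token

    def __init__(self):
        self.async_client = _StubAsyncClient()

    def _const_kwargs(self, messages: list[dict], stream: bool = False) -> dict:
        return {"messages": messages, "max_tokens": self.max_token, "stream": stream}

    async def _achat_completion(self, messages: list[dict]):
        # Build the request via _const_kwargs instead of repeating arguments inline.
        return await self.async_client.generate(**self._const_kwargs(messages))

out = asyncio.run(UnifyLLM()._achat_completion([{"role": "user", "content": "hi"}]))
print(out["max_tokens"])  # 512
```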


```python
async def _achat_completion_stream(self, messages: list[dict], timeout=USE_CONFIG_TIMEOUT) -> str:
    try:
        stream = self.client.generate(
```

Collaborator review comment:

This should use `async_client`, not the sync client.
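Inside an async streaming method, the stream should come from the async client and be consumed with `async for`. A sketch using a stub async generator in place of the real client (names are hypothetical):

```python
import asyncio

async def _stub_stream(chunks):
    """Hypothetical stand-in for async_client.generate(stream=True)."""
    for chunk in chunks:
        yield chunk

async def achat_completion_stream(messages) -> str:
    collected = []
    # Consume the async stream chunk by chunk, as the real method would.
    async for chunk in _stub_stream(["Hel", "lo"]):
        collected.append(chunk)
    return "".join(collected)

result = asyncio.run(achat_completion_stream([]))
print(result)  # Hello
```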

```python
            "finish_reason": "stop",
        }],
        usage=CompletionUsage(
            prompt_tokens=count_message_tokens(messages, self.model),
```

Collaborator review comment:

The main branch currently has no `count_message_tokens` function. Also, suggest supporting common Unify models, not only those from OpenAI.
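When no exact tokenizer is available for a model, a rough model-agnostic estimate can stand in. This sketch uses the common ~4-characters-per-token heuristic; the helper is hypothetical and is not a substitute for a real tokenizer:

```python
def estimate_message_tokens(messages: list[dict]) -> int:
    """Rough, model-agnostic token estimate (~4 chars per token heuristic).

    Hypothetical fallback for providers whose tokenizer is unknown;
    real accounting should use the provider's tokenizer when possible.
    """
    chars = sum(len(m.get("content", "")) for m in messages)
    return max(1, chars // 4)

print(estimate_message_tokens([{"role": "user", "content": "hello world!"}]))  # 3
```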

```python
def __init__(self, config: LLMConfig):
    self.config = config
    self._init_client()
    self.cost_manager = CostManager(token_costs=OPENAI_TOKEN_COSTS)  # using OpenAI costs, as Unify is compatible
```

Collaborator review comment:

What about non-OpenAI models?
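One way to handle non-OpenAI models is a per-model cost table keyed on the model name, with a safe fallback. A sketch; the table, figures, and helper are placeholders, not real prices:

```python
# Hypothetical cost table; figures are placeholders, not real prices.
TOKEN_COSTS = {
    "gpt-4o": {"prompt": 5.0, "completion": 15.0},          # per 1M tokens
    "llama-3-8b-chat": {"prompt": 0.2, "completion": 0.2},  # per 1M tokens
}
DEFAULT_COSTS = {"prompt": 0.0, "completion": 0.0}

def costs_for(model: str) -> dict:
    # Unify endpoints look like "model@provider"; strip the provider first,
    # then fall back to zero cost for unknown models instead of failing.
    name = model.split("@", 1)[0]
    return TOKEN_COSTS.get(name, DEFAULT_COSTS)

print(costs_for("llama-3-8b-chat@together-ai")["prompt"])  # 0.2
```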

@geekan (Owner) commented Oct 20, 2024

So is it a similar component to oneapi? I don't see the need to write another class independent of OpenAI. It seems that directly registering an LLMType is enough?
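geekan's suggestion, registering an LLMType instead of writing a new provider class, might look like this sketch. The registry, enum, and provider class are hypothetical stand-ins for MetaGPT's internals, not its actual API:

```python
from enum import Enum

class LLMType(Enum):
    OPENAI = "openai"
    UNIFY = "unify"  # hypothetical new member

PROVIDER_REGISTRY: dict[LLMType, type] = {}

def register_provider(llm_type: LLMType):
    """Decorator mapping an LLMType to the provider class that serves it."""
    def deco(cls):
        PROVIDER_REGISTRY[llm_type] = cls
        return cls
    return deco

class OpenAILLM:
    """Stand-in for MetaGPT's OpenAI provider."""

# If Unify's endpoint is OpenAI-compatible, reuse the OpenAI provider class
# and let base_url/api_key in the config do the rest.
register_provider(LLMType.UNIFY)(OpenAILLM)

print(PROVIDER_REGISTRY[LLMType.UNIFY].__name__)  # OpenAILLM
```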
