Introducing OpenAdapter: Now benchmark any OpenAI-Model on PARROT using OpenAdapter #11

Open · wants to merge 17 commits into main
Conversation

@HarshaLLM (Collaborator) commented Oct 14, 2024

OpenAdapter for benchmarking OpenAI models

This PR adds the new adapter and test cases written for OpenAdapter, making it possible to integrate and test OpenAI's models through a custom module.


File Structure

  • __init__.py: Initializes the package and serves as the entry point for parrot.
  • parrot_openai.py: Contains the adapter interface for connecting with OpenAI.
  • test_cases.py: A set of test cases to validate the OpenAdapter functionality and ensure it performs as expected.

Adapter: parrot_openai.OpenAdapter

parrot.parrot_openai houses the main adapter, OpenAdapter, designed to simplify interactions with OpenAI’s models. Here’s a quick overview of the features:

  1. Setting Up OpenAI API:

    • OpenAdapter takes care of establishing the connection with OpenAI's API using your credentials. It takes the same parameters as OllamaAdapter, plus an api_key from OpenAI.
    • Built-in functions handle authentication and errors, so it's straightforward to use without worrying about the setup.
  2. Making Requests:

    • OpenAdapter builds and sends each request to OpenAI, then processes and formats the response internally.
  3. Response Handling:

    • The responses are processed and added to the Datasets object as a new column (see the interface sketch below).
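
For reference, here is a rough sketch of what this interface looks like. It is a minimal sketch, not the actual code in parrot_openai.py: aside from api_key, the constructor parameter names and the internal OpenAI client call are assumptions made for illustration.

```python
# Illustrative sketch of the OpenAdapter interface (parameter names other
# than api_key are assumptions, not the actual signature).
from openai import OpenAI


class OpenAdapter:
    def __init__(self, dataset, model_name, api_key):
        # Same style of parameters as OllamaAdapter, plus the OpenAI api_key.
        self.dataset = dataset
        self.model_name = model_name
        # Authentication is handled here; errors (e.g. a bad key) surface
        # when the client is first used.
        self.client = OpenAI(api_key=api_key)

    def perform_inference(self, prompt):
        # Send each row's prompt to the chosen model, collect the completions,
        # attach them to the dataset as a new candidate-response column,
        # and return the modified dataset.
        ...
```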

Usage: To use this adapter, initialize OpenAdapter and call the perform_inference function with a specific prompt. This returns the modified Datasets object containing the latest data_frame with candidate responses.
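
A minimal usage sketch under the assumptions above (the dataset placeholder and model name are illustrative, not part of the PR):

```python
from parrot.parrot_openai import OpenAdapter

# Placeholder: a PARROT Datasets object built earlier in the pipeline.
# (How this object is constructed is outside the scope of this sketch.)
datasets = ...

adapter = OpenAdapter(datasets, model_name="gpt-4o-mini", api_key="sk-...")

# perform_inference returns the modified Datasets object whose latest
# data_frame now carries a column of candidate responses.
datasets = adapter.perform_inference(prompt="Answer the question concisely.")
```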


Test Cases: test_cases.py

New unit tests introduced for OpenAdapter (an illustrative sketch follows the list):

  1. Basic Functionality Checks:

    • Simple tests check that, given valid credentials, OpenAdapter can send requests and receive responses without errors.
    • Ensures the adapter communicates correctly with OpenAI's API.
  2. Error Handling Tests:

    • Edge cases, such as invalid API keys or request timeouts, are tested here.
  3. Response Validity:

    • Tests verify that the responses are correctly parsed and meet expected output formats.
    • Makes sure the adapter stays compatible with OpenAI's response structures, even if they change.
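
As an illustration, one of the error-handling tests might look roughly like the sketch below; the test name, patch target, and constructor arguments are assumptions and need not match what test_cases.py actually does.

```python
import unittest
from unittest.mock import MagicMock, patch

from parrot.parrot_openai import OpenAdapter


class TestOpenAdapterErrors(unittest.TestCase):
    # Patch target assumes parrot_openai imports the OpenAI client directly;
    # adjust to the real import path if it differs.
    @patch("parrot.parrot_openai.OpenAI")
    def test_invalid_api_key_raises(self, mock_openai):
        # Error-handling check: an invalid key should surface a clear error
        # instead of silently producing an empty response column.
        mock_openai.side_effect = Exception("invalid api key")
        with self.assertRaises(Exception):
            OpenAdapter(MagicMock(), model_name="gpt-4o-mini",
                        api_key="not-a-real-key")


if __name__ == "__main__":
    unittest.main()
```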

@HarshaLLM HarshaLLM added the documentation, enhancement, adapters, and testing labels Oct 14, 2024
@HarshaLLM HarshaLLM self-assigned this Oct 14, 2024