
[python] Add run_with_dependencies to API #664

Closed
rholinshead opened this issue Dec 29, 2023 · 12 comments · Fixed by #923

Comments

@rholinshead
Contributor

In TypeScript, the AIConfig API has `runWithDependencies`. In Python, run_with_dependencies is supported by passing it as a kwarg to the `run` method. Instead of supporting this arbitrary kwargs data, we should add a `run_with_dependencies` implementation as a method on the config to be consistent with TypeScript.

@rossdanlm
Contributor

rossdanlm commented Dec 29, 2023

See comment thread in #661 (comment) for details

There are two parts to this:

  1. Instead of adding run_with_dependencies to the API, we should just reuse the run() method but pass in an explicit boolean argument (run_with_dependencies) instead of relying on a kwarg
  2. We should also delete the kwargs argument from the run() API method
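For illustration, a minimal sketch of what the updated run() signature could look like under this proposal (the flag's name comes from this thread; its placement, its default, and the surrounding class stub are assumptions rather than the final API):

```python
from typing import Optional


class AIConfigRuntime:  # stub for illustration only
    async def run(
        self,
        prompt_name: str,
        params: Optional[dict] = None,
        options: Optional["InferenceOptions"] = None,
        run_with_dependencies: bool = False,  # explicit flag replaces the old kwarg
    ):
        # Resolve and run the prompt; when run_with_dependencies is True,
        # upstream prompts would be executed first.
        ...
```

A callsite would then read `await aiconfig.run("prompt2", params, options, run_with_dependencies=True)` instead of passing the flag through **kwargs.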

@rholinshead
Contributor Author

> Instead of adding run_with_dependencies to the API, we should just reuse the run() method but pass in an explicit boolean argument (run_with_dependencies) instead of relying on a kwarg

Should this be part of inference options instead?

Also, note that this requires the same change on the TypeScript side as well to remain consistent.

@rossdanlm
Contributor

> Should this be part of inference options instead?

I don't think so. Conceptually, inference options (things like setting streaming to true) are more like global settings you want to apply for your entire session, not just a single action. I don't think automatically requiring every prompt to re-run all of its dependencies is desirable.

@sp6370
Contributor

sp6370 commented Jan 3, 2024

I would like to pick this up! 🚀

@rossdanlm
Contributor

cc @sp6370, please follow the instructions for this in #664 (comment).

Right now this function exists in TypeScript as `public async runWithDependencies(`, but I think it's better to remove it and simply make run_with_dependencies an explicit method arg in the run() method.

@sp6370
Contributor

sp6370 commented Jan 4, 2024

@rholinshead Clarification: do both the Python and TypeScript implementations need to be updated?

@rholinshead
Contributor Author

rholinshead commented Jan 4, 2024

> @rholinshead Clarification: do both the Python and TypeScript implementations need to be updated?

@sp6370 Ideally both the Python and TypeScript implementations will match, but it's okay to update the Python implementation first; if you are primarily interested in the Python side, we can create an issue for the TypeScript side to match the Python implementation afterwards.

@rossdanlm's comment above outlines the Python-side changes, and that's all that should be needed to close out this issue.

@sp6370
Contributor

sp6370 commented Jan 5, 2024

> See comment thread in #661 (comment) for details
>
> There are two parts to this:
>
>   1. Instead of adding run_with_dependencies to the API, we should just reuse the run() method but pass in an explicit boolean argument (run_with_dependencies) instead of relying on a kwarg
>   2. We should also delete the kwargs argument from the run() API method

@rossdanlm I think we might not be able to remove kwargs from the function directly.

AIConfigRuntime.run() is dependent on ParameterizedModelParser.run() supporting kwargs. I can add another optional parameter called callback_manager to ParameterizedModelParser.run() to make elimination of kwargs possible.
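For illustration, a sketch of that idea (the positional parameters are inferred from the model_provider.run() callsite visible in the traceback below; the explicit callback_manager parameter and the class stub are hypothetical):

```python
from typing import Optional


class ParameterizedModelParser:  # stub for illustration only
    async def run(
        self,
        prompt,
        aiconfig,
        options=None,
        parameters=None,
        callback_manager: Optional["CallbackManager"] = None,  # hypothetical new param
    ):
        # With callback_manager accepted explicitly (and defaulting to None),
        # AIConfigRuntime.run() could drop its **kwargs forwarding entirely.
        ...
```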

Dependency: https://github.com/sp6370/aiconfig/blob/e47ce1ff59bd383cdda2c2682528d858adc376cd/python/src/aiconfig/Config.py#L271

Effect of directly eliminating kwargs:

pytest -s -v tests/parsers/test_openai_util.py::test_get_output_text
==================================================================================================================== test session starts =====================================================================================================================
platform linux -- Python 3.12.0, pytest-7.4.3, pluggy-1.3.0 -- /home/sp/.conda/envs/aiconfig/bin/python
cachedir: .pytest_cache
hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase(PosixPath('/home/sp/Github/aiconfig/python/.hypothesis/examples'))
rootdir: /home/sp/Github/aiconfig/python
plugins: mock-3.12.0, anyio-4.2.0, asyncio-0.23.3, cov-4.1.0, hypothesis-6.91.0
asyncio: mode=Mode.STRICT
collected 1 item                                                                                                                                                                                                                                             

tests/parsers/test_openai_util.py::test_get_output_text FAILED

========================================================================================================================== FAILURES ==========================================================================================================================
____________________________________________________________________________________________________________________ test_get_output_text ____________________________________________________________________________________________________________________

set_temporary_env_vars = None

    @pytest.mark.asyncio
    async def test_get_output_text(set_temporary_env_vars):
        with patch.object(openai.chat.completions, "create", side_effect=mock_openai_chat_completion):
            config_relative_path = "../aiconfigs/basic_chatgpt_query_config.json"
            config_absolute_path = get_absolute_file_path_from_relative(__file__, config_relative_path)
            aiconfig = AIConfigRuntime.load(config_absolute_path)
    
>           await aiconfig.run("prompt1", {})

tests/parsers/test_openai_util.py:41: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = AIConfigRuntime(name='exploring nyc through chatgpt config', schema_version='latest', metadata=ConfigMetadata(paramete...onfigs/basic_chatgpt_query_config.json', callback_manager=<aiconfig.callback.CallbackManager object at 0x7816e2351b50>)
prompt_name = 'prompt1', params = {}, options = None, kwargs = {}, event = <aiconfig.callback.CallbackEvent object at 0x7816e2351730>
prompt_data = Prompt(name='prompt1', input='Hi! Tell me 10 cool things to do in NYC.', metadata=PromptMetadata(model=ModelMetadata(name='gpt-3.5-turbo', settings={}), tags=None, parameters={}, remember_chat_context=True), outputs=[])
model_name = 'gpt-3.5-turbo', model_provider = <aiconfig.default_parsers.openai.DefaultOpenAIParser object at 0x7816e3932360>

    async def run(
        self,
        prompt_name: str,
        params: Optional[dict] = None,
        options: Optional[InferenceOptions] = None,
        **kwargs,
    ):
        """
        Executes the AI model with the resolved parameters and returns the API response.
    
        Args:
            parameters (dict): The resolved parameters to use for inference.
            prompt_name (str): The identifier of the prompt to be used.
    
        Returns:
            object: The response object returned by the AI-model's API.
        """
        event = CallbackEvent(
            "on_run_start",
            __name__,
            {
                "prompt_name": prompt_name,
                "params": params,
                "options": options,
                "kwargs": kwargs,
            },
        )
        await self.callback_manager.run_callbacks(event)
    
        if not params:
            params = {}
    
        if prompt_name not in self.prompt_index:
            raise IndexError(f"Prompt '{prompt_name}' not found in config, available prompts are:\n {list(self.prompt_index.keys())}")
    
        prompt_data = self.prompt_index[prompt_name]
        model_name = self.get_model_name(prompt_data)
        model_provider = AIConfigRuntime.get_model_parser(model_name)
    
        # Clear previous run outputs if they exist
        self.delete_output(prompt_name)
    
>       response = await model_provider.run(
            prompt_data,
            self,
            options,
            params,
            callback_manager=self.callback_manager,
            **kwargs,
        )
E       TypeError: ParameterizedModelParser.run() got an unexpected keyword argument 'callback_manager'

src/aiconfig/Config.py:266: TypeError
--------------------------------------------------------------------------------------------------------------------- Captured log call ----------------------------------------------------------------------------------------------------------------------
INFO     my-logger:callback.py:140 Callback called. event
: name='on_run_start' file='aiconfig.Config' data={'prompt_name': 'prompt1', 'params': {}, 'options': None, 'kwargs': {}} ts_ns=1704483078795626702
====================================================================================================================== warnings summary ======================================================================================================================
../../../.conda/envs/aiconfig/lib/python3.12/site-packages/pydantic/_internal/_config.py:267
../../../.conda/envs/aiconfig/lib/python3.12/site-packages/pydantic/_internal/_config.py:267
../../../.conda/envs/aiconfig/lib/python3.12/site-packages/pydantic/_internal/_config.py:267
../../../.conda/envs/aiconfig/lib/python3.12/site-packages/pydantic/_internal/_config.py:267
../../../.conda/envs/aiconfig/lib/python3.12/site-packages/pydantic/_internal/_config.py:267
../../../.conda/envs/aiconfig/lib/python3.12/site-packages/pydantic/_internal/_config.py:267
  /home/sp/.conda/envs/aiconfig/lib/python3.12/site-packages/pydantic/_internal/_config.py:267: PydanticDeprecatedSince20: Support for class-based `config` is deprecated, use ConfigDict instead. Deprecated in Pydantic V2.0 to be removed in V3.0. See Pydantic V2 Migration Guide at https://errors.pydantic.dev/2.4/migration/
    warnings.warn(DEPRECATION_MESSAGE, DeprecationWarning)

../../../.conda/envs/aiconfig/lib/python3.12/site-packages/pydantic/_internal/_fields.py:128
  /home/sp/.conda/envs/aiconfig/lib/python3.12/site-packages/pydantic/_internal/_fields.py:128: UserWarning: Field "model_parsers" has conflict with protected namespace "model_".
  
  You may be able to resolve this warning by setting `model_config['protected_namespaces'] = ()`.
    warnings.warn(

<frozen importlib._bootstrap>:488
  <frozen importlib._bootstrap>:488: DeprecationWarning: Type google._upb._message.MessageMapContainer uses PyType_Spec with a metaclass that has custom tp_new. This is deprecated and will no longer be allowed in Python 3.14.

<frozen importlib._bootstrap>:488
  <frozen importlib._bootstrap>:488: DeprecationWarning: Type google._upb._message.ScalarMapContainer uses PyType_Spec with a metaclass that has custom tp_new. This is deprecated and will no longer be allowed in Python 3.14.

-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
================================================================================================================== short test summary info ===================================================================================================================
FAILED tests/parsers/test_openai_util.py::test_get_output_text - TypeError: ParameterizedModelParser.run() got an unexpected keyword argument 'callback_manager'
=============================================================================================================== 1 failed, 9 warnings in 0.53s ================================================================================================================

@sp6370
Contributor

sp6370 commented Jan 5, 2024

How should I go about it?

@rossdanlm
Contributor

rossdanlm commented Jan 11, 2024

Thanks for the investigation, Sudhanshu. We should:

  1. Check exactly what callback_manager does and how it's being used. If it's not referenced in any of the callsites, we can simply avoid passing it into the model_provider.run() callsite.
  2. If callback_manager is indeed needed, we should make it an explicit param (just like run_with_dependencies) which can be defaulted to None if not provided as an argument from callsites.

Also, a heads up on an issue to fix: for now I temporarily had to add kwargs to every callsite, but this isn't the best solution; ideally we should be very explicit about every argument that we pass in: #882

rossdanlm pushed a commit that referenced this issue Jan 11, 2024
This is an extension of #881.

Once this is removed, only the `run_with_dependencies` kwarg should be used, and this will unblock Sudhanshu from completing #664.

@rossdanlm
Contributor

Hey, sorry, just a heads up: I went ahead and removed callback_manager since it's never being used: #886

rossdanlm added a commit that referenced this issue Jan 11, 2024
Delete unused callback_manager argument from run commands

This is an extension of #881.

Once this is removed, only the `run_with_dependencies` kwarg should be used, and this will unblock Sudhanshu from completing #664.

---
Stack created with [Sapling](https://sapling-scm.com). Best reviewed
with
[ReviewStack](https://reviewstack.dev/lastmile-ai/aiconfig/pull/886).
* #887
* __->__ #886
@sp6370
Contributor

sp6370 commented Jan 11, 2024

Thanks! @rossdanlm
In that case I will go ahead and raise a PR by tomorrow with a test.

rossdanlm added a commit that referenced this issue Mar 27, 2024
…ndencies parameter (#923)

Context: Currently the `ParameterizedModelParser` run method uses `kwargs` to decide whether to execute a prompt with its dependencies or not. This pull request eliminates the usage of `kwargs` and introduces a new optional parameter for the `run` method to pass information about running the prompt with dependencies.

Closes: #664 

Test Plan:
- [x] Add a new test that invokes `run` with `run_with_dependencies=True` for a prompt which requires output from another prompt (a rough sketch is included below).
- [x] Ensure that all the prompts in the Hugging Face cookbook work.
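For reference, a minimal sketch of what such a test might look like (the config path, prompt names, and chained-prompt config are hypothetical; the imports, the set_temporary_env_vars fixture, and the mock_openai_chat_completion helper are assumed to be the same ones used by the existing test_get_output_text shown earlier in this thread):

```python
from unittest.mock import patch

import openai
import pytest

from aiconfig import AIConfigRuntime


@pytest.mark.asyncio
async def test_run_with_dependencies(set_temporary_env_vars):
    # Hypothetical config in which "prompt2" consumes the output of "prompt1".
    with patch.object(openai.chat.completions, "create", side_effect=mock_openai_chat_completion):
        aiconfig = AIConfigRuntime.load("../aiconfigs/chained_prompts_config.json")

        # The explicit flag replaces the old kwargs-based switch: prompt1
        # should be executed first, then prompt2.
        await aiconfig.run("prompt2", {}, run_with_dependencies=True)

        # The dependent prompt should now have output text available.
        assert aiconfig.get_output_text("prompt2")
```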