[python] Add run_with_dependencies to API #664
Comments
See the comment thread in #661 (comment) for details. There are two parts to this:
Should this be part of inference options instead? Also, note that this requires the same change on the TypeScript side as well to remain consistent.
I don't think so. Conceptually, inference options (things like setting streaming to true) are more like global settings you want to apply to your entire session, not just one single action. I don't think automatically requiring every prompt to re-run all of its dependencies is desirable.
I would like to pick this up! 🚀
cc @sp6370 please follow the instructions for this in #664 (comment). Right now this function exists in TypeScript at aiconfig/typescript/lib/config.ts (line 456 at e47ce1f). We should make `run_with_dependencies` an explicit method arg in the `run()` method.
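A rough sketch of what that could look like (the existing parameters mirror the `run()` signature that appears in the traceback below; the helper call is hypothetical):

```python
from typing import Optional

class AIConfigRuntime:
    # Sketch only: promote run_with_dependencies from an arbitrary **kwargs
    # entry to an explicit, defaulted parameter on run().
    async def run(
        self,
        prompt_name: str,
        params: Optional[dict] = None,
        options: Optional["InferenceOptions"] = None,
        run_with_dependencies: bool = False,  # new explicit argument
        **kwargs,
    ):
        if run_with_dependencies:
            # Hypothetical helper that re-runs upstream prompts first.
            return await self.run_with_dependencies(prompt_name, params, options, **kwargs)
        # ...existing single-prompt execution path...
```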
@rholinshead Clarification: do both the Python and TypeScript implementations need to be updated?
@sp6370 ideally both the Python and TypeScript implementations will match, but it's ok to update the Python implementation first and we can create an issue for the TypeScript side to match the Python implementation afterwards, if you are primarily interested in the Python side. @rossdanlm's comment above outlines the Python-side changes, and that's all that should be needed to close out this issue.
@rossdanlm I think we might not be able to remove `**kwargs` directly. Effect of directly eliminating it, running `pytest -s -v tests/parsers/test_openai_util.py::test_get_output_text`:
==================================================================================================================== test session starts =====================================================================================================================
platform linux -- Python 3.12.0, pytest-7.4.3, pluggy-1.3.0 -- /home/sp/.conda/envs/aiconfig/bin/python
cachedir: .pytest_cache
hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase(PosixPath('/home/sp/Github/aiconfig/python/.hypothesis/examples'))
rootdir: /home/sp/Github/aiconfig/python
plugins: mock-3.12.0, anyio-4.2.0, asyncio-0.23.3, cov-4.1.0, hypothesis-6.91.0
asyncio: mode=Mode.STRICT
collected 1 item
tests/parsers/test_openai_util.py::test_get_output_text FAILED
========================================================================================================================== FAILURES ==========================================================================================================================
____________________________________________________________________________________________________________________ test_get_output_text ____________________________________________________________________________________________________________________
set_temporary_env_vars = None
    @pytest.mark.asyncio
    async def test_get_output_text(set_temporary_env_vars):
        with patch.object(openai.chat.completions, "create", side_effect=mock_openai_chat_completion):
            config_relative_path = "../aiconfigs/basic_chatgpt_query_config.json"
            config_absolute_path = get_absolute_file_path_from_relative(__file__, config_relative_path)
            aiconfig = AIConfigRuntime.load(config_absolute_path)
>           await aiconfig.run("prompt1", {})
tests/parsers/test_openai_util.py:41:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = AIConfigRuntime(name='exploring nyc through chatgpt config', schema_version='latest', metadata=ConfigMetadata(paramete...onfigs/basic_chatgpt_query_config.json', callback_manager=<aiconfig.callback.CallbackManager object at 0x7816e2351b50>)
prompt_name = 'prompt1', params = {}, options = None, kwargs = {}, event = <aiconfig.callback.CallbackEvent object at 0x7816e2351730>
prompt_data = Prompt(name='prompt1', input='Hi! Tell me 10 cool things to do in NYC.', metadata=PromptMetadata(model=ModelMetadata(name='gpt-3.5-turbo', settings={}), tags=None, parameters={}, remember_chat_context=True), outputs=[])
model_name = 'gpt-3.5-turbo', model_provider = <aiconfig.default_parsers.openai.DefaultOpenAIParser object at 0x7816e3932360>
    async def run(
        self,
        prompt_name: str,
        params: Optional[dict] = None,
        options: Optional[InferenceOptions] = None,
        **kwargs,
    ):
        """
        Executes the AI model with the resolved parameters and returns the API response.
        Args:
            parameters (dict): The resolved parameters to use for inference.
            prompt_name (str): The identifier of the prompt to be used.
        Returns:
            object: The response object returned by the AI-model's API.
        """
        event = CallbackEvent(
            "on_run_start",
            __name__,
            {
                "prompt_name": prompt_name,
                "params": params,
                "options": options,
                "kwargs": kwargs,
            },
        )
        await self.callback_manager.run_callbacks(event)
        if not params:
            params = {}
        if prompt_name not in self.prompt_index:
            raise IndexError(f"Prompt '{prompt_name}' not found in config, available prompts are:\n {list(self.prompt_index.keys())}")
        prompt_data = self.prompt_index[prompt_name]
        model_name = self.get_model_name(prompt_data)
        model_provider = AIConfigRuntime.get_model_parser(model_name)
        # Clear previous run outputs if they exist
        self.delete_output(prompt_name)
>       response = await model_provider.run(
            prompt_data,
            self,
            options,
            params,
            callback_manager=self.callback_manager,
            **kwargs,
        )
E       TypeError: ParameterizedModelParser.run() got an unexpected keyword argument 'callback_manager'
src/aiconfig/Config.py:266: TypeError
--------------------------------------------------------------------------------------------------------------------- Captured log call ----------------------------------------------------------------------------------------------------------------------
INFO my-logger:callback.py:140 Callback called. event: name='on_run_start' file='aiconfig.Config' data={'prompt_name': 'prompt1', 'params': {}, 'options': None, 'kwargs': {}} ts_ns=1704483078795626702
====================================================================================================================== warnings summary ======================================================================================================================
../../../.conda/envs/aiconfig/lib/python3.12/site-packages/pydantic/_internal/_config.py:267
../../../.conda/envs/aiconfig/lib/python3.12/site-packages/pydantic/_internal/_config.py:267
../../../.conda/envs/aiconfig/lib/python3.12/site-packages/pydantic/_internal/_config.py:267
../../../.conda/envs/aiconfig/lib/python3.12/site-packages/pydantic/_internal/_config.py:267
../../../.conda/envs/aiconfig/lib/python3.12/site-packages/pydantic/_internal/_config.py:267
../../../.conda/envs/aiconfig/lib/python3.12/site-packages/pydantic/_internal/_config.py:267
/home/sp/.conda/envs/aiconfig/lib/python3.12/site-packages/pydantic/_internal/_config.py:267: PydanticDeprecatedSince20: Support for class-based `config` is deprecated, use ConfigDict instead. Deprecated in Pydantic V2.0 to be removed in V3.0. See Pydantic V2 Migration Guide at https://errors.pydantic.dev/2.4/migration/
warnings.warn(DEPRECATION_MESSAGE, DeprecationWarning)
../../../.conda/envs/aiconfig/lib/python3.12/site-packages/pydantic/_internal/_fields.py:128
/home/sp/.conda/envs/aiconfig/lib/python3.12/site-packages/pydantic/_internal/_fields.py:128: UserWarning: Field "model_parsers" has conflict with protected namespace "model_".
You may be able to resolve this warning by setting `model_config['protected_namespaces'] = ()`.
warnings.warn(
<frozen importlib._bootstrap>:488
<frozen importlib._bootstrap>:488: DeprecationWarning: Type google._upb._message.MessageMapContainer uses PyType_Spec with a metaclass that has custom tp_new. This is deprecated and will no longer be allowed in Python 3.14.
<frozen importlib._bootstrap>:488
<frozen importlib._bootstrap>:488: DeprecationWarning: Type google._upb._message.ScalarMapContainer uses PyType_Spec with a metaclass that has custom tp_new. This is deprecated and will no longer be allowed in Python 3.14.
-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
================================================================================================================== short test summary info ===================================================================================================================
FAILED tests/parsers/test_openai_util.py::test_get_output_text - TypeError: ParameterizedModelParser.run() got an unexpected keyword argument 'callback_manager'
=============================================================================================================== 1 failed, 9 warnings in 0.53s ================================================================================================================
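So the call in `Config.run` still forwards `callback_manager=self.callback_manager`, and with `**kwargs` gone from `ParameterizedModelParser.run()` that keyword has nowhere to land. A stripped-down reproduction of the same failure (class and call shapes taken from the traceback above, otherwise illustrative):

```python
import asyncio

class ParameterizedModelParser:
    # Signature with **kwargs removed: callback_manager is no longer accepted.
    async def run(self, prompt, aiconfig, options=None, parameters=None):
        return "ok"

async def main():
    parser = ParameterizedModelParser()
    # Config.run (src/aiconfig/Config.py:266 in the traceback) still calls:
    await parser.run("prompt1", None, None, {}, callback_manager=object())

asyncio.run(main())
# TypeError: ParameterizedModelParser.run() got an unexpected keyword argument 'callback_manager'
```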
How should I go about it?
Thanks for the investigation, Sudhanshu, we should
Also giving you a heads up: to fix an issue, for now I temporarily had to add
Hey, sorry, just a heads up: I went ahead and just removed the `callback_manager` argument.
Delete unused callback_manager argument from run commands

This is an extension of #881. Now once this is removed, only the `run_with_dependencies` kwarg should be used, and this will unblock Sudhanshu from completing #664.

---

Stack created with [Sapling](https://sapling-scm.com). Best reviewed with [ReviewStack](https://reviewstack.dev/lastmile-ai/aiconfig/pull/886).
* #887
* __->__ #886
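A sketch of what survives at the parser boundary after this change (shapes based on the traceback earlier in the thread, otherwise illustrative):

```python
import asyncio

class ParameterizedModelParser:
    # **kwargs stays for now, but after #886 the caller no longer sends
    # callback_manager, so only ad-hoc flags like run_with_dependencies
    # travel through it (until #664 makes that flag explicit).
    async def run(self, prompt, aiconfig, options=None, parameters=None, **kwargs):
        return kwargs  # echo the surviving kwargs for illustration

async def main():
    parser = ParameterizedModelParser()
    # The call site from the traceback, minus callback_manager:
    leftover = await parser.run("prompt1", None, None, {}, run_with_dependencies=True)
    print(leftover)  # {'run_with_dependencies': True}

asyncio.run(main())
```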
Thanks! @rossdanlm
…ndencies parameter (#923)

Context: Currently the `ParameterizedModelParser` run method uses `kwargs` to decide whether to execute a prompt with dependencies or not. This pull request eliminates that usage of `kwargs` and introduces a new optional parameter for the `run` method to pass information about running the prompt with dependencies.

Closes: #664

Test Plan:
- [x] Add a new test that invokes `run` with `run_with_dependencies=True` for a prompt which requires output from another prompt.
- [x] Ensure that all the prompts in the Hugging Face cookbook work.
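A sketch of the kind of test that plan describes (the config path and prompt names are hypothetical; it assumes `prompt2` consumes `prompt1`'s output):

```python
import pytest
from aiconfig import AIConfigRuntime

@pytest.mark.asyncio
async def test_run_with_dependencies():
    # Hypothetical config in which prompt2's input references prompt1's output.
    aiconfig = AIConfigRuntime.load("aiconfigs/dependent_prompts_config.json")
    # The new explicit parameter re-runs prompt1 before executing prompt2.
    outputs = await aiconfig.run("prompt2", params={}, run_with_dependencies=True)
    assert outputs  # prompt2 produced output built on prompt1's fresh result
```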
In TypeScript, the AIConfig API has `runWithDependencies`. In Python, `run_with_dependencies` is supported by passing kwargs to the `run` method. Instead of supporting this arbitrary kwargs data, we should add the `run_with_dependencies` implementation as a method on the config to be consistent with TypeScript.
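For illustration, the kwargs-based call that exists today versus the first-class form being requested (the config file name is hypothetical, and the dedicated method in the comment is a hypothetical Python analogue of `runWithDependencies`):

```python
import asyncio
from aiconfig import AIConfigRuntime

async def main():
    aiconfig = AIConfigRuntime.load("my_config.json")  # hypothetical config file

    # Today: the flag only works because run() forwards arbitrary **kwargs;
    # nothing in the signature documents or type-checks it.
    await aiconfig.run("prompt2", {}, run_with_dependencies=True)

    # What the issue asks for: a first-class equivalent of TypeScript's
    # runWithDependencies, e.g. a dedicated method (hypothetical name):
    # await aiconfig.run_with_dependencies("prompt2", {})

asyncio.run(main())
```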