Updated ParameterizedModelParser run method for handling run_with_dependencies parameter #923
Conversation
Hi @rossdanlm! Could I get some feedback on the test for the PR?
Thanks for the PR! Back to your queue for now
python/tests/test_run_config.py (Outdated)

```diff
@@ -48,3 +50,23 @@ async def test_load_parametrized_data_config(set_temporary_env_vars):
         },
     ],
 }
+
+@pytest.mark.xfail
```
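For readers unfamiliar with the marker above: `pytest.mark.xfail` registers a test as expected to fail, so pytest reports it as XFAIL rather than a failure. A minimal illustration (the test body and reason string here are invented):

```python
import pytest

@pytest.mark.xfail(reason="mocking strategy for the OpenAI API needs rework")
def test_known_gap():
    # Expected to fail until the new mocking approach lands;
    # pytest reports this as XFAIL instead of a test failure.
    assert False
```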
Why are we adding a test we expect to fail?
As mentioned by @Ankush-lastmile here, there have been changes to the OpenAI model parser, so a new way of mocking the OpenAI API is required.
The newly introduced test works fine with the earlier mocking approach. Unless a new mocking strategy is implemented, we would have to use the workaround used here as well: https://github.com/sp6370/aiconfig/blob/9617bd9d54e3bff3bb7eef2889af9f30adf39602/python/tests/test_run_config.py#L11.
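As a rough sketch of the kind of mocking workaround referenced above (the class and function names here are stand-ins, not the actual aiconfig test helpers), the idea is to patch the API-calling attribute so the test never hits the network:

```python
from unittest.mock import patch

class FakeOpenAIClient:
    """Stand-in for the real client used by the model parser (an assumption)."""

    def create(self, **kwargs):
        raise RuntimeError("network call; should be patched out in tests")

client = FakeOpenAIClient()

def run_prompt(prompt: str) -> str:
    # Pretend this is the model parser invoking the OpenAI API.
    response = client.create(messages=[{"role": "user", "content": prompt}])
    return response["choices"][0]["message"]["content"]

def mock_create(**kwargs):
    # Canned response shaped like an OpenAI chat completion.
    return {"choices": [{"message": {"role": "assistant", "content": "mocked"}}]}

# Patch the method the code under test actually calls:
with patch.object(FakeOpenAIClient, "create", side_effect=mock_create):
    result = run_prompt("plan a trip")
```

Patching the attribute the code under test looks up (rather than the import site of the test file) is what keeps this robust when the parser internals move around.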
I think it's good to add, we can edit it later
I have re-enabled the test.
I think this is good, thanks for the fix, pls ping me when this lands!
Actually wait sorry, can you pls also remove kwargs from

```python
**kwargs,  # TODO: Remove this, just a hack for now to ensure that it doesn't break
```
See more comments and details in #882
Hi @rossdanlm! This is the GitHub issue for the missing ffmpeg error.
Pls ensure that when testing it's linked to the local package
…encies instead of **kwargs for run
…cies for run method
```python
for input_params, response in response_map_list:
    if kwargs == input_params:
        return response
raise Exception("Unexpected arguments:\n {}".format(kwargs))
```
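For context, the loop above implements a simple response map: the mock compares the keyword arguments it received against recorded inputs and returns the matching canned response, raising on anything unexpected. A self-contained version of the pattern (the example data is invented):

```python
# Map recorded call arguments to canned responses (example data is invented).
response_map_list = [
    ({"prompt": "What activities are there?"}, {"text": "1. Visit a museum"}),
]

def mock_openai(**kwargs):
    # Return the canned response whose recorded inputs match this call exactly.
    for input_params, response in response_map_list:
        if kwargs == input_params:
            return response
    raise Exception("Unexpected arguments:\n {}".format(kwargs))

result = mock_openai(prompt="What activities are there?")
```

Raising on unmatched arguments makes the test fail loudly if the code under test ever calls the API with a payload the fixture didn't anticipate.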
Probably should remove this now, to show that it works now that kwargs is removed
Generally lgtm, just make sure that we've checked all the default model parsers too:
aiconfig/python/src/aiconfig/default_parsers/.*
Cross-checked the default model parsers.
python/tests/test_run_config.py (Outdated)

```python
new=mock_openai,
):
    config_relative_path = (
        "aiconfigs/tarvel_gpt_prompts_with_dependency.json"
```
nit: travel typo
Nice! Thanks for the fixes! I'll follow up later with extensions + publishing requirements!
This is from a while ago, dismissing to land this
@rossdancraig I'll share the info on the test plan later today.
Tests:
Thank you for the super detailed test plans!!!
Context: Currently, the `ParameterizedModelParser` `run` method uses `kwargs` to decide whether or not to execute a prompt with its dependencies. This pull request eliminates the usage of `kwargs` and introduces a new optional parameter for the `run` method to pass information about running the prompt with dependencies.

Closes: #664
Test Plan: Call `run` with `run_with_dependencies=True` for a prompt which requires output from another prompt.
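The dependency scenario in the test plan can be pictured like this (the prompt names follow the travel example config mentioned earlier, but this runner is a toy, not the aiconfig API):

```python
import asyncio

# Toy dependency graph: "gen_itinerary" consumes "get_activities" output
# (names are illustrative, borrowed from the travel example).
DEPENDENCIES = {"gen_itinerary": ["get_activities"]}
outputs = {}

async def run(prompt_name, run_with_dependencies=False):
    # When asked, resolve and execute upstream prompts first.
    if run_with_dependencies:
        for dep in DEPENDENCIES.get(prompt_name, []):
            await run(dep, run_with_dependencies=True)
    outputs[prompt_name] = "output of " + prompt_name
    return outputs[prompt_name]

result = asyncio.run(run("gen_itinerary", run_with_dependencies=True))
```

With `run_with_dependencies=False` only the named prompt would execute, which is exactly the behavior difference the test plan exercises.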