test: reduce code redundancy in openai based tests #54
Conversation
Force-pushed 667296c to ca489fc
@michaelneale I scrubbed out the model change. This branch helps me in testing azure in another PR. PTAL?
I think we need to pull in a change to make it more deterministic.
Signed-off-by: Adrian Cole <[email protected]>
Force-pushed ca489fc to f8044b7
PTAL. I also made a small internal change which allows me, in a subsequent PR, to re-use the openai impl for azure.
Brought in some generic changes from the dependent PR #60.
Force-pushed fc8ab2c to c5dc482
Force-pushed 2f972bd to 095ecb3
Looks OK to me. @baxen probably needs to bless this since it touches the openai provider, just as extra eyeballs.
```python
print("Completion content from OpenAI:", reply[0].content)
```
```python
@pytest.mark.vcr()
```
While these tests do look redundant to the ones in the parent dir, what's nice is that the non_integration ones use recorded requests, so we can see exactly what should be happening. This has been helpful in figuring out what should work in azure openai, because it works in normal openai but doesn't in azure.
LGTM
```python
client.get("")
# The OpenAI API is defined after "v1/", so we need to join it here.
client.base_url = client.base_url.join("v1/")
```
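The trailing slashes in that join matter. Here is a stdlib sketch of the same resolution rules, assuming `client.base_url.join` follows standard RFC 3986 relative-reference resolution (as httpx's `URL.join` does):

```python
from urllib.parse import urljoin

base = "http://localhost:11434/"
# "v1/" is a relative reference, so it nests under the base path.
api_base = urljoin(base, "v1/")
print(api_base)  # http://localhost:11434/v1/

# The trailing slash on "v1/" is what lets later joins nest rather than replace:
print(urljoin(api_base, "chat/completions"))  # http://localhost:11434/v1/chat/completions
print(urljoin("http://localhost:11434/v1", "chat/completions"))  # no slash: "v1" is replaced
```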
Nit: I'd put this above on line 37.
I'll do a quick fix in the 0.9.3 release so we can get this in.
Will do on the next change, thanks!
Oh, it is subtle: the health check is ollama, so we need to test the base of ollama before appending "v1". It would be better if we didn't need a health check imho, especially as we don't do one in any other provider. Let me know your thoughts on that.
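The ordering constraint described above can be sketched in a few lines; `connect` and its stubbed health check are hypothetical, not the provider's real code:

```python
from urllib.parse import urljoin


def connect(base: str, health_check=lambda url: True) -> str:
    """Probe the bare ollama base first, then derive the OpenAI-style endpoint."""
    # ollama answers its health check at the root, not under /v1, so the
    # check has to happen before the "v1/" join below.
    if not health_check(base):
        raise ConnectionError(f"no server responding at {base}")
    return urljoin(base, "v1/")


print(connect("http://localhost:11434/"))  # http://localhost:11434/v1/
```

Reversing the two steps would probe a path ollama never serves, which is the subtlety the comment is pointing at.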
Found a way that may work fine in #60.
This reduces fixture redundancy in openai by sharing functions. It also clarifies the openai base endpoint for future consolidation with azure.

Note: in order to run `just integration -k openai` without also running ollama, I needed to get rid of the openai subdir in the tests (as it was before I started).
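The fixture sharing could look something like the following conftest sketch; the names (`default_openai_env`, the dummy key) are illustrative assumptions, not necessarily the PR's actual code:

```python
# conftest.py (sketch): env setup shared by the openai-based test modules.
import os

import pytest

OPENAI_API_KEY = "test_openai_api_key"  # dummy key; replayed tests never hit the API


@pytest.fixture
def default_openai_env(monkeypatch):
    """Integration runs use the real key; recorded runs fall back to a dummy."""
    if "OPENAI_API_KEY" not in os.environ:
        monkeypatch.setenv("OPENAI_API_KEY", OPENAI_API_KEY)
```

Keeping this in one conftest lets both the integration and cassette-backed variants share setup instead of each subdir repeating it.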