Broken OpenAI LLM implementation for node version 18 #1010
Comments
I've been struggling with that one, thanks for the issue report.
This solves an issue described here: langchain-ai#1010
@DarylRodrigo
@DarylRodrigo @JeremyFabrikapp I just ran the exact code snippet above using node
This solves an issue described here: #1010
@nfcampos it seems like the issue doesn't occur on every 18.x version; I've tried the following and so far found only one incompatibility:
I'm using NVM.
Using Node v18.16.0 and still getting the same issue.
@dqbd another one to look into, I cannot reproduce this.
@jacobcwright can you confirm it's working on your side with Node 20.x?
I'm using nvm.
The issue seems to be gone in LangchainJS 0.0.71 and later, where the fetch adapter is not used when Node.js is detected (#1144), and the issue also goes away when using other Node.js 18.x versions. Consider upgrading either LangchainJS or Node.js to fix this issue. Closing this issue for now. Note: if the issue occurs again, it might make sense to patch the fetch adapter instead.
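For context on the diagnosis above (this note is editorial, not from the thread): Node.js 18 is the first release line to ship a global `fetch` (via undici), which is why an HTTP adapter written for older runtimes can misbehave there. A quick sanity check:

```typescript
// On Node >= 18, fetch is available globally without any import.
// Libraries that still install their own fetch/XHR adapter on these
// runtimes can re-encode the JSON request body and trigger the 400 above.
console.log(typeof globalThis.fetch); // "function" on Node >= 18
console.log(process.version);
```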
Thanks @dqbd for the coverage! |
Issue
When running the vanilla OpenAI LLM function, it returns a 400 on Node.js version 18.
To reproduce
The following code was used to create this error.
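The original snippet did not survive extraction; the following is a hypothetical reconstruction of a minimal call, assuming the LangChainJS 0.0.x API of the time and an `OPENAI_API_KEY` in the environment (model name and prompt are illustrative):

```typescript
// Hypothetical reconstruction of the reporter's repro -- not verbatim.
// Requires `npm install langchain` and OPENAI_API_KEY set.
import { OpenAI } from "langchain/llms/openai";

const model = new OpenAI({ modelName: "text-davinci-003", temperature: 0 });

// On affected Node 18.x versions this threw with an HTTP 400,
// even though modelName is set on the client.
const res = await model.call("Say hello");
console.log(res);
```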
Error from the response body
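The response body itself was lost in extraction; a 400 from the OpenAI API in this situation typically has the following shape (reconstructed from the description below, not the reporter's verbatim output):

```json
{
  "error": {
    "message": "you must provide a model parameter",
    "type": "invalid_request_error",
    "param": null,
    "code": null
  }
}
```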
As seen, even though a model parameter is provided, the API still returns an error asking for one.
Additional notes
This issue is not present in Node v19 and v20.