NoBinaryFoundError for Windows in Electron when Upgrading from 3.0.0-beta44 -> 3.2.0 #381
@bitterspeed Can you please attach the console logs from the main process of your app, along with the output of `npx --yes node-llama-cpp inspect gpu`?
Running the app unbundled works fine with 3.2.0. The setup below works great in 3.0.0-beta44 for Mac + Win, but fails for Win (not Mac) in 3.2.0 after packaging.

Output of `npx --yes node-llama-cpp inspect gpu`: (logs attached)

Electron app logs when calling `getLlama` inside the main process: (logs attached)
You haven't experienced this issue before because of the code in lines 214 to 226 in 6405ee9.

The reason you get this error seems to be that the prebuilt binaries under the `@node-llama-cpp` optional dependencies aren't available in your packaged app. I also recommend taking a look at the relevant Electron packaging documentation.
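For context, the prebuilt binaries mentioned in this thread ship as platform-specific packages under the `@node-llama-cpp` scope (e.g. `win-x64`, `win-x64-cuda`). Here is an illustrative sketch of that naming scheme; the helper name is made up and this is not the library's actual resolution code:

```javascript
// Illustrative only: maps a platform/arch/GPU combination onto the naming scheme
// of the @node-llama-cpp prebuilt-binary packages seen in this thread.
// Not node-llama-cpp's actual resolution logic.
function prebuiltPackageName(platform, arch, gpu) {
    const platformName = platform === "win32"
        ? "win"
        : platform === "darwin"
            ? "mac"
            : "linux";
    const gpuSuffix = gpu ? `-${gpu}` : "";
    return `@node-llama-cpp/${platformName}-${arch}${gpuSuffix}`;
}

console.log(prebuiltPackageName("win32", "x64", "cuda")); // @node-llama-cpp/win-x64-cuda
console.log(prebuiltPackageName("win32", "x64", null));   // @node-llama-cpp/win-x64
```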
Thanks, this is helpful, but shouldn't the optional dependencies be installed automatically? On Windows, I had to explicitly install those dependencies for them to be bundled into the app.

EDIT: I had a nohoist setting in package.json, which was causing the issue. All good now, thanks!
npm attempts to flatten the modules in the `node_modules` directory, so the optional dependencies should end up there when installing.

To check whether your machine is compatible with the prebuilt binaries, run these commands:

```shell
npx --no node-llama-cpp source clear
npx --no node-llama-cpp chat --prompt 'Hi there!'
```

And then select some model. If it attempts to build from source, you'll see logs of the build progress, which means the prebuilt binaries failed the test and aren't compatible with your machine. You shouldn't manually install anything under the `@node-llama-cpp` scope yourself.
I appreciate the explanation. I am using npm 10.2.4.

A cleaner solution to the above that worked for me: since I'm using Electron Forge, I had to manually copy over the `node_modules/@node-llama-cpp` optional dependency for the Electron app to work (no more NoBinaryFoundError!).
With that said, I cannot get CUDA to work on Windows even when it's properly bundled in the ASAR unpacked directory. That is, I am able to use the win-x64 binaries, but not the win-x64-cuda ones.
Try to inspect the resources directory of the prod Electron app to ensure the binaries are actually there. You can also download a build of the example Electron app template from the latest release of node-llama-cpp and compare it with your app. To extract the packed app for inspection, run:

```shell
npx asar extract app.asar app.content
```
Thanks for the tips. I downloaded the Electron template app, installed node-llama-cpp-electron-example.Windows.3.2.0.x64.exe, and compared the template app's packaged contents with my own Electron Forge app's output.
I think the issue you're facing is related to how the asar is packed in your build.

Also, since you mentioned that it works in dev mode but not in a packaged app, it's possible that the CUDA version installed on your machine is too old or incompatible with the prebuilt CUDA binaries, so it builds from source when running in dev mode, which isn't possible in the packaged app.
Even though I manually copy the @node-llama-cpp packages, this happens during the packaging step (line 159 is where all the other node_modules are copied). This is the strange part for me: it works with win-x64 but not win-x64-cuda.
Yes, running asar extract with the command above (outputting to app.content) on my app does show the win-x64-cuda directory in there. My CUDA version is attached. There are no local build files inside the dev directory.
I think you may not have marked the binaries modules to be unpacked from the asar.

To debug, try to import the modules of the prebuilt binaries directly and see whether the import is successful in the final build:

```javascript
try {
    const importRes = await import("@node-llama-cpp/win-x64-cuda");
    const {binsDir, packageVersion} = importRes.getBinsDir();
    console.log("CUDA binaries module binsDir", binsDir);
    console.log("CUDA binaries module version", packageVersion);
    // if the version doesn't match the version of `node-llama-cpp`, then this module won't be used
} catch(err) {
    console.log("CUDA binaries module is not found", err);
}
```

This code should help us understand whether the prebuilt binaries are accessible to be imported in the final build.
This indeed seems strange; I'd love to find out why it happens so I can fix it, or document better how to fix it.
Including this code causes an error when running the packaged build: (error attached)
Can you please create a minimal reproducible example repo that I can use to poke around?

The code snippet you included in your build is not supposed to fail the build, so since it did, it means there's some transpilation happening to your code that may be interfering here.
Greatly appreciate the help. Here's the repo (link attached). To create this repo, I ran the commands attached.

Using this repo, CUDA works great in Windows development. Mac works great in prod + development. In Windows prod, it says it's missing some dependencies (from node-llama-cpp).

Perhaps if I individually copied each of node-llama-cpp's dependencies it would work. Note this error message shows different behavior from what I posted above.
Issue description
NoBinaryFoundError for Windows when Upgrading from 3.0.0-beta44 -> 3.2.0
Expected Behavior
I'd expect the Windows x64 prebuilt binary to be available for use so users do not have to build locally.
Actual Behavior
A NoBinaryFoundError is thrown when calling `getLlama`.
Steps to reproduce
Btw, the prebuilt binary was found and worked great in 3.0.0-beta44.
My Environment
node-llama-cpp version

Additional Context
No response
Relevant Features Used
Are you willing to resolve this issue by submitting a Pull Request?
Yes, I have the time, but I don't know how to start. I would need guidance.