-
I am currently trying to interface with a PyTorch model, AIMNet2 (https://github.com/isayevlab/AIMNet2/). I am using DJL version 0.24.0 with the following code:
When running this, I see:
Are there any clues as to what I am doing wrong? Many thanks, Mark
-
@mjw99
-
Dear Frank,
Thank you for looking at this. The following protocol will install AIMNet2 using Mamba and carry out an optimisation on a molecule of water:
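Roughly along these lines on the Python side (a sketch rather than the exact script; the AIMNet2Calculator class name, its constructor arguments, and the model filename are assumptions taken from the repo's calculators/aimnet2ase.py):

```python
# Sketch only: AIMNet2Calculator and its constructor signature are assumptions
# based on calculators/aimnet2ase.py in the AIMNet2 repository.
import torch
from ase.build import molecule
from ase.optimize import BFGS
from aimnet2ase import AIMNet2Calculator

model = torch.jit.load("aimnet2_wb97m-d3_ens.jpt", map_location="cpu")
atoms = molecule("H2O")                          # a single water molecule
atoms.calc = AIMNet2Calculator(model, charge=0)  # attach the ML potential
BFGS(atoms).run(fmax=0.01)                       # geometry optimisation
print(atoms.get_potential_energy())
```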
This should yield:
As an aside, the code responsible for packaging up the data for the model can be found here: https://github.com/isayevlab/AIMNet2/blob/3ce06f4002eb35a5bf4a2b9daf8b4f09be9aabb8/calculators/aimnet2ase.py#L34
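Roughly, that code builds the model input as a dict of tensors, along these lines (paraphrased; the exact variable names, shapes, and dtypes in the linked file may differ):

```python
# Paraphrased sketch of the input packaging in calculators/aimnet2ase.py;
# the key names (coord, numbers, charge) and shapes are assumptions.
import torch

def make_input(atoms, charge=0, device="cpu"):
    coord = torch.as_tensor(atoms.positions, dtype=torch.float, device=device).unsqueeze(0)
    numbers = torch.as_tensor(atoms.numbers, dtype=torch.long, device=device).unsqueeze(0)
    charge_t = torch.as_tensor([charge], dtype=torch.float, device=device)
    return dict(coord=coord, numbers=numbers, charge=charge_t)
```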
-
Also seeing this with DJL 0.25.0
-
Also, I tried to force the use of PyTorch 2.0 via:
But again, no joy.
-
An updated minimal standalone Python example that can be used with just PyTorch and the model's jpt file:
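Something along these lines (a sketch of the idea; the input dict keys, tensor shapes, and the structure of the output are assumptions based on the repo's ASE calculator):

```python
# Sketch of a minimal standalone example; dict keys and shapes are assumptions.
import torch

model = torch.jit.load("aimnet2_wb97m-d3_ens.jpt", map_location="cpu")
model.eval()

# A single water molecule (coordinates in Angstrom), with a batch dimension of 1.
coord = torch.tensor([[[0.0000,  0.0000,  0.1173],
                       [0.0000,  0.7572, -0.4692],
                       [0.0000, -0.7572, -0.4692]]], dtype=torch.float)
numbers = torch.tensor([[8, 1, 1]], dtype=torch.long)
charge = torch.tensor([0.0], dtype=torch.float)

out = model(dict(coord=coord, numbers=numbers, charge=charge))
print(out)  # expected to contain at least an energy entry
```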
-
I tried your model (https://raw.githubusercontent.com/isayevlab/AIMNet2/main/models/aimnet2_wb97m-d3_ens.jpt), and I got a different error:
If I turn on inference mode with Python code, I get the same error:
It looks like when you trace your model, it is run in training mode. In DJL, we assume the model is run for inference, and autograd is turned off. Can you turn off training mode when you trace the model?
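For example, something like this might be enough (a sketch; whether freezing is appropriate for this particular model is an assumption, since it may rely on autograd for forces):

```python
# Sketch: reload the TorchScript file, switch to eval mode, and re-save it
# (optionally frozen) so that DJL's no-grad inference path can run it.
import torch

model = torch.jit.load("aimnet2_wb97m-d3_ens.jpt", map_location="cpu")
model.eval()                      # turn off training mode

frozen = torch.jit.freeze(model)  # bakes eval-mode behaviour into the graph
torch.jit.save(frozen, "aimnet2_wb97m-d3_ens_eval.jpt")
```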
Dear Frankfliu,
Apologies for the delay in replying to this.
I am not sure that everything needed to re-trace the model is available in the AIMNet2 repo, so I am not sure how this can be done. But thank you for looking at this; it is appreciated.