Roadmap and ideas for Molly.jl development #2
To do: add a module for calculating free energy via alchemical change ;) |
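(Editor's note: a minimal sketch of what such a module could build on, the Zwanzig free energy perturbation estimator. This is not Molly API; the function name and the default kT value, kJ/mol at 298 K, are made up for illustration.)

```julia
using Statistics

# Zwanzig FEP estimator: free energy difference A -> B from the energies of
# both states evaluated on configurations sampled in state A.
function fep_estimate(U_A::Vector{Float64}, U_B::Vector{Float64}; kT::Float64=2.479)
    ΔU = U_B .- U_A
    return -kT * log(mean(exp.(-ΔU ./ kT)))
end
```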
"Abstracting the forcefield from the integration to allow different forcefields and integrators to be combined" and "Allow definition of custom forcefields" are now done. |
It may be interesting to see whether you can either directly use https://github.com/libAtoms/JuLIP.jl as a library (also https://github.com/libAtoms/NeighbourLists.jl), or at least be inspired by some of the code therein. |
Thanks Jarvist, I have had a brief look at the code there but will dive a bit deeper. |
A while ago I wrote some Julia code for Ewald summation. It is verified for energy calculations against a NIST database.
https://www.nist.gov/mml/csd/chemical-informatics-research-group/spce-water-reference-calculations-10%C3%A5-cutoff
Would you be interested in it? I have not taken its derivative yet to calculate force. It has been on my list of things to do, but I just don't seem to have time :S
I have also added RATTLE to my code for bond constraints... it is fairly straightforward too. Once your code has Ewald and RATTLE, it will be a "full" code in my opinion. You would probably want to upgrade it to particle mesh Ewald for MD, but that is purely performance.
Braden
|
Yes, definitely. If you could open a PR when you get a chance that would be great. I understand how hard it is to find time though; I only seem to get a few hours a month on this package these days. |
There is a Julia interface to Chemfiles - building Molly on top of those types would allow easy reading and writing. |
What are your thoughts on GPU acceleration, or at least multi-core CPU support? In Python it's really easy with joblib, CuPy, JAX... I don't know about the Julia set of tools for that, but it might be worth exploring |
GPU acceleration is feasible using CuArrays.jl - it would probably involve refactoring into a style that uses broadcasting rather than for loops, but that's no bad thing anyway. Multi-core CPU support has improved greatly in the recent Julia v1.3 release so this is another avenue to explore, though I personally don't have much experience there. Another point to add here is the ability to differentiate through molecular simulations using Zygote.jl, which is an exciting use case and something that Julia is well-suited for. |
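(Editor's note: as a rough illustration of that broadcasting style, a sketch with made-up names and parameter values, not Molly code:)

```julia
# A pairwise potential written with broadcasting instead of a for loop, so the
# same function runs on a CPU Array or a GPU array without modification.
function lj_potentials(r2::AbstractVector, σ, ϵ)
    six = (σ^2 ./ r2) .^ 3           # (σ/r)^6 for every squared distance
    return 4ϵ .* (six .^ 2 .- six)   # 4ϵ[(σ/r)^12 - (σ/r)^6]
end

r2 = rand(1000) .+ 0.8
pe = lj_potentials(r2, 0.34, 0.996)

# With CuArrays.jl (these days CUDA.jl) the same code works on the GPU:
# using CUDA
# pe_gpu = lj_potentials(cu(r2), 0.34, 0.996)
```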
I was unfortunately unaware of this package until now. (Or maybe I saw it and have forgotten...) Anyhow, I wonder whether there is any chance to combine Molly and JuLIP, or at least make them compatible. With JuLIP I deliberately stayed quite close to ASE, to make sure that I can have a relatively thin interface. But this has led to some design decisions that I've since regretted. So in principle at least I'm quite open to restructuring JuLIP to make it work with Molly. Another possibility would be to make JuLIP really a library for potentials, but let Molly focus on managing configurations, MD, geometry optimisation, etc ... But maybe it is easiest to leave them separate and not worry too much about this. I'm curious though what people think. |
Regarding the neighbourlist - this is surely something that we can share. I believe my list is fairly fast, it is based on ... I've also started playing around with implementing various iterators over the neighbourlist to have a more functional style in implementing potentials. I'm less sure how well that worked. |
Thanks for commenting @cortner. Once I get some time for this package, diving into JuLIP is definitely on the list of priorities to see what can be combined. Certainly with things you have optimised, like the neighbour list, there is room for Molly to use or build on that. |
or extract shared functionality so that JuLIP and Molly become effectively frontends... |
Hi! I noticed some things in your code which could be optimized or written the "Julia way".
Kind regards, Max |
Some of this I do in JuLIP, though not yet as well as I'd like; it needs more cleanup to get proper type-agnostic code, e.g. ... Another reason maybe to join forces at some point. |
Thanks for the feedback @AlfTetzlaff, I would definitely like to get some of this in. I have tried a few of the suggestions previously, for example the positions and velocities are currently mutable static vectors (sub-types of FieldVector) which should hopefully be giving some of the speedups you talk about. The types should be made parametric and it would help with GPU compatibility, though at the minute the code is not written in a GPU-friendly way so that would need to change first. I would like to use the DifferentialEquations integrators in the long-term since they make so many available, but for the proof-of-concept here I just implemented a simple velocity Verlet. I have opened a Discourse post (https://discourse.julialang.org/t/help-make-my-ideal-gas-simulation-ideally-fast/39141) with a specific MWE based on the current non-bonded force code to solicit feedback - please feel free to comment more there. |
I did a whole bunch of timing on many different possible LJ force calculations, and at the end of it, using SVector was the fastest.
Also, simple things like image separation could be done in a vectorized for loop rather than sequentially, and there was some time saving there too. Not enough to restructure your code just for that, but nonetheless, some.
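(Editor's note: to make the SVector point concrete, a sketch of an LJ pair force with the minimum image convention, assuming a cubic box of side L; the names are illustrative and this is not code from either package:)

```julia
using StaticArrays

# Minimum-image displacement in a cubic box of side L
minimum_image(dr::SVector{3,Float64}, L::Float64) = dr .- L .* round.(dr ./ L)

# Force on atom i from atom j; the force on j is the negative of this
function lj_force(ri::SVector{3,Float64}, rj::SVector{3,Float64}, σ, ϵ, L)
    dr = minimum_image(rj - ri, L)
    r2 = sum(abs2, dr)
    six = (σ^2 / r2)^3                          # (σ/r)^6
    return -(24ϵ / r2) * (2 * six^2 - six) * dr
end
```

SVectors of this size are stack-allocated, so the whole kernel runs without heap allocations, which is where the speedup comes from.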
I have put my Molecular Dynamics coding on hold and have started doing Monte Carlo (I just like MC better, and find statistical mechanics more interesting).
So far my MC code can do rigid SPC/E water. So, somewhat of a toy system as well, but it can do Wolf summation or Ewald summation for the electrostatics. There is nothing toy about Ewald. Once I have it cleaned up, I will upload it and share with you guys. The most important thing is that it uses OffsetArrays.jl, which is a wonderful package. It allows indices to take on any value, not just 1 and up. This just makes the Fourier piece nice since the lattice vectors range from -kmax:kmax.
Julia does Ewald really well. The Fourier part is bloody fast and is the "hard" part, if there is one. The real-space part is simple and similar to a bare Coulomb calculation.
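(Editor's note: to make the OffsetArrays point concrete, a sketch of the standard phase factor recursion used in the reciprocal part, with k components indexed directly over -kmax:kmax. The names are made up and a cubic box of side L is assumed; this is not the code being described above.)

```julia
using OffsetArrays

# Per-atom phase factors e^{ikx} along one axis for k in -kmax:kmax
function phase_factors(x::Vector{Float64}, L::Float64, kmax::Int)
    @assert kmax ≥ 1
    n = length(x)
    eikx = OffsetArray(zeros(ComplexF64, n, 2kmax + 1), 1:n, -kmax:kmax)
    for i in 1:n
        eikx[i, 0] = 1
        eikx[i, 1] = exp(2π * im * x[i] / L)
        for k in 2:kmax
            eikx[i, k] = eikx[i, k-1] * eikx[i, 1]   # e^{ikx} = e^{i(k-1)x} e^{ix}
        end
        for k in 1:kmax
            eikx[i, -k] = conj(eikx[i, k])           # e^{-ikx} = conj(e^{ikx})
        end
    end
    return eikx
end
```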
I am also planning on making some Ewald unit tests for energy. Force would also be nice, but that is in the future.
So, now that I am back part-time on the Julia wagon, I will try to get the Ewald contributions to you guys soon.
NOTE: --------------------------------------
Here is some Fortran code that does Ewald force routines. It will take a good day to go over what they did and figure out how to make it work, which is why I have been putting off the force calculation and stuck with the energy.
http://zeolites.cqe.northwestern.edu/Music/electrostatic/ewald.htm
The module "contains routines for those nasty Ewald summations"; a good reference for Ewald is Allen and Tildesley, Computer Simulation of Liquids, pp. 155-162.
Braden
|
Great, thanks for the input. |
Could it be that you confused µs with ms in the above graph by accident? |
It's ms but for a 1000-step simulation, so it would be µs per step. The blue dot for 100 atoms is the simulation benchmark time from the Discourse post. |
Some changes I added this weekend:
|
Cool! Have you noticed any single-threaded speedup due to SVectors? |
Yes, the Discourse thread benchmark went from 45 ms to 36 ms using SVectors. The cell neighbour list would be a big algorithmic speedup (as discussed above). |
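(Editor's note: for context, the binning step at the heart of a cell neighbour list looks something like this sketch - cubic box of side L, illustrative names, not the eventual Molly implementation:)

```julia
# Assign atom indices to cubic cells of side ≥ cutoff; neighbour candidates
# then come only from an atom's own cell and the 26 surrounding cells
# (wrapped periodically), giving ~O(N) pair finding instead of O(N^2).
function build_cells(coords::Vector{NTuple{3,Float64}}, L::Float64, cutoff::Float64)
    ncell = max(1, floor(Int, L / cutoff))   # cells per side
    side = L / ncell
    cells = Dict{NTuple{3,Int},Vector{Int}}()
    for (i, (x, y, z)) in enumerate(coords)
        key = (floor(Int, mod(x, L) / side),
               floor(Int, mod(y, L) / side),
               floor(Int, mod(z, L) / side))
        push!(get!(cells, key, Int[]), i)
    end
    return cells, ncell
end
```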
My rough feature list for a v0.1 release is:
|
May I have your thoughts on parallelization via domain decomposition in Julia? |
It's not really my area of expertise, but I guess it can be done and it looks like it could be a useful algorithm. I don't know how well that type of simulation would work in the Molly framework currently though. |
It would be an interesting proposition and a small step towards replacing LAMMPS. |
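(Editor's note: as a very rough sketch of the idea, the slab-assignment step with threads rather than MPI. The halo/ghost atom exchange, which is the actually hard part, is omitted, and all names are made up:)

```julia
# Split a box of side L into nslabs slabs along x and bucket atom indices
function slab_assignments(xs::Vector{Float64}, L::Float64, nslabs::Int)
    slabs = [Int[] for _ in 1:nslabs]
    width = L / nslabs
    for (i, x) in enumerate(xs)
        s = clamp(floor(Int, mod(x, L) / width) + 1, 1, nslabs)
        push!(slabs[s], i)
    end
    return slabs
end

# Each thread could then work on its own slab, e.g.
# Threads.@threads for s in 1:nslabs
#     compute_local_forces!(forces, slabs[s])   # hypothetical per-slab kernel
# end
```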
I thought I'd just update this issue with the state of the project. In the last few months Molly has been under heavy development and has seen many new features including Unitful.jl support, more interactions, an OpenMM XML force field reader, AtomsBase.jl integration, differentiable simulation with Zygote.jl on CPU and GPU, a new general interaction type, implicit solvation models, more simulators, energy minimisation, more analysis functions and better test coverage. See the release notes for more details. Future development directions include:
A few groups are interested in using Molly for a diverse range of simulations, so the intention is that it will stay general enough to be widely useful whilst also reaching high performance. My personal research uses Molly for differentiable simulations of proteins, so this will continue to dictate the features I work on. |
Might a native Julia (as in no-external-dependencies) .xtc reader and writer be of interest? |
That would be great. Currently we are using Chemfiles for that kind of thing but a fast native Julia approach would be welcome. It could even be its own package? |
At the moment it is just a file with a bunch of functions. It started as a proof of concept while learning Julia, showing its potential for both speed and pesky low-level tasks, with the far-future goal of assembling something resembling a bad copy of MDAnalysis or MDTraj. I'm at the moment too ignorant of the Julia packaging system, and of the git and GitHub machinery, to be able to provide it as a package. However, with some help I'd be glad to contribute. My original goal was to be able to natively read and write trajectories in the formats I use, namely GROMACS .xtc, CHARMM/NAMD .dcd, Amber .nc and Desmond .dtr. After the unexpected (at least speed-wise) success with .xtc I quickly got stranded by the NetCDF3 underlying .nc and it all went on temporary hold. The functions are pretty solid: I tested the low-level ones against the C implementation, and I was able to read and write a real .xtc file with no diff from the original and to compare the read coordinates with MDTraj's. It would be a pity not to use them, also given the effort that went into understanding and optimizing the original C code (the C implementation's slowness is obviously due to the implementation itself and not to the language). I'm writing too much; here is the thing: https://github.com/uliano/MDTools.jl I'm willing to help in my spare time, but keep in mind that at the moment I'm not proficient enough in Julia and the GitHub package machinery to be autonomous. |
I think it looks pretty good! It wouldn't be too much effort to rename your repo 'xtcreader.jl' or similar & then add the boilerplate to make it an independent package. |
It actually writes, also! |
So I did my homework and now we have a Julia package with .xtc read and write functionality: https://github.com/uliano/MDTrajectoryFiles.jl I added minimal docstrings on the exported functions and a very basic test: it reads and writes, then compares. I will eventually implement other file formats, so I chose a more generic name than XTC. |
This is the last comment for a while, I promise! Foreseeable changes in the next few weeks:
|
Using Metal.jl, I was able to swap all the CuArrays to MtlArrays for better performance on Apple M-series chips. Might be something to look into as a 'low hanging fruit.' |
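(Editor's note: the swap being mechanical makes sense when kernels are written against AbstractArray and broadcasting; a sketch of the pattern, not Molly's internals:)

```julia
# using CUDA    # ArrayType = CuArray  (NVIDIA)
# using Metal   # ArrayType = MtlArray (Apple M-series)
ArrayType = Array  # CPU fallback so this sketch runs anywhere

coords = ArrayType(rand(Float32, 3, 1000))
vels   = ArrayType(zeros(Float32, 3, 1000))
dt = 0.002f0

# Broadcasts like these compile to kernels for whichever backend is in use
vels   .+= 0.1f0 .* dt
coords .+= vels .* dt
```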
That's interesting, thanks. See #99 for work towards generic array types. |
Hi, I notice that Molly uses Zygote for AD. I'm not sure if it's causing issues performance-wise. If so, a question may be: are there any plans to switch to alternatives like Enzyme.jl? Are there any significant roadblocks that would require extensive work on this subject? |
Yes, see the in-progress work on switching to Enzyme. Hoping to merge and release soon, just waiting on some docs cleanup and a new Enzyme.jl release.
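(Editor's note: for readers wondering what differentiating through a simulation looks like, a toy standalone Zygote example - not Molly code, just the general pattern:)

```julia
using Zygote

# Differentiate an end-of-run observable with respect to a Lennard-Jones σ,
# through 100 velocity Verlet-style update steps. Toy 1D two-particle system, ϵ = 1.
function final_distance(σ)
    r, v = 1.5, 0.0
    dt = 0.01
    for _ in 1:100
        f = 24 * (2 * σ^12 / r^13 - σ^6 / r^7)  # LJ force magnitude
        v += f * dt
        r += v * dt
    end
    return r
end

dσ = Zygote.gradient(final_distance, 1.0)[1]
```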
v0.15.0 is out with many improvements, including to GPU speed and memory usage. Priorities for future development are:
Hopefully there will be one or two GSoC students working on Molly this summer too. |
Pressure coupling added via the Monte Carlo barostat. @JaydevSR will be looking at GPU performance this summer as part of JSoC. |
This issue is a roadmap for Molly.jl development. Feel free to discuss things here or submit a PR. Bear in mind that significant refactoring will probably occur as the package develops.
Low hanging fruit
Want to get involved? These issues might be the place to start:
Bigger things to tackle
Projects that will require more planning or will cause significant changes
Okay, so this is less of a roadmap and more of a list of ideas to work on. But most or all of the above would be required to make this a package suitable for general use, so in a sense this is the roadmap.