Simple job running API #68
### Basic calculations

I have done some work on this in the yutility package. The approach I take is to configure all settings through `Job` class methods. For example, we can run a simple geometry optimization job using the `ADFJob` class:

```python
from tcutility.job import ADFJob

with ADFJob() as job:
    job.molecule('./test/xyz/NH3BH3.xyz')
    job.rundir = 'tmp/NH3BH3'
    job.name = 'GeometryOpt'
    job.sbatch(p='tc', ntasks_per_node=15)
    job.optimization()
    job.functional('r2SCAN')
    job.basis_set('TZ2P')
```

This small script runs a geometry optimization on the molecule stored in `./test/xyz/NH3BH3.xyz` at the r2SCAN/TZ2P level of theory. The calculation will be run in and stored in `./tmp/NH3BH3/GeometryOpt`. Furthermore, the job will be submitted using `sbatch` with 15 cores on the `tc` partition. This approach lets us set up a calculation very quickly, in only 8 lines of code; doing everything directly with PLAMS, including the Slurm settings, would easily take close to 100 lines.

### Dependent jobs

One important feature of this approach is that we can define dependencies between jobs. For example, we can run an optimization at a lower level of theory and then a single-point calculation at a higher level of theory:
```python
from tcutility.job import ADFJob

with ADFJob() as opt_job:
    opt_job.molecule('./test/xyz/SN2_TS.xyz')
    opt_job.charge(-1)
    opt_job.rundir = 'tmp/SN2'
    opt_job.name = 'TS_OPT'
    opt_job.sbatch(p='tc', ntasks_per_node=15)
    opt_job.functional('OLYP')
    opt_job.basis_set('DZP')
    opt_job.transition_state()

with ADFJob() as sp_job:
    sp_job.dependency(opt_job)  # this job will only run when opt_job finishes
    # 'j' is assumed to be a path-join alias (e.g. os.path.join)
    sp_job.molecule(j(opt_job.workdir, 'output.xyz'))
    sp_job.charge(-1)
    sp_job.rundir = 'tmp/SN2'
    sp_job.name = 'SP_M062X'
    sp_job.sbatch(p='tc', ntasks_per_node=15)
    sp_job.functional('M06-2X')
    sp_job.basis_set('TZ2P')
```

`sp_job` will wait for `opt_job` to finish before starting. We can also directly reference the molecule file that `opt_job` will produce. This file does not exist yet when `sp_job` is submitted, but it will be read once `opt_job` finishes.

### Fragment calculations

One consequence of the dependency feature is the ease of setting up fragment-based calculations. I have implemented a small class for this (`ADFFragmentJob`):
```python
from scm import plams  # import added here; the original snippet assumes plams is already available
from tcutility.job import ADFFragmentJob

mol = plams.Molecule('./test/xyz/radadd.xyz')

with ADFFragmentJob() as job:
    job.add_fragment(mol.atoms[:15], 'Substrate')
    job.add_fragment(mol.atoms[15:], 'Radical')
    job.Radical.spin_polarization(1)
    job.rundir = 'tmp/RA'
    job.sbatch(p='tc', ntasks_per_node=15)
    job.functional('BLYP-D3(BJ)')
    job.basis_set('TZ2P')
```

We first load the molecule and then add the fragments by selecting atoms from it. For this system, atoms 1-15 are part of the substrate and atoms 16-20 form the methyl radical. Using the `add_fragment` method we provide the fragment geometry (a list of atoms in this case) and the name of the fragment.
Looks good! Actually quite a handy way of doing calculations. I like how you can specify the name of the fragment and then use that name to change its settings. I assume it's case-sensitive? I further wonder what the complex is called, so that you can use it for accessing information? Or do you plan to just use the …

The dependent job mechanism is very interesting. If it indeed works the way you describe, then it becomes very easy to chain calculations, such as for NMR, or even entire workflows.
Adding a simple job-running API would be very helpful for automation tasks. There are of course many possible ways of implementing this. API calls might look like:
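One possible shape is a single function call that bundles all settings at once. The names below are purely illustrative assumptions, not an actual proposal:

```python
# Illustrative sketch only; function and parameter names are assumptions.
def run_job(molecule, functional, basis_set, task='SinglePoint', **extra):
    """Bundle all job settings into one specification dict."""
    spec = {'molecule': molecule, 'functional': functional,
            'basis_set': basis_set, 'task': task}
    spec.update(extra)  # any further engine-specific settings
    return spec

spec = run_job('./NH3BH3.xyz', 'r2SCAN', 'TZ2P', task='GeometryOptimization')
```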
Or, a little more verbose, but more flexible:
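A more verbose variant could be a builder object whose methods each set one option and can be chained; again, all names here are illustrative assumptions:

```python
# Illustrative builder-style sketch; class and method names are assumptions.
class JobSpec:
    def __init__(self):
        self.settings = {}

    def molecule(self, path):
        self.settings['molecule'] = path
        return self  # returning self allows method chaining

    def functional(self, name):
        self.settings['functional'] = name
        return self

    def basis_set(self, name):
        self.settings['basis_set'] = name
        return self

job = JobSpec().molecule('./NH3BH3.xyz').functional('r2SCAN').basis_set('TZ2P')
```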