From 60966c809718e6fc5d085f13a040f068ba45e6fc Mon Sep 17 00:00:00 2001
From: "Documenter.jl"
Date: Wed, 20 Sep 2023 22:39:54 +0000
Subject: [PATCH] build based on 990b254

---
 previews/PR53/.documenter-siteinfo.json |  2 +-
 previews/PR53/api/index.html            | 76 ++++++++++++------------
 previews/PR53/index.html                |  2 +-
 3 files changed, 40 insertions(+), 40 deletions(-)

diff --git a/previews/PR53/.documenter-siteinfo.json b/previews/PR53/.documenter-siteinfo.json
index d79725da..376b2f02 100644
--- a/previews/PR53/.documenter-siteinfo.json
+++ b/previews/PR53/.documenter-siteinfo.json
-{"documenter":{"julia_version":"1.8.5","generation_timestamp":"2023-09-20T19:59:14","documenter_version":"1.0.1"}}
+{"documenter":{"julia_version":"1.8.5","generation_timestamp":"2023-09-20T22:39:46","documenter_version":"1.0.1"}}

diff --git a/previews/PR53/api/index.html b/previews/PR53/api/index.html
index 4f940c25..3e2f5cf4 100644
--- a/previews/PR53/api/index.html
+++ b/previews/PR53/api/index.html

API · PotentialLearning.jl

API Reference

This page provides a list of all documented types and functions in PotentialLearning.jl.

PotentialLearning.ActiveSubspaceType
ActiveSubspace{T<:Real} <: DimensionReducer
     Q :: Function 
     ∇Q :: Function (gradient of Q)
    tol :: T

Use the theory of active subspaces, with a given quantity of interest (expressed as the function Q) which takes a Configuration as an input and outputs a real scalar. ∇Q should take a Configuration as input and output an appropriate gradient. If tol is a float, then the number of components to keep is determined by the smallest n such that the relative percentage of variance explained by keeping the leading n principal components is greater than 1 - tol. If tol is an int, then we return the components corresponding to the tol largest eigenvalues.

source
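
A hedged construction sketch (the quantity of interest Q and its gradient ∇Q are user-supplied placeholders here, and the field-order constructor is an assumption):

    using PotentialLearning

    Q(c)  = my_quantity_of_interest(c)   # hypothetical: Configuration -> real scalar
    ∇Q(c) = my_quantity_gradient(c)      # hypothetical: Configuration -> gradient of Q
    as = ActiveSubspace(Q, ∇Q, 0.01)     # keep components explaining at least 1 - tol of the variance
    fit(ds, as)                          # ds::DataSet; see the fit docstring further down this page
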
PotentialLearning.AtomicDataType
AtomicData <: Data

Abstract type declaring the type of information that is unique to a particular atom (instead of a whole configuration).

source
PotentialLearning.ConfigurationMethod
Configuration(data::Union{AtomsBase.FlexibleSystem, ConfigurationData} )

A Configuration is a data struct that contains information unique to a particular configuration of atoms (Energy, LocalDescriptors, ForceDescriptors, and a FlexibleSystem) in a dictionary. Example:

    e = Energy(-0.57, u"eV")
    ld = LocalDescriptors(...)
    c = Configuration(e, ld)

Configurations can be added together, which merges the data dictionaries:

    c1 = Configuration(e) # contains energy
    c2 = Configuration(f) # contains forces
    c = c1 + c2           # c <: Configuration, contains energy and forces

source
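
A minimal sketch of the example above (assuming Unitful is loaded for the u"..." unit macro):

    using PotentialLearning, Unitful

    e  = Energy(-0.57, u"eV")   # energy datum with units
    c1 = Configuration(e)       # Configuration holding only an energy
    # Given a second Configuration c2 holding, e.g., forces, the two can be merged:
    # c = c1 + c2               # contains energy and forces
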
PotentialLearning.CorrelationMatrixType
CorrelationMatrix 
    α :: Vector{Float64} # weights

CorrelationMatrix produces a global descriptor that is the correlation matrix of the local descriptors. In other words, it is mean(bi'*bi for bi in B).

source
PotentialLearning.CovariateLinearProblemType

struct CovariateLinearProblem{T<:Real} <: LinearProblem{T}
    e::Vector
    f::Vector{Vector{T}}
    B::Vector{Vector{T}}
    dB::Vector{Matrix{T}}
    β::Vector{T}
    β0::Vector{T}
    σe::Vector{T}
    σf::Vector{T}
    Σ::Symmetric{T,Matrix{T}}
end

A CovariateLinearProblem is a linear problem in which we are fitting energies and forces using both descriptors and their gradients (B and dB, respectively). When this is the case, the solution is not available analytically and must be solved using some iterative optimization procedure. In the end, we fit the model coefficients, β, standard deviations corresponding to energies and forces, σe and σf, and the covariance Σ.

source
PotentialLearning.DBSCANSelectorType
struct DBSCANSelector <: SubsetSelector
     clusters
     eps
     minpts
     sample_size
end

Definition of the type DBSCANSelector, a subselector based on the clustering method DBSCAN.

source
PotentialLearning.DBSCANSelectorMethod
function DBSCANSelector(
     ds::DataSet,
     eps,
     minpts,
     sample_size
)

Constructor of DBSCANSelector based on the atomic configurations in ds, the DBSCAN params eps and minpts, and the sample size sample_size.

source
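
A hedged usage sketch (ds is assumed to be an existing DataSet; the eps, minpts, and sample_size values are arbitrary):

    using PotentialLearning

    selector = DBSCANSelector(ds, 0.05, 10, 100)   # eps = 0.05, minpts = 10, sample_size = 100
    inds = get_random_subset(selector)             # batch_size defaults to selector.sample_size
    ds_subset = ds[inds]                           # indexing a DataSet by a vector of indices is assumed here
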
PotentialLearning.DataSetType
DataSet

Struct that holds a vector of configurations. Most operations in PotentialLearning are built around the DataSet structure.

source
PotentialLearning.DistanceType
Distance
 
A struct of abstract type Distance produces the distance between two `global` descriptors, or features. Not all distances might be compatible with all types of features.
source
PotentialLearning.DotProductType
DotProduct <: Kernel 
     α :: Power of DotProduct kernel 
 
 
 Computes the dot product kernel between two features, i.e.,
 
cos(θ) = ( A ⋅ B / (||A||^2||B||^2) )^α
source
PotentialLearning.EnergyType
Energy <: ConfigurationData
     d :: Real
    u :: Unitful.FreeUnits

Convenience struct that holds energy information (and corresponding units). Default unit is eV

source
PotentialLearning.EuclideanType
Euclidean <: Distance 
     Cinv :: Covariance Matrix 
 
Computes the squared Euclidean distance with weight matrix Cinv, the inverse of some covariance matrix.
source
PotentialLearning.FeatureType
Feature

A struct of abstract type Feature represents a function that takes in a set of local descriptors corresponding to some atomic environment and produces a global descriptor.

source
PotentialLearning.ForceType
Force <: AtomicData 
     f :: Vector{<:Real}
    u :: Unitful.FreeUnits

Contains the force with (x,y,z)-components in f with units u. Default unit is "eV/Å".

source
PotentialLearning.ForcesType
Forces <: ConfigurationData
    f :: Vector{Force}

Forces is a struct that contains all force information in a configuration.

source
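
A small sketch of these data types (field-order constructors are assumed from the listings above):

    using PotentialLearning, Unitful

    e  = Energy(-3.42, u"eV")               # energy with explicit units
    f1 = Force([0.1, 0.0, -0.2], u"eV/Å")   # single atomic force with (x,y,z)-components
    f2 = Force([0.0, 0.3, 0.1], u"eV/Å")
    fs = Forces([f1, f2])                   # all forces in a configuration
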
PotentialLearning.ForstnerType
Forstner <: Distance 
     α :: Regularization parameter
 
Computes the squared Forstner distance between two positive semi-definite matrices.
source
PotentialLearning.KernelType
Kernel
 
A struct of abstract type Kernel is a function that takes in two features and produces a semi-definite scalar representing the similarity between the two features.
source
PotentialLearning.LAMMPSType
struct LAMMPS <: IO
PotentialLearning.LearningProblemType

struct LearningProblem{T<:Real} <: AbstractLearningProblem
    ds::DataSet
    logprob::Function
    ∇logprob::Function
    params::Vector{T}
end

Generic LearningProblem that allows the user to pass a logprob(y::params, ds::DataSet) function and its gradient. The gradient should return the gradient of logprob with respect to its params. If the user does not have a gradient function available, then Flux can provide one for it (provided that logprob is of the form above).

source
PotentialLearning.LinearProblemMethod

function LinearProblem( ds::DataSet; T = Float64 )

Construct a LinearProblem by detecting whether there are energy descriptors and/or force descriptors and constructing the appropriate LinearProblem (either Univariate, if only a single type of descriptor is present, or Covariate, if there are both types).

source
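
A hedged sketch of the intended flow (ds is assumed to be a DataSet that already carries energy and/or force descriptors; learn! is documented further down this page):

    using PotentialLearning

    lp = LinearProblem(ds)   # Univariate or Covariate problem, depending on the descriptors present in ds
    learn!(lp, 1e-8)         # fit the coefficients; α = 1e-8 is an arbitrary regularization/cutoff value
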
PotentialLearning.PCAType
PCA <: DimensionReducer
    tol :: Float64

Use SVD to compute the PCA of the design matrix of descriptors. (using Force descriptors TBA)

If tol is a float, then the number of components to keep is determined by the smallest n such that the relative percentage of variance explained by keeping the leading n principal components is greater than 1 - tol. If tol is an int, then we return the components corresponding to the tol largest eigenvalues.

source
PotentialLearning.RBFType
RBF <: Kernel 
     d :: Distance function 
     α :: Regularization parameter 
     ℓ :: Length-scale parameter
 
 Computes the squared exponential kernel, i.e.,
 
 k(A, B) = β exp( -(1/2) d(A,B)/ℓ^2 ) + α δ(A, B)
source
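
To make the formula concrete, here is a plain-Julia transcription of the expression above (not the package implementation; it assumes d is the squared Euclidean distance and scalar parameters β, ℓ, α):

    # Squared exponential (RBF) kernel between two feature vectors A and B.
    sq_euclidean(A, B) = sum(abs2, A .- B)
    rbf_kernel(A, B; β = 1.0, ℓ = 1.0, α = 1e-8) =
        β * exp(-0.5 * sq_euclidean(A, B) / ℓ^2) + α * (A == B ? 1.0 : 0.0)

    rbf_kernel([1.0, 2.0], [1.5, 1.8]; ℓ = 0.5)
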
PotentialLearning.RandomSelectorType
struct Random
     num_configs :: Int 
     batch_size  :: Int 
end

A convenience function that allows the user to randomly select indices uniformly over [1, num_configs].

source
PotentialLearning.UnivariateLinearProblemType

struct UnivariateLinearProblem{T<:Real} <: LinearProblem{T}
    ivdata::Vector
    dvdata::Vector
    β::Vector{T}
    β0::Vector{T}
    σ::Vector{T}
    Σ::Symmetric{T,Matrix{T}}
end

A UnivariateLinearProblem is a linear problem in which there is only 1 type of independent variable / dependent variable. Typically, that means we are either only fitting energies or only fitting forces. When this is the case, the solution is available analytically and the standard deviation, σ, and covariance, Σ, of the coefficients, β, are computable.

source
PotentialLearning.YAMLType
YAML <: IO
PotentialLearning.kDPPType
struct kDPP
     K :: EllEnsemble
end

A convenience function that allows the user access to a k-Determinantal Point Process through Determinantal.jl. All that is required to construct a kDPP is a similarity kernel, for which the user must provide a LinearProblem and two functions to compute descriptor (1) diversity and (2) quality.

source
PotentialLearning.kDPPMethod
kDPP(ds::Dataset, f::Feature, k::Kernel)

A convenience function that allows the user access to a k-Determinantal Point Process through Determinantal.jl. All that is required to construct a kDPP is a dataset, a method to compute features, and a kernel. Optional arguments include batch size and type of descriptor (default LocalDescriptors).

source
PotentialLearning.kDPPMethod
kDPP(features::Union{Vector{Vector{T}}, Vector{Symmetric{T, Matrix{T}}}}, k::Kernel)

A convenience function that allows the user access to a k-Determinantal Point Process through Determinantal.jl. All that is required to construct a kDPP are features (either a vector of vector features or a vector of symmetric matrix features) and a kernel. Optional argument is batch_size (default length(features)).

source
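
A hedged selection sketch (ds, the Feature f, and the Kernel k are assumed to be constructed elsewhere, e.g. a CorrelationMatrix feature with a DotProduct kernel):

    using PotentialLearning

    dpp  = kDPP(ds, f, k)                # build the k-DPP over the DataSet ds
    inds = get_random_subset(dpp, 100)   # indices of a diverse batch of 100 configurations
    mode = get_dpp_mode(dpp, 100)        # greedy approximation to the most diverse batch
    p    = get_inclusion_prob(dpp)       # per-configuration inclusion probabilities
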
PotentialLearning.KernelMatrixMethod
KernelMatrix(ds1::DataSet, ds2::DataSet, F::Feature, k::Kernel)
Compute nonsymmetric kernel matrix K using features of the datasets ds1 and ds2 calculated using the Feature method F.
source
PotentialLearning.KernelMatrixMethod
KernelMatrix(ds::DataSet, F::Feature, k::Kernel)

Compute symmetric kernel matrix K using features of the dataset ds calculated using the Feature method F.

source
PotentialLearning.calc_centroidMethod
function calc_centroid(
PotentialLearning.calc_metricsMethod
calc_metrics(x_pred, x)

x_pred: vector of predicted values of a variable, e.g. energy. x: vector of true values of a variable, e.g. energy.

Returns MAE, RMSE, and RSQ.

source
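
For example, with made-up energy values:

    using PotentialLearning

    e_true = [-1.10, -0.98, -1.25]
    e_pred = [-1.08, -1.01, -1.22]
    calc_metrics(e_pred, e_true)   # MAE, RMSE, and RSQ of the predictions
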
PotentialLearning.compute_featuresMethod
compute_feature(ds::DataSet, f::Feature; dt = LocalDescriptors)

Computes features of the dataset ds using the feature method f on descriptors dt (the default option is LocalDescriptors, if available).

source
PotentialLearning.fitFunction
fit(ds::DataSet, dr::DimensionReducer)

Fits a linear dimension reduction routine using information from DataSet. See individual types of DimensionReducers for specific details.

source
PotentialLearning.fitMethod
fit(ds::DataSet, as::ActiveSubspace)

Fits a linear dimension reduction routine using the eigendirections of the uncentered covariance of the function ∇Q(c::Configuration) over the configurations in ds. Primarily used to reduce the dimension of the descriptors.

source
PotentialLearning.fitMethod
fit(ds::DataSet, pca::PCA)

Fits a linear dimension reduction routine using PCA on the global descriptors in the dataset ds.

source
PotentialLearning.fit_transformMethod
fit_transform(ds::DataSet, dr::DimensionReducer)

Fits a linear dimension reduction routine using information from DataSet and performs dimension reduction on descriptors and force_descriptors (whichever are available). See individual types of DimensionReducers for specific details.

source
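
A hedged dimension-reduction sketch (ds is assumed to be a DataSet with global descriptors; the single-argument PCA constructor and the return value are assumptions consistent with the docstrings above):

    using PotentialLearning

    pca = PCA(0.01)                  # keep enough components to explain at least 1 - tol of the variance
    ds_red = fit_transform(ds, pca)  # reduce descriptors and force descriptors, whichever are available
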
PotentialLearning.get_batchesMethod
get_batches(n_batches, B_train, B_train_ext, e_train, dB_train, f_train,
            B_test, B_test_ext, e_test, dB_test, f_test)

n_batches: no. of batches per dataset. B_train: descriptors of the energies used in training. B_train_ext: extended descriptors of the energies used in training; required to compute forces. e_train: energies used in training. dB_train: derivatives of the energy descriptors used in training. f_train: forces used in training. B_test: descriptors of the energies used in testing. B_test_ext: extended descriptors of the energies used in testing; required to compute forces. e_test: energies used in testing. dB_test: derivatives of the energy descriptors used in testing. f_test: forces used in testing.

Returns the data loaders for training and test of energies and forces.

source
PotentialLearning.get_clustersMethod
function get_clusters(
     ds,
     eps,
     minpts
)

Computes clusters from the configurations in ds using DBSCAN with parameters eps and minpts.

source
PotentialLearning.get_dpp_modeMethod
get_dpp_mode(dpp::kDPP, batch_size::Int) <: Vector{Int64}

Access an approximate mode of the k-DPP as calculated by a greedy subset algorithm. See Determinantal.jl for details.

source
PotentialLearning.get_inclusion_probMethod
get_inclusion_prob(dpp::kDPP) <: Vector{Float64}

Access an approximation to the inclusion probabilities as calculated by Determinantal.jl (see package for details).

source
PotentialLearning.get_inputMethod
get_input(args)

args: vector of arguments (strings)

Returns an OrderedDict with the arguments. See https://github.com/cesmix-mit/AtomisticComposableWorkflows documentation for information about how to define the input arguments.

source
PotentialLearning.get_metricsMethod
get_metrics( e_train_pred, e_train, f_train_pred, f_train,
              e_test_pred, e_test, f_test_pred, f_test,
             B_time, dB_time, time_fitting)

e_train_pred: vector of predicted training energy values. e_train: vector of true training energy values. f_train_pred: vector of predicted training force values. f_train: vector of true training force values. e_test_pred: vector of predicted test energy values. e_test: vector of true test energy values. f_test_pred: vector of predicted test force values. f_test: vector of true test force values. B_time: elapsed time consumed by the descriptor calculation. dB_time: elapsed time consumed by the descriptor derivative calculation. time_fitting: elapsed time consumed by the fitting process.

Computes MAE, RMSE, and RSQ for training and testing energies and forces. It also adds the elapsed times of the descriptor and fitting calculations. Returns an OrderedDict with the information above.

source
PotentialLearning.get_metricsMethod
get_metrics( e_train_pred, e_train, e_test_pred, e_test)

e_train_pred: vector of predicted training energy values. e_train: vector of true training energy values. e_test_pred: vector of predicted test energy values. e_test: vector of true test energy values.

Computes MAE, RMSE, and RSQ for training and testing energies. Returns an OrderedDict with the information above.

source
PotentialLearning.get_random_subsetFunction
function get_random_subset(
     s::DBSCANSelector,
     batch_size = s.sample_size
)

Returns a random subset of indices composed of samples of size batch_size ÷ length(s.clusters) from each cluster in s.

source
PotentialLearning.get_random_subsetFunction
get_random_subset(r::Random, batch_size :: Int) <: Vector{Int64}

Access a random subset of the data, sampled uniformly via the provided Random selector. Returns the indices of the random subset and the subset itself.

source
PotentialLearning.get_random_subsetMethod
get_random_subset(dpp::kDPP, batch_size :: Int) <: Vector{Int64}

Access a random subset of the data as sampled from the provided k-DPP. Returns the indices of the random subset and the subset itself.

source
PotentialLearning.get_systemMethod
get_system(c::Configuration) <: AtomsBase.AbstractSystem

Retrieves the AtomsBase system (if available) in the Configuration c.

source
PotentialLearning.kabschMethod
function kabsch(
     reference::Array{Float64,2},
     coords::Array{Float64,2}
)

Input: two sets of points, reference and coords, given as Nx3 matrices. Returns the optimally rotated matrix.

source
PotentialLearning.learn!Method

function learn!( iap::InteratomicPotentials.LinearBasisPotential, ds::DataSet, args... )

Learning dispatch function, common to ordinary and weighted least squares implementations.

source
PotentialLearning.learn!Method

function learn!( lp::CovariateLinearProblem, α::Real )

Fit a Gaussian distribution by finding the MLE of the following log probability: ℓ(β, σe, σf) = -0.5 * (e - A_e β)'(e - A_e β) / σe - 0.5 * (f - A_f β)'(f - A_f β) / σf - log(σe) - log(σf)

through an optimization procedure.

source
PotentialLearning.learn!Method

function learn!( lp::CovariateLinearProblem, ss::SubsetSelector, α::Real; num_steps=100, opt=Flux.Optimise.Adam() )

Fit a Gaussian distribution by finding the MLE of the following log probability: ℓ(β, σe, σf) = -0.5 * (e - A_e β)'(e - A_e β) / σe - 0.5 * (f - A_f β)'(f - A_f β) / σf - log(σe) - log(σf)

through an iterative batch gradient descent optimization procedure where the batches are provided by the subset selector.

source
PotentialLearning.learn!Method

function learn!( lp::CovariateLinearProblem, ws::Vector, int::Bool )

Fit energies and forces using weighted least squares.

source
PotentialLearning.learn!Method

function learn!( lp::LearningProblem, ss::SubsetSelector; num_steps = 100::Int, opt = Flux.Optimisers.Adam() )

Attempts to fit the parameters lp.params in the learning problem lp using batch gradient descent with the optimizer opt and num_steps number of iterations. Batching is provided by the passed ss::SubsetSelector.

source
PotentialLearning.learn!Method

function learn!( lp::LearningProblem; num_steps=100::Int, opt=Flux.Optimisers.Adam() )

Attempts to fit the parameters lp.params in the learning problem lp using gradient descent with the optimizer opt and num_steps number of iterations.

source
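
A hedged sketch of a custom LearningProblem (ds is an existing DataSet; the log-probability below is a stand-in, any differentiable logprob(params, ds) works, and the field-order constructor is assumed):

    using PotentialLearning

    logprob(θ, ds)  = -sum(abs2, θ)   # placeholder objective over parameters θ and the DataSet ds
    ∇logprob(θ, ds) = -2 .* θ         # its gradient with respect to θ

    lp = LearningProblem(ds, logprob, ∇logprob, zeros(10))
    learn!(lp; num_steps = 200)       # gradient descent with the default Adam optimizer
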
PotentialLearning.learn!Method

function learn!( lp::UnivariateLinearProblem, α::Real )

Fit a univariate Gaussian distribution for the equation y = Aβ + ϵ, where β are model coefficients and ϵ ∼ N(0, σ). Fitting is done via SVD on the design matrix, A'*A (formed iteratively), where eigenvalues less than α are cut-off.

source
PotentialLearning.learn!Method

function learn!( lp::UnivariateLinearProblem, ss::SubsetSelector, α::Real; num_steps = 100, opt = Flux.Optimise.Adam() )

Fit a univariate Gaussian distribution for the equation y = Aβ + ϵ, where β are model coefficients and ϵ ∼ N(0, σ). Fitting is done via batched gradient descent with batches provided by the subset selector and the gradients are calculated using Flux.

source
PotentialLearning.learn!Method

function learn!( lp::UnivariateLinearProblem, ws::Vector, int::Bool )

Fit energies using weighted least squares.

source
PotentialLearning.load_dataMethod
load_data(file::string, yaml::YAML)
 
 Load configurations from a yaml file into a Vector of Flexible Systems, with Energies and Forces.
 Returns 
     ds - DataSet
     t = Vector{Dict} (any miscellaneous info from yaml file)
source
PotentialLearning.load_datasetsMethod
load_datasets(input)

input: OrderedDict with input arguments. See get_defaults_args().

Returns training and test systems, energies, forces, and stresses.

source
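
A hedged input-pipeline sketch (the flag names follow the AtomisticComposableWorkflows conventions and are purely illustrative):

    using PotentialLearning

    args  = ["experiment_path", "results/", "dataset_path", "data/", "n_train_sys", "800"]
    input = get_input(args)        # OrderedDict of arguments
    data  = load_datasets(input)   # training and test systems, energies, forces, and stresses (see above)
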
PotentialLearning.periodic_rmsdMethod
function periodic_rmsd(
     p1::Array{Float64,2},
     p2::Array{Float64,2},
     box_lengths::Array{Float64,1}
)

Calculates the RMSD between atom positions of two configurations taking into account the periodic boundaries.

source
PotentialLearning.rmsdMethod
function rmsd(
     A::Array{Float64,2},
     B::Array{Float64,2}
)

Calculate root mean square deviation of two matrices A, B. See http://en.wikipedia.org/wiki/Root-mean-square_deviation_of_atomic_positions

source
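
For example, comparing two small structures stored as Nx3 coordinate matrices (random data, purely illustrative):

    using PotentialLearning

    A = rand(8, 3)                 # reference coordinates for 8 atoms
    B = A .+ 0.01 .* randn(8, 3)   # slightly perturbed coordinates
    rmsd(A, B)                     # root mean square deviation between the two
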
PotentialLearning.sampleMethod
function sample(
PotentialLearning.translate_pointsMethod
function translate_points(
     P::Array{Float64,2},
     Q::Array{Float64,2}
)

Translate P and Q so that their centroids coincide with the origin of the coordinate system.

source
diff --git a/previews/PR53/index.html b/previews/PR53/index.html
index 225515a1..4304bc22 100644
--- a/previews/PR53/index.html
+++ b/previews/PR53/index.html

Home · PotentialLearning.jl

[WIP] PotentialLearning.jl

An open-source Julia library for active learning of interatomic potentials in atomistic simulations of materials. It incorporates elements of Bayesian inference, machine learning, differentiable programming, software composability, and high-performance computing. This package is part of a software suite developed for the CESMIX project.

Specific goals

  • Intelligent data subsampling: iteratively query a large pool of unlabeled data to extract a minimal training set that yields a supervised ML model with accuracy superior to that of a model trained on an educated, hand-picked selection.
  • Quantity of Interest based dimension reduction through the theory of Active Subspaces.
  • Inference of the optimal values and uncertainties of the model parameters, to propagate them through the atomistic simulation.
    • Interatomic potential hyper-parameter optimization. E.g. estimation of the optimum cutoff radius.
    • Interatomic potential fitting. The potentials addressed in this package are defined in InteratomicPotentials.jl and InteratomicBasisPotentials.jl. E.g. ACE, SNAP, Neural Network Potentials.
  • Measurement of QoI sensitivity to individual parameters.
  • Input data management and post-processing.
    • Process input data so that it is ready for training. E.g. read XYZ file with atomic configurations, linearize energies and forces, split dataset into training and testing, normalize data, transfer data to GPU, define iterators, etc.
    • Post-processing: computation of different metrics (MAE, RSQ, COV, etc), saving results, and plotting.

Leveraging Julia!

  • Software composability through multiple dispatch. A series of composable workflows is guiding our design and development. We analyzed three of the most representative workflows: classical molecular dynamics (MD), Ab initio MD, and classical MD with active learning. In addition, it facilitates the training of new potentials defined by the composition of neural networks with state-of-the-art interatomic potential descriptors.
  • Differentiable programming. Powerful automatic differentiation tools, such as Enzyme or Zygote, help to accelerate the development of new interatomic potentials by automatically calculating loss function gradients and forces.
  • SciML: Open Source Software for Scientific Machine Learning. It provides libraries, such as Optimization.jl, that bring together several optimization packages into one unified Julia interface.
  • Machine learning and HPC abstractions: Flux.jl makes parallel learning simple using the NVIDIA GPU abstractions of CUDA.jl. Mini-batch iterations on heterogeneous data, as required by a loss function based on energies and forces, can be handled by DataLoader.jl.

Examples

See AtomisticComposableWorkflows repository. It aims to gather easy-to-use CESMIX-aligned case studies, integrating the latest developments of the Julia atomistic ecosystem with state-of-the-art tools.
