
Commit

Merge pull request #78 from cpmech/remove-intel-dss-solver
Remove Intel DSS
cpmech authored Mar 8, 2024
2 parents 1b1a6f2 + 97abe32 commit 051f0f7
Showing 24 changed files with 35 additions and 975 deletions.
3 changes: 1 addition & 2 deletions .vscode/c_cpp_properties.json
@@ -11,7 +11,6 @@
"/usr/local/include/umfpack"
],
"defines": [
"WITH_INTEL_DSS",
"USE_INTEL_MKL"
],
"compilerPath": "/usr/bin/gcc",
@@ -21,4 +20,4 @@
}
],
"version": 4
}
}
1 change: 0 additions & 1 deletion .vscode/settings.json
@@ -39,7 +39,6 @@
"ifort",
"IIIₛ",
"ᵢⱼₖₗ",
"inteldss",
"iomp",
"irhs",
"jobvl",
3 changes: 1 addition & 2 deletions README.md
@@ -29,7 +29,7 @@ Next, we recommend looking at the [russell_sparse](https://github.com/cpmech/rus
Available crates:

- [![Crates.io](https://img.shields.io/crates/v/russell_lab.svg)](https://crates.io/crates/russell_lab) [russell_lab](https://github.com/cpmech/russell/tree/main/russell_lab) Matrix-vector laboratory for linear algebra (with OpenBLAS or Intel MKL)
- [![Crates.io](https://img.shields.io/crates/v/russell_sparse.svg)](https://crates.io/crates/russell_sparse) [russell_sparse](https://github.com/cpmech/russell/tree/main/russell_sparse) Sparse matrix tools and solvers (with MUMPS, UMFPACK, and Intel DSS)
- [![Crates.io](https://img.shields.io/crates/v/russell_sparse.svg)](https://crates.io/crates/russell_sparse) [russell_sparse](https://github.com/cpmech/russell/tree/main/russell_sparse) Sparse matrix tools and solvers (with MUMPS and UMFPACK)
- [![Crates.io](https://img.shields.io/crates/v/russell_stat.svg)](https://crates.io/crates/russell_stat) [russell_stat](https://github.com/cpmech/russell/tree/main/russell_stat) Statistics calculations, probability distributions, and pseudo random numbers
- [![Crates.io](https://img.shields.io/crates/v/russell_tensor.svg)](https://crates.io/crates/russell_tensor) [russell_tensor](https://github.com/cpmech/russell/tree/main/russell_tensor) Tensor analysis structures and functions for continuum mechanics

@@ -365,7 +365,6 @@ fn main() -> Result<(), StrError> {
- [x] Implement the Compressed Sparse Column format (CSC)
- [x] Implement the Compressed Sparse Row format (CSR)
- [x] Improve the C-interface to UMFPACK and MUMPS
- [x] Implement the C-interface to Intel DSS
- [ ] Write the conversion from COO to CSC in Rust
- [ ] Possibly re-write (after benchmarking) the conversion from COO to CSR
- [ ] Re-study the possibility of wrapping SuperLU (see deleted branch)
3 changes: 3 additions & 0 deletions russell_ode/Cargo.toml
@@ -11,6 +11,9 @@ readme = "README.md"
categories = ["mathematics", "science"]
keywords = ["differential equations", "numerical methods", "solver"]

[features]
intel_mkl = ["russell_lab/intel_mkl", "russell_sparse/intel_mkl"]

[dependencies]
russell_lab = { path = "../russell_lab", version = "0.8" }
russell_sparse = { path = "../russell_sparse", version = "0.8" }
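
The `intel_mkl` feature introduced above (and the matching change in russell_sparse/Cargo.toml just below) is consumed through Cargo's conditional compilation. A minimal, generic Rust sketch of that mechanism, assuming nothing about the actual russell_ode code (the functions here are hypothetical):

```rust
// Hypothetical illustration of how a forwarded Cargo feature is consumed downstream.
// None of these items exist in russell_ode; they only demonstrate the cfg mechanism.

#[cfg(feature = "intel_mkl")]
fn backend_name() -> &'static str {
    "Intel MKL"
}

#[cfg(not(feature = "intel_mkl"))]
fn backend_name() -> &'static str {
    "OpenBLAS"
}

fn main() {
    // Selected at compile time, e.g., by `cargo build --features intel_mkl`
    println!("linear algebra backend: {}", backend_name());
}
```

Because the feature entries forward to `russell_lab/intel_mkl` and `russell_sparse/intel_mkl`, enabling the feature on this crate enables it consistently across the dependency tree.
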
2 changes: 1 addition & 1 deletion russell_sparse/Cargo.toml
@@ -13,7 +13,7 @@ keywords = ["matrix", "sparse", "solver"]

[features]
local_libs = []
intel_mkl = ["local_libs"]
intel_mkl = ["local_libs", "russell_lab/intel_mkl"]

[dependencies]
num-complex = { version = "0.4", features = ["serde"] }
8 changes: 4 additions & 4 deletions russell_sparse/README.md
@@ -14,7 +14,7 @@ _This crate is part of [Russell - Rust Scientific Library](https://github.com/cp

## <a name="introduction"></a> Introduction

This crate implements tools for handling sparse matrices and functions to solve large sparse systems using the best libraries out there, such as [UMFPACK (recommended)](https://github.com/DrTimothyAldenDavis/SuiteSparse) and [MUMPS (for very large systems)](https://mumps-solver.org). Optionally, you may want to use the [Intel DSS solver](https://www.intel.com/content/www/us/en/docs/onemkl/developer-reference-c/2023-2/direct-sparse-solver-dss-interface-routines.html).
This crate implements tools for handling sparse matrices and functions to solve large sparse systems using the best libraries out there, such as [UMFPACK (recommended)](https://github.com/DrTimothyAldenDavis/SuiteSparse) and [MUMPS (for very large systems)](https://mumps-solver.org).

We have three storage formats for sparse matrices:

@@ -26,7 +26,7 @@ Additionally, to unify the handling of the above sparse matrix data structures,

* SparseMatrix: Either a COO, CSC, or CSR matrix

The COO matrix is the best when we need to update the values of the matrix because it has easy access to the triples (i, j, aij). For instance, the repetitive access is the primary use case for codes based on the finite element method (FEM) for approximating partial differential equations. Moreover, the COO matrix allows storing duplicate entries; for example, the triple `(0, 0, 123.0)` can be stored as two triples `(0, 0, 100.0)` and `(0, 0, 23.0)`. Again, this is the primary need for FEM codes because of the so-called assembly process where elements add to the same positions in the "global stiffness" matrix. Nonetheless, the duplicate entries must be summed up at some stage for the linear solver (e.g., MUMPS, UMFPACK, and Intel DSS). These linear solvers also use the more memory-efficient storage formats CSC and CSR. See the [russell_sparse documentation](https://docs.rs/russell_sparse) for further information.
The COO matrix is the best when we need to update the values of the matrix because it has easy access to the triples (i, j, aij). For instance, the repetitive access is the primary use case for codes based on the finite element method (FEM) for approximating partial differential equations. Moreover, the COO matrix allows storing duplicate entries; for example, the triple `(0, 0, 123.0)` can be stored as two triples `(0, 0, 100.0)` and `(0, 0, 23.0)`. Again, this is the primary need for FEM codes because of the so-called assembly process where elements add to the same positions in the "global stiffness" matrix. Nonetheless, the duplicate entries must be summed up at some stage for the linear solver (e.g., MUMPS, UMFPACK). These linear solvers also use the more memory-efficient storage formats CSC and CSR. See the [russell_sparse documentation](https://docs.rs/russell_sparse) for further information.
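
As an aside, the duplicate summation described above is conceptually just an accumulation over (i, j) keys. The sketch below illustrates the idea in plain Rust; it is not the russell_sparse implementation, and the helper name is made up:

```rust
use std::collections::BTreeMap;

/// Sums duplicate (i, j, aij) triples, as a COO-to-CSC/CSR conversion must do.
/// This is a conceptual sketch only; russell_sparse performs this step internally.
fn sum_duplicates(triples: &[(usize, usize, f64)]) -> Vec<(usize, usize, f64)> {
    let mut acc: BTreeMap<(usize, usize), f64> = BTreeMap::new();
    for &(i, j, aij) in triples {
        // FEM-style assembly: different entries add into the same (i, j) position
        *acc.entry((i, j)).or_insert(0.0) += aij;
    }
    acc.into_iter().map(|((i, j), v)| (i, j, v)).collect()
}

fn main() {
    // the triple (0, 0, 123.0) stored as two triples, as in the example above
    let coo = vec![(0, 0, 100.0), (0, 0, 23.0), (1, 1, 4.0)];
    assert_eq!(sum_duplicates(&coo), vec![(0, 0, 123.0), (1, 1, 4.0)]);
}
```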

This library also provides functions to read and write Matrix Market files containing (huge) sparse matrices that can be used in performance benchmarking or other studies. The [read_matrix_market()] function reads a Matrix Market file and returns a [CooMatrix]. To write a Matrix Market file, we can use the function [write_matrix_market()], which takes a [SparseMatrix] and, thus, automatically converts COO to CSC or COO to CSR, also performing the sum of duplicates. The `write_matrix_market()` function also writes an SMAT file (almost like the Matrix Market format) without the header and with zero-based indices. The SMAT file can be given to the fantastic [Vismatrix](https://github.com/cpmech/vismatrix) tool to visualize the sparse matrix structure and values interactively; see the example below.
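
For reference, a Matrix Market coordinate file is plain text: a `%%MatrixMarket` header, a size line (rows, columns, non-zeros), and one 1-based `i j value` triple per line. The sketch below writes such a file by hand rather than through `write_matrix_market()`; the output path and the tiny matrix are arbitrary examples:

```rust
use std::fs::File;
use std::io::Write;

fn main() -> std::io::Result<()> {
    // a 3x3 matrix with 4 non-zeros in the coordinate (COO) layout, 1-based indices
    let triples = [(1, 1, 2.0), (2, 2, 3.0), (3, 3, 4.0), (3, 1, 1.0)];
    let mut file = File::create("/tmp/tiny_matrix.mtx")?; // arbitrary output path
    writeln!(file, "%%MatrixMarket matrix coordinate real general")?;
    writeln!(file, "3 3 {}", triples.len())?; // nrow ncol nnz
    for (i, j, value) in triples {
        writeln!(file, "{} {} {}", i, j, value)?;
    }
    Ok(())
}
```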

@@ -38,7 +38,7 @@ See the documentation for further information:

## <a name="installation"></a> Installation

This crate depends on `russell_lab`, which, in turn, depends on an efficient BLAS library such as [OpenBLAS](https://github.com/OpenMathLib/OpenBLAS) and [Intel MKL](https://www.intel.com/content/www/us/en/docs/onemkl/developer-reference-c/2023-2/overview.html). This crate also depends on [UMFPACK](https://github.com/DrTimothyAldenDavis/SuiteSparse), [MUMPS](https://mumps-solver.org), and, optionally, on [Intel DSS](https://www.intel.com/content/www/us/en/docs/onemkl/developer-reference-c/2023-2/direct-sparse-solver-dss-interface-routines.html).
This crate depends on `russell_lab`, which, in turn, depends on an efficient BLAS library such as [OpenBLAS](https://github.com/OpenMathLib/OpenBLAS) and [Intel MKL](https://www.intel.com/content/www/us/en/docs/onemkl/developer-reference-c/2023-2/overview.html). This crate also depends on [UMFPACK](https://github.com/DrTimothyAldenDavis/SuiteSparse) and [MUMPS](https://mumps-solver.org).

[The root README file presents the steps to install the required dependencies.](https://github.com/cpmech/russell)

@@ -245,7 +245,7 @@ Also, to reproduce the issue, we need:

## <a name="developers"></a> For developers

* The `c_code` directory contains a thin wrapper to the sparse solvers (MUMPS, UMFPACK, and Intel DSS)
* The `c_code` directory contains a thin wrapper to the sparse solvers (MUMPS, UMFPACK)
* The `build.rs` file uses the crate `cc` to build the C-wrappers
* The `zscripts` directory also contains the following:
* `memcheck.bash`: Checks for memory leaks on the C-code using Valgrind
37 changes: 0 additions & 37 deletions russell_sparse/build.rs
@@ -1,6 +1,3 @@
#[cfg(feature = "intel_mkl")]
const MKL_VERSION: &str = "2023.2.0";

#[cfg(feature = "local_libs")]
fn handle_local_libs() {
// local MUMPS
@@ -42,40 +39,6 @@ fn handle_local_libs() {
println!("cargo:rustc-link-lib=dylib=umfpack");
}

#[cfg(feature = "intel_mkl")]
fn handle_intel_mkl() {
// Find the link libs with: pkg-config --libs mkl-dynamic-lp64-iomp
cc::Build::new()
.file("c_code/interface_intel_dss.c")
.include(format!("/opt/intel/oneapi/mkl/{}/include", MKL_VERSION))
.define("WITH_INTEL_DSS", None)
.compile("c_code_interface_intel_dss");
println!(
"cargo:rustc-link-search=native=/opt/intel/oneapi/mkl/{}/lib/intel64",
MKL_VERSION
);
println!(
"cargo:rustc-link-search=native=/opt/intel/oneapi/compiler/{}/linux/compiler/lib/intel64_lin",
MKL_VERSION
);
println!("cargo:rustc-link-lib=mkl_intel_lp64");
println!("cargo:rustc-link-lib=mkl_intel_thread");
println!("cargo:rustc-link-lib=mkl_core");
println!("cargo:rustc-link-lib=pthread");
println!("cargo:rustc-link-lib=m");
println!("cargo:rustc-link-lib=dl");
println!("cargo:rustc-link-lib=iomp5");
println!("cargo:rustc-cfg=with_intel_dss");
}

#[cfg(not(feature = "intel_mkl"))]
fn handle_intel_mkl() {
cc::Build::new()
.file("c_code/interface_intel_dss.c")
.compile("c_code_interface_intel_dss");
}

fn main() {
handle_local_libs();
handle_intel_mkl();
}
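
With the two `handle_intel_mkl` variants gone, the remaining build script plausibly reduces to the local-libs handling plus a `main` that calls it. The reconstruction below is a sketch based only on fragments visible in this diff; the MUMPS link name and the search path are placeholders, not the repository's actual values:

```rust
// Hedged reconstruction of the trimmed build.rs. Only the `umfpack` link line and the
// function names are certain from this diff; the MUMPS link name and the search path
// below are placeholders.

#[cfg(feature = "local_libs")]
fn handle_local_libs() {
    // local MUMPS (placeholder path and library name)
    println!("cargo:rustc-link-search=native=/usr/local/lib/mumps");
    println!("cargo:rustc-link-lib=dylib=dmumps"); // placeholder name
    // UMFPACK (this line appears verbatim in the diff above)
    println!("cargo:rustc-link-lib=dylib=umfpack");
}

#[cfg(not(feature = "local_libs"))]
fn handle_local_libs() {
    println!("cargo:rustc-link-lib=dylib=dmumps"); // placeholder name
    println!("cargo:rustc-link-lib=dylib=umfpack");
}

fn main() {
    handle_local_libs();
}
```
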
240 changes: 0 additions & 240 deletions russell_sparse/c_code/interface_intel_dss.c

This file was deleted.

1 change: 0 additions & 1 deletion russell_sparse/examples/nonlinear_system_4eqs.rs
@@ -22,7 +22,6 @@ fn main() -> Result<(), StrError> {
let genie = match opt.genie.to_lowercase().as_str() {
"mumps" => Genie::Mumps,
"umfpack" => Genie::Umfpack,
"dss" => Genie::IntelDss,
_ => Genie::Umfpack,
};
println!("... solving problem with {:?} ...", genie);
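
After dropping the `"dss"` arm, the solver selection in this example reduces to the shape sketched below (assuming russell_sparse as a dependency; `genie_name` stands in for the CLI option `opt.genie` used in the real example):

```rust
use russell_sparse::Genie;

// Sketch of the trimmed selection logic from the example above.
fn select_genie(genie_name: &str) -> Genie {
    match genie_name.to_lowercase().as_str() {
        "mumps" => Genie::Mumps,
        "umfpack" => Genie::Umfpack,
        _ => Genie::Umfpack, // default to UMFPACK for unrecognized names
    }
}
```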
