1.1. Installing SUNDIALS
In this chapter we discuss two ways for building and installing SUNDIALS from source. The first is with the Spack HPC package manager and the second is with CMake.
1.1.1. Installing with Spack
Spack is a package management tool that provides a simple spec syntax to configure and install software on a wide variety of platforms and environments. See the Getting Started section in the Spack documentation for more information on installing Spack.
Once Spack is set up on your system, the default SUNDIALS configuration can be installed with the command
spack install sundials
Additional options can be enabled through various Spack package variants. For information on the available variants visit the SUNDIALS Spack package web page or use the command
spack info sundials
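As a sketch of the variant syntax, variants are appended to the package spec; the variant names and version below (e.g., +mpi, +hypre, 6.7.0) are taken from the Spack package and are illustrative, so check spack info sundials for the options available in your Spack version:

```shell
# Install SUNDIALS with MPI support enabled (variant name assumed from the Spack package)
spack install sundials +mpi

# Install a specific (illustrative) version with hypre support
spack install sundials@6.7.0 +hypre
```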
1.1.2. Installing with CMake
CMake provides a platform-independent build system capable of generating Unix and Linux Makefiles, as well as KDevelop, Visual Studio, and (Apple) Xcode project files from the same configuration file. A GUI front end is also available, allowing for an interactive build and installation process.
At a minimum, building SUNDIALS requires CMake version 3.18.0 or higher and a working C compiler. If a compatible version of CMake is not already installed on your system, source files or pre-built binary files can be obtained from the CMake Download website.
When building with CMake, you will need to obtain the SUNDIALS source code. You can get the source files by either cloning the SUNDIALS GitHub repository with the command
git clone https://github.com/LLNL/sundials
or by downloading release compressed archives (.tar.gz files) from the SUNDIALS download website. The compressed archives allow for downloading the entire SUNDIALS suite or individual packages. The name of the distribution archive is of the form SOLVER-x.y.z.tar.gz, where SOLVER is one of: sundials, cvode, cvodes, arkode, ida, idas, or kinsol, and x.y.z represents the version number of the SUNDIALS suite or of the individual package. After downloading the relevant archives, uncompress and expand the sources. For example, running
tar -zxf SOLVER-x.y.z.tar.gz
will place the extracted source files under the SOLVER-x.y.z directory.
In the installation steps below we will refer to the following directories:
SOLVER_DIR is the sundials directory created when cloning from GitHub or the SOLVER-x.y.z directory created after uncompressing the release archive.
BUILD_DIR is the (temporary) directory under which SUNDIALS is built. In-source builds are prohibited; the build directory BUILD_DIR cannot be the same as SOLVER_DIR, and such an attempt will lead to an error. This prevents "polluting" the source tree, simplifies building with different configurations and/or options, and makes it easy to clean up all traces of the build by simply removing the build directory.
INSTALL_DIR is the directory under which the SUNDIALS exported header files and libraries will be installed. The installation directory INSTALL_DIR cannot be the same as the SOLVER_DIR directory. Typically, header files are exported under the directory INSTALL_DIR/include while libraries are installed under INSTALL_DIR/lib or INSTALL_DIR/lib64, with INSTALL_DIR specified at configuration time.
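As a concrete sketch of this layout (the paths below are hypothetical; adjust them to your system), one might set:

```shell
# Hypothetical directory layout for the steps below
SOLVER_DIR=$HOME/sundials            # source tree (cloned or extracted)
BUILD_DIR=$HOME/sundials/build       # out-of-source build directory (not equal to SOLVER_DIR)
INSTALL_DIR=$HOME/sundials/install   # installation prefix; headers land in $INSTALL_DIR/include
```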
1.1.2.1. Linux/Unix systems
CMake can be used from the command line with the cmake command, or from graphical interfaces with the ccmake or cmake-gui commands. Below we present the installation steps using the command line interface.
Using CMake from the command line is simply a matter of generating the build files for the desired configuration, building, and installing. For example, the following commands will build and install the default configuration:
cmake \
-S SOLVER_DIR \
-B BUILD_DIR \
-D CMAKE_INSTALL_PREFIX=INSTALL_DIR
cd BUILD_DIR
make
make install
The default configuration will install static and shared libraries for all SUNDIALS packages and install the associated example codes. Additional features can be enabled by specifying more options in the configuration step. For example, to enable MPI add -D ENABLE_MPI=ON to the cmake command above:
cmake \
-S SOLVER_DIR \
-B BUILD_DIR \
-D CMAKE_INSTALL_PREFIX=INSTALL_DIR \
-D ENABLE_MPI=ON
See section §1.1.3 below for a complete list of SUNDIALS configuration options and additional configuration examples.
1.1.2.2. Windows Systems
CMake can also be used to build SUNDIALS on Windows. To build SUNDIALS for use with Visual Studio the following steps should be performed:
1. Create a separate BUILD_DIR
2. Open a Visual Studio Command Prompt and cd to BUILD_DIR
3. Run cmake-gui ../SOLVER_DIR
4. Hit Configure
5. Check/Uncheck solvers to be built
6. Change CMAKE_INSTALL_PREFIX to INSTALL_DIR
7. Set other options as desired (see section §1.1.3)
8. Hit Generate
9. Back in the VS Command Window, run msbuild ALL_BUILD.vcxproj followed by msbuild INSTALL.vcxproj
The resulting libraries will be in INSTALL_DIR.
The SUNDIALS project can also now be opened in Visual Studio. Double click on
the ALL_BUILD.vcxproj
file to open the project. Build the whole solution
to create the SUNDIALS libraries. To use the SUNDIALS libraries in your own
projects, you must set the include directories for your project, add the
SUNDIALS libraries to your project solution, and set the SUNDIALS libraries as
dependencies for your project.
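For CMake-based projects, these steps can be sketched with the SUNDIALS CMake package configuration files (see section §1.1.6.1). The target name SUNDIALS::cvode below is one example of the exported targets; replace it with the packages your project actually uses:

```cmake
# Minimal sketch of consuming an installed SUNDIALS from your own CMake project.
# The package must be discoverable, e.g. by configuring with
#   -D SUNDIALS_DIR=INSTALL_DIR/lib/cmake/sundials
find_package(SUNDIALS REQUIRED)

add_executable(myapp main.c)

# Linking an imported SUNDIALS target also supplies the include directories
target_link_libraries(myapp PRIVATE SUNDIALS::cvode)
```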
1.1.2.3. HPC Clusters
This section is a guide for installing SUNDIALS on specific HPC clusters. In general, the procedure is the same as described previously in §1.1.2.1 for Unix/Linux machines. The main differences are in the modules and environment variables that are specific to different HPC clusters. We aim to keep this section as up to date as possible, but it may lag the latest software updates to each cluster.
1.1.2.3.1. Frontier
Frontier is an Exascale supercomputer at the Oak Ridge Leadership Computing Facility. If you are new to this system, then we recommend that you review the Frontier user guide.
A Standard Installation
Load the modules and set the environment variables needed to build SUNDIALS. This configuration enables both MPI and HIP support for distributed and GPU parallelism. It uses the HIP compiler for C and C++ and the Cray Fortran compiler. Other configurations are possible.
# required dependencies
module load PrgEnv-cray-amd/8.5.0
module load craype-accel-amd-gfx90a
module load rocm/5.3.0
module load cmake/3.23.2
# GPU-aware MPI
export MPICH_GPU_SUPPORT_ENABLED=1
# compiler environment hints
export CC=$(which hipcc)
export CXX=$(which hipcc)
export FC=$(which ftn)
export CFLAGS="-I${ROCM_PATH}/include"
export CXXFLAGS="-I${ROCM_PATH}/include -Wno-pass-failed"
export LDFLAGS="-L${ROCM_PATH}/lib -lamdhip64 ${PE_MPICH_GTL_DIR_amd_gfx90a} -lmpi_gtl_hsa"
Now we can build SUNDIALS. In general, this is the same procedure described in
the previous sections. The following command builds and installs SUNDIALS with
MPI, HIP, and the Fortran interface enabled, where <account>
is your
allocation account on Frontier:
cmake \
-S SOLVER_DIR \
-B BUILD_DIR \
-D CMAKE_INSTALL_PREFIX=INSTALL_DIR \
-D AMDGPU_TARGETS=gfx90a \
-D ENABLE_HIP=ON \
-D ENABLE_MPI=ON \
-D BUILD_FORTRAN_MODULE_INTERFACE=ON
cd BUILD_DIR
make -j8 install
# Need an allocation to run the tests:
salloc -A <account> -t 10 -N 1 -p batch
make test
make test_install_all
1.1.3. Configuration options
All available SUNDIALS CMake options are described in the sections below. The default values for some options (e.g., compiler flags and installation paths) are for a Linux system and are provided as illustration only.
Note
When using a CMake graphical interface (ccmake or cmake-gui), multiple
configuration passes are performed before generating the build files. For
options where the default value depends on the value of another option, the
initial value is set on the first configuration pass and is not updated
automatically if the related option value is changed in subsequent
passes. For example, the default value of EXAMPLES_INSTALL_PATH is CMAKE_INSTALL_PREFIX/examples; if the value of CMAKE_INSTALL_PREFIX is updated, then EXAMPLES_INSTALL_PATH will also need to be updated as its value was set using the CMAKE_INSTALL_PREFIX default.
1.1.3.1. Build Type
The build type determines the level of compiler optimization, whether debug information is included, and whether additional error checking code is generated. The provided build types are:
Debug – no optimization flags, debugging information included, additional error checking enabled
Release – high optimization flags, no debugging information, no additional error checks
RelWithDebInfo – high optimization flags, debugging information included, no additional error checks
MinSizeRel – minimize size flags, no debugging information, no additional error checks
Each build type has a corresponding option for the set of compiler flags that are appended to the user-specified compiler flags. See section §1.1.3.2 for more information.
- CMAKE_BUILD_TYPE
Choose the type of build for single-configuration generators (e.g., Makefiles or Ninja).
Default: RelWithDebInfo
- CMAKE_CONFIGURATION_TYPES
Specifies the build types for multi-config generators (e.g., Visual Studio, Xcode, or Ninja Multi-Config) as a semicolon-separated list.
Default: Debug, Release, RelWithDebInfo, and MinSizeRel
1.1.3.2. Compilers and Compiler Flags
Building SUNDIALS requires a C compiler that supports at least a subset of the C99 standard (specifically those features implemented by Visual Studio 2015).
Additional SUNDIALS features that interface with external C++ libraries or GPU programming models require a C++ compiler (e.g., CUDA, HIP, SYCL, Ginkgo, Trilinos, etc.). The C++ standard required depends on the particular library or programming model used and is noted with the relevant options below. The C++ convenience classes provided by SUNDIALS require C++14 or newer. C++ applications that require an earlier C++ standard should use the SUNDIALS C API.
When enabling the SUNDIALS Fortran interfaces, a Fortran compiler that supports
the Fortran 2003 or newer standard is required in order to utilize the
ISO_C_BINDING
module.
1.1.3.2.1. C Compiler
- CMAKE_C_COMPILER
The full path to the C compiler
Default: CMake will attempt to automatically locate a C compiler on the system (e.g., from the
CC
environment variable or common installation paths).
- CMAKE_C_FLAGS
User-specified flags for the C compiler. The value of this option should be a string with flags separated by spaces.
Default: Initialized by the
CFLAGS
environment variable.
- CMAKE_C_FLAGS_DEBUG
C compiler flags appended when the CMAKE_BUILD_TYPE is Debug
Default: -g
- CMAKE_C_FLAGS_RELEASE
C compiler flags appended when the CMAKE_BUILD_TYPE is Release
Default: -O3 -DNDEBUG
- CMAKE_C_FLAGS_RELWITHDEBINFO
C compiler flags appended when the CMAKE_BUILD_TYPE is RelWithDebInfo
Default: -O2 -g -DNDEBUG
- CMAKE_C_FLAGS_MINSIZEREL
C compiler flags appended when the CMAKE_BUILD_TYPE is MinSizeRel
Default: -Os -DNDEBUG
- CMAKE_C_STANDARD
The C standard used when building SUNDIALS C source files.
Default: 99
Options: 99, 11, or 17
- CMAKE_C_EXTENSIONS
Enable compiler-specific C extensions.
Default: ON
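For example, the compiler and flag options above can be set at configuration time; the compiler path and warning flags below are illustrative:

```shell
cmake \
  -S SOLVER_DIR \
  -B BUILD_DIR \
  -D CMAKE_INSTALL_PREFIX=INSTALL_DIR \
  -D CMAKE_C_COMPILER=$(which gcc) \
  -D CMAKE_C_FLAGS="-Wall -Wextra" \
  -D CMAKE_BUILD_TYPE=Release
```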
1.1.3.2.2. C++ Compiler
- CMAKE_CXX_COMPILER
The full path to the C++ compiler
Default: CMake will attempt to automatically locate a C++ compiler on the system (e.g., from the
CXX
environment variable or common installation paths).
- CMAKE_CXX_FLAGS
User-specified flags for the C++ compiler. The value of this option should be a string with flags separated by spaces.
Default: Initialized by the
CXXFLAGS
environment variable.
- CMAKE_CXX_FLAGS_DEBUG
C++ compiler flags appended when the CMAKE_BUILD_TYPE is Debug
Default: -g
- CMAKE_CXX_FLAGS_RELEASE
C++ compiler flags appended when the CMAKE_BUILD_TYPE is Release
Default: -O3 -DNDEBUG
- CMAKE_CXX_FLAGS_RELWITHDEBINFO
C++ compiler flags appended when the CMAKE_BUILD_TYPE is RelWithDebInfo
Default: -O2 -g -DNDEBUG
- CMAKE_CXX_FLAGS_MINSIZEREL
C++ compiler flags appended when the CMAKE_BUILD_TYPE is MinSizeRel
Default: -Os -DNDEBUG
- CMAKE_CXX_STANDARD
The C++ standard used when building SUNDIALS C++ source files.
Default: 14
Options: 14, 17, or 20
- CMAKE_CXX_EXTENSIONS
Enable compiler-specific C++ extensions.
Default: ON
1.1.3.2.3. Fortran Compiler
- CMAKE_Fortran_COMPILER
The full path to the Fortran compiler
Default: CMake will attempt to automatically locate a Fortran compiler on the system (e.g., from the
FC
environment variable or common installation paths).
- CMAKE_Fortran_FLAGS
User-specified flags for the Fortran compiler. The value of this option should be a string with flags separated by spaces.
Default: Initialized by the
FFLAGS
environment variable.
- CMAKE_Fortran_FLAGS_DEBUG
Fortran compiler flags appended when the CMAKE_BUILD_TYPE is Debug
Default: -g
- CMAKE_Fortran_FLAGS_RELEASE
Fortran compiler flags appended when the CMAKE_BUILD_TYPE is Release
Default: -O3
- CMAKE_Fortran_FLAGS_RELWITHDEBINFO
Fortran compiler flags appended when the CMAKE_BUILD_TYPE is RelWithDebInfo
Default: -O2 -g
- CMAKE_Fortran_FLAGS_MINSIZEREL
Fortran compiler flags appended when the CMAKE_BUILD_TYPE is MinSizeRel
Default: -Os
1.1.3.3. Install Location
Use the following options to set where the SUNDIALS headers, library, and CMake configuration files will be installed.
- CMAKE_INSTALL_PREFIX
Install path prefix (INSTALL_DIR), prepended onto install directories
Default: /usr/local
Note
The user must have write access to the location specified through this option. Exported SUNDIALS header files and libraries will be installed under the subdirectories include and CMAKE_INSTALL_LIBDIR of CMAKE_INSTALL_PREFIX, respectively.
- CMAKE_INSTALL_LIBDIR
The directory under CMAKE_INSTALL_PREFIX where libraries will be installed
Default: Set based on the system as lib, lib64, or lib/<multiarch-tuple>
- SUNDIALS_INSTALL_CMAKEDIR
The directory under CMAKE_INSTALL_PREFIX where the SUNDIALS CMake package configuration files will be installed (see section §1.1.6.1 for more information)
Default: CMAKE_INSTALL_LIBDIR/cmake/sundials
1.1.3.5. Index Size
- SUNDIALS_INDEX_SIZE
The integer size (in bits) used for indices in SUNDIALS (e.g., for vector and matrix entries), options are: 32 or 64
Default: 64
Note
The build system tries to find an integer type of the appropriate size. Candidate 64-bit integer types are (in order of preference): int64_t, __int64, long long, and long. Candidate 32-bit integers are (in order of preference): int32_t, int, and long. The advanced option SUNDIALS_INDEX_TYPE can be used to provide a type not listed here.
- SUNDIALS_INDEX_TYPE
The integer type used for SUNDIALS indices. The type size must match the size provided in the SUNDIALS_INDEX_SIZE option.
Default: Automatically determined based on SUNDIALS_INDEX_SIZE
Changed in version 3.2.0: In prior versions, this option could be set to INT64_T to use 64-bit integers or INT32_T to use 32-bit integers. These special values are deprecated; in most cases users only need to set the SUNDIALS_INDEX_SIZE option.
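For example, a build using 32-bit indices could be configured as:

```shell
cmake \
  -S SOLVER_DIR \
  -B BUILD_DIR \
  -D CMAKE_INSTALL_PREFIX=INSTALL_DIR \
  -D SUNDIALS_INDEX_SIZE=32
```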
1.1.3.6. Precision
- SUNDIALS_PRECISION
The floating-point precision used in SUNDIALS packages and class implementations, options are: single, double, or extended
Default: double
1.1.3.7. Math Library
- SUNDIALS_MATH_LIBRARY
The standard C math library (e.g., libm) to link with
Default: -lm on Unix systems, none otherwise
1.1.3.8. SUNDIALS Packages
The following options can be used to enable/disable particular SUNDIALS packages.
- BUILD_ARKODE
Build the ARKODE library
Default: ON
- BUILD_CVODE
Build the CVODE library
Default: ON
- BUILD_CVODES
Build the CVODES library
Default: ON
- BUILD_IDA
Build the IDA library
Default: ON
- BUILD_IDAS
Build the IDAS library
Default: ON
- BUILD_KINSOL
Build the KINSOL library
Default: ON
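For example, the following configuration builds only CVODE and ARKODE by disabling the remaining packages:

```shell
cmake \
  -S SOLVER_DIR \
  -B BUILD_DIR \
  -D CMAKE_INSTALL_PREFIX=INSTALL_DIR \
  -D BUILD_CVODES=OFF \
  -D BUILD_IDA=OFF \
  -D BUILD_IDAS=OFF \
  -D BUILD_KINSOL=OFF
```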
1.1.3.9. Example Programs
- EXAMPLES_ENABLE_C
Build the SUNDIALS C examples
Default: ON
- EXAMPLES_ENABLE_CXX
Build the SUNDIALS C++ examples
Default: OFF
- EXAMPLES_ENABLE_CUDA
Build the SUNDIALS CUDA examples
Default: ON when ENABLE_CUDA is ON, otherwise OFF
- EXAMPLES_ENABLE_F2003
Build the SUNDIALS Fortran 2003 examples
Default: ON when BUILD_FORTRAN_MODULE_INTERFACE is ON, otherwise OFF
- EXAMPLES_INSTALL
Install example program source files and sample output files. See EXAMPLES_INSTALL_PATH for the install location.
A CMakeLists.txt file to build the examples will be automatically generated and installed with the source files. If building on a Unix-like system, a Makefile for compiling the installed example programs will also be generated and installed.
Default: ON
- EXAMPLES_INSTALL_PATH
Full path to where example source and output files will be installed
Default: CMAKE_INSTALL_PREFIX/examples
1.1.3.10. Fortran Interfaces
- BUILD_FORTRAN_MODULE_INTERFACE
Build the SUNDIALS Fortran 2003 interface
Default: OFF
Note
The Fortran interfaces are only compatible with double precision (i.e., SUNDIALS_PRECISION must be double).
Warning
There is a known issue with MSYS/gfortran and SUNDIALS shared libraries that causes linking the Fortran interfaces to fail when building SUNDIALS. For now, the workaround is to build only static libraries when using MSYS with gfortran on Windows.
1.1.3.11. Error Checking
For more information on error handling in SUNDIALS, see Error Checking.
- SUNDIALS_ENABLE_ERROR_CHECKS
Build SUNDIALS with more extensive checks for unrecoverable errors.
Default: ON when CMAKE_BUILD_TYPE is Debug, otherwise OFF
Warning
Error checks will impact performance, but can be helpful for debugging.
1.1.3.12. Logging
For more information on logging in SUNDIALS, see Status and Error Logging.
- SUNDIALS_LOGGING_LEVEL
The maximum logging level. The options are:
0 – no logging
1 – log errors
2 – log errors + warnings
3 – log errors + warnings + informational output
4 – log errors + warnings + informational output + debug output
5 – log all of the above and even more (e.g., vector-valued variables may be logged)
Default: 2
Warning
Logging will impact performance, but can be helpful for debugging or understanding algorithm performance. The higher the logging level, the more output that may be logged, and the more performance may degrade.
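For example, a debug build with informational and debug output enabled could be configured as:

```shell
cmake \
  -S SOLVER_DIR \
  -B BUILD_DIR \
  -D CMAKE_INSTALL_PREFIX=INSTALL_DIR \
  -D CMAKE_BUILD_TYPE=Debug \
  -D SUNDIALS_LOGGING_LEVEL=4
```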
1.1.3.13. Monitoring
- SUNDIALS_BUILD_WITH_MONITORING
Build SUNDIALS with capabilities for fine-grained monitoring of solver progress and statistics. This is primarily useful for debugging.
Default: OFF
Warning
Building with monitoring may result in minor performance degradation even if monitoring is not utilized.
1.1.3.14. Profiling
For more information on profiling in SUNDIALS, see Performance Profiling.
- SUNDIALS_BUILD_WITH_PROFILING
Build SUNDIALS with capabilities for fine-grained profiling. This requires POSIX timers, the Windows profileapi.h timers, or enabling Caliper with ENABLE_CALIPER.
Default: OFF
Warning
Profiling will impact performance, and should be enabled judiciously.
1.1.3.15. Building with Adiak
Adiak is a library for recording meta-data about HPC simulations. Adiak is developed by Lawrence Livermore National Laboratory and can be obtained from the Adiak GitHub repository.
- ENABLE_ADIAK
Enable Adiak support
Default: OFF
- adiak_DIR
Path to the root of an Adiak installation
Default: None
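As a configuration sketch, Adiak support can be enabled by setting ENABLE_ADIAK and pointing adiak_DIR at the installation root (the path below is illustrative):

```shell
cmake \
  -S SOLVER_DIR \
  -B BUILD_DIR \
  -D CMAKE_INSTALL_PREFIX=INSTALL_DIR \
  -D ENABLE_ADIAK=ON \
  -D adiak_DIR=/path/to/adiak/installation
```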
1.1.3.16. Building with Caliper
Caliper is a performance analysis library providing a code instrumentation and performance measurement framework for HPC applications. Caliper is developed by Lawrence Livermore National Laboratory and can be obtained from the Caliper GitHub repository.
When profiling and Caliper are both enabled, SUNDIALS will utilize Caliper for performance profiling.
To enable Caliper support, set ENABLE_CALIPER to ON and set CALIPER_DIR to the root path of the Caliper installation. For example, the following command will configure SUNDIALS with profiling and Caliper support:
cmake \
-S SOLVER_DIR \
-B BUILD_DIR \
-D CMAKE_INSTALL_PREFIX=INSTALL_DIR \
-D SUNDIALS_BUILD_WITH_PROFILING=ON \
-D ENABLE_CALIPER=ON \
-D CALIPER_DIR=/path/to/caliper/installation
- ENABLE_CALIPER
Enable Caliper support
Default: OFF
Note
Using Caliper requires setting SUNDIALS_BUILD_WITH_PROFILING to ON.
- CALIPER_DIR
Path to the root of a Caliper installation
Default: None
1.1.3.17. Building with CUDA
The NVIDIA CUDA Toolkit provides a development environment for GPU-accelerated computing with NVIDIA GPUs. The CUDA Toolkit and compatible NVIDIA drivers are available from the NVIDIA developer website. SUNDIALS has been tested with the CUDA toolkit versions 10, 11, and 12.
When CUDA support is enabled, the CUDA NVector, the cuSPARSE SUNMatrix, and the cuSPARSE batched QR SUNLinearSolver will be built (see sections §1.1.7.3.11, §1.1.7.4.2, and §1.1.7.5.2, respectively, for the corresponding header files and libraries). For more information on using SUNDIALS with GPUs, see Features for GPU Accelerated Computing.
To enable CUDA support, set ENABLE_CUDA to ON. If CUDA is installed in a nonstandard location, you may need to set CUDA_TOOLKIT_ROOT_DIR to your CUDA Toolkit installation path. You will also need to set CMAKE_CUDA_ARCHITECTURES to the CUDA architecture for your system. For example, the following command will configure SUNDIALS with CUDA support for a system with an Ampere GPU:
cmake \
-S SOLVER_DIR \
-B BUILD_DIR \
-D CMAKE_INSTALL_PREFIX=INSTALL_DIR \
-D ENABLE_CUDA=ON \
-D CMAKE_CUDA_ARCHITECTURES="80"
- ENABLE_CUDA
Enable CUDA support
Default: OFF
- CUDA_TOOLKIT_ROOT_DIR
Path to the CUDA Toolkit installation
Default: CMake will attempt to automatically locate an installed CUDA Toolkit
- CMAKE_CUDA_ARCHITECTURES
Specifies the CUDA architecture to compile for, i.e., 60 for Pascal, 70 for Volta, 80 for Ampere, 90 for Hopper, etc. See the GPU compute capability tables on the NVIDIA webpage and the GPU Compilation section of the CUDA documentation for more information.
Default: Determined automatically by CMake. Users are encouraged to override this value with the architecture for their system as the default varies across compilers and compiler versions.
Changed in version 7.2.0: In prior versions, CMAKE_CUDA_ARCHITECTURES defaulted to 70.
1.1.3.18. Building with Ginkgo
Ginkgo is a high-performance linear algebra library with a focus on solving sparse linear systems. It is implemented using modern C++ (you will need at least a C++14 compliant compiler to build it), with GPU kernels implemented in CUDA (for NVIDIA devices), HIP (for AMD devices), and SYCL/DPC++ (for Intel devices and other supported hardware). Ginkgo can be obtained from the Ginkgo GitHub repository. SUNDIALS is regularly tested with the latest versions of Ginkgo, specifically up to version 1.8.0.
When Ginkgo support is enabled, the Ginkgo SUNMatrix and the Ginkgo SUNLinearSolver header files will be installed (see sections §1.1.7.4.4 and §1.1.7.5.4, respectively, for the corresponding header files). For more information on using SUNDIALS with GPUs, see Features for GPU Accelerated Computing.
To enable Ginkgo support, set ENABLE_GINKGO
to ON
and set
Ginkgo_DIR
to the root path of the Ginkgo installation. Additionally,
set SUNDIALS_GINKGO_BACKENDS
to a semicolon-separated list of Ginkgo
target architectures/executors. For example, the following command will
configure SUNDIALS with Ginkgo support using the reference, OpenMP, and CUDA
(targeting Ampere GPUs) backends:
cmake \
-S SOLVER_DIR \
-B BUILD_DIR \
-D CMAKE_INSTALL_PREFIX=INSTALL_DIR \
-D ENABLE_GINKGO=ON \
-D Ginkgo_DIR=/path/to/ginkgo/installation \
-D SUNDIALS_GINKGO_BACKENDS="REF;OMP;CUDA" \
-D ENABLE_CUDA=ON \
-D CMAKE_CUDA_ARCHITECTURES="80" \
-D ENABLE_OPENMP=ON
Note
The SUNDIALS interfaces to Ginkgo are not compatible with extended precision
(i.e., when SUNDIALS_PRECISION
is set to extended
).
- ENABLE_GINKGO
Enable Ginkgo support
Default: OFF
- Ginkgo_DIR
Path to the Ginkgo installation
Default: None
- SUNDIALS_GINKGO_BACKENDS
Semicolon-separated list of Ginkgo target architectures/executors to build for. Options currently supported are REF (the Ginkgo reference executor), OMP (OpenMP), CUDA, HIP, and SYCL.
Default: "REF;OMP"
Changed in version 7.1.0: The DPCPP option was changed to SYCL to align with Ginkgo’s naming convention.
1.1.3.19. Building with HIP
The Heterogeneous-compute Interface for Portability (HIP) allows developers to create portable applications for AMD and NVIDIA GPUs. HIP can be obtained from the HIP GitHub repository. SUNDIALS has been tested with HIP versions from 5.0.0 to 5.4.3.
When HIP support is enabled, the HIP NVector will be built (see section §1.1.7.3.12 for the corresponding header file and library). For more information on using SUNDIALS with GPUs, see Features for GPU Accelerated Computing.
To enable HIP support, set ENABLE_HIP
to ON
and set
AMDGPU_TARGETS
to the desired target (e.g., gfx705
). In addition,
set CMAKE_C_COMPILER
and CMAKE_CXX_COMPILER
to a HIP
compatible compiler e.g., hipcc
. For example, the following command will
configure SUNDIALS with HIP support for a system with an MI250X GPU:
cmake \
-S SOLVER_DIR \
-B BUILD_DIR \
-D CMAKE_INSTALL_PREFIX=INSTALL_DIR \
-D CMAKE_C_COMPILER=hipcc \
-D CMAKE_CXX_COMPILER=hipcc \
-D ENABLE_HIP=ON \
-D AMDGPU_TARGETS="gfx90a"
- ENABLE_HIP
Enable HIP support
Default: OFF
- AMDGPU_TARGETS
Specify which AMD GPUs to target
Default: None
1.1.3.20. Building with hypre
hypre is a library of high performance preconditioners and solvers featuring multigrid methods for the solution of large, sparse linear systems of equations on massively parallel computers. The library is developed by Lawrence Livermore National Laboratory and is available from the hypre GitHub repository. SUNDIALS is regularly tested with the latest versions of hypre, specifically up to version 2.26.0.
When hypre support is enabled, the ParHyp NVector will be built (see section §1.1.7.3.9 for the corresponding header file and library).
To enable hypre support, set ENABLE_MPI
to ON
, set
ENABLE_HYPRE
to ON
, and set HYPRE_DIR
to the root path
of the hypre installation. For example, the following command will configure
SUNDIALS with hypre support:
cmake \
-S SOLVER_DIR \
-B BUILD_DIR \
-D CMAKE_INSTALL_PREFIX=INSTALL_DIR \
-D ENABLE_MPI=ON \
-D ENABLE_HYPRE=ON \
-D HYPRE_DIR=/path/to/hypre/installation
Note
SUNDIALS must be configured so that SUNDIALS_INDEX_SIZE
is
compatible with HYPRE_BigInt
in the hypre installation.
- ENABLE_HYPRE
Enable hypre support
Default: OFF
- HYPRE_DIR
Path to the hypre installation
Default: none
1.1.3.21. Building with KLU
KLU is a software package for the direct solution of sparse nonsymmetric linear systems of equations that arise in circuit simulation and is part of SuiteSparse, a suite of sparse matrix software. The library is developed by Texas A&M University and is available from the SuiteSparse GitHub repository. SUNDIALS is regularly tested with the latest versions of KLU, specifically up to SuiteSparse version 7.7.0.
When KLU support is enabled, the KLU SUNLinearSolver will be built (see section §1.1.7.5.5 for the corresponding header file and library).
To enable KLU support, set ENABLE_KLU
to ON
. For SuiteSparse
7.4.0 and newer, set KLU_ROOT
to the root of the SuiteSparse
installation. Alternatively, set KLU_INCLUDE_DIR
and
KLU_LIBRARY_DIR
to the path to the header and library files,
respectively, of the SuiteSparse installation. For example, the
following command will configure SUNDIALS with KLU support:
cmake \
-S SOLVER_DIR \
-B BUILD_DIR \
-D CMAKE_INSTALL_PREFIX=INSTALL_DIR \
-D ENABLE_KLU=ON \
-D KLU_ROOT=/path/to/suitesparse/installation
- ENABLE_KLU
Enable KLU support
Default: OFF
- KLU_ROOT
Path to the SuiteSparse installation
Default: OFF
- KLU_INCLUDE_DIR
Path to SuiteSparse header files
Default: none
- KLU_LIBRARY_DIR
Path to SuiteSparse installed library files
Default: none
1.1.3.22. Building with Kokkos
Kokkos is a modern C++ (requires at least C++14) programming model for writing performance portable code for multicore CPU and GPU-based systems including NVIDIA, AMD, and Intel GPUs. Kokkos is developed by Sandia National Laboratory and can be obtained from the Kokkos GitHub repository. The minimum supported version of Kokkos is 3.7.00. SUNDIALS is regularly tested with the latest versions of Kokkos, specifically up to version 4.3.01.
When Kokkos support is enabled, the Kokkos NVector header file will be installed (see section §1.1.7.3.16 for the corresponding header file). For more information on using SUNDIALS with GPUs, see Features for GPU Accelerated Computing.
To enable Kokkos support, set ENABLE_KOKKOS to ON and set Kokkos_DIR to the root path of the Kokkos installation. For example, the following command will configure SUNDIALS with Kokkos support:
cmake \
-S SOLVER_DIR \
-B BUILD_DIR \
-D CMAKE_INSTALL_PREFIX=INSTALL_DIR \
-D ENABLE_KOKKOS=ON \
-D Kokkos_DIR=/path/to/kokkos/installation
- ENABLE_KOKKOS
Enable Kokkos support
Default: OFF
- Kokkos_DIR
Path to the Kokkos installation.
Default: None
1.1.3.23. Building with KokkosKernels
The KokkosKernels library is built on Kokkos and provides common linear algebra computational kernels. KokkosKernels is developed by Sandia National Laboratory and can be obtained from the KokkosKernels GitHub repository. The minimum supported version of KokkosKernels is 3.7.00. SUNDIALS is regularly tested with the latest versions of KokkosKernels, specifically up to version 4.3.01.
When KokkosKernels support is enabled, the KokkosKernels SUNMatrix and KokkosKernels SUNLinearSolver header files will be installed (see sections §1.1.7.4.5 and §1.1.7.5.6, respectively, for the corresponding header files). For more information on using SUNDIALS with GPUs, see Features for GPU Accelerated Computing.
To enable KokkosKernels support, set ENABLE_KOKKOS
and
ENABLE_KOKKOS_KERNELS
to ON
and set Kokkos_DIR
and
KokkosKernels_DIR
to the root paths for the Kokkos and KokkosKernels
installations, respectively. For example, the following command will configure
SUNDIALS with Kokkos and KokkosKernels support:
cmake \
-S SOLVER_DIR \
-B BUILD_DIR \
-D CMAKE_INSTALL_PREFIX=INSTALL_DIR \
-D ENABLE_KOKKOS=ON \
-D Kokkos_DIR=/path/to/kokkos/installation \
-D ENABLE_KOKKOS_KERNELS=ON \
-D KokkosKernels_DIR=/path/to/kokkoskernels/installation
- ENABLE_KOKKOS_KERNELS
Enable KokkosKernels support
Default: OFF
- KokkosKernels_DIR
Path to the KokkosKernels installation.
Default: None
1.1.3.24. Building with LAPACK
The Linear Algebra PACKage (LAPACK) library interface defines functions for solving systems of linear equations. Several LAPACK implementations are available e.g., the Netlib reference implementation, the Intel oneAPI Math Kernel Library, or OpenBLAS (among others). SUNDIALS is regularly tested with the latest versions of OpenBLAS, specifically up to version 0.3.27.
When LAPACK support is enabled, the LAPACK banded SUNLinearSolver and LAPACK dense SUNLinearSolver will be built (see sections §1.1.7.5.7 and §1.1.7.5.8, respectively, for the corresponding header files and libraries).
To enable LAPACK support, set ENABLE_LAPACK
to ON
. CMake will
attempt to find BLAS and LAPACK installations on the system and set the
variables BLAS_LIBRARIES
, BLAS_LINKER_FLAGS
,
LAPACK_LIBRARIES
, and LAPACK_LINKER_FLAGS
. To explicitly
specify the LAPACK library to build with, manually set the aforementioned
variables to the desired values when configuring the build. For example, the
following command will configure SUNDIALS with LAPACK support:
cmake \
-S SOLVER_DIR \
-B BUILD_DIR \
-D CMAKE_INSTALL_PREFIX=INSTALL_DIR \
-D ENABLE_LAPACK=ON \
-D BLAS_LIBRARIES=/path/to/lapack/installation/lib/libblas.so \
-D LAPACK_LIBRARIES=/path/to/lapack/installation/lib/liblapack.so
Note
If a working Fortran compiler is not available to infer the name-mangling
scheme for LAPACK functions, the options SUNDIALS_LAPACK_CASE
and
SUNDIALS_LAPACK_UNDERSCORES
must be set to bypass the check for
a Fortran compiler and define the name-mangling scheme. The defaults for
these options in earlier versions of SUNDIALS were lower and one, respectively.
- ENABLE_LAPACK
Enable LAPACK support
Default: OFF
- BLAS_LIBRARIES
BLAS libraries
Default: none (CMake will try to find a BLAS installation)
- BLAS_LINKER_FLAGS
BLAS required linker flags
Default: none (CMake will try to determine the necessary flags)
- LAPACK_LIBRARIES
LAPACK libraries
Default: none (CMake will try to find a LAPACK installation)
- LAPACK_LINKER_FLAGS
LAPACK required linker flags
Default: none (CMake will try to determine the necessary flags)
- SUNDIALS_LAPACK_CASE
Specify the case to use in the Fortran name-mangling scheme, options are: lower or upper
Default:
Note
The build system will attempt to infer the Fortran name-mangling scheme using the Fortran compiler. This option should only be used if a Fortran compiler is not available or to override the inferred or default (lower) scheme if one can not be determined. If used, SUNDIALS_LAPACK_UNDERSCORES must also be set.
- SUNDIALS_LAPACK_UNDERSCORES
Specify the number of underscores to append in the Fortran name-mangling scheme, options are: none, one, or two
Default:
Note
The build system will attempt to infer the Fortran name-mangling scheme using the Fortran compiler. This option should only be used if a Fortran compiler is not available or to override the inferred or default (one) scheme if one can not be determined. If used, SUNDIALS_LAPACK_CASE must also be set.
1.1.3.25. Building with MAGMA
The Matrix Algebra on GPU and Multicore Architectures (MAGMA) project provides a dense linear algebra library similar to LAPACK but targeting heterogeneous architectures. The library is developed by the University of Tennessee and is available from the MAGMA GitHub repository. SUNDIALS is regularly tested with the latest versions of MAGMA, specifically up to version 2.8.0.
When MAGMA support is enabled, the MAGMA dense SUNMatrix and MAGMA dense SUNLinearSolver will be built (see sections §1.1.7.4.6 and §1.1.7.5.9, respectively, for the corresponding header files and libraries). For more information on using SUNDIALS with GPUs, see Features for GPU Accelerated Computing.
To enable MAGMA support, set ENABLE_MAGMA to ON, MAGMA_DIR to the root path of the MAGMA installation, and SUNDIALS_MAGMA_BACKENDS to the desired MAGMA backend. For example, the following command will configure SUNDIALS with MAGMA support with the CUDA backend (targeting Ampere GPUs):
cmake \
-S SOLVER_DIR \
-B BUILD_DIR \
-D CMAKE_INSTALL_PREFIX=INSTALL_DIR \
-D ENABLE_MAGMA=ON \
-D MAGMA_DIR=/path/to/magma/installation \
-D SUNDIALS_MAGMA_BACKENDS="CUDA" \
-D ENABLE_CUDA=ON \
-D CMAKE_CUDA_ARCHITECTURES="80"
- ENABLE_MAGMA
Enable MAGMA support
Default:
OFF
- MAGMA_DIR
Path to the MAGMA installation
Default: none
- SUNDIALS_MAGMA_BACKENDS
Which MAGMA backend to use under the SUNDIALS MAGMA interface: CUDA or HIP
Default: CUDA
1.1.3.26. Building with MPI
The Message Passing Interface (MPI) is a standard for communication on parallel computing systems. Several MPI implementations are available, e.g., OpenMPI, MPICH, MVAPICH, Cray MPICH, Intel MPI, or IBM Spectrum MPI (among others). SUNDIALS is regularly tested with the latest versions of OpenMPI, specifically up to version 5.0.5.
When MPI support is enabled, the parallel NVector, MPI ManyVector NVector, and MPI+X NVector will be built (see sections §1.1.7.3.3, §1.1.7.3.4, and §1.1.7.3.5, respectively, for the corresponding header files and libraries).
Attention
Changed in version 7.0.0: When MPI is enabled, all SUNDIALS libraries will include MPI symbols and applications will need to include the path for MPI headers and link against the corresponding MPI library.
To enable MPI support, set ENABLE_MPI to ON. If CMake is unable to locate an MPI installation, set the relevant MPI_<language>_COMPILER options to the desired MPI compilers. For example, the following command will configure SUNDIALS with MPI support:
cmake \
-S SOLVER_DIR \
-B BUILD_DIR \
-D CMAKE_INSTALL_PREFIX=INSTALL_DIR \
-D ENABLE_MPI=ON
- ENABLE_MPI
Enable MPI support
Default:
OFF
- MPI_C_COMPILER
The MPI C compiler, e.g., mpicc
Default: CMake will attempt to locate an MPI C compiler
- MPI_CXX_COMPILER
The MPI C++ compiler, e.g., mpicxx
Default: CMake will attempt to locate an MPI C++ compiler
Note
This option is only needed if MPI is enabled (ENABLE_MPI is ON) and C++ examples are enabled (EXAMPLES_ENABLE_CXX is ON). All SUNDIALS solvers can be used from C++ MPI applications without setting any additional configuration options other than ENABLE_MPI.
- MPI_Fortran_COMPILER
The MPI Fortran compiler, e.g., mpif90
Default: CMake will attempt to locate an MPI Fortran compiler
Note
This option is only needed if MPI is enabled (ENABLE_MPI is ON) and the Fortran interfaces are enabled (BUILD_FORTRAN_MODULE_INTERFACE is ON).
- MPIEXEC_EXECUTABLE
Specify the executable for running MPI programs, e.g., mpiexec
Default: CMake will attempt to locate the MPI executable
- MPIEXEC_PREFLAGS
Specifies flags that come directly after MPIEXEC_EXECUTABLE and before MPIEXEC_NUMPROC_FLAG and MPIEXEC_MAX_NUMPROCS.
Default: none
- MPIEXEC_POSTFLAGS
Specifies flags that come after the executable to run but before any other program arguments.
Default: none
1.1.3.27. Building with oneMKL
The Intel oneAPI Math Kernel Library (oneMKL) includes CPU and SYCL/DPC++ interfaces for LAPACK dense linear algebra routines. The SUNDIALS oneMKL interface targets the SYCL/DPC++ routines; to utilize the CPU routines, see section §1.1.3.24. SUNDIALS has been tested with oneMKL version 2021.4.
When oneMKL support is enabled, the oneMKL dense SUNMatrix and the oneMKL dense SUNLinearSolver will be built (see sections §1.1.7.4.7 and §1.1.7.5.10, respectively, for the corresponding header files and libraries). For more information on using SUNDIALS with GPUs, see Features for GPU Accelerated Computing.
To enable the SUNDIALS oneMKL interface, set ENABLE_ONEMKL to ON and ONEMKL_DIR to the root path of the oneMKL installation. For example, the following command will configure SUNDIALS with oneMKL support:
cmake \
-S SOLVER_DIR \
-B BUILD_DIR \
-D CMAKE_INSTALL_PREFIX=INSTALL_DIR \
-D ENABLE_ONEMKL=ON \
-D ONEMKL_DIR=/path/to/onemkl/installation
- ENABLE_ONEMKL
Enable oneMKL support
Default:
OFF
- ONEMKL_DIR
Path to oneMKL installation.
Default: none
- SUNDIALS_ONEMKL_USE_GETRF_LOOP
This advanced debugging option replaces the batched LU factorization with a loop over each system in the batch and a non-batched LU factorization.
Default:
OFF
- SUNDIALS_ONEMKL_USE_GETRS_LOOP
This advanced debugging option replaces the batched LU solve with a loop over each system in the batch and a non-batched solve.
Default:
OFF
1.1.3.28. Building with OpenMP
The OpenMP API defines a directive-based approach for portable parallel programming across architectures.
When OpenMP support is enabled, the OpenMP NVector will be built (see section §1.1.7.3.6 for the corresponding header file and library).
To enable OpenMP support, set ENABLE_OPENMP to ON. For example, the following command will configure SUNDIALS with OpenMP support:
cmake \
-S SOLVER_DIR \
-B BUILD_DIR \
-D CMAKE_INSTALL_PREFIX=INSTALL_DIR \
-D ENABLE_OPENMP=ON
- ENABLE_OPENMP
Enable OpenMP support
Default:
OFF
1.1.3.29. Building with OpenMP Device Offloading
The OpenMP 4.0 specification added support for offloading computations to devices (i.e., GPUs). SUNDIALS requires OpenMP 4.5 for GPU offloading support.
When OpenMP offloading support is enabled, the OpenMPDEV NVector will be built (see section §1.1.7.3.7 for the corresponding header file and library).
To enable OpenMP device offloading support, set ENABLE_OPENMP_DEVICE to ON. For example, the following command will configure SUNDIALS with OpenMP device offloading support:
cmake \
-S SOLVER_DIR \
-B BUILD_DIR \
-D CMAKE_INSTALL_PREFIX=INSTALL_DIR \
-D ENABLE_OPENMP_DEVICE=ON
- ENABLE_OPENMP_DEVICE
Enable OpenMP device offloading support
Default:
OFF
1.1.3.30. Building with PETSc
The Portable, Extensible Toolkit for Scientific Computation (PETSc) is a suite of data structures and routines for simulating applications modeled by partial differential equations. The library is developed by Argonne National Laboratory and is available from the PETSc GitLab repository. SUNDIALS requires PETSc 3.5.0 or newer and is regularly tested with the latest versions of PETSc, specifically up to version 3.21.4.
When PETSc support is enabled, the PETSc NVector and PETSc SNES SUNNonlinearSolver will be built (see sections §1.1.7.3.10 and §1.1.7.6.3, respectively, for the corresponding header files and libraries).
To enable PETSc support, set ENABLE_MPI to ON, set ENABLE_PETSC to ON, and set PETSC_DIR to the path of the PETSc installation. Alternatively, a user can provide a list of include paths in PETSC_INCLUDES and a list of complete paths to the PETSc libraries in PETSC_LIBRARIES. For example, the following command will configure SUNDIALS with PETSc support:
cmake \
-S SOLVER_DIR \
-B BUILD_DIR \
-D CMAKE_INSTALL_PREFIX=INSTALL_DIR \
-D ENABLE_MPI=ON \
-D ENABLE_PETSC=ON \
-D PETSC_DIR=/path/to/petsc/installation
- ENABLE_PETSC
Enable PETSc support
Default:
OFF
- PETSC_DIR
Path to PETSc installation
Default: none
1.1.3.31. Building with PThreads
POSIX Threads (PThreads) is an API for shared memory programming defined by the Institute of Electrical and Electronics Engineers (IEEE) standard POSIX.1c.
When PThreads support is enabled, the PThreads NVector will be built (see section §1.1.7.3.8 for the corresponding header file and library).
To enable PThreads support, set ENABLE_PTHREAD to ON. For example, the following command will configure SUNDIALS with PThreads support:
cmake \
-S SOLVER_DIR \
-B BUILD_DIR \
-D CMAKE_INSTALL_PREFIX=INSTALL_DIR \
-D ENABLE_PTHREAD=ON
- ENABLE_PTHREAD
Enable PThreads support
Default:
OFF
1.1.3.32. Building with RAJA
RAJA is a performance portability layer developed by Lawrence Livermore National Laboratory and can be obtained from the RAJA GitHub repository. SUNDIALS is regularly tested with the latest versions of RAJA, specifically up to version 2024.02.2.
When RAJA support is enabled, the RAJA NVector will be built (see section §1.1.7.3.13 for the corresponding header files and libraries).
To enable RAJA support, set ENABLE_RAJA to ON, set RAJA_DIR to the path of the RAJA installation, set SUNDIALS_RAJA_BACKENDS to the desired backend (CUDA, HIP, or SYCL), and set ENABLE_CUDA, ENABLE_HIP, or ENABLE_SYCL to ON depending on the selected backend. For example, the following command will configure SUNDIALS with RAJA support using the CUDA backend (targeting Ampere GPUs):
cmake \
-S SOLVER_DIR \
-B BUILD_DIR \
-D CMAKE_INSTALL_PREFIX=INSTALL_DIR \
-D ENABLE_RAJA=ON \
-D RAJA_DIR=/path/to/raja/installation \
-D SUNDIALS_RAJA_BACKENDS="CUDA" \
-D ENABLE_CUDA=ON \
-D CMAKE_CUDA_ARCHITECTURES="80"
- ENABLE_RAJA
Enable RAJA support
Default:
OFF
- RAJA_DIR
Path to the RAJA installation
Default: none
- SUNDIALS_RAJA_BACKENDS
If building SUNDIALS with RAJA support, this sets the RAJA backend to target. Values supported are CUDA, HIP, or SYCL.
Default: CUDA
1.1.3.33. Building with SuperLU_DIST
SuperLU_DIST is a general purpose library for the direct solution of large, sparse, nonsymmetric systems of linear equations in a distributed memory setting. The library is developed by Lawrence Berkeley National Laboratory and is available from the SuperLU_DIST GitHub repository. SuperLU_DIST version 7.0.0 or newer is required. SUNDIALS is regularly tested with the latest versions of SuperLU_DIST, specifically up to version 8.2.1.
When SuperLU_DIST support is enabled, the SuperLU_DIST (SLUNRloc) SUNMatrix and SuperLU_DIST SUNLinearSolver will be built (see sections §1.1.7.4.9 and §1.1.7.5.16 for the corresponding header files and libraries).
To enable SuperLU_DIST support, set ENABLE_MPI to ON, set ENABLE_SUPERLUDIST to ON, and set SUPERLUDIST_DIR to the path where SuperLU_DIST is installed. If SuperLU_DIST was built with OpenMP enabled, set SUPERLUDIST_OpenMP and ENABLE_OPENMP to ON. For example, the following command will configure SUNDIALS with SuperLU_DIST support:
cmake \
-S SOLVER_DIR \
-B BUILD_DIR \
-D CMAKE_INSTALL_PREFIX=INSTALL_DIR \
-D ENABLE_SUPERLUDIST=ON \
-D SUPERLUDIST_DIR=/path/to/superludist/installation
- ENABLE_SUPERLUDIST
Enable SuperLU_DIST support
Default:
OFF
- SUPERLUDIST_DIR
Path to SuperLU_DIST installation.
Default: none
- SUPERLUDIST_OpenMP
Enable SUNDIALS support for SuperLU_DIST built with OpenMP
Default: none
Note
SuperLU_DIST must be built with OpenMP support for this option to function. Additionally, the environment variable OMP_NUM_THREADS must be set to the desired number of threads.
- SUPERLUDIST_INCLUDE_DIRS
List of include paths for SuperLU_DIST (under a typical SuperLU_DIST install, this is the SRC directory)
Default: none
Note
This is an advanced option. Prefer to use SUPERLUDIST_DIR.
- SUPERLUDIST_LIBRARIES
Semi-colon separated list of libraries needed for SuperLU_DIST
Default: none
Note
This is an advanced option. Prefer to use SUPERLUDIST_DIR.
- SUPERLUDIST_INCLUDE_DIR
Path to SuperLU_DIST header files (under a typical SuperLU_DIST install, this is typically the SuperLU_DIST
SRC
directory)Default: none
Note
This is an advanced, deprecated option. Use SUPERLUDIST_INCLUDE_DIRS.
- SUPERLUDIST_LIBRARY_DIR
Path to SuperLU_DIST installed library files
Default: none
Note
This option is deprecated. Use SUPERLUDIST_DIR.
1.1.3.34. Building with SuperLU_MT
SuperLU_MT is a general purpose library for the direct solution of large, sparse, nonsymmetric systems of linear equations on shared memory parallel machines. The library is developed by Lawrence Berkeley National Laboratory and is available from the SuperLU_MT GitHub repository. SUNDIALS is regularly tested with the latest versions of SuperLU_MT, specifically up to version 4.0.1.
When SuperLU_MT support is enabled, the SuperLU_MT SUNLinearSolver will be built (see section §1.1.7.5.17 for the corresponding header file and library).
To enable SuperLU_MT support, set ENABLE_SUPERLUMT to ON and set SUPERLUMT_INCLUDE_DIR and SUPERLUMT_LIBRARY_DIR to the locations of the header and library files, respectively, of the SuperLU_MT installation. Depending on the SuperLU_MT installation, it may also be necessary to set SUPERLUMT_LIBRARIES to a semi-colon separated list of other libraries SuperLU_MT depends on. For example, if SuperLU_MT was built with an external BLAS library, then include the full path to the BLAS library in this list. Additionally, the variable SUPERLUMT_THREAD_TYPE must be set to either Pthread or OpenMP. For example, the following command will configure SUNDIALS with SuperLU_MT support using PThreads:
cmake \
-S SOLVER_DIR \
-B BUILD_DIR \
-D CMAKE_INSTALL_PREFIX=INSTALL_DIR \
-D ENABLE_SUPERLUMT=ON \
-D SUPERLUMT_INCLUDE_DIR=/path/to/superlumt/installation/include/dir \
-D SUPERLUMT_LIBRARY_DIR=/path/to/superlumt/installation/library/dir \
-D SUPERLUMT_THREAD_TYPE="Pthread"
Warning
Do not mix thread types when using SUNDIALS packages. For example, if using the OpenMP or PThreads NVector then the SuperLU_MT installation should use the same threading type.
- ENABLE_SUPERLUMT
Enable SuperLU_MT support
Default:
OFF
- SUPERLUMT_INCLUDE_DIR
Path to SuperLU_MT header files (under a typical SuperLU_MT install, this is the SRC directory)
Default: none
- SUPERLUMT_LIBRARY_DIR
Path to SuperLU_MT installed library files
Default: none
- SUPERLUMT_LIBRARIES
Semi-colon separated list of libraries needed for SuperLU_MT
Default: none
- SUPERLUMT_THREAD_TYPE
Must be set to Pthread or OpenMP, depending on how SuperLU_MT was compiled.
Default: Pthread
1.1.3.35. Building with SYCL
SYCL is a C++17-based abstraction layer for programming heterogeneous parallel computing systems.
When SYCL support is enabled, the SYCL NVector will be built (see section §1.1.7.3.14 for the corresponding header file and library).
To enable SYCL support, set ENABLE_SYCL to ON. For example, the following command will configure SUNDIALS with SYCL support using the Intel compilers:
cmake \
-S SOLVER_DIR \
-B BUILD_DIR \
-D CMAKE_INSTALL_PREFIX=INSTALL_DIR \
-D CMAKE_C_COMPILER=icx \
-D CMAKE_CXX_COMPILER=icpx \
-D CMAKE_CXX_FLAGS="-fsycl" \
-D ENABLE_SYCL=ON
- ENABLE_SYCL
Enable SYCL support
Default:
OFF
Note
Building with SYCL enabled requires a compiler that supports a subset of the SYCL 2020 specification (specifically, sycl/sycl.hpp must be available). CMake does not currently support autodetection of SYCL compilers and CMAKE_CXX_COMPILER must be set to a valid SYCL compiler. At present the only supported SYCL compilers are the Intel oneAPI compilers, i.e., dpcpp and icpx. When using icpx, the -fsycl flag and any ahead-of-time compilation flags must be added to CMAKE_CXX_FLAGS.
- SUNDIALS_SYCL_2020_UNSUPPORTED
This advanced option disables the use of some features from the SYCL 2020 standard in SUNDIALS libraries and examples. This can be used to work around some cases of incomplete compiler support for SYCL 2020.
Default:
OFF
1.1.3.36. Building with Trilinos
Trilinos is a collection of C++ libraries of linear solvers, nonlinear solvers, optimization solvers, etc., developed by Sandia National Laboratories and available from the Trilinos GitHub repository. SUNDIALS is regularly tested with the latest versions of Trilinos, specifically up to version 16.0.0.
When Trilinos support is enabled, the Trilinos Tpetra NVector will be built (see section §1.1.7.3.15 for the corresponding header file and library).
To enable Trilinos support, set ENABLE_TRILINOS to ON and set Trilinos_DIR to the root path of the Trilinos installation. For example, the following command will configure SUNDIALS with Trilinos support:
cmake \
-S SOLVER_DIR \
-B BUILD_DIR \
-D CMAKE_INSTALL_PREFIX=INSTALL_DIR \
-D ENABLE_TRILINOS=ON \
-D Trilinos_DIR=/path/to/trilinos/installation
- ENABLE_TRILINOS
Enable Trilinos support
Default:
OFF
- Trilinos_DIR
Path to the Trilinos installation
Default: None
1.1.3.37. Building with XBraid
XBraid is a parallel-in-time library implementing an optimal-scaling multigrid reduction in time (MGRIT) solver. The library is developed by Lawrence Livermore National Laboratory and is available from the XBraid GitHub repository. SUNDIALS is regularly tested with the latest versions of XBraid, specifically up to version 3.0.0.
To enable XBraid support, set ENABLE_MPI to ON, set ENABLE_XBRAID to ON, and set XBRAID_DIR to the root path of the XBraid installation. For example, the following command will configure SUNDIALS with XBraid support:
cmake \
-S SOLVER_DIR \
-B BUILD_DIR \
-D CMAKE_INSTALL_PREFIX=INSTALL_DIR \
-D SUNDIALS_INDEX_SIZE="32" \
-D ENABLE_MPI=ON \
-D ENABLE_XBRAID=ON \
-D XBRAID_DIR=/path/to/xbraid/installation
Note
At this time the XBraid types braid_Int and braid_Real are hard-coded to int and double, respectively. As such, SUNDIALS must be configured with SUNDIALS_INDEX_SIZE set to 32 and SUNDIALS_PRECISION set to double. Additionally, SUNDIALS must be configured with ENABLE_MPI set to ON.
- ENABLE_XBRAID
Enable or disable the ARKStep + XBraid interface.
Default:
OFF
- XBRAID_DIR
The root directory of the XBraid installation.
Default: none
- XBRAID_INCLUDES
Semi-colon separated list of XBraid include directories. Unless provided by the user, this is autopopulated based on the XBraid installation found in XBRAID_DIR.
Default: none
- XBRAID_LIBRARIES
Semi-colon separated list of XBraid link libraries. Unless provided by the user, this is autopopulated based on the XBraid installation found in XBRAID_DIR.
Default: none
1.1.3.38. Building with xSDK Defaults
The Extreme-scale Scientific Software Development Kit (xSDK) is a community of HPC libraries and applications developing best practices and standards for scientific software.
- USE_XSDK_DEFAULTS
Enable xSDK default configuration settings. This sets the default value for CMAKE_BUILD_TYPE to Debug, SUNDIALS_INDEX_SIZE to 32, and SUNDIALS_PRECISION to double.
Default: OFF
1.1.3.39. Building with External Addons
SUNDIALS “addons” are community developed code additions for SUNDIALS that can be subsumed by the SUNDIALS build system so that they have full access to all internal SUNDIALS symbols. The intent is for SUNDIALS addons to function as if they are part of the SUNDIALS library, while allowing them to have different licenses (although we still encourage BSD-3-Clause) and code styles (although we encourage them to follow the SUNDIALS style outlined here).
Warning
SUNDIALS addons are not maintained by the SUNDIALS team and may come with different licenses. Use them at your own risk.
To build with SUNDIALS addons:
1. Clone/copy the addon(s) into SOLVER_DIR/external/
2. Copy the sundials-addon-example block in SOLVER_DIR/external/CMakeLists.txt, paste it below the example block, and modify the path listed for your own external addon(s).
3. When building SUNDIALS, set the CMake option SUNDIALS_ENABLE_EXTERNAL_ADDONS to ON.
4. Build SUNDIALS as usual.
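As a sketch, those steps translate to a workflow like the following, where the addon repository URL and name are purely hypothetical:

```shell
# 1. Place the addon under the external/ directory of the SUNDIALS source tree
cd SOLVER_DIR/external
git clone https://github.com/example/sundials-addon-mysolver   # hypothetical addon

# 2. Edit SOLVER_DIR/external/CMakeLists.txt: duplicate the
#    sundials-addon-example block and point it at sundials-addon-mysolver

# 3. Configure SUNDIALS with external addons enabled, then build as usual
cmake -S SOLVER_DIR -B BUILD_DIR -D SUNDIALS_ENABLE_EXTERNAL_ADDONS=ON
cmake --build BUILD_DIR
```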
- SUNDIALS_ENABLE_EXTERNAL_ADDONS
Build SUNDIALS with any external addons that you have put in
SOLVER_DIR/external
.Default:
OFF
1.1.4. Testing the Build and Installation
If SUNDIALS was configured with any EXAMPLES_ENABLE_<language> options set to ON, then a set of regression tests can be run after building with the command:
make test
Additionally, if EXAMPLES_INSTALL is set to ON, then a set of smoke tests can be run after installing with the command:
make test_install
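Putting these pieces together, a typical configure-build-test-install cycle might look like the following sketch (the directories are illustrative, and EXAMPLES_ENABLE_C is used as an example of an EXAMPLES_ENABLE_<language> option):

```shell
cmake -S SOLVER_DIR -B BUILD_DIR \
  -D CMAKE_INSTALL_PREFIX=INSTALL_DIR \
  -D EXAMPLES_ENABLE_C=ON \
  -D EXAMPLES_INSTALL=ON
cmake --build BUILD_DIR
cd BUILD_DIR
make test           # regression tests (requires examples enabled)
make install
make test_install   # smoke tests (requires EXAMPLES_INSTALL=ON)
```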
1.1.5. Building and Running Examples
Each of the SUNDIALS solvers is distributed with a set of examples demonstrating basic usage. To build and install the examples, set at least one of the EXAMPLES_ENABLE_<language> options to ON, and set EXAMPLES_INSTALL to ON. Alongside the example sources and outputs, automatically generated CMakeLists.txt configuration files (and Makefile files if on Linux/Unix systems) are installed referencing the installed SUNDIALS headers and libraries.
Either the CMakeLists.txt file or the traditional Makefile may be used to build the examples and serve as a template for building user developed problems. To use the supplied Makefile simply run make to compile and generate the executables. To use CMake from within the installed example directory, run cmake (or ccmake or cmake-gui to use the GUI) followed by make to compile the example code. Note that if CMake is used, it will overwrite the traditional Makefile with a new CMake-generated Makefile.
The resulting output from running the examples can be compared with example output bundled in the SUNDIALS distribution.
Note
There will potentially be differences in the output due to machine architecture, compiler versions, use of third party libraries, etc.
1.1.6. Using SUNDIALS In Your Project
After installing SUNDIALS, building your application with SUNDIALS involves two steps: including the right header files and linking to the right libraries. Depending on which features of SUNDIALS your application uses, the header files and libraries needed will vary. For example, if you want to use CVODE for serial computations you need the following includes:
#include <cvode/cvode.h>
#include <nvector/nvector_serial.h>
and must link to libsundials_cvode and libsundials_nvecserial. If you want to use CVODE with the GMRES linear solver and the CUDA NVector, you need the following includes:
#include <cvode/cvode.h>
#include <nvector/nvector_cuda.h>
#include <sunlinsol/sunlinsol_spgmr.h>
and must link to libsundials_cvode, libsundials_nveccuda, and libsundials_sunlinsolspgmr.
Attention
Added in version 7.0.0: All applications must also link to libsundials_core. For projects using SUNDIALS CMake targets (see section §1.1.6.1), this dependency is automatically included.
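For applications built without CMake, the includes and libraries above translate into compiler and linker flags along the following lines; this is a sketch for the serial CVODE case with an illustrative install prefix:

```shell
cc main.c \
  -I/path/to/sundials/installation/include \
  -L/path/to/sundials/installation/lib \
  -lsundials_cvode -lsundials_nvecserial -lsundials_core \
  -o myexec
# shared-library builds may additionally need an rpath option for the lib directory
```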
Refer to section §1.1.7 below or the documentation sections for the individual SUNDIALS packages and modules of interest for the proper includes and libraries to link against.
1.1.6.1. CMake Projects
For projects that use CMake, the SUNDIALS CMake package configuration file provides CMake targets for the consuming project. Use the CMake find_package command to search for the configuration file, SUNDIALSConfig.cmake, which is installed alongside a package version file, SUNDIALSConfigVersion.cmake, under the INSTALL_DIR/SUNDIALS_INSTALL_CMAKEDIR directory. The SUNDIALS CMake targets follow the same naming convention as the generated library binaries with the libsundials_ prefix replaced by SUNDIALS::. For example, the exported target for libsundials_cvode is SUNDIALS::cvode. See section §1.1.7 for a complete list of CMake targets. The CMake code snippet below shows how a consuming project might leverage the SUNDIALS package configuration file to build against SUNDIALS in their own CMake project.
project(MyProject)
# Set the variable SUNDIALS_DIR to the SUNDIALS instdir.
# When using the cmake CLI command, this can be done like so:
# cmake -D SUNDIALS_DIR=/path/to/sundials/installation
# Find any SUNDIALS version...
find_package(SUNDIALS REQUIRED)
# ... or find any version newer than some minimum...
find_package(SUNDIALS 7.1.0 REQUIRED)
# ... or find a version in a range
find_package(SUNDIALS 7.0.0...7.1.0 REQUIRED)
# To check if specific components are available in the SUNDIALS installation,
# use the COMPONENTS option followed by the desired target names
find_package(SUNDIALS REQUIRED COMPONENTS cvode nvecpetsc)
add_executable(myexec main.c)
# Link to SUNDIALS libraries through the exported targets.
# This is just an example, users should link to the targets appropriate
# for their use case.
target_link_libraries(myexec PUBLIC SUNDIALS::cvode SUNDIALS::nvecpetsc)
Note
Changed in version 7.1.0: A single version provided to find_package denotes the minimum version of SUNDIALS to look for, and any version equal or newer than what is specified will match. In prior versions SUNDIALSConfig.cmake required the version found to have the same major version number as the single version provided to find_package.
To accommodate installing both static and shared libraries simultaneously, targets are created with _static and _shared suffixes, respectively, and the un-suffixed target is an alias to the _shared version. For example, SUNDIALS::cvode is an alias to SUNDIALS::cvode_shared in this case. Projects that wish to use static libraries should use the _static version of the target when both library types are installed. When only static or shared libraries (not both) are installed the un-suffixed alias corresponds to the library type chosen at configuration time (see section §1.1.3.4).
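For example, a consuming project that wants the static CVODE library when both library types are installed would link the suffixed target explicitly; the executable name here follows the earlier CMake snippet:

```cmake
# Request the static variant explicitly instead of the SUNDIALS::cvode alias
target_link_libraries(myexec PUBLIC SUNDIALS::cvode_static)
```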
1.1.7. Libraries and Header Files
As noted above, the SUNDIALS header files and libraries are installed under the CMAKE_INSTALL_PREFIX path in the include and CMAKE_INSTALL_LIBDIR subdirectories, respectively. The public header files are further organized into subdirectories under the include directory.
The installed public header files and libraries are listed for reference in the sections below. Additionally, the exported CMake targets are also listed for projects using CMake (see section §1.1.6.1). The file extension .LIB used below is typically .so, .dll, or .dylib for shared libraries and .a or .lib for static libraries.
Warning
SUNDIALS installs some header files to CMAKE_INSTALL_PREFIX/include/sundials/priv. All of the header files in this directory are private and should not be included in user code. The private headers are subject to change without any notice and relying on them may break your code.
1.1.7.1. SUNDIALS Core
The core library contains the shared infrastructure utilized by SUNDIALS packages. All applications using SUNDIALS must link against the core library. For codes using the SUNDIALS CMake targets, the core target is automatically included as needed by other targets.
Libraries |
|
Headers |
|
CMake target |
|
The core header file is a convenient way to include all the header files that make up the SUNDIALS core infrastructure.
Headers |
For C++ applications, several convenience classes are provided for interacting with SUNDIALS objects. These can be accessed by including the C++ core header file.
Headers |
|
Like the C core header file, the C++ core header file is a convenient way to include all the header files for the core C++ classes.
Warning
Features in the sundials::experimental
namespace are not yet part of the
public API and are subject to change or removal without notice.
Headers |
When MPI support is enabled (ENABLE_MPI is ON), the following header file provides aliases between MPI data types and SUNDIALS types. The alias MPI_SUNREALTYPE is one of MPI_FLOAT, MPI_DOUBLE, or MPI_LONG_DOUBLE depending on the value of SUNDIALS_PRECISION. The alias MPI_SUNINDEXTYPE is either MPI_INT32_T or MPI_INT64_T depending on the value of SUNDIALS_INDEX_SIZE.
Headers |
|
When XBraid support is enabled (ENABLE_XBRAID is ON), the following header file defines types and functions for interfacing SUNDIALS with XBraid.
Headers |
|
1.1.7.2. SUNDIALS Packages
1.1.7.2.1. CVODE
To use the CVODE package, include the header file and link to the library given below.
Libraries |
|
Headers |
|
CMake target |
|
The CVODE header file includes the files below which define functions, types, and constants for the CVODE linear solver interface and using projection methods with CVODE.
Headers |
|
|
CVODE provides a specialized linear solver module for diagonal linear systems. Include the header file below to access the related functions.
Headers |
|
For problems in which the user cannot define a more effective, problem-specific preconditioner for Krylov iterative linear solvers, CVODE provides banded (bandpre) and band-block-diagonal (bbdpre) preconditioner modules. Include the header files below to access the related functions.
Headers |
|
|
1.1.7.2.2. CVODES
To use the CVODES package, include the header file and link to the library given below.
Warning
CVODES is a superset of CVODE and defines the same functions as provided by CVODE. As such, applications should not link to both CVODES and CVODE.
Libraries |
|
Headers |
|
CMake target |
|
The CVODES header file includes the files below which define functions, types, and constants for the CVODES linear solver interface and using projection methods with CVODES.
Headers |
|
|
CVODES provides a specialized linear solver module for diagonal linear systems. Include the header file below to access the related functions.
Headers |
|
For problems in which the user cannot define a more effective, problem-specific preconditioner for Krylov iterative linear solvers, CVODES provides banded (bandpre) and band-block-diagonal (bbdpre) preconditioner modules. Include the header files below to access the related functions.
Headers |
|
|
1.1.7.2.3. ARKODE
To use the ARKODE package, link to the library below and include the header file for the desired module.
Libraries |
|
Headers |
CMake target |
|
The ARKODE module header files include the header file for the shared ARKODE interface functions, constants, and types (arkode.h). As appropriate, the module header files also include the ARKODE linear solver interface as well as the header files defining method coefficients.
Headers |
For problems in which the user cannot define a more effective, problem-specific preconditioner for Krylov iterative linear solvers, ARKODE provides banded (bandpre) and band-block-diagonal (bbdpre) preconditioner modules. Include the header files below to access the related functions.
Headers |
|
|
When XBraid support is enabled (ENABLE_XBRAID is ON), include the ARKODE-XBraid interface header file and link to the interface library given below to use ARKODE and XBraid together.
Libraries |
|
Headers |
|
CMake target |
|
1.1.7.2.4. IDA
To use the IDA package, include the header file and link to the library given below.
Libraries |
|
Headers |
|
CMake target |
|
The IDA header file includes the header file below which defines functions, types, and constants for the IDA linear solver interface.
Headers | ida/ida_ls.h
For problems in which the user cannot define a more effective, problem-specific preconditioner for Krylov iterative linear solvers, IDA provides a band-block-diagonal (bbdpre) preconditioner module. Include the header file below to access the related functions.
Headers | ida/ida_bbdpre.h
1.1.7.2.5. IDAS
To use the IDAS package, include the header file and link to the library given below.
Warning
IDAS is a superset of IDA and defines the same functions as provided by IDA. As such, applications should not link to both IDAS and IDA.
Libraries | libsundials_idas
Headers | idas/idas.h
CMake target | SUNDIALS::idas
The IDAS header file includes the header file below which defines functions, types, and constants for the IDAS linear solver interface.
Headers | idas/idas_ls.h
For problems in which the user cannot define a more effective, problem-specific preconditioner for Krylov iterative linear solvers, IDAS provides a band-block-diagonal (bbdpre) preconditioner module. Include the header file below to access the related functions.
Headers | idas/idas_bbdpre.h
1.1.7.2.6. KINSOL
To use the KINSOL package, include the header file and link to the library given below.
Libraries | libsundials_kinsol
Headers | kinsol/kinsol.h
CMake target | SUNDIALS::kinsol
The KINSOL header file includes the header file below which defines functions, types, and constants for the KINSOL linear solver interface.
Headers | kinsol/kinsol_ls.h
For problems in which the user cannot define a more effective, problem-specific preconditioner for Krylov iterative linear solvers, KINSOL provides a band-block-diagonal (bbdpre) preconditioner module. Include the header file below to access the related functions.
Headers | kinsol/kinsol_bbdpre.h
1.1.7.3. Vectors
1.1.7.3.1. Serial
To use the serial NVector, include the header file and link to the library given below.
When using SUNDIALS time integration packages or the KINSOL package, the serial NVector is bundled with the package library and it is not necessary to link to the library below when using those packages.
Libraries | libsundials_nvecserial
Headers | nvector/nvector_serial.h
CMake target | SUNDIALS::nvecserial
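The bundling note above matters in practice: the explicit link is only needed when the serial NVector is used without a SUNDIALS package. A hedged CMake fragment (the target name `myutil` is hypothetical):

```cmake
# Standalone use of the serial NVector, e.g. a utility that only
# creates and manipulates N_Vector data. When the application already
# links a package target such as SUNDIALS::cvode, this line is
# unnecessary because the vector is bundled with the package library.
target_link_libraries(myutil PRIVATE SUNDIALS::nvecserial)
```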
1.1.7.3.2. ManyVector
To use the ManyVector NVector, include the header file and link to the library given below.
Libraries | libsundials_nvecmanyvector
Headers | nvector/nvector_manyvector.h
CMake target | SUNDIALS::nvecmanyvector
1.1.7.3.3. Parallel (MPI)
To use the parallel (MPI) NVector, include the header file and link to the library given below.
Libraries | libsundials_nvecparallel
Headers | nvector/nvector_parallel.h
CMake target | SUNDIALS::nvecparallel
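For an MPI application, the parallel NVector is linked alongside an MPI target; a sketch assuming CMake's standard FindMPI module and a hypothetical executable name:

```cmake
find_package(MPI REQUIRED)
find_package(SUNDIALS REQUIRED)

add_executable(mpi_app mpi_app.c)

# MPI::MPI_C supplies the MPI headers and libraries that the parallel
# NVector requires at compile and link time.
target_link_libraries(mpi_app PRIVATE SUNDIALS::nvecparallel MPI::MPI_C)
```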
1.1.7.3.4. MPI ManyVector
To use the MPI ManyVector NVector, include the header file and link to the library given below.
Libraries | libsundials_nvecmpimanyvector
Headers | nvector/nvector_mpimanyvector.h
CMake target | SUNDIALS::nvecmpimanyvector
1.1.7.3.5. MPI+X
To use the MPI+X NVector, include the header file and link to the library given below.
Libraries | libsundials_nvecmpiplusx
Headers | nvector/nvector_mpiplusx.h
CMake target | SUNDIALS::nvecmpiplusx
1.1.7.3.6. OpenMP
To use the OpenMP NVector, include the header file and link to the library given below.
Libraries | libsundials_nvecopenmp
Headers | nvector/nvector_openmp.h
CMake target | SUNDIALS::nvecopenmp
1.1.7.3.7. OpenMPDEV
To use the OpenMP device offload NVector, include the header file and link to the library given below.
Libraries | libsundials_nvecopenmpdev
Headers | nvector/nvector_openmpdev.h
CMake target | SUNDIALS::nvecopenmpdev
1.1.7.3.8. PThreads
To use the POSIX Threads NVector, include the header file and link to the library given below.
Libraries | libsundials_nvecpthreads
Headers | nvector/nvector_pthreads.h
CMake target | SUNDIALS::nvecpthreads
1.1.7.3.9. hypre (ParHyp)
To use the hypre (ParHyp) NVector, include the header file and link to the library given below.
Libraries | libsundials_nvecparhyp
Headers | nvector/nvector_parhyp.h
CMake target | SUNDIALS::nvecparhyp
1.1.7.3.10. PETSc
To use the PETSc NVector, include the header file and link to the library given below.
Libraries | libsundials_nvecpetsc
Headers | nvector/nvector_petsc.h
CMake target | SUNDIALS::nvecpetsc
1.1.7.3.11. CUDA
To use the CUDA NVector, include the header file and link to the library given below.
Libraries | libsundials_nveccuda
Headers | nvector/nvector_cuda.h
CMake target | SUNDIALS::nveccuda
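A hedged sketch of a CUDA-enabled build (project and file names are hypothetical; the CUDA language must be enabled so .cu files containing kernels that operate on vector data are compiled correctly):

```cmake
cmake_minimum_required(VERSION 3.18)
project(gpu_app C CXX CUDA)

find_package(SUNDIALS REQUIRED)

add_executable(gpu_app main.cu)

# The CUDA NVector target carries the nveccuda library and its
# dependence on the CUDA runtime.
target_link_libraries(gpu_app PRIVATE SUNDIALS::nveccuda)
```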
1.1.7.3.12. HIP
To use the HIP NVector, include the header file and link to the library given below.
Libraries | libsundials_nvechip
Headers | nvector/nvector_hip.h
CMake target | SUNDIALS::nvechip
1.1.7.3.13. RAJA
To use the RAJA NVector, include the header file and link to the library given below for the desired backend.
Libraries | libsundials_nveccudaraja
| libsundials_nvechipraja
| libsundials_nvecsyclraja
Headers | nvector/nvector_raja.h
CMake target | SUNDIALS::nveccudaraja
| SUNDIALS::nvechipraja
| SUNDIALS::nvecsyclraja
1.1.7.3.14. SYCL
To use the SYCL NVector, include the header file and link to the library given below.
Libraries | libsundials_nvecsycl
Headers | nvector/nvector_sycl.h
CMake target | SUNDIALS::nvecsycl
1.1.7.3.15. Trilinos (Tpetra)
To use the Trilinos (Tpetra) NVector, include the header file and link to the library given below.
Libraries | libsundials_nvectrilinos
Headers | nvector/nvector_trilinos.h
CMake target | SUNDIALS::nvectrilinos
1.1.7.3.16. Kokkos
To use the Kokkos NVector, include the header file and link to the library given below.
Headers | nvector/nvector_kokkos.hpp
CMake target | SUNDIALS::nveckokkos
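Note that the Kokkos NVector is a C++, header-only module, so there is no separate SUNDIALS library to link; the CMake target is an interface target that provides the include path and propagates the dependency on Kokkos itself. A sketch with a hypothetical executable:

```cmake
find_package(SUNDIALS REQUIRED)

add_executable(kokkos_app main.cpp)

# Linking the interface target adds the SUNDIALS headers and the
# Kokkos usage requirements; no libsundials_* library is involved.
target_link_libraries(kokkos_app PRIVATE SUNDIALS::nveckokkos)
```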
1.1.7.4. Matrices
1.1.7.4.1. Banded
To use the banded SUNMatrix, include the header file and link to the library given below.
When using SUNDIALS time integration packages or the KINSOL package, the banded SUNMatrix is bundled with the package library and it is not necessary to link to the library below when using those packages.
Libraries | libsundials_sunmatrixband
Headers | sunmatrix/sunmatrix_band.h
CMake target | SUNDIALS::sunmatrixband
1.1.7.4.2. cuSPARSE
To use the cuSPARSE SUNMatrix, include the header file and link to the library given below.
Libraries | libsundials_sunmatrixcusparse
Headers | sunmatrix/sunmatrix_cusparse.h
CMake target | SUNDIALS::sunmatrixcusparse
1.1.7.4.3. Dense
To use the dense SUNMatrix, include the header file and link to the library given below.
When using SUNDIALS time integration packages or the KINSOL package, the dense SUNMatrix is bundled with the package library and it is not necessary to link to the library below when using those packages.
Libraries | libsundials_sunmatrixdense
Headers | sunmatrix/sunmatrix_dense.h
CMake target | SUNDIALS::sunmatrixdense
1.1.7.4.4. Ginkgo
To use the Ginkgo SUNMatrix, include the header file given below.
Headers | sunmatrix/sunmatrix_ginkgo.hpp
CMake target | SUNDIALS::sunmatrixginkgo
1.1.7.4.5. KokkosKernels Dense
To use the KokkosKernels dense SUNMatrix, include the header file given below.
Headers | sunmatrix/sunmatrix_kokkosdense.hpp
CMake target | SUNDIALS::sunmatrixkokkosdense
1.1.7.4.6. MAGMA Dense
To use the MAGMA dense SUNMatrix, include the header file and link to the library given below.
Libraries | libsundials_sunmatrixmagmadense
Headers | sunmatrix/sunmatrix_magmadense.h
CMake target | SUNDIALS::sunmatrixmagmadense
1.1.7.4.7. oneMKL Dense
To use the oneMKL dense SUNMatrix, include the header file and link to the library given below.
Libraries | libsundials_sunmatrixonemkldense
Headers | sunmatrix/sunmatrix_onemkldense.h
CMake target | SUNDIALS::sunmatrixonemkldense
1.1.7.4.8. Sparse
To use the sparse SUNMatrix, include the header file and link to the library given below.
When using SUNDIALS time integration packages or the KINSOL package, the sparse SUNMatrix is bundled with the package library and it is not necessary to link to the library below when using those packages.
Libraries | libsundials_sunmatrixsparse
Headers | sunmatrix/sunmatrix_sparse.h
CMake target | SUNDIALS::sunmatrixsparse
1.1.7.4.9. SuperLU_DIST (SLUNRloc)
To use the SuperLU_DIST (SLUNRloc) SUNMatrix, include the header file and link to the library given below.
Libraries | libsundials_sunmatrixslunrloc
Headers | sunmatrix/sunmatrix_slunrloc.h
CMake target | SUNDIALS::sunmatrixslunrloc
1.1.7.5. Linear Solvers
1.1.7.5.1. Banded
To use the banded SUNLinearSolver, include the header file and link to the library given below.
When using SUNDIALS time integration packages or the KINSOL package, the banded SUNLinearSolver is bundled with the package library and it is not necessary to link to the library below when using those packages.
Libraries | libsundials_sunlinsolband
Headers | sunlinsol/sunlinsol_band.h
CMake target | SUNDIALS::sunlinsolband
1.1.7.5.2. cuSPARSE Batched QR
To use the cuSPARSE batched QR SUNLinearSolver, include the header file and link to the library given below.
Libraries | libsundials_sunlinsolcusolversp
Headers | sunlinsol/sunlinsol_cusolversp_batchqr.h
CMake target | SUNDIALS::sunlinsolcusolversp
1.1.7.5.3. Dense
To use the dense SUNLinearSolver, include the header file and link to the library given below.
When using SUNDIALS time integration packages or the KINSOL package, the dense SUNLinearSolver is bundled with the package library and it is not necessary to link to the library below when using those packages.
Libraries | libsundials_sunlinsoldense
Headers | sunlinsol/sunlinsol_dense.h
CMake target | SUNDIALS::sunlinsoldense
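Direct linear solvers are typically used together with the matching SUNMatrix module. A hedged fragment for a standalone test of the dense pair (the `ls_test` target is hypothetical; when linking a package target such as SUNDIALS::cvode, both modules are already bundled and these lines are unnecessary):

```cmake
# Pair the dense SUNMatrix with the dense SUNLinearSolver; the solver
# operates on matrices created by the matching SUNMatrix module.
target_link_libraries(ls_test PRIVATE
  SUNDIALS::sunmatrixdense
  SUNDIALS::sunlinsoldense)
```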
1.1.7.5.4. Ginkgo
To use the Ginkgo SUNLinearSolver, include the header file given below.
Headers | sunlinsol/sunlinsol_ginkgo.hpp
CMake target | SUNDIALS::sunlinsolginkgo
1.1.7.5.5. KLU
To use the KLU SUNLinearSolver, include the header file and link to the library given below.
Libraries | libsundials_sunlinsolklu
Headers | sunlinsol/sunlinsol_klu.h
CMake target | SUNDIALS::sunlinsolklu
1.1.7.5.6. KokkosKernels Dense
To use the KokkosKernels dense SUNLinearSolver, include the header file given below.
Headers | sunlinsol/sunlinsol_kokkosdense.hpp
CMake target | SUNDIALS::sunlinsolkokkosdense
1.1.7.5.7. LAPACK Banded
To use the LAPACK banded SUNLinearSolver, include the header file and link to the library given below.
Libraries | libsundials_sunlinsollapackband
Headers | sunlinsol/sunlinsol_lapackband.h
CMake target | SUNDIALS::sunlinsollapackband
1.1.7.5.8. LAPACK Dense
To use the LAPACK dense SUNLinearSolver, include the header file and link to the library given below.
Libraries | libsundials_sunlinsollapackdense
Headers | sunlinsol/sunlinsol_lapackdense.h
CMake target | SUNDIALS::sunlinsollapackdense
1.1.7.5.9. MAGMA Dense
To use the MAGMA dense SUNLinearSolver, include the header file and link to the library given below.
Libraries | libsundials_sunlinsolmagmadense
Headers | sunlinsol/sunlinsol_magmadense.h
CMake target | SUNDIALS::sunlinsolmagmadense
1.1.7.5.10. oneMKL Dense
To use the oneMKL dense SUNLinearSolver, include the header file and link to the library given below.
Libraries | libsundials_sunlinsolonemkldense
Headers | sunlinsol/sunlinsol_onemkldense.h
CMake target | SUNDIALS::sunlinsolonemkldense
1.1.7.5.11. Preconditioned Conjugate Gradient (PCG)
To use the PCG SUNLinearSolver, include the header file and link to the library given below.
When using SUNDIALS time integration packages or the KINSOL package, the PCG SUNLinearSolver is bundled with the package library and it is not necessary to link to the library below when using those packages.
Libraries | libsundials_sunlinsolpcg
Headers | sunlinsol/sunlinsol_pcg.h
CMake target | SUNDIALS::sunlinsolpcg
1.1.7.5.12. Scaled, Preconditioned Bi-Conjugate Gradient, Stabilized (SPBCGS)
To use the SPBCGS SUNLinearSolver, include the header file and link to the library given below.
When using SUNDIALS time integration packages or the KINSOL package, the SPBCGS SUNLinearSolver is bundled with the package library and it is not necessary to link to the library below when using those packages.
Libraries | libsundials_sunlinsolspbcgs
Headers | sunlinsol/sunlinsol_spbcgs.h
CMake target | SUNDIALS::sunlinsolspbcgs
1.1.7.5.13. Scaled, Preconditioned, Flexible, Generalized Minimum Residual (SPFGMR)
To use the SPFGMR SUNLinearSolver, include the header file and link to the library given below.
When using SUNDIALS time integration packages or the KINSOL package, the SPFGMR SUNLinearSolver is bundled with the package library and it is not necessary to link to the library below when using those packages.
Libraries | libsundials_sunlinsolspfgmr
Headers | sunlinsol/sunlinsol_spfgmr.h
CMake target | SUNDIALS::sunlinsolspfgmr
1.1.7.5.14. Scaled, Preconditioned, Generalized Minimum Residual (SPGMR)
To use the SPGMR SUNLinearSolver, include the header file and link to the library given below.
When using SUNDIALS time integration packages or the KINSOL package, the SPGMR SUNLinearSolver is bundled with the package library and it is not necessary to link to the library below when using those packages.
Libraries | libsundials_sunlinsolspgmr
Headers | sunlinsol/sunlinsol_spgmr.h
CMake target | SUNDIALS::sunlinsolspgmr
1.1.7.5.15. Scaled, Preconditioned, Transpose-Free Quasi-Minimum Residual (SPTFQMR)
To use the SPTFQMR SUNLinearSolver, include the header file and link to the library given below.
When using SUNDIALS time integration packages or the KINSOL package, the SPTFQMR SUNLinearSolver is bundled with the package library and it is not necessary to link to the library below when using those packages.
Libraries | libsundials_sunlinsolsptfqmr
Headers | sunlinsol/sunlinsol_sptfqmr.h
CMake target | SUNDIALS::sunlinsolsptfqmr
1.1.7.5.16. SuperLU_DIST
To use the SuperLU_DIST SUNLinearSolver, include the header file and link to the library given below.
Libraries | libsundials_sunlinsolsuperludist
Headers | sunlinsol/sunlinsol_superludist.h
CMake target | SUNDIALS::sunlinsolsuperludist
1.1.7.5.17. SuperLU_MT
To use the SuperLU_MT SUNLinearSolver, include the header file and link to the library given below.
Libraries | libsundials_sunlinsolsuperlumt
Headers | sunlinsol/sunlinsol_superlumt.h
CMake target | SUNDIALS::sunlinsolsuperlumt
1.1.7.6. Nonlinear Solvers
1.1.7.6.1. Newton
To use the Newton SUNNonlinearSolver, include the header file and link to the library given below.
When using SUNDIALS time integration packages, the Newton SUNNonlinearSolver is bundled with the package library and it is not necessary to link to the library below when using those packages.
Libraries | libsundials_sunnonlinsolnewton
Headers | sunnonlinsol/sunnonlinsol_newton.h
CMake target | SUNDIALS::sunnonlinsolnewton
1.1.7.6.2. Fixed-point
To use the fixed-point SUNNonlinearSolver, include the header file and link to the library given below.
When using SUNDIALS time integration packages, the fixed-point SUNNonlinearSolver is bundled with the package library and it is not necessary to link to the library below when using those packages.
Libraries | libsundials_sunnonlinsolfixedpoint
Headers | sunnonlinsol/sunnonlinsol_fixedpoint.h
CMake target | SUNDIALS::sunnonlinsolfixedpoint
1.1.7.6.3. PETSc SNES
To use the PETSc SNES SUNNonlinearSolver, include the header file and link to the library given below.
Libraries | libsundials_sunnonlinsolpetscsnes
Headers | sunnonlinsol/sunnonlinsol_petscsnes.h
CMake target | SUNDIALS::sunnonlinsolpetscsnes
1.1.7.7. Memory Helpers
1.1.7.7.1. System
When using SUNDIALS time integration packages or the KINSOL package, the system SUNMemoryHelper is bundled with the package library and it is not necessary to link to the library below when using those packages.
Headers | sundials/sundials_memory.h
1.1.7.7.2. CUDA
To use the CUDA SUNMemoryHelper, include the header file given below when using a CUDA-enabled NVector or SUNMatrix.
Headers | sunmemory/sunmemory_cuda.h
1.1.7.7.3. HIP
To use the HIP SUNMemoryHelper, include the header file given below when using a HIP-enabled NVector or SUNMatrix.
Headers | sunmemory/sunmemory_hip.h
1.1.7.7.4. SYCL
To use the SYCL SUNMemoryHelper, include the header file given below when using a SYCL-enabled NVector or SUNMatrix.
Headers | sunmemory/sunmemory_sycl.h
1.1.7.8. Execution Policies
1.1.7.8.1. CUDA
When using a CUDA-enabled NVector or SUNMatrix, include the header file below to access the CUDA execution policy C++ classes.
Headers | sundials/sundials_cuda_policies.hpp
1.1.7.8.2. HIP
When using a HIP-enabled NVector or SUNMatrix, include the header file below to access the HIP execution policy C++ classes.
Headers | sundials/sundials_hip_policies.hpp
1.1.7.8.3. SYCL
When using a SYCL-enabled NVector or SUNMatrix, include the header file below to access the SYCL execution policy C++ classes.
Headers | sundials/sundials_sycl_policies.hpp