Preparing sparse solvers for exascale computing
Sparse solvers provide essential functionality for a wide variety of scientific applications.
Highly parallel sparse solvers are essential for continuing advances in high-fidelity, multi …
[BOOK][B] Communication-avoiding Krylov subspace methods in theory and practice
EC Carson - 2015 - search.proquest.com
Advancements in the field of high-performance scientific computing are necessary to
address the most important challenges we face in the 21st century. From physical modeling …
Low-synchronization orthogonalization schemes for s-step and pipelined Krylov solvers in Trilinos
We investigate two single-reduce orthogonalization schemes for both s-step and pipelined
GMRES. The first is based on classical Gram-Schmidt with reorthogonalization (CGS2), and …
Improving performance of GMRES by reducing communication and pipelining global collectives
We compare the performance of pipelined and s-step GMRES, respectively referred to as l-
GMRES and s-GMRES, on distributed multicore CPUs. Compared to standard GMRES, s …
Scalable asynchronous domain decomposition solvers
Parallel implementations of linear iterative solvers generally alternate between phases of
data exchange and phases of local computation. Increasingly large problem sizes and more …
Level-based blocking for sparse matrices: Sparse matrix-power-vector multiplication
The multiplication of a sparse matrix with a dense vector (SpMV) is a key component in
many numerical schemes and its performance is known to be severely limited by main …
Convergence analysis of Anderson-type acceleration of Richardson's iteration
M Lupo Pasini - Numerical Linear Algebra with Applications, 2019 - Wiley Online Library
We consider Anderson extrapolation to accelerate the (stationary) Richardson iterative
method for sparse linear systems. Using an Anderson mixing at periodic intervals, we …
The Adaptive s-Step Conjugate Gradient Method
EC Carson - SIAM Journal on Matrix Analysis and Applications, 2018 - SIAM
The performance of Krylov subspace methods on large-scale parallel computers is often
limited by communication, or movement of data. This has inspired the development of s-step …
Algebraic temporal blocking for sparse iterative solvers on multi-core CPUs
Sparse linear iterative solvers are essential for many large-scale simulations. Much of the
runtime of these solvers is often spent in the implicit evaluation of matrix polynomials via a …
Mixed Precision s-step Conjugate Gradient with Residual Replacement on GPUs
I Yamazaki, E Carson, B Kelley - 2022 IEEE International …, 2022 - ieeexplore.ieee.org
The s-step Conjugate Gradient (CG) algorithm has the potential to reduce the
communication cost of standard CG by a factor of s. However, though mathematically …