
Publication

Krylov subspace methods as key building blocks for numerical linear algebra and optimization

Book - Dissertation

Solving a linear system of equations is undoubtedly one of the most fundamental tasks in numerical linear algebra and optimization. Not only are these linear systems often important in their own right, they also appear frequently as sub-problems when attempting to solve a more complex problem. Many applications in science and industry can be modelled using a sparse matrix, i.e. a matrix containing only a limited number of non-zero entries. Moreover, in many cases it suffices to compute an approximate solution to the linear system. Instead of computing the exact solution, Krylov subspace methods iteratively improve an approximate solution until some predefined tolerance is satisfied. The main computational cost of these methods is the matrix-vector product that needs to be computed in each iteration. For sparse matrices the number of floating-point operations per matrix-vector product is relatively modest, even when the number of unknowns is very large, which makes Krylov subspace methods efficient solvers for sparse linear systems of equations.

In the first part of the thesis we develop a reformulation of the Conjugate Gradient method, one of the most widely used Krylov subspace methods, with improved parallel scalability on large distributed-memory computers compared to the standard implementation. The main idea of our approach is to overlap the global communication phase of the dot-product operation with computational work, which is achieved by introducing suitable auxiliary variables and recurrence relations.

In the second part of the thesis we propose new techniques to exploit (generalized) Krylov subspaces in specialized optimization routines, with a special focus on ill-posed inverse problems. We develop a new hybrid regularization method that simultaneously solves a regularized inverse problem and finds the corresponding regularization parameter such that the discrepancy principle is satisfied. We also develop a mixed-precision technique that speeds up an interior-point method for linear programming by using a Cholesky factorization computed in single precision as a preconditioner for the Conjugate Gradient method implemented in double precision.
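For readers unfamiliar with the terminology, the defining property of these methods can be stated in one line (a standard textbook definition, not specific to this thesis). Writing r_0 = b - A x_0 for the initial residual, the k-th iterate is taken from the Krylov subspace generated by A and r_0:

    \mathcal{K}_k(A, r_0) = \operatorname{span}\{ r_0,\, A r_0,\, A^2 r_0,\, \dots,\, A^{k-1} r_0 \},
    \qquad x_k \in x_0 + \mathcal{K}_k(A, r_0).

Each enlargement of the subspace costs exactly one matrix-vector product with A, which is where the sparsity of the matrix pays off.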
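The thesis's exact reformulation is not reproduced here, but the general communication-hiding idea for Conjugate Gradients can be sketched as follows, in the style of the pipelined CG method of Ghysels and Vanroose (2014). In this serial NumPy version there is nothing to actually overlap; the comments mark where, on a distributed-memory machine, the matrix-vector product would be computed while the non-blocking global reduction for the two dot products is in flight. The matrix A is assumed symmetric positive definite.

    import numpy as np

    def pipelined_cg(A, b, x0=None, tol=1e-8, maxit=500):
        x = np.zeros_like(b) if x0 is None else x0.copy()
        r = b - A @ x                  # r_0
        w = A @ r                      # w_0 = A r_0
        z = np.zeros_like(b); s = np.zeros_like(b); p = np.zeros_like(b)
        gamma_old = alpha = 1.0
        nb = np.linalg.norm(b)
        for i in range(maxit):
            gamma = r @ r              # on a cluster: start a non-blocking
            delta = w @ r              #   all-reduce for gamma and delta ...
            q = A @ w                  # ... and hide its latency behind this matvec
            if i == 0:
                beta = 0.0
                alpha = gamma / delta
            else:
                beta = gamma / gamma_old
                alpha = gamma / (delta - beta * gamma / alpha)  # uses previous alpha
            z = q + beta * z           # z_i = A s_i, maintained by recurrence
            s = w + beta * s           # s_i = A p_i, avoids a second matvec
            p = r + beta * p           # search direction
            x = x + alpha * p
            r = r - alpha * s
            w = w - alpha * z          # w_{i+1} = A r_{i+1} by recurrence
            gamma_old = gamma
            if np.sqrt(gamma) < tol * nb:   # norm of the previous residual
                break
        return x, i + 1

Applied to a well-conditioned SPD test matrix, this sketch should reproduce np.linalg.solve(A, b) up to the requested tolerance; the extra recurrences for z, s and w are exactly the "auxiliary variables and recurrence relations" the abstract refers to, traded for the freedom to reorder communication and computation.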
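The discrepancy principle mentioned in the second part can be illustrated with a much simpler setting than the hybrid Krylov method of the thesis: plain Tikhonov regularization combined with a log-scale bisection on the parameter lambda. The names noise_level (an estimate of ||e|| in b = b_exact + e) and the safety factor eta are assumed inputs for this sketch; the thesis method instead finds the parameter while iterating, without repeated solves.

    import numpy as np

    def tikhonov(A, b, lam):
        # solve the regularized normal equations (A^T A + lam I) x = A^T b
        n = A.shape[1]
        return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

    def discrepancy_lambda(A, b, noise_level, eta=1.01, lo=1e-12, hi=1e2, it=60):
        # find lam with ||A x_lam - b|| ~= eta * noise_level; the residual
        # norm is monotonically increasing in lam, so bisection applies
        target = eta * noise_level
        for _ in range(it):
            mid = np.sqrt(lo * hi)        # bisect on a logarithmic scale
            res = np.linalg.norm(A @ tikhonov(A, b, mid) - b)
            lo, hi = (mid, hi) if res < target else (lo, mid)
        lam = np.sqrt(lo * hi)
        return lam, tikhonov(A, b, lam)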
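Finally, the mixed-precision idea of the last part can be sketched as follows: factor an SPD matrix once in single precision and reuse the factor as a preconditioner for CG run in double precision. The interior-point context, where the matrix changes at every interior-point iteration, is omitted; this is only the inner preconditioned solve.

    import numpy as np
    from scipy.linalg import cholesky, solve_triangular

    def mixed_precision_pcg(A, b, tol=1e-10, maxit=200):
        L = cholesky(A.astype(np.float32), lower=True)   # factorization in fp32

        def precond(v):
            # apply M^{-1} = (L L^T)^{-1} via two fp32 triangular solves,
            # returning the result in fp64 for the outer CG iteration
            y = solve_triangular(L, v.astype(np.float32), lower=True)
            return solve_triangular(L.T, y, lower=False).astype(np.float64)

        x = np.zeros_like(b)
        r = b - A @ x
        z = precond(r)
        p = z.copy()
        rz = r @ z
        for i in range(maxit):
            Ap = A @ p
            alpha = rz / (p @ Ap)
            x += alpha * p
            r -= alpha * Ap
            if np.linalg.norm(r) < tol * np.linalg.norm(b):
                break
            z = precond(r)
            rz_new = r @ z
            p = z + (rz_new / rz) * p
            rz = rz_new
        return x, i + 1

The single-precision factorization is roughly twice as fast and half as memory-hungry as its double-precision counterpart, while the outer double-precision CG iteration recovers the accuracy the factorization alone would lose.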
Number of pages: 211
Year of publication: 2022
Keywords: Doctoral thesis
Accessibility: Open