## Publication

# The inverse fast multipole method: using a fast approximate direct solver as a preconditioner for dense linear systems

### Journal Contribution - Journal Article

Although some preconditioners are available for solving dense linear systems, there are still many matrices for which preconditioners are lacking, particularly when the size of the matrix N becomes very large. Examples of existing preconditioners include incomplete LU (ILU) preconditioners that sparsify the matrix based on some threshold, algebraic multigrid preconditioners, and specialized preconditioners, e.g., Calderón and other analytical approximation methods when available. Despite these methods, there remains a great need for general-purpose preconditioners whose cost scales well with the matrix size N.

In this paper, we propose a preconditioner with broad applicability and O(N) cost for dense matrices, when the matrix is given by a "smooth" (as opposed to a highly oscillatory) kernel. Extending the method to general H²-matrices (i.e., algebraic, instead of defined in terms of an analytical kernel) within the same framework is relatively straightforward, but is not discussed here. These preconditioners have a controlled accuracy (e.g., machine accuracy can be achieved if needed) and scale linearly with N. They are based on an approximate direct solve of the system.

The linear scaling of the algorithm is achieved by means of two key ideas. First, the H²-structure of the dense matrix is exploited to obtain an extended sparse system of equations. Second, fill-in arising during the elimination of this extended system is compressed into low-rank blocks whenever it corresponds to well-separated interactions. This ensures that the sparsity pattern of the extended sparse matrix is preserved throughout the elimination, resulting in a very efficient algorithm with O(N log²(1/ε)) computational cost and O(N log(1/ε)) memory requirement for an error tolerance 0 < ε < 1. The solver is inexact, although the error can be controlled and made as small as needed.

These solvers are related to ILU in the sense that the fill-in is controlled. However, in ILU, most of the fill-in (i.e., below a certain tolerance) is simply discarded, whereas here it is approximated by low-rank blocks with a prescribed tolerance. Numerical examples are discussed to demonstrate the linear scaling of the method and to illustrate its effectiveness as a preconditioner.
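The key compression step in the abstract can be illustrated with a minimal sketch (the function name, the SVD-based truncation rule, and the 1/r kernel are illustrative assumptions, not details taken from the paper): a dense block coupling two well-separated clusters of a smooth kernel is numerically low-rank, so it can be replaced by a truncated factorization with error controlled by the tolerance ε, rather than discarded as in ILU.

```python
import numpy as np

def compress_fill_in(F, eps):
    """Approximate a dense fill-in block F by a low-rank product U @ V.

    Singular values below eps times the largest one are dropped, so the
    relative approximation error is on the order of eps (illustrative
    truncation rule; the paper's actual compression scheme may differ).
    """
    U, s, Vt = np.linalg.svd(F, full_matrices=False)
    k = max(1, int(np.sum(s > eps * s[0])))  # retained numerical rank
    return U[:, :k] * s[:k], Vt[:k, :]

# Two well-separated 1D point clusters interacting through a smooth
# 1/r kernel: the resulting 60x50 block is numerically low-rank.
rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, 50)                # source cluster in [0, 1]
y = rng.uniform(3.0, 4.0, 60)                # target cluster in [3, 4]
F = 1.0 / np.abs(y[:, None] - x[None, :])    # dense kernel block

U, V = compress_fill_in(F, 1e-10)
rank = U.shape[1]
err = np.linalg.norm(F - U @ V) / np.linalg.norm(F)
```

Because the clusters are well separated, the retained rank is far below min(60, 50), while the relative error stays near the prescribed tolerance; this is exactly why replacing such fill-in by low-rank blocks preserves the sparsity pattern at controlled accuracy.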

Journal: SIAM Journal on Scientific Computing

ISSN: 1064-8275

Issue: 3

Volume: 39

Pages: A761 - A796

Number of pages: 36

Publication year: 2017