# NAG Toolbox Chapter Introduction: F07 — Linear Equations (LAPACK)

## Scope of the Chapter

This chapter provides functions for the solution of systems of simultaneous linear equations, and associated computations. It provides functions for
• matrix factorizations;
• solution of linear equations;
• estimating matrix condition numbers;
• computing error bounds for the solution of linear equations;
• matrix inversion;
• computing scaling factors to equilibrate a matrix.
Functions are provided for both real and complex data.
For a general introduction to the solution of systems of linear equations, you should turn first to the F04 Chapter Introduction. The decision trees, in Decision Trees in the F04 Chapter Introduction, direct you to the most appropriate functions in Chapters F04 or F07 for solving your particular problem. In particular, Chapters F04 and F07 contain Black Box (or driver) functions which enable some standard types of problem to be solved by a call to a single function. Where possible, functions in Chapter F04 call Chapter F07 functions to perform the necessary computational tasks.
There are two types of driver functions in this chapter: simple drivers which just return the solution to the linear equations; and expert drivers which also return condition and error estimates and, in many cases, also allow equilibration. The simple drivers for real matrices have names of the form f07_a and for complex matrices have names of the form f07_n. The expert drivers for real matrices have names of the form f07_b and for complex matrices have names of the form f07_p.
The functions in this chapter (Chapter F07) handle only dense and band matrices (not matrices with more specialised structures, or general sparse matrices).
The functions in this chapter have all been derived from the LAPACK project (see Anderson et al. (1999)). They have been designed to be efficient on a wide range of high-performance computers, without compromising efficiency on conventional serial machines.

## Background to the Problems

This section provides only a brief introduction to the numerical solution of systems of linear equations. Consult a standard textbook, for example Golub and Van Loan (1996), for a more thorough discussion.

### Notation

We use the standard notation for a system of simultaneous linear equations:
 $Ax=b$ (1)
where $A$ is the coefficient matrix, $b$ is the right-hand side, and $x$ is the solution. $A$ is assumed to be a square matrix of order $n$.
If there are several right-hand sides, we write
 $AX=B$ (2)
where the columns of $B$ are the individual right-hand sides, and the columns of $X$ are the corresponding solutions.
We also use the following notation, both here and in the function documents:
• $\stackrel{^}{x}$: a computed solution to $Ax=b$ (which usually differs from the exact solution $x$ because of round-off error);
• $r=b-A\stackrel{^}{x}$: the residual corresponding to the computed solution $\stackrel{^}{x}$;
• ${‖x‖}_{\infty }=\underset{i}{\mathrm{max}}\left|{x}_{i}\right|$: the $\infty$-norm of the vector $x$;
• ${‖x‖}_{1}=\sum _{j=1}^{n}\left|{x}_{j}\right|$: the $1$-norm of the vector $x$;
• ${‖A‖}_{\infty }=\underset{i}{\mathrm{max}}\sum _{j}\left|{a}_{ij}\right|$: the $\infty$-norm of the matrix $A$;
• ${‖A‖}_{1}=\underset{j}{\mathrm{max}}\sum _{i=1}^{n}\left|{a}_{ij}\right|$: the $1$-norm of the matrix $A$;
• $\left|x\right|$: the vector with elements $\left|{x}_{i}\right|$;
• $\left|A\right|$: the matrix with elements $\left|{a}_{ij}\right|$.
Inequalities of the form $\left|A\right|\le \left|B\right|$ are interpreted component-wise, that is $\left|{a}_{ij}\right|\le \left|{b}_{ij}\right|$ for all $i,j$.

### Matrix Factorizations

If $A$ is upper or lower triangular, $Ax=b$ can be solved by a straightforward process of backward or forward substitution.
Otherwise, the solution is obtained after first factorizing $A$, as follows.
General matrices (LU factorization with partial pivoting)
 $A=PLU$
where $P$ is a permutation matrix, $L$ is lower-triangular with diagonal elements equal to $1$, and $U$ is upper-triangular; the permutation matrix $P$ (which represents row interchanges) is needed to ensure numerical stability.
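As an illustration of this factorization (using SciPy's LAPACK-backed `scipy.linalg.lu` purely as a stand-in for the Toolbox functions, not the NAG interface itself):

```python
import numpy as np
from scipy.linalg import lu  # LAPACK-backed LU with partial pivoting

# Factorize a general matrix as A = P L U, where P is a permutation
# matrix, L is unit lower triangular and U is upper triangular.
A = np.array([[2.0, 1.0, 1.0],
              [4.0, 3.0, 3.0],
              [8.0, 7.0, 9.0]])
P, L, U = lu(A)

reconstruction_ok = np.allclose(P @ L @ U, A)    # A = P L U
unit_diagonal_ok = np.allclose(np.diag(L), 1.0)  # L has unit diagonal
```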
Symmetric positive definite matrices (Cholesky factorization)
 $A={U}^{\mathrm{T}}U$ or $A=L{L}^{\mathrm{T}}$
where $U$ is upper triangular and $L$ is lower triangular.
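A quick numerical check of the Cholesky factorization (NumPy shown only for illustration; `np.linalg.cholesky` returns the lower triangular factor):

```python
import numpy as np

# Build a symmetric positive definite matrix and verify both forms
# A = L L^T (lower) and A = U^T U (upper, with U = L^T).
M = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])
A = M.T @ M + np.eye(2)          # positive definite by construction
L = np.linalg.cholesky(A)        # lower triangular factor
U = L.T                          # upper triangular factor

ok_lower = np.allclose(L @ L.T, A)
ok_upper = np.allclose(U.T @ U, A)
```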
Symmetric positive semidefinite matrices (pivoted Cholesky factorization)
 $A=P{U}^{\mathrm{T}}U{P}^{\mathrm{T}}$ or $A=PL{L}^{\mathrm{T}}{P}^{\mathrm{T}}$
where $P$ is a permutation matrix, $U$ is upper triangular and $L$ is lower triangular. The permutation matrix $P$ (which represents row-and-column interchanges) is needed to ensure numerical stability and to reveal the numerical rank of $A$.
Symmetric indefinite matrices (Bunch–Kaufman factorization)
 $A=PUD{U}^{\mathrm{T}}{P}^{\mathrm{T}}$ or $A=PLD{L}^{\mathrm{T}}{P}^{\mathrm{T}}$
where $P$ is a permutation matrix, $U$ is upper triangular, $L$ is lower triangular, and $D$ is a block diagonal matrix with diagonal blocks of order $1$ or $2$; $U$ and $L$ have diagonal elements equal to $1$, and have $2$ by $2$ unit matrices on the diagonal corresponding to the $2$ by $2$ blocks of $D$. The permutation matrix $P$ (which represents symmetric row-and-column interchanges) and the $2$ by $2$ blocks in $D$ are needed to ensure numerical stability. If $A$ is in fact positive definite, no interchanges are needed and the factorization reduces to $A=UD{U}^{\mathrm{T}}$ or $A=LD{L}^{\mathrm{T}}$ with diagonal $D$, which is simply a variant form of the Cholesky factorization.
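The structure of this factorization can be examined with SciPy's `scipy.linalg.ldl`, which wraps the same LAPACK Bunch–Kaufman routines (`sytrf`); in this illustrative sketch the returned factor absorbs the permutation:

```python
import numpy as np
from scipy.linalg import ldl

# A symmetric indefinite matrix (zero diagonal, so no ordinary Cholesky
# factorization exists); factorize as A = L D L^T with block diagonal D.
A = np.array([[0.0, 1.0, 2.0],
              [1.0, 0.0, 1.0],
              [2.0, 1.0, 0.0]])
L, D, perm = ldl(A)   # D has 1-by-1 and 2-by-2 diagonal blocks

ok = np.allclose(L @ D @ L.T, A)
```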

### Solution of Systems of Equations

Given one of the above matrix factorizations, it is straightforward to compute a solution to $Ax=b$ by solving two subproblems, as shown below, first for $y$ and then for $x$. Each subproblem consists essentially of solving a triangular system of equations by forward or backward substitution; the permutation matrix $P$ and the block diagonal matrix $D$ introduce only a little extra complication:
General matrices ( LU factorization)
 $Ly={P}^{\mathrm{T}}b$, $Ux=y$
Symmetric positive definite matrices (Cholesky factorization)
 ${U}^{\mathrm{T}}y=b$, $Ux=y$ or $Ly=b$, ${L}^{\mathrm{T}}x=y$
Symmetric indefinite matrices (Bunch–Kaufman factorization)
 $PUDy=b$, ${U}^{\mathrm{T}}{P}^{\mathrm{T}}x=y$ or $PLDy=b$, ${L}^{\mathrm{T}}{P}^{\mathrm{T}}x=y$
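The two-step solve for the general ($LU$) case can be sketched with SciPy's triangular solvers (illustrative only; the Toolbox functions package these steps for you):

```python
import numpy as np
from scipy.linalg import lu, solve_triangular

# Solve Ax = b from A = P L U: forward substitution for L y = P^T b,
# then backward substitution for U x = y.
A = np.array([[2.0, 1.0, 1.0],
              [4.0, 3.0, 3.0],
              [8.0, 7.0, 9.0]])
b = np.array([4.0, 10.0, 24.0])

P, L, U = lu(A)
y = solve_triangular(L, P.T @ b, lower=True, unit_diagonal=True)
x = solve_triangular(U, y, lower=False)

ok = np.allclose(A @ x, b)
```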

### Sensitivity and Error Analysis

#### Normwise error bounds

Frequently, in practical problems the data $A$ and $b$ are not known exactly, and it is then important to understand how uncertainties or perturbations in the data can affect the solution.
If $x$ is the exact solution to $Ax=b$, and $x+\delta x$ is the exact solution to a perturbed problem $\left(A+\delta A\right)\left(x+\delta x\right)=\left(b+\delta b\right)$, then
 $\frac{‖\delta x‖}{‖x‖}\le \kappa \left(A\right)\left(\frac{‖\delta A‖}{‖A‖}+\frac{‖\delta b‖}{‖b‖}\right)+\cdots \text{ (second-order terms)}$
where $\kappa \left(A\right)$ is the condition number of $A$ defined by
 $\kappa \left(A\right)=‖A‖\cdot ‖{A}^{-1}‖.$ (3)
In other words, relative errors in $A$ or $b$ may be amplified in $x$ by a factor $\kappa \left(A\right)$. Estimating condition numbers discusses how to compute or estimate $\kappa \left(A\right)$.
Similar considerations apply when we study the effects of rounding errors introduced by computation in finite precision. The effects of rounding errors can be shown to be equivalent to perturbations in the original data, such that $\frac{‖\delta A‖}{‖A‖}$ and $\frac{‖\delta b‖}{‖b‖}$ are usually at most $p\left(n\right)\epsilon$, where $\epsilon$ is the machine precision and $p\left(n\right)$ is an increasing function of $n$ which is seldom larger than $10n$ (although in theory it can be as large as ${2}^{n-1}$).
In other words, the computed solution $\stackrel{^}{x}$ is the exact solution of a linear system $\left(A+\delta A\right)\stackrel{^}{x}=b+\delta b$ which is close to the original system in a normwise sense.
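The normwise bound can be checked numerically; a small sketch (matrix and perturbation chosen arbitrarily) perturbing only $b$, for which the bound holds without second-order terms:

```python
import numpy as np

# Compare the observed relative change in x against the bound
# ||dx||/||x|| <= kappa(A) ||db||/||b|| when only b is perturbed.
rng = np.random.default_rng(0)
A = np.array([[1.0, 1.0],
              [1.0, 1.0001]])        # mildly ill-conditioned
b = np.array([2.0, 2.0001])
x = np.linalg.solve(A, b)

db = 1e-8 * rng.standard_normal(2)   # small perturbation of b
dx = np.linalg.solve(A, b + db) - x

kappa = np.linalg.cond(A, p=np.inf)
lhs = np.linalg.norm(dx, np.inf) / np.linalg.norm(x, np.inf)
rhs = kappa * np.linalg.norm(db, np.inf) / np.linalg.norm(b, np.inf)
bound_holds = lhs <= rhs
```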

#### Estimating condition numbers

The previous section has emphasized the usefulness of the quantity $\kappa \left(A\right)$ in understanding the sensitivity of the solution of $Ax=b$. To compute the value of $\kappa \left(A\right)$ from equation (3) is more expensive than solving $Ax=b$ in the first place. Hence it is standard practice to estimate $\kappa \left(A\right)$, in either the $1$-norm or the $\infty$-norm, by a method which only requires $\mathit{O}\left({n}^{2}\right)$ additional operations, assuming that a suitable factorization of $A$ is available.
The method used in this chapter is Higham's modification of Hager's method (see Higham (1988)). It yields an estimate which is never larger than the true value, but which seldom falls short by more than a factor of $3$ (although artificial examples can be constructed where it is much smaller). This is acceptable since it is the order of magnitude of $\kappa \left(A\right)$ which is important rather than its precise value.
Because $\kappa \left(A\right)$ is infinite if $A$ is singular, the functions in this chapter actually return the reciprocal of $\kappa \left(A\right)$.
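A minimal sketch of a Hager-style $1$-norm estimator (not the exact LAPACK implementation) shows the key property: it needs only solves with $A$ and ${A}^{\mathrm{T}}$, i.e. $\mathit{O}\left({n}^{2}\right)$ work once the factorization is available, and never overestimates ${‖{A}^{-1}‖}_{1}$:

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def estimate_inv_norm1(lu_piv, n, iters=5):
    """Hager-style estimate of ||A^{-1}||_1 using only triangular solves
    with the LU factors; A^{-1} is never formed."""
    x = np.full(n, 1.0 / n)
    est = 0.0
    for _ in range(iters):
        y = lu_solve(lu_piv, x)            # y = A^{-1} x
        est = np.linalg.norm(y, 1)         # lower bound on ||A^{-1}||_1
        xi = np.where(y >= 0, 1.0, -1.0)
        z = lu_solve(lu_piv, xi, trans=1)  # z = A^{-T} xi
        j = int(np.argmax(np.abs(z)))
        if np.abs(z[j]) <= z @ x:          # converged
            break
        x = np.zeros(n)
        x[j] = 1.0                         # move to the best unit vector
    return est

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 4.0, 1.0],
              [0.0, 1.0, 4.0]])
lu_piv = lu_factor(A)
est = estimate_inv_norm1(lu_piv, 3)
true_norm = np.linalg.norm(np.linalg.inv(A), 1)

never_larger = est <= true_norm + 1e-12
matches_true = np.isclose(est, true_norm)   # exact for this small matrix
rcond = 1.0 / (np.linalg.norm(A, 1) * est)  # reciprocal condition number
```

As in the Toolbox, it is the reciprocal `rcond` that would be returned, so a singular matrix maps to zero rather than infinity.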

#### Scaling and Equilibration

The condition of a matrix and hence the accuracy of the computed solution, may be improved by scaling; thus if ${D}_{1}$ and ${D}_{2}$ are diagonal matrices with positive diagonal elements, then
 $B={D}_{1}A{D}_{2}$
is the scaled matrix. A general matrix is said to be equilibrated if it is scaled so that the lengths of its rows and columns have approximately equal magnitude. Similarly a general matrix is said to be row-equilibrated (column-equilibrated) if it is scaled so that the lengths of its rows (columns) have approximately equal magnitude. Note that row scaling can affect the choice of pivot when partial pivoting is used in the factorization of $A$.
A symmetric or Hermitian positive definite matrix is said to be equilibrated if the diagonal elements are all approximately equal to unity.
For further information on scaling and equilibration see Section 3.5.2 of Golub and Van Loan (1996), Sections 7.2, 7.3 and 9.8 of Higham (1988) and Section 5 of Chapter 4 of Wilkinson (1965).
Functions are provided to return the scaling factors that equilibrate a matrix for general, general band, symmetric and Hermitian positive definite and symmetric and Hermitian positive definite band matrices.
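A hand-rolled sketch of row-and-column equilibration (not the algorithm used by the Toolbox's equilibration functions) illustrating how scaling can improve the condition number:

```python
import numpy as np

# Equilibrate a badly scaled matrix: B = D1 A D2 with positive diagonal
# scalings chosen so rows, then columns, have comparable magnitude.
A = np.array([[1e8, 2e8],
              [3e-8, 4e-8]])

d1 = 1.0 / np.abs(A).max(axis=1)                # row scaling
d2 = 1.0 / np.abs(np.diag(d1) @ A).max(axis=0)  # then column scaling
B = np.diag(d1) @ A @ np.diag(d2)

improved = np.linalg.cond(B) < np.linalg.cond(A)
```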

#### Componentwise error bounds

A disadvantage of normwise error bounds is that they do not reflect any special structure in the data $A$ and $b$ – that is, a pattern of elements which are known to be zero – and the bounds are dominated by the largest elements in the data.
Componentwise error bounds overcome these limitations. Instead of the normwise relative error, we can bound the relative error in each component of $A$ and $b$:
 $\underset{i,j,k}{\mathrm{max}}\left\{\frac{\left|\delta {a}_{ij}\right|}{\left|{a}_{ij}\right|},\frac{\left|\delta {b}_{k}\right|}{\left|{b}_{k}\right|}\right\}\le \omega$
where the component-wise backward error bound $\omega$ is given by
 $\omega =\underset{i}{\mathrm{max}}\frac{\left|{r}_{i}\right|}{{\left(\left|A\right|\cdot \left|\stackrel{^}{x}\right|+\left|b\right|\right)}_{i}}.$
Functions are provided in this chapter which compute $\omega$, and also compute a forward error bound which is sometimes much sharper than the normwise bound given earlier:
 $\frac{{‖x-\stackrel{^}{x}‖}_{\infty }}{{‖x‖}_{\infty }}\le \frac{{‖\left|{A}^{-1}\right|\cdot \left|r\right|‖}_{\infty }}{{‖x‖}_{\infty }}.$
Care is taken when computing this bound to allow for rounding errors in computing $r$. The norm ${‖\left|{A}^{-1}\right|.\left|r\right|‖}_{\infty }$ is estimated cheaply (without computing ${A}^{-1}$) by a modification of the method used to estimate $\kappa \left(A\right)$.
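Both quantities are cheap to form once a computed solution is at hand; a sketch in NumPy (here ${A}^{-1}$ is formed explicitly only to keep the example short, which the library avoids):

```python
import numpy as np

# Componentwise backward error omega and the componentwise forward bound
# for a computed solution x_hat of Ax = b.
A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
b = np.array([5.0, 5.0])
x_hat = np.linalg.solve(A, b)

r = b - A @ x_hat
denom = np.abs(A) @ np.abs(x_hat) + np.abs(b)
omega = np.max(np.abs(r) / denom)               # backward error

fwd_bound = (np.linalg.norm(np.abs(np.linalg.inv(A)) @ np.abs(r), np.inf)
             / np.linalg.norm(x_hat, np.inf))   # forward error bound

tiny_backward_error = omega < 1e-12             # solution is near-exact
```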

#### Iterative refinement of the solution

If $\stackrel{^}{x}$ is an approximate computed solution to $Ax=b$, and $r$ is the corresponding residual, then a procedure for iterative refinement of $\stackrel{^}{x}$ can be defined as follows, starting with ${x}_{0}=\stackrel{^}{x}$:
• for $i=0,1,\dots \text{}$, until convergence
 compute ${r}_{i}=b-A{x}_{i}$; solve $A{d}_{i}={r}_{i}$; compute ${x}_{i+1}={x}_{i}+{d}_{i}$.
In Chapter F04, functions are provided which perform this procedure using additional precision to compute $r$, and are thus able to reduce the forward error to the level of machine precision.
The functions in this chapter do not use additional precision to compute $r$, and cannot guarantee a small forward error, but can guarantee a small backward error (except in rare cases when $A$ is very ill-conditioned, or when $A$ and $x$ are sparse in such a way that $\left|A\right|.\left|x\right|$ has a zero or very small component). The iterations continue until the backward error has been reduced as much as possible; usually only one iteration is needed.
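The loop above can be sketched directly (working precision throughout, as in this chapter; SciPy's LU routines stand in for the Toolbox functions):

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

# Iterative refinement: reuse the LU factors to solve A d_i = r_i and
# update x_{i+1} = x_i + d_i; the residual is computed in working precision.
rng = np.random.default_rng(1)
A = rng.standard_normal((50, 50))
b = rng.standard_normal(50)

lu_piv = lu_factor(A)
x = lu_solve(lu_piv, b)
for _ in range(2):                 # usually a single iteration suffices
    r = b - A @ x
    x = x + lu_solve(lu_piv, r)

# Normwise backward error of the refined solution is at machine-precision level.
backward_error = (np.linalg.norm(b - A @ x, np.inf)
                  / (np.linalg.norm(A, np.inf) * np.linalg.norm(x, np.inf)
                     + np.linalg.norm(b, np.inf)))
small = backward_error < 1e-13
```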

### Matrix Inversion

It is seldom necessary to compute an explicit inverse of a matrix. In particular, do not attempt to solve $Ax=b$ by first computing ${A}^{-1}$ and then forming the matrix-vector product $x={A}^{-1}b$; the procedure described in Solution of Systems of Equations is more efficient and more accurate.
However, functions are provided for the rare occasions when an inverse is needed, using one of the factorizations described in Matrix Factorizations.

### Packed Storage Formats

Functions which handle symmetric matrices are usually designed so that they use either the upper or lower triangle of the matrix; it is not necessary to store the whole matrix. If the upper or lower triangle is stored conventionally in the upper or lower triangle of a two-dimensional array, the remaining elements of the array can be used to store other useful data.
However, that is not always convenient, and if it is important to economize on storage, the upper or lower triangle can be stored in a one-dimensional array of length $n\left(n+1\right)/2$ or a two-dimensional array with $n\left(n+1\right)/2$ elements; in other words, the storage is almost halved.
The one-dimensional array storage format is referred to as packed storage; it is described in Packed storage. The two-dimensional array storage format is referred to as Rectangular Full Packed (RFP) format; it is described in Rectangular Full Packed (RFP) Storage. They may also be used for triangular matrices.
Functions designed for these packed storage formats perform the same number of arithmetic operations as functions which use conventional storage. Those using a packed one-dimensional array are usually less efficient, especially on high-performance computers, so there is a trade-off between storage and efficiency. The RFP functions are as efficient as those for conventional storage, although only a small subset of functions use this format.

### Band and Tridiagonal Matrices

A band matrix is one whose nonzero elements are confined to a relatively small number of subdiagonals or superdiagonals on either side of the main diagonal. A tridiagonal matrix is a special case of a band matrix with just one subdiagonal and one superdiagonal. Algorithms can take advantage of bandedness to reduce the amount of work and storage required. The storage scheme used for band matrices is described in Band storage.
The $LU$ factorization for general matrices, and the Cholesky factorization for symmetric and Hermitian positive definite matrices both preserve bandedness. Hence functions are provided which take advantage of the band structure when solving systems of linear equations.
The Cholesky factorization preserves bandedness in a very precise sense: the factor $U$ or $L$ has the same number of superdiagonals or subdiagonals as the original matrix. In the $LU$ factorization, the row-interchanges modify the band structure: if $A$ has ${k}_{l}$ subdiagonals and ${k}_{u}$ superdiagonals, then $L$ is not a band matrix but still has at most ${k}_{l}$ nonzero elements below the diagonal in each column; and $U$ has at most ${k}_{l}+{k}_{u}$ superdiagonals.
The Bunch–Kaufman factorization does not preserve bandedness, because of the need for symmetric row-and-column permutations; hence no functions are provided for symmetric indefinite band matrices.
The inverse of a band matrix does not in general have a band structure, so no functions are provided for computing inverses of band matrices.

### Block Partitioned Algorithms

Many of the functions in this chapter use what is termed a block partitioned algorithm. This means that at each major step of the algorithm a block of rows or columns is updated, and most of the computation is performed by matrix-matrix operations on these blocks. The matrix-matrix operations are performed by calls to the Level 3 BLAS which are the key to achieving high performance on many modern computers. See Golub and Van Loan (1996) or Anderson et al. (1999) for more about block partitioned algorithms.
The performance of a block partitioned algorithm varies to some extent with the block size – that is, the number of rows or columns per block. This is a machine-dependent argument, which is set to a suitable value when the library is implemented on each range of machines. You do not normally need to be aware of what value is being used. Different block sizes may be used for different functions. Values in the range $16$ to $64$ are typical.
On some machines there may be no advantage from using a block partitioned algorithm, in which case the functions use an unblocked algorithm (effectively a block size of $1$), relying solely on calls to the Level 2 BLAS.

### Mixed Precision LAPACK Routines

Some LAPACK routines use mixed precision arithmetic in an effort to solve problems more efficiently on modern hardware. They work by converting a double precision problem into an equivalent single precision problem, solving it and then using iterative refinement in double precision to find a full precision solution to the original problem. The method may fail if the problem is too ill-conditioned to allow the initial single precision solution, in which case the functions fall back to solve the original problem entirely in double precision. The vast majority of problems are not so ill-conditioned, and in those cases the technique can lead to significant gains in speed without loss of accuracy. This is particularly true on machines where double precision arithmetic is significantly slower than single precision.
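The technique can be sketched with NumPy's single- and double-precision types (a simplified illustration, not the LAPACK routine itself; the fallback logic for ill-conditioned problems is omitted):

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

# Factorize once in single precision, then refine in double precision:
# each step solves A d = r using the cheap float32 factors, while the
# residual r = b - A x is computed in float64.
rng = np.random.default_rng(2)
A = rng.standard_normal((40, 40))
b = rng.standard_normal(40)

lu32 = lu_factor(A.astype(np.float32))            # single-precision factors
x = lu_solve(lu32, b.astype(np.float32)).astype(np.float64)

for _ in range(10):
    r = b - A @ x                                 # double-precision residual
    x = x + lu_solve(lu32, r.astype(np.float32)).astype(np.float64)

x_ref = np.linalg.solve(A, b)                     # all-double reference
close_to_double = np.allclose(x, x_ref, rtol=1e-10)
```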

## Recommendations on Choice and Use of Available Functions

### Available Functions

Tables 1 to 8 in Tables of Driver and Computational Routines show the functions which are provided for performing different computations on different types of matrices. Tables 1 to 4 show functions for real matrices; Tables 5 to 8 show functions for complex matrices. Each entry in the table gives the NAG function name and the LAPACK double precision name.
Functions are provided for the following types of matrix:
• general
• general band
• general tridiagonal
• symmetric or Hermitian positive definite
• symmetric or Hermitian positive definite (packed storage)
• symmetric or Hermitian positive definite (RFP storage)
• symmetric or Hermitian positive definite band
• symmetric or Hermitian positive definite tridiagonal
• symmetric or Hermitian indefinite
• symmetric or Hermitian indefinite (packed storage)
• triangular
• triangular (packed storage)
• triangular (RFP storage)
• triangular band
For each of the above types of matrix (except where indicated), functions are provided to perform the following computations:
 (a) (except for RFP matrices) solve a system of linear equations (driver functions);
 (b) (except for RFP matrices) solve a system of linear equations with condition and error estimation (expert drivers);
 (c) (except for triangular matrices) factorize the matrix (see Matrix Factorizations);
 (d) solve a system of linear equations, using the factorization (see Solution of Systems of Equations);
 (e) (except for RFP matrices) estimate the condition number of the matrix, using the factorization (see Estimating condition numbers); these functions also require the norm of the original matrix (except when the matrix is triangular), which may be computed by a function in
 (f) (except for RFP matrices) refine the solution and compute forward and backward error bounds (see Componentwise error bounds and Iterative refinement of the solution); these functions require the original matrix and right-hand side, as well as the factorization returned from (c) and the solution returned from (d);
 (g) (except for band and tridiagonal matrices) invert the matrix, using the factorization (see Matrix Inversion);
 (h) (except for tridiagonal, symmetric indefinite, triangular and RFP matrices) compute scale factors to equilibrate the matrix (see Scaling and Equilibration).
Thus, to solve a particular problem, it is usually only necessary to call a single driver function, but alternatively two or more functions may be called in succession. This is illustrated in the example programs in the function documents.

### Matrix Storage Schemes

In this chapter the following different storage schemes are used for matrices:
• conventional storage in a two-dimensional array;
• packed storage for symmetric, Hermitian or triangular matrices;
• rectangular full packed (RFP) storage for symmetric, Hermitian or triangular matrices;
• band storage for band matrices.
In the examples below, $*$ or $⌴$ indicates an array element which need not be set and is not referenced by the functions.

#### Conventional storage

The default scheme for storing matrices is the obvious one: a matrix $A$ is stored in a two-dimensional array a, with matrix element ${a}_{ij}$ stored in array element $\mathrm{a}\left(i,j\right)$.
If a matrix is triangular (upper or lower, as specified by the argument uplo), only the elements of the relevant triangle are stored; the remaining elements of the array need not be set. Such elements are indicated by * or $⌴$ in the examples below.
For example, when $n=4$:
 uplo Triangular matrix $\mathbit{A}$ Storage in array a 'U' $\left(\begin{array}{llll}{a}_{11}& {a}_{12}& {a}_{13}& {a}_{14}\\ & {a}_{22}& {a}_{23}& {a}_{24}\\ & & {a}_{33}& {a}_{34}\\ & & & {a}_{44}\end{array}\right)$ $\begin{array}{cccc}{a}_{11}& {a}_{12}& {a}_{13}& {a}_{14}\\ \text{⌴}& {a}_{22}& {a}_{23}& {a}_{24}\\ \text{⌴}& \text{⌴}& {a}_{33}& {a}_{34}\\ \text{⌴}& \text{⌴}& \text{⌴}& {a}_{44}\end{array}$ 'L' $\left(\begin{array}{llll}{a}_{11}& & & \\ {a}_{21}& {a}_{22}& & \\ {a}_{31}& {a}_{32}& {a}_{33}& \\ {a}_{41}& {a}_{42}& {a}_{43}& {a}_{44}\end{array}\right)$ $\begin{array}{cccc}{a}_{11}& \text{⌴}& \text{⌴}& \text{⌴}\\ {a}_{21}& {a}_{22}& \text{⌴}& \text{⌴}\\ {a}_{31}& {a}_{32}& {a}_{33}& \text{⌴}\\ {a}_{41}& {a}_{42}& {a}_{43}& {a}_{44}\end{array}$
Functions which handle symmetric or Hermitian matrices allow for either the upper or lower triangle of the matrix (as specified by uplo) to be stored in the corresponding elements of the array; the remaining elements of the array need not be set.
For example, when $n=4$:
 uplo Hermitian matrix $\mathbit{A}$ Storage in array a 'U' $\left(\begin{array}{llll}{a}_{11}& {a}_{12}& {a}_{13}& {a}_{14}\\ {\stackrel{-}{a}}_{12}& {a}_{22}& {a}_{23}& {a}_{24}\\ {\stackrel{-}{a}}_{13}& {\stackrel{-}{a}}_{23}& {a}_{33}& {a}_{34}\\ {\stackrel{-}{a}}_{14}& {\stackrel{-}{a}}_{24}& {\stackrel{-}{a}}_{34}& {a}_{44}\end{array}\right)$ $\begin{array}{cccc}{a}_{11}& {a}_{12}& {a}_{13}& {a}_{14}\\ \text{⌴}& {a}_{22}& {a}_{23}& {a}_{24}\\ \text{⌴}& \text{⌴}& {a}_{33}& {a}_{34}\\ \text{⌴}& \text{⌴}& \text{⌴}& {a}_{44}\end{array}$ 'L' $\left(\begin{array}{llll}{a}_{11}& {\stackrel{-}{a}}_{21}& {\stackrel{-}{a}}_{31}& {\stackrel{-}{a}}_{41}\\ {a}_{21}& {a}_{22}& {\stackrel{-}{a}}_{32}& {\stackrel{-}{a}}_{42}\\ {a}_{31}& {a}_{32}& {a}_{33}& {\stackrel{-}{a}}_{43}\\ {a}_{41}& {a}_{42}& {a}_{43}& {a}_{44}\end{array}\right)$ $\begin{array}{cccc}{a}_{11}& \text{⌴}& \text{⌴}& \text{⌴}\\ {a}_{21}& {a}_{22}& \text{⌴}& \text{⌴}\\ {a}_{31}& {a}_{32}& {a}_{33}& \text{⌴}\\ {a}_{41}& {a}_{42}& {a}_{43}& {a}_{44}\end{array}$

#### Packed storage

Symmetric, Hermitian or triangular matrices may be stored more compactly, if the relevant triangle (again as specified by uplo) is packed by columns in a one-dimensional array. In this chapter, as in Chapter F08, arrays which hold matrices in packed storage have names ending in P. For a matrix of order $n$, the array must have at least $n\left(n+1\right)/2$ elements. So:
• if $\mathrm{uplo}=\text{'U'}$, ${a}_{ij}$ is stored in $\mathrm{ap}\left(i+j\left(j-1\right)/2\right)$ for $i\le j$;
• if $\mathrm{uplo}=\text{'L'}$, ${a}_{ij}$ is stored in $\mathrm{ap}\left(i+\left(2n-j\right)\left(j-1\right)/2\right)$ for $j\le i$.
For example:
 Triangle of matrix $A$ Packed storage in array ap $\mathrm{uplo}=\text{'U'}$ $\left(\begin{array}{llll}{a}_{11}& {a}_{12}& {a}_{13}& {a}_{14}\\ & {a}_{22}& {a}_{23}& {a}_{24}\\ & & {a}_{33}& {a}_{34}\\ & & & {a}_{44}\end{array}\right)$ ${a}_{11}\underbrace{{a}_{12}{a}_{22}}\underbrace{{a}_{13}{a}_{23}{a}_{33}}\underbrace{{a}_{14}{a}_{24}{a}_{34}{a}_{44}}$ $\mathrm{uplo}=\text{'L'}$ $\left(\begin{array}{llll}{a}_{11}& & & \\ {a}_{21}& {a}_{22}& & \\ {a}_{31}& {a}_{32}& {a}_{33}& \\ {a}_{41}& {a}_{42}& {a}_{43}& {a}_{44}\end{array}\right)$ $\underbrace{{a}_{11}{a}_{21}{a}_{31}{a}_{41}}\underbrace{{a}_{22}{a}_{32}{a}_{42}}\underbrace{{a}_{33}{a}_{43}}{a}_{44}$
Note that for real symmetric matrices, packing the upper triangle by columns is equivalent to packing the lower triangle by rows; packing the lower triangle by columns is equivalent to packing the upper triangle by rows. (For complex Hermitian matrices, the only difference is that the off-diagonal elements are conjugated.)
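The packed indexing formulas translate directly into code; a small sketch packing the upper triangle by columns (1-based indices as in the formulas above):

```python
import numpy as np

# Pack the upper triangle of a symmetric matrix into ap using
# ap(i + j(j-1)/2) for i <= j (1-based, uplo = 'U').
n = 4
A = np.arange(1.0, 17.0).reshape(n, n)
A = (A + A.T) / 2                      # symmetrize

ap = np.empty(n * (n + 1) // 2)        # n(n+1)/2 = 10 elements
for j in range(1, n + 1):
    for i in range(1, j + 1):
        ap[i + j * (j - 1) // 2 - 1] = A[i - 1, j - 1]

# Column-by-column order: a11, a12, a22, a13, a23, a33, ...
packed_order_ok = (ap[0] == A[0, 0] and ap[1] == A[0, 1]
                   and ap[2] == A[1, 1] and ap[3] == A[0, 2])
```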

#### Rectangular Full Packed (RFP) Storage

The rectangular full packed (RFP) storage format offers the same savings in storage as the packed storage format (described in Packed storage), but is likely to be much more efficient in general since the block structure of the matrix is maintained. This structure can be exploited using block partitioned algorithms (see Block Partitioned Algorithms) in a similar way to matrices that use conventional storage.
Figure 1
Figure 1 gives a graphical representation of the key idea of RFP for the particular case of a lower triangular matrix of even dimensions. In all cases the original triangular matrix of stored elements is separated into a trapezoidal part and a triangular part. The number of columns in these two parts is equal when the dimension of the matrix is even, $n=2k$, while the trapezoidal part has $k+1$ columns when $n=2k+1$. The smaller part is then transposed and fitted onto the trapezoidal part forming a rectangle. The rectangle has dimensions $2k+1$ and $q$, where $q=k$ when $n$ is even and $q=k+1$ when $n$ is odd.
For functions using RFP there is the option of storing the rectangle as described above ($\mathrm{transr}=\text{'N'}$) or its transpose ($\mathrm{transr}=\text{'T'}$, for real a) or its conjugate transpose ($\mathrm{transr}=\text{'C'}$, for complex a).
As an example, we first consider RFP for the case $n=2k$ with $k=3$.
If $\mathrm{transr}=\text{'N'}$, then ar holds a as follows:
• For $\mathrm{uplo}=\text{'U'}$ the upper trapezoid $\mathrm{ar}\left(1:6,1:3\right)$ consists of the last three columns of a upper. The lower triangle $\mathrm{ar}\left(5:7,1:3\right)$ consists of the transpose of the first three columns of a upper.
• For $\mathrm{uplo}=\text{'L'}$ the lower trapezoid $\mathrm{ar}\left(2:7,1:3\right)$ consists of the first three columns of a lower. The upper triangle $\mathrm{ar}\left(1:3,1:3\right)$ consists of the transpose of the last three columns of a lower.
If $\mathrm{transr}=\text{'T'}$, then ar in both uplo cases is just the transpose of ar as defined when $\mathrm{transr}=\text{'N'}$.
 uplo Triangle of matrix $\mathbf{A}$ Rectangular Full Packed matrix $\mathbf{AR}$ $\mathrm{transr}=\text{'N'}$ $\mathrm{transr}=\text{'T'}$ 'U' $\left(\begin{array}{llllll}\mathbf{00}& \mathbf{01}& \mathbf{02}& 03& 04& 05\\ & \mathbf{11}& \mathbf{12}& 13& 14& 15\\ & & \mathbf{22}& 23& 24& 25\\ & & & 33& 34& 35\\ & & & & 44& 45\\ & & & & & 55\end{array}\right)$ $\begin{array}{ccc}03& 04& 05\\ 13& 14& 15\\ 23& 24& 25\\ 33& 34& 35\\ \mathbf{00}& 44& 45\\ \mathbf{01}& \mathbf{11}& 55\\ \mathbf{02}& \mathbf{12}& \mathbf{22}\end{array}$ $\begin{array}{ccccccc}03& 13& 23& 33& \mathbf{00}& \mathbf{01}& \mathbf{02}\\ 04& 14& 24& 34& 44& \mathbf{11}& \mathbf{12}\\ 05& 15& 25& 35& 45& 55& \mathbf{22}\end{array}$ 'L' $\left(\begin{array}{l}00\\ 10& 11\\ 20& 21& 22\\ 30& 31& 32& \mathbf{33}\\ 40& 41& 42& \mathbf{43}& \mathbf{44}\\ 50& 51& 52& \mathbf{53}& \mathbf{54}& \mathbf{55}\end{array}\right)$ $\begin{array}{ccc}\mathbf{33}& \mathbf{43}& \mathbf{53}\\ 00& \mathbf{44}& \mathbf{54}\\ 10& 11& \mathbf{55}\\ 20& 21& 22\\ 30& 31& 32\\ 40& 41& 42\\ 50& 51& 52\end{array}$ $\begin{array}{ccccccc}\mathbf{33}& 00& 10& 20& 30& 40& 50\\ \mathbf{43}& \mathbf{44}& 11& 21& 31& 41& 51\\ \mathbf{53}& \mathbf{54}& \mathbf{55}& 22& 32& 42& 52\end{array}$
Now we consider RFP for the case $n=2k+1$ and $k=2$.
If $\mathrm{transr}=\text{'N'}$, ar holds a as follows:
• if $\mathrm{uplo}=\text{'U'}$ the upper trapezoid $\mathrm{ar}\left(1:5,1:3\right)$ consists of the last three columns of a upper. The lower triangle $\mathrm{ar}\left(4:5,1:2\right)$ consists of the transpose of the first two columns of a upper;
• if $\mathrm{uplo}=\text{'L'}$ the lower trapezoid $\mathrm{ar}\left(1:5,1:3\right)$ consists of the first three columns of a lower. The upper triangle $\mathrm{ar}\left(1:2,2:3\right)$ consists of the transpose of the last two columns of a lower.
If $\mathrm{transr}=\text{'T'}$, ar in both uplo cases is just the transpose of ar as defined when $\mathrm{transr}=\text{'N'}$.
 uplo Triangle of matrix $\mathbf{A}$ Rectangular Full Packed matrix $\mathbf{AR}$ $\mathrm{transr}=\text{'N'}$ $\mathrm{transr}=\text{'T'}$ 'U' $\left(\begin{array}{lllll}\mathbf{00}& \mathbf{01}& 02& 03& 04\\ & \mathbf{11}& 12& 13& 14\\ & & 22& 23& 24\\ & & & 33& 34\\ & & & & 44\end{array}\right)$ $\begin{array}{ccc}02& 03& 04\\ 12& 13& 14\\ 22& 23& 24\\ \mathbf{00}& 33& 34\\ \mathbf{01}& \mathbf{11}& 44\end{array}$ $\begin{array}{ccccc}02& 12& 22& \mathbf{00}& \mathbf{01}\\ 03& 13& 23& 33& \mathbf{11}\\ 04& 14& 24& 34& 44\end{array}$ 'L' $\left(\begin{array}{l}00\\ 10& 11\\ 20& 21& 22\\ 30& 31& 32& \mathbf{33}\\ 40& 41& 42& \mathbf{43}& \mathbf{44}\end{array}\right)$ $\begin{array}{ccc}00& \mathbf{33}& \mathbf{43}\\ 10& 11& \mathbf{44}\\ 20& 21& 22\\ 30& 31& 32\\ 40& 41& 42\end{array}$ $\begin{array}{ccccc}00& 10& 20& 30& 40\\ \mathbf{33}& 11& 21& 31& 41\\ \mathbf{43}& \mathbf{44}& 22& 32& 42\end{array}$
Explicitly, in the real matrix case, ar is a one-dimensional array of length $n\left(n+1\right)/2$ and contains the elements of a as follows:
for $\mathrm{uplo}=\text{'U'}$ and $\mathrm{transr}=\text{'N'}$,
${a}_{ij}$ is stored in $\mathrm{ar}\left(\left(2k+1\right)\left(\mathit{i}-1\right)+\mathit{j}+k+1\right)$, for $1\le \mathit{j}\le k$ and $1\le i\le \mathit{j}$, and
${a}_{ij}$ is stored in $\mathrm{ar}\left(\left(2k+1\right)\left(\mathit{j}-k-1\right)+i\right)$, for $k<j\le n$ and $1\le i\le j$;
for $\mathrm{uplo}=\text{'U'}$ and $\mathrm{transr}=\text{'T'}$,
${a}_{ij}$ is stored in $\mathrm{ar}\left(q\left(j+k\right)+i\right)$, for $1\le j\le k$ and $1\le i\le j$, and
${a}_{ij}$ is stored in $\mathrm{ar}\left(q\left(i-1\right)+\mathit{j}-k\right)$, for $k<j\le n$ and $1\le i\le j$;
for $\mathrm{uplo}=\text{'L'}$ and $\mathrm{transr}=\text{'N'}$,
${a}_{ij}$ is stored in $\mathrm{ar}\left(\left(2k+1\right)\left(j-1\right)+i+k-q+1\right)$, for $1\le j\le q$ and $j\le i\le n$, and
${a}_{ij}$ is stored in $\mathrm{ar}\left(\left(2k+1\right)\left(i-k-1\right)+j-q\right)$, for $q<j\le n$ and $j\le i\le n$;
for $\mathrm{uplo}=\text{'L'}$ and $\mathrm{transr}=\text{'T'}$,
${a}_{ij}$ is stored in $\mathrm{ar}\left(q\left(i+k-q\right)+j\right)$, for $1\le j\le q$ and $1\le i\le n$, and
${a}_{ij}$ is stored in $\mathrm{ar}\left(q\left(j-1-q\right)+i-k\right)$, for $q<j\le n$ and $1\le i\le n$.
In the case of complex matrices, the assumption is that the full matrix, if it existed, would be Hermitian. Thus, when $\mathrm{transr}=\text{'N'}$, the triangular portion of a that is, in the real case, transposed into the notional $\left(2k+1\right)$ by $q$ RFP matrix is also conjugated. When $\mathrm{transr}=\text{'C'}$ the notional $q$ by $\left(2k+1\right)$ RFP matrix is the conjugate transpose of the corresponding $\mathrm{transr}=\text{'N'}$ RFP matrix. Explicitly, for complex a, the array ar contains the elements (or conjugated elements) of a as follows:
for $\mathrm{uplo}=\text{'U'}$ and $\mathrm{transr}=\text{'N'}$,
${\stackrel{-}{a}}_{ij}$ is stored in $\mathrm{ar}\left(\left(2k+1\right)\left(\mathit{i}-1\right)+\mathit{j}+k+1\right)$, for $1\le \mathit{j}\le k$ and $1\le i\le \mathit{j}$, and
${a}_{ij}$ is stored in $\mathrm{ar}\left(\left(2k+1\right)\left(j-k-1\right)+i\right)$, for $k<j\le n$ and $1\le i\le j$;
for $\mathrm{uplo}=\text{'U'}$ and $\mathrm{transr}=\text{'C'}$,
${a}_{ij}$ is stored in $\mathrm{ar}\left(q\left(j+k\right)+i\right)$, for $1\le j\le k$ and $1\le i\le j$, and
${\stackrel{-}{a}}_{ij}$ is stored in $\mathrm{ar}\left(q\left(i-1\right)+j-k\right)$, for $k<j\le n$ and $1\le i\le j$;
for $\mathrm{uplo}=\text{'L'}$ and $\mathrm{transr}=\text{'N'}$,
${a}_{ij}$ is stored in $\mathrm{ar}\left(\left(2k+1\right)\left(j-1\right)+i+k-q+1\right)$, for $1\le j\le q$ and $j\le i\le n$, and
${\stackrel{-}{a}}_{ij}$ is stored in $\mathrm{ar}\left(\left(2k+1\right)\left(i-k-1\right)+j-q\right)$, for $q<j\le n$ and $j\le i\le n$;
for $\mathrm{uplo}=\text{'L'}$ and $\mathrm{transr}=\text{'C'}$,
${\stackrel{-}{a}}_{ij}$ is stored in $\mathrm{ar}\left(q\left(i+k-q\right)+j\right)$, for $1\le j\le q$ and $j\le i\le n$, and
${a}_{ij}$ is stored in $\mathrm{ar}\left(q\left(j-1-q\right)+i-k\right)$, for $q<j\le n$ and $j\le i\le n$.

#### Band storage

A band matrix with ${k}_{l}$ subdiagonals and ${k}_{u}$ superdiagonals may be stored compactly in a two-dimensional array with ${k}_{l}+{k}_{u}+1$ rows and $n$ columns. Columns of the matrix are stored in corresponding columns of the array, and diagonals of the matrix are stored in rows of the array. This storage scheme should be used in practice only if ${k}_{l}$, ${k}_{u}\ll n$, although the functions in Chapters F07 and F08 work correctly for all values of ${k}_{l}$ and ${k}_{u}$. In Chapters F07 and F08 arrays which hold matrices in band storage have names ending in $\mathrm{B}$.
To be precise, elements ${a}_{ij}$ of the matrix are stored as follows:
• ${a}_{ij}$ is stored in $\mathrm{ab}\left({k}_{u}+1+i-j,j\right)$ for $\mathrm{max}\phantom{\rule{0.125em}{0ex}}\left(1,j-{k}_{u}\right)\le i\le \mathrm{min}\phantom{\rule{0.125em}{0ex}}\left(n,j+{k}_{l}\right)$.
For example, when $n=5$, ${k}_{l}=2$ and ${k}_{u}=1$:
| Band matrix $\mathbf{A}$ | Band storage in array ab |
|---|---|
| $\left(\begin{array}{lllll}{a}_{11}& {a}_{12}& & & \\ {a}_{21}& {a}_{22}& {a}_{23}& & \\ {a}_{31}& {a}_{32}& {a}_{33}& {a}_{34}& \\ & {a}_{42}& {a}_{43}& {a}_{44}& {a}_{45}\\ & & {a}_{53}& {a}_{54}& {a}_{55}\end{array}\right)$ | $\begin{array}{ccccc}\text{*}& {a}_{12}& {a}_{23}& {a}_{34}& {a}_{45}\\ {a}_{11}& {a}_{22}& {a}_{33}& {a}_{44}& {a}_{55}\\ {a}_{21}& {a}_{32}& {a}_{43}& {a}_{54}& \text{*}\\ {a}_{31}& {a}_{42}& {a}_{53}& \text{*}& \text{*}\end{array}$ |
The elements marked $*$ in the upper left and lower right corners of the array ab need not be set, and are not referenced by the functions.
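The mapping above can be sketched in code. The following is purely illustrative (Python, not NAG code; `pack_band` is a hypothetical helper), packing the $n=5$, ${k}_{l}=2$, ${k}_{u}=1$ example into $\left({k}_{l}+{k}_{u}+1\right)$ by $n$ storage:

```python
# Illustrative sketch of the band storage mapping a_ij -> ab(ku+1+i-j, j).
# (i, j) are 1-based as in the text; the Python lists are 0-based inside.
def pack_band(a, kl, ku):
    """Pack a dense n-by-n band matrix a into (kl + ku + 1)-by-n storage."""
    n = len(a)
    ab = [[None] * n for _ in range(kl + ku + 1)]   # None = '*' (unset)
    for j in range(1, n + 1):
        for i in range(max(1, j - ku), min(n, j + kl) + 1):
            ab[ku + 1 + i - j - 1][j - 1] = a[i - 1][j - 1]
    return ab

# The n = 5, kl = 2, ku = 1 example above (entries named after a_ij):
a = [[11, 12,  0,  0,  0],
     [21, 22, 23,  0,  0],
     [31, 32, 33, 34,  0],
     [ 0, 42, 43, 44, 45],
     [ 0,  0, 53, 54, 55]]
ab = pack_band(a, kl=2, ku=1)
assert ab[1] == [11, 22, 33, 44, 55]       # the diagonal sits in row ku + 1
assert ab[0][0] is None and ab[3][4] is None  # corner '*' entries stay unset
```

Note that row ${k}_{u}+1$ of ab always holds the diagonal, superdiagonals sit above it and subdiagonals below, exactly as in the table.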
Note:  when a general band matrix is supplied for $LU$ factorization, space must be allowed to store an additional ${k}_{l}$ superdiagonals, generated by fill-in as a result of row interchanges. This means that the matrix is stored according to the above scheme, but with ${k}_{l}+{k}_{u}$ superdiagonals.
Triangular band matrices are stored in the same format, with either ${k}_{l}=0$ if upper triangular, or ${k}_{u}=0$ if lower triangular.
For symmetric or Hermitian band matrices with $k$ subdiagonals or superdiagonals, only the upper or lower triangle (as specified by uplo) need be stored:
• if $\mathrm{uplo}=\text{'U'}$, ${a}_{ij}$ is stored in $\mathrm{ab}\left(k+1+i-j,j\right)$ for $\mathrm{max}\phantom{\rule{0.125em}{0ex}}\left(1,j-k\right)\le i\le j$;
• if $\mathrm{uplo}=\text{'L'}$, ${a}_{ij}$ is stored in $\mathrm{ab}\left(1+i-j,j\right)$ for $j\le i\le \mathrm{min}\phantom{\rule{0.125em}{0ex}}\left(n,j+k\right)$.
For example, when $n=5$ and $k=2$:
| uplo | Hermitian band matrix $\mathbf{A}$ | Band storage in array ab |
|---|---|---|
| 'U' | $\left(\begin{array}{lllll}{a}_{11}& {a}_{12}& {a}_{13}& & \\ {\stackrel{-}{a}}_{12}& {a}_{22}& {a}_{23}& {a}_{24}& \\ {\stackrel{-}{a}}_{13}& {\stackrel{-}{a}}_{23}& {a}_{33}& {a}_{34}& {a}_{35}\\ & {\stackrel{-}{a}}_{24}& {\stackrel{-}{a}}_{34}& {a}_{44}& {a}_{45}\\ & & {\stackrel{-}{a}}_{35}& {\stackrel{-}{a}}_{45}& {a}_{55}\end{array}\right)$ | $\begin{array}{lllll}\text{*}& \text{*}& {a}_{13}& {a}_{24}& {a}_{35}\\ \text{*}& {a}_{12}& {a}_{23}& {a}_{34}& {a}_{45}\\ {a}_{11}& {a}_{22}& {a}_{33}& {a}_{44}& {a}_{55}\end{array}$ |
| 'L' | $\left(\begin{array}{lllll}{a}_{11}& {\stackrel{-}{a}}_{21}& {\stackrel{-}{a}}_{31}& & \\ {a}_{21}& {a}_{22}& {\stackrel{-}{a}}_{32}& {\stackrel{-}{a}}_{42}& \\ {a}_{31}& {a}_{32}& {a}_{33}& {\stackrel{-}{a}}_{43}& {\stackrel{-}{a}}_{53}\\ & {a}_{42}& {a}_{43}& {a}_{44}& {\stackrel{-}{a}}_{54}\\ & & {a}_{53}& {a}_{54}& {a}_{55}\end{array}\right)$ | $\begin{array}{lllll}{a}_{11}& {a}_{22}& {a}_{33}& {a}_{44}& {a}_{55}\\ {a}_{21}& {a}_{32}& {a}_{43}& {a}_{54}& \text{*}\\ {a}_{31}& {a}_{42}& {a}_{53}& \text{*}& \text{*}\end{array}$ |
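The two symmetric-band mappings quoted above differ only in which row of ab holds the diagonal. A minimal sketch (Python, not NAG code; `sym_band_index` is a hypothetical helper) for the $n=5$, $k=2$ example:

```python
# Sketch of the symmetric/Hermitian band storage mappings:
#   uplo='U': a_ij -> ab(k+1+i-j, j)  for max(1, j-k) <= i <= j
#   uplo='L': a_ij -> ab(1+i-j, j)    for j <= i <= min(n, j+k)
def sym_band_index(i, j, k, uplo):
    """1-based (row, column) in ab of a_ij within the stored triangle."""
    if uplo == 'U':
        return (k + 1 + i - j, j)
    return (1 + i - j, j)

# n = 5, k = 2 example: a_13 sits in ab(1, 3) for 'U', while its mirror
# a_31 sits in ab(3, 1) for 'L', matching the table above.
assert sym_band_index(1, 3, 2, 'U') == (1, 3)
assert sym_band_index(3, 1, 2, 'L') == (3, 1)
assert sym_band_index(3, 3, 2, 'U') == (3, 3)  # diagonal in row k+1 for 'U'
assert sym_band_index(3, 3, 2, 'L') == (1, 3)  # diagonal in row 1 for 'L'
```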
Note that different storage schemes for band matrices are used by some functions in Chapters F01, F02, F03 and F04.

#### Unit triangular matrices

Some functions in this chapter have an option to handle unit triangular matrices (that is, triangular matrices with diagonal elements equal to $1$). This option is specified by an argument diag. If $\mathrm{diag}=\text{'U'}$ (Unit triangular), the diagonal elements of the matrix need not be stored, and the corresponding array elements are not referenced by the functions. The storage scheme for the rest of the matrix (whether conventional, packed or band) remains unchanged.

#### Real diagonal elements of complex matrices

Complex Hermitian matrices have diagonal elements that are by definition purely real. In addition, complex triangular matrices which arise in Cholesky factorization are defined by the algorithm to have real diagonal elements.
If such matrices are supplied as input to functions in Chapters F07 and F08, the imaginary parts of the diagonal elements are not referenced, but are assumed to be zero. If such matrices are returned as output by the functions, the computed imaginary parts are explicitly set to zero.

### Parameter Conventions

#### Option arguments

Most functions in this chapter have one or more option arguments, of type string. The descriptions in Section 5 of the function documents refer only to upper-case values (for example $\mathrm{uplo}=\text{'U'}$ or $\text{'L'}$); however, in every case, the corresponding lower-case characters may be supplied (with the same meaning). Any other value is illegal.
A longer character string can be passed as the actual argument, making the calling program more readable, but only the first character is significant. For example:
```
[b, info] = f07ae('Transpose', a, ipiv, b);
```
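The convention can be sketched as follows (Python rather than MATLAB, purely illustrative; `normalize_option` is a hypothetical helper, not part of the NAG Toolbox):

```python
# Only the first character of an option string is significant, and case
# is ignored; any other value is illegal.
def normalize_option(value, allowed):
    """Map e.g. 'Transpose' or 't' onto the canonical single character."""
    first = value[:1].upper()
    if first not in allowed:
        raise ValueError(f"illegal option value: {value!r}")
    return first

assert normalize_option('Transpose', {'N', 'T', 'C'}) == 'T'
assert normalize_option('n', {'N', 'T', 'C'}) == 'N'
```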

#### Problem dimensions

It is permissible for the problem dimensions (for example, m in nag_lapack_dgetrf (f07ad), n or nrhs_p in nag_lapack_dgetrs (f07ae)) to be passed as zero, in which case the computation (or part of it) is skipped. Negative dimensions are regarded as an error.

### Tables of Driver and Computational Functions

#### Real matrices

| Operation | general | general band | general tridiagonal |
|---|---|---|---|
| driver | nag_lapack_dgesv (f07aa) | nag_lapack_dgbsv (f07ba) | nag_lapack_dgtsv (f07ca) |
| expert driver | nag_lapack_dgesvx (f07ab) | nag_lapack_dgbsvx (f07bb) | nag_lapack_dgtsvx (f07cb) |
| mixed precision driver | nag_lapack_dsgesv (f07ac) | | |
| factorize | nag_lapack_dgetrf (f07ad) | nag_lapack_dgbtrf (f07bd) | nag_lapack_dgttrf (f07cd) |
| solve | nag_lapack_dgetrs (f07ae) | nag_lapack_dgbtrs (f07be) | nag_lapack_dgttrs (f07ce) |
| scaling factors | nag_lapack_dgeequ (f07af) | nag_lapack_dgbequ (f07bf) | |
| condition number | nag_lapack_dgecon (f07ag) | nag_lapack_dgbcon (f07bg) | nag_lapack_dgtcon (f07cg) |
| error estimate | nag_lapack_dgerfs (f07ah) | nag_lapack_dgbrfs (f07bh) | nag_lapack_dgtrfs (f07ch) |
| invert | nag_lapack_dgetri (f07aj) | | |
Table 1
Functions for real general matrices
| Operation | symmetric positive definite | symmetric positive definite (packed storage) | symmetric positive definite (RFP storage) | symmetric positive definite band | symmetric positive definite tridiagonal | symmetric positive semidefinite |
|---|---|---|---|---|---|---|
| driver | nag_lapack_dposv (f07fa) | nag_lapack_dppsv (f07ga) | | nag_lapack_dpbsv (f07ha) | nag_lapack_dptsv (f07ja) | |
| expert driver | nag_lapack_dposvx (f07fb) | nag_lapack_dppsvx (f07gb) | | nag_lapack_dpbsvx (f07hb) | nag_lapack_dptsvx (f07jb) | |
| mixed precision driver | nag_lapack_dsposv (f07fc) | | | | | |
| factorize | nag_lapack_dpotrf (f07fd) | nag_lapack_dpptrf (f07gd) | nag_lapack_dpftrf (f07wd) | nag_lapack_dpbtrf (f07hd) | nag_lapack_dpttrf (f07jd) | nag_lapack_dpstrf (f07kd) |
| solve | nag_lapack_dpotrs (f07fe) | nag_lapack_dpptrs (f07ge) | nag_lapack_dpftrs (f07we) | nag_lapack_dpbtrs (f07he) | nag_lapack_dpttrs (f07je) | |
| scaling factors | nag_lapack_dpoequ (f07ff) | nag_lapack_dppequ (f07gf) | | nag_lapack_dpbequ (f07hf) | | |
| condition number | nag_lapack_dpocon (f07fg) | nag_lapack_dppcon (f07gg) | | nag_lapack_dpbcon (f07hg) | nag_lapack_dptcon (f07jg) | |
| error estimate | nag_lapack_dporfs (f07fh) | nag_lapack_dpprfs (f07gh) | | nag_lapack_dpbrfs (f07hh) | nag_lapack_dptrfs (f07jh) | |
| invert | nag_lapack_dpotri (f07fj) | nag_lapack_dpptri (f07gj) | nag_lapack_dpftri (f07wj) | | | |
Table 2
Functions for real symmetric positive definite and positive semidefinite matrices
| Operation | symmetric indefinite | symmetric indefinite (packed storage) |
|---|---|---|
| driver | nag_lapack_dsysv (f07ma) | nag_lapack_dspsv (f07pa) |
| expert driver | nag_lapack_dsysvx (f07mb) | nag_lapack_dspsvx (f07pb) |
| factorize | nag_lapack_dsytrf (f07md) | nag_lapack_dsptrf (f07pd) |
| solve | nag_lapack_dsytrs (f07me) | nag_lapack_dsptrs (f07pe) |
| condition number | nag_lapack_dsycon (f07mg) | nag_lapack_dspcon (f07pg) |
| error estimate | nag_lapack_dsyrfs (f07mh) | nag_lapack_dsprfs (f07ph) |
| invert | nag_lapack_dsytri (f07mj) | nag_lapack_dsptri (f07pj) |
Table 3
Functions for real symmetric indefinite matrices
| Operation | triangular | triangular (packed storage) | triangular (RFP storage) | triangular band |
|---|---|---|---|---|
| solve | nag_lapack_dtrtrs (f07te) | nag_lapack_dtptrs (f07ue) | | nag_lapack_dtbtrs (f07ve) |
| condition number | nag_lapack_dtrcon (f07tg) | nag_lapack_dtpcon (f07ug) | | nag_lapack_dtbcon (f07vg) |
| error estimate | nag_lapack_dtrrfs (f07th) | nag_lapack_dtprfs (f07uh) | | nag_lapack_dtbrfs (f07vh) |
| invert | nag_lapack_dtrtri (f07tj) | nag_lapack_dtptri (f07uj) | nag_lapack_dtftri (f07wk) | |
Table 4
Functions for real triangular matrices

#### Complex matrices

| Operation | general | general band | general tridiagonal |
|---|---|---|---|
| driver | nag_lapack_zgesv (f07an) | nag_lapack_zgbsv (f07bn) | nag_lapack_zgtsv (f07cn) |
| expert driver | nag_lapack_zgesvx (f07ap) | nag_lapack_zgbsvx (f07bp) | nag_lapack_zgtsvx (f07cp) |
| mixed precision driver | nag_lapack_zcgesv (f07aq) | | |
| factorize | nag_lapack_zgetrf (f07ar) | nag_lapack_zgbtrf (f07br) | nag_lapack_zgttrf (f07cr) |
| solve | nag_lapack_zgetrs (f07as) | nag_lapack_zgbtrs (f07bs) | nag_lapack_zgttrs (f07cs) |
| scaling factors | nag_lapack_zgeequ (f07at) | nag_lapack_zgbequ (f07bt) | |
| condition number | nag_lapack_zgecon (f07au) | nag_lapack_zgbcon (f07bu) | nag_lapack_zgtcon (f07cu) |
| error estimate | nag_lapack_zgerfs (f07av) | nag_lapack_zgbrfs (f07bv) | nag_lapack_zgtrfs (f07cv) |
| invert | nag_lapack_zgetri (f07aw) | | |
Table 5
Functions for complex general matrices
| Operation | Hermitian positive definite | Hermitian positive definite (packed storage) | Hermitian positive definite (RFP storage) | Hermitian positive definite band | Hermitian positive definite tridiagonal | Hermitian positive semidefinite |
|---|---|---|---|---|---|---|
| driver | nag_lapack_zposv (f07fn) | nag_lapack_zppsv (f07gn) | | nag_lapack_zpbsv (f07hn) | nag_lapack_zptsv (f07jn) | |
| expert driver | nag_lapack_zposvx (f07fp) | nag_lapack_zppsvx (f07gp) | | nag_lapack_zpbsvx (f07hp) | nag_lapack_zptsvx (f07jp) | |
| mixed precision driver | nag_lapack_zcposv (f07fq) | | | | | |
| factorize | nag_lapack_zpotrf (f07fr) | nag_lapack_zpptrf (f07gr) | nag_lapack_zpftrf (f07wr) | nag_lapack_zpbtrf (f07hr) | nag_lapack_zpttrf (f07jr) | nag_lapack_zpstrf (f07kr) |
| solve | nag_lapack_zpotrs (f07fs) | nag_lapack_zpptrs (f07gs) | nag_lapack_zpftrs (f07ws) | nag_lapack_zpbtrs (f07hs) | nag_lapack_zpttrs (f07js) | |
| scaling factors | nag_lapack_zpoequ (f07ft) | nag_lapack_zppequ (f07gt) | | nag_lapack_zpbequ (f07ht) | | |
| condition number | nag_lapack_zpocon (f07fu) | nag_lapack_zppcon (f07gu) | | nag_lapack_zpbcon (f07hu) | nag_lapack_zptcon (f07ju) | |
| error estimate | nag_lapack_zporfs (f07fv) | nag_lapack_zpprfs (f07gv) | | nag_lapack_zpbrfs (f07hv) | nag_lapack_zptrfs (f07jv) | |
| invert | nag_lapack_zpotri (f07fw) | nag_lapack_zpptri (f07gw) | nag_lapack_zpftri (f07ww) | | | |
Table 6
Functions for complex Hermitian positive definite and positive semidefinite matrices
| Operation | Hermitian indefinite | symmetric indefinite | Hermitian indefinite (packed storage) | symmetric indefinite (packed storage) |
|---|---|---|---|---|
| driver | nag_lapack_zhesv (f07mn) | nag_lapack_zsysv (f07nn) | nag_lapack_zhpsv (f07pn) | nag_lapack_zspsv (f07qn) |
| expert driver | nag_lapack_zhesvx (f07mp) | nag_lapack_zsysvx (f07np) | nag_lapack_zhpsvx (f07pp) | nag_lapack_zspsvx (f07qp) |
| factorize | nag_lapack_zhetrf (f07mr) | nag_lapack_zsytrf (f07nr) | nag_lapack_zhptrf (f07pr) | nag_lapack_zsptrf (f07qr) |
| solve | nag_lapack_zhetrs (f07ms) | nag_lapack_zsytrs (f07ns) | nag_lapack_zhptrs (f07ps) | nag_lapack_zsptrs (f07qs) |
| condition number | nag_lapack_zhecon (f07mu) | nag_lapack_zsycon (f07nu) | nag_lapack_zhpcon (f07pu) | nag_lapack_zspcon (f07qu) |
| error estimate | nag_lapack_zherfs (f07mv) | nag_lapack_zsyrfs (f07nv) | nag_lapack_zhprfs (f07pv) | nag_lapack_zsprfs (f07qv) |
| invert | nag_lapack_zhetri (f07mw) | nag_lapack_zsytri (f07nw) | nag_lapack_zhptri (f07pw) | nag_lapack_zsptri (f07qw) |
Table 7
Functions for complex Hermitian and symmetric indefinite matrices
| Operation | triangular | triangular (packed storage) | triangular (RFP storage) | triangular band |
|---|---|---|---|---|
| solve | nag_lapack_ztrtrs (f07ts) | nag_lapack_ztptrs (f07us) | | nag_lapack_ztbtrs (f07vs) |
| condition number | nag_lapack_ztrcon (f07tu) | nag_lapack_ztpcon (f07uu) | | nag_lapack_ztbcon (f07vu) |
| error estimate | nag_lapack_ztrrfs (f07tv) | nag_lapack_ztprfs (f07uv) | | nag_lapack_ztbrfs (f07vv) |
| invert | nag_lapack_ztrtri (f07tw) | nag_lapack_ztptri (f07uw) | nag_lapack_ztftri (f07wx) | |
Table 8
Functions for complex triangular matrices

## Functionality Index

 Apply iterative refinement to the solution and compute error estimates,
 after factorizing the matrix of coefficients,
 complex band matrix nag_lapack_zgbrfs (f07bv)
 complex Hermitian indefinite matrix nag_lapack_zherfs (f07mv)
 complex Hermitian indefinite matrix, packed storage nag_lapack_zhprfs (f07pv)
 complex Hermitian positive definite band matrix nag_lapack_zpbrfs (f07hv)
 complex Hermitian positive definite matrix nag_lapack_zporfs (f07fv)
 complex Hermitian positive definite matrix, packed storage nag_lapack_zpprfs (f07gv)
 complex Hermitian positive definite tridiagonal matrix nag_lapack_zptrfs (f07jv)
 complex matrix nag_lapack_zgerfs (f07av)
 complex symmetric indefinite matrix nag_lapack_zsyrfs (f07nv)
 complex symmetric indefinite matrix, packed storage nag_lapack_zsprfs (f07qv)
 complex tridiagonal matrix nag_lapack_zgtrfs (f07cv)
 real band matrix nag_lapack_dgbrfs (f07bh)
 real matrix nag_lapack_dgerfs (f07ah)
 real symmetric indefinite matrix nag_lapack_dsyrfs (f07mh)
 real symmetric indefinite matrix, packed storage nag_lapack_dsprfs (f07ph)
 real symmetric positive definite band matrix nag_lapack_dpbrfs (f07hh)
 real symmetric positive definite matrix nag_lapack_dporfs (f07fh)
 real symmetric positive definite matrix, packed storage nag_lapack_dpprfs (f07gh)
 real symmetric positive definite tridiagonal matrix nag_lapack_dptrfs (f07jh)
 real tridiagonal matrix nag_lapack_dgtrfs (f07ch)
 Compute error estimates,
 complex triangular band matrix nag_lapack_ztbrfs (f07vv)
 complex triangular matrix nag_lapack_ztrrfs (f07tv)
 complex triangular matrix, packed storage nag_lapack_ztprfs (f07uv)
 real triangular band matrix nag_lapack_dtbrfs (f07vh)
 real triangular matrix nag_lapack_dtrrfs (f07th)
 real triangular matrix, packed storage nag_lapack_dtprfs (f07uh)
 Compute row and column scalings,
 complex band matrix nag_lapack_zgbequ (f07bt)
 complex Hermitian positive definite band matrix nag_lapack_zpbequ (f07ht)
 complex Hermitian positive definite matrix nag_lapack_zpoequ (f07ft)
 complex Hermitian positive definite matrix, packed storage nag_lapack_zppequ (f07gt)
 complex matrix nag_lapack_zgeequ (f07at)
 real band matrix nag_lapack_dgbequ (f07bf)
 real matrix nag_lapack_dgeequ (f07af)
 real symmetric positive definite band matrix nag_lapack_dpbequ (f07hf)
 real symmetric positive definite matrix nag_lapack_dpoequ (f07ff)
 real symmetric positive definite matrix, packed storage nag_lapack_dppequ (f07gf)
 Condition number estimation,
 after factorizing the matrix of coefficients,
 complex band matrix nag_lapack_zgbcon (f07bu)
 complex Hermitian indefinite matrix nag_lapack_zhecon (f07mu)
 complex Hermitian indefinite matrix, packed storage nag_lapack_zhpcon (f07pu)
 complex Hermitian positive definite band matrix nag_lapack_zpbcon (f07hu)
 complex Hermitian positive definite matrix nag_lapack_zpocon (f07fu)
 complex Hermitian positive definite matrix, packed storage nag_lapack_zppcon (f07gu)
 complex Hermitian positive definite tridiagonal matrix nag_lapack_zptcon (f07ju)
 complex matrix nag_lapack_zgecon (f07au)
 complex symmetric indefinite matrix nag_lapack_zsycon (f07nu)
 complex symmetric indefinite matrix, packed storage nag_lapack_zspcon (f07qu)
 complex tridiagonal matrix nag_lapack_zgtcon (f07cu)
 real band matrix nag_lapack_dgbcon (f07bg)
 real matrix nag_lapack_dgecon (f07ag)
 real symmetric indefinite matrix nag_lapack_dsycon (f07mg)
 real symmetric indefinite matrix, packed storage nag_lapack_dspcon (f07pg)
 real symmetric positive definite band matrix nag_lapack_dpbcon (f07hg)
 real symmetric positive definite matrix nag_lapack_dpocon (f07fg)
 real symmetric positive definite matrix, packed storage nag_lapack_dppcon (f07gg)
 real symmetric positive definite tridiagonal matrix nag_lapack_dptcon (f07jg)
 real tridiagonal matrix nag_lapack_dgtcon (f07cg)
 complex triangular band matrix nag_lapack_ztbcon (f07vu)
 complex triangular matrix nag_lapack_ztrcon (f07tu)
 complex triangular matrix, packed storage nag_lapack_ztpcon (f07uu)
 real triangular band matrix nag_lapack_dtbcon (f07vg)
 real triangular matrix nag_lapack_dtrcon (f07tg)
 real triangular matrix, packed storage nag_lapack_dtpcon (f07ug)
 $LDL^{T}$ factorization,
 complex Hermitian positive definite tridiagonal matrix nag_lapack_zpttrf (f07jr)
 real symmetric positive definite tridiagonal matrix nag_lapack_dpttrf (f07jd)
 $LL^{T}$ or $U^{T}U$ factorization,
 complex Hermitian positive definite band matrix nag_lapack_zpbtrf (f07hr)
 complex Hermitian positive definite matrix nag_lapack_zpotrf (f07fr)
 complex Hermitian positive definite matrix, packed storage nag_lapack_zpptrf (f07gr)
 complex Hermitian positive definite matrix, RFP storage nag_lapack_zpftrf (f07wr)
 complex Hermitian positive semidefinite matrix nag_lapack_zpstrf (f07kr)
 real symmetric positive definite band matrix nag_lapack_dpbtrf (f07hd)
 real symmetric positive definite matrix nag_lapack_dpotrf (f07fd)
 real symmetric positive definite matrix, packed storage nag_lapack_dpptrf (f07gd)
 real symmetric positive definite matrix, RFP storage nag_lapack_dpftrf (f07wd)
 real symmetric positive semidefinite matrix nag_lapack_dpstrf (f07kd)
 $LU$ factorization,
 complex band matrix nag_lapack_zgbtrf (f07br)
 complex matrix nag_lapack_zgetrf (f07ar)
 complex tridiagonal matrix nag_lapack_zgttrf (f07cr)
 real band matrix nag_lapack_dgbtrf (f07bd)
 real tridiagonal matrix nag_lapack_dgttrf (f07cd)
 Matrix inversion,
 after factorizing the matrix of coefficients,
 complex Hermitian indefinite matrix nag_lapack_zhetri (f07mw)
 complex Hermitian indefinite matrix, packed storage nag_lapack_zhptri (f07pw)
 complex Hermitian positive definite matrix nag_lapack_zpotri (f07fw)
 complex Hermitian positive definite matrix, packed storage nag_lapack_zpptri (f07gw)
 complex Hermitian positive definite matrix, RFP storage nag_lapack_zpftri (f07ww)
 complex matrix nag_lapack_zgetri (f07aw)
 complex symmetric indefinite matrix nag_lapack_zsytri (f07nw)
 complex symmetric indefinite matrix, packed storage nag_lapack_zsptri (f07qw)
 real matrix nag_lapack_dgetri (f07aj)
 real symmetric indefinite matrix nag_lapack_dsytri (f07mj)
 real symmetric indefinite matrix, packed storage nag_lapack_dsptri (f07pj)
 real symmetric positive definite matrix nag_lapack_dpotri (f07fj)
 real symmetric positive definite matrix, packed storage nag_lapack_dpptri (f07gj)
 real symmetric positive definite matrix, RFP storage nag_lapack_dpftri (f07wj)
 complex triangular matrix nag_lapack_ztrtri (f07tw)
 complex triangular matrix, packed storage nag_lapack_ztptri (f07uw)
 complex triangular matrix, RFP storage nag_lapack_ztftri (f07wx)
 real triangular matrix nag_lapack_dtrtri (f07tj)
 real triangular matrix, packed storage nag_lapack_dtptri (f07uj)
 real triangular matrix, RFP storage nag_lapack_dtftri (f07wk)
 $PLDL^{T}P^{T}$ or $PUDU^{T}P^{T}$ factorization,
 complex Hermitian indefinite matrix nag_lapack_zhetrf (f07mr)
 complex Hermitian indefinite matrix, packed storage nag_lapack_zhptrf (f07pr)
 complex symmetric indefinite matrix nag_lapack_zsytrf (f07nr)
 complex symmetric indefinite matrix, packed storage nag_lapack_zsptrf (f07qr)
 real symmetric indefinite matrix nag_lapack_dsytrf (f07md)
 real symmetric indefinite matrix, packed storage nag_lapack_dsptrf (f07pd)
 Solution of simultaneous linear equations,
 after factorizing the matrix of coefficients,
 complex band matrix nag_lapack_zgbtrs (f07bs)
 complex Hermitian indefinite matrix nag_lapack_zhetrs (f07ms)
 complex Hermitian indefinite matrix, packed storage nag_lapack_zhptrs (f07ps)
 complex Hermitian positive definite band matrix nag_lapack_zpbtrs (f07hs)
 complex Hermitian positive definite matrix nag_lapack_zpotrs (f07fs)
 complex Hermitian positive definite matrix, packed storage nag_lapack_zpptrs (f07gs)
 complex Hermitian positive definite matrix, RFP storage nag_lapack_zpftrs (f07ws)
 complex Hermitian positive definite tridiagonal matrix nag_lapack_zpttrs (f07js)
 complex matrix nag_lapack_zgetrs (f07as)
 complex symmetric indefinite matrix nag_lapack_zsytrs (f07ns)
 complex symmetric indefinite matrix, packed storage nag_lapack_zsptrs (f07qs)
 complex tridiagonal matrix nag_lapack_zgttrs (f07cs)
 real band matrix nag_lapack_dgbtrs (f07be)
 real matrix nag_lapack_dgetrs (f07ae)
 real symmetric indefinite matrix nag_lapack_dsytrs (f07me)
 real symmetric indefinite matrix, packed storage nag_lapack_dsptrs (f07pe)
 real symmetric positive definite band matrix nag_lapack_dpbtrs (f07he)
 real symmetric positive definite matrix nag_lapack_dpotrs (f07fe)
 real symmetric positive definite matrix, packed storage nag_lapack_dpptrs (f07ge)
 real symmetric positive definite matrix, RFP storage nag_lapack_dpftrs (f07we)
 real symmetric positive definite tridiagonal matrix nag_lapack_dpttrs (f07je)
 real tridiagonal matrix nag_lapack_dgttrs (f07ce)
 expert drivers (with condition and error estimation),
 complex band matrix nag_lapack_zgbsvx (f07bp)
 complex Hermitian indefinite matrix nag_lapack_zhesvx (f07mp)
 complex Hermitian indefinite matrix, packed storage nag_lapack_zhpsvx (f07pp)
 complex Hermitian positive definite band matrix nag_lapack_zpbsvx (f07hp)
 complex Hermitian positive definite matrix nag_lapack_zposvx (f07fp)
 complex Hermitian positive definite matrix, packed storage nag_lapack_zppsvx (f07gp)
 complex Hermitian positive definite tridiagonal matrix nag_lapack_zptsvx (f07jp)
 complex matrix nag_lapack_zgesvx (f07ap)
 complex symmetric indefinite matrix nag_lapack_zsysvx (f07np)
 complex symmetric indefinite matrix, packed storage nag_lapack_zspsvx (f07qp)
 complex tridiagonal matrix nag_lapack_zgtsvx (f07cp)
 real band matrix nag_lapack_dgbsvx (f07bb)
 real matrix nag_lapack_dgesvx (f07ab)
 real symmetric indefinite matrix nag_lapack_dsysvx (f07mb)
 real symmetric indefinite matrix, packed storage nag_lapack_dspsvx (f07pb)
 real symmetric positive definite band matrix nag_lapack_dpbsvx (f07hb)
 real symmetric positive definite matrix nag_lapack_dposvx (f07fb)
 real symmetric positive definite matrix, packed storage nag_lapack_dppsvx (f07gb)
 real symmetric positive definite tridiagonal matrix nag_lapack_dptsvx (f07jb)
 real tridiagonal matrix nag_lapack_dgtsvx (f07cb)
 simple drivers,
 complex band matrix nag_lapack_zgbsv (f07bn)
 complex Hermitian indefinite matrix nag_lapack_zhesv (f07mn)
 complex Hermitian indefinite matrix, packed storage nag_lapack_zhpsv (f07pn)
 complex Hermitian positive definite band matrix nag_lapack_zpbsv (f07hn)
 complex Hermitian positive definite matrix nag_lapack_zposv (f07fn)
 complex Hermitian positive definite matrix, packed storage nag_lapack_zppsv (f07gn)
 complex Hermitian positive definite matrix, using mixed precision nag_lapack_zcposv (f07fq)
 complex Hermitian positive definite tridiagonal matrix nag_lapack_zptsv (f07jn)
 complex matrix nag_lapack_zgesv (f07an)
 complex matrix, using mixed precision nag_lapack_zcgesv (f07aq)
 complex symmetric indefinite matrix nag_lapack_zsysv (f07nn)
 complex symmetric indefinite matrix, packed storage nag_lapack_zspsv (f07qn)
 complex triangular band matrix nag_lapack_ztbtrs (f07vs)
 complex triangular matrix nag_lapack_ztrtrs (f07ts)
 complex triangular matrix, packed storage nag_lapack_ztptrs (f07us)
 complex tridiagonal matrix nag_lapack_zgtsv (f07cn)
 real band matrix nag_lapack_dgbsv (f07ba)
 real matrix nag_lapack_dgesv (f07aa)
 real matrix, using mixed precision nag_lapack_dsgesv (f07ac)
 real symmetric indefinite matrix nag_lapack_dsysv (f07ma)
 real symmetric indefinite matrix, packed storage nag_lapack_dspsv (f07pa)
 real symmetric positive definite band matrix nag_lapack_dpbsv (f07ha)
 real symmetric positive definite matrix nag_lapack_dposv (f07fa)
 real symmetric positive definite matrix, packed storage nag_lapack_dppsv (f07ga)
 real symmetric positive definite matrix, using mixed precision nag_lapack_dsposv (f07fc)
 real symmetric positive definite tridiagonal matrix nag_lapack_dptsv (f07ja)
 real triangular band matrix nag_lapack_dtbtrs (f07ve)
 real triangular matrix nag_lapack_dtrtrs (f07te)
 real triangular matrix, packed storage nag_lapack_dtptrs (f07ue)
 real tridiagonal matrix nag_lapack_dgtsv (f07ca)

## References

Anderson E, Bai Z, Bischof C, Blackford S, Demmel J, Dongarra J J, Du Croz J J, Greenbaum A, Hammarling S, McKenney A and Sorensen D (1999) LAPACK Users' Guide (3rd Edition) SIAM, Philadelphia
Golub G H and Van Loan C F (1996) Matrix Computations (3rd Edition) Johns Hopkins University Press, Baltimore
Higham N J (1988) Algorithm 674: Fortran codes for estimating the one-norm of a real or complex matrix, with applications to condition estimation ACM Trans. Math. Software 14 381–396
Wilkinson J H (1965) The Algebraic Eigenvalue Problem Oxford University Press, Oxford