e04he is a comprehensive modified Gauss–Newton algorithm for finding an unconstrained minimum of a sum of squares of $m$ nonlinear functions in $n$ variables $\left(m\ge n\right)$. First and second derivatives are required.

The method is intended for functions which have continuous first and second derivatives (although it will usually work even if the derivatives have occasional discontinuities).

# Syntax

C# |
---|

public static void e04he( int m, int n, E04..::..E04HE_LSQFUN lsqfun, E04..::..E04HE_LSQHES lsqhes, E04..::..E04HE_LSQMON lsqmon, int iprint, int maxcal, double eta, double xtol, double stepmx, double[] x, out double fsumsq, double[] fvec, double[,] fjac, double[] s, double[,] v, out int niter, out int nf, out int ifail ) |

Visual Basic |
---|

Public Shared Sub e04he ( _ m As Integer, _ n As Integer, _ lsqfun As E04..::..E04HE_LSQFUN, _ lsqhes As E04..::..E04HE_LSQHES, _ lsqmon As E04..::..E04HE_LSQMON, _ iprint As Integer, _ maxcal As Integer, _ eta As Double, _ xtol As Double, _ stepmx As Double, _ x As Double(), _ <OutAttribute> ByRef fsumsq As Double, _ fvec As Double(), _ fjac As Double(,), _ s As Double(), _ v As Double(,), _ <OutAttribute> ByRef niter As Integer, _ <OutAttribute> ByRef nf As Integer, _ <OutAttribute> ByRef ifail As Integer _ ) |

Visual C++ |
---|

public: static void e04he( int m, int n, E04..::..E04HE_LSQFUN^ lsqfun, E04..::..E04HE_LSQHES^ lsqhes, E04..::..E04HE_LSQMON^ lsqmon, int iprint, int maxcal, double eta, double xtol, double stepmx, array<double>^ x, [OutAttribute] double% fsumsq, array<double>^ fvec, array<double,2>^ fjac, array<double>^ s, array<double,2>^ v, [OutAttribute] int% niter, [OutAttribute] int% nf, [OutAttribute] int% ifail ) |

F# |
---|

static member e04he : m : int * n : int * lsqfun : E04..::..E04HE_LSQFUN * lsqhes : E04..::..E04HE_LSQHES * lsqmon : E04..::..E04HE_LSQMON * iprint : int * maxcal : int * eta : float * xtol : float * stepmx : float * x : float[] * fsumsq : float byref * fvec : float[] * fjac : float[,] * s : float[] * v : float[,] * niter : int byref * nf : int byref * ifail : int byref -> unit |

#### Parameters

- m
- Type: System..::..Int32
*On entry*: the number $m$ of residuals, ${f}_{i}\left(x\right)$, and the number $n$ of variables, ${x}_{j}$. *Constraint*: $1\le {\mathbf{n}}\le {\mathbf{m}}$.

- n
- Type: System..::..Int32
*On entry*: the number $m$ of residuals, ${f}_{i}\left(x\right)$, and the number $n$ of variables, ${x}_{j}$. *Constraint*: $1\le {\mathbf{n}}\le {\mathbf{m}}$.

- lsqfun
- Type: NagLibrary..::..E04..::..E04HE_LSQFUN
lsqfun must calculate the vector of values ${f}_{i}\left(x\right)$ and the Jacobian matrix of first derivatives $\frac{\partial {f}_{i}}{\partial {x}_{j}}$ at any point $x$. (However, if you do not wish to calculate the residuals or first derivatives at a particular $x$, there is the option of setting a parameter to cause e04he to terminate immediately.)
A delegate of type E04HE_LSQFUN.

- lsqhes
- Type: NagLibrary..::..E04..::..E04HE_LSQHES
lsqhes must calculate the elements of the symmetric matrix
$$B\left(x\right)=\sum _{i=1}^{m}{f}_{i}\left(x\right){G}_{i}\left(x\right)\text{,}$$
at any point $x$, where ${G}_{i}\left(x\right)$ is the Hessian matrix of ${f}_{i}\left(x\right)$. (As with lsqfun, there is the option of causing e04he to terminate immediately.)
A delegate of type E04HE_LSQHES.
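The quantity lsqhes must supply can be sketched directly from its definition. The following pure-Python helper (an illustrative stand-in, not the NAG interface, which works with packed storage and delegates) accumulates $B(x)=\sum_{i=1}^{m}f_i(x)G_i(x)$ from residual values and per-residual Hessians:

```python
# Illustrative sketch of B(x) = sum_i f_i(x) * G_i(x), the matrix that
# lsqhes must evaluate; f_i are residual values, G_i their Hessians.
def b_matrix(f, hessians, n):
    """f: list of m residual values; hessians: list of m n-by-n matrices."""
    B = [[0.0] * n for _ in range(n)]
    for fi, Gi in zip(f, hessians):
        for j in range(n):
            for k in range(n):
                B[j][k] += fi * Gi[j][k]
    return B

# Two residuals in two variables (invented numbers, for illustration only):
f = [2.0, -1.0]
G1 = [[1.0, 0.0], [0.0, 2.0]]
G2 = [[0.0, 1.0], [1.0, 0.0]]
print(b_matrix(f, [G1, G2], 2))  # [[2.0, -1.0], [-1.0, 4.0]]
```

Since each $G_i$ is symmetric, $B$ is symmetric too, which is why the library needs only its lower triangle.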

- lsqmon
- Type: NagLibrary..::..E04..::..E04HE_LSQMON
If ${\mathbf{iprint}}\ge 0$, you must supply lsqmon, which is suitable for monitoring the minimization process. lsqmon must not change the values of any of its parameters. If ${\mathbf{iprint}}<0$, the dummy method E04FDZ can be used as lsqmon.
A delegate of type E04HE_LSQMON.

**Note:** you should normally print the sum of squares of residuals, so as to be able to examine the sequence of values of $F\left(x\right)$ mentioned in [Accuracy]. It is usually also helpful to print xc, the gradient of the sum of squares, niter and nf.

- iprint
- Type: System..::..Int32
*On entry*: specifies the frequency with which lsqmon is to be called.
- ${\mathbf{iprint}}>0$
- lsqmon is called once every iprint iterations and just before exit from e04he.
- ${\mathbf{iprint}}=0$
- lsqmon is just called at the final point.
- ${\mathbf{iprint}}<0$
- lsqmon is not called at all.

iprint should normally be set to a small positive number. *Suggested value*: ${\mathbf{iprint}}=1$.

- maxcal
- Type: System..::..Int32
*On entry*: this parameter is present so as to enable you to limit the number of times that lsqfun is called by e04he. There will be an error exit (see [Error Indicators and Warnings]) after maxcal calls of lsqfun. *Suggested value*: ${\mathbf{maxcal}}=50\times n$. *Constraint*: ${\mathbf{maxcal}}\ge 1$.

- eta
- Type: System..::..Double
*On entry*: every iteration of e04he involves a linear minimization (i.e., minimization of $F\left({x}^{\left(k\right)}+{\alpha}^{\left(k\right)}{p}^{\left(k\right)}\right)$ with respect to ${\alpha}^{\left(k\right)}$). eta must lie in the range $0.0\le {\mathbf{eta}}<1.0$, and specifies how accurately these linear minimizations are to be performed. The minimum with respect to ${\alpha}^{\left(k\right)}$ will be located more accurately for small values of eta (say, $0.01$) than for large values (say, $0.9$). *Suggested value*: ${\mathbf{eta}}=0.5$ (${\mathbf{eta}}=0.0$ if ${\mathbf{n}}=1$). *Constraint*: $0.0\le {\mathbf{eta}}<1.0$.

- xtol
- Type: System..::..Double
*On entry*: the accuracy in $x$ to which the solution is required. If ${x}_{\mathrm{true}}$ is the true value of $x$ at the minimum, then ${x}_{\mathrm{sol}}$, the estimated position before a normal exit, is such that
$$\Vert {x}_{\mathrm{sol}}-{x}_{\mathrm{true}}\Vert <{\mathbf{xtol}}\times \left(1.0+\Vert {x}_{\mathrm{true}}\Vert \right)\text{,}$$
where $\Vert y\Vert =\sqrt{{\displaystyle \sum _{j=1}^{n}}{y}_{j}^{2}}$. For example, if the elements of ${x}_{\mathrm{sol}}$ are not much larger than $1.0$ in modulus and if ${\mathbf{xtol}}=\text{1.0E-5}$, then ${x}_{\mathrm{sol}}$ is usually accurate to about five decimal places. (For further details see [Accuracy].) If $F\left(x\right)$ and the variables are scaled roughly as described in [Further Comments] and $\epsilon $ is the machine precision, then a setting of order ${\mathbf{xtol}}=\sqrt{\epsilon}$ will usually be appropriate. If xtol is set to $0.0$ or some positive value less than $10\epsilon $, e04he will use $10\epsilon $ instead of xtol, since $10\epsilon $ is probably the smallest reasonable setting. *Constraint*: ${\mathbf{xtol}}\ge 0.0$.
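
The recommended setting ${\mathbf{xtol}}=\sqrt{\epsilon}$, together with the routine's $10\epsilon$ lower bound, can be computed in a few lines (a sketch of the advice above, not library code):

```python
# Compute the suggested xtol = sqrt(machine precision) for IEEE doubles,
# clamped to the 10*eps floor that e04he applies to smaller settings.
import sys
import math

eps = sys.float_info.epsilon      # machine precision, ~2.22e-16
xtol = math.sqrt(eps)             # ~1.49e-8, suitable for well-scaled problems
xtol = max(xtol, 10.0 * eps)      # values below 10*eps are replaced anyway
print(xtol)
```

With IEEE double precision, $\sqrt{\epsilon}$ is already far above $10\epsilon$, so the clamp only matters for deliberately tiny or zero settings.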

- stepmx
- Type: System..::..Double
*On entry*: an estimate of the Euclidean distance between the solution and the starting point supplied by you. (For maximum efficiency, a slight overestimate is preferable.) e04he will ensure that, for each iteration,
$$\sum _{j=1}^{n}{\left({x}_{j}^{\left(k\right)}-{x}_{j}^{\left(k-1\right)}\right)}^{2}\le {\left({\mathbf{stepmx}}\right)}^{2}\text{,}$$
where $k$ is the iteration number. Thus, if the problem has more than one solution, e04he is most likely to find the one nearest to the starting point. On difficult problems, a realistic choice can prevent the sequence of ${x}^{\left(k\right)}$ entering a region where the problem is ill-behaved and can help avoid overflow in the evaluation of $F\left(x\right)$. However, an underestimate of stepmx can lead to inefficiency. *Suggested value*: ${\mathbf{stepmx}}=100000.0$. *Constraint*: ${\mathbf{stepmx}}\ge {\mathbf{xtol}}$.
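
The per-iteration bound that stepmx enforces is just a Euclidean step-length test. A minimal sketch (the function name is invented for illustration):

```python
# Check whether a step from x_old to x_new satisfies
# ||x_new - x_old||^2 <= stepmx^2, the bound e04he enforces per iteration.
def step_within_bound(x_new, x_old, stepmx):
    d2 = sum((a - b) ** 2 for a, b in zip(x_new, x_old))
    return d2 <= stepmx ** 2

print(step_within_bound([1.0, 2.0], [0.0, 0.0], 3.0))  # True: step length sqrt(5)
print(step_within_bound([4.0, 4.0], [0.0, 0.0], 3.0))  # False: step length sqrt(32)
```

Comparing squared quantities, as here, avoids an unnecessary square root and any overflow it might hide.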

- x
- Type: array<System..::..Double>[]
An array of size [n]
*On entry*: ${\mathbf{x}}\left[\mathit{j}-1\right]$ must be set to a guess at the $\mathit{j}$th component of the position of the minimum, for $\mathit{j}=1,2,\dots ,n$. *On exit*: the final point ${x}^{\left(k\right)}$. Thus, if ${\mathbf{ifail}}={0}$ on exit, ${\mathbf{x}}\left[j-1\right]$ is the $j$th component of the estimated position of the minimum.

- fsumsq
- Type: System..::..Double%
*On exit*: the value of $F\left(x\right)$, the sum of squares of the residuals ${f}_{i}\left(x\right)$, at the final point given in x.

- fvec
- Type: array<System..::..Double>[]
An array of size [m]
*On exit*: the value of the residual ${f}_{\mathit{i}}\left(x\right)$ at the final point given in x, for $\mathit{i}=1,2,\dots ,m$.

- fjac
- Type: array<System..::..Double,2>[,]
An array of size [dim1, n]
**Note:** dim1 must satisfy the constraint: $\mathrm{dim1}\ge {\mathbf{m}}$.
*On exit*: the value of the first derivative $\frac{\partial {f}_{\mathit{i}}}{\partial {x}_{\mathit{j}}}$ evaluated at the final point given in x, for $\mathit{i}=1,2,\dots ,m$ and $\mathit{j}=1,2,\dots ,n$.

- s
- Type: array<System..::..Double>[]
An array of size [n]
*On exit*: the singular values of the Jacobian matrix at the final point. Thus s may be useful as information about the structure of your problem.

- v
- Type: array<System..::..Double,2>[,]
An array of size [dim1, n]
**Note:** dim1 must satisfy the constraint: $\mathrm{dim1}\ge {\mathbf{n}}$.
*On exit*: the matrix $V$ associated with the singular value decomposition
$$J=US{V}^{\mathrm{T}}$$
of the Jacobian matrix at the final point, stored by columns. This matrix may be useful for statistical purposes, since it is the matrix of orthonormalized eigenvectors of ${J}^{\mathrm{T}}J$.

- niter
- Type: System..::..Int32%
*On exit*: the number of iterations which have been performed in e04he.

- nf
- Type: System..::..Int32%
*On exit*: the number of times that the residuals and Jacobian matrix have been evaluated (i.e., number of calls of lsqfun).

- ifail
- Type: System..::..Int32%
*On exit*: ${\mathbf{ifail}}={0}$ unless the method detects an error or a warning has been flagged (see [Error Indicators and Warnings]).

# Description

e04he is essentially identical to the method LSQSDN in the NPL Algorithms Library. It is applicable to problems of the form:

$$\mathrm{Minimize}\;F\left(x\right)=\sum _{i=1}^{m}{\left[{f}_{i}\left(x\right)\right]}^{2}$$

where $x={\left({x}_{1},{x}_{2},\dots ,{x}_{n}\right)}^{\mathrm{T}}$ and $m\ge n$. (The functions ${f}_{i}\left(x\right)$ are often referred to as ‘residuals’.)

You must supply methods to calculate the values of the ${f}_{i}\left(x\right)$ and their first and second derivatives at any point $x$.

From a starting point ${x}^{\left(1\right)}$ supplied by you, the method generates a sequence of points ${x}^{\left(2\right)},{x}^{\left(3\right)},\dots $, which is intended to converge to a local minimum of $F\left(x\right)$. The sequence of points is given by

$${x}^{\left(k+1\right)}={x}^{\left(k\right)}+{\alpha}^{\left(k\right)}{p}^{\left(k\right)}$$

where the vector ${p}^{\left(k\right)}$ is a direction of search, and ${\alpha}^{\left(k\right)}$ is chosen such that $F\left({x}^{\left(k\right)}+{\alpha}^{\left(k\right)}{p}^{\left(k\right)}\right)$ is approximately a minimum with respect to ${\alpha}^{\left(k\right)}$.

The vector ${p}^{\left(k\right)}$ used depends upon the reduction in the sum of squares obtained during the last iteration. If the sum of squares was sufficiently reduced, then ${p}^{\left(k\right)}$ is the Gauss–Newton direction; otherwise the second derivatives of the ${f}_{i}\left(x\right)$ are taken into account.

The method is designed to ensure that steady progress is made whatever the starting point, and to have the rapid ultimate convergence of Newton's method.
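
The iteration described above can be sketched for a tiny one-parameter model in plain Python. This is an illustrative Gauss–Newton loop with a full step ($\alpha^{(k)}=1$) and no second-derivative fallback, not the NAG implementation; the model and data are invented:

```python
# Gauss-Newton sketch: fit y = exp(b*t) to data, residuals
# f_i(b) = exp(b*t_i) - y_i, so df_i/db = t_i * exp(b*t_i).
import math

t = [0.0, 0.5, 1.0, 1.5, 2.0]
y = [math.exp(0.3 * ti) for ti in t]     # synthetic data, b_true = 0.3

def residuals(b):
    return [math.exp(b * ti) - yi for ti, yi in zip(t, y)]

def jacobian(b):
    return [ti * math.exp(b * ti) for ti in t]

b = 1.0                                   # starting guess x(1)
for _ in range(50):
    f = residuals(b)
    J = jacobian(b)
    jtj = sum(Ji * Ji for Ji in J)                 # J^T J (scalar for n = 1)
    jtf = sum(Ji * fi for Ji, fi in zip(J, f))     # J^T f
    p = -jtf / jtj                                 # Gauss-Newton direction
    b += p                                         # alpha = 1 for simplicity
    if abs(p) < 1e-12:
        break

print(round(b, 6))  # converges to 0.3
```

Because this is a zero-residual problem, the plain Gauss–Newton direction converges rapidly; the second-derivative term $B(x)$ that lsqhes supplies matters precisely when the residuals at the solution are large.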

# References

Gill P E and Murray W (1978) Algorithms for the solution of the nonlinear least squares problem *SIAM J. Numer. Anal.* **15** 977–992

# Error Indicators and Warnings

**Note:**e04he may return useful information for one or more of the following detected errors or warnings.

Errors or warnings detected by the method:

Some error messages may refer to parameters that are dropped from this interface (LDFJAC, LDV, IW, LIW, W, LW). In these cases, an error in another parameter has usually caused an incorrect value to be inferred.

- ${\mathbf{ifail}}=1$
On entry, ${\mathbf{n}}<1$, or ${\mathbf{m}}<{\mathbf{n}}$, or ${\mathbf{maxcal}}<1$, or ${\mathbf{eta}}<0.0$, or ${\mathbf{eta}}\ge 1.0$, or ${\mathbf{xtol}}<0.0$, or ${\mathbf{stepmx}}<{\mathbf{xtol}}$.

- ${\mathbf{ifail}}=2$
- There have been maxcal calls of lsqfun. If steady reductions in the sum of squares, $F\left(x\right)$, were monitored up to the point where this exit occurred, then the exit probably occurred simply because maxcal was set too small, so the calculations should be restarted from the final point held in x. This exit may also indicate that $F\left(x\right)$ has no minimum.

- ${\mathbf{ifail}}=3$
- The conditions for a minimum have not all been satisfied, but a lower point could not be found. This could be because xtol has been set so small that rounding errors in the evaluation of the residuals and derivatives make attainment of the convergence conditions impossible.

- ${\mathbf{ifail}}=4$
- The method for computing the singular value decomposition of the Jacobian matrix has failed to converge in a reasonable number of sub-iterations. It may be worth applying e04he again starting with an initial approximation which is not too close to the point at which the failure occurred.

- ${\mathbf{ifail}}=-9000$
- An error occurred, see message report.
- ${\mathbf{ifail}}=-6000$
- Invalid Parameters $\langle\mathit{\text{value}}\rangle$
- ${\mathbf{ifail}}=-4000$
- Invalid dimension for array $\langle\mathit{\text{value}}\rangle$
- ${\mathbf{ifail}}=-8000$
- Negative dimension for array $\langle\mathit{\text{value}}\rangle$

The values ${\mathbf{ifail}}={2}$, ${3}$ and ${4}$ may also be caused by mistakes in lsqfun or lsqhes, by the formulation of the problem or by an awkward function. If there are no such mistakes it is worth restarting the calculations from a different starting point (not the point at which the failure occurred) in order to avoid the region which caused the failure.

# Accuracy

A successful exit (${\mathbf{ifail}}={0}$) is made from e04he when the matrix of second derivatives of $F\left(x\right)$ is positive definite, and when (B1, B2 and B3) or B4 or B5 hold, where

$$\begin{array}{lll}\mathrm{B1}& \equiv & {\alpha}^{\left(k\right)}\times \Vert {p}^{\left(k\right)}\Vert <\left({\mathbf{xtol}}+\epsilon \right)\times \left(1.0+\Vert {x}^{\left(k\right)}\Vert \right)\\ \mathrm{B2}& \equiv & \left|{F}^{\left(k\right)}-{F}^{\left(k-1\right)}\right|<{\left({\mathbf{xtol}}+\epsilon \right)}^{2}\times \left(1.0+{F}^{\left(k\right)}\right)\\ \mathrm{B3}& \equiv & \Vert {g}^{\left(k\right)}\Vert <{\epsilon}^{1/3}\times \left(1.0+{F}^{\left(k\right)}\right)\\ \mathrm{B4}& \equiv & {F}^{\left(k\right)}<{\epsilon}^{2}\\ \mathrm{B5}& \equiv & \Vert {g}^{\left(k\right)}\Vert <{\left(\epsilon \times \sqrt{{F}^{\left(k\right)}}\right)}^{1/2}\end{array}$$

and where $\Vert .\Vert $ and $\epsilon $ are as defined in [Parameters], and ${F}^{\left(k\right)}$ and ${g}^{\left(k\right)}$ are the values of $F\left(x\right)$ and its vector of first derivatives at ${x}^{\left(k\right)}$.
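
The combined exit criterion can be written out directly. The helper below is a hypothetical illustration of the tests B1–B5, taking precomputed norms as arguments (it is not part of the library interface):

```python
# Evaluate the exit criterion (B1 and B2 and B3) or B4 or B5, with eps the
# machine precision; argument names mirror the quantities in the text.
import sys
import math

def converged(alpha, p_norm, x_norm, F_k, F_km1, g_norm, xtol):
    eps = sys.float_info.epsilon
    b1 = alpha * p_norm < (xtol + eps) * (1.0 + x_norm)
    b2 = abs(F_k - F_km1) < (xtol + eps) ** 2 * (1.0 + F_k)
    b3 = g_norm < eps ** (1.0 / 3.0) * (1.0 + F_k)
    b4 = F_k < eps ** 2                              # essentially zero residual
    b5 = g_norm < math.sqrt(eps * math.sqrt(F_k))    # tiny gradient vs. F
    return (b1 and b2 and b3) or b4 or b5

print(converged(1e-12, 1.0, 1.0, 1e-40, 1e-40, 1.0, 1e-8))  # True via B4
print(converged(1.0, 1.0, 1.0, 5.0, 1.0, 5.0, 1e-8))        # False: no test holds
```

Note that B4 and B5 can each trigger a successful exit on their own, which is why a zero-residual fit may stop before the step-based tests B1–B3 are satisfied.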

If ${\mathbf{ifail}}={0}$, then the vector in x on exit, ${x}_{\mathrm{sol}}$, is almost certainly an estimate of ${x}_{\mathrm{true}}$, the position of the minimum to the accuracy specified by xtol.

If ${\mathbf{ifail}}={3}$, then ${x}_{\mathrm{sol}}$ may still be a good estimate of ${x}_{\mathrm{true}}$, but to verify this you should make the following checks. If

(a) | the sequence $\left\{F\left({x}^{\left(k\right)}\right)\right\}$ converges to $F\left({x}_{\mathrm{sol}}\right)$ at a superlinear or a fast linear rate, and |

(b) | $g{\left({x}_{\mathrm{sol}}\right)}^{\mathrm{T}}g\left({x}_{\mathrm{sol}}\right)<10\epsilon $, where $\mathrm{T}$ denotes transpose, then it is almost certain that ${x}_{\mathrm{sol}}$ is a close approximation to the minimum. |

When (b) is true, then usually $F\left({x}_{\mathrm{sol}}\right)$ is a close approximation to $F\left({x}_{\mathrm{true}}\right)$. The values of $F\left({x}^{\left(k\right)}\right)$ can be calculated in lsqmon, and the vector $g\left({x}_{\mathrm{sol}}\right)$ can be calculated from the contents of fvec and fjac on exit from e04he.
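
As a concrete illustration of check (b): since $F(x)=\sum_i f_i(x)^2$, the gradient is $g_j=2\sum_i f_i\,\partial f_i/\partial x_j$, so $g(x_{\mathrm{sol}})$ can be assembled from fvec and fjac. The helper below is illustrative, not a library routine:

```python
# Recover g(x_sol) from the exit values of fvec (m residuals) and
# fjac (m-by-n Jacobian), then apply check (b): g^T g < 10*eps.
import sys

def gradient(fvec, fjac):
    m, n = len(fjac), len(fjac[0])
    return [2.0 * sum(fvec[i] * fjac[i][j] for i in range(m))
            for j in range(n)]

fvec = [1e-9, -1e-9]                 # invented near-zero exit residuals
fjac = [[1.0, 0.5], [0.5, 1.0]]      # invented exit Jacobian
g = gradient(fvec, fjac)
gtg = sum(gj * gj for gj in g)
print(gtg < 10.0 * sys.float_info.epsilon)  # True for these tiny residuals
```

With residuals this small, $g^{\mathrm{T}}g$ is far below $10\epsilon$, so check (b) would pass for this hypothetical exit point.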

Further suggestions about confirmation of a computed solution are given in the **E04** class.

# Parallelism and Performance

None.

# Further Comments

The number of iterations required depends on the number of variables, the number of residuals, the behaviour of $F\left(x\right)$, the accuracy demanded and the distance of the starting point from the solution. The number of multiplications performed per iteration of e04he varies, but for $m\gg n$ is approximately $n\times {m}^{2}+\mathit{O}\left({n}^{3}\right)$. In addition, each iteration makes at least one call of lsqfun and some iterations may call lsqhes. So, unless the residuals and their derivatives can be evaluated very quickly, the run time will be dominated by the time spent in lsqfun (and, to a lesser extent, in lsqhes).

Ideally, the problem should be scaled so that, at the solution, $F\left(x\right)$ and the corresponding values of the ${x}_{j}$ are each in the range $\left(-1,+1\right)$, and so that at points one unit away from the solution, $F\left(x\right)$ differs from its value at the solution by approximately one unit. This will usually imply that the Hessian matrix of $F\left(x\right)$ at the solution is well-conditioned. It is unlikely that you will be able to follow these recommendations very closely, but it is worth trying (by guesswork), as sensible scaling will reduce the difficulty of the minimization problem, so that e04he will take less computer time.

When the sum of squares represents the goodness-of-fit of a nonlinear model to observed data, elements of the variance-covariance matrix of the estimated regression coefficients can be computed by a subsequent call to (E04YCF not in this release), using information returned in the arrays s and v. See (E04YCF not in this release) for further details.