NAG Library Routine Document
e04gdf (lsq_uncon_mod_deriv_comp)
1
Purpose
e04gdf is a comprehensive modified Gauss–Newton algorithm for finding an unconstrained minimum of a sum of squares of $m$ nonlinear functions in $n$ variables $\left(m\ge n\right)$. First derivatives are required.
The routine is intended for functions which have continuous first and second derivatives (although it will usually work even if the derivatives have occasional discontinuities).
2
Specification
Fortran Interface
Subroutine e04gdf ( 
m, n, lsqfun, lsqmon, iprint, maxcal, eta, xtol, stepmx, x, fsumsq, fvec, fjac, ldfjac, s, v, ldv, niter, nf, iw, liw, w, lw, ifail) 
Integer, Intent (In) :: m, n, iprint, maxcal, ldfjac, ldv, liw, lw
Integer, Intent (Inout) :: iw(liw), ifail
Integer, Intent (Out) :: niter, nf
Real (Kind=nag_wp), Intent (In) :: eta, xtol, stepmx
Real (Kind=nag_wp), Intent (Inout) :: x(n), fjac(ldfjac,n), v(ldv,n), w(lw)
Real (Kind=nag_wp), Intent (Out) :: fsumsq, fvec(m), s(n)
External :: lsqfun, lsqmon

C Header Interface
#include <nagmk26.h>
void 
e04gdf_ (const Integer *m, const Integer *n, void (NAG_CALL *lsqfun)(Integer *iflag, const Integer *m, const Integer *n, const double xc[], double fvec[], double fjac[], const Integer *ldfjac, Integer iw[], const Integer *liw, double w[], const Integer *lw), void (NAG_CALL *lsqmon)(const Integer *m, const Integer *n, const double xc[], const double fvec[], const double fjac[], const Integer *ldfjac, const double s[], const Integer *igrade, const Integer *niter, const Integer *nf, Integer iw[], const Integer *liw, double w[], const Integer *lw), const Integer *iprint, const Integer *maxcal, const double *eta, const double *xtol, const double *stepmx, double x[], double *fsumsq, double fvec[], double fjac[], const Integer *ldfjac, double s[], double v[], const Integer *ldv, Integer *niter, Integer *nf, Integer iw[], const Integer *liw, double w[], const Integer *lw, Integer *ifail) 

3
Description
e04gdf is essentially identical to the subroutine LSQFDN in the NPL Algorithms Library. It is applicable to problems of the form
$\underset{x}{\mathrm{Minimize}}F\left(x\right)=\sum _{i=1}^{m}{\left[{f}_{i}\left(x\right)\right]}^{2}$
where $x={\left({x}_{1},{x}_{2},\dots ,{x}_{n}\right)}^{\mathrm{T}}$ and $m\ge n$. (The functions ${f}_{i}\left(x\right)$ are often referred to as ‘residuals’.)
You must supply a subroutine to calculate the values of the ${f}_{i}\left(x\right)$ and their first derivatives $\frac{\partial {f}_{i}}{\partial {x}_{j}}$ at any point $x$.
From a starting point ${x}^{\left(1\right)}$ supplied by you, the routine generates a sequence of points ${x}^{\left(2\right)},{x}^{\left(3\right)},\dots$, which is intended to converge to a local minimum of $F\left(x\right)$. The sequence of points is given by
${x}^{\left(k+1\right)}={x}^{\left(k\right)}+{\alpha}^{\left(k\right)}{p}^{\left(k\right)}$
where the vector ${p}^{\left(k\right)}$ is a direction of search, and ${\alpha}^{\left(k\right)}$ is chosen such that $F\left({x}^{\left(k\right)}+{\alpha}^{\left(k\right)}{p}^{\left(k\right)}\right)$ is approximately a minimum with respect to ${\alpha}^{\left(k\right)}$.
The vector ${p}^{\left(k\right)}$ used depends upon the reduction in the sum of squares obtained during the last iteration. If the sum of squares was sufficiently reduced, then ${p}^{\left(k\right)}$ is the Gauss–Newton direction; otherwise finite difference estimates of the second derivatives of the ${f}_{i}\left(x\right)$ are taken into account.
The method is designed to ensure that steady progress is made whatever the starting point, and to have the rapid ultimate convergence of Newton's method.
4
References
Gill P E and Murray W (1978) Algorithms for the solution of the nonlinear least squares problem SIAM J. Numer. Anal. 15 977–992
5
Arguments
 1: $\mathbf{m}$ – Integer Input
 2: $\mathbf{n}$ – Integer Input

On entry: the number $m$ of residuals, ${f}_{i}\left(x\right)$, and the number $n$ of variables, ${x}_{j}$.
Constraint:
$1\le {\mathbf{n}}\le {\mathbf{m}}$.
 3: $\mathbf{lsqfun}$ – Subroutine, supplied by the user. External Procedure

lsqfun must calculate the vector of values
${f}_{i}\left(x\right)$ and Jacobian matrix of first derivatives
$\frac{\partial {f}_{i}}{\partial {x}_{j}}$ at any point
$x$. (However, if you do not wish to calculate the residuals or first derivatives at a particular
$x$, there is the option of setting an argument to cause
e04gdf to terminate immediately.)
The specification of
lsqfun is:
Fortran Interface
Subroutine lsqfun ( 
iflag, m, n, xc, fvec, fjac, ldfjac, iw, liw, w, lw) 
Integer, Intent (In) :: m, n, ldfjac, liw, lw
Integer, Intent (Inout) :: iflag, iw(liw)
Real (Kind=nag_wp), Intent (In) :: xc(n)
Real (Kind=nag_wp), Intent (Inout) :: fjac(ldfjac,n), w(lw)
Real (Kind=nag_wp), Intent (Out) :: fvec(m)

C Header Interface
#include <nagmk26.h>
void 
lsqfun (Integer *iflag, const Integer *m, const Integer *n, const double xc[], double fvec[], double fjac[], const Integer *ldfjac, Integer iw[], const Integer *liw, double w[], const Integer *lw) 

Note: the dimension declaration for
fjac must contain the variable
ldfjac, not an integer constant.
 1: $\mathbf{iflag}$ – Integer Input/Output

On entry: to
lsqfun,
iflag will be set to
$1$ or
$2$.
 ${\mathbf{iflag}}=1$
 Indicates that only the Jacobian matrix needs to be evaluated
 ${\mathbf{iflag}}=2$
 Indicates that both the residuals and the Jacobian matrix must be calculated
On exit: if it is not possible to evaluate the
${f}_{i}\left(x\right)$ or their first derivatives at the point given in
xc (or if it wished to stop the calculations for any other reason), you should reset
iflag to some negative number and return control to
e04gdf.
e04gdf will then terminate immediately, with
ifail set to your setting of
iflag.
 2: $\mathbf{m}$ – Integer Input

On entry: $m$, the number of residuals.
 3: $\mathbf{n}$ – Integer Input

On entry: $n$, the number of variables.
 4: $\mathbf{xc}\left({\mathbf{n}}\right)$ – Real (Kind=nag_wp) array Input

On entry: the point $x$ at which the values of the ${f}_{i}$ and the $\frac{\partial {f}_{i}}{\partial {x}_{j}}$ are required.
 5: $\mathbf{fvec}\left({\mathbf{m}}\right)$ – Real (Kind=nag_wp) array Output

On exit: unless
${\mathbf{iflag}}=1$ on entry, or
iflag is reset to a negative number,
${\mathbf{fvec}}\left(i\right)$ must contain the value of
${f}_{\mathit{i}}$ at the point
$x$, for
$\mathit{i}=1,2,\dots ,m$.
 6: $\mathbf{fjac}\left({\mathbf{ldfjac}},{\mathbf{n}}\right)$ – Real (Kind=nag_wp) array Output

On exit: unless
iflag is reset to a negative number,
${\mathbf{fjac}}\left(\mathit{i},\mathit{j}\right)$ must contain the value of
$\frac{\partial {f}_{\mathit{i}}}{\partial {x}_{\mathit{j}}}$ at the point
$x$, for
$\mathit{i}=1,2,\dots ,m$ and
$\mathit{j}=1,2,\dots ,n$.
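In the C interface, fjac is passed as a single array holding the matrix in the Fortran (column-major) layout implied by ldfjac; the 1-based element $\left(i,j\right)$ of the text above therefore lives at the offset sketched below. The helper function name is our own, not part of the Library:

```c
#include <stddef.h>

/* Column-major (Fortran-layout) offset of the 1-based element (i,j)
   of fjac, given its leading dimension ldfjac. */
size_t fjac_index(int i, int j, int ldfjac)
{
    return (size_t)(j - 1) * (size_t)ldfjac + (size_t)(i - 1);
}

/* so df_i/dx_j would be written as fjac[fjac_index(i, j, ldfjac)] */
```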
 7: $\mathbf{ldfjac}$ – Integer Input

On entry: the first dimension of the array
fjac, set to
$m$ by
e04gdf.
 8: $\mathbf{iw}\left({\mathbf{liw}}\right)$ – Integer array Workspace
 9: $\mathbf{liw}$ – Integer Input
 10: $\mathbf{w}\left({\mathbf{lw}}\right)$ – Real (Kind=nag_wp) array Workspace
 11: $\mathbf{lw}$ – Integer Input

lsqfun is called with
e04gdf's arguments
iw,
liw,
w,
lw as these arguments. They are present so that, when other library routines require the solution of a minimization subproblem, constants needed for the evaluation of residuals can be passed through
iw and
w. Similarly, you could pass quantities to
lsqfun from the segment which calls
e04gdf by using partitions of
iw and
w beyond those used as workspace by
e04gdf. However, because of the danger of mistakes in partitioning, it is recommended that you pass information to lsqfun via COMMON global variables and not use iw or w at all. In any case you
must not change the elements of
iw and
w used as workspace by
e04gdf.
lsqfun must either be a module subprogram USEd by, or declared as EXTERNAL in, the (sub)program from which
e04gdf is called. Arguments denoted as
Input must
not be changed by this procedure.
Note: lsqfun should not return floating-point NaN (Not a Number) or infinity values, since these are not handled by e04gdf. If your code inadvertently does return any NaNs or infinities, e04gdf is likely to produce unexpected results.
lsqfun should be tested separately before being used in conjunction with
e04gdf.
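The Library provides e04yaf for exactly this kind of check; the underlying idea — comparing the analytic Jacobian against forward differences — can also be sketched directly. The residual function below is a made-up linear example, not the routine's actual interface:

```c
#include <math.h>

#define M 3
#define N 2

/* Hypothetical residuals f_i(x) = x1 + x2*t_i - y_i, in the spirit of
   lsqfun, together with their analytic Jacobian. */
static const double t[M]  = {1.0, 2.0, 3.0};
static const double yd[M] = {1.1, 2.1, 2.9};

static void residuals(const double x[N], double f[M], double J[M][N])
{
    for (int i = 0; i < M; i++) {
        f[i] = x[0] + x[1] * t[i] - yd[i];
        J[i][0] = 1.0;        /* df_i/dx1 */
        J[i][1] = t[i];       /* df_i/dx2 */
    }
}

/* Largest discrepancy between the analytic Jacobian and a
   forward-difference estimate at the point x. */
double max_jacobian_error(const double x[N])
{
    const double h = 1e-7;
    double f0[M], f1[M], J[M][N], Jd[M][N], xp[N], err = 0.0;

    residuals(x, f0, J);
    for (int j = 0; j < N; j++) {
        for (int l = 0; l < N; l++) xp[l] = x[l];
        xp[j] += h;                      /* perturb one variable */
        residuals(xp, f1, Jd);
        for (int i = 0; i < M; i++) {
            double d = fabs((f1[i] - f0[i]) / h - J[i][j]);
            if (d > err) err = d;
        }
    }
    return err;
}
```

A large discrepancy at a typical point is a strong hint of a coding error in the derivatives, which is the most common cause of failures in routines of this kind.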
 4: $\mathbf{lsqmon}$ – Subroutine, supplied by the NAG Library or the user. External Procedure

If
${\mathbf{iprint}}\ge 0$, you must supply
lsqmon which is suitable for monitoring the minimization process.
lsqmon must not change the values of any of its arguments.
If
${\mathbf{iprint}}<0$, the dummy routine e04fdz can be used as
lsqmon.
The specification of
lsqmon is:
Fortran Interface
Subroutine lsqmon ( 
m, n, xc, fvec, fjac, ldfjac, s, igrade, niter, nf, iw, liw, w, lw) 
Integer, Intent (In) :: m, n, ldfjac, igrade, niter, nf, liw, lw
Integer, Intent (Inout) :: iw(liw)
Real (Kind=nag_wp), Intent (In) :: xc(n), fvec(m), fjac(ldfjac,n), s(n)
Real (Kind=nag_wp), Intent (Inout) :: w(lw)

C Header Interface
#include <nagmk26.h>
void 
lsqmon (const Integer *m, const Integer *n, const double xc[], const double fvec[], const double fjac[], const Integer *ldfjac, const double s[], const Integer *igrade, const Integer *niter, const Integer *nf, Integer iw[], const Integer *liw, double w[], const Integer *lw) 

Note: the dimension declaration for
fjac must contain the variable
ldfjac, not an integer constant.
 1: $\mathbf{m}$ – Integer Input

On entry: $m$, the number of residuals.
 2: $\mathbf{n}$ – Integer Input

On entry: $n$, the number of variables.
 3: $\mathbf{xc}\left({\mathbf{n}}\right)$ – Real (Kind=nag_wp) array Input

On entry: the coordinates of the current point $x$.
 4: $\mathbf{fvec}\left({\mathbf{m}}\right)$ – Real (Kind=nag_wp) array Input

On entry: the values of the residuals ${f}_{i}$ at the current point $x$.
 5: $\mathbf{fjac}\left({\mathbf{ldfjac}},{\mathbf{n}}\right)$ – Real (Kind=nag_wp) array Input

On entry: ${\mathbf{fjac}}\left(\mathit{i},\mathit{j}\right)$ contains the value of $\frac{\partial {f}_{\mathit{i}}}{\partial {x}_{\mathit{j}}}$ at the current point $x$, for $\mathit{i}=1,2,\dots ,m$ and $\mathit{j}=1,2,\dots ,n$.
 6: $\mathbf{ldfjac}$ – Integer Input

On entry: the first dimension of the array
fjac as declared in the (sub)program from which
e04gdf is called.
 7: $\mathbf{s}\left({\mathbf{n}}\right)$ – Real (Kind=nag_wp) array Input

On entry: the singular values of the current Jacobian matrix. Thus
s may be useful as information about the structure of your problem. (If
${\mathbf{iprint}}>0$,
lsqmon is called at the initial point before the singular values have been calculated. So the elements of
s are set to zero for the first call of
lsqmon.)
 8: $\mathbf{igrade}$ – Integer Input

On entry:
e04gdf estimates the dimension of the subspace for which the Jacobian matrix can be used as a valid approximation to the curvature (see
Gill and Murray (1978)). This estimate is called the grade of the Jacobian matrix, and
igrade gives its current value.
 9: $\mathbf{niter}$ – Integer Input

On entry: the number of iterations which have been performed in e04gdf.
 10: $\mathbf{nf}$ – Integer Input

On entry: the number of times that
lsqfun has been called so far with
${\mathbf{iflag}}=2$. (In addition to these calls monitored by
nf,
lsqfun is called not more than
n times per iteration with
iflag set to
$1$.)
 11: $\mathbf{iw}\left({\mathbf{liw}}\right)$ – Integer array Workspace
 12: $\mathbf{liw}$ – Integer Input
 13: $\mathbf{w}\left({\mathbf{lw}}\right)$ – Real (Kind=nag_wp) array Workspace
 14: $\mathbf{lw}$ – Integer Input

As in
lsqfun, these arguments correspond to the arguments
iw,
liw,
w,
lw of
e04gdf. They are included in
lsqmon's argument list primarily for when
e04gdf is called by other library routines.
lsqmon must either be a module subprogram USEd by, or declared as EXTERNAL in, the (sub)program from which
e04gdf is called. Arguments denoted as
Input must
not be changed by this procedure.
Note: you should normally print the sum of squares of residuals, so as to be able to examine the sequence of values of
$F\left(x\right)$ mentioned in
Section 7. It is usually also helpful to print
xc, the gradient of the sum of squares,
niter and
nf.
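Both quantities suggested above can be formed inside lsqmon directly from fvec and fjac: $F\left(x\right)=\sum _{i}{f}_{i}^{2}$ and ${g}_{j}=2\sum _{i}\frac{\partial {f}_{i}}{\partial {x}_{j}}{f}_{i}$. A sketch (the row-major storage of J here is purely for illustration and is not the fjac layout):

```c
/* F(x) = sum_i f_i^2 and its gradient g_j = 2 * sum_i J_ij * f_i:
   the quantities one would typically print from lsqmon.
   J is stored row-major here purely for illustration. */

double sum_of_squares(int m, const double f[])
{
    double F = 0.0;
    for (int i = 0; i < m; i++) F += f[i] * f[i];
    return F;
}

void gradient(int m, int n, const double f[], const double J[], double g[])
{
    for (int j = 0; j < n; j++) {
        g[j] = 0.0;
        for (int i = 0; i < m; i++)
            g[j] += 2.0 * J[i * n + j] * f[i];   /* 2 * J^T f */
    }
}
```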
 5: $\mathbf{iprint}$ – Integer Input

On entry: the frequency with which
lsqmon is to be called.
 ${\mathbf{iprint}}>0$
 lsqmon is called once every iprint iterations and just before exit from e04gdf.
 ${\mathbf{iprint}}=0$
 lsqmon is just called at the final point.
 ${\mathbf{iprint}}<0$
 lsqmon is not called at all.
iprint should normally be set to a small positive number.
Suggested value:
${\mathbf{iprint}}=1$.
 6: $\mathbf{maxcal}$ – Integer Input

On entry: enables you to limit the number of times that
lsqfun is called by
e04gdf. There will be an error exit (see
Section 6) after
maxcal evaluations of the residuals (i.e., calls of
lsqfun with
iflag set to
$2$). It should be borne in mind that, in addition to the calls of
lsqfun which are limited directly by
maxcal, there will be calls of
lsqfun (with
iflag set to
$1$) to evaluate only first derivatives.
Suggested value:
${\mathbf{maxcal}}=50\times n$.
Constraint:
${\mathbf{maxcal}}\ge 1$.
 7: $\mathbf{eta}$ – Real (Kind=nag_wp) Input

On entry: every iteration of
e04gdf involves a linear minimization, i.e., minimization of
$F\left({x}^{\left(k\right)}+{\alpha}^{\left(k\right)}{p}^{\left(k\right)}\right)$ with respect to
${\alpha}^{\left(k\right)}$.
eta specifies how accurately these linear minimizations are to be performed. The minimum with respect to
${\alpha}^{\left(k\right)}$ will be located more accurately for small values of
eta (say,
$0.01$) than for large values (say,
$0.9$).
Although accurate linear minimizations will generally reduce the number of iterations, they will tend to increase the number of calls of
lsqfun (with
iflag set to
$2$) needed for each linear minimization. On balance it is usually efficient to perform a low accuracy linear minimization.
Suggested value:
${\mathbf{eta}}=0.5$ (${\mathbf{eta}}=0.0$ if ${\mathbf{n}}=1$).
Constraint:
$0.0\le {\mathbf{eta}}<1.0$.
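One way to picture the role of eta (a sketch of the idea only, not e04gdf's actual line search): accept a step length $\alpha $ once the directional derivative ${\varphi}^{\prime}\left(\alpha \right)$ of $\varphi \left(\alpha \right)=F\left({x}^{\left(k\right)}+\alpha {p}^{\left(k\right)}\right)$ has magnitude at most eta times $\left|{\varphi}^{\prime}\left(0\right)\right|$, so small eta forces a more accurate minimization. The quadratic test function is made up:

```c
#include <math.h>

/* Bisection-based sketch: accept alpha once |phi'(alpha)| <=
   eta * |phi'(0)|.  Assumes phi'(0) < 0 and phi'(alpha_hi) > 0,
   so a minimizer is bracketed in (0, alpha_hi). */
double line_search(double (*dphi)(double), double eta, double alpha_hi)
{
    double d0 = fabs(dphi(0.0));
    double lo = 0.0, hi = alpha_hi, a = alpha_hi;
    for (int it = 0; it < 200; it++) {
        double d = dphi(a);
        if (fabs(d) <= eta * d0) break;     /* accurate enough for this eta */
        if (d > 0.0) hi = a; else lo = a;   /* keep the minimizer bracketed */
        a = 0.5 * (lo + hi);
    }
    return a;
}

/* Made-up test function: phi(a) = (a - 0.7)^2, so phi'(a) = 2(a - 0.7). */
static double dphi_quad(double a) { return 2.0 * (a - 0.7); }

double demo_line_search(double eta) { return line_search(dphi_quad, eta, 2.0); }
```

With eta = 0.9 the sketch stops after very few derivative evaluations; with eta = 0.01 it bisects almost all the way to the exact minimizer, mirroring the accuracy/cost trade-off described above.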
 8: $\mathbf{xtol}$ – Real (Kind=nag_wp) Input

On entry: the accuracy in
$x$ to which the solution is required.
If ${x}_{\mathrm{true}}$ is the true value of $x$ at the minimum, then ${x}_{\mathrm{sol}}$, the estimated position before a normal exit, is such that
$\Vert {x}_{\mathrm{sol}}-{x}_{\mathrm{true}}\Vert <{\mathbf{xtol}}\times \left(1.0+\Vert {x}_{\mathrm{true}}\Vert \right)$
where $\Vert y\Vert =\sqrt{{\displaystyle \sum _{j=1}^{n}}{y}_{j}^{2}}$. For example, if the elements of ${x}_{\mathrm{sol}}$ are not much larger than $1.0$ in modulus and if ${\mathbf{xtol}}=1.0\times {10}^{-5}$, then ${x}_{\mathrm{sol}}$ is usually accurate to about five decimal places. (For further details see
Section 7.)
If
$F\left(x\right)$ and the variables are scaled roughly as described in
Section 9 and
$\epsilon $ is the
machine precision, then a setting of order
${\mathbf{xtol}}=\sqrt{\epsilon}$ will usually be appropriate. If
xtol is set to
$0.0$ or some positive value less than
$10\epsilon $,
e04gdf will use
$10\epsilon $ instead of
xtol, since
$10\epsilon $ is probably the smallest reasonable setting.
Constraint:
${\mathbf{xtol}}\ge 0.0$.
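The xtol safeguard and the suggested $\sqrt{\epsilon}$ default transcribe directly (using the C double-precision machine epsilon as a stand-in for the Library's machine precision):

```c
#include <float.h>
#include <math.h>

/* Tolerance actually used by the routine: xtol, but never less than
   10*eps, the smallest reasonable setting described above. */
double effective_xtol(double xtol)
{
    double lim = 10.0 * DBL_EPSILON;
    return (xtol < lim) ? lim : xtol;
}

/* The setting suggested for a well-scaled problem: sqrt(eps). */
double suggested_xtol(void) { return sqrt(DBL_EPSILON); }
```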
 9: $\mathbf{stepmx}$ – Real (Kind=nag_wp) Input

On entry: an estimate of the Euclidean distance between the solution and the starting point supplied by you. (For maximum efficiency, a slight overestimate is preferable.) e04gdf will ensure that, for each iteration,
$\Vert {x}^{\left(k\right)}-{x}^{\left(1\right)}\Vert \le {\mathbf{stepmx}}$
where $k$ is the iteration number. Thus, if the problem has more than one solution, e04gdf is most likely to find the one nearest to the starting point. On difficult problems, a realistic choice can prevent the sequence of ${x}^{\left(k\right)}$ entering a region where the problem is ill-behaved and can help avoid overflow in the evaluation of $F\left(x\right)$. However, an underestimate of stepmx can lead to inefficiency.
Suggested value:
${\mathbf{stepmx}}=100000.0$.
Constraint:
${\mathbf{stepmx}}\ge {\mathbf{xtol}}$.
 10: $\mathbf{x}\left({\mathbf{n}}\right)$ – Real (Kind=nag_wp) array Input/Output

On entry: ${\mathbf{x}}\left(\mathit{j}\right)$ must be set to a guess at the $\mathit{j}$th component of the position of the minimum, for $\mathit{j}=1,2,\dots ,n$.
On exit: the final point ${x}^{\left(k\right)}$. Thus, if ${\mathbf{ifail}}={\mathbf{0}}$ on exit, ${\mathbf{x}}\left(j\right)$ is the $j$th component of the estimated position of the minimum.
 11: $\mathbf{fsumsq}$ – Real (Kind=nag_wp) Output

On exit: the value of
$F\left(x\right)$, the sum of squares of the residuals
${f}_{i}\left(x\right)$, at the final point given in
x.
 12: $\mathbf{fvec}\left({\mathbf{m}}\right)$ – Real (Kind=nag_wp) array Output

On exit: the value of the residual
${f}_{\mathit{i}}\left(x\right)$ at the final point given in
x, for
$\mathit{i}=1,2,\dots ,m$.
 13: $\mathbf{fjac}\left({\mathbf{ldfjac}},{\mathbf{n}}\right)$ – Real (Kind=nag_wp) array Output

On exit: the value of the first derivative
$\frac{\partial {f}_{\mathit{i}}}{\partial {x}_{\mathit{j}}}$ evaluated at the final point given in
x, for
$\mathit{i}=1,2,\dots ,m$ and
$\mathit{j}=1,2,\dots ,n$.
 14: $\mathbf{ldfjac}$ – Integer Input

On entry: the first dimension of the array
fjac as declared in the (sub)program from which
e04gdf is called.
Constraint:
${\mathbf{ldfjac}}\ge {\mathbf{m}}$.
 15: $\mathbf{s}\left({\mathbf{n}}\right)$ – Real (Kind=nag_wp) array Output

On exit: the singular values of the Jacobian matrix at the final point. Thus
s may be useful as information about the structure of your problem.
 16: $\mathbf{v}\left({\mathbf{ldv}},{\mathbf{n}}\right)$ – Real (Kind=nag_wp) array Output

On exit: the matrix $V$ associated with the singular value decomposition
$J=US{V}^{\mathrm{T}}$
of the Jacobian matrix at the final point, stored by columns. This matrix may be useful for statistical purposes, since it is the matrix of orthonormalized eigenvectors of ${J}^{\mathrm{T}}J$.
 17: $\mathbf{ldv}$ – Integer Input

On entry: the first dimension of the array
v as declared in the (sub)program from which
e04gdf is called.
Constraint:
${\mathbf{ldv}}\ge {\mathbf{n}}$.
 18: $\mathbf{niter}$ – Integer Output

On exit: the number of iterations which have been performed in e04gdf.
 19: $\mathbf{nf}$ – Integer Output

On exit: the number of times that the residuals have been evaluated (i.e., number of calls of
lsqfun with
iflag set to
$2$).
 20: $\mathbf{iw}\left({\mathbf{liw}}\right)$ – Integer array Workspace
 21: $\mathbf{liw}$ – Integer Input

On entry: the dimension of the array
iw as declared in the (sub)program from which
e04gdf is called.
Constraint:
${\mathbf{liw}}\ge 1$.
 22: $\mathbf{w}\left({\mathbf{lw}}\right)$ – Real (Kind=nag_wp) array Workspace
 23: $\mathbf{lw}$ – Integer Input

On entry: the dimension of the array
w as declared in the (sub)program from which
e04gdf is called.
Constraints:
 if ${\mathbf{n}}>1$, ${\mathbf{lw}}\ge 7\times {\mathbf{n}}+{\mathbf{m}}\times {\mathbf{n}}+2\times {\mathbf{m}}+{\mathbf{n}}\times {\mathbf{n}}$;
 if ${\mathbf{n}}=1$, ${\mathbf{lw}}\ge 9+3\times {\mathbf{m}}$.
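The workspace constraints above transcribe directly into a helper for sizing w:

```c
/* Minimum lw from the constraints above:
   7n + mn + 2m + n^2 for n > 1, and 9 + 3m for n = 1. */
int lw_required(int m, int n)
{
    return (n > 1) ? 7 * n + m * n + 2 * m + n * n : 9 + 3 * m;
}
```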
 24: $\mathbf{ifail}$ – Integer Input/Output

On entry: ifail must be set to $0$, $-1$ or $1$. If you are unfamiliar with this argument you should refer to
Section 3.4 in How to Use the NAG Library and its Documentation for details.
For environments where it might be inappropriate to halt program execution when an error is detected, the value $-1$ or $1$ is recommended. If the output of error messages is undesirable, then the value $1$ is recommended. Otherwise, because for this routine the values of the output arguments may be useful even if ${\mathbf{ifail}}\ne {\mathbf{0}}$ on exit, the recommended value is $-1$.
When the value $-1$ or $1$ is used it is essential to test the value of ifail on exit.
On exit:
${\mathbf{ifail}}={\mathbf{0}}$ unless the routine detects an error or a warning has been flagged (see
Section 6).
6
Error Indicators and Warnings
If on entry
${\mathbf{ifail}}=0$ or
$1$, explanatory error messages are output on the current error message unit (as defined by
x04aaf).
Note: e04gdf may return useful information for one or more of the following detected errors or warnings.
Errors or warnings detected by the routine:
 ${\mathbf{ifail}}<0$

A negative value of
ifail indicates an exit from
e04gdf because you have set
iflag negative in
lsqfun. The value of
ifail will be the same as your setting of
iflag.
 ${\mathbf{ifail}}=1$

On entry,  ${\mathbf{n}}<1$, 
or  ${\mathbf{m}}<{\mathbf{n}}$, 
or  ${\mathbf{maxcal}}<1$, 
or  ${\mathbf{eta}}<0.0$, 
or  ${\mathbf{eta}}\ge 1.0$, 
or  ${\mathbf{xtol}}<0.0$, 
or  ${\mathbf{stepmx}}<{\mathbf{xtol}}$, 
or  ${\mathbf{ldfjac}}<{\mathbf{m}}$, 
or  ${\mathbf{ldv}}<{\mathbf{n}}$, 
or  ${\mathbf{liw}}<1$, 
or  ${\mathbf{lw}}<7\times {\mathbf{n}}+{\mathbf{m}}\times {\mathbf{n}}+2\times {\mathbf{m}}+{\mathbf{n}}\times {\mathbf{n}}$ when ${\mathbf{n}}>1$, 
or  ${\mathbf{lw}}<9+3\times {\mathbf{m}}$ when ${\mathbf{n}}=1$. 
When this exit occurs, no values will have been assigned to
fsumsq, or to the elements of
fvec,
fjac,
s or
v.
 ${\mathbf{ifail}}=2$

There have been
maxcal evaluations of the residuals. If steady reductions in the sum of squares,
$F\left(x\right)$, were monitored up to the point where this exit occurred, then the exit probably occurred simply because
maxcal was set too small, so the calculations should be restarted from the final point held in
x. This exit may also indicate that
$F\left(x\right)$ has no minimum.
 ${\mathbf{ifail}}=3$
The conditions for a minimum have not all been satisfied, but a lower point could not be found. This could be because
xtol has been set so small that rounding errors in the evaluation of the residuals and derivatives make attainment of the convergence conditions impossible. See
Section 7 for further information.
 ${\mathbf{ifail}}=4$

The method for computing the singular value decomposition of the Jacobian matrix has failed to converge in a reasonable number of subiterations. It may be worth applying e04gdf again starting with an initial approximation which is not too close to the point at which the failure occurred.
 ${\mathbf{ifail}}=-99$
An unexpected error has been triggered by this routine. Please
contact
NAG.
See
Section 3.9 in How to Use the NAG Library and its Documentation for further information.
 ${\mathbf{ifail}}=-399$
Your licence key may have expired or may not have been installed correctly.
See
Section 3.8 in How to Use the NAG Library and its Documentation for further information.
 ${\mathbf{ifail}}=-999$
Dynamic memory allocation failed.
See
Section 3.7 in How to Use the NAG Library and its Documentation for further information.
The values
${\mathbf{ifail}}={\mathbf{2}}$,
${\mathbf{3}}$ or
${\mathbf{4}}$ may also be caused by mistakes in
lsqfun, by the formulation of the problem or by an awkward function. If there are no such mistakes it is worth restarting the calculations from a different starting point (not the point at which the failure occurred) in order to avoid the region which caused the failure.
7
Accuracy
A successful exit (
${\mathbf{ifail}}={\mathbf{0}}$) is made from
e04gdf when the matrix of approximate second derivatives of
$F\left(x\right)$ is positive definite, and when (B1, B2 and B3) or B4 or B5 hold, where
B1 $\equiv {\alpha}^{\left(k\right)}\times \Vert {p}^{\left(k\right)}\Vert <\left({\mathbf{xtol}}+\epsilon \right)\times \left(1.0+\Vert {x}^{\left(k\right)}\Vert \right)$
B2 $\equiv \left|{F}^{\left(k\right)}-{F}^{\left(k-1\right)}\right|<{\left({\mathbf{xtol}}+\epsilon \right)}^{2}\times \left(1.0+{F}^{\left(k\right)}\right)$
B3 $\equiv \Vert {g}^{\left(k\right)}\Vert <{\epsilon}^{1/4}\times \left(1.0+{F}^{\left(k\right)}\right)$
B4 $\equiv {F}^{\left(k\right)}<{\epsilon}^{2}$
B5 $\equiv \Vert {g}^{\left(k\right)}\Vert <{\left(\epsilon \times {F}^{\left(k\right)}\right)}^{1/2}$
and where
$\Vert .\Vert $ and
$\epsilon $ are as defined in
xtol, and
${F}^{\left(k\right)}$ and
${g}^{\left(k\right)}$ are the values of
$F\left(x\right)$ and its vector of estimated first derivatives at
${x}^{\left(k\right)}$.
If
${\mathbf{ifail}}={\mathbf{0}}$ then the vector in
x on exit,
${x}_{\mathrm{sol}}$, is almost certainly an estimate of
${x}_{\mathrm{true}}$, the position of the minimum to the accuracy specified by
xtol.
If
${\mathbf{ifail}}={\mathbf{3}}$, then
${x}_{\mathrm{sol}}$ may still be a good estimate of
${x}_{\mathrm{true}}$, but to verify this you should make the following checks. If
(a) 
the sequence $\left\{F\left({x}^{\left(k\right)}\right)\right\}$ converges to $F\left({x}_{\mathrm{sol}}\right)$ at a superlinear or a fast linear rate, and 
(b) 
$g{\left({x}_{\mathrm{sol}}\right)}^{\mathrm{T}}g\left({x}_{\mathrm{sol}}\right)<10\epsilon $, where $\mathrm{T}$ denotes transpose, then it is almost certain that ${x}_{\mathrm{sol}}$ is a close approximation to the minimum. 
When
(b) is true, then usually
$F\left({x}_{\mathrm{sol}}\right)$ is a close approximation to
$F\left({x}_{\mathrm{true}}\right)$. The values of
$F\left({x}^{\left(k\right)}\right)$ can be calculated in
lsqmon, and the vector
$g\left({x}_{\mathrm{sol}}\right)$ can be calculated from the contents of
fvec and
fjac on exit from
e04gdf.
Further suggestions about confirmation of a computed solution are given in the
E04 Chapter Introduction.
8
Parallelism and Performance
e04gdf is threaded by NAG for parallel execution in multithreaded implementations of the NAG Library.
e04gdf makes calls to BLAS and/or LAPACK routines, which may be threaded within the vendor library used by this implementation. Consult the documentation for the vendor library for further information.
Please consult the X06 Chapter Introduction for information on how to control and interrogate the OpenMP environment used within this routine. Please also consult the Users' Note for your implementation for any additional implementation-specific information.
9
Further Comments
The number of iterations required depends on the number of variables, the number of residuals, the behaviour of
$F\left(x\right)$, the accuracy demanded and the distance of the starting point from the solution. The number of multiplications performed per iteration of
e04gdf varies, but for
$m\gg n$ is approximately
$n\times {m}^{2}+\mathit{O}\left({n}^{3}\right)$. In addition, each iteration makes at least one call of
lsqfun. So, unless the residuals and their derivatives can be evaluated very quickly, the run time will be dominated by the time spent in
lsqfun.
Ideally, the problem should be scaled so that, at the solution, $F\left(x\right)$ and the corresponding values of the ${x}_{j}$ are each in the range $\left(-1,+1\right)$, and so that at points one unit away from the solution, $F\left(x\right)$ differs from its value at the solution by approximately one unit. This will usually imply that the Hessian matrix of $F\left(x\right)$ at the solution is well-conditioned. It is unlikely that you will be able to follow these recommendations very closely, but it is worth trying (by guesswork), as sensible scaling will reduce the difficulty of the minimization problem, so that e04gdf will take less computer time.
When the sum of squares represents the goodness-of-fit of a nonlinear model to observed data, elements of the variance-covariance matrix of the estimated regression coefficients can be computed by a subsequent call to
e04ycf, using information returned in the arrays
s and
v. See
e04ycf for further details.
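The underlying linear algebra: with $J=US{V}^{\mathrm{T}}$, the usual least squares variance-covariance estimate is ${\sigma}^{2}{\left({J}^{\mathrm{T}}J\right)}^{-1}={\sigma}^{2}V{S}^{-2}{V}^{\mathrm{T}}$ with ${\sigma}^{2}={\mathbf{fsumsq}}/\left(m-n\right)$, which is (in outline) what e04ycf forms from s and v. The sketch below is not the e04ycf interface, and its row-major storage of v is purely for illustration:

```c
/* c[j*n+k] = sigma2 * sum_l v[j*n+l] * v[k*n+l] / s[l]^2, where
   sigma2 = fsumsq / (m - n).  v is the n-by-n matrix V, row-major
   here for illustration; s holds the singular values of J. */
void covariance(int m, int n, double fsumsq,
                const double s[], const double v[], double c[])
{
    double sigma2 = fsumsq / (double)(m - n);
    for (int j = 0; j < n; j++)
        for (int k = 0; k < n; k++) {
            double sum = 0.0;
            for (int l = 0; l < n; l++)
                sum += v[j * n + l] * v[k * n + l] / (s[l] * s[l]);
            c[j * n + k] = sigma2 * sum;   /* sigma^2 * (V S^-2 V^T)_jk */
        }
}
```

Small singular values in s inflate the corresponding variances, which is why s is also worth inspecting directly as an indicator of ill-conditioning.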
10
Example
This example finds least squares estimates of ${x}_{1}$, ${x}_{2}$ and ${x}_{3}$ in the model
$y={x}_{1}+\frac{{t}_{1}}{{x}_{2}{t}_{2}+{x}_{3}{t}_{3}}$
using the $15$ sets of data given in the following table.
Before calling
e04gdf, the program calls
e04yaf to check
lsqfun. It uses
$\left(0.5,1.0,1.5\right)$ as the initial guess at the position of the minimum.
10.1
Program Text
Program Text (e04gdfe.f90)
10.2
Program Data
Program Data (e04gdfe.d)
10.3
Program Results
Program Results (e04gdfe.r)