e04lb is a comprehensive modified Newton algorithm for finding:

- an unconstrained minimum of a function of several variables
- a minimum of a function of several variables subject to fixed upper and/or lower bounds on the variables.

# Syntax

C#

```csharp
public static void e04lb(
    int n, E04.E04LB_FUNCT funct, E04.E04LB_H h, E04.E04LB_MONIT monit,
    int iprint, int maxcal, double eta, double xtol, double stepmx,
    int ibound, double[] bl, double[] bu, double[] x,
    double[] hesl, double[] hesd, int[] istate,
    out double f, double[] g, out int ifail)
```

Visual Basic

```vb
Public Shared Sub e04lb ( _
    n As Integer, funct As E04.E04LB_FUNCT, h As E04.E04LB_H, _
    monit As E04.E04LB_MONIT, iprint As Integer, maxcal As Integer, _
    eta As Double, xtol As Double, stepmx As Double, ibound As Integer, _
    bl As Double(), bu As Double(), x As Double(), _
    hesl As Double(), hesd As Double(), istate As Integer(), _
    <OutAttribute> ByRef f As Double, g As Double(), _
    <OutAttribute> ByRef ifail As Integer _
)
```

Visual C++

```cpp
public:
static void e04lb(
    int n, E04::E04LB_FUNCT^ funct, E04::E04LB_H^ h, E04::E04LB_MONIT^ monit,
    int iprint, int maxcal, double eta, double xtol, double stepmx,
    int ibound, array<double>^ bl, array<double>^ bu, array<double>^ x,
    array<double>^ hesl, array<double>^ hesd, array<int>^ istate,
    [OutAttribute] double% f, array<double>^ g, [OutAttribute] int% ifail)
```

F#

```fsharp
static member e04lb :
    n : int * funct : E04.E04LB_FUNCT * h : E04.E04LB_H *
    monit : E04.E04LB_MONIT * iprint : int * maxcal : int *
    eta : float * xtol : float * stepmx : float * ibound : int *
    bl : float[] * bu : float[] * x : float[] *
    hesl : float[] * hesd : float[] * istate : int[] *
    f : float byref * g : float[] * ifail : int byref -> unit
```

#### Parameters

- n
- Type: System.Int32
*On entry*: the number $n$ of independent variables. *Constraint*: ${\mathbf{n}}\ge 1$.

- funct
- Type: NagLibrary.E04.E04LB_FUNCT
funct must evaluate the function $F\left(x\right)$ and its first derivatives $\frac{\partial F}{\partial {x}_{j}}$ at any point $x$. (However, if you do not wish to calculate $F\left(x\right)$ or its first derivatives at a particular $x$, there is the option of setting a parameter to cause e04lb to terminate immediately.)
A delegate of type E04LB_FUNCT.

- h
- Type: NagLibrary.E04.E04LB_H
h must calculate the second derivatives of $F$ at any point $x$. (As with funct, there is the option of causing e04lb to terminate immediately.)
A delegate of type E04LB_H.

- monit
- Type: NagLibrary.E04.E04LB_MONIT
If ${\mathbf{iprint}}\ge 0$, you must supply monit, which is suitable for monitoring the minimization process. monit must not change the values of any of its parameters. If ${\mathbf{iprint}}<0$, a monit with the correct parameter list should still be supplied, although it will not be called.
A delegate of type E04LB_MONIT.

- iprint
- Type: System.Int32
*On entry*: the frequency with which monit is to be called.
- ${\mathbf{iprint}}>0$
- monit is called once every iprint iterations and just before exit from e04lb.
- ${\mathbf{iprint}}=0$
- monit is just called at the final point.
- ${\mathbf{iprint}}<0$
- monit is not called at all.

iprint should normally be set to a small positive number. *Suggested value*: ${\mathbf{iprint}}=1$.

- maxcal
- Type: System.Int32
*On entry*: the maximum permitted number of evaluations of $F\left(x\right)$, i.e., the maximum permitted number of calls of funct. *Suggested value*: ${\mathbf{maxcal}}=50\times {\mathbf{n}}$. *Constraint*: ${\mathbf{maxcal}}\ge 1$.

- eta
- Type: System.Double
*On entry*: every iteration of e04lb involves a linear minimization (i.e., minimization of $F\left(x+\alpha p\right)$ with respect to $\alpha $). eta specifies how accurately these linear minimizations are to be performed. The minimum with respect to $\alpha $ will be located more accurately for small values of eta (say, $0.01$) than for large values (say, $0.9$).

Although accurate linear minimizations will generally reduce the number of iterations of e04lb, this usually results in an increase in the number of function and gradient evaluations required for each iteration. On balance, it is usually more efficient to perform a low accuracy linear minimization.

*Suggested value*: ${\mathbf{eta}}=0.9$ is usually a good choice, although a smaller value may be warranted if the matrix of second derivatives is expensive to compute compared with the function and first derivatives. If ${\mathbf{n}}=1$, eta should be set to $0.0$ (also when the problem is effectively one-dimensional even though $n>1$; i.e., if for all except one of the variables the lower and upper bounds are equal).

*Constraint*: $0.0\le {\mathbf{eta}}<1.0$.

- xtol
- Type: System.Double
*On entry*: the accuracy in $x$ to which the solution is required. If ${x}_{\mathrm{true}}$ is the true value of $x$ at the minimum, then ${x}_{\mathrm{sol}}$, the estimated position before a normal exit, is such that $\Vert {x}_{\mathrm{sol}}-{x}_{\mathrm{true}}\Vert <{\mathbf{xtol}}\times \left(1.0+\Vert {x}_{\mathrm{true}}\Vert \right)$, where $\Vert y\Vert =\sqrt{{\displaystyle \sum _{j=1}^{n}}{y}_{j}^{2}}$. For example, if the elements of ${x}_{\mathrm{sol}}$ are not much larger than $1.0$ in modulus, and if xtol is set to ${10}^{-5}$, then ${x}_{\mathrm{sol}}$ is usually accurate to about five decimal places. (For further details see [Accuracy].)

If the problem is scaled roughly as described in [Further Comments] and $\epsilon $ is the machine precision, then $\sqrt{\epsilon}$ is probably the smallest reasonable choice for xtol. (This is because, normally, to machine accuracy, $F\left(x+\sqrt{\epsilon}\,{e}_{j}\right)=F\left(x\right)$, where ${e}_{j}$ is any column of the identity matrix.)

*Suggested value*: ${\mathbf{xtol}}=0.0$. *Constraint*: ${\mathbf{xtol}}\ge 0.0$.
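
The $\sqrt{\epsilon}$ lower limit mentioned above is easy to compute. A minimal Python sketch (illustrative only, not part of the NAG interface) of choosing a defensible xtol for a well-scaled problem:

```python
import math
import sys

# Machine precision (unit roundoff) for IEEE double precision arithmetic.
eps = sys.float_info.epsilon          # 2^-52, about 2.22e-16

# Per the text above: sqrt(eps) is probably the smallest reasonable xtol,
# because perturbing any x_j by sqrt(eps) normally leaves F(x) unchanged
# to machine accuracy.
smallest_reasonable_xtol = math.sqrt(eps)   # 2^-26, about 1.49e-8
```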

- stepmx
- Type: System.Double
*On entry*: an estimate of the Euclidean distance between the solution and the starting point supplied by you. (For maximum efficiency a slight overestimate is preferable.) e04lb will ensure that, for each iteration,

$$\sqrt{\sum _{j=1}^{n}{\left[{x}_{j}^{\left(k\right)}-{x}_{j}^{\left(k-1\right)}\right]}^{2}}\le {\mathbf{stepmx}}$$

where $k$ is the iteration number. Thus, if the problem has more than one solution, e04lb is most likely to find the one nearest to the starting point. On difficult problems, a realistic choice can prevent the sequence of ${x}^{\left(k\right)}$ entering a region where the problem is ill-behaved and can also help to avoid possible overflow in the evaluation of $F\left(x\right)$. However, an underestimate of stepmx can lead to inefficiency.

*Suggested value*: ${\mathbf{stepmx}}=100000.0$. *Constraint*: ${\mathbf{stepmx}}\ge {\mathbf{xtol}}$.

- ibound
- Type: System.Int32
*On entry*: specifies whether the problem is unconstrained or bounded. If there are bounds on the variables, ibound can be used to indicate whether the facility for dealing with bounds of special forms is to be used. It must be set to one of the following values:
- ${\mathbf{ibound}}=0$
- If the variables are bounded and you are supplying all the ${l}_{j}$ and ${u}_{j}$ individually.
- ${\mathbf{ibound}}=1$
- If the problem is unconstrained.
- ${\mathbf{ibound}}=2$
- If the variables are bounded, but all the bounds are of the form $0\le {x}_{j}$.
- ${\mathbf{ibound}}=3$
- If all the variables are bounded, and ${l}_{1}={l}_{2}=\cdots ={l}_{n}$ and ${u}_{1}={u}_{2}=\cdots ={u}_{n}$.
- ${\mathbf{ibound}}=4$
- If the problem is unconstrained. (The ${\mathbf{ibound}}=4$ option is provided purely for consistency with other methods. In e04lb it produces the same effect as ${\mathbf{ibound}}=1$.)

*Constraint*: $0\le {\mathbf{ibound}}\le 4$.
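
The effect of the ibound options on the bounds actually used (including the exit values documented for bl and bu below) can be sketched in Python. This is an illustration of the convention with hypothetical names, not NAG code:

```python
def effective_bounds(ibound, bl, bu, n, big=1.0e6):
    """Return the (lower, upper) bounds selected by ibound.

    Illustrative sketch only: mirrors the option list above, with the
    +/-1e6 defaults the text describes for unspecified bounds."""
    if ibound == 0:            # all l_j and u_j supplied individually
        return list(bl), list(bu)
    if ibound in (1, 4):       # unconstrained (4 behaves like 1 here)
        return [-big] * n, [big] * n
    if ibound == 2:            # non-negativity bounds 0 <= x_j
        return [0.0] * n, [big] * n
    if ibound == 3:            # all lower equal, all upper equal
        return [bl[0]] * n, [bu[0]] * n
    raise ValueError("ibound must satisfy 0 <= ibound <= 4")
```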

- bl
- Type: array<System.Double>[] (an array of size [n])
*On entry*: the fixed lower bounds ${l}_{j}$. If ibound is set to $0$, you must set ${\mathbf{bl}}\left[\mathit{j}-1\right]$ to ${l}_{\mathit{j}}$, for $\mathit{j}=1,2,\dots ,n$. (If a lower bound is not specified for any ${x}_{j}$, the corresponding ${\mathbf{bl}}\left[j-1\right]$ should be set to a large negative number, e.g., $-{10}^{6}$.)
*On exit*: the lower bounds actually used by e04lb, e.g., if ${\mathbf{ibound}}=2$, ${\mathbf{bl}}\left[0\right]={\mathbf{bl}}\left[1\right]=\cdots ={\mathbf{bl}}\left[n-1\right]=0.0$.

- bu
- Type: array<System.Double>[] (an array of size [n])
*On entry*: the fixed upper bounds ${u}_{j}$. If ibound is set to $0$, you must set ${\mathbf{bu}}\left[\mathit{j}-1\right]$ to ${u}_{\mathit{j}}$, for $\mathit{j}=1,2,\dots ,n$. (If an upper bound is not specified for any variable, the corresponding ${\mathbf{bu}}\left[j-1\right]$ should be set to a large positive number, e.g., ${10}^{6}$.)
*On exit*: the upper bounds actually used by e04lb, e.g., if ${\mathbf{ibound}}=2$, ${\mathbf{bu}}\left[0\right]={\mathbf{bu}}\left[1\right]=\cdots ={\mathbf{bu}}\left[{\mathbf{n}}-1\right]={10}^{6}$.

- x
- Type: array<System.Double>[] (an array of size [n])
*On entry*: ${\mathbf{x}}\left[\mathit{j}-1\right]$ must be set to a guess at the $\mathit{j}$th component of the position of the minimum, for $\mathit{j}=1,2,\dots ,n$.
*On exit*: the final point ${x}^{\left(k\right)}$. Thus, if ${\mathbf{ifail}}={0}$ on exit, ${\mathbf{x}}\left[j-1\right]$ is the $j$th component of the estimated position of the minimum.

- hesl
- Type: array<System.Double>[] (an array of size [lh])
*On exit*: during the determination of a direction ${p}_{z}$ (see [Description]), $H+E$ is decomposed into the product $LD{L}^{\mathrm{T}}$, where $L$ is a unit lower triangular matrix and $D$ is a diagonal matrix. (The matrices $H$, $E$, $L$ and $D$ are all of dimension ${n}_{z}$, where ${n}_{z}$ is the number of variables free from their bounds. $H$ consists of those rows and columns of the full estimated second derivative matrix which relate to free variables. $E$ is chosen so that $H+E$ is positive definite.)

hesl and hesd are used to store the factors $L$ and $D$. The elements of the strict lower triangle of $L$ are stored row by row in the first ${n}_{z}\left({n}_{z}-1\right)/2$ positions of hesl. The diagonal elements of $D$ are stored in the first ${n}_{z}$ positions of hesd. In the last factorization before a normal exit, the matrix $E$ will be zero, so that hesl and hesd will contain, on exit, the factors of the final estimated second derivative matrix $H$. The elements of hesd are useful for deciding whether to accept the results produced by e04lb (see [Accuracy]).

- hesd
- Type: array<System.Double>[] (an array of size [n])
*On exit*: during the determination of a direction ${p}_{z}$ (see [Description]), $H+E$ is decomposed into the product $LD{L}^{\mathrm{T}}$, where $L$ is a unit lower triangular matrix and $D$ is a diagonal matrix. (The matrices $H$, $E$, $L$ and $D$ are all of dimension ${n}_{z}$, where ${n}_{z}$ is the number of variables free from their bounds. $H$ consists of those rows and columns of the full second derivative matrix which relate to free variables. $E$ is chosen so that $H+E$ is positive definite.)

hesl and hesd are used to store the factors $L$ and $D$. The elements of the strict lower triangle of $L$ are stored row by row in the first ${n}_{z}\left({n}_{z}-1\right)/2$ positions of hesl. The diagonal elements of $D$ are stored in the first ${n}_{z}$ positions of hesd. In the last factorization before a normal exit, the matrix $E$ will be zero, so that hesl and hesd will contain, on exit, the factors of the final second derivative matrix $H$. The elements of hesd are useful for deciding whether to accept the result produced by e04lb (see [Accuracy]).
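
The packed storage convention for $L$ and $D$ described above can be illustrated in Python (a sketch with hypothetical helper names, not NAG code): unpack the factors and rebuild $H=LD{L}^{\mathrm{T}}$.

```python
def unpack_ldl(hesl, hesd, nz):
    """Unpack the LDL^T factors from e04lb-style packed storage.

    The strict lower triangle of the unit lower triangular L is stored
    row by row in the first nz*(nz-1)//2 positions of hesl; the diagonal
    of D occupies the first nz positions of hesd."""
    L = [[0.0] * nz for _ in range(nz)]
    k = 0
    for i in range(nz):
        for j in range(i):
            L[i][j] = hesl[k]
            k += 1
        L[i][i] = 1.0          # unit diagonal
    return L, hesd[:nz]

def rebuild_h(hesl, hesd, nz):
    """Form H = L D L^T, the final estimated second derivative matrix."""
    L, D = unpack_ldl(hesl, hesd, nz)
    return [[sum(L[i][m] * D[m] * L[j][m] for m in range(nz))
             for j in range(nz)] for i in range(nz)]
```

For example, with $nz=2$, ${\mathbf{hesl}}=\left[2.0\right]$ and ${\mathbf{hesd}}=\left[4.0,1.0\right]$, the factors represent $H=\left[\begin{smallmatrix}4&8\\ 8&17\end{smallmatrix}\right]$.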

- istate
- Type: array<System.Int32>[] (an array of size [n])
*On exit*: information about which variables are currently on their bounds and which are free. If ${\mathbf{istate}}\left[j-1\right]$ is:
- equal to $-1$, ${x}_{j}$ is fixed on its upper bound;
- equal to $-2$, ${x}_{j}$ is fixed on its lower bound;
- equal to $-3$, ${x}_{j}$ is effectively a constant (i.e., ${l}_{j}={u}_{j}$);
- positive, ${\mathbf{istate}}\left[j-1\right]$ gives the position of ${x}_{j}$ in the sequence of free variables.
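
A decoder for these codes, as a hypothetical Python helper (the function name and wording are illustrative, not part of the library):

```python
def describe_istate(istate):
    """Translate e04lb istate codes into readable descriptions."""
    messages = []
    for j, s in enumerate(istate, start=1):
        if s == -1:
            messages.append(f"x_{j} is fixed on its upper bound")
        elif s == -2:
            messages.append(f"x_{j} is fixed on its lower bound")
        elif s == -3:
            messages.append(f"x_{j} is effectively a constant (l_j = u_j)")
        elif s > 0:
            messages.append(f"x_{j} is free variable number {s}")
        else:
            messages.append(f"x_{j}: unexpected code {s}")
    return messages
```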

- f
- Type: System.Double%
*On exit*: the function value at the final point given in x.

- g
- Type: array<System.Double>[] (an array of size [n])
*On exit*: the first derivatives of $F$ with respect to each variable at the final point given in x.

- ifail
- Type: System.Int32%
*On exit*: ${\mathbf{ifail}}={0}$ unless the method detects an error or a warning has been flagged (see [Error Indicators and Warnings]).

# Description

e04lb is applicable to problems of the form:

$$\mathrm{Minimize}\ F\left({x}_{1},{x}_{2},\dots ,{x}_{n}\right)\text{ subject to }{l}_{j}\le {x}_{j}\le {u}_{j}\text{,}\hspace{1em}j=1,2,\dots ,n\text{.}$$

Special provision is made for unconstrained minimization (i.e., problems which actually have no bounds on the ${x}_{j}$), problems which have only non-negativity bounds, and problems in which ${l}_{1}={l}_{2}=\cdots ={l}_{n}$ and ${u}_{1}={u}_{2}=\cdots ={u}_{n}$. It is possible to specify that a particular ${x}_{j}$ should be held constant. You must supply a starting point, a delegate funct to calculate the value of $F\left(x\right)$ and its first derivatives $\frac{\partial F}{\partial {x}_{j}}$ at any point $x$, and a delegate h to calculate the second derivatives $\frac{{\partial}^{2}F}{\partial {x}_{i}\partial {x}_{j}}$.

A typical iteration starts at the current point $x$ where ${n}_{z}$ (say) variables are free from both their bounds. The vector of first derivatives of $F\left(x\right)$ with respect to the free variables, ${g}_{z}$, and the matrix of second derivatives with respect to the free variables, $H$, are obtained. (These both have dimension ${n}_{z}$.)

The equations

$$\left(H+E\right){p}_{z}=-{g}_{z}$$

are solved to give a search direction ${p}_{z}$. (The matrix $E$ is chosen so that $H+E$ is positive definite.)

${p}_{z}$ is then expanded to an $n$-vector $p$ by the insertion of appropriate zero elements; $\alpha $ is found such that $F\left(x+\alpha p\right)$ is approximately a minimum (subject to the fixed bounds) with respect to $\alpha $, and $x$ is replaced by $x+\alpha p$. (If a saddle point is found, a special search is carried out so as to move away from the saddle point.)

If any variable actually reaches a bound, it is fixed and ${n}_{z}$ is reduced for the next iteration.
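
The iteration described above can be sketched in Python for the unconstrained case. Two simplifications are assumptions made for illustration only: $E$ is taken as a diagonal shift $\tau I$ (e04lb itself chooses $E$ through a modified $LD{L}^{\mathrm{T}}$ factorization), and a crude backtracking loop stands in for the safeguarded linear minimization in $\alpha $ controlled by eta.

```python
import numpy as np

def modified_newton_step(f, grad, hess, x):
    """One illustrative modified Newton iteration, all variables free."""
    H = hess(x)
    g = grad(x)
    # Choose E = tau*I so that H + E is positive definite.
    tau = 0.0
    eigmin = float(np.linalg.eigvalsh(H)[0])   # smallest eigenvalue
    if eigmin <= 0.0:
        tau = -eigmin + 1e-8
    # Solve (H + E) p = -g for the search direction p.
    p = np.linalg.solve(H + tau * np.eye(x.size), -g)
    # Crude backtracking stand-in for the linear minimization in alpha.
    alpha, fx = 1.0, f(x)
    while f(x + alpha * p) >= fx and alpha > 1e-10:
        alpha *= 0.5
    return x + alpha * p

# Example: F(x) = x1^2 + x2^2; one step from (3, -4) reaches the minimum.
f = lambda x: float(x @ x)
grad = lambda x: 2.0 * x
hess = lambda x: 2.0 * np.eye(x.size)
x_new = modified_newton_step(f, grad, hess, np.array([3.0, -4.0]))
```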

There are two sets of convergence criteria – a weaker and a stronger. Whenever the weaker criteria are satisfied, the Lagrange multipliers are estimated for all active constraints. If any Lagrange multiplier estimate is significantly negative, then one of the variables associated with a negative Lagrange multiplier estimate is released from its bound and the next search direction is computed in the extended subspace (i.e., ${n}_{z}$ is increased). Otherwise, minimization continues in the current subspace until the stronger criteria are satisfied. If at this point there are no negative or near-zero Lagrange multiplier estimates, the process is terminated.

If you specify that the problem is unconstrained, e04lb sets the ${l}_{j}$ to $-{10}^{6}$ and the ${u}_{j}$ to ${10}^{6}$. Thus, provided that the problem has been sensibly scaled, no bounds will be encountered during the minimization process and e04lb will act as an unconstrained minimization algorithm.

# References

Gill P E and Murray W (1973) Safeguarded steplength algorithms for optimization using descent methods *NPL Report NAC 37* National Physical Laboratory

Gill P E and Murray W (1974) Newton-type methods for unconstrained and linearly constrained optimization *Math. Programming* **7** 311–350

Gill P E and Murray W (1976) Minimization subject to bounds on the variables *NPL Report NAC 72* National Physical Laboratory

# Error Indicators and Warnings

**Note:** e04lb may return useful information for one or more of the following detected errors or warnings.

Errors or warnings detected by the method:

Some error messages may refer to parameters that are dropped from this interface (IW, LIW, W, LW). In these cases, an error in another parameter has usually caused an incorrect value to be inferred.

- ${\mathbf{ifail}}<0$

- ${\mathbf{ifail}}=1$
- On entry, ${\mathbf{n}}<1$, or ${\mathbf{maxcal}}<1$, or ${\mathbf{eta}}<0.0$, or ${\mathbf{eta}}\ge 1.0$, or ${\mathbf{xtol}}<0.0$, or ${\mathbf{stepmx}}<{\mathbf{xtol}}$, or ${\mathbf{ibound}}<0$, or ${\mathbf{ibound}}>4$, or ${\mathbf{bl}}\left[j-1\right]>{\mathbf{bu}}\left[j-1\right]$ for some $j$ if ${\mathbf{ibound}}=0$, or ${\mathbf{bl}}\left[0\right]>{\mathbf{bu}}\left[0\right]$ if ${\mathbf{ibound}}=3$, or ${\mathbf{lh}}<\mathrm{max}\phantom{\rule{0.125em}{0ex}}\left(1,{\mathbf{n}}\times \left({\mathbf{n}}-1\right)/2\right)$.

- ${\mathbf{ifail}}=2$
- There have been maxcal function evaluations. If steady reductions in $F\left(x\right)$ were monitored up to the point where this exit occurred, then the exit probably occurred simply because maxcal was set too small, so the calculations should be restarted from the final point held in x. This exit may also indicate that $F\left(x\right)$ has no minimum.

- ${\mathbf{ifail}}=3$
- The conditions for a minimum have not all been met, but a lower point could not be found. Provided that, on exit, the first derivatives of $F\left(x\right)$ with respect to the free variables are sufficiently small, and that the estimated condition number of the second derivative matrix is not too large, this error exit may simply mean that, although it has not been possible to satisfy the specified requirements, the algorithm has in fact found the minimum as far as the accuracy of the machine permits. Such a situation can arise, for instance, if xtol has been set so small that rounding errors in the evaluation of $F\left(x\right)$ or its derivatives make it impossible to satisfy the convergence conditions. If the estimated condition number of the second derivative matrix at the final point is large, it could be that the final point is a minimum, but that the smallest eigenvalue of the Hessian matrix is so close to zero that it is not possible to recognize the point as a minimum.

- ${\mathbf{ifail}}=4$

- ${\mathbf{ifail}}=5$
- All the Lagrange multiplier estimates which are not indisputably positive lie relatively close to zero, but it is impossible either to continue minimizing on the current subspace or to find a feasible lower point by releasing and perturbing any of the fixed variables. You should investigate as for ${\mathbf{ifail}}={3}$.

- ${\mathbf{ifail}}=-9000$
- An error occurred; see message report.
- ${\mathbf{ifail}}=-8000$
- Negative dimension for array $\langle \mathit{value}\rangle $
- ${\mathbf{ifail}}=-6000$
- Invalid Parameters $\langle \mathit{value}\rangle $

The values ${\mathbf{ifail}}={2}$, ${3}$ or ${5}$ may also be caused by mistakes in user-supplied delegates funct or h, by the formulation of the problem or by an awkward function. If there are no such mistakes, it is worth restarting the calculations from a different starting point (not the point at which the failure occurred) in order to avoid the region which caused the failure.

# Accuracy

A successful exit (${\mathbf{ifail}}={0}$) is made from e04lb when ${H}^{\left(k\right)}$ is positive definite and when (B1, B2 and B3) or B4 hold, where

(Quantities with superscript $k$ are the values at the $k$th iteration of the quantities mentioned in [Description]. $\epsilon $ is the machine precision and $\Vert .\Vert $ denotes the Euclidean norm.)

$$\begin{array}{lll}\mathrm{B1}& \equiv & {\alpha}^{\left(k\right)}\times \Vert {p}^{\left(k\right)}\Vert <\left({\mathbf{xtol}}+\sqrt{\epsilon}\right)\times \left(1.0+\Vert {x}^{\left(k\right)}\Vert \right)\\ \mathrm{B2}& \equiv & \left|{F}^{\left(k\right)}-{F}^{\left(k-1\right)}\right|<\left({{\mathbf{xtol}}}^{2}+\epsilon \right)\times \left(1.0+\left|{F}^{\left(k\right)}\right|\right)\\ \mathrm{B3}& \equiv & \Vert {g}_{z}^{\left(k\right)}\Vert <\left({\epsilon}^{1/3}+{\mathbf{xtol}}\right)\times \left(1.0+\left|{F}^{\left(k\right)}\right|\right)\\ \mathrm{B4}& \equiv & \Vert {g}_{z}^{\left(k\right)}\Vert <0.01\times \sqrt{\epsilon}\text{.}\end{array}$$
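
These tests transcribe directly into code. A Python sketch (illustrative, not library code) of the combined success condition, taking the norms as precomputed inputs:

```python
import math

def convergence_tests(alpha_k, p_norm, x_norm, f_k, f_km1,
                      gz_norm, xtol, eps=2.0 ** -52):
    """Evaluate the criteria B1-B4 quoted above.

    Returns True when (B1 and B2 and B3) or B4 holds; a successful exit
    additionally requires H^(k) to be positive definite."""
    b1 = alpha_k * p_norm < (xtol + math.sqrt(eps)) * (1.0 + x_norm)
    b2 = abs(f_k - f_km1) < (xtol ** 2 + eps) * (1.0 + abs(f_k))
    b3 = gz_norm < (eps ** (1.0 / 3.0) + xtol) * (1.0 + abs(f_k))
    b4 = gz_norm < 0.01 * math.sqrt(eps)
    return (b1 and b2 and b3) or b4
```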

If ${\mathbf{ifail}}={0}$, then the vector in x on exit, ${x}_{\mathrm{sol}}$, is almost certainly an estimate of the position of the minimum, ${x}_{\mathrm{true}}$, to the accuracy specified by xtol.

If ${\mathbf{ifail}}={3}$ or ${5}$, ${x}_{\mathrm{sol}}$ may still be a good estimate of ${x}_{\mathrm{true}}$, but the following checks should be made. Let the largest of the first ${n}_{z}$ elements of hesd be ${\mathbf{hesd}}\left[b-1\right]$, let the smallest be ${\mathbf{hesd}}\left[s-1\right]$, and define $k={\mathbf{hesd}}\left[b-1\right]/{\mathbf{hesd}}\left[s-1\right]$. The scalar $k$ is usually a good estimate of the condition number of the projected Hessian matrix at ${x}_{\mathrm{sol}}$. If

(i) the sequence $\left\{F\left({x}^{\left(k\right)}\right)\right\}$ converges to $F\left({x}_{\mathrm{sol}}\right)$ at a superlinear or fast linear rate,

(ii) ${\Vert {g}_{z}\left({x}_{\mathrm{sol}}\right)\Vert}^{2}<10.0\times \epsilon $, and

(iii) $k<1.0/\Vert {g}_{z}\left({x}_{\mathrm{sol}}\right)\Vert $,

then it is almost certain that ${x}_{\mathrm{sol}}$ is a close approximation to the position of a minimum. When (ii) is true, then usually $F\left({x}_{\mathrm{sol}}\right)$ is a close approximation to $F\left({x}_{\mathrm{true}}\right)$. The quantities needed for these checks are all available via monit; in particular the value of cond in the last call of monit before exit gives $k$.
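
The quantities for checks (ii) and (iii) are simple to compute from hesd and $\Vert {g}_{z}\Vert $. A Python sketch with illustrative helper names (not part of the library):

```python
def condition_estimate(hesd, nz):
    """k = hesd[b-1]/hesd[s-1]: the largest over the smallest of the
    first nz elements of hesd, estimating the condition number of the
    projected Hessian at x_sol."""
    d = hesd[:nz]
    return max(d) / min(d)

def checks_ii_iii(hesd, nz, gz_norm, eps=2.0 ** -52):
    """Checks (ii) and (iii) from the list above; check (i) needs the
    history of F values and is not reproduced here."""
    k = condition_estimate(hesd, nz)
    return gz_norm ** 2 < 10.0 * eps and k < 1.0 / gz_norm
```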

Further suggestions about confirmation of a computed solution are given in the **E04** class.

# Parallelism and Performance

None.

# Further Comments

# Timing

The number of iterations required depends on the number of variables, the behaviour of $F\left(x\right)$, the accuracy demanded and the distance of the starting point from the solution. The number of multiplications performed in an iteration of e04lb is $\frac{{n}_{z}^{3}}{6}+\mathit{O}\left({n}_{z}^{2}\right)$. In addition, each iteration makes one call of h and at least one call of funct. So, unless $F\left(x\right)$ and its derivatives can be evaluated very quickly, the run time will be dominated by the time spent in funct and h.

# Scaling

Ideally, the problem should be scaled so that, at the solution, $F\left(x\right)$ and the corresponding values of the ${x}_{j}$ are each in the range $\left(-1,+1\right)$, and so that at points one unit away from the solution, $F\left(x\right)$ differs from its value at the solution by approximately one unit. This will usually imply that the Hessian matrix at the solution is well-conditioned. It is unlikely that you will be able to follow these recommendations very closely, but it is worth trying (by guesswork), as sensible scaling will reduce the difficulty of the minimization problem, so that e04lb will take less computer time.

# Unconstrained Minimization

If a problem is genuinely unconstrained and has been scaled sensibly, the following points apply:

(a) ${n}_{z}$ will always be $n$,

(b) hesl and hesd will be factors of the full second derivative matrix with elements stored in the natural order,

(c) the elements of $g$ should all be close to zero at the final point,

(d) the values of the ${\mathbf{istate}}\left[j-1\right]$ given by monit and on exit from e04lb are unlikely to be of interest (unless they are negative, which would indicate that the modulus of one of the ${x}_{j}$ has reached ${10}^{6}$ for some reason),

(e) monit's parameter gpjnrm simply gives the norm of the first derivative vector.