# NAG Library Function Document

## 1 Purpose

nag_opt_lsq_no_deriv (e04fcc) is a comprehensive algorithm for finding an unconstrained minimum of a sum of squares of $m$ nonlinear functions in $n$ variables $\left(m\ge n\right)$. No derivatives are required.
nag_opt_lsq_no_deriv (e04fcc) is intended for objective functions which have continuous first and second derivatives (although it will usually work even if the derivatives have occasional discontinuities).

## 2 Specification

 #include <nag.h>
 #include <nage04.h>
void  nag_opt_lsq_no_deriv (Integer m, Integer n,
 void (*lsqfun)(Integer m, Integer n, const double x[], double fvec[], Nag_Comm *comm),
double x[], double *fsumsq, double fvec[], double fjac[], Integer tdfjac, Nag_E04_Opt *options, Nag_Comm *comm, NagError *fail)

## 3 Description

nag_opt_lsq_no_deriv (e04fcc) is applicable to problems of the form:
 $\text{Minimize}\quad F\left(x\right)=\sum_{i=1}^{m}{f}_{i}{\left(x\right)}^{2}$
where $x={\left({x}_{1},{x}_{2},\dots ,{x}_{n}\right)}^{\mathrm{T}}$ and $m\ge n$. (The functions ${f}_{i}\left(x\right)$ are often referred to as ‘residuals’.) You must supply a C function, lsqfun, to calculate the values of the ${f}_{i}\left(x\right)$ at any point $x$.
From a starting point ${x}^{\left(1\right)}$ nag_opt_lsq_no_deriv (e04fcc) generates a sequence of points ${x}^{\left(2\right)},{x}^{\left(3\right)},\dots ,$ which is intended to converge to a local minimum of $F\left(x\right)$. The sequence of points is given by
 $x^{\left(k+1\right)} = x^{\left(k\right)} + \alpha^{\left(k\right)} p^{\left(k\right)}$
where the vector ${p}^{\left(k\right)}$ is a direction of search, and ${\alpha }^{\left(k\right)}$ is chosen such that $F\left({x}^{\left(k\right)}+{\alpha }^{\left(k\right)}{p}^{\left(k\right)}\right)$ is approximately a minimum with respect to ${\alpha }^{\left(k\right)}$.
The vector ${p}^{\left(k\right)}$ used depends upon the reduction in the sum of squares obtained during the last iteration. If the sum of squares was sufficiently reduced, then ${p}^{\left(k\right)}$ is an approximation to the Gauss–Newton direction; otherwise additional function evaluations are made so as to enable ${p}^{\left(k\right)}$ to be a more accurate approximation to the Newton direction.
The method is designed to ensure that steady progress is made whatever the starting point, and to have the rapid ultimate convergence of Newton's method.
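As an illustration of the problem form, the sketch below evaluates such a sum of squares for a hypothetical linear model with residuals ${f}_{i}\left(x\right) = x_1 + x_2 t_i - y_i$. It is plain C with no NAG dependencies, and the data and function names are invented for the example.

```c
#include <math.h>

/* Sketch only: a toy sum-of-squares objective of the form minimized by
   nag_opt_lsq_no_deriv (e04fcc), with hypothetical residuals
   f_i(x) = x[0] + x[1]*t_i - y_i for m data points (t_i, y_i). */
double toy_sumsq(int m, const double x[2],
                 const double t[], const double y[])
{
    double fsumsq = 0.0;
    for (int i = 0; i < m; i++) {
        double fi = x[0] + x[1] * t[i] - y[i];  /* residual f_i(x) */
        fsumsq += fi * fi;                      /* F(x) = sum of f_i^2 */
    }
    return fsumsq;
}
```

At a perfect fit every residual vanishes and $F\left(x\right)=0$; the minimizer drives this quantity down without ever needing derivative code from the user.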

## 4 References

Gill P E and Murray W (1978) Algorithms for the solution of the nonlinear least squares problem SIAM J. Numer. Anal. 15 977–992

## 5 Arguments

1:    $\mathbf{m}$IntegerInput
On entry: $m$, the number of residuals, ${f}_{i}\left(x\right)$.
2:    $\mathbf{n}$IntegerInput
On entry: $n$, the number of variables, ${x}_{j}$.
Constraint: $1\le {\mathbf{n}}\le {\mathbf{m}}$.
3:    $\mathbf{lsqfun}$function, supplied by the userExternal Function
lsqfun must calculate the vector of values ${f}_{i}\left(x\right)$ at any point $x$. (However, if you do not wish to calculate the residuals at a particular $x$, there is the option of setting an argument to cause nag_opt_lsq_no_deriv (e04fcc) to terminate immediately.)
The specification of lsqfun is:
 void lsqfun (Integer m, Integer n, const double x[], double fvec[], Nag_Comm *comm)
1:    $\mathbf{m}$IntegerInput
2:    $\mathbf{n}$IntegerInput
On entry: the numbers $m$ and $n$ of residuals and variables, respectively.
3:    $\mathbf{x}\left[{\mathbf{n}}\right]$const doubleInput
On entry: the point $x$ at which the values of the ${f}_{i}$ are required.
4:    $\mathbf{fvec}\left[{\mathbf{m}}\right]$doubleOutput
On exit: unless $\mathbf{comm}\mathbf{\to }\mathbf{flag}$ is reset to a negative number, ${\mathbf{fvec}}\left[\mathit{i}-1\right]$ must contain the value of ${f}_{\mathit{i}}$ at the point $x$, for $\mathit{i}=1,2,\dots ,m$.
5:    $\mathbf{comm}$Nag_Comm *
Pointer to structure of type Nag_Comm; the following members are relevant to lsqfun.
flagIntegerInput/Output
On entry: $\mathbf{comm}\mathbf{\to }\mathbf{flag}$ contains a non-negative number.
On exit: if lsqfun resets $\mathbf{comm}\mathbf{\to }\mathbf{flag}$ to some negative number then nag_opt_lsq_no_deriv (e04fcc) will terminate immediately with the error indicator NE_USER_STOP. If fail is supplied to nag_opt_lsq_no_deriv (e04fcc), ${\mathbf{fail}}\mathbf{.}\mathbf{errnum}$ will be set to the user's setting of $\mathbf{comm}\mathbf{\to }\mathbf{flag}$.
firstNag_BooleanInput
On entry: the value Nag_TRUE on the first call to lsqfun and Nag_FALSE for all subsequent calls.
nfIntegerInput
On entry: the number of calls made to lsqfun including the current one.
userdouble *
iuserInteger *
pPointer
The type Pointer will be void * with a C compiler that defines void *, and char * otherwise. Before calling nag_opt_lsq_no_deriv (e04fcc) these pointers may be allocated memory and initialized with various quantities for use by lsqfun when called from nag_opt_lsq_no_deriv (e04fcc).
Note: lsqfun should not return floating-point NaN (Not a Number) or infinity values, since these are not handled by nag_opt_lsq_no_deriv (e04fcc). If your code inadvertently does return any NaNs or infinities, nag_opt_lsq_no_deriv (e04fcc) is likely to produce unexpected results.
Note: lsqfun should be tested separately before being used in conjunction with nag_opt_lsq_no_deriv (e04fcc). The array x must not be changed within lsqfun.
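A minimal sketch of a user-supplied lsqfun for the hypothetical linear model ${f}_{i}\left(x\right) = x_1 + x_2 t_i - y_i$. So that the fragment is self-contained it substitutes stand-in types for NAG's Integer and Nag_Comm (only the flag and p members are mimicked); a real lsqfun must use the NAG types and the exact signature shown above.

```c
#include <stddef.h>

typedef int Integer_;                              /* stand-in for NAG's Integer */
typedef struct { Integer_ flag; void *p; } Comm_;  /* stand-in for Nag_Comm      */
typedef struct { const double *t, *y; } Data_;     /* user data hung off comm->p */

/* Computes fvec[i] = x[0] + x[1]*t_i - y_i.  In real use, resetting
   comm->flag to a negative value inside this function would make
   nag_opt_lsq_no_deriv (e04fcc) terminate with NE_USER_STOP. */
void lsqfun_sketch(Integer_ m, Integer_ n, const double x[],
                   double fvec[], Comm_ *comm)
{
    const Data_ *d = (const Data_ *)comm->p;
    (void)n;                                 /* n == 2 for this model */
    for (Integer_ i = 0; i < m; i++)
        fvec[i] = x[0] + x[1] * d->t[i] - d->y[i];
}
```

Note how the data arrays travel through the comm structure's general-purpose pointer rather than through globals; this keeps the residual function re-entrant.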
4:    $\mathbf{x}\left[{\mathbf{n}}\right]$doubleInput/Output
On entry: ${\mathbf{x}}\left[\mathit{j}-1\right]$ must be set to a guess at the $\mathit{j}$th component of the position of the minimum, for $\mathit{j}=1,2,\dots ,n$.
On exit: the final point ${x}^{*}$. On successful exit, ${\mathbf{x}}\left[j-1\right]$ is the $j$th component of the estimated position of the minimum.
5:    $\mathbf{fsumsq}$double *Output
On exit: the value of $F\left(x\right)$, the sum of squares of the residuals ${f}_{i}\left(x\right)$, at the final point given in x.
6:    $\mathbf{fvec}\left[{\mathbf{m}}\right]$doubleOutput
On exit: ${\mathbf{fvec}}\left[\mathit{i}-1\right]$ is the value of the residual ${f}_{\mathit{i}}\left(x\right)$ at the final point given in x, for $\mathit{i}=1,2,\dots ,m$.
7:    $\mathbf{fjac}\left[{\mathbf{m}}×{\mathbf{tdfjac}}\right]$doubleOutput
On exit: ${\mathbf{fjac}}\left[\left(\mathit{i}-1\right)×{\mathbf{tdfjac}}+\mathit{j}-1\right]$ contains the estimate of the first derivative $\frac{\partial {f}_{\mathit{i}}}{\partial {x}_{\mathit{j}}}$ at the final point given in x, for $\mathit{i}=1,2,\dots ,m$ and $\mathit{j}=1,2,\dots ,n$.
8:    $\mathbf{tdfjac}$IntegerInput
On entry: the stride separating matrix column elements in the array fjac.
Constraint: ${\mathbf{tdfjac}}\ge {\mathbf{n}}$.
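The row-major layout of fjac can be demonstrated with a small helper (hypothetical, not part of the NAG API): element $\left(i,j\right)$, with $i$ and $j$ 1-based as in the formula above, lives at flat offset $\left(i-1\right)\times {\mathbf{tdfjac}}+\left(j-1\right)$.

```c
/* Returns a pointer to the estimate of df_i/dx_j (1-based i, j) inside
   the flat, row-major fjac array, whose rows are tdfjac elements apart. */
double *fjac_elem(double fjac[], int tdfjac, int i, int j)
{
    return &fjac[(i - 1) * tdfjac + (j - 1)];
}
```

The stride tdfjac may exceed n, which is why the constraint is tdfjac >= n rather than equality: the extra columns are simply padding.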
9:    $\mathbf{options}$Nag_E04_Opt *Input/Output
On entry/exit: a pointer to a structure of type Nag_E04_Opt whose members are optional parameters for nag_opt_lsq_no_deriv (e04fcc). These structure members offer the means of adjusting some of the argument values of the algorithm and on output will supply further details of the results. A description of the members of options is given in Section 11.2.
If any of these optional parameters are required then the structure options should be declared and initialized by a call to nag_opt_init (e04xxc) and supplied as an argument to nag_opt_lsq_no_deriv (e04fcc). However, if the optional parameters are not required the NAG defined null pointer, E04_DEFAULT, can be used in the function call.
10:  $\mathbf{comm}$Nag_Comm *Input/Output
Note: comm is a NAG defined type (see Section 3.3.1.1 in How to Use the NAG Library and its Documentation).
On entry/exit: structure containing pointers for communication to user-supplied functions; see the above description of lsqfun for details. If you do not need to make use of this communication feature the null pointer NAGCOMM_NULL may be used in the call to nag_opt_lsq_no_deriv (e04fcc); comm will then be declared internally for use in calls to user-supplied functions.
11:  $\mathbf{fail}$NagError *Input/Output
The NAG error argument (see Section 3.7 in How to Use the NAG Library and its Documentation).

## 6 Error Indicators and Warnings

If any of NE_USER_STOP, NE_2_INT_ARG_LT, NE_OPT_NOT_INIT, NE_BAD_PARAM, NE_2_REAL_ARG_LT, NE_INVALID_INT_RANGE_1, NE_INVALID_REAL_RANGE_EF, NE_INVALID_REAL_RANGE_FF or NE_ALLOC_FAIL occurs, no values will have been assigned to fsumsq, or to the elements of fvec, fjac, ${\mathbf{options}}\mathbf{.}{\mathbf{s}}$ or ${\mathbf{options}}\mathbf{.}{\mathbf{v}}$.
The exits NW_TOO_MANY_ITER, NW_COND_MIN, and NE_SVD_FAIL may also be caused by mistakes in lsqfun, by the formulation of the problem or by an awkward function. If there are no such mistakes it is worth restarting the calculations from a different starting point (not the point at which the failure occurred) in order to avoid the region which caused the failure.
NE_2_INT_ARG_LT
On entry, ${\mathbf{m}}=〈\mathit{\text{value}}〉$ while ${\mathbf{n}}=〈\mathit{\text{value}}〉$. These arguments must satisfy ${\mathbf{m}}\ge {\mathbf{n}}$.
On entry, ${\mathbf{options}}\mathbf{.}{\mathbf{tdv}}=〈\mathit{\text{value}}〉$ while ${\mathbf{n}}=〈\mathit{\text{value}}〉$. These arguments must satisfy ${\mathbf{options}}\mathbf{.}{\mathbf{tdv}}\ge {\mathbf{n}}$.
On entry, ${\mathbf{tdfjac}}=〈\mathit{\text{value}}〉$ while ${\mathbf{n}}=〈\mathit{\text{value}}〉$. These arguments must satisfy ${\mathbf{tdfjac}}\ge {\mathbf{n}}$.
NE_2_REAL_ARG_LT
On entry, ${\mathbf{options}}\mathbf{.}{\mathbf{step_max}}=〈\mathit{\text{value}}〉$ while ${\mathbf{options}}\mathbf{.}{\mathbf{optim_tol}}=〈\mathit{\text{value}}〉$. These arguments must satisfy ${\mathbf{options}}\mathbf{.}{\mathbf{step_max}}\ge {\mathbf{options}}\mathbf{.}{\mathbf{optim_tol}}$.
NE_ALLOC_FAIL
Dynamic memory allocation failed.
NE_BAD_PARAM
On entry, argument ${\mathbf{options}}\mathbf{.}{\mathbf{print_level}}$ had an illegal value.
NE_INT_ARG_LT
On entry, ${\mathbf{n}}=〈\mathit{\text{value}}〉$.
Constraint: ${\mathbf{n}}\ge 1$.
NE_INVALID_INT_RANGE_1
Value $〈\mathit{\text{value}}〉$ given to ${\mathbf{options}}\mathbf{.}{\mathbf{max_iter}}$ not valid. Correct range is ${\mathbf{options}}\mathbf{.}{\mathbf{max_iter}}\ge 0$.
NE_INVALID_REAL_RANGE_EF
Value $〈\mathit{\text{value}}〉$ given to ${\mathbf{options}}\mathbf{.}{\mathbf{optim_tol}}$ not valid. Correct range is $〈\mathit{\text{value}}〉$ $\le {\mathbf{options}}\mathbf{.}{\mathbf{optim_tol}}<1.0$.
NE_INVALID_REAL_RANGE_FF
Value $〈\mathit{\text{value}}〉$ given to ${\mathbf{options}}\mathbf{.}{\mathbf{linesearch_tol}}$ not valid. Correct range is $0.0\le {\mathbf{options}}\mathbf{.}{\mathbf{linesearch_tol}}<1.0$.
NE_NOT_APPEND_FILE
Cannot open file $〈\mathit{string}〉$ for appending.
NE_NOT_CLOSE_FILE
Cannot close file $〈\mathit{string}〉$.
NE_OPT_NOT_INIT
Options structure not initialized.
NE_SVD_FAIL
The computation of the singular value decomposition of the Jacobian matrix has failed to converge in a reasonable number of sub-iterations.
It may be worth applying nag_opt_lsq_no_deriv (e04fcc) again starting with an initial approximation which is not too close to the point at which the failure occurred.
NE_USER_STOP
User requested termination, user flag value $\text{}=〈\mathit{\text{value}}〉$.
This exit occurs if you set $\mathbf{comm}\mathbf{\to }\mathbf{flag}$ to a negative value in lsqfun. If fail is supplied the value of ${\mathbf{fail}}\mathbf{.}\mathbf{errnum}$ will be the same as your setting of $\mathbf{comm}\mathbf{\to }\mathbf{flag}$.
NE_WRITE_ERROR
Error occurred when writing to file $〈\mathit{string}〉$.
NW_COND_MIN
The conditions for a minimum have not all been satisfied, but a lower point could not be found.
This could be because ${\mathbf{options}}\mathbf{.}{\mathbf{optim_tol}}$ has been set so small that rounding errors in the evaluation of the residuals make attainment of the convergence conditions impossible.
NW_TOO_MANY_ITER
The maximum number of iterations, $〈\mathit{\text{value}}〉$, has been performed.
If steady reductions in the sum of squares, $F\left(x\right)$, were monitored up to the point where this exit occurred, then the exit probably occurred simply because ${\mathbf{options}}\mathbf{.}{\mathbf{max_iter}}$ was set too small, so the calculations should be restarted from the final point held in x. This exit may also indicate that $F\left(x\right)$ has no minimum.

## 7 Accuracy

If the problem is reasonably well scaled and a successful exit is made, then, for a computer with a mantissa of $t$ decimals, one would expect to get about $t/2-1$ decimals accuracy in the components of $x$ and between $t-1$ (if $F\left(x\right)$ is of order 1 at the minimum) and $2t-2$ (if $F\left(x\right)$ is close to zero at the minimum) decimals accuracy in $F\left(x\right)$.
A successful exit (NE_NOERROR) is made from nag_opt_lsq_no_deriv (e04fcc) when (B1, B2 and B3) or B4 or B5 hold, where
• $\mathrm{B}1\equiv {\alpha }^{\left(k\right)}×‖{p}^{\left(k\right)}‖<\left({\mathbf{options}}\mathbf{.}{\mathbf{optim_tol}}+\epsilon \right)×\left(1.0+‖{x}^{\left(k\right)}‖\right)$
• $\mathrm{B}2\equiv \left|{F}^{\left(k\right)}-{F}^{\left(k-1\right)}\right|<{\left({\mathbf{options}}\mathbf{.}{\mathbf{optim_tol}}+\epsilon \right)}^{2}×\left(1.0+{F}^{\left(k\right)}\right)$
• $\mathrm{B}3\equiv ‖{g}^{\left(k\right)}‖<\left({\epsilon }^{1/3}+{\mathbf{options}}\mathbf{.}{\mathbf{optim_tol}}\right)×\left(1.0+{F}^{\left(k\right)}\right)$
• $\mathrm{B}4\equiv {F}^{\left(k\right)}<{\epsilon }^{2}$
• $\mathrm{B}5\equiv ‖{g}^{\left(k\right)}‖<{\left(\epsilon ×\sqrt{{F}^{\left(k\right)}}\right)}^{1/2}$
and where $‖\text{.}‖$, $\epsilon$ and the optional parameter ${\mathbf{options}}\mathbf{.}{\mathbf{optim_tol}}$ are as defined in Section 11.2, while ${F}^{\left(k\right)}$ and ${g}^{\left(k\right)}$ are the values of $F\left(x\right)$ and its vector of estimated first derivatives at ${x}^{\left(k\right)}$.
If ${\mathbf{fail}}\mathbf{.}\mathbf{code}=\mathrm{NE_NOERROR}$ then the vector in x on exit, ${x}_{\mathit{sol}}$, is almost certainly an estimate of ${x}_{\mathrm{true}}$, the position of the minimum to the accuracy specified by ${\mathbf{options}}\mathbf{.}{\mathbf{optim_tol}}$.
If ${\mathbf{fail}}\mathbf{.}\mathbf{code}={\mathbf{NW_COND_MIN}}$, then ${x}_{\mathit{sol}}$ may still be a good estimate of ${x}_{\mathrm{true}}$, but to verify this you should make the following checks. If
 (a) the sequence $\left\{F\left({x}^{\left(k\right)}\right)\right\}$ converges to $F\left({x}_{\mathit{sol}}\right)$ at a superlinear or a fast linear rate, and
 (b) $g{\left({x}_{\mathit{sol}}\right)}^{\mathrm{T}}g\left({x}_{\mathit{sol}}\right)<10\epsilon$,
where $\mathrm{T}$ denotes transpose, then it is almost certain that ${x}_{\mathit{sol}}$ is a close approximation to the minimum. When (b) is true, then usually $F\left({x}_{\mathit{sol}}\right)$ is a close approximation to $F\left({x}_{\mathrm{true}}\right)$.
Further suggestions about confirmation of a computed solution are given in the e04 Chapter Introduction.

## 8 Parallelism and Performance

nag_opt_lsq_no_deriv (e04fcc) is not threaded in any implementation.

## 9 Further Comments

The number of iterations required depends on the number of variables, the number of residuals, the behaviour of $F\left(x\right)$, the accuracy demanded and the distance of the starting point from the solution. The number of multiplications performed per iteration of nag_opt_lsq_no_deriv (e04fcc) varies, but for $m\gg n$ is approximately $n\times {m}^{2}+O\left({n}^{3}\right)$. In addition, each iteration makes at least $n+1$ calls of lsqfun. So, unless the residuals can be evaluated very quickly, the run time will be dominated by the time spent in lsqfun.
Ideally, the problem should be scaled so that, at the solution, $F\left(x\right)$ and the corresponding values of the ${x}_{j}$ are each in the range $\left(-1,+1\right)$, and so that at points one unit away from the solution, $F\left(x\right)$ differs from its value at the solution by approximately one unit. This will usually imply that the Hessian matrix of $F\left(x\right)$ at the solution is well-conditioned. It is unlikely that you will be able to follow these recommendations very closely, but it is worth trying (by guesswork), as sensible scaling will reduce the difficulty of the minimization problem, so that nag_opt_lsq_no_deriv (e04fcc) will take less computer time.
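One way to follow this advice is to optimize over scaled variables ${z}_{j}=\left({x}_{j}-{c}_{j}\right)/{r}_{j}$, where ${c}_{j}$ is a guess at the $j$th solution component and ${r}_{j}$ its expected scale. The helpers below are a hypothetical sketch of such a change of variables, not part of the NAG interface; lsqfun would map $z$ back to $x$ before evaluating the residuals.

```c
/* Sketch: map between user variables x and scaled variables z,
   z[j] = (x[j] - c[j]) / r[j], with c[j] a guess at the solution
   component and r[j] its expected scale (both chosen by the user). */
void to_scaled(int n, const double x[], const double c[],
               const double r[], double z[])
{
    for (int j = 0; j < n; j++)
        z[j] = (x[j] - c[j]) / r[j];
}

void from_scaled(int n, const double z[], const double c[],
                 const double r[], double x[])
{
    for (int j = 0; j < n; j++)
        x[j] = c[j] + r[j] * z[j];
}
```

With well-chosen $c$ and $r$ the scaled solution components are of order one, which tends to improve the conditioning of the Hessian of $F$ at the minimum.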
When the sum of squares represents the goodness-of-fit of a nonlinear model to observed data, elements of the variance-covariance matrix of the estimated regression coefficients can be computed by a subsequent call to nag_opt_lsq_covariance (e04ycc), using information returned in the arrays ${\mathbf{options}}\mathbf{.}{\mathbf{s}}$ and ${\mathbf{options}}\mathbf{.}{\mathbf{v}}$. See nag_opt_lsq_covariance (e04ycc) for further details.

## 10 Example

This example shows option values being assigned directly within the program text and by reading values from a data file. The options structure is declared and initialized by nag_opt_init (e04xxc). Values are then assigned directly to ${\mathbf{options}}\mathbf{.}{\mathbf{outfile}}$ and ${\mathbf{options}}\mathbf{.}{\mathbf{optim_tol}}$ and two further options are read from the data file by use of nag_opt_read (e04xyc). The memory freeing function nag_opt_free (e04xzc) is used to free the memory assigned to the pointers in the options structure. You must not use the standard C function free() for this purpose.

### 10.1 Program Text

Program Text (e04fcce.c)

### 10.2 Program Data

Program Data (e04fcce.d)

Program Options (e04fcce.opt)

### 10.3 Program Results

Program Results (e04fcce.r)

## 11 Optional Parameters

A number of optional input and output arguments to nag_opt_lsq_no_deriv (e04fcc) are available through the structure argument options, type Nag_E04_Opt. An argument may be selected by assigning an appropriate value to the relevant structure member; those arguments not selected will be assigned default values. If no use is to be made of any of the optional parameters you should use the NAG defined null pointer, E04_DEFAULT, in place of options when calling nag_opt_lsq_no_deriv (e04fcc); the default settings will then be used for all arguments.
Before assigning values to options directly the structure must be initialized by a call to the function nag_opt_init (e04xxc). Values may then be assigned to the structure members in the normal C manner.
After return from nag_opt_lsq_no_deriv (e04fcc), the options structure may only be re-used for future calls of nag_opt_lsq_no_deriv (e04fcc) if the dimensions of the new problem are the same. Otherwise, the structure must be cleared by a call of nag_opt_free (e04xzc) and re-initialized by a call of nag_opt_init (e04xxc) before future calls. Failure to do this will result in unpredictable behaviour.
Option settings may also be read from a text file using the function nag_opt_read (e04xyc) in which case initialization of the options structure will be performed automatically if not already done. Any subsequent direct assignment to the options structure must not be preceded by initialization.
If assignment of functions and memory to pointers in the options structure is required, this must be done directly in the calling program; they cannot be assigned using nag_opt_read (e04xyc).

### 11.1 Optional Parameter Checklist and Default Values

For easy reference, the following list shows the members of options which are valid for nag_opt_lsq_no_deriv (e04fcc) together with their default values where relevant. The number $\epsilon$ is a generic notation for machine precision (see nag_machine_precision (X02AJC)).
| Type | Member | Default |
| --- | --- | --- |
| Nag_Boolean | list | Nag_TRUE |
| Nag_PrintType | print_level | Nag_Soln_Iter |
| char [512] | outfile | stdout |
| void (*)() | print_fun | NULL |
| Integer | max_iter | $\mathrm{max}\left(50,5{\mathbf{n}}\right)$ |
| double | optim_tol | $\sqrt{\epsilon }$ |
| double | linesearch_tol | 0.5 (0.0 if ${\mathbf{n}}=1$) |
| double | step_max | 100000.0 |
| double * | s | size n |
| double * | v | size ${\mathbf{n}}\times {\mathbf{n}}$ |
| Integer | tdv | n |
| Integer | grade | (output only) |
| Integer | iter | (output only) |
| Integer | nf | (output only) |

### 11.2 Description of the Optional Parameters

 list – Nag_Boolean Default $\text{}=\mathrm{Nag_TRUE}$
On entry: if ${\mathbf{options}}\mathbf{.}{\mathbf{list}}=\mathrm{Nag_TRUE}$ the argument settings in the call to nag_opt_lsq_no_deriv (e04fcc) will be printed.
 print_level – Nag_PrintType Default $\text{}=\mathrm{Nag_Soln_Iter}$
On entry: the level of results printout produced by nag_opt_lsq_no_deriv (e04fcc). The following values are available:
 $\mathrm{Nag_NoPrint}$: no output.
 $\mathrm{Nag_Soln}$: the final solution.
 $\mathrm{Nag_Iter}$: one line of output for each iteration.
 $\mathrm{Nag_Soln_Iter}$: the final solution and one line of output for each iteration.
 $\mathrm{Nag_Soln_Iter_Full}$: the final solution and detailed printout at each iteration.
Details of each level of results printout are described in Section 9.
Constraint: ${\mathbf{options}}\mathbf{.}{\mathbf{print_level}}=\mathrm{Nag_NoPrint}$, $\mathrm{Nag_Soln}$, $\mathrm{Nag_Iter}$, $\mathrm{Nag_Soln_Iter}$ or $\mathrm{Nag_Soln_Iter_Full}$.
 outfile – const char[512] Default $\text{}=\mathtt{stdout}$
On entry: the name of the file to which results should be printed. If ${\mathbf{options}}\mathbf{.}{\mathbf{outfile}}\left[0\right]=\text{' \0 '}$ then the stdout stream is used.
 print_fun – pointer to function Default $\text{}=\text{}$NULL
On entry: printing function defined by you; the prototype of ${\mathbf{options}}\mathbf{.}{\mathbf{print_fun}}$ is
`void (*print_fun)(const Nag_Search_State *st, Nag_Comm *comm);`
See Section 9 for further details.
 max_iter – Integer Default $\text{}=\mathrm{max}\phantom{\rule{0.125em}{0ex}}\left(50,5{\mathbf{n}}\right)$
On entry: the limit on the number of iterations allowed before termination.
Constraint: ${\mathbf{options}}\mathbf{.}{\mathbf{max_iter}}\ge 0$.
 optim_tol – double Default $\text{}=\sqrt{\epsilon }$
On entry: the accuracy in $x$ to which the solution is required. If ${x}_{\mathrm{true}}$ is the true value of $x$ at the minimum, then ${x}_{\mathit{sol}}$, the estimated position prior to a normal exit, is such that
• $‖{x}_{\mathit{sol}}-{x}_{\mathrm{true}}‖<{\mathbf{options}}\mathbf{.}{\mathbf{optim_tol}}×\left(1.0+‖{x}_{\mathrm{true}}‖\right)\text{,}$
where $‖y‖={\left({\sum }_{j=1}^{n}{y}_{j}^{2}\right)}^{1/2}$. For example, if the elements of ${x}_{\mathit{sol}}$ are not much larger than 1.0 in modulus and if ${\mathbf{options}}\mathbf{.}{\mathbf{optim_tol}}=1.0×{10}^{-5}$, then ${x}_{\mathit{sol}}$ is usually accurate to about 5 decimal places. (For further details see Section 9.) If $F\left(x\right)$ and the variables are scaled roughly as described in Section 9 and $\epsilon$ is the machine precision, then a setting of order ${\mathbf{options}}\mathbf{.}{\mathbf{optim_tol}}=\sqrt{\epsilon }$ will usually be appropriate.
Constraint: $10\epsilon \le {\mathbf{options}}\mathbf{.}{\mathbf{optim_tol}}<1.0$.
 linesearch_tol – double Default $\text{}=0.5$. (If ${\mathbf{n}}=1$, default $\text{}=0.0$)
On entry: every iteration of nag_opt_lsq_no_deriv (e04fcc) involves a linear minimization, i.e., minimization of $F\left({x}^{\left(k\right)}+{\alpha }^{\left(k\right)}{p}^{\left(k\right)}\right)$ with respect to ${\alpha }^{\left(k\right)}$. ${\mathbf{options}}\mathbf{.}{\mathbf{linesearch_tol}}$ specifies how accurately the linear minimizations are to be performed. The minimum with respect to ${\alpha }^{\left(k\right)}$ will be located more accurately for small values of ${\mathbf{options}}\mathbf{.}{\mathbf{linesearch_tol}}$ (say 0.01) than for large values (say 0.9). Although accurate linear minimizations will generally reduce the number of iterations performed by nag_opt_lsq_no_deriv (e04fcc), they will increase the number of calls of lsqfun made each iteration. On balance it is usually more efficient to perform a low accuracy minimization.
Constraint: $0.0\le {\mathbf{options}}\mathbf{.}{\mathbf{linesearch_tol}}<1.0$.
 step_max – double Default $\text{}=100000.0$
On entry: an estimate of the Euclidean distance between the solution and the starting point supplied. (For maximum efficiency, a slight overestimate is preferable.) nag_opt_lsq_no_deriv (e04fcc) will ensure that, for each iteration,
• ${\sum }_{j=1}^{n}{\left({x}_{j}^{\left(k\right)}-{x}_{j}^{\left(k-1\right)}\right)}^{2}\le {\left({\mathbf{options}}\mathbf{.}{\mathbf{step_max}}\right)}^{2}$
where $k$ is the iteration number. Thus, if the problem has more than one solution, nag_opt_lsq_no_deriv (e04fcc) is most likely to find the one nearest to the starting point. On difficult problems, a realistic choice can prevent the sequence ${x}^{\left(k\right)}$ entering a region where the problem is ill-behaved and can help avoid overflow in the evaluation of $F\left(x\right)$. However, an underestimate of ${\mathbf{options}}\mathbf{.}{\mathbf{step_max}}$ can lead to inefficiency.
Constraint: ${\mathbf{options}}\mathbf{.}{\mathbf{step_max}}\ge {\mathbf{options}}\mathbf{.}{\mathbf{optim_tol}}$.
 s – double * Default memory $\text{}={\mathbf{n}}$
On entry: n values of memory will be automatically allocated by nag_opt_lsq_no_deriv (e04fcc) and this is the recommended method of use of ${\mathbf{options}}\mathbf{.}{\mathbf{s}}$. However you may supply memory from the calling program.
On exit: the singular values of the estimated Jacobian matrix at the final point. Thus ${\mathbf{options}}\mathbf{.}{\mathbf{s}}$ may be useful as information about the structure of your problem.
 v – double * Default memory $\text{}={\mathbf{n}}×{\mathbf{n}}$
On entry: ${\mathbf{n}}×{\mathbf{n}}$ values of memory will be automatically allocated by nag_opt_lsq_no_deriv (e04fcc) and this is the recommended method of use of ${\mathbf{options}}\mathbf{.}{\mathbf{v}}$. However you may supply memory from the calling program.
On exit: the matrix $V$ associated with the singular value decomposition
 $J = USV^{\mathrm{T}}$
of the estimated Jacobian matrix at the final point, stored by rows. This matrix may be useful for statistical purposes, since it is the matrix of orthonormalized eigenvectors of ${J}^{\mathrm{T}}J$.
 tdv – Integer Default $\text{}={\mathbf{n}}$
On entry: if memory is supplied then ${\mathbf{options}}\mathbf{.}{\mathbf{tdv}}$ must contain the last dimension of the array assigned to ${\mathbf{options}}\mathbf{.}{\mathbf{v}}$ as declared in the function from which nag_opt_lsq_no_deriv (e04fcc) is called.
On exit: the trailing dimension used by ${\mathbf{options}}\mathbf{.}{\mathbf{v}}$. If the Nag default memory allocation has been used this value will be n.
Constraint: ${\mathbf{options}}\mathbf{.}{\mathbf{tdv}}\ge {\mathbf{n}}$.
 grade – Integer
On exit: the grade of the Jacobian at the final point. nag_opt_lsq_no_deriv (e04fcc) estimates the dimension of the subspace for which the Jacobian matrix can be used as a valid approximation to the curvature (see Gill and Murray (1978)); this estimate is called the grade.
 iter – Integer
On exit: the number of iterations which have been performed in nag_opt_lsq_no_deriv (e04fcc).
 nf – Integer
On exit: the number of times the residuals have been evaluated (i.e., number of calls of lsqfun).

### 11.3 Description of Printed Output

The level of printed output can be controlled with the structure members ${\mathbf{options}}\mathbf{.}{\mathbf{list}}$ and ${\mathbf{options}}\mathbf{.}{\mathbf{print_level}}$ (see Section 11.2). If ${\mathbf{options}}\mathbf{.}{\mathbf{list}}=\mathrm{Nag_TRUE}$ then the argument values to nag_opt_lsq_no_deriv (e04fcc) are listed, whereas the printout of results is governed by the value of ${\mathbf{options}}\mathbf{.}{\mathbf{print_level}}$. The default of ${\mathbf{options}}\mathbf{.}{\mathbf{print_level}}=\mathrm{Nag_Soln_Iter}$ provides a single line of output at each iteration and the final result. This section describes all of the possible levels of results printout available from nag_opt_lsq_no_deriv (e04fcc).
When ${\mathbf{options}}\mathbf{.}{\mathbf{print_level}}=\mathrm{Nag_Iter}$ or $\mathrm{Nag_Soln_Iter}$ a single line of output is produced on completion of each iteration; this gives the following values:
 Itn: the current iteration number $k$.
 Nfun: the cumulative number of calls to lsqfun.
 Objective: the value of the objective function, $F\left({x}^{\left(k\right)}\right)$.
 Norm g: the Euclidean norm of the gradient of $F\left({x}^{\left(k\right)}\right)$.
 Norm x: the Euclidean norm of ${x}^{\left(k\right)}$.
 Norm(x(k-1)-x(k)): the Euclidean norm of ${x}^{\left(k-1\right)}-{x}^{\left(k\right)}$.
 Step: the step ${\alpha }^{\left(k\right)}$ taken along the computed search direction ${p}^{\left(k\right)}$.
When ${\mathbf{options}}\mathbf{.}{\mathbf{print_level}}=\mathrm{Nag_Soln_Iter_Full}$ more detailed results are given at each iteration. Additional values output are:
 Grade: the grade of the Jacobian matrix. (See the description of ${\mathbf{options}}\mathbf{.}{\mathbf{grade}}$, Section 11.2.)
 x: the current point ${x}^{\left(k\right)}$.
 g: the current estimate of the gradient of $F\left({x}^{\left(k\right)}\right)$.
 Singular values: the singular values of the current approximation to the Jacobian matrix.
If ${\mathbf{options}}\mathbf{.}{\mathbf{print_level}}=\mathrm{Nag_Soln}$, $\mathrm{Nag_Soln_Iter}$ or $\mathrm{Nag_Soln_Iter_Full}$ the final result is printed out. This consists of:
 x: the final point ${x}^{*}$.
 g: the estimate of the gradient of $F$ at the final point.
 Residuals: the values of the residuals ${f}_{i}$ at the final point.
 Sum of squares: the value of $F\left({x}^{*}\right)$, the sum of squares of the residuals at the final point.
If ${\mathbf{options}}\mathbf{.}{\mathbf{print_level}}=\mathrm{Nag_NoPrint}$ then printout will be suppressed; you can print the final solution when nag_opt_lsq_no_deriv (e04fcc) returns to the calling program.

#### 11.3.1 Output of results via a user-defined printing function

You may also specify your own print function for output of iteration results and the final solution by use of the ${\mathbf{options}}\mathbf{.}{\mathbf{print_fun}}$ function pointer, which has prototype
`void (*print_fun)(const Nag_Search_State *st, Nag_Comm *comm);`
The rest of this section can be skipped if the default printing facilities provide the required functionality.
When a user-defined function is assigned to ${\mathbf{options}}\mathbf{.}{\mathbf{print_fun}}$ this will be called in preference to the internal print function of nag_opt_lsq_no_deriv (e04fcc). Calls to the user-defined function are again controlled by means of the ${\mathbf{options}}\mathbf{.}{\mathbf{print_level}}$ member. Information is provided through st and comm, the two structure arguments to ${\mathbf{options}}\mathbf{.}{\mathbf{print_fun}}$. If $\mathbf{comm}\mathbf{\to }\mathbf{it_prt}=\mathrm{Nag_TRUE}$ then the results from the last iteration of nag_opt_lsq_no_deriv (e04fcc) are in the following members of st:
mInteger
The number of residuals.
nInteger
The number of variables.
xdouble *
Points to the $\mathbf{st}\mathbf{\to }\mathbf{n}$ memory locations holding the current point ${x}^{\left(k\right)}$.
fvecdouble *
Points to the $\mathbf{st}\mathbf{\to }\mathbf{m}$ memory locations holding the values of the residuals ${f}_{i}$ at the current point ${x}^{\left(k\right)}$.
fjacdouble *
Points to $\mathbf{st}\mathbf{\to }\mathbf{m}×\mathbf{st}\mathbf{\to }\mathbf{tdj}$ memory locations. $\mathbf{st}\mathbf{\to }\mathbf{fjac}\left[\left(\mathit{i}-1\right)×\mathbf{st}\mathbf{\to }\mathbf{tdj}+\left(\mathit{j}-1\right)\right]$ contains the value of $\frac{\partial {f}_{\mathit{i}}}{\partial {x}_{\mathit{j}}}$, for $\mathit{i}=1,2,\dots ,m$ and $\mathit{j}=1,2,\dots ,n$, at the current point ${x}^{\left(k\right)}$.
tdjInteger
The trailing dimension for $\mathbf{st}\mathbf{\to }\mathbf{fjac}\left[\right]$.
stepdouble
The step ${\alpha }^{\left(k\right)}$ taken along the search direction ${p}^{\left(k\right)}$.
xk_normdouble
The Euclidean norm of ${x}^{\left(k-1\right)}-{x}^{\left(k\right)}$.
gdouble *
Points to the $\mathbf{st}\mathbf{\to }\mathbf{n}$ memory locations holding the estimated gradient of $F$ at the current point ${x}^{\left(k\right)}$.
gradeInteger
The grade of the Jacobian matrix.
sdouble *
Points to the $\mathbf{st}\mathbf{\to }\mathbf{n}$ memory locations holding the singular values of the current approximation to the Jacobian.
iterInteger
The number of iterations, $k$, performed by nag_opt_lsq_no_deriv (e04fcc).
nfInteger
The cumulative number of calls made to lsqfun.
The relevant members of the structure comm are:
it_prtNag_Boolean
Will be Nag_TRUE when the print function is called with the result of the current iteration.
sol_prtNag_Boolean
Will be Nag_TRUE when the print function is called with the final result.
userdouble *
iuserInteger *
pPointer
Pointers for communication of user information. If used they must be allocated memory either before entry to nag_opt_lsq_no_deriv (e04fcc) or during a call to lsqfun or ${\mathbf{options}}\mathbf{.}{\mathbf{print_fun}}$. The type Pointer will be void * with a C compiler that defines void *, and char * otherwise.
© The Numerical Algorithms Group Ltd, Oxford, UK. 2017