NAG Toolbox: nag_opt_lsq_gencon_deriv (e04us)

 Contents

    1  Purpose
    2  Syntax
    3  Description
    4  References
    5  Parameters
    6  Error Indicators and Warnings
    7  Accuracy
    8  Further Comments
    9  Example
    10 Algorithmic Details
    11 Optional Parameters

Purpose

nag_opt_lsq_gencon_deriv (e04us) is designed to minimize an arbitrary smooth sum of squares function subject to constraints (which may include simple bounds on the variables, linear constraints and smooth nonlinear constraints) using a sequential quadratic programming (SQP) method. As many first derivatives as possible should be supplied by you; any unspecified derivatives are approximated by finite differences. See the description of the optional parameter Derivative Level in Description of the Optional Parameters. It is not intended for large sparse problems.
nag_opt_lsq_gencon_deriv (e04us) may also be used for unconstrained, bound-constrained and linearly constrained optimization.

Syntax

[iter, istate, c, cjac, f, fjac, clamda, objf, r, x, user, lwsav, iwsav, rwsav, ifail] = e04us(a, bl, bu, y, confun, objfun, istate, cjac, fjac, clamda, r, x, lwsav, iwsav, rwsav, 'm', m, 'n', n, 'nclin', nclin, 'ncnln', ncnln, 'user', user)
[iter, istate, c, cjac, f, fjac, clamda, objf, r, x, user, lwsav, iwsav, rwsav, ifail] = nag_opt_lsq_gencon_deriv(a, bl, bu, y, confun, objfun, istate, cjac, fjac, clamda, r, x, lwsav, iwsav, rwsav, 'm', m, 'n', n, 'nclin', nclin, 'ncnln', ncnln, 'user', user)
Before calling nag_opt_lsq_gencon_deriv (e04us), or the option setting function nag_opt_lsq_gencon_deriv_option_string (e04ur), nag_opt_init (e04wb) must be called.
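For orientation, the following sketch (based on the example program later in this document) shows the required calling order; the arrays a, bl, bu, y, istate, cjac, fjac, clamda, r and x, and the user functions objfun and confun, are assumed to have been set up as described under Parameters, and the option-setting call is optional.

% Sketch of the required calling order (problem arrays and user functions assumed set up).
[cwsav, lwsav, iwsav, rwsav, ifail] = e04wb('e04us');      % mandatory initialization
[lwsav, iwsav, rwsav, inform] = ...
    e04ur('Print Level = 1', lwsav, iwsav, rwsav);         % optional: set an optional parameter
[iter, istate, c, cjac, f, fjac, clamda, objf, r, x, user, ...
 lwsav, iwsav, rwsav, ifail] = ...
    e04us(a, bl, bu, y, @confun, @objfun, istate, cjac, ...
          fjac, clamda, r, x, lwsav, iwsav, rwsav);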

Description

nag_opt_lsq_gencon_deriv (e04us) is designed to solve the nonlinear least squares programming problem – the minimization of a smooth nonlinear sum of squares function subject to a set of constraints on the variables. The problem is assumed to be stated in the following form:
\[
\underset{x \in \mathbb{R}^n}{\mathrm{minimize}}\; F(x) = \tfrac{1}{2} \sum_{i=1}^{m} \bigl( y_i - f_i(x) \bigr)^2 \quad \text{subject to} \quad l \le \begin{pmatrix} x \\ A_L x \\ c(x) \end{pmatrix} \le u, \tag{1}
\]
where F(x) (the objective function) is a nonlinear function which can be represented as the sum of squares of m subfunctions y_1-f_1(x), y_2-f_2(x), …, y_m-f_m(x), the y_i are constant, A_L is an n_L by n constant matrix, and c(x) is an n_N-element vector of nonlinear constraint functions. (The matrix A_L and the vector c(x) may be empty.) The objective function and the constraint functions are assumed to be smooth, i.e., at least twice-continuously differentiable. (The method of nag_opt_lsq_gencon_deriv (e04us) will usually solve (1) if any isolated discontinuities are away from the solution.)
Note that although the bounds on the variables could be included in the definition of the linear constraints, we prefer to distinguish between them for reasons of computational efficiency. For the same reason, the linear constraints should not be included in the definition of the nonlinear constraints. Upper and lower bounds are specified for all the variables and for all the constraints. An equality constraint can be specified by setting l_i=u_i. If certain bounds are not present, the associated elements of l or u can be set to special values that will be treated as -∞ or +∞. (See the description of the optional parameter Infinite Bound Size.)
You must supply an initial estimate of the solution to (1), together with functions that define f(x) = (f_1(x), f_2(x), …, f_m(x))^T, c(x) and as many first partial derivatives as possible; unspecified derivatives are approximated by finite differences.
The subfunctions are defined by the array y and objfun, and the nonlinear constraints are defined by confun. On every call, these functions must return appropriate values of f(x) and c(x). You should also provide the available partial derivatives. Any unspecified derivatives are approximated by finite differences; see the description of the optional parameter Derivative Level. Note that if there are any nonlinear constraints, then the first call to confun will precede the first call to objfun.
For maximum reliability, it is preferable for you to provide all partial derivatives (see Chapter 8 of Gill et al. (1981) for a detailed discussion). If all gradients cannot be provided, it is similarly advisable to provide as many as possible. While developing objfun and confun, the optional parameter Verify should be used to check the calculation of any known gradients.

References

Gill P E, Murray W and Wright M H (1981) Practical Optimization Academic Press
Hock W and Schittkowski K (1981) Test Examples for Nonlinear Programming Codes. Lecture Notes in Economics and Mathematical Systems 187 Springer–Verlag

Parameters

Compulsory Input Parameters

1:     a(lda,:) – double array
The first dimension of the array a must be at least max(1,nclin).
The second dimension of the array a must be at least n if nclin>0, and at least 1 otherwise.
The ith row of a contains the ith row of the matrix A_L of general linear constraints in (1). That is, the ith row contains the coefficients of the ith general linear constraint, for i=1,2,…,nclin.
If nclin=0, the array a is not referenced.
2:     bl(n+nclin+ncnln) – double array
3:     bu(n+nclin+ncnln) – double array
bl must contain the lower bounds and bu the upper bounds for all the constraints, in the following order. The first n elements of each array must contain the bounds on the variables, the next n_L elements the bounds for the general linear constraints (if any) and the next n_N elements the bounds for the general nonlinear constraints (if any). To specify a nonexistent lower bound (i.e., l_j=-∞), set bl(j)≤-bigbnd, and to specify a nonexistent upper bound (i.e., u_j=+∞), set bu(j)≥bigbnd; the default value of bigbnd is 10^20, but this may be changed by the optional parameter Infinite Bound Size. To specify the jth constraint as an equality, set bl(j)=bu(j)=β, say, where |β|<bigbnd.
Constraints:
  • bl(j)≤bu(j), for j=1,2,…,n+nclin+ncnln;
  • if bl(j)=bu(j)=β, |β|<bigbnd.
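As an illustration (a sketch only, with hypothetical sizes n=2, nclin=1, ncnln=1), the bounds below impose x(1) ≥ -1, leave x(1) unbounded above and x(2) unbounded below, impose x(2) ≤ 5, fix the single linear constraint at the value 1 (an equality, since the lower and upper bounds coincide) and require the single nonlinear constraint to be non-negative:

% Hypothetical bounds for n = 2 variables, one linear and one nonlinear constraint.
bigbnd = 1.0e20;                        % default Infinite Bound Size
bl = [-1.0;    -bigbnd;  1.0;  0.0   ]; % x(1) >= -1, x(2) unbounded below,
bu = [ bigbnd;  5.0;     1.0;  bigbnd]; % linear constraint = 1 (equality), nonlinear >= 0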
4:     y(m) – double array
The coefficients of the constant vector y of the objective function.
5:     confun – function handle or string containing name of m-file
confun must calculate the vector c(x) of nonlinear constraint functions and (optionally) its Jacobian (= ∂c/∂x) for a specified n-element vector x. If there are no nonlinear constraints (i.e., ncnln=0), confun will never be called by nag_opt_lsq_gencon_deriv (e04us) and confun may be the string nag_opt_nlp1_dummy_confun (e04udm). If there are nonlinear constraints, the first call to confun will occur before the first call to objfun.
[mode, c, cjac, user] = confun(mode, ncnln, n, ldcj, needc, x, cjac, nstate, user)

Input Parameters

1:     mode int64int32nag_int scalar
Indicates which values must be assigned during each call of confun. Only the following values need be assigned, for each value of i such that needc(i)>0:
mode=0
c(i).
mode=1
All available elements in the ith row of cjac.
mode=2
c(i) and all available elements in the ith row of cjac.
2:     ncnln int64int32nag_int scalar
nN, the number of nonlinear constraints.
3:     n int64int32nag_int scalar
n, the number of variables.
4:     ldcj int64int32nag_int scalar
The first dimension of the array cjac.
5:     needc(ncnln) int64int32nag_int array
The indices of the elements of c and/or cjac that must be evaluated by confun. If needc(i)>0, then the ith element of c and/or the available elements of the ith row of cjac (see argument mode) must be evaluated at x.
6:     x(n) – double array
x, the vector of variables at which the constraint functions and/or all available elements of the constraint Jacobian are to be evaluated.
7:     cjac(ldcj,n) – double array
Is set to a special value.
8:     nstate int64int32nag_int scalar
If nstate=1 then nag_opt_lsq_gencon_deriv (e04us) is calling confun for the first time. This argument setting allows you to save computation time if certain data must be read or calculated only once.
9:     user – Any MATLAB object
confun is called from nag_opt_lsq_gencon_deriv (e04us) with the object supplied to nag_opt_lsq_gencon_deriv (e04us).

Output Parameters

1:     mode int64int32nag_int scalar
May be set to a negative value if you wish to terminate the solution to the current problem, and in this case nag_opt_lsq_gencon_deriv (e04us) will terminate with ifail set to mode.
2:     c(ncnln) – double array
If needc(i)>0 and mode=0 or 2, c(i) must contain the value of the ith constraint at x. The remaining elements of c, corresponding to the non-positive elements of needc, are ignored.
3:     cjac(ldcj,n) – double array
If needc(i)>0 and mode=1 or 2, the ith row of cjac must contain the available elements of the vector ∇c_i given by
\[
\nabla c_i = \left( \frac{\partial c_i}{\partial x_1}, \frac{\partial c_i}{\partial x_2}, \ldots, \frac{\partial c_i}{\partial x_n} \right)^{\mathrm{T}},
\]
where ∂c_i/∂x_j is the partial derivative of the ith constraint with respect to the jth variable, evaluated at the point x. See also the argument nstate. The remaining rows of cjac, corresponding to non-positive elements of needc, are ignored.
If all elements of the constraint Jacobian are known (i.e., Derivative Level=2 or 3), any constant elements may be assigned to cjac one time only at the start of the optimization. An element of cjac that is not subsequently assigned in confun will retain its initial value throughout. Constant elements may be loaded into cjac either before the call to nag_opt_lsq_gencon_deriv (e04us) or during the first call to confun (signalled by the value nstate=1). The ability to preload constants is useful when many Jacobian elements are identically zero, in which case cjac may be initialized to zero and nonzero elements may be reset by confun.
Note that constant nonzero elements do affect the values of the constraints. Thus, if cjac(i,j) is set to a constant value, it need not be reset in subsequent calls to confun, but the value cjac(i,j)×x(j) must nonetheless be added to c(i). For example, if cjac(1,1)=2 and cjac(1,2)=-5, then the term 2×x(1)-5×x(2) must be included in the definition of c(1).
It must be emphasized that, if Derivative Level=0 or 1, unassigned elements of cjac are not treated as constant; they are estimated by finite differences, at nontrivial expense. If you do not supply a value for the optional parameter Difference Interval, an interval for each element of x is computed automatically at the start of the optimization. The automatic procedure can usually identify constant elements of cjac, which are then computed once only by finite differences.
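As a sketch (using the hypothetical constraint c_1(x) = 2 x_1 - 5 x_2 + x_3^2 and assuming Derivative Level=3), a confun of the following form preloads the constant Jacobian elements on the first call and thereafter resets only the element that varies:

% Sketch only: one hypothetical nonlinear constraint c1(x) = 2*x1 - 5*x2 + x3^2.
function [mode, c, cjac, user] = confun(mode, ncnln, n, ldcj, needc, x, cjac, nstate, user)
  c = zeros(ncnln, 1);
  if (nstate == 1)                       % first call: load the constant Jacobian elements
    cjac = zeros(ncnln, n);
    cjac(1,1) =  2;
    cjac(1,2) = -5;
  end
  if (needc(1) > 0)
    if (mode == 0 || mode == 2)
      c(1) = 2*x(1) - 5*x(2) + x(3)^2;   % the constant terms must still appear here
    end
    if (mode == 1 || mode == 2)
      cjac(1,3) = 2*x(3);                % only the non-constant element is reset
    end
  end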
4:     user – Any MATLAB object
Note:  confun should be tested separately before being used in conjunction with nag_opt_lsq_gencon_deriv (e04us). See also the description of the optional parameter Verify.
6:     objfun – function handle or string containing name of m-file
objfun must calculate either the ith element of the vector f(x) = (f_1(x), f_2(x), …, f_m(x))^T or all m elements of f(x) and (optionally) its Jacobian (= ∂f/∂x) for a specified n-element vector x.
[mode, f, fjac, user] = objfun(mode, m, n, ldfj, needfi, x, fjac, nstate, user)

Input Parameters

1:     mode int64int32nag_int scalar
Indicates which values must be assigned during each call of objfun. Only the following values need be assigned:
mode=0 and needfi=i, where i>0
f(i).
mode=0 and needfi<0
f.
mode=1 and needfi<0
All available elements of fjac.
mode=2 and needfi<0
f and all available elements of fjac.
2:     m int64int32nag_int scalar
m, the number of subfunctions.
3:     n int64int32nag_int scalar
n, the number of variables.
4:     ldfj int64int32nag_int scalar
The first dimension of the array fjac.
5:     needfi int64int32nag_int scalar
If needfi=i>0, only the ith element of f(x) needs to be evaluated at x; the remaining elements need not be set. This can result in significant computational savings when m≫n. (A sketch illustrating this appears after the description of objfun below.)
6:     x(n) – double array
x, the vector of variables at which f(x) and/or all available elements of its Jacobian are to be evaluated.
7:     fjac(ldfj,n) – double array
Is set to a special value.
8:     nstate int64int32nag_int scalar
If nstate=1 then nag_opt_lsq_gencon_deriv (e04us) is calling objfun for the first time. This argument setting allows you to save computation time if certain data must be read or calculated only once.
9:     user – Any MATLAB object
objfun is called from nag_opt_lsq_gencon_deriv (e04us) with the object supplied to nag_opt_lsq_gencon_deriv (e04us).

Output Parameters

1:     mode int64int32nag_int scalar
May be set to a negative value if you wish to terminate the solution to the current problem, and in this case nag_opt_lsq_gencon_deriv (e04us) will terminate with ifail set to mode.
2:     f(m) – double array
If mode=0 and needfi=i>0, f(i) must contain the value of f_i at x.
If mode=0 or 2 and needfi<0, f(i) must contain the value of f_i at x, for i=1,2,…,m.
3:     fjac(ldfj,n) – double array
If mode=1 or 2 and needfi<0, the ith row of fjac must contain the available elements of the vector ∇f_i given by
\[
\nabla f_i = \left( \frac{\partial f_i}{\partial x_1}, \frac{\partial f_i}{\partial x_2}, \ldots, \frac{\partial f_i}{\partial x_n} \right)^{\mathrm{T}},
\]
evaluated at the point x. See also the argument nstate.
4:     user – Any MATLAB object
Note:  objfun should be tested separately before being used in conjunction with nag_opt_lsq_gencon_deriv (e04us). See also the description of the optional parameter Verify.
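The following sketch (with hypothetical subfunctions f_i(x) = x_1 + i x_2^2) illustrates an objfun that honours needfi, evaluating only the requested subfunction when needfi = i > 0 and otherwise filling the requested parts of f and fjac:

% Sketch only: m hypothetical subfunctions f_i(x) = x(1) + i*x(2)^2.
function [mode, f, fjac, user] = objfun(mode, m, n, ldfj, needfi, x, fjac, nstate, user)
  f = zeros(m, 1);
  if (needfi > 0)                        % only the needfi-th subfunction is required
    i = double(needfi);
    f(i) = x(1) + i*x(2)^2;
    return;
  end
  for i = 1:double(m)
    if (mode == 0 || mode == 2)
      f(i) = x(1) + i*x(2)^2;
    end
    if (mode == 1 || mode == 2)
      fjac(i,1) = 1;
      fjac(i,2) = 2*i*x(2);
    end
  end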
7:     istate(n+nclin+ncnln) int64int32nag_int array
Need not be set if the (default) optional parameter Cold Start is used.
If the optional parameter Warm Start has been chosen, the elements of istate corresponding to the bounds and linear constraints define the initial working set for the procedure that finds a feasible point for the linear constraints and bounds. The active set at the conclusion of this procedure and the elements of istate corresponding to nonlinear constraints then define the initial working set for the first QP subproblem. More precisely, the first n elements of istate refer to the upper and lower bounds on the variables, the next n_L elements refer to the upper and lower bounds on A_L x, and the next n_N elements refer to the upper and lower bounds on c(x). Possible values for istate(j) are as follows:
istate(j) Meaning
0 The corresponding constraint is not in the initial QP working set.
1 This inequality constraint should be in the working set at its lower bound.
2 This inequality constraint should be in the working set at its upper bound.
3 This equality constraint should be in the initial working set. This value must not be specified unless bl(j)=bu(j).
The values -2, -1 and 4 are also acceptable but will be modified by the function. If nag_opt_lsq_gencon_deriv (e04us) has been called previously with the same values of n, nclin and ncnln, istate already contains satisfactory information. (See also the description of the optional parameter Warm Start.) The function also adjusts (if necessary) the values supplied in x to be consistent with istate.
Constraint: -2 ≤ istate(j) ≤ 4, for j=1,2,…,n+nclin+ncnln.
8:     cjac(ldcj,:) – double array
The first dimension of the array cjac must be at least max(1,ncnln).
The second dimension of the array cjac must be at least n if ncnln>0, and at least 1 otherwise.
In general, cjac need not be initialized before the call to nag_opt_lsq_gencon_deriv (e04us). However, if Derivative Level=3, you may optionally set the constant elements of cjac (see argument nstate in the description of confun). Such constant elements need not be re-assigned on subsequent calls to confun.
9:     fjac(ldfj,n) – double array
ldfj, the first dimension of the array, must satisfy the constraint ldfj≥m.
In general, fjac need not be initialized before the call to nag_opt_lsq_gencon_deriv (e04us). However, if Derivative Level=3, you may optionally set the constant elements of fjac (see argument nstate in the description of objfun). Such constant elements need not be re-assigned on subsequent calls to objfun.
10:   clamda(n+nclin+ncnln) – double array
Need not be set if the (default) optional parameter Cold Start is used.
If the optional parameter Warm Start has been chosen, clamda(j) must contain a multiplier estimate for each nonlinear constraint with a sign that matches the status of the constraint specified by the istate array, for j=n+nclin+1,…,n+nclin+ncnln. The remaining elements need not be set. Note that if the jth constraint is defined as ‘inactive’ by the initial value of the istate array (i.e., istate(j)=0), clamda(j) should be zero; if the jth constraint is an inequality active at its lower bound (i.e., istate(j)=1), clamda(j) should be non-negative; if the jth constraint is an inequality active at its upper bound (i.e., istate(j)=2), clamda(j) should be non-positive. If necessary, the function will modify clamda to match these rules.
11:   r(ldr,n) – double array
ldr, the first dimension of the array, must satisfy the constraint ldr≥n.
Need not be initialized if the (default) optional parameter Cold Start is used.
If the optional parameter Warm Start has been chosen, r must contain the upper triangular Cholesky factor R  of the initial approximation of the Hessian of the Lagrangian function, with the variables in the natural order. Elements not in the upper triangular part of r are assumed to be zero and need not be assigned.
12:   x(n) – double array
An initial estimate of the solution.
13:   lwsav(120) – logical array
14:   iwsav(610) int64int32nag_int array
15:   rwsav(475) – double array
The arrays lwsav, iwsav and rwsav must not be altered between calls to any of the functions nag_opt_lsq_gencon_deriv (e04us), nag_opt_lsq_gencon_deriv_option_string (e04ur).

Optional Input Parameters

1:     m int64int32nag_int scalar
Default: the dimension of the array y and the first dimension of the array fjac. (An error is raised if these dimensions are not equal.)
m, the number of subfunctions associated with F(x).
Constraint: m>0.
2:     n int64int32nag_int scalar
Default: the dimension of the array x and the first dimension of the array r and the second dimension of the arrays a, cjac, fjac, r. (An error is raised if these dimensions are not equal.)
n, the number of variables.
Constraint: n>0.
3:     nclin int64int32nag_int scalar
Default: the first dimension of the array a.
nL, the number of general linear constraints.
Constraint: nclin0.
4:     ncnln int64int32nag_int scalar
Default: the first dimension of the array cjac.
nN, the number of nonlinear constraints.
Constraint: ncnln0.
5:     user – Any MATLAB object
user is not used by nag_opt_lsq_gencon_deriv (e04us), but is passed to confun and objfun. Note that for large objects it may be more efficient to use a global variable which is accessible from the m-files than to use user.
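As a sketch (the field name used here is hypothetical), problem data can be passed to objfun and confun through user rather than through a global variable:

% Sketch only: pass problem data to objfun/confun via the 'user' argument.
user.a = [8; 8; 10; 10];                       % data required by objfun (hypothetical)
[iter, istate, c, cjac, f, fjac, clamda, objf, r, x, user, ...
 lwsav, iwsav, rwsav, ifail] = ...
    e04us(a, bl, bu, y, @confun, @objfun, istate, cjac, ...
          fjac, clamda, r, x, lwsav, iwsav, rwsav, 'user', user);
% Inside objfun the same object is available again:
%   function [mode, f, fjac, user] = objfun(mode, m, n, ldfj, needfi, x, fjac, nstate, user)
%     a = user.a;
%     ...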

Output Parameters

1:     iter int64int32nag_int scalar
The number of major iterations performed.
2:     istate(n+nclin+ncnln) int64int32nag_int array
The status of the constraints in the QP working set at the point returned in x. The significance of each possible value of istate(j) is as follows:
istate(j) Meaning
-2 This constraint violates its lower bound by more than the appropriate feasibility tolerance (see the optional parameters Linear Feasibility Tolerance and Nonlinear Feasibility Tolerance). This value can occur only when no feasible point can be found for a QP subproblem.
-1 This constraint violates its upper bound by more than the appropriate feasibility tolerance (see the optional parameters Linear Feasibility Tolerance and Nonlinear Feasibility Tolerance). This value can occur only when no feasible point can be found for a QP subproblem.
0 The constraint is satisfied to within the feasibility tolerance, but is not in the QP working set.
1 This inequality constraint is included in the QP working set at its lower bound.
2 This inequality constraint is included in the QP working set at its upper bound.
3 This constraint is included in the QP working set as an equality. This value of istate can occur only when bl(j)=bu(j).
3:     c(max(1,ncnln)) – double array
If ncnln>0, c(i) contains the value of the ith nonlinear constraint function c_i at the final iterate, for i=1,2,…,ncnln.
If ncnln=0, the array c is not referenced.
4:     cjac(ldcj,:) – double array
The first dimension of the array cjac will be max(1,ncnln).
The second dimension of the array cjac will be n if ncnln>0 and 1 otherwise.
If ncnln>0, cjac contains the Jacobian matrix of the nonlinear constraint functions at the final iterate, i.e., cjac(i,j) contains the partial derivative of the ith constraint function with respect to the jth variable, for i=1,2,…,ncnln and j=1,2,…,n. (See the discussion of argument cjac under confun.)
If ncnln=0, the array cjac is not referenced.
5:     f(m) – double array
f(i) contains the value of the ith function f_i at the final iterate, for i=1,2,…,m.
6:     fjac(ldfj,n) – double array
The Jacobian matrix of the functions f_1,f_2,…,f_m at the final iterate, i.e., fjac(i,j) contains the partial derivative of the ith function with respect to the jth variable, for i=1,2,…,m and j=1,2,…,n. (See also the discussion of argument fjac under objfun.)
7:     clamda(n+nclin+ncnln) – double array
The values of the QP multipliers from the last QP subproblem. clamda(j) should be non-negative if istate(j)=1 and non-positive if istate(j)=2.
8:     objf – double scalar
The value of the objective function at the final iterate.
9:     r(ldr,n) – double array
If Hessian=NO, r contains the upper triangular Cholesky factor R of Q^T H̃ Q, an estimate of the transformed and reordered Hessian of the Lagrangian at x (see (6) in nag_opt_nlp1_rcomm (e04uf)). If Hessian=YES, r contains the upper triangular Cholesky factor R of H, the approximate (untransformed) Hessian of the Lagrangian, with the variables in the natural order.
10:   x(n) – double array
The final estimate of the solution.
11:   user – Any MATLAB object
12:   lwsav(120) – logical array
13:   iwsav(610) int64int32nag_int array
14:   rwsav(475) – double array
15:   ifail int64int32nag_int scalar
ifail=0 unless the function detects an error (see Error Indicators and Warnings).
nag_opt_lsq_gencon_deriv (e04us) returns with ifail=0 if the iterates have converged to a point x that satisfies the first-order Kuhn–Tucker conditions (see Overview in nag_opt_nlp1_rcomm (e04uf)) to the accuracy requested by the optional parameter Optimality Tolerance (default value = ε_r^0.8, where ε_r is the value of the optional parameter Function Precision (default value = ε^0.9, where ε is the machine precision)), i.e., the projected gradient and active constraint residuals are negligible at x.
You should check whether the following four conditions are satisfied:
(i) the final value of Norm Gz (see Description of Printed output) is significantly less than that at the starting point;
(ii) during the final major iterations, the values of Step and Mnr (see Description of Printed output) are both one;
(iii) the last few values of both Norm Gz and Violtn (see Description of Printed output) become small at a fast linear rate; and
(iv) Cond Hz (see Description of Printed output) is small.
If all these conditions hold, x is almost certainly a local minimum of (1).

Error Indicators and Warnings

Note: nag_opt_lsq_gencon_deriv (e04us) may return useful information for one or more of the following detected errors or warnings.
Errors or warnings detected by the function:

Cases prefixed with W are classified as warnings and do not generate an error of type NAG:error_n. See nag_issue_warnings.

W  ifail<0
A negative value of ifail indicates an exit from nag_opt_lsq_gencon_deriv (e04us) because you set mode<0 in objfun or confun. The value of ifail will be the same as your setting of mode.
W  ifail=1
The final iterate x satisfies the first-order Kuhn–Tucker conditions (see Overview in nag_opt_nlp1_rcomm (e04uf)) to the accuracy requested, but the sequence of iterates has not yet converged. nag_opt_lsq_gencon_deriv (e04us) was terminated because no further improvement could be made in the merit function (see Description of Printed output).
This value of ifail may occur in several circumstances. The most common situation is that you ask for a solution with accuracy that is not attainable with the given precision of the problem (as specified by the optional parameter Function Precision (default value = ε^0.9, where ε is the machine precision)). This condition will also occur if, by chance, an iterate is an ‘exact’ Kuhn–Tucker point, but the change in the variables was significant at the previous iteration. (This situation often happens when minimizing very simple functions, such as quadratics.)
If the four conditions listed in Arguments for ifail=0 are satisfied, x is likely to be a solution of (1) even if ifail=1.
W  ifail=2
nag_opt_lsq_gencon_deriv (e04us) has terminated without finding a feasible point for the linear constraints and bounds, which means that either no feasible point exists for the given value of the optional parameter Linear Feasibility Tolerance (default value = √ε, where ε is the machine precision), or no feasible point could be found in the number of iterations specified by the optional parameter Minor Iteration Limit (default value = max(50, 3(n+n_L+n_N))). You should check that there are no constraint redundancies. If the data for the constraints are accurate only to an absolute precision σ, you should ensure that the value of the optional parameter Linear Feasibility Tolerance is greater than σ. For example, if all elements of A_L are of order unity and are accurate to only three decimal places, Linear Feasibility Tolerance should be at least 10^-3.
W  ifail=3
No feasible point could be found for the nonlinear constraints. The problem may have no feasible solution. This means that there has been a sequence of QP subproblems for which no feasible point could be found (indicated by I at the end of each line of intermediate printout produced by the major iterations; see Description of Printed output). This behaviour will occur if there is no feasible point for the nonlinear constraints. (However, there is no general test that can determine whether a feasible point exists for a set of nonlinear constraints.) If the infeasible subproblems occur from the very first major iteration, it is highly likely that no feasible point exists. If infeasibilities occur when earlier subproblems have been feasible, small constraint inconsistencies may be present. You should check the validity of constraints with negative values of istate. If you are convinced that a feasible point does exist, nag_opt_lsq_gencon_deriv (e04us) should be restarted at a different starting point.
W  ifail=4
The limiting number of iterations (as determined by the optional parameter Major Iteration Limit (default value = max(50, 3(n+n_L)+10n_N))) has been reached.
If the algorithm appears to be making satisfactory progress, then Major Iteration Limit may be too small. If so, either increase its value and rerun nag_opt_lsq_gencon_deriv (e04us) or, alternatively, rerun nag_opt_lsq_gencon_deriv (e04us) using the optional parameter Warm Start. If the algorithm seems to be making little or no progress however, then you should check for incorrect gradients or ill-conditioning as described under ifail=6.
Note that ill-conditioning in the working set is sometimes resolved automatically by the algorithm, in which case performing additional iterations may be helpful. However, ill-conditioning in the Hessian approximation tends to persist once it has begun, so that allowing additional iterations without altering r  is usually inadvisable. If the quasi-Newton update of the Hessian approximation was reset during the latter major iterations (i.e., an r occurs at the end of each line of intermediate printout; see Description of Printed output), it may be worthwhile to try a Warm Start at the final point as suggested above.
   ifail=5
Not used by this function.
W  ifail=6
x does not satisfy the first-order Kuhn–Tucker conditions (see Overview in nag_opt_nlp1_rcomm (e04uf)), and no improved point for the merit function (see Description of Printed output) could be found during the final linesearch.
This sometimes occurs because an overly stringent accuracy has been requested, i.e., the value of the optional parameter Optimality Tolerance (default value = ε_r^0.8, where ε_r is the value of the optional parameter Function Precision (default value = ε^0.9, where ε is the machine precision)) is too small. In this case you should apply the four tests described under ifail=0 to determine whether or not the final solution is acceptable (see Gill et al. (1981) for a discussion of the attainable accuracy).
If many iterations have occurred in which essentially no progress has been made and nag_opt_lsq_gencon_deriv (e04us) has failed completely to move from the initial point then user-supplied functions objfun and/or confun may be incorrect. You should refer to comments under ifail=7 and check the gradients using the optional parameter Verify (default value=0). Unfortunately, there may be small errors in the objective and constraint gradients that cannot be detected by the verification process. Finite difference approximations to first derivatives are catastrophically affected by even small inaccuracies. An indication of this situation is a dramatic alteration in the iterates if the finite difference interval is altered. One might also suspect this type of error if a switch is made to central differences even when Norm Gz and Violtn (see Description of Printed output) are large.
Another possibility is that the search direction has become inaccurate because of ill-conditioning in the Hessian approximation or the matrix of constraints in the working set; either form of ill-conditioning tends to be reflected in large values of Mnr (the number of iterations required to solve each QP subproblem; see Description of Printed output).
If the condition estimate of the projected Hessian (Cond Hz; see Description of Monitoring Information) is extremely large, it may be worthwhile rerunning nag_opt_lsq_gencon_deriv (e04us) from the final point with the optional parameter Warm Start. In this situation, istate and clamda should be left unaltered and r  should be reset to the identity matrix.
If the matrix of constraints in the working set is ill-conditioned (i.e., Cond T is extremely large; see Description of Monitoring Information), it may be helpful to run nag_opt_lsq_gencon_deriv (e04us) with a relaxed value of the optional parameter Feasibility Tolerance (default value = √ε, where ε is the machine precision). (Constraint dependencies are often indicated by wide variations in size in the diagonal elements of the matrix T, whose diagonals will be printed if Major Print Level ≥ 30.)
W  ifail=7
The user-supplied derivatives of the subfunctions and/or nonlinear constraints appear to be incorrect.
Large errors were found in the derivatives of the subfunctions and/or nonlinear constraints. This value of ifail will occur if the verification process indicated that at least one Jacobian element had no correct figures. You should refer to the printed output to determine which elements are suspected to be in error.
As a first step, you should check that the code for the subfunction and constraint values is correct – for example, by computing the subfunctions at a point where the correct value of F(x) is known. However, care should be taken that the chosen point fully tests the evaluation of the subfunctions. It is remarkable how often the values x=0 or x=1 are used to test function evaluation procedures, and how often the special properties of these numbers make the test meaningless.
Special care should be used in this test if computation of the subfunctions involves subsidiary data communicated in global storage. Although the first evaluation of the subfunctions may be correct, subsequent calculations may be in error because some of the subsidiary data has accidentally been overwritten.
Gradient checking will be ineffective if the objective function uses information computed by the constraints, since they are not necessarily computed before each function evaluation.
Errors in programming the subfunctions may be quite subtle in that the subfunction values are ‘almost’ correct. For example, a subfunction may not be accurate to full precision because of the inaccurate calculation of a subsidiary quantity, or the limited accuracy of data upon which the subfunction depends. A common error on machines where numerical calculations are usually performed in double precision is to include even one single precision constant in the calculation of the subfunction; since some compilers do not convert such constants to double precision, half the correct figures may be lost by such a seemingly trivial error.
   ifail=8
Not used by this function.
   ifail=9
An input argument is invalid.
   overflow
If overflow occurs then either an element of C is very large, or the singular values or singular vectors have been incorrectly supplied.
   ifail=-99
An unexpected error has been triggered by this routine. Please contact NAG.
   ifail=-399
Your licence key may have expired or may not have been installed correctly.
   ifail=-999
Dynamic memory allocation failed.

Accuracy

If ifail=0 on exit, then the vector returned in the array x is an estimate of the solution to an accuracy of approximately Optimality Tolerance (default value = ε^0.8, where ε is the machine precision).

Further Comments

Description of the Printed Output

This section describes the intermediate printout and final printout produced by nag_opt_lsq_gencon_deriv (e04us). The intermediate printout is a subset of the monitoring information produced by the function at every iteration (see Description of Monitoring Information). You can control the level of printed output (see the description of the optional parameter Major Print Level). Note that the intermediate printout and final printout are produced only if Major Print Level ≥ 10 (the default for nag_opt_lsq_gencon_deriv (e04us) is 0, so by default no output is produced).
The following line of summary output (<80 characters) is produced at every major iteration. In all cases, the values of the quantities printed are those in effect on completion of the given iteration.
Maj is the major iteration count.
Mnr is the number of minor iterations required by the feasibility and optimality phases of the QP subproblem. Generally, Mnr will be 1 in the later iterations, since theoretical analysis predicts that the correct active set will be identified near the solution (see Algorithmic Details in nag_opt_nlp1_rcomm (e04uf)).
Note that Mnr may be greater than the optional parameter Minor Iteration Limit if some iterations are required for the feasibility phase.
Step is the step αk taken along the computed search direction. On reasonably well-behaved problems, the unit step (i.e., αk=1) will be taken as the solution is approached.
Merit Function is the value of the augmented Lagrangian merit function (see (12) in nag_opt_nlp1_rcomm (e04uf)) at the current iterate. This function will decrease at each iteration unless it was necessary to increase the penalty parameters (see The Merit Function in nag_opt_nlp1_rcomm (e04uf)). As the solution is approached, Merit Function will converge to the value of the objective function at the solution.
If the QP subproblem does not have a feasible point (signified by I at the end of the current output line) then the merit function is a large multiple of the constraint violations, weighted by the penalty parameters. During a sequence of major iterations with infeasible subproblems, the sequence of Merit Function values will decrease monotonically until either a feasible subproblem is obtained or nag_opt_lsq_gencon_deriv (e04us) terminates with ifail=3 (no feasible point could be found for the nonlinear constraints).
If there are no nonlinear constraints present (i.e., ncnln=0) then this entry contains Objective, the value of the objective function Fx. The objective function will decrease monotonically to its optimal value when there are no nonlinear constraints.
Norm Gz is ‖Z^T g_FR‖, the Euclidean norm of the projected gradient (see Solution of the Quadratic Programming Subproblem in nag_opt_nlp1_rcomm (e04uf)). Norm Gz will be approximately zero in the neighbourhood of a solution.
Violtn is the Euclidean norm of the residuals of constraints that are violated or in the predicted active set (not printed if ncnln is zero). Violtn will be approximately zero in the neighbourhood of a solution.
Cond Hz is a lower bound on the condition number of the projected Hessian approximation H_Z (H_Z = Z^T H_FR Z = R_Z^T R_Z; see (6) and (11) in nag_opt_nlp1_rcomm (e04uf)). The larger this number, the more difficult the problem.
M is printed if the quasi-Newton update has been modified to ensure that the Hessian approximation is positive definite (see The Quasi-Newton Update in nag_opt_nlp1_rcomm (e04uf)).
I is printed if the QP subproblem has no feasible point.
C is printed if central differences have been used to compute the unspecified objective and constraint gradients. If the value of Step is zero then the switch to central differences was made because no lower point could be found in the linesearch. (In this case, the QP subproblem is resolved with the central difference gradient and Jacobian.) If the value of Step is nonzero then central differences were computed because Norm Gz and Violtn imply that x is close to a Kuhn–Tucker point (see Overview in nag_opt_nlp1_rcomm (e04uf)).
L is printed if the linesearch has produced a relative change in x greater than the value defined by the optional parameter Step Limit. If this output occurs frequently during later iterations of the run, optional parameter Step Limit should be set to a larger value.
R is printed if the approximate Hessian has been refactorized. If the diagonal condition estimator of R indicates that the approximate Hessian is badly conditioned then the approximate Hessian is refactorized using column interchanges. If necessary, R is modified so that its diagonal condition estimator is bounded.
The final printout includes a listing of the status of every variable and constraint.
The following describes the printout for each variable. A full stop (.) is printed for any numerical value that is zero.
Varbl gives the name (V) and index j, for j=1,2,…,n, of the variable.
State gives the state of the variable (FR if neither bound is in the working set, EQ if a fixed variable, LL if on its lower bound, UL if on its upper bound, TF if temporarily fixed at its current value). If Value lies outside the upper or lower bounds by more than the Feasibility Tolerance, State will be ++ or -- respectively.
A key is sometimes printed before State.
A Alternative optimum possible. The variable is active at one of its bounds, but its Lagrange multiplier is essentially zero. This means that if the variable were allowed to start moving away from its bound then there would be no change to the objective function. The values of the other free variables might change, giving a genuine alternative solution. However, if there are any degenerate variables (labelled D), the actual change might prove to be zero, since one of them could encounter a bound immediately. In either case the values of the Lagrange multipliers might also change.
D Degenerate. The variable is free, but it is equal to (or very close to) one of its bounds.
I Infeasible. The variable is currently violating one of its bounds by more than the Feasibility Tolerance.
Value is the value of the variable at the final iteration.
Lower Bound is the lower bound specified for the variable. None indicates that bl(j)≤-bigbnd.
Upper Bound is the upper bound specified for the variable. None indicates that bu(j)≥bigbnd.
Lagr Mult is the Lagrange multiplier for the associated bound. This will be zero if State is FR unless bl(j)≤-bigbnd and bu(j)≥bigbnd, in which case the entry will be blank. If x is optimal, the multiplier should be non-negative if State is LL and non-positive if State is UL.
Slack is the difference between the variable Value and the nearer of its (finite) bounds bl(j) and bu(j). A blank entry indicates that the associated variable is not bounded (i.e., bl(j)≤-bigbnd and bu(j)≥bigbnd).
The meaning of the printout for linear and nonlinear constraints is the same as that given above for variables, with ‘variable’ replaced by ‘constraint’, bl(j) and bu(j) replaced by bl(n+j) and bu(n+j) respectively, and with the following changes in the heading:
L Con gives the name (L) and index j, for j=1,2,…,n_L, of the linear constraint.
N Con gives the name (N) and index (j−n_L), for j=n_L+1,…,n_L+n_N, of the nonlinear constraint.
Note that movement off a constraint (as opposed to a variable moving away from its bound) can be interpreted as allowing the entry in the Slack column to become positive.
Numerical values are output with a fixed number of digits; they are not guaranteed to be accurate to this precision.

Example

This example is based on Problem 57 in Hock and Schittkowski (1981) and involves the minimization of the sum of squares function
\[
F(x) = \tfrac{1}{2} \sum_{i=1}^{44} \bigl( y_i - f_i(x) \bigr)^2,
\]
where
\[
f_i(x) = x_1 + (0.49 - x_1) e^{-x_2 (a_i - 8)}
\]
and
 i   y_i   a_i      i   y_i   a_i
 1   0.49    8     23   0.41   22
 2   0.49    8     24   0.40   22
 3   0.48   10     25   0.42   24
 4   0.47   10     26   0.40   24
 5   0.48   10     27   0.40   24
 6   0.47   10     28   0.41   26
 7   0.46   12     29   0.40   26
 8   0.46   12     30   0.41   26
 9   0.45   12     31   0.41   28
10   0.43   12     32   0.40   28
11   0.45   14     33   0.40   30
12   0.43   14     34   0.40   30
13   0.43   14     35   0.38   30
14   0.44   16     36   0.41   32
15   0.43   16     37   0.40   32
16   0.43   16     38   0.40   34
17   0.46   18     39   0.41   36
18   0.45   18     40   0.38   36
19   0.42   20     41   0.40   38
20   0.42   20     42   0.40   38
21   0.43   20     43   0.39   40
22   0.41   22     44   0.39   42
subject to the bounds
\[
x_1 \ge 0.4, \quad x_2 \ge -4.0
\]
to the general linear constraint
\[
x_1 + x_2 \ge 1.0
\]
and to the nonlinear constraint
\[
0.49 x_2 - x_1 x_2 \ge 0.09 .
\]
The initial point, which is infeasible, is
\[
x_0 = (0.4, 0.0)^{\mathrm{T}}
\]
and F(x_0) = 0.002241.
The optimal solution (to five figures) is
\[
x^* = (0.41995, 1.28484)^{\mathrm{T}},
\]
and F(x^*) = 0.01423. The nonlinear constraint is active at the solution.
function e04us_example


fprintf('e04us example results\n\n');

% Hock and Schittkowski problem specifications
m   = 44;
n   = 2;
n_l = 1;
n_n = 1;
a  = [1, 1];
bl = [0.4;   -4;    1;    0];
bu = [1e25;  1e25;  1e25; 1e25];
y  = [0.49;  0.49; 0.48; 0.47; 0.48; 0.47; 0.46; 0.46; 0.45; 0.43; 0.45;
      0.43;  0.43; 0.44; 0.43; 0.43; 0.46; 0.45; 0.42; 0.42; 0.43; 0.41;
      0.41;  0.40; 0.42; 0.40; 0.40; 0.41; 0.40; 0.41; 0.41; 0.40; 0.40;
      0.4;   0.38; 0.41; 0.40; 0.40; 0.41; 0.38; 0.40; 0.40; 0.39; 0.39];

istate = zeros(4, 1, 'int64');
cjac   = zeros(1,n);
fjac   = zeros(m,n);
clamda = zeros(n+n_l+n_n,1);
r      = zeros(n,n);

% Initialize
x      = [0.4;  0];
[cwsav,lwsav,iwsav,rwsav,ifail] = e04wb('e04us');
[iter, istate, c, cjac, f, fjac, clamda, objf, r, x, user, ...
 lwsav, iwsav, rwsav, ifail] = ...
     e04us(...
           a, bl, bu, y, @confun, @objfun, istate, cjac, ...
           fjac, clamda, r, x, lwsav, iwsav, rwsav);

fprintf('Minimum value     :  %9.4f\n\n',objf);
fprintf('Found after %3d iterations at x:\n     ',iter);
fprintf(' %9.4f',x);
fprintf('\nNonlinear constraints at x:\n   ');
fprintf(' %11.3e',c);
fprintf('\n');



function [mode, c, cjac, user] = ...
    confun(mode, ncnln, n, ldcj, needc, x, cjac, nstate, user)
  c = zeros(ncnln, 1);

  if (nstate == 1)
%   first call to confun. set all jacobian elements to zero.
%   note that this will only work when 'derivative level = 3'
%   (the default; see section 11.2).
    cjac = zeros(ncnln, n);
  end
  if (needc(1) > 0)
    if (mode == 0 || mode == 2)
      c(1) = -0.09 - x(1)*x(2) + 0.49*x(2);
    end
    if (mode == 1 || mode == 2)
      cjac(1,1) = -x(2);
      cjac(1,2) = -x(1) + 0.49;
    end
  end


function [mode, f, fjac, user] = ...
      objfun(mode, m, n, ldfj, needfi, x, fjac, nstate, user)
  f = zeros(m, 1);

  a = [ 8,  8, 10, 10, 10, 10, 12, 12, 12, 12, 14, ...
       14, 14, 16, 16, 16, 18, 18, 20, 20, 20, 22, ...
       22, 22, 24, 24, 24, 26, 26, 26, 28, 28, 30, ...
       30, 30, 32, 32, 34, 36, 36, 38, 38, 40, 42];

  for i = 1:double(m)
    temp = exp(-x(2)*(a(i)-8));
    if (mode  ==  0 || mode  ==  2)
      f(i) = x(1) + (0.49-x(1))*temp;
    end
    if (mode  ==  1 || mode  ==  2)
      fjac(i,1) = 1 - temp;
      fjac(i,2) = -(0.49-x(1))*(a(i)-8)*temp;
    end
  end
e04us example results

Minimum value     :     0.0142

Found after   6 iterations at x:
         0.4200    1.2848
Nonlinear constraints at x:
     -9.768e-13
Note: the remainder of this document is intended for more advanced users. Optional Parameters describes the optional parameters which may be set by calls to nag_opt_lsq_gencon_deriv_option_string (e04ur). Description of Monitoring Information describes the quantities which can be requested to monitor the course of the computation.

Algorithmic Details

nag_opt_lsq_gencon_deriv (e04us) implements a sequential quadratic programming (SQP) method incorporating an augmented Lagrangian merit function and a BFGS (Broyden–Fletcher–Goldfarb–Shanno) quasi-Newton approximation to the Hessian of the Lagrangian, and is based on nag_opt_nlp2_solve (e04wd). The documents for nag_opt_lsq_lincon_solve (e04nc), nag_opt_nlp1_rcomm (e04uf) and nag_opt_nlp2_solve (e04wd) should be consulted for details of the method.

Optional Parameters

Several optional parameters in nag_opt_lsq_gencon_deriv (e04us) define choices in the problem specification or the algorithm logic. In order to reduce the number of formal arguments of nag_opt_lsq_gencon_deriv (e04us) these optional parameters have associated default values that are appropriate for most problems. Therefore you need only specify those optional parameters whose values are to be different from their default values.
The remainder of this section can be skipped if you wish to use the default values for all optional parameters.
The following is a list of the optional parameters available. A full description of each optional parameter is provided in Description of the Optional Parameters.
Optional parameters may be specified by calling nag_opt_lsq_gencon_deriv_option_string (e04ur) before a call to nag_opt_lsq_gencon_deriv (e04us).
nag_opt_lsq_gencon_deriv_option_string (e04ur) can be called to supply options directly, one call being necessary for each optional parameter. For example,
[lwsav, iwsav, rwsav, inform] = e04ur('Print Level = 1', lwsav, iwsav, rwsav);
nag_opt_lsq_gencon_deriv_option_string (e04ur) should be consulted for a full description of this method of supplying optional parameters.
All optional parameters not specified by you are set to their default values. Optional parameters specified by you are unaltered by nag_opt_lsq_gencon_deriv (e04us) (unless they define invalid values) and so remain in effect for subsequent calls to nag_opt_lsq_gencon_deriv (e04us), unless altered by you.
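For example (a sketch only; the option values shown are illustrative), several optional parameters can be set with successive calls to nag_opt_lsq_gencon_deriv_option_string (e04ur) after the initialization call and before the solve:

% Sketch only: one e04ur call per optional parameter, after e04wb and before e04us.
[cwsav, lwsav, iwsav, rwsav, ifail] = e04wb('e04us');
[lwsav, iwsav, rwsav, inform] = e04ur('Major Iteration Limit = 100', lwsav, iwsav, rwsav);
[lwsav, iwsav, rwsav, inform] = e04ur('Optimality Tolerance = 1.0e-8', lwsav, iwsav, rwsav);
[lwsav, iwsav, rwsav, inform] = e04ur('Major Print Level = 10', lwsav, iwsav, rwsav);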

Description of the Optional Parameters

For each option, we give a summary line, a description of the optional parameter and details of constraints.
The summary line contains the keyword and its default value.
Keywords and character values are case and white space insensitive.
Further details of other quantities not explicitly defined in this section may be found by consulting the document for nag_opt_nlp1_rcomm (e04uf).
Central Difference Interval  r
Default values are computed
If the algorithm switches to central differences because the forward-difference approximation is not sufficiently accurate, the value of r is used as the difference interval for every element of x. The switch to central differences is indicated by C at the end of each line of intermediate printout produced by the major iterations (see Description of Printed output). The use of finite differences is discussed further under the optional parameter Difference Interval.
If you supply a value for this optional parameter, a small value between 0.0 and 1.0 is appropriate.
Cold Start  
Default
Warm Start  
This option controls the specification of the initial working set in both the procedure for finding a feasible point for the linear constraints and bounds, and in the first QP subproblem thereafter. With a Cold Start, the first working set is chosen by nag_opt_lsq_gencon_deriv (e04us) based on the values of the variables and constraints at the initial point. Broadly speaking, the initial working set will include equality constraints and bounds or inequality constraints that violate or ‘nearly’ satisfy their bounds (to within Crash Tolerance).
With a Warm Start, you must set the istate array and define clamda and r as discussed in Arguments. istate values associated with bounds and linear constraints determine the initial working set of the procedure to find a feasible point with respect to the bounds and linear constraints. istate values associated with nonlinear constraints determine the initial working set of the first QP subproblem after such a feasible point has been found. nag_opt_lsq_gencon_deriv (e04us) will override your specification of istate if necessary, so that a poor choice of the working set will not cause a fatal error. For instance, any elements of istate which are set to -2, -1​ or ​4 will be reset to zero, as will any elements which are set to 3 when the corresponding elements of bl and bu are not equal. A Warm Start will be advantageous if a good estimate of the initial working set is available – for example, when nag_opt_lsq_gencon_deriv (e04us) is called repeatedly to solve related problems.
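A hedged sketch of a warm-started re-solve is shown below; istate, clamda, r and x are assumed to be held from an earlier call to nag_opt_lsq_gencon_deriv (e04us) (made, ideally, with Hessian=YES so that r holds the untransformed factor), and only the data defining the related problem have changed:

% Sketch only: re-solve a related problem starting from a previous solution.
% istate, clamda, r and x are reused unchanged from an earlier call to e04us.
[lwsav, iwsav, rwsav, inform] = e04ur('Warm Start', lwsav, iwsav, rwsav);
[iter, istate, c, cjac, f, fjac, clamda, objf, r, x, user, ...
 lwsav, iwsav, rwsav, ifail] = ...
    e04us(a, bl, bu, y, @confun, @objfun, istate, cjac, ...
          fjac, clamda, r, x, lwsav, iwsav, rwsav);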
Crash Tolerance  r
Default =0.01
This value is used in conjunction with the optional parameter Cold Start (the default value) when nag_opt_lsq_gencon_deriv (e04us) selects an initial working set. If 0≤r≤1, the initial working set will include (if possible) bounds or general inequality constraints that lie within r of their bounds. In particular, a constraint of the form a_j^T x ≥ l will be included in the initial working set if |a_j^T x − l| ≤ r(1+|l|). If r<0 or r>1, the default value is used.
Defaults  
This special keyword may be used to reset all optional parameters to their default values.
Derivative Level  i
Default =3
This parameter indicates which derivatives are provided in user-supplied functions objfun and confun. The possible choices for i are the following.
i Meaning
3 All elements of the objective Jacobian and the constraint Jacobian are provided by you.
2 All elements of the constraint Jacobian are provided, but some elements of the objective Jacobian are not specified by you.
1 All elements of the objective Jacobian are provided, but some elements of the constraint Jacobian are not specified by you.
0 Some elements of both the objective Jacobian and the constraint Jacobian are not specified by you.
The value i=3 should be used whenever possible, since nag_opt_lsq_gencon_deriv (e04us) is more reliable (and will usually be more efficient) when all derivatives are exact.
If i=0​ or ​2, nag_opt_lsq_gencon_deriv (e04us) will approximate unspecified elements of the objective Jacobian, using finite differences. The computation of finite difference approximations usually increases the total run-time, since a call to objfun is required for each unspecified element. Furthermore, less accuracy can be attained in the solution (see Chapter 8 of Gill et al. (1981), for a discussion of limiting accuracy).
If i=0​ or ​1, nag_opt_lsq_gencon_deriv (e04us) will approximate unspecified elements of the constraint Jacobian. One call to confun is needed for each variable for which partial derivatives are not available. For example, if the constraint Jacobian has the form
\[
\begin{pmatrix} * & * & * & * \\ * & ? & ? & * \\ * & * & ? & * \\ * & * & * & * \end{pmatrix}
\]
where ‘*’ indicates an element provided by you and ‘?’ indicates an unspecified element, nag_opt_lsq_gencon_deriv (e04us) will call confun twice: once to estimate the missing element in column 2, and again to estimate the two missing elements in column 3. (Since columns 1 and 4 are known, they require no calls to confun.)
At times, central differences are used rather than forward differences, in which case twice as many calls to objfun and confun are needed. (The switch to central differences is not under your control.)
If i<0 or i>3, the default value is used.
Difference Interval  r
Default values are computed
This option defines an interval used to estimate derivatives by finite differences in the following circumstances:
(a) For verifying the objective and/or constraint gradients (see the description of the optional parameter Verify).
(b) For estimating unspecified elements of the objective and/or constraint Jacobian matrix.
In general, a derivative with respect to the jth variable is approximated using the interval δ_j, where δ_j = r(1+|x̂_j|), with x̂ the first point feasible with respect to the bounds and linear constraints. If the functions are well scaled, the resulting derivative approximation should be accurate to O(r). See Gill et al. (1981) for a discussion of the accuracy in finite difference approximations.
If a difference interval is not specified, a finite difference interval will be computed automatically for each variable by a procedure that requires up to six calls of confun and objfun for each element. This option is recommended if the function is badly scaled or you wish to have nag_opt_lsq_gencon_deriv (e04us) determine constant elements in the objective and constraint gradients (see the descriptions of confun and objfun in Arguments).
If you supply a value for this optional parameter, a small value between 0.0 and 1.0 is appropriate.
Feasibility Tolerance  r
Default =√ε
The scalar r defines the maximum acceptable absolute violations in linear and nonlinear constraints at a ‘feasible’ point; i.e., a constraint is considered satisfied if its violation does not exceed r. If r<ε or r≥1, the default value is used. Using this keyword sets both optional parameters Linear Feasibility Tolerance and Nonlinear Feasibility Tolerance to r, if ε≤r<1. (Additional details are given under the descriptions of these optional parameters.)
Function Precision  r
Default =ε^0.9
This parameter defines ε_r, which is intended to be a measure of the accuracy with which the problem functions F(x) and c(x) can be computed. If r<ε or r≥1, the default value is used.
The value of ε_r should reflect the relative precision of 1+|F(x)|; i.e., ε_r acts as a relative precision when |F| is large and as an absolute precision when |F| is small. For example, if F(x) is typically of order 1000 and the first six significant digits are known to be correct, an appropriate value for ε_r would be 10^-6. In contrast, if F(x) is typically of order 10^-4 and the first six significant digits are known to be correct, an appropriate value for ε_r would be 10^-10. The choice of ε_r can be quite complicated for badly scaled problems; see Chapter 8 of Gill et al. (1981) for a discussion of scaling techniques. The default value is appropriate for most simple functions that are computed with full accuracy. However, when the accuracy of the computed function values is known to be significantly worse than full precision, the value of ε_r should be large enough so that nag_opt_lsq_gencon_deriv (e04us) will not attempt to distinguish between function values that differ by less than the error inherent in the calculation.
Hessian  No
Default =NO
This option controls the contents of the upper triangular matrix R (see Arguments). nag_opt_lsq_gencon_deriv (e04us) works exclusively with the transformed and reordered Hessian HQ, and hence extra computation is required to form the Hessian itself. If Hessian=NO, r contains the Cholesky factor of the transformed and reordered Hessian. If Hessian=YES, the Cholesky factor of the approximate Hessian itself is formed and stored in r. You should select Hessian=YES if a Warm Start will be used for the next call to nag_opt_lsq_gencon_deriv (e04us).
Infinite Bound Size  r
Default =10^20
If r>0, r defines the ‘infinite’ bound bigbnd in the definition of the problem constraints. Any upper bound greater than or equal to bigbnd will be regarded as +∞ (and similarly any lower bound less than or equal to -bigbnd will be regarded as -∞). If r<0, the default value is used.
Infinite Step Size  r
Default =max(bigbnd, 10^20)
If r>0, r specifies the magnitude of the change in variables that is treated as a step to an unbounded solution. If the change in x during an iteration would exceed the value of r, the objective function is considered to be unbounded below in the feasible region. If r≤0, the default value is used.
JTJ Initial Hessian  
Default
Unit Initial Hessian  
This option controls the initial value of the upper triangular matrix R. If J denotes the objective Jacobian matrix ∇f(x), then J^T J is often a good approximation to the objective Hessian matrix ∇^2 F(x) (see also optional parameter Reset Frequency).
Line Search Tolerance  r
Default =0.9
The value r (0≤r<1) controls the accuracy with which the step α taken during each iteration approximates a minimum of the merit function along the search direction (the smaller the value of r, the more accurate the linesearch). The default value r=0.9 requests an inaccurate search and is appropriate for most problems, particularly those with any nonlinear constraints.
If there are no nonlinear constraints, a more accurate search may be appropriate when it is desirable to reduce the number of major iterations – for example, if the objective function is cheap to evaluate, or if a substantial number of derivatives are unspecified. If r<0 or r≥1, the default value is used.
Linear Feasibility Tolerance  r1
Default =√ε
Nonlinear Feasibility Tolerance  r2
Default =ε^0.33 or √ε
The default value of r2 is ε^0.33 if Derivative Level=0 or 1, and √ε otherwise.
The scalars r1 and r2 define the maximum acceptable absolute violations in linear and nonlinear constraints at a ‘feasible’ point; i.e., a linear constraint is considered satisfied if its violation does not exceed r1, and similarly for a nonlinear constraint and r2. If rm<ε or rm≥1, the default value is used, for m=1,2.
On entry to nag_opt_lsq_gencon_deriv (e04us), an iterative procedure is executed in order to find a point that satisfies the linear constraints and bounds on the variables to within the tolerance r1. All subsequent iterates will satisfy the linear constraints to within the same tolerance (unless r1 is comparable to the finite difference interval).
For nonlinear constraints, the feasibility tolerance r2 defines the largest constraint violation that is acceptable at an optimal point. Since nonlinear constraints are generally not satisfied until the final iterate, the value of optional parameter Nonlinear Feasibility Tolerance acts as a partial termination criterion for the iterative sequence generated by nag_opt_lsq_gencon_deriv (e04us) (see also optional parameter Optimality Tolerance).
These tolerances should reflect the precision of the corresponding constraints. For example, if the variables and the coefficients in the linear constraints are of order unity, and the latter are correct to about 6 decimal digits, it would be appropriate to specify r1 as 10^−6.
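As a sketch (same hypothetical option-setting pattern as above), constraints of differing precision might be handled by setting the two tolerances separately:
% Sketch: linear constraint data correct to about 6 digits, nonlinear
% constraint functions computed to only about 4 digits.
[lwsav, iwsav, rwsav, inform] = e04ur('Linear Feasibility Tolerance = 1.0e-6', lwsav, iwsav, rwsav);
[lwsav, iwsav, rwsav, inform] = e04ur('Nonlinear Feasibility Tolerance = 1.0e-4', lwsav, iwsav, rwsav);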
List  
Nolist  
Default for nag_opt_lsq_gencon_deriv (e04us) =Nolist
Normally each optional parameter specification is printed as it is supplied. Optional parameter Nolist may be used to suppress the printing and optional parameter List may be used to restore printing.
Major Iteration Limit  i
Default =max(50, 3(n+nL)+10nN)
Iteration Limit  
Iters  
Itns  
The value of i specifies the maximum number of major iterations allowed before termination. Setting i=0 and Major Print Level>0 means that the workspace needed will be computed and printed, but no iterations will be performed. If i<0, the default value is used.
Major Print Level  i
Print Level  
Default for nag_opt_lsq_gencon_deriv (e04us) =0
The value of i controls the amount of printout produced by the major iterations of nag_opt_lsq_gencon_deriv (e04us), as indicated below. A detailed description of the printed output is given in Description of Printed output (summary output at each major iteration and the final solution) and Description of Monitoring Information (monitoring information at each major iteration). (See also the description of the optional parameter Minor Print Level.)
The following printout is sent to the current advisory message unit (as defined by nag_file_set_unit_advisory (x04ab)):
i Output
0 No output.
1 The final solution only.
5 One line of summary output (<80 characters; see Description of Printed output) for each major iteration (no printout of the final solution).
≥10 The final solution and one line of summary output for each major iteration.
The following printout is sent to the logical unit number defined by the optional parameter Monitoring File:
i Output
<5 No output.
≥5 One long line of output (>80 characters; see Description of Monitoring Information) for each major iteration (no printout of the final solution).
≥20 At each major iteration, the objective function, the Euclidean norm of the nonlinear constraint violations, the values of the nonlinear constraints (the vector c), the values of the linear constraints (the vector ALx), and the current values of the variables (the vector x).
≥30 At each major iteration, the diagonal elements of the matrix T associated with the TQ factorization (see (5) in nag_opt_nlp1_rcomm (e04uf)) of the QP working set, and the diagonal elements of R, the triangular factor of the transformed and reordered Hessian (see (6) in nag_opt_nlp1_rcomm (e04uf)).
If Major Print Level≥5 and the unit number defined by the optional parameter Monitoring File is the same as that defined by nag_file_set_unit_advisory (x04ab), then the summary output for each major iteration is suppressed.
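For instance, summary output plus full monitoring information might be requested as in the sketch below (same hypothetical option-setting pattern; unit 42 is an arbitrary example and must already be associated with a file by other means):
% Sketch: one summary line per major iteration on the advisory unit,
% detailed monitoring information on logical unit 42.
[lwsav, iwsav, rwsav, inform] = e04ur('Major Print Level = 5', lwsav, iwsav, rwsav);
[lwsav, iwsav, rwsav, inform] = e04ur('Monitoring File = 42', lwsav, iwsav, rwsav);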
Minor Iteration Limit  i
Default =max(50, 3(n+nL+nN))
The value of i specifies the maximum number of iterations for finding a feasible point with respect to the bounds and linear constraints (if any). The value of i also specifies the maximum number of minor iterations for the optimality phase of each QP subproblem. If i≤0, the default value is used.
Minor Print Level  i
Default =0
The value of i controls the amount of printout produced by the minor iterations of nag_opt_lsq_gencon_deriv (e04us) (i.e., the iterations of the quadratic programming algorithm), as indicated below. A detailed description of the printed output is given in Description of Printed output (summary output at each minor iteration and the final QP solution) and Description of Monitoring Information (monitoring information at each minor iteration). (See also the description of the optional parameter Major Print Level.)
The following printout is sent to the current advisory message unit (as defined by nag_file_set_unit_advisory (x04ab)):
i Output
0 No output.
1 The final QP solution only.
5 One line of summary output (<80 characters; see Description of Printed output) for each minor iteration (no printout of the final QP solution).
≥10 The final QP solution and one line of summary output for each minor iteration.
The following printout is sent to the logical unit number defined by the optional parameter Monitoring File:
i Output
<5 No output.
≥5 One long line of output (>80 characters; see Description of Monitoring Information) for each minor iteration (no printout of the final QP solution).
≥20 At each minor iteration, the current estimates of the QP multipliers, the current estimate of the QP search direction, the QP constraint values, and the status of each QP constraint.
≥30 At each minor iteration, the diagonal elements of the matrix T associated with the TQ factorization (see (5) in nag_opt_nlp1_rcomm (e04uf)) of the QP working set, and the diagonal elements of the Cholesky factor R of the transformed Hessian (see (6) in nag_opt_nlp1_rcomm (e04uf)).
If Minor Print Level≥5 and the unit number defined by the optional parameter Monitoring File is the same as that defined by nag_file_set_unit_advisory (x04ab), then the summary output for each minor iteration is suppressed.
Monitoring File  i
Default =-1
If i≥0 and Major Print Level≥5, or i≥0 and Minor Print Level≥5, monitoring information produced by nag_opt_lsq_gencon_deriv (e04us) at every iteration is sent to a file with logical unit number i. If i<0 and/or Major Print Level<5 and Minor Print Level<5, no monitoring information is produced.
Optimality Tolerance  r
Default =εr^0.8
The parameter r (εr≤r<1) specifies the accuracy to which you wish the final iterate to approximate a solution of the problem. Broadly speaking, r indicates the number of correct figures desired in the objective function at the solution. For example, if r is 10^−6 and nag_opt_lsq_gencon_deriv (e04us) terminates successfully, the final value of F should have approximately six correct figures. If r<εr or r≥1, the default value is used.
nag_opt_lsq_gencon_deriv (e04us) will terminate successfully if the iterative sequence of x values is judged to have converged and the final point satisfies the first-order Kuhn–Tucker conditions (see Overview in nag_opt_nlp1_rcomm (e04uf)). The sequence of iterates is considered to have converged at x if
α‖p‖ ≤ √r (1+‖x‖),   (2)
where p is the search direction and α the step length. An iterate is considered to satisfy the first-order conditions for a minimum if
‖Z^T gFR‖ ≤ √r (1 + max(1+|F(x)|, ‖gFR‖))   (3)
and
resj ≤ ftol  for all  j,   (4)
where Z^T gFR is the projected gradient, gFR is the gradient of F(x) with respect to the free variables, resj is the violation of the jth active nonlinear constraint, and ftol is the Nonlinear Feasibility Tolerance.
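For example, if roughly four correct figures in the final objective value are sufficient, the tolerance could be relaxed as in the sketch below (same hypothetical option-setting pattern):
% Sketch: request about four correct figures in F(x) at the solution.
[lwsav, iwsav, rwsav, inform] = e04ur('Optimality Tolerance = 1.0e-4', lwsav, iwsav, rwsav);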
Reset Frequency  i
Default =2
If i>0, this parameter allows you to reset the approximate Hessian matrix to J^T J every i iterations, where J is the objective Jacobian matrix ∇f(x) (see also the description of the optional parameter JTJ Initial Hessian).
At any point where there are no nonlinear constraints active and the values of f are small in magnitude compared to the norm of J, J^T J will be a good approximation to the objective Hessian ∇²F(x). Under these circumstances, frequent resetting can significantly improve the convergence rate of nag_opt_lsq_gencon_deriv (e04us).
Resetting is suppressed at any iteration during which there are nonlinear constraints active.
If i≤0, the default value is used.
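As a speculative sketch (same hypothetical option-setting pattern), on a large-residual problem where J^T J is a poor approximation to the Hessian away from the solution, one might start from a unit approximation and reset less frequently:
% Sketch: unit initial Hessian approximation, reset to J'J only every
% 10 major iterations instead of the default of every 2.
[lwsav, iwsav, rwsav, inform] = e04ur('Unit Initial Hessian', lwsav, iwsav, rwsav);
[lwsav, iwsav, rwsav, inform] = e04ur('Reset Frequency = 10', lwsav, iwsav, rwsav);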
Start Objective Check At Variable  i1
Default =1
Stop Objective Check At Variable  i2
Default =n
Start Constraint Check At Variable  i3
Default =1
Stop Constraint Check At Variable  i4
Default =n
These keywords take effect only if Verify Level>0. They may be used to control the verification of Jacobian elements computed by user-supplied functions objfun and confun. For example, if the first 30 columns of the objective Jacobian appeared to be correct in an earlier run, so that only column 31 remains questionable, it is reasonable to specify Start Objective Check At Variable=31. If the first 30 variables appear linearly in the subfunctions, so that the corresponding Jacobian elements are constant, the above choice would also be appropriate.
If i2m−1≤0 or i2m−1>min(n,i2m), the default value is used, for m=1,2. If i2m≤0 or i2m>n, the default value is used, for m=1,2.
Step Limit  r
Default =2.0
If r>0, r specifies the maximum change in variables at the first step of the linesearch. In some cases, such as F(x)=a e^(bx) or F(x)=a x^b, even a moderate change in the elements of x can lead to floating-point overflow. The parameter r is therefore used to encourage evaluation of the problem functions at meaningful points. Given any major iterate x, the first point x̃ at which F and c are evaluated during the linesearch is restricted so that
‖x̃−x‖₂ ≤ r (1+‖x‖₂).
The linesearch may go on and evaluate F and c at points further from x if this will result in a lower value of the merit function (indicated by L at the end of each line of output produced by the major iterations; see Description of Printed output). If L is printed for most of the iterations, r should be set to a larger value.
Wherever possible, upper and lower bounds on x should be used to prevent evaluation of nonlinear functions at wild values. The default value Step Limit=2.0 should not affect progress on well-behaved functions, but values such as 0.1 or 0.01 may be helpful when rapidly varying functions are present. If a small value of Step Limit is selected, a good starting point may be required. An important application is to the class of nonlinear least squares problems. If r≤0, the default value is used.
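As an illustration (same hypothetical option-setting pattern), a model containing terms such as a e^(bx) might be protected against overflow in the linesearch by tightening the step limit:
% Sketch: restrict the first trial step of each linesearch for a rapidly
% varying (e.g., exponential) model.
[lwsav, iwsav, rwsav, inform] = e04ur('Step Limit = 0.1', lwsav, iwsav, rwsav);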
Verify Level  i
Default =0
Verify  
Verify Constraint Gradients  
Verify Gradients  
Verify Objective Gradients  
These keywords refer to finite difference checks on the gradient elements computed by objfun and confun. (Unspecified gradient elements are not checked.) The possible choices for i are the following:
i Meaning
-1 No checks are performed.
0 Only a ‘cheap’ test will be performed, requiring one call to objfun.
≥1 Individual gradient elements will also be checked using a reliable (but more expensive) test.
For example, the nonlinear objective gradient (if any) will be verified if either Verify Objective Gradients or Verify Level=1 is specified. Similarly, the objective and the constraint gradients will be verified if Verify=YES or Verify Level=3 or Verify is specified.
If i=-1, no checking will be performed.
If 0≤i≤3, gradients will be verified at the first point that satisfies the linear constraints and bounds. If i=0, only a ‘cheap’ test will be performed, requiring one call to objfun and (if appropriate) one call to confun. If 1≤i≤3, a more reliable (but more expensive) check will be made on individual gradient elements, within the ranges specified by the Start Objective Check At Variable and Stop Objective Check At Variable keywords. A result of the form OK or BAD? is printed by nag_opt_lsq_gencon_deriv (e04us) to indicate whether or not each element appears to be correct.
If 10≤i≤13, the action is the same as for i−10, except that it will take place at the user-specified initial value of x.
If i<−1 or 4≤i≤9 or i>13, the default value is used.
We suggest that Verify Level=3 be used whenever a new function is being developed.
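During development of new objfun and confun functions, the checks might be enabled as in the sketch below (same hypothetical option-setting pattern), here restricting the objective check to column 31 as in the example above:
% Sketch: full derivative checks at the first feasible point, with the
% objective Jacobian check limited to variable (column) 31.
[lwsav, iwsav, rwsav, inform] = e04ur('Verify Level = 3', lwsav, iwsav, rwsav);
[lwsav, iwsav, rwsav, inform] = e04ur('Start Objective Check At Variable = 31', lwsav, iwsav, rwsav);
[lwsav, iwsav, rwsav, inform] = e04ur('Stop Objective Check At Variable = 31', lwsav, iwsav, rwsav);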

Description of Monitoring Information

This section describes the long line of output (>80 characters) which forms part of the monitoring information produced by nag_opt_lsq_gencon_deriv (e04us). (See also the description of the optional parameters Major Print Level, Minor Print Level and Monitoring File.) You can control the level of printed output.
When Major Print Level≥5 and Monitoring File≥0, the following line of output is produced at every major iteration of nag_opt_lsq_gencon_deriv (e04us) on the unit number specified by optional parameter Monitoring File. In all cases, the values of the quantities printed are those in effect on completion of the given iteration.
Maj is the major iteration count.
Mnr is the number of minor iterations required by the feasibility and optimality phases of the QP subproblem. Generally, Mnr will be 1 in the later iterations, since theoretical analysis predicts that the correct active set will be identified near the solution (see Algorithmic Details in nag_opt_nlp1_rcomm (e04uf)).
Note that Mnr may be greater than the optional parameter Minor Iteration Limit if some iterations are required for the feasibility phase.
Step is the step αk taken along the computed search direction. On reasonably well-behaved problems, the unit step (i.e., αk=1) will be taken as the solution is approached.
Nfun is the cumulative number of evaluations of the objective function needed for the linesearch. Evaluations needed for the estimation of the gradients by finite differences are not included. Nfun is printed as a guide to the amount of work required for the linesearch.
Merit Function is the value of the augmented Lagrangian merit function (see (12) in nag_opt_nlp1_rcomm (e04uf)) at the current iterate. This function will decrease at each iteration unless it was necessary to increase the penalty parameters (see The Merit Function in nag_opt_nlp1_rcomm (e04uf)). As the solution is approached, Merit Function will converge to the value of the objective function at the solution.
If the QP subproblem does not have a feasible point (signified by I at the end of the current output line) then the merit function is a large multiple of the constraint violations, weighted by the penalty parameters. During a sequence of major iterations with infeasible subproblems, the sequence of Merit Function values will decrease monotonically until either a feasible subproblem is obtained or nag_opt_lsq_gencon_deriv (e04us) terminates with ifail=3 (no feasible point could be found for the nonlinear constraints).
If there are no nonlinear constraints present (i.e., ncnln=0) then this entry contains Objective, the value of the objective function F(x). The objective function will decrease monotonically to its optimal value when there are no nonlinear constraints.
Norm Gz is ‖Z^T gFR‖, the Euclidean norm of the projected gradient (see Solution of the Quadratic Programming Subproblem in nag_opt_nlp1_rcomm (e04uf)). Norm Gz will be approximately zero in the neighbourhood of a solution.
Violtn is the Euclidean norm of the residuals of constraints that are violated or in the predicted active set (not printed if ncnln is zero). Violtn will be approximately zero in the neighbourhood of a solution.
Nz is the number of columns of Z (see Solution of the Quadratic Programming Subproblem in nag_opt_nlp1_rcomm (e04uf)). The value of Nz is the number of variables minus the number of constraints in the predicted active set; i.e., Nz = n − (Bnd+Lin+Nln).
Bnd is the number of simple bound constraints in the current working set.
Lin is the number of general linear constraints in the current working set.
Nln is the number of nonlinear constraints in the predicted active set (not printed if ncnln is zero).
Penalty is the Euclidean norm of the vector of penalty parameters used in the augmented Lagrangian merit function (not printed if ncnln is zero).
Cond H is a lower bound on the condition number of the Hessian approximation H.
Cond Hz is a lower bound on the condition number of the projected Hessian approximation HZ (HZ = Z^T HFR Z = RZ^T RZ; see (6) and (11) in nag_opt_nlp1_rcomm (e04uf)). The larger this number, the more difficult the problem.
Cond T is a lower bound on the condition number of the matrix of predicted active constraints.
Conv is a three-letter indication of the status of the three convergence tests (2)–(4) defined in the description of the optional parameter Optimality Tolerance. Each letter is T if the test is satisfied and F otherwise. The three tests indicate whether:
(i) the sequence of iterates has converged;
(ii) the projected gradient (Norm Gz) is sufficiently small; and
(iii) the norm of the residuals of constraints in the predicted active set (Violtn) is small enough.
If any of these indicators is F when nag_opt_lsq_gencon_deriv (e04us) terminates with ifail=0, you should check the solution carefully.
M is printed if the quasi-Newton update has been modified to ensure that the Hessian approximation is positive definite (see The Quasi-Newton Update in nag_opt_nlp1_rcomm (e04uf)).
I is printed if the QP subproblem has no feasible point.
C is printed if central differences have been used to compute the unspecified objective and constraint gradients. If the value of Step is zero then the switch to central differences was made because no lower point could be found in the linesearch. (In this case, the QP subproblem is resolved with the central difference gradient and Jacobian.) If the value of Step is nonzero then central differences were computed because Norm Gz and Violtn imply that x is close to a Kuhn–Tucker point (see Overview in nag_opt_nlp1_rcomm (e04uf)).
L is printed if the linesearch has produced a relative change in x greater than the value defined by the optional parameter Step Limit. If this output occurs frequently during later iterations of the run, optional parameter Step Limit should be set to a larger value.
R is printed if the approximate Hessian has been refactorized. If the diagonal condition estimator of R indicates that the approximate Hessian is badly conditioned then the approximate Hessian is refactorized using column interchanges. If necessary, R is modified so that its diagonal condition estimator is bounded.


© The Numerical Algorithms Group Ltd, Oxford, UK. 2009–2015