e04nf solves general quadratic programming problems. It is not intended for large sparse problems.
Syntax
C# 

public static void e04nf( int n, int nclin, double[,] a, double[] bl, double[] bu, double[] cvec, double[,] h, E04..::..E04NF_QPHESS qphess, int[] istate, double[] x, out int iter, out double obj, double[] ax, double[] clamda, E04..::..e04nfOptions options, out int ifail ) 
Visual Basic 

Public Shared Sub e04nf ( _ n As Integer, _ nclin As Integer, _ a As Double(,), _ bl As Double(), _ bu As Double(), _ cvec As Double(), _ h As Double(,), _ qphess As E04..::..E04NF_QPHESS, _ istate As Integer(), _ x As Double(), _ <OutAttribute> ByRef iter As Integer, _ <OutAttribute> ByRef obj As Double, _ ax As Double(), _ clamda As Double(), _ options As E04..::..e04nfOptions, _ <OutAttribute> ByRef ifail As Integer _ ) 
Visual C++ 

public: static void e04nf( int n, int nclin, array<double,2>^ a, array<double>^ bl, array<double>^ bu, array<double>^ cvec, array<double,2>^ h, E04..::..E04NF_QPHESS^ qphess, array<int>^ istate, array<double>^ x, [OutAttribute] int% iter, [OutAttribute] double% obj, array<double>^ ax, array<double>^ clamda, E04..::..e04nfOptions^ options, [OutAttribute] int% ifail ) 
F# 

static member e04nf : n : int * nclin : int * a : float[,] * bl : float[] * bu : float[] * cvec : float[] * h : float[,] * qphess : E04..::..E04NF_QPHESS * istate : int[] * x : float[] * iter : int byref * obj : float byref * ax : float[] * clamda : float[] * options : E04..::..e04nfOptions * ifail : int byref -> unit 
Parameters
 n
 Type: System..::..Int32
On entry: $n$, the number of variables.
Constraint: ${\mathbf{n}}>0$.
 nclin
 Type: System..::..Int32
On entry: ${m}_{L}$, the number of general linear constraints.
Constraint: ${\mathbf{nclin}}\ge 0$.
 a
 Type: array<System..::..Double,2>[,]
An array of size [dim1, dim2]
Note: dim1 must satisfy the constraint: $\mathrm{dim1}\ge \mathrm{max}\phantom{\rule{0.125em}{0ex}}\left(1,{\mathbf{nclin}}\right)$.
Note: the second dimension of the array a must be at least ${\mathbf{n}}$ if ${\mathbf{nclin}}>0$ and at least $1$ if ${\mathbf{nclin}}=0$.
 bl
 Type: array<System..::..Double>[]
An array of size [${\mathbf{n}}+{\mathbf{nclin}}$]
On entry: bl must contain the lower bounds and bu the upper bounds, for all the constraints, in the following order. The first $n$ elements of each array must contain the bounds on the variables, and the next ${m}_{L}$ elements the bounds for the general linear constraints (if any). To specify a nonexistent lower bound (i.e., ${l}_{j}=-\infty $), set ${\mathbf{bl}}\left[j-1\right]\le -\mathit{bigbnd}$, and to specify a nonexistent upper bound (i.e., ${u}_{j}=+\infty $), set ${\mathbf{bu}}\left[j-1\right]\ge \mathit{bigbnd}$; the default value of $\mathit{bigbnd}$ is ${10}^{20}$, but this may be changed by the optional parameter Infinite Bound Size. To specify the $j$th constraint as an equality, set ${\mathbf{bl}}\left[j-1\right]={\mathbf{bu}}\left[j-1\right]=\beta $, say, where $\left|\beta \right|<\mathit{bigbnd}$.
Constraints:
 ${\mathbf{bl}}\left[\mathit{j}-1\right]\le {\mathbf{bu}}\left[\mathit{j}-1\right]$, for $\mathit{j}=1,2,\dots ,{\mathbf{n}}+{\mathbf{nclin}}$;
 if ${\mathbf{bl}}\left[j-1\right]={\mathbf{bu}}\left[j-1\right]=\beta $, $\left|\beta \right|<\mathit{bigbnd}$.
 bu
 Type: array<System..::..Double>[]
An array of size [${\mathbf{n}}+{\mathbf{nclin}}$]
On entry: bl must contain the lower bounds and bu the upper bounds, for all the constraints, in the following order. The first $n$ elements of each array must contain the bounds on the variables, and the next ${m}_{L}$ elements the bounds for the general linear constraints (if any). To specify a nonexistent lower bound (i.e., ${l}_{j}=-\infty $), set ${\mathbf{bl}}\left[j-1\right]\le -\mathit{bigbnd}$, and to specify a nonexistent upper bound (i.e., ${u}_{j}=+\infty $), set ${\mathbf{bu}}\left[j-1\right]\ge \mathit{bigbnd}$; the default value of $\mathit{bigbnd}$ is ${10}^{20}$, but this may be changed by the optional parameter Infinite Bound Size. To specify the $j$th constraint as an equality, set ${\mathbf{bl}}\left[j-1\right]={\mathbf{bu}}\left[j-1\right]=\beta $, say, where $\left|\beta \right|<\mathit{bigbnd}$.
Constraints:
 ${\mathbf{bl}}\left[\mathit{j}-1\right]\le {\mathbf{bu}}\left[\mathit{j}-1\right]$, for $\mathit{j}=1,2,\dots ,{\mathbf{n}}+{\mathbf{nclin}}$;
 if ${\mathbf{bl}}\left[j-1\right]={\mathbf{bu}}\left[j-1\right]=\beta $, $\left|\beta \right|<\mathit{bigbnd}$.
 cvec
 Type: array<System..::..Double>[]
An array of size [dim1]
Note: the dimension of the array cvec must be at least ${\mathbf{n}}$ if the problem is of type LP, QP2 (the default) or QP4, and at least $1$ otherwise.
On entry: the coefficients of the explicit linear term of the objective function when the problem is of type LP, QP2 (the default) or QP4.
If the problem is of type FP, QP1 or QP3, cvec is not referenced.
 h
 Type: array<System..::..Double,2>[,]
An array of size [dim1, dim2]
Note: dim1 must satisfy the constraint:
 if the problem is of type QP1, QP2 (the default), QP3 or QP4, $\mathrm{dim1}\ge {\mathbf{n}}$ or at least the value of the optional parameter Hessian Rows;
 if the problem is of type FP or LP, $\mathrm{dim1}\ge 1$.
Note: the second dimension of the array h must be at least ${\mathbf{n}}$ if it is to be used to store $H$ explicitly, and at least $1$ otherwise.
On entry: may be used to store the quadratic term $H$ of the QP objective function if desired. In some cases, you need not use h to store $H$ explicitly (see the specification of method qphess). The elements of h are referenced only by method qphess. The number of rows of $H$ is denoted by $m$, whose default value is $n$. (The optional parameter Hessian Rows may be used to specify a value of $m<n$.)
If the default version of qphess is used and the problem is of type QP1 or QP2 (the default), the first $m$ rows and columns of h must contain the leading $m$ by $m$ rows and columns of the symmetric Hessian matrix $H$. Only the diagonal and upper triangular elements of the leading $m$ rows and columns of h are referenced. The remaining elements need not be assigned.
If the default version of qphess is used and the problem is of type QP3 or QP4, the first $m$ rows of h must contain an $m$ by $n$ upper trapezoidal factor of the symmetric Hessian matrix ${H}^{\mathrm{T}}H$. The factor need not be of full rank, i.e., some of the diagonal elements may be zero. However, as a general rule, the larger the dimension of the leading nonsingular submatrix of h, the fewer iterations will be required. Elements outside the upper trapezoidal part of the first $m$ rows of h need not be assigned.
In other situations, it may be desirable to compute $Hx$ or ${H}^{\mathrm{T}}Hx$ without accessing h – for example, if $H$ or ${H}^{\mathrm{T}}H$ is sparse or has special structure. The parameter h may then refer to any convenient array.
If the problem is of type FP or LP, h is not referenced.
 qphess
 Type: NagLibrary..::..E04..::..E04NF_QPHESS
In general, you need not provide a version of qphess, because a ‘default’ method with the name E04NFU/E54NFU is included in the Library. However, the algorithm of e04nf requires only the product of $H$ or ${H}^{\mathrm{T}}H$ and a vector $x$; in some cases you may obtain increased efficiency by providing a version of qphess that avoids the need to define the elements of the matrices $H$ or ${H}^{\mathrm{T}}H$ explicitly.
qphess is not referenced if the problem is of type FP or LP, in which case qphess may be the method E04NFU/E54NFU.
A delegate of type E04NF_QPHESS.
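As an illustration of why a user-supplied qphess can pay off, the product $Hx$ can often be formed without storing $H$ at all. The sketch below (not part of the Library interface; the actual E04NF_QPHESS delegate signature is defined by the Library) forms $Hx$ for a hypothetical tridiagonal $H$ with $2$ on the diagonal and $-1$ on the sub- and superdiagonals, in $O(n)$ work:

```cpp
#include <vector>

// Illustrative sketch only: compute Hx for a tridiagonal H
// (2 on the diagonal, -1 on the sub/superdiagonals) without
// storing the matrix. In e04nf this computation would live
// inside a user-supplied qphess delegate.
std::vector<double> hessProduct(const std::vector<double>& x) {
    int n = static_cast<int>(x.size());
    std::vector<double> hx(n);
    for (int i = 0; i < n; ++i) {
        hx[i] = 2.0 * x[i];
        if (i > 0)     hx[i] -= x[i - 1];   // subdiagonal term
        if (i + 1 < n) hx[i] -= x[i + 1];   // superdiagonal term
    }
    return hx;
}
```

For large structured Hessians this avoids both the $O(n^2)$ storage of h and the $O(n^2)$ cost of a dense matrix–vector product.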
 istate
 Type: array<System..::..Int32>[]
An array of size [${\mathbf{n}}+{\mathbf{nclin}}$]
On entry: need not be set if the (default) optional parameter Cold Start is used.
If the optional parameter Warm Start has been chosen, istate specifies the desired status of the constraints at the start of the feasibility phase. More precisely, the first $n$ elements of istate refer to the upper and lower bounds on the variables, and the next ${m}_{L}$ elements refer to the general linear constraints (if any). Possible values for ${\mathbf{istate}}\left[j-1\right]$ are as follows:
${\mathbf{istate}}\left[j-1\right]$  Meaning
$0$  The corresponding constraint should not be in the initial working set.
$1$  The constraint should be in the initial working set at its lower bound.
$2$  The constraint should be in the initial working set at its upper bound.
$3$  The constraint should be in the initial working set as an equality. This value must not be specified unless ${\mathbf{bl}}\left[j-1\right]={\mathbf{bu}}\left[j-1\right]$.
The values $-2$, $-1$ and $4$ are also acceptable but will be reset to zero by the method. If e04nf has been called previously with the same values of n and nclin, istate already contains satisfactory information. (See also the description of the optional parameter Warm Start.) The method also adjusts (if necessary) the values supplied in x to be consistent with istate.
Constraint: $-2\le {\mathbf{istate}}\left[\mathit{j}-1\right]\le 4$, for $\mathit{j}=1,2,\dots ,{\mathbf{n}}+{\mathbf{nclin}}$.
On exit: the status of the constraints in the working set at the point returned in x. The significance of each possible value of ${\mathbf{istate}}\left[j-1\right]$ is as follows:
${\mathbf{istate}}\left[j-1\right]$  Meaning
$-2$  The constraint violates its lower bound by more than the feasibility tolerance.
$-1$  The constraint violates its upper bound by more than the feasibility tolerance.
$\phantom{-}0$  The constraint is satisfied to within the feasibility tolerance, but is not in the working set.
$\phantom{-}1$  This inequality constraint is included in the working set at its lower bound.
$\phantom{-}2$  This inequality constraint is included in the working set at its upper bound.
$\phantom{-}3$  This constraint is included in the working set as an equality. This value of istate can occur only when ${\mathbf{bl}}\left[j-1\right]={\mathbf{bu}}\left[j-1\right]$.
$\phantom{-}4$  This corresponds to optimality being declared with ${\mathbf{x}}\left[j-1\right]$ being temporarily fixed at its current value. This value of istate can occur only when ${\mathbf{ifail}}={1}$ on exit.
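Assembling an istate array for a Warm Start follows mechanically from the on-entry conventions above. A minimal sketch (the helper name and argument layout are hypothetical, not part of the Library):

```cpp
#include <vector>

// Hypothetical helper: build an initial istate array for a Warm Start.
// Convention from the e04nf documentation: 0 = not in the working set,
// 1 = in the working set at its lower bound, 2 = at its upper bound,
// 3 = as an equality (only permitted when bl[j] == bu[j]).
std::vector<int> makeWarmStartIstate(const std::vector<double>& bl,
                                     const std::vector<double>& bu,
                                     const std::vector<int>& activeAtLower,
                                     const std::vector<int>& activeAtUpper) {
    std::vector<int> istate(bl.size(), 0);           // default: not in working set
    for (int j : activeAtLower) istate[j] = (bl[j] == bu[j]) ? 3 : 1;
    for (int j : activeAtUpper) istate[j] = (bl[j] == bu[j]) ? 3 : 2;
    return istate;
}
```

Entries for constraints whose status is unknown can safely be left at $0$; the method will reset any inconsistent values.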
 x
 Type: array<System..::..Double>[]
An array of size [n]
On entry: an initial estimate of the solution.
 iter
 Type: System..::..Int32%
On exit: the total number of iterations performed.
 obj
 Type: System..::..Double%
On exit: the value of the objective function at $x$ if $x$ is feasible, or the sum of infeasibilities at $x$ otherwise. If the problem is of type FP and $x$ is feasible, obj is set to zero.
 ax
 Type: array<System..::..Double>[]
An array of size [$\mathrm{max}\phantom{\rule{0.125em}{0ex}}\left(1,{\mathbf{nclin}}\right)$]
On exit: the final values of the linear constraints $Ax$.
If ${\mathbf{nclin}}=0$, ax is not referenced.
 clamda
 Type: array<System..::..Double>[]
An array of size [${\mathbf{n}}+{\mathbf{nclin}}$]
On exit: the values of the Lagrange multipliers for each constraint with respect to the current working set. The first $n$ elements contain the multipliers for the bound constraints on the variables, and the next ${m}_{L}$ elements contain the multipliers for the general linear constraints (if any). If ${\mathbf{istate}}\left[j-1\right]=0$ (i.e., constraint $j$ is not in the working set), ${\mathbf{clamda}}\left[j-1\right]$ is zero. If $x$ is optimal, ${\mathbf{clamda}}\left[j-1\right]$ should be nonnegative if ${\mathbf{istate}}\left[j-1\right]=1$, nonpositive if ${\mathbf{istate}}\left[j-1\right]=2$ and zero if ${\mathbf{istate}}\left[j-1\right]=4$.
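The sign conditions on the multipliers can be checked mechanically after a call. A minimal sketch (helper name hypothetical), assuming istate and clamda follow the conventions just described:

```cpp
#include <vector>

// Sketch: verify the first-order sign conditions on the multipliers:
// clamda[j] >= 0 when istate[j] == 1 (lower bound active),
// clamda[j] <= 0 when istate[j] == 2 (upper bound active),
// clamda[j] == 0 when istate[j] == 0 or 4.
bool multipliersConsistent(const std::vector<int>& istate,
                           const std::vector<double>& clamda) {
    for (std::size_t j = 0; j < istate.size(); ++j) {
        if (istate[j] == 0 && clamda[j] != 0.0) return false;
        if (istate[j] == 1 && clamda[j] < 0.0)  return false;
        if (istate[j] == 2 && clamda[j] > 0.0)  return false;
        if (istate[j] == 4 && clamda[j] != 0.0) return false;
    }
    return true;
}
```

In practice a small tolerance would be used in place of exact comparisons with zero.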
 options
 Type: NagLibrary..::..E04..::..e04nfOptions
An object of type E04.e04nfOptions. Used to configure optional parameters to this method.
 ifail
 Type: System..::..Int32%
On exit: ${\mathbf{ifail}}={0}$ unless the method detects an error or a warning has been flagged (see [Error Indicators and Warnings]).
Description
e04nf is designed to solve a class of quadratic programming problems that are assumed to be stated in the following general form:
$$\underset{x\in {R}^{n}}{\mathrm{minimize}}\phantom{\rule{0.25em}{0ex}}f\left(x\right)\text{\hspace{1em} subject to \hspace{1em}}l\le \left(\begin{array}{c}x\\ Ax\end{array}\right)\le u\text{,}$$  (1)
where $A$ is an ${m}_{L}$ by $n$ matrix and $f\left(x\right)$ may be specified in a variety of ways depending upon the particular problem to be solved. The available forms for $f\left(x\right)$ are listed in Table 1, in which the prefixes FP, LP and QP stand for ‘feasible point’, ‘linear programming’ and ‘quadratic programming’ respectively and $c$ is an $n$-element vector.
Problem type  $f\left(x\right)$  Matrix $H$ 
FP  Not applicable  Not applicable 
LP  ${c}^{\mathrm{T}}x$  Not applicable 
QP1  $\phantom{{c}^{\mathrm{T}}x+}\frac{1}{2}{x}^{\mathrm{T}}Hx$  symmetric 
QP2  ${c}^{\mathrm{T}}x+\frac{1}{2}{x}^{\mathrm{T}}Hx$  symmetric 
QP3  $\phantom{{c}^{\mathrm{T}}x+}\frac{1}{2}{x}^{\mathrm{T}}{H}^{\mathrm{T}}Hx$  $m$ by $n$ upper trapezoidal 
QP4  ${c}^{\mathrm{T}}x+\frac{1}{2}{x}^{\mathrm{T}}{H}^{\mathrm{T}}Hx$  $m$ by $n$ upper trapezoidal 
There is no restriction on $H$ or ${H}^{\mathrm{T}}H$ apart from symmetry. If the quadratic function is convex, a global minimum is found; otherwise, a local minimum is found. The default problem type is QP2 and other objective functions are selected by using the optional parameter Problem Type. For problems of type FP, the objective function is omitted and the method attempts to find a feasible point for the set of constraints.
The constraints involving $A$ are called the general constraints. Note that upper and lower bounds are specified for all the variables and for all the general constraints. An equality constraint can be specified by setting ${l}_{i}={u}_{i}$. If certain bounds are not present, the associated elements of $l$ or $u$ can be set to special values that will be treated as $\infty $ or $+\infty $. (See the description of the optional parameter Infinite Bound Size.)
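The bound layout described above (first the $n$ variable bounds, then the ${m}_{L}$ general-constraint bounds, with $\pm {10}^{20}$ standing in for missing bounds) can be assembled as in this sketch (helper and struct names are illustrative only):

```cpp
#include <vector>

// Default Infinite Bound Size from the e04nf documentation.
const double bigbnd = 1.0e20;

struct Bounds { std::vector<double> bl, bu; };

// Sketch: allocate bl and bu for n variables followed by mL general
// constraints, initialised to "no bound" on both sides. Individual
// entries are then tightened; an equality constraint i is expressed
// by setting bl[n + i] == bu[n + i].
Bounds makeBounds(int n, int mL) {
    Bounds b;
    b.bl.assign(n + mL, -bigbnd);   // default: no lower bound
    b.bu.assign(n + mL,  bigbnd);   // default: no upper bound
    return b;
}
```

For example, after `Bounds b = makeBounds(7, 7);` the equality constraint of the worked example below would be imposed by setting `b.bl[7] = b.bu[7]` to the common value.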
The defining feature of a quadratic function $f\left(x\right)$ is that the second-derivative matrix ${\nabla}^{2}f\left(x\right)$ (the Hessian matrix) is constant. For QP1 and QP2 (the default), ${\nabla}^{2}f\left(x\right)=H$; for QP3 and QP4, ${\nabla}^{2}f\left(x\right)={H}^{\mathrm{T}}H$; and for the LP case, ${\nabla}^{2}f\left(x\right)=0$. If $H$ is positive semidefinite, it is usually more efficient to use e04nc. If $H$ is defined as the zero matrix, e04nf will still attempt to solve the resulting linear programming problem; however, this can be accomplished more efficiently by setting the optional parameter ${\mathbf{Problem\; Type}}='\mathrm{LP}'$, or by using e04mf instead.
You must supply an initial estimate of the solution.
In the QP case, you may supply $H$ either explicitly as an $m$ by $n$ matrix, or implicitly in a method that computes the product $Hx$ or ${H}^{\mathrm{T}}Hx$ for any given vector $x$.
In general, a successful run of e04nf will indicate one of three situations:
(i)  a minimizer has been found; 
(ii)  the algorithm has terminated at a so-called dead-point; or 
(iii)  the problem has no bounded solution. 
If a minimizer is found, and ${\nabla}^{2}f\left(x\right)$ is positive definite or positive semidefinite, e04nf will obtain a global minimizer; otherwise, the solution will be a local minimizer (which may or may not be a global minimizer). A dead-point is a point at which the necessary conditions for optimality are satisfied but the sufficient conditions are not. At such a point, a feasible direction of decrease may or may not exist, so that the point is not necessarily a local solution of the problem. Verification of optimality in such instances requires further information, and is in general an NP-hard problem (see Pardalos and Schnitger (1988)). Termination at a dead-point can occur only if ${\nabla}^{2}f\left(x\right)$ is not positive definite. If ${\nabla}^{2}f\left(x\right)$ is positive semidefinite, the dead-point will be a weak minimizer (i.e., with a unique optimal objective value, but an infinite set of optimal $x$).
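Whether the global-minimizer guarantee applies can be checked in advance by testing $H$ for positive definiteness, e.g., via an unpivoted Cholesky factorization, which succeeds if and only if the matrix is positive definite. A self-contained sketch (not Library code):

```cpp
#include <vector>
#include <cmath>

// Sketch: test whether a dense symmetric matrix is positive definite by
// attempting a Cholesky factorization; a nonpositive pivot means failure.
// (Positive semidefinite-but-singular matrices also fail this strict test.)
bool isPositiveDefinite(std::vector<std::vector<double>> H) {
    int n = static_cast<int>(H.size());
    for (int k = 0; k < n; ++k) {
        double d = H[k][k];
        for (int j = 0; j < k; ++j) d -= H[k][j] * H[k][j];
        if (d <= 0.0) return false;           // nonpositive pivot: not PD
        H[k][k] = std::sqrt(d);
        for (int i = k + 1; i < n; ++i) {
            double s = H[i][k];
            for (int j = 0; j < k; ++j) s -= H[i][j] * H[k][j];
            H[i][k] = s / H[k][k];
        }
    }
    return true;
}
```

A production test would use a scaled pivot threshold rather than an exact comparison with zero.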
The method used by e04nf (see [Algorithmic Details]) is most efficient when many constraints or bounds are active at the solution.
References
Gill P E, Hammarling S, Murray W, Saunders M A and Wright M H (1986) Users' guide for LSSOL (Version 1.0) Report SOL 86-1 Department of Operations Research, Stanford University
Gill P E and Murray W (1978) Numerically stable methods for quadratic programming Math. Programming 14 349–372
Gill P E, Murray W, Saunders M A and Wright M H (1984) Procedures for optimization problems with a mixture of bounds and general linear constraints ACM Trans. Math. Software 10 282–298
Gill P E, Murray W, Saunders M A and Wright M H (1989) A practical anticycling procedure for linearly constrained optimization Math. Programming 45 437–474
Gill P E, Murray W, Saunders M A and Wright M H (1991) Inertia-controlling methods for general quadratic programming SIAM Rev. 33 1–36
Gill P E, Murray W and Wright M H (1981) Practical Optimization Academic Press
Pardalos P M and Schnitger G (1988) Checking local optimality in constrained quadratic programming is NPhard Operations Research Letters 7 33–35
Error Indicators and Warnings
Note: e04nf may return useful information for one or more of the following detected errors or warnings.
Errors or warnings detected by the method:
Some error messages may refer to parameters that are dropped from this interface (LDA, LDH). In these cases, an error in another parameter has usually caused an incorrect value to be inferred.
 ${\mathbf{ifail}}=1$
 The iterations were terminated at a dead-point. The necessary conditions for optimality are satisfied but the sufficient conditions are not. (The reduced gradient is negligible, the Lagrange multipliers are optimal, but ${H}_{R}$ is singular or there are some very small multipliers.) If ${\nabla}^{2}f\left(x\right)$ is not positive definite, $x$ is not necessarily a local solution of the problem and verification of optimality requires further information. If ${\nabla}^{2}f\left(x\right)$ is positive semidefinite or the problem is of type LP, $x$ gives the global minimum value of the objective function, but the final $x$ is not unique.
 ${\mathbf{ifail}}=2$
 The solution appears to be unbounded, i.e., the objective function is not bounded below in the feasible region. This value of ifail occurs if a step larger than Infinite Step Size ($\text{default value}={10}^{20}$) would have to be taken in order to continue the algorithm, or the next step would result in an element of $x$ having magnitude larger than Infinite Bound Size ($\text{default value}={10}^{20}$).
 ${\mathbf{ifail}}=3$
 No feasible point was found, i.e., it was not possible to satisfy all the constraints to within the feasibility tolerance. In this case, the constraint violations at the final $x$ will reveal a value of the tolerance for which a feasible point will exist – for example, when the feasibility tolerance for each violated constraint exceeds its Slack (see [Description of the Printed Output]) at the final point. The modified problem (with an altered feasibility tolerance) may then be solved using a Warm Start. You should check that there are no constraint redundancies. If the data for the constraints are accurate only to the absolute precision $\sigma $, you should ensure that the value of the optional parameter Feasibility Tolerance ($\text{default value}=\sqrt{\epsilon}$, where $\epsilon $ is the machine precision) is greater than $\sigma $. For example, if all elements of $A$ are of order unity and are accurate only to three decimal places, the Feasibility Tolerance should be at least ${10}^{-3}$.
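The tolerance that would admit the returned point can be read off from the largest constraint violation. A minimal sketch (helper name illustrative), given the final $Ax$ and the general-constraint bounds:

```cpp
#include <vector>
#include <algorithm>

// Sketch: largest amount by which any general constraint value ax[i]
// falls outside its interval [lo[i], hi[i]]. A Feasibility Tolerance
// above this value would accept the point on a Warm Start rerun.
double maxViolation(const std::vector<double>& ax,
                    const std::vector<double>& lo,
                    const std::vector<double>& hi) {
    double v = 0.0;
    for (std::size_t i = 0; i < ax.size(); ++i) {
        v = std::max(v, lo[i] - ax[i]);   // shortfall below the lower bound
        v = std::max(v, ax[i] - hi[i]);   // excess above the upper bound
    }
    return v;
}
```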
 ${\mathbf{ifail}}=4$
 The limiting number of iterations was reached before normal termination occurred. The values of the optional parameters Feasibility Phase Iteration Limit ($\text{default value}=\mathrm{max}\phantom{\rule{0.125em}{0ex}}\left(50,5\left(n+{m}_{L}\right)\right)$) and Optimality Phase Iteration Limit ($\text{default value}=\mathrm{max}\phantom{\rule{0.125em}{0ex}}\left(50,5\left(n+{m}_{L}\right)\right)$) may be too small. If the method appears to be making progress (e.g., the objective function is being satisfactorily reduced), either increase the iteration limits and rerun e04nf or, alternatively, rerun e04nf using the Warm Start facility to specify the initial working set.
 ${\mathbf{ifail}}=5$
 The reduced Hessian exceeds its assigned dimension. The algorithm needed to expand the reduced Hessian when it was already at its maximum dimension, as specified by the optional parameter Maximum Degrees of Freedom ($\text{default value}=n$). The value of the optional parameter Maximum Degrees of Freedom is too small. Rerun e04nf with a larger value (possibly using the Warm Start facility to specify the initial working set).
 ${\mathbf{ifail}}=6$
 An input parameter is invalid.
 ${\mathbf{ifail}}=7$
 The designated problem type was not FP, LP, QP1, QP2, QP3 or QP4. Rerun e04nf with the optional parameter Problem Type set to one of these values.
 $\mathbf{\text{Overflow}}$
 If the printed output before the overflow error contains a warning about serious ill-conditioning in the working set when adding the $j$th constraint, it may be possible to avoid the difficulty by increasing the magnitude of the Feasibility Tolerance ($\text{default value}=\sqrt{\epsilon}$, where $\epsilon $ is the machine precision) and rerunning the program. If the message recurs even after this change, the offending linearly dependent constraint (with index ‘$j$’) must be removed from the problem.
 ${\mathbf{ifail}}=9000$
 An error occurred; see message report.
 ${\mathbf{ifail}}=6000$
 Invalid Parameters $⟨\mathit{value}⟩$
 ${\mathbf{ifail}}=4000$
 Invalid dimension for array $⟨\mathit{value}⟩$
 ${\mathbf{ifail}}=8000$
 Negative dimension for array $⟨\mathit{value}⟩$
Accuracy
e04nf implements a numerically stable active set strategy and returns solutions that are as accurate as the condition of the problem warrants on the machine.
Parallelism and Performance
None.
Further Comments
This section contains some comments on scaling and a description of the printed output.
Scaling
Sensible scaling of the problem is likely to reduce the number of iterations required and make the problem less sensitive to perturbations in the data, thus improving the condition of the problem. In the absence of better information it is usually sensible to make the Euclidean lengths of each constraint of comparable magnitude. See the E04 class and Gill et al. (1981) for further information and advice.
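The row-scaling advice above can be sketched directly: divide each general-constraint row (and its two bounds) by the row's Euclidean length so every row has norm $1$. This is an illustration, not Library code:

```cpp
#include <vector>
#include <cmath>

// Sketch: scale each general-constraint row of A, together with its
// lower and upper bounds, by the row's Euclidean norm, so all rows
// end up with comparable (unit) magnitude.
void scaleRows(std::vector<std::vector<double>>& A,
               std::vector<double>& lo, std::vector<double>& hi) {
    for (std::size_t i = 0; i < A.size(); ++i) {
        double nrm = 0.0;
        for (double aij : A[i]) nrm += aij * aij;
        nrm = std::sqrt(nrm);
        if (nrm == 0.0) continue;          // skip empty rows
        for (double& aij : A[i]) aij /= nrm;
        lo[i] /= nrm;                      // keep the constraint equivalent
        hi[i] /= nrm;
    }
}
```

Infinite bounds (entries at $\pm \mathit{bigbnd}$) would in practice be left unscaled so they remain beyond the Infinite Bound Size threshold.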
Description of the Printed Output
This section describes the intermediate printout and final printout produced by e04nf. The intermediate printout is a subset of the monitoring information produced by the method at every iteration (see [Description of Monitoring Information]). You can control the level of printed output (see the description of the optional parameter Print Level). Note that the intermediate printout and final printout are produced only if ${\mathbf{Print\; Level}}\ge 10$ (the default for e04nf).
The following line of summary output ($<80$ characters) is produced at every iteration. In all cases, the values of the quantities printed are those in effect on completion of the given iteration.
Itn  is the iteration count. 
Step  is the step taken along the computed search direction. If a constraint is added during the current iteration, Step will be the step to the nearest constraint. When the problem is of type LP, the step can be greater than one during the optimality phase. 
Ninf  is the number of violated constraints (infeasibilities). This will be zero during the optimality phase. 
Sinf/Objective 
is the value of the current objective function. If $x$ is not feasible, Sinf gives a weighted sum of the magnitudes of constraint violations. If $x$ is feasible, Objective is the value of the objective function of (1). The output line for the final iteration of the feasibility phase (i.e., the first iteration for which Ninf is zero) will give the value of the true objective at the first feasible point. During the optimality phase the value of the objective function will be nonincreasing. During the feasibility phase the number of constraint infeasibilities will not increase until either a feasible point is found or the optimality of the multipliers implies that no feasible point exists. Once optimal multipliers are obtained the number of infeasibilities can increase, but the sum of infeasibilities will either remain constant or be reduced until the minimum sum of infeasibilities is found.

Norm Gz  is $\Vert {Z}_{R}^{\mathrm{T}}{g}_{\mathrm{FR}}\Vert $, the Euclidean norm of the reduced gradient with respect to ${Z}_{R}$. During the optimality phase, this norm will be approximately zero after a unit step. (See [Definition of Search Direction] and [Main Iteration].) 
The final printout includes a listing of the status of every variable and constraint.
The following describes the printout for each variable. A full stop (.) is printed for any numerical value that is zero.
Varbl  gives the name (V) and index $\mathit{j}$, for $\mathit{j}=1,2,\dots ,n$, of the variable.  
State 
gives the state of the variable (FR if neither bound is in the working set, EQ if a fixed variable, LL if on its lower bound, UL if on its upper bound, TF if temporarily fixed at its current value). If Value lies outside the upper or lower bounds by more than the Feasibility Tolerance, State will be ++ or -- respectively.
A key is sometimes printed before State.


Value  is the value of the variable at the final iteration.  
Lower Bound  is the lower bound specified for the variable. None indicates that ${\mathbf{bl}}\left[j-1\right]\le -\mathit{bigbnd}$.  
Upper Bound  is the upper bound specified for the variable. None indicates that ${\mathbf{bu}}\left[j-1\right]\ge \mathit{bigbnd}$.  
Lagr Mult  is the Lagrange multiplier for the associated bound. This will be zero if State is FR unless ${\mathbf{bl}}\left[j-1\right]\le -\mathit{bigbnd}$ and ${\mathbf{bu}}\left[j-1\right]\ge \mathit{bigbnd}$, in which case the entry will be blank. If $x$ is optimal, the multiplier should be nonnegative if State is LL and nonpositive if State is UL.  
Slack  is the difference between the variable Value and the nearer of its (finite) bounds ${\mathbf{bl}}\left[j-1\right]$ and ${\mathbf{bu}}\left[j-1\right]$. A blank entry indicates that the associated variable is not bounded (i.e., ${\mathbf{bl}}\left[j-1\right]\le -\mathit{bigbnd}$ and ${\mathbf{bu}}\left[j-1\right]\ge \mathit{bigbnd}$). 
The meaning of the printout for general constraints is the same as that given above for variables, with ‘variable’ replaced by ‘constraint’, ${\mathbf{bl}}\left[j-1\right]$ and ${\mathbf{bu}}\left[j-1\right]$ replaced by ${\mathbf{bl}}\left[n+j-1\right]$ and ${\mathbf{bu}}\left[n+j-1\right]$ respectively, and with the following change in the heading:
L Con  gives the name (L) and index $\mathit{j}$, for $\mathit{j}=1,2,\dots ,{m}_{L}$, of the linear constraint. 
Note that movement off a constraint (as opposed to a variable moving away from its bound) can be interpreted as allowing the entry in the Slack column to become positive.
Numerical values are output with a fixed number of digits; they are not guaranteed to be accurate to this precision.
Example
This example minimizes the quadratic function $f\left(x\right)={c}^{\mathrm{T}}x+\frac{1}{2}{x}^{\mathrm{T}}Hx$, where
$$c={\left(-0.02,-0.2,-0.2,-0.2,-0.2,0.04,0.04\right)}^{\mathrm{T}}$$ 
and
$$H=\left(\begin{array}{ccccccc}2& 0& 0& 0& 0& \phantom{-}0& \phantom{-}0\\ 0& 2& 0& 0& 0& \phantom{-}0& \phantom{-}0\\ 0& 0& 2& 2& 0& \phantom{-}0& \phantom{-}0\\ 0& 0& 2& 2& 0& \phantom{-}0& \phantom{-}0\\ 0& 0& 0& 0& 2& \phantom{-}0& \phantom{-}0\\ 0& 0& 0& 0& 0& -2& -2\\ 0& 0& 0& 0& 0& -2& -2\end{array}\right)$$ 
subject to the bounds
$$\begin{array}{c}-0.01\le {x}_{1}\le 0.01\\ -0.1\phantom{0}\le {x}_{2}\le 0.15\\ -0.01\le {x}_{3}\le 0.03\\ -0.04\le {x}_{4}\le 0.02\\ -0.1\phantom{0}\le {x}_{5}\le 0.05\\ -0.01\le {x}_{6}\phantom{\le 0.00}\\ -0.01\le {x}_{7}\phantom{\le 0.00}\end{array}$$ 
and to the general constraints
$$\begin{array}{rcl}{x}_{1}+{x}_{2}+{x}_{3}+{x}_{4}+{x}_{5}+{x}_{6}+{x}_{7}& =& -0.13\\ 0.15{x}_{1}+0.04{x}_{2}+0.02{x}_{3}+0.04{x}_{4}+0.02{x}_{5}+0.01{x}_{6}+0.03{x}_{7}& \le & -0.0049\\ 0.03{x}_{1}+0.05{x}_{2}+0.08{x}_{3}+0.02{x}_{4}+0.06{x}_{5}+0.01{x}_{6}& \le & -0.0064\\ 0.02{x}_{1}+0.04{x}_{2}+0.01{x}_{3}+0.02{x}_{4}+0.02{x}_{5}& \le & -0.0037\\ 0.02{x}_{1}+0.03{x}_{2}+0.01{x}_{5}& \le & -0.0012\\ 0.70{x}_{1}+0.75{x}_{2}+0.80{x}_{3}+0.75{x}_{4}+0.80{x}_{5}+0.97{x}_{6}& \ge & -0.0992\\ -0.003\le 0.02{x}_{1}+0.06{x}_{2}+0.08{x}_{3}+0.12{x}_{4}+0.02{x}_{5}+0.01{x}_{6}+0.97{x}_{7}& \le & 0.002\end{array}$$ 
The initial point, which is infeasible, is
$${x}_{0}={\left(-0.01,-0.03,0.0,-0.01,-0.1,0.02,0.01\right)}^{\mathrm{T}}\text{.}$$ 
The optimal solution (to five figures) is
$${x}^{*}={\left(0.01,0.069865,0.018259,0.24261,0.62006,0.013805,0.0040665\right)}^{\mathrm{T}}\text{.}$$ 
One bound constraint and four general constraints are active at the solution.
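The objective used in the example, $f\left(x\right)={c}^{\mathrm{T}}x+\frac{1}{2}{x}^{\mathrm{T}}Hx$, is straightforward to evaluate at any candidate point; a self-contained sketch (helper name illustrative, not part of the Library):

```cpp
#include <vector>

// Sketch: evaluate f(x) = c'x + (1/2) x'Hx for a dense symmetric H,
// e.g., to compare the objective at the initial and final points.
double qpObjective(const std::vector<double>& c,
                   const std::vector<std::vector<double>>& H,
                   const std::vector<double>& x) {
    double f = 0.0;
    for (std::size_t i = 0; i < x.size(); ++i) {
        f += c[i] * x[i];                 // linear term c'x
        double hx = 0.0;
        for (std::size_t j = 0; j < x.size(); ++j)
            hx += H[i][j] * x[j];         // (Hx)_i
        f += 0.5 * x[i] * hx;             // quadratic term (1/2) x'Hx
    }
    return f;
}
```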
Example program (C#): e04nfe.cs
Algorithmic Details
This section contains a detailed description of the method used by e04nf.
Overview
e04nf is based on an inertiacontrolling method that maintains a Cholesky factorization of the reduced Hessian (see below). The method is based on that of Gill and Murray (1978), and is described in detail by Gill et al. (1991). Here we briefly summarise the main features of the method. Where possible, explicit reference is made to the names of variables that are parameters of e04nf or appear in the printed output. e04nf has two phases:
(i)  finding an initial feasible point by minimizing the sum of infeasibilities (the feasibility phase), and 
(ii)  minimizing the quadratic objective function within the feasible region (the optimality phase). 
The computations in both phases are performed by the same methods. The two-phase nature of the algorithm is reflected by changing the function being minimized from the sum of infeasibilities to the quadratic objective function. The feasibility phase does not perform the standard simplex method (i.e., it does not necessarily find a vertex), except in the LP case when ${m}_{L}\le n$. Once any iterate is feasible, all subsequent iterates remain feasible.
e04nf has been designed to be efficient when used to solve a sequence of related problems – for example, within a sequential quadratic programming method for nonlinearly constrained optimization (e.g., e04uf or e04wd). In particular, you may specify an initial working set (the indices of the constraints believed to be satisfied exactly at the solution); see the discussion of the optional parameter Warm Start.
In general, an iterative process is required to solve a quadratic program. (For simplicity, we shall always consider a typical iteration and avoid reference to the index of the iteration.) Each new iterate $\bar{x}$ is defined by
$$\bar{x}=x+\alpha p\text{,}$$  (1) 
where the step length $\alpha $ is a nonnegative scalar and $p$ is called the search direction.
At each point $x$, a working set of constraints is defined to be a linearly independent subset of the constraints that are satisfied ‘exactly’ (to within the tolerance defined by the optional parameter Feasibility Tolerance). The working set is the current prediction of the constraints that hold with equality at the solution of a linearly constrained QP problem. The search direction is constructed so that the constraints in the working set remain unaltered for any value of the step length. For a bound constraint in the working set, this property is achieved by setting the corresponding element of the search direction to zero. Thus, the associated variable is fixed, and specification of the working set induces a partition of $x$ into fixed and free variables. During a given iteration, the fixed variables are effectively removed from the problem; since the relevant elements of the search direction are zero, the columns of $A$ corresponding to fixed variables may be ignored.
Let ${m}_{\mathrm{W}}$ denote the number of general constraints in the working set and let ${n}_{\mathrm{FX}}$ denote the number of variables fixed at one of their bounds (${m}_{\mathrm{W}}$ and ${n}_{\mathrm{FX}}$ are the quantities Lin and Bnd in the monitoring file output from e04nf; see [Description of Monitoring Information]). Similarly, let ${n}_{\mathrm{FR}}$ (${n}_{\mathrm{FR}}=n-{n}_{\mathrm{FX}}$) denote the number of free variables. At every iteration, the variables are reordered so that the last ${n}_{\mathrm{FX}}$ variables are fixed, with all other relevant vectors and matrices ordered accordingly.
Definition of Search Direction
Let ${A}_{\mathrm{FR}}$ denote the ${m}_{\mathrm{W}}$ by ${n}_{\mathrm{FR}}$ submatrix of general constraints in the working set corresponding to the free variables and let ${p}_{\mathrm{FR}}$ denote the search direction with respect to the free variables only. The general constraints in the working set will be unaltered by any move along $p$ if
$${A}_{\mathrm{FR}}{p}_{\mathrm{FR}}=0\text{.}$$  (2) 
In order to compute ${p}_{\mathrm{FR}}$, the $TQ$ factorization of ${A}_{\mathrm{FR}}$ is used:
$${A}_{\mathrm{FR}}{Q}_{\mathrm{FR}}=\left(0\text{\hspace{1em}}T\right)\text{,}$$  (3) 
where $T$ is a nonsingular ${m}_{\mathrm{W}}$ by ${m}_{\mathrm{W}}$ upper triangular matrix (i.e., ${t}_{ij}=0$ if $i>j$), and the nonsingular ${n}_{\mathrm{FR}}$ by ${n}_{\mathrm{FR}}$ matrix ${Q}_{\mathrm{FR}}$ is the product of orthogonal transformations (see Gill et al. (1984)). If the columns of ${Q}_{\mathrm{FR}}$ are partitioned so that
$${Q}_{\mathrm{FR}}=\left(Z\text{\hspace{1em}}Y\right)\text{,}$$
where $Y$ is ${n}_{\mathrm{FR}}$ by ${m}_{\mathrm{W}}$, then the ${n}_{Z}$ $\left({n}_{Z}={n}_{\mathrm{FR}}-{m}_{\mathrm{W}}\right)$ columns of $Z$ form a basis for the null space of ${A}_{\mathrm{FR}}$. Let ${n}_{R}$ be an integer such that $0\le {n}_{R}\le {n}_{Z}$, and let ${Z}_{R}$ denote a matrix whose ${n}_{R}$ columns are a subset of the columns of $Z$. (The integer ${n}_{R}$ is the quantity Zr in the monitoring output from e04nf. In many cases, ${Z}_{R}$ will include all the columns of $Z$.) The direction ${p}_{\mathrm{FR}}$ will satisfy (2) if
$${p}_{\mathrm{FR}}={Z}_{R}{p}_{R}\text{,}$$  (4) 
where ${p}_{R}$ is any ${n}_{R}$-vector.
Let $Q$ denote the $n$ by $n$ matrix
$$Q=\left(\begin{array}{cc}{Q}_{\mathrm{FR}}& \\ & {I}_{\mathrm{FX}}\end{array}\right)\text{,}$$
where ${I}_{\mathrm{FX}}$ is the identity matrix of order ${n}_{\mathrm{FX}}$. Let ${H}_{Q}$ and ${g}_{Q}$ denote the transformed Hessian and transformed gradient
$${H}_{Q}={Q}^{\mathrm{T}}HQ\text{\hspace{1em} and \hspace{1em}}{g}_{Q}={Q}^{\mathrm{T}}\left(c+Hx\right)\text{,}$$
and let the matrix of the first ${n}_{R}$ rows and columns of ${H}_{Q}$ be denoted by ${H}_{R}$ and the vector of the first ${n}_{R}$ elements of ${g}_{Q}$ be denoted by ${g}_{R}$. The quantities ${H}_{R}$ and ${g}_{R}$ are known as the reduced Hessian and reduced gradient of $f\left(x\right)$, respectively. Roughly speaking, ${g}_{R}$ and ${H}_{R}$ describe the first and second derivatives of an unconstrained problem for the calculation of ${p}_{R}$.
At each iteration, a triangular factorization of ${H}_{R}$ is available. If ${H}_{R}$ is positive definite, ${H}_{R}={R}^{\mathrm{T}}R$, where $R$ is the upper triangular Cholesky factor of ${H}_{R}$. If ${H}_{R}$ is not positive definite, ${H}_{R}={R}^{\mathrm{T}}DR$, where $D=\mathrm{diag}\left(1,1,\dots ,1,\mu \right)$, with $\mu \le 0$.
The computation is arranged so that the reduced-gradient vector is a multiple of ${e}_{R}$, a vector of all zeros except in the last (i.e., ${n}_{R}$th) position. This allows the vector ${p}_{R}$ in (4) to be computed from a single back-substitution
$$R{p}_{R}=\gamma {e}_{R}\text{,}$$  (5) 
where $\gamma $ is a scalar that depends on whether or not the reduced Hessian is positive definite at $x$. In the positive definite case, $x+p$ is the minimizer of the objective function subject to the constraints (bounds and general) in the working set treated as equalities. If ${H}_{R}$ is not positive definite, ${p}_{R}$ satisfies the conditions
$${p}_{R}^{\mathrm{T}}{H}_{R}{p}_{R}<0\text{\hspace{1em} and \hspace{1em}}{g}_{R}^{\mathrm{T}}{p}_{R}\le 0\text{,}$$
which allow the objective function to be reduced by any positive step of the form $x+\alpha p$.
Main Iteration
If the reduced gradient is zero, $x$ is a constrained stationary point in the subspace defined by $Z$. During the feasibility phase, the reduced gradient will usually be zero only at a vertex (although it may be zero at nonvertices in the presence of constraint dependencies). During the optimality phase a zero reduced gradient implies that $x$ minimizes the quadratic objective when the constraints in the working set are treated as equalities. At a constrained stationary point, Lagrange multipliers ${\lambda}_{C}$ and ${\lambda}_{B}$ for the general and bound constraints are defined from the equations
$${A}_{\mathrm{FR}}^{\mathrm{T}}{\lambda}_{C}={g}_{\mathrm{FR}}\text{\hspace{1em} and \hspace{1em}}{\lambda}_{B}={g}_{\mathrm{FX}}-{A}_{\mathrm{FX}}^{\mathrm{T}}{\lambda}_{C}\text{.}$$  (6) 
Given a positive constant $\delta $ of the order of the machine precision, a Lagrange multiplier ${\lambda}_{j}$ corresponding to an inequality constraint in the working set is said to be optimal if ${\lambda}_{j}\le \delta $ when the associated constraint is at its upper bound, or if ${\lambda}_{j}\ge -\delta $ when the associated constraint is at its lower bound. If a multiplier is nonoptimal, the objective function (either the true objective or the sum of infeasibilities) can be reduced by deleting the corresponding constraint (with index Jdel; see [Description of Monitoring Information]) from the working set.
If optimal multipliers occur during the feasibility phase and the sum of infeasibilities is nonzero, there is no feasible point, and you can force e04nf to continue until the minimum value of the sum of infeasibilities has been found; see the discussion of the optional parameter Minimum Sum of Infeasibilities. At such a point, the Lagrange multiplier ${\lambda}_{j}$ corresponding to an inequality constraint in the working set will be such that $-\left(1+\delta \right)\le {\lambda}_{j}\le \delta $ when the associated constraint is at its upper bound, and $-\delta \le {\lambda}_{j}\le \left(1+\delta \right)$ when the associated constraint is at its lower bound. Lagrange multipliers for equality constraints will satisfy $\left|{\lambda}_{j}\right|\le 1+\delta $.
If the reduced gradient is not zero, Lagrange multipliers need not be computed and the nonzero elements of the search direction $p$ are given by ${Z}_{R}{p}_{R}$ (see (4) and (5)). The choice of step length is influenced by the need to maintain feasibility with respect to the satisfied constraints. If ${H}_{R}$ is positive definite and $x+p$ is feasible, $\alpha $ will be taken as unity. In this case, the reduced gradient at $\bar{x}$ will be zero, and Lagrange multipliers are computed. Otherwise, $\alpha $ is set to ${\alpha}_{\mathrm{M}}$, the step to the ‘nearest’ constraint (with index Jadd; see [Description of Monitoring Information]), which is added to the working set at the next iteration.
Each change in the working set leads to a simple change to ${A}_{\mathrm{FR}}$: if the status of a general constraint changes, a row of ${A}_{\mathrm{FR}}$ is altered; if a bound constraint enters or leaves the working set, a column of ${A}_{\mathrm{FR}}$ changes. Explicit representations of the matrices $T$, ${Q}_{\mathrm{FR}}$ and $R$, and of the vectors ${Q}^{\mathrm{T}}g$ and ${Q}^{\mathrm{T}}c$, are recurred (i.e., updated rather than recomputed) as the working set changes. The triangular factor $R$ associated with the reduced Hessian is only updated during the optimality phase.
One of the most important features of e04nf is its control of the conditioning of the working set, whose nearness to linear dependence is estimated by the ratio of the largest to smallest diagonal elements of the $TQ$ factor $T$ (the printed value Cond T; see [Description of Monitoring Information]). In constructing the initial working set, constraints are excluded that would result in a large value of Cond T.
e04nf includes a rigorous procedure that prevents the possibility of cycling at a point where the active constraints are nearly linearly dependent (see Gill et al. (1989)). The main feature of the anticycling procedure is that the feasibility tolerance is increased slightly at the start of every iteration. This not only allows a positive step to be taken at every iteration, but also provides, whenever possible, a choice of constraints to be added to the working set. Let ${\alpha}_{\mathrm{M}}$ denote the maximum step at which $x+{\alpha}_{\mathrm{M}}p$ does not violate any constraint by more than its feasibility tolerance. All constraints at a distance $\alpha $ ($\alpha \le {\alpha}_{\mathrm{M}}$) along $p$ from the current point are then viewed as acceptable candidates for inclusion in the working set. The constraint whose normal makes the largest angle with the search direction is added to the working set.
Choosing the Initial Working Set
At the start of the optimality phase, a positive definite ${H}_{R}$ can be defined if enough constraints are included in the initial working set. (The matrix with no rows and columns is positive definite by definition, corresponding to the case when ${A}_{\mathrm{FR}}$ contains ${n}_{\mathrm{FR}}$ constraints.) The idea is to include as many general constraints as necessary to ensure that the reduced Hessian is positive definite.
Let ${H}_{Z}$ denote the matrix of the first ${n}_{Z}$ rows and columns of the matrix ${H}_{Q}={Q}^{\mathrm{T}}HQ$ at the beginning of the optimality phase. A partial Cholesky factorization is used to find an upper triangular matrix $R$ that is the factor of the largest positive definite leading submatrix of ${H}_{Z}$. The use of interchanges during the factorization of ${H}_{Z}$ tends to maximize the dimension of $R$. (The condition of $R$ may be controlled using the optional parameter Rank Tolerance.) Let ${Z}_{R}$ denote the columns of $Z$ corresponding to $R$, and let $Z$ be partitioned as $Z=\left(\begin{array}{cc}{Z}_{R}& {Z}_{A}\end{array}\right)$. A working set for which ${Z}_{R}$ defines the null space can be obtained by including the rows of ${Z}_{A}^{\mathrm{T}}$ as ‘artificial constraints’. Minimization of the objective function then proceeds within the subspace defined by ${Z}_{R}$, as described in [Definition of Search Direction].
The artificially augmented working set is given by
$${\bar{A}}_{\mathrm{FR}}=\left(\begin{array}{c}{Z}_{A}^{\mathrm{T}}\\ {A}_{\mathrm{FR}}\end{array}\right)\text{,}$$  (7) 
so that ${p}_{\mathrm{FR}}$ will satisfy ${A}_{\mathrm{FR}}{p}_{\mathrm{FR}}=0$ and ${Z}_{A}^{\mathrm{T}}{p}_{\mathrm{FR}}=0$. By definition of the $TQ$ factorization, ${\bar{A}}_{\mathrm{FR}}$ automatically satisfies the following:
$${\bar{A}}_{\mathrm{FR}}{Q}_{\mathrm{FR}}=\left(\begin{array}{c}{Z}_{A}^{\mathrm{T}}\\ {A}_{\mathrm{FR}}\end{array}\right){Q}_{\mathrm{FR}}=\left(\begin{array}{c}{Z}_{A}^{\mathrm{T}}\\ {A}_{\mathrm{FR}}\end{array}\right)\left(\begin{array}{ccc}{Z}_{R}& {Z}_{A}& Y\end{array}\right)=\left(\begin{array}{cc}0& \bar{T}\end{array}\right)\text{,}$$
where
$$\bar{T}=\left(\begin{array}{cc}I& 0\\ 0& T\end{array}\right)\text{,}$$
and hence the $TQ$ factorization of (7) is available trivially from $T$ and ${Q}_{\mathrm{FR}}$ without additional expense.
The matrix ${Z}_{A}$ is not kept fixed, since its role is purely to define an appropriate null space; the $TQ$ factorization can therefore be updated in the normal fashion as the iterations proceed. No work is required to ‘delete’ the artificial constraints associated with ${Z}_{A}$ when ${Z}_{R}^{\mathrm{T}}{g}_{\mathrm{FR}}=0$, since this simply involves repartitioning ${Q}_{\mathrm{FR}}$. The ‘artificial’ multiplier vector associated with the rows of ${Z}_{A}^{\mathrm{T}}$ is equal to ${Z}_{A}^{\mathrm{T}}{g}_{\mathrm{FR}}$, and the multipliers corresponding to the rows of the ‘true’ working set are the multipliers that would be obtained if the artificial constraints were not present. If an artificial constraint is ‘deleted’ from the working set, an A appears alongside the entry in the Jdel column of the monitoring file output (see [Description of Monitoring Information]).
The number of columns in ${Z}_{A}$ and ${Z}_{R}$, the Euclidean norm of ${Z}_{R}^{\mathrm{T}}{g}_{\mathrm{FR}}$, and the condition estimator of $R$ appear in the monitoring file output as Art, Zr, Norm Gz and Cond Rz respectively (see [Description of Monitoring Information]).
Under some circumstances, a different type of artificial constraint is used when solving a linear program. Although the algorithm of e04nf does not usually perform simplex steps (in the traditional sense), there is one exception: a linear program with fewer general constraints than variables (i.e., ${m}_{L}\le n$). Use of the simplex method in this situation leads to savings in storage. At the starting point, the ‘natural’ working set (the set of constraints exactly or nearly satisfied at the starting point) is augmented with a suitable number of ‘temporary’ bounds, each of which has the effect of temporarily fixing a variable at its current value. In subsequent iterations, a temporary bound is treated as a standard constraint until it is deleted from the working set, in which case it is never added again. If a temporary bound is ‘deleted’ from the working set, an F (for ‘Fixed’) appears alongside the entry in the Jdel column of the monitoring file output (see [Description of Monitoring Information]).
Description of Monitoring Information
This section describes the long line of output (more than $80$ characters) which forms part of the monitoring information produced by e04nf. (See also the description of the optional parameters Monitoring File and Print Level.) You can control the level of printed output.
To aid interpretation of the printed results the following convention is used for numbering the constraints: indices $1$ through $n$ refer to the bounds on the variables and indices $n+1$ through $n+{m}_{L}$ refer to the general constraints. When the status of a constraint changes, the index of the constraint is printed, along with the designation L (lower bound), U (upper bound), E (equality), F (temporarily fixed variable) or A (artificial constraint).
When ${\mathbf{Print\; Level}}\ge 5$ and ${\mathbf{Monitoring\; File}}\ge 0$, the following line of output is produced at every iteration on the unit number specified by the Monitoring File. In all cases the values of the quantities printed are those in effect on completion of the given iteration.
Itn  is the iteration count. 
Jdel  is the index of the constraint deleted from the working set. If Jdel is zero, no constraint was deleted. 
Jadd  is the index of the constraint added to the working set. If Jadd is zero, no constraint was added. 
Step  is the step taken along the computed search direction. If a constraint is added during the current iteration, Step will be the step to the nearest constraint. When the problem is of type LP, the step can be greater than one during the optimality phase. 
Ninf  is the number of violated constraints (infeasibilities). This will be zero during the optimality phase. 
Sinf/Objective  is the value of the current objective function. If $x$ is not feasible, Sinf gives a weighted sum of the magnitudes of constraint violations. If $x$ is feasible, Objective is the value of the quadratic objective function. The output line for the final iteration of the feasibility phase (i.e., the first iteration for which Ninf is zero) will give the value of the true objective at the first feasible point. During the optimality phase the value of the objective function will be nonincreasing. During the feasibility phase the number of constraint infeasibilities will not increase until either a feasible point is found or the optimality of the multipliers implies that no feasible point exists. Once optimal multipliers are obtained the number of infeasibilities can increase, but the sum of infeasibilities will either remain constant or be reduced until the minimum sum of infeasibilities is found. 
Bnd  is the number of simple bound constraints in the current working set. 
Lin  is the number of general linear constraints in the current working set. 
Art  is the number of artificial constraints in the working set, i.e., the number of columns of ${Z}_{A}$ (see [Choosing the Initial Working Set]). 
Zr  is the number of columns of ${Z}_{R}$ (see [Definition of Search Direction]). Zr is the dimension of the subspace in which the objective function is currently being minimized. The value of Zr is the number of variables minus the number of constraints in the working set; i.e., $\mathtt{Zr}=n-\left(\mathtt{Bnd}+\mathtt{Lin}+\mathtt{Art}\right)$. The value of ${n}_{Z}$, the number of columns of $Z$ (see [Definition of Search Direction]) can be calculated as ${n}_{Z}=n-\left(\mathtt{Bnd}+\mathtt{Lin}\right)$. A zero value of ${n}_{Z}$ implies that $x$ lies at a vertex of the feasible region. 
Norm Gz  is $\Vert {Z}_{R}^{\mathrm{T}}{g}_{\mathrm{FR}}\Vert $, the Euclidean norm of the reduced gradient with respect to ${Z}_{R}$. During the optimality phase, this norm will be approximately zero after a unit step. 
NOpt  is the number of nonoptimal Lagrange multipliers at the current point. NOpt is not printed if the current $x$ is infeasible or no multipliers have been calculated. At a minimizer, NOpt will be zero. 
Min Lm  is the value of the Lagrange multiplier associated with the deleted constraint. If Min Lm is negative, a lower bound constraint has been deleted, if Min Lm is positive, an upper bound constraint has been deleted. If no multipliers are calculated during a given iteration Min Lm will be zero. 
Cond T  is a lower bound on the condition number of the working set. 
Cond Rz  is a lower bound on the condition number of the triangular factor $R$ (the Cholesky factor of the current reduced Hessian; see [Definition of Search Direction]). If the problem is specified to be of type LP then Cond Rz is not printed. 
Rzz  is the last diagonal element $\mu $ of the matrix $D$ associated with the ${R}^{\mathrm{T}}DR$ factorization of the reduced Hessian ${H}_{R}$ (see [Definition of Search Direction]). Rzz is only printed if ${H}_{R}$ is not positive definite (in which case $\mu \ne 1$). If the printed value of Rzz is small in absolute value then ${H}_{R}$ is approximately singular. A negative value of Rzz implies that the objective function has negative curvature on the current working set. 