h02cb solves general quadratic programming problems with integer constraints on the variables. It is not intended for large sparse problems.

Syntax

C#
public static void h02cb(
	int n,
	int nclin,
	double[,] a,
	double[] bl,
	double[] bu,
	double[] cvec,
	double[,] h,
	H.H02CB_QPHESS qphess,
	int[] intvar,
	int mdepth,
	int[] istate,
	double[] xs,
	out double obj,
	double[] ax,
	double[] clamda,
	int strtgy,
	H.H02CB_MONIT monit,
	H.h02cbOptions options,
	out int ifail
)
Visual Basic
Public Shared Sub h02cb ( _
	n As Integer, _
	nclin As Integer, _
	a As Double(,), _
	bl As Double(), _
	bu As Double(), _
	cvec As Double(), _
	h As Double(,), _
	qphess As H.H02CB_QPHESS, _
	intvar As Integer(), _
	mdepth As Integer, _
	istate As Integer(), _
	xs As Double(), _
	<OutAttribute> ByRef obj As Double, _
	ax As Double(), _
	clamda As Double(), _
	strtgy As Integer, _
	monit As H.H02CB_MONIT, _
	options As H.h02cbOptions, _
	<OutAttribute> ByRef ifail As Integer _
)
Visual C++
public:
static void h02cb(
	int n, 
	int nclin, 
	array<double,2>^ a, 
	array<double>^ bl, 
	array<double>^ bu, 
	array<double>^ cvec, 
	array<double,2>^ h, 
	H::H02CB_QPHESS^ qphess, 
	array<int>^ intvar, 
	int mdepth, 
	array<int>^ istate, 
	array<double>^ xs, 
	[OutAttribute] double% obj, 
	array<double>^ ax, 
	array<double>^ clamda, 
	int strtgy, 
	H::H02CB_MONIT^ monit, 
	H::h02cbOptions^ options, 
	[OutAttribute] int% ifail
)
F#
static member h02cb : 
        n : int * 
        nclin : int * 
        a : float[,] * 
        bl : float[] * 
        bu : float[] * 
        cvec : float[] * 
        h : float[,] * 
        qphess : H.H02CB_QPHESS * 
        intvar : int[] * 
        mdepth : int * 
        istate : int[] * 
        xs : float[] * 
        obj : float byref * 
        ax : float[] * 
        clamda : float[] * 
        strtgy : int * 
        monit : H.H02CB_MONIT * 
        options : H.h02cbOptions * 
        ifail : int byref -> unit 

Parameters

n
Type: System.Int32
On entry: n, the number of variables.
Constraint: n>0.
nclin
Type: System.Int32
On entry: mL, the number of general linear constraints.
Constraint: nclin ≥ 0.
a
Type: System.Double[,]
An array of size [dim1, dim2]
Note: dim1 must satisfy the constraint: dim1 ≥ max(1, nclin).
Note: the second dimension of the array a must be at least n if nclin > 0 and at least 1 if nclin = 0.
On entry: the ith row of a must contain the coefficients of the ith general linear constraint, for i = 1, 2, …, mL.
If nclin=0 then the array a is not referenced.
bl
Type: System.Double[]
An array of size [n+nclin]
On entry: bl must contain the lower bounds and bu the upper bounds, for all the constraints, in the following order. The first n elements of each array must contain the bounds on the variables, and the next mL elements the bounds for the general linear constraints (if any). To specify a nonexistent lower bound (i.e., lj = −∞), set bl[j] ≤ −bigbnd, and to specify a nonexistent upper bound (i.e., uj = +∞), set bu[j] ≥ bigbnd; the default value of bigbnd is 10^20, but this may be changed by the optional parameter Infinite Bound Size. To specify the jth constraint as an equality, set bl[j] = bu[j] = β, say, where |β| < bigbnd.
Constraints:
  • bl[j] ≤ bu[j], for j = 0, 1, …, n+nclin−1;
  • if bl[j] = bu[j] = β, |β| < bigbnd.
bu
Type: System.Double[]
An array of size [n+nclin]
On entry: bl must contain the lower bounds and bu the upper bounds, for all the constraints, in the following order. The first n elements of each array must contain the bounds on the variables, and the next mL elements the bounds for the general linear constraints (if any). To specify a nonexistent lower bound (i.e., lj = −∞), set bl[j] ≤ −bigbnd, and to specify a nonexistent upper bound (i.e., uj = +∞), set bu[j] ≥ bigbnd; the default value of bigbnd is 10^20, but this may be changed by the optional parameter Infinite Bound Size. To specify the jth constraint as an equality, set bl[j] = bu[j] = β, say, where |β| < bigbnd.
Constraints:
  • bl[j] ≤ bu[j], for j = 0, 1, …, n+nclin−1;
  • if bl[j] = bu[j] = β, |β| < bigbnd.
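The following fragment (illustrative values only, not taken from the Library or its example programs) shows one way of assembling bl and bu for a hypothetical problem with three variables and two general linear constraints, using the default bigbnd of 1.0e20; the third variable has no upper bound and the second general constraint is an equality.
C#
using System;

static class BoundsSketch
{
    static void Main()
    {
        const double bigbnd = 1.0e20;        // default Infinite Bound Size

        // bl and bu each hold n + nclin = 3 + 2 entries:
        // bounds on the variables first, then bounds for the general constraints.
        double[] bl = new double[5];
        double[] bu = new double[5];

        bl[0] = -1.0;    bu[0] = 1.0;        // -1 <= x1 <= 1
        bl[1] =  0.0;    bu[1] = 2.0;        //  0 <= x2 <= 2
        bl[2] = -1.0;    bu[2] = bigbnd;     //  x3 >= -1, no upper bound
        bl[3] = -bigbnd; bu[3] = 4.0;        // first general constraint:  a1.x <= 4
        bl[4] =  1.5;    bu[4] = 1.5;        // second general constraint: a2.x  = 1.5

        Console.WriteLine("bl = {0}", string.Join(", ", bl));
        Console.WriteLine("bu = {0}", string.Join(", ", bu));
    }
}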
cvec
Type: System.Double[]
An array of size [dim1]
Note: the dimension of the array cvec must be at least n if the problem is of type LP, QP2 (the default) or QP4, and at least 1 otherwise.
On entry: the coefficients of the explicit linear term of the objective function when the problem is of type LP, QP2 (the default) and QP4.
If the problem is of type FP, QP1, or QP3, cvec is not referenced.
h
Type: System.Double[,]
An array of size [dim1, dim2]
Note: dim1 must satisfy the constraint:
  • if the problem is of type QP1, QP2 (the default), QP3 or QP4, dim1 ≥ n or at least the value of the optional parameter Hessian Rows (default value = n);
  • if the problem is of type FP or LP, dim1 ≥ 1.
Note: the second dimension of the array h must be at least n.
On entry: may be used to store the quadratic term H of the QP objective function if desired. In some cases, you need not use h to store H explicitly (see the specification of qphess). The elements of h are referenced only by qphess. The number of rows of H is denoted by m, whose default value is n. (The optional parameter Hessian Rows may be used to specify a value of m < n.)
If the default version of qphess is used and the problem is of type QP1 or QP2 (the default), the first m rows and columns of h must contain the leading m by m rows and columns of the symmetric Hessian matrix H. Only the diagonal and upper triangular elements of the leading m rows and columns of h are referenced. The remaining elements need not be assigned.
If the default version of qphess is used and the problem is of type QP3 or QP4, the first m rows of h must contain an m by n upper trapezoidal factor of the symmetric Hessian matrix HᵀH. The factor need not be of full rank, i.e., some of the diagonal elements may be zero. However, as a general rule, the larger the dimension of the leading nonsingular sub-matrix of h, the fewer iterations will be required. Elements outside the upper trapezoidal part of the first m rows of h need not be assigned.
In other situations, it may be desirable to compute Hx or HᵀHx without accessing h – for example, if H or HᵀH is sparse or has special structure. The parameter h may then refer to any convenient array.
If the problem is of type FP or LP, h is not referenced.
    qphess
Type: NagLibrary.H.H02CB_QPHESS
In general, you need not provide a version of qphess, because a ‘default’ method with name e04nfu is included in the Library. However, the algorithm of h02cb requires only the product of H or HᵀH and a vector x; and in some cases you may obtain increased efficiency by providing a version of qphess that avoids the need to define the elements of the matrices H or HᵀH explicitly. qphess is not referenced if the problem is of type FP or LP, in which case qphess may be the method e04nfu.

    A delegate of type H02CB_QPHESS.
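Whatever form the delegate takes (its exact signature is defined by the H02CB_QPHESS type in the Library documentation), what it must deliver is the product Hx (or HᵀHx for problems of type QP3 or QP4). The following generic sketch, which is independent of the NAG interface, shows such a product for a dense symmetric H stored in h.
C#
static class QphessSketch
{
    // Conceptual dense Hessian-vector product hx = H*x.  A user-supplied
    // qphess only needs to return this product, and may instead exploit
    // sparsity or special structure without storing H explicitly.
    public static void HessTimesX(double[,] h, double[] x, double[] hx)
    {
        int n = x.Length;
        for (int i = 0; i < n; i++)
        {
            double sum = 0.0;
            for (int j = 0; j < n; j++)
                sum += h[i, j] * x[j];
            hx[i] = sum;
        }
    }
}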

    intvar
Type: System.Int32[]
    An array of size [lintvr]
On entry: intvar[i] must contain the index of the solution vector x which is required to be integer. For example, if x1 and x3 are constrained to take integer values then intvar[0] might be set to 1 and intvar[1] to 3. The order in which the indices are specified is important, since this determines the order in which the sub-problems are generated. As a rule-of-thumb, the important variables should always be specified first. Thus, in the above example, if x3 relates to a more important quantity than x1, then it might be advantageous to set intvar[0]=3 and intvar[1]=1. If k is the smallest integer such that intvar[k] is less than or equal to zero then h02cb assumes that k variables are constrained to be integer; components intvar[k+1], …, intvar[lintvr−1] are not referenced.
    mdepth
Type: System.Int32
    On entry: the maximum depth (i.e., number of extra constraints) that h02cb may insert before admitting failure.
    Suggested value: mdepth=3×n/2.
Constraint: mdepth ≥ 1.
    istate
Type: System.Int32[]
    An array of size [n+nclin]
    On entry: need not be set if the (default) optional parameter Cold Start is used.
    If the optional parameter Warm Start has been chosen, istate specifies the desired status of the constraints at the start of the feasibility phase. More precisely, the first n elements of istate refer to the upper and lower bounds on the variables, and the next mL elements refer to the general linear constraints (if any). Possible values for istate[j] are as follows:
istate[j]   Meaning
0           The corresponding constraint should not be in the initial working set.
1           The constraint should be in the initial working set at its lower bound.
2           The constraint should be in the initial working set at its upper bound.
3           The constraint should be in the initial working set as an equality. This value must not be specified unless bl[j] = bu[j].
    The values -2, -1 and 4 are also acceptable but will be reset to zero by the method. If h02cb has been called previously with the same values of n and nclin, istate already contains satisfactory information. (See also the description of the optional parameter Warm Start.) The method also adjusts (if necessary) the values supplied in xs to be consistent with istate.
Constraint: −2 ≤ istate[j] ≤ 4, for j = 0, 1, …, n+nclin−1.
    On exit: the status of the constraints in the working set at the point returned in xs. The significance of each possible value of istate[j] is as follows:
istate[j]   Meaning
−2          The constraint violates its lower bound by more than the feasibility tolerance.
−1          The constraint violates its upper bound by more than the feasibility tolerance.
0           The constraint is satisfied to within the feasibility tolerance, but is not in the working set.
1           This inequality constraint is included in the working set at its lower bound.
2           This inequality constraint is included in the working set at its upper bound.
3           This constraint is included in the working set as an equality. This value of istate can occur only when bl[j] = bu[j].
4           This corresponds to optimality being declared with xs[j] being temporarily fixed at its current value. This value of istate can occur only when ifail = 1 on exit.
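As an illustration of the Warm Start usage described above (a hypothetical fragment, not taken from the Library), istate for a problem with three variables and two general linear constraints might be initialized as follows; entries 0, …, n−1 refer to the bounds on the variables and entries n, …, n+nclin−1 to the general constraints.
C#
static class WarmStartSketch
{
    public static int[] BuildIstate()
    {
        int n = 3, nclin = 2;                // hypothetical problem dimensions
        int[] istate = new int[n + nclin];   // zero: not in the initial working set

        istate[0] = 1;                       // hold x1 at its lower bound
        istate[2] = 2;                       // hold x3 at its upper bound
        istate[n + 1] = 3;                   // second general constraint as an equality
                                             // (valid only if bl[n + 1] == bu[n + 1])
        return istate;
    }
}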
    xs
Type: System.Double[]
    An array of size [n]
    On entry: an initial estimate of the solution.
On exit: the point at which h02cb terminated. If ifail = 0, 1 or 3, xs contains an estimate of the solution.
    obj
Type: System.Double%
    On exit: the value of the objective function at x if x is feasible, or the sum of infeasibilities at x otherwise. If the problem is of type FP and x is feasible, obj is set to zero.
    ax
Type: System.Double[]
An array of size [max(1, nclin)]
    On exit: the final values of the linear constraints Ax.
    If nclin=0, ax is not referenced.
    clamda
Type: System.Double[]
    An array of size [n+nclin]
    On exit: the values of the Lagrange-multipliers for each constraint with respect to the current working set. The first n elements contain the multipliers for the bound constraints on the variables, and the next mL elements contain the multipliers for the general linear constraints (if any). If istate[j]=0 (i.e., constraint j is not in the working set), clamda[j] is zero. If x is optimal, clamda[j] should be non-negative if istate[j]=1, non-positive if istate[j]=2 and zero if istate[j]=4.
    strtgy
Type: System.Int32
    On entry: determines a branching strategy to be used throughout the computation, as follows:
strtgy   Meaning
0        Always left branch first, i.e., impose an upper bound constraint on the variable first.
1        Always right branch first, i.e., impose a lower bound constraint on the variable first.
2        Branch towards the nearest integer, i.e., if xk = 2.4 then impose an upper bound constraint xk ≤ 2, whereas if xk = 2.6 then impose the lower bound constraint xk ≥ 3.0.
3        A random choice is made between a left-hand and a right-hand branch.
    Constraint: strtgy=0, 1, 2 or 3.
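Schematically (this is an illustration of the four options, not the Library's internal code), the choice can be viewed as deciding whether the left branch (upper bound) or the right branch (lower bound) on a fractional value xk is explored first.
C#
using System;

static class BranchChoiceSketch
{
    // Returns true if the left branch (xk <= floor(xk)) should be explored first.
    public static bool LeftBranchFirst(int strtgy, double xk, Random rng)
    {
        switch (strtgy)
        {
            case 0:  return true;                       // always left branch first
            case 1:  return false;                      // always right branch first
            case 2:  return xk - Math.Floor(xk) < 0.5;  // branch towards the nearest integer
            case 3:  return rng.NextDouble() < 0.5;     // random choice
            default: throw new ArgumentOutOfRangeException("strtgy");
        }
    }
}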
    monit
Type: NagLibrary.H.H02CB_MONIT
    monit may be used to print out intermediate output and to affect the course of the computation. Specifically, it allows you to specify a realistic value for the cut-off value (see [Description]) and to terminate the algorithm. If you do not require any intermediate output, have no estimate of the cut-off value and require an exhaustive tree search then monit may be the dummy method H02CBU.

    A delegate of type H02CB_MONIT.

    options
Type: NagLibrary.H.h02cbOptions
    An Object of type H.h02cbOptions. Used to configure optional parameters to this method.
    ifail
Type: System.Int32%
    On exit: ifail=0 unless the method detects an error or a warning has been flagged (see [Error Indicators and Warnings]).

    Description

h02cb uses a ‘Branch and Bound’ algorithm in conjunction with e04nf to try and determine integer solutions to a general quadratic programming problem. The problem is assumed to be stated in the following general form:
   minimize f(x) over x ∈ Rⁿ   subject to   l ≤ ( x, Ax ) ≤ u,
where A is an mL by n matrix and f(x) may be specified in a variety of ways depending upon the particular problem to be solved. The available forms for f(x) are listed in Table 1, in which the prefixes FP, LP and QP stand for ‘feasible point’, ‘linear programming’ and ‘quadratic programming’ respectively and c is an n-element vector.
Problem type   f(x)              Matrix H
FP             Not applicable    Not applicable
LP             cᵀx               Not applicable
QP1            cᵀx + ½xᵀHx       symmetric
QP2            cᵀx + ½xᵀHx       symmetric
QP3            cᵀx + ½xᵀHᵀHx     m by n upper trapezoidal
QP4            cᵀx + ½xᵀHᵀHx     m by n upper trapezoidal
Table 1
    Only when the problem is linear or the matrix H is positive definite can the technique be guaranteed to work; but often useful results can be obtained for a wider class of problems.
    The default problem type is QP2 and other objective functions are selected by using the optional parameter Problem Type. For problems of type FP, the objective function is omitted and h02cb attempts to find a feasible point for the set of constraints.
Branch and bound consists firstly of obtaining a solution without any of the variables x = (x1, x2, …, xn)ᵀ constrained to be integer. Suppose x1 ought to be integer, but at the optimal value just computed x1 = 2.4. A constraint x1 ≤ 2 is added to the system and the second problem solved. A constraint x1 ≥ 3 gives rise to a third sub-problem. In a similar manner a whole series of sub-problems may be generated, corresponding to integer constraints on the variables. The sub-problems are all solved using e04nf.
    In practice the method tries to compute an integer solution as quickly as possible using a depth-first approach, since this helps determine a realistic cut-off value. If we have a cut-off value, say the value of the function at this first integer solution, and any sub-problem, W say, has a solution value greater than this cut-off value, then subsequent sub-problems of W must have solutions greater than the value of the solution at W and therefore need not be computed. Thus a knowledge of a good cut-off value can result in fewer sub-problems being solved and thus speed up the operation of the method. (See the description of monit in [Parameters] for details of how you can supply your own cut-off value.)
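The following schematic shows the depth-first search with a cut-off value for a minimization problem. It is a sketch only, not the implementation used by h02cb; the Relaxation delegate stands in for the continuous solver (the role played by e04nf), and the variable and bound handling is deliberately simplified.
C#
using System;

static class BranchAndBoundSketch
{
    // Solves the continuous relaxation subject to the given bounds;
    // returns false if the relaxation is infeasible.
    public delegate bool Relaxation(double[] lower, double[] upper,
                                    out double[] x, out double obj);

    // lower/upper: current bounds on the variables; intVars: 0-based indices of
    // the variables required to be integer; cutoff: objective value of the best
    // integer solution found so far (the cut-off value).
    public static bool Solve(Relaxation relax, double[] lower, double[] upper,
                             int[] intVars, double tol,
                             ref double cutoff, ref double[] best)
    {
        double[] x;
        double obj;
        if (!relax(lower, upper, out x, out obj)) return false;  // sub-problem infeasible
        if (obj >= cutoff) return false;                         // cannot beat the incumbent

        foreach (int j in intVars)
        {
            double frac = x[j] - Math.Floor(x[j]);
            if (frac > tol && frac < 1.0 - tol)
            {
                // Left branch: impose x[j] <= floor(x[j]).
                double[] upLeft = (double[])upper.Clone();
                upLeft[j] = Math.Floor(x[j]);
                bool found = Solve(relax, lower, upLeft, intVars, tol, ref cutoff, ref best);

                // Right branch: impose x[j] >= ceil(x[j]).
                double[] loRight = (double[])lower.Clone();
                loRight[j] = Math.Ceiling(x[j]);
                found |= Solve(relax, loRight, upper, intVars, tol, ref cutoff, ref best);
                return found;
            }
        }

        // All integer restrictions hold: record the incumbent and tighten the cut-off.
        cutoff = obj;
        best = (double[])x.Clone();
        return true;
    }
}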

    References

    Gill P E, Hammarling S, Murray W, Saunders M A and Wright M H (1986) Users' guide for LSSOL (Version 1.0) Report SOL 86-1 Department of Operations Research, Stanford University
    Gill P E and Murray W (1978) Numerically stable methods for quadratic programming Math. Programming 14 349–372
    Gill P E, Murray W, Saunders M A and Wright M H (1984) Procedures for optimization problems with a mixture of bounds and general linear constraints ACM Trans. Math. Software 10 282–298
    Gill P E, Murray W, Saunders M A and Wright M H (1989) A practical anti-cycling procedure for linearly constrained optimization Math. Programming 45 437–474
    Gill P E, Murray W, Saunders M A and Wright M H (1991) Inertia-controlling methods for general quadratic programming SIAM Rev. 33 1–36
    Gill P E, Murray W and Wright M H (1981) Practical Optimization Academic Press
    Pardalos P M and Schnitger G (1988) Checking local optimality in constrained quadratic programming is NP-hard Operations Research Letters 7 33–35

    Error Indicators and Warnings

    Errors or warnings detected by the method:
Some error messages may refer to parameters that are dropped from this interface (LDA, LDH). In these cases, an error in another parameter has usually caused an incorrect value to be inferred.
    ifail=-1
    Algorithm terminated at your request (halt=true).
    ifail=1
    Input parameter error immediately detected.
    ifail=2
    No integer solution found. (Check that bstval has not been set too small.)
    ifail=3
    mdepth is too small. Increase the value of mdepth and re-enter h02cb.
    ifail=4
    The basic problem (without integer constraints) is unbounded.
    ifail=5
    The basic problem is infeasible.
    ifail=6
    The basic problem requires too many iterations.
    ifail=7
    The basic problem has a reduced Hessian which exceeds its assigned dimension.
    ifail=8
    The basic problem has an invalid parameter setting.
    ifail=9
    The basic problem, as defined, is not standard.
    ifail=10
    liwrk is too small.
    ifail=11
    lwrk is too small.
    ifail=12
    An internal error has occurred within the method. Please contact NAG with details of the call to h02cb.
    ifail=-9000
An error occurred; see message report.
    ifail=-6000
    Invalid Parameters value
    ifail=-4000
    Invalid dimension for array value
    ifail=-8000
    Negative dimension for array value

    Accuracy

    h02cb implements a numerically stable active set strategy and returns solutions that are as accurate as the condition of the problem warrants on the machine.

    Parallelism and Performance

    None.

    Further Comments

    This section contains some comments on scaling and a description of the printed output.

    Scaling

    Sensible scaling of the problem is likely to reduce the number of iterations required and make the problem less sensitive to perturbations in the data, thus improving the condition of the problem. In the absence of better information it is usually sensible to make the Euclidean lengths of each constraint of comparable magnitude. See E04 class and Gill et al. (1981) for further information and advice.

    Description of the Printed Output

This section describes the (default) intermediate printout and final printout produced by h02cb. The intermediate printout is a subset of the monitoring information produced by the method at every iteration (see [Description of Monitoring Information]). You can control the level of printed output (see the description of the Print Level in [Description of the Optional Parameters]). Note that the intermediate printout and final printout are produced only if Print Level ≥ 10 (the default).
    The following line of summary output (<80 characters) is produced at every iteration. In all cases, the values of the quantities printed are those in effect on completion of the given iteration.
    Itn is the iteration count.
    Step is the step taken along the computed search direction. If a constraint is added during the current iteration, Step will be the step to the nearest constraint. When the problem is of type LP, the step can be greater than one during the optimality phase.
    Ninf is the number of violated constraints (infeasibilities). This will be zero during the optimality phase.
    Sinf/Objective is the value of the current objective function. If x is not feasible, Sinf gives a weighted sum of the magnitudes of constraint violations. If x is feasible, Objective is the value of the objective function. The output line for the final iteration of the feasibility phase (i.e., the first iteration for which Ninf is zero) will give the value of the true objective at the first feasible point.
    During the optimality phase, the value of the objective function will be nonincreasing. During the feasibility phase, the number of constraint infeasibilities will not increase until either a feasible point is found, or the optimality of the multipliers implies that no feasible point exists. Once optimal multipliers are obtained, the number of infeasibilities can increase, but the sum of infeasibilities will either remain constant or be reduced until the minimum sum of infeasibilities is found.
Norm Gz is ‖ZRᵀgFR‖, the Euclidean norm of the reduced gradient with respect to ZR (see [Definition of the Search Direction] and [Choosing the Initial Working Set]). During the optimality phase, this norm will be approximately zero after a unit step.
    The final printout includes a listing of the status of every variable and constraint.
    The following describes the printout for each variable. A full stop (.) is printed for any numerical value that is zero.
    A key is sometimes printed before State to give some additional information about the state of a variable.
Varbl gives the name (V) and index j, for j = 1, 2, …, n, of the variable.
State gives the state of the variable (FR if neither bound is in the working set, EQ if a fixed variable, LL if on its lower bound, UL if on its upper bound, TF if temporarily fixed at its current value). If Value lies outside the upper or lower bounds by more than the Feasibility Tolerance (default value = √ε, where ε is the machine precision; see [Description of the Optional Parameters]), State will be ++ or -- respectively.
    A Alternative optimum possible. The variable is active at one of its bounds, but its Lagrange-multiplier is essentially zero. This means that if the variable were allowed to start moving away from its bound, there would be no change to the objective function. The values of the other free variables might change, giving a genuine alternative solution. However, if there are any degenerate variables (labelled D), the actual change might prove to be zero, since one of them could encounter a bound immediately. In either case the values of the Lagrange-multipliers might also change.
    D Degenerate. The variable is free, but it is equal to (or very close to) one of its bounds.
    I Infeasible. The variable is currently violating one of its bounds by more than the Feasibility Tolerance.
    Value is the value of the variable at the final iterate.
Lower Bound is the lower bound specified for the variable. None indicates that bl[j] ≤ −bigbnd.
Upper Bound is the upper bound specified for the variable. None indicates that bu[j] ≥ bigbnd.
Slack is the difference between the variable Value and the nearer of its (finite) bounds bl[j] and bu[j]. A blank entry indicates that the associated variable is not bounded (i.e., bl[j] ≤ −bigbnd and bu[j] ≥ bigbnd).
The meaning of the printout for general constraints is the same as that given above for variables, with ‘variable’ replaced by ‘constraint’, bl[j] and bu[j] replaced by bl[n+j] and bu[n+j] respectively, and with the following change in the heading:
L Con gives the name (L) and index j, for j = 1, 2, …, mL, of the constraint.
    Note that movement off a constraint (as opposed to a variable moving away from its bound) can be interpreted as allowing the entry in the Slack column to become positive.
    Numerical values are output with a fixed number of digits; they are not guaranteed to be accurate to this precision.

    Example

This example minimizes the quadratic function f(x) = cᵀx + ½xᵀHx, where
   c = (−0.02, −0.2, −0.2, −0.2, −0.2, 0.04, 0.04)ᵀ
   H = ( 2   0   0   0   0   0   0 )
       ( 0   2   0   0   0   0   0 )
       ( 0   0   2   2   0   0   0 )
       ( 0   0   2   2   0   0   0 )
       ( 0   0   0   0   2   0   0 )
       ( 0   0   0   0   0  −2  −2 )
       ( 0   0   0   0   0  −2  −2 )
subject to the bounds
   −0.01 ≤ x1 ≤ 0.01
   −0.10 ≤ x2 ≤ 0.15
   −0.01 ≤ x3 ≤ 0.03
   −0.04 ≤ x4 ≤ 0.02
   −0.10 ≤ x5 ≤ 0.05
   −0.01 ≤ x6
   −0.01 ≤ x7
to the general constraints
   x1 + x2 + x3 + x4 + x5 + x6 + x7 = −0.13
   0.15x1 + 0.04x2 + 0.02x3 + 0.04x4 + 0.02x5 + 0.01x6 + 0.03x7 ≤ −0.0049
   0.03x1 + 0.05x2 + 0.08x3 + 0.02x4 + 0.06x5 + 0.01x6 ≤ −0.0064
   0.02x1 + 0.04x2 + 0.01x3 + 0.02x4 + 0.02x5 ≤ −0.0037
   0.02x1 + 0.03x2 + 0.01x5 ≤ −0.0012
   −0.0992 ≤ 0.70x1 + 0.75x2 + 0.80x3 + 0.75x4 + 0.80x5 + 0.97x6
   −0.003 ≤ 0.02x1 + 0.06x2 + 0.08x3 + 0.12x4 + 0.02x5 + 0.01x6 + 0.97x7 ≤ −0.002
    and the variable x4 is constrained to be integer.
    The initial point, which is infeasible, is
x0 = (−0.01, −0.03, 0.0, −0.01, −0.1, 0.02, 0.01)ᵀ.
    The optimal solution (to five figures) is
x* = (−0.01, −0.073328, −0.00025809, 0.0, −0.063354, 0.014109, 0.0028312)ᵀ.
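For reference, the data for this problem might be declared in C# as follows; this is only a sketch of the data layout, and the complete program, including the call to h02cb itself, is supplied as h02cbe.cs.
C#
static class ExampleData
{
    // Linear term c of the objective function.
    public static readonly double[] cvec =
        { -0.02, -0.2, -0.2, -0.2, -0.2, 0.04, 0.04 };

    // Quadratic term H (only the diagonal and upper triangle are referenced
    // by the default qphess).
    public static readonly double[,] h =
    {
        {  2.0,  0.0,  0.0,  0.0,  0.0,  0.0,  0.0 },
        {  0.0,  2.0,  0.0,  0.0,  0.0,  0.0,  0.0 },
        {  0.0,  0.0,  2.0,  2.0,  0.0,  0.0,  0.0 },
        {  0.0,  0.0,  2.0,  2.0,  0.0,  0.0,  0.0 },
        {  0.0,  0.0,  0.0,  0.0,  2.0,  0.0,  0.0 },
        {  0.0,  0.0,  0.0,  0.0,  0.0, -2.0, -2.0 },
        {  0.0,  0.0,  0.0,  0.0,  0.0, -2.0, -2.0 }
    };

    // x4 is required to be integer; the non-positive entry terminates the list.
    public static readonly int[] intvar = { 4, 0 };

    // Initial (infeasible) estimate of the solution.
    public static readonly double[] xs =
        { -0.01, -0.03, 0.0, -0.01, -0.1, 0.02, 0.01 };
}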

    Example program (C#): h02cbe.cs

    Example program data: h02cbe.d

    Example program results: h02cbe.r

    Algorithmic Details

    h02cb implements a basic branch and bound algorithm (see [Description]) using e04nf as its basic sub-problem solver. See below for details of its algorithm.

    Overview

    h02cb is based on an inertia-controlling method that maintains a Cholesky factorization of the reduced Hessian (see below). The method is based on that of Gill and Murray (1978), and is described in detail by Gill et al. (1991). Here we briefly summarise the main features of the method. Where possible, explicit reference is made to the names of variables that are parameters of h02cb or appear in the printed output. h02cb has two phases:
    (i) finding an initial feasible point by minimizing the sum of infeasibilities (the feasibility phase), and
    (ii) minimizing the quadratic objective function within the feasible region (the optimality phase).
The computations in both phases are performed by the same methods. The two-phase nature of the algorithm is reflected by changing the function being minimized from the sum of infeasibilities to the quadratic objective function. The feasibility phase does not perform the standard simplex method (i.e., it does not necessarily find a vertex), except in the LP case when mL ≤ n. Once any iterate is feasible, all subsequent iterates remain feasible.
    h02cb has been designed to be efficient when used to solve a sequence of related problems – for example, within a sequential quadratic programming method for nonlinearly constrained optimization (e.g., e04wd). In particular, you may specify an initial working set (the indices of the constraints believed to be satisfied exactly at the solution); see the discussion of the Warm Start in [Description of the Optional Parameters].
In general, an iterative process is required to solve a quadratic program. (For simplicity, we shall always consider a typical iteration and avoid reference to the index of the iteration.) Each new iterate x̄ is defined by
   x̄ = x + αp,   (1)
where the step length α is a non-negative scalar, and p is called the search direction.
    At each point x, a working set of constraints is defined to be a linearly independent subset of the constraints that are satisfied ‘exactly’ (to within the tolerance defined by the Feasibility Tolerance; see [Description of the Optional Parameters]). The working set is the current prediction of the constraints that hold with equality at the solution of a linearly constrained QP problem. The search direction is constructed so that the constraints in the working set remain unaltered for any value of the step length. For a bound constraint in the working set, this property is achieved by setting the corresponding element of the search direction to zero. Thus, the associated variable is fixed, and specification of the working set induces a partition of x into fixed and free variables. During a given iteration, the fixed variables are effectively removed from the problem; since the relevant elements of the search direction are zero, the columns of A corresponding to fixed variables may be ignored.
    Let mW denote the number of general constraints in the working set and let nFX denote the number of variables fixed at one of their bounds (mW and nFX are the quantities Lin and Bnd in the monitoring file output from h02cb; see [Description of Monitoring Information]). Similarly, let nFR (nFR=n-nFX) denote the number of free variables. At every iteration, the variables are reordered so that the last nFX variables are fixed, with all other relevant vectors and matrices ordered accordingly.

    Definition of the Search Direction

Let AFR denote the mW by nFR sub-matrix of general constraints in the working set corresponding to the free variables, and let pFR denote the search direction with respect to the free variables only. The general constraints in the working set will be unaltered by any move along p if
   AFR pFR = 0.   (2)
In order to compute pFR, the TQ factorization of AFR is used:
   AFR QFR = ( 0  T ),   (3)
where T is a nonsingular mW by mW upper triangular matrix (i.e., tij = 0 if i > j), and the nonsingular nFR by nFR matrix QFR is the product of orthogonal transformations (see Gill et al. (1984)). If the columns of QFR are partitioned so that
   QFR = ( Z  Y ),
where Y is nFR by mW, then the nZ (nZ = nFR − mW) columns of Z form a basis for the null space of AFR. Let nR be an integer such that 0 ≤ nR ≤ nZ, and let ZR denote a matrix whose nR columns are a subset of the columns of Z. (The integer nR is the quantity Zr in the monitoring output from h02cb. In many cases, ZR will include all the columns of Z.) The direction pFR will satisfy (2) if
   pFR = ZR pR,   (4)
where pR is any nR-vector.
Let Q denote the n by n matrix
   Q = ( QFR   0  )
       (  0   IFX ),
where IFX is the identity matrix of order nFX. Let HQ and gQ denote the n by n transformed Hessian and transformed gradient
   HQ = QᵀHQ   and   gQ = Qᵀ(c + Hx),
and let the matrix of first nR rows and columns of HQ be denoted by HR and the vector of the first nR elements of gQ be denoted by gR. The quantities HR and gR are known as the reduced Hessian and reduced gradient of f(x), respectively. Roughly speaking, gR and HR describe the first and second derivatives of an unconstrained problem for the calculation of pR.
At each iteration, a triangular factorization of HR is available. If HR is positive definite, HR = RᵀR, where R is the upper triangular Cholesky factor of HR. If HR is not positive definite, HR = RᵀDR, where D = diag(1, 1, …, 1, μ), with μ ≤ 0.
The computation is arranged so that the reduced-gradient vector is a multiple of eR, a vector of all zeros except in the last (i.e., nRth) position. This allows the vector pR in (4) to be computed from a single back-substitution
   R pR = γ eR,   (5)
where γ is a scalar that depends on whether or not the reduced Hessian is positive definite at x. In the positive definite case, x + p is the minimizer of the objective function subject to the constraints (bounds and general) in the working set treated as equalities. If HR is not positive definite, pR satisfies the conditions
   pRᵀ HR pR < 0   and   gRᵀ pR ≤ 0,
which allow the objective function to be reduced by any positive step of the form x + αp.
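Because R is upper triangular and the right-hand side γeR is nonzero only in its last element, (5) can be solved by a single back-substitution, as in the following generic sketch (not the Library's internal code).
C#
static class BackSubstitutionSketch
{
    // Solves R*p = gamma*e for an upper triangular R, where e is zero except
    // for a one in its last position.
    public static double[] Solve(double[,] r, double gamma)
    {
        int nR = r.GetLength(0);
        double[] p = new double[nR];
        for (int i = nR - 1; i >= 0; i--)
        {
            double rhs = (i == nR - 1) ? gamma : 0.0;
            for (int j = i + 1; j < nR; j++)
                rhs -= r[i, j] * p[j];
            p[i] = rhs / r[i, i];
        }
        return p;
    }
}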

    The Main Iteration

    If the reduced gradient is zero, x is a constrained stationary point in the subspace defined by Z. During the feasibility phase, the reduced gradient will usually be zero only at a vertex (although it may be zero at non-vertices in the presence of constraint dependencies). During the optimality phase, a zero reduced gradient implies that x minimizes the quadratic objective when the constraints in the working set are treated as equalities. At a constrained stationary point, Lagrange-multipliers λC and λB for the general and bound constraints are defined from the equations
   AFRᵀ λC = gFR   and   λB = gFX − AFXᵀ λC.   (6)
Given a positive constant δ of the order of the machine precision, a Lagrange-multiplier λj corresponding to an inequality constraint in the working set is said to be optimal if λj ≤ δ when the associated constraint is at its upper bound, or if λj ≥ −δ when the associated constraint is at its lower bound. If a multiplier is nonoptimal, the objective function (either the true objective or the sum of infeasibilities) can be reduced by deleting the corresponding constraint (with index Jdel; see [Description of Monitoring Information]) from the working set.
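Expressed as a small predicate (an illustrative sketch only):
C#
static class MultiplierTestSketch
{
    // Optimality test for the multiplier of an inequality constraint in the
    // working set: lambda <= delta at an upper bound, lambda >= -delta at a
    // lower bound, where delta is of the order of the machine precision.
    public static bool IsOptimal(double lambda, bool atUpperBound, double delta)
    {
        return atUpperBound ? lambda <= delta : lambda >= -delta;
    }
}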
If optimal multipliers occur during the feasibility phase and the sum of infeasibilities is nonzero, there is no feasible point, and you can force h02cb to continue until the minimum value of the sum of infeasibilities has been found; see the discussion of the Minimum Sum of Infeasibilities in [Description of the Optional Parameters]. At such a point, the Lagrange-multiplier λj corresponding to an inequality constraint in the working set will be such that −(1 + δ) ≤ λj ≤ δ when the associated constraint is at its upper bound, and −δ ≤ λj ≤ 1 + δ when the associated constraint is at its lower bound. Lagrange-multipliers for equality constraints will satisfy |λj| ≤ 1 + δ.
    If the reduced gradient is not zero, Lagrange-multipliers need not be computed and the nonzero elements of the search direction p are given by ZRpR (see (4) and (5)). The choice of step length is influenced by the need to maintain feasibility with respect to the satisfied constraints. If HR is positive definite and x+p is feasible, α will be taken as unity. In this case, the reduced gradient at x- will be zero, and Lagrange-multipliers are computed. Otherwise, α is set to αM, the step to the ‘nearest’ constraint (with index Jadd; see [Description of Monitoring Information]), which is added to the working set at the next iteration.
Each change in the working set leads to a simple change to AFR: if the status of a general constraint changes, a row of AFR is altered; if a bound constraint enters or leaves the working set, a column of AFR changes. Explicit representations are recurred of the matrices T, QFR and R, and of the vectors Qᵀg and Qᵀc. The triangular factor R associated with the reduced Hessian is only updated during the optimality phase.
    One of the most important features of h02cb is its control of the conditioning of the working set, whose nearness to linear dependence is estimated by the ratio of the largest to smallest diagonal elements of the TQ factor T (the printed value Cond T; see [Description of Monitoring Information]). In constructing the initial working set, constraints are excluded that would result in a large value of Cond T.
h02cb includes a rigorous procedure that prevents the possibility of cycling at a point where the active constraints are nearly linearly dependent (see Gill et al. (1989)). The main feature of the anti-cycling procedure is that the feasibility tolerance is increased slightly at the start of every iteration. This not only allows a positive step to be taken at every iteration, but also provides, whenever possible, a choice of constraints to be added to the working set. Let αM denote the maximum step at which x + αMp does not violate any constraint by more than its feasibility tolerance. All constraints at a distance α (α ≤ αM) along p from the current point are then viewed as acceptable candidates for inclusion in the working set. The constraint whose normal makes the largest angle with the search direction is added to the working set.

    Choosing the Initial Working Set

    At the start of the optimality phase, a positive definite HR can be defined if enough constraints are included in the initial working set. (The matrix with no rows and columns is positive definite by definition, corresponding to the case when AFR contains nFR constraints.) The idea is to include as many general constraints as necessary to ensure that the reduced Hessian is positive definite.
Let HZ denote the matrix of the first nZ rows and columns of the matrix HQ = QᵀHQ at the beginning of the optimality phase. A partial Cholesky factorization is used to find an upper triangular matrix R that is the factor of the largest positive definite leading sub-matrix of HZ. The use of interchanges during the factorization of HZ tends to maximize the dimension of R. (The condition of R may be controlled using the optional parameter Rank Tolerance.) Let ZR denote the columns of Z corresponding to R, and let Z be partitioned as Z = ( ZR  ZA ). A working set for which ZR defines the null space can be obtained by including the rows of ZAᵀ as ‘artificial constraints’. Minimization of the objective function then proceeds within the subspace defined by ZR, as described in [Definition of the Search Direction].
    The artificially augmented working set is given by
   ĀFR = ( ZAᵀ )
         ( AFR ),   (7)
that is, the rows of ZAᵀ stacked on top of AFR, so that pFR will satisfy AFR pFR = 0 and ZAᵀ pFR = 0. By definition of the TQ factorization, ĀFR automatically satisfies the following:
   ĀFR QFR = ( ZAᵀ ) QFR = ( ZAᵀ ) ( ZR  ZA  Y ) = ( 0  T̄ ),
             ( AFR )       ( AFR )
where
   T̄ = ( I  0 )
       ( 0  T ),
and hence the TQ factorization of (7) is available trivially from T and QFR without additional expense.
The matrix ZA is not kept fixed, since its role is purely to define an appropriate null space; the TQ factorization can therefore be updated in the normal fashion as the iterations proceed. No work is required to ‘delete’ the artificial constraints associated with ZA when ZRᵀgFR = 0, since this simply involves repartitioning QFR. The ‘artificial’ multiplier vector associated with the rows of ZAᵀ is equal to ZAᵀgFR, and the multipliers corresponding to the rows of the ‘true’ working set are the multipliers that would be obtained if the artificial constraints were not present. If an artificial constraint is ‘deleted’ from the working set, an A appears alongside the entry in the Jdel column of the monitoring file output (see [Description of Monitoring Information]).
The number of columns in ZA and ZR, the Euclidean norm of ZRᵀgFR, and the condition estimator of R appear in the monitoring file output as Art, Zr, Norm Gz and Cond Rz respectively (see [Description of Monitoring Information]).
Under some circumstances, a different type of artificial constraint is used when solving a linear program. Although the algorithm of h02cb does not usually perform simplex steps (in the traditional sense), there is one exception: a linear program with fewer general constraints than variables (i.e., mL ≤ n). (Use of the simplex method in this situation leads to savings in storage.) At the starting point, the ‘natural’ working set (the set of constraints exactly or nearly satisfied at the starting point) is augmented with a suitable number of ‘temporary’ bounds, each of which has the effect of temporarily fixing a variable at its current value. In subsequent iterations, a temporary bound is treated as a standard constraint until it is deleted from the working set, in which case it is never added again. If a temporary bound is ‘deleted’ from the working set, an F (for ‘Fixed’) appears alongside the entry in the Jdel column of the monitoring file output (see [Description of Monitoring Information]).

    Description of Monitoring Information

    This section describes the long line of output (>80 characters) which forms part of the monitoring information produced by h02cb. (See also the description of the optional parameters Monitoring File and Print Level in [Description of the Optional Parameters].) You can control the level of printed output.
    To aid interpretation of the printed results, the following convention is used for numbering the constraints: indices 1 through n refer to the bounds on the variables, and indices n+1 through n+mL refer to the general constraints. When the status of a constraint changes, the index of the constraint is printed, along with the designation L (lower bound), U (upper bound), E (equality), F (temporarily fixed variable) or A (artificial constraint).
When Print Level ≥ 5 and Monitoring File ≥ 0, the following line of output is produced at every iteration on the unit number specified by Monitoring File. In all cases, the values of the quantities printed are those in effect on completion of the given iteration.
    Itn is the iteration count.
    Jdel is the index of the constraint deleted from the working set. If Jdel is zero, no constraint was deleted.
    Jadd is the index of the constraint added to the working set. If Jadd is zero, no constraint was added.
    Step is the step taken along the computed search direction. If a constraint is added during the current iteration, Step will be the step to the nearest constraint. When the problem is of type LP, the step can be greater than one during the optimality phase.
    Ninf is the number of violated constraints (infeasibilities). This will be zero during the optimality phase.
    Sinf/Objective is the value of the current objective function. If x is not feasible, Sinf gives a weighted sum of the magnitudes of constraint violations. If x is feasible, Objective is the value of the objective function. The output line for the final iteration of the feasibility phase (i.e., the first iteration for which Ninf is zero) will give the value of the true objective at the first feasible point.
    During the optimality phase, the value of the objective function will be nonincreasing. During the feasibility phase, the number of constraint infeasibilities will not increase until either a feasible point is found, or the optimality of the multipliers implies that no feasible point exists. Once optimal multipliers are obtained, the number of infeasibilities can increase, but the sum of infeasibilities will either remain constant or be reduced until the minimum sum of infeasibilities is found.
    Bnd is the number of simple bound constraints in the current working set.
    Lin is the number of general linear constraints in the current working set.
    Art is the number of artificial constraints in the working set, i.e., the number of columns of ZA (see [Choosing the Initial Working Set]).
Zr is the number of columns of ZR (see [Definition of the Search Direction]). Zr is the dimension of the subspace in which the objective function is currently being minimized. The value of Zr is the number of variables minus the number of constraints in the working set; i.e., Zr = n − (Bnd + Lin + Art).
The value of nZ, the number of columns of Z (see [Definition of the Search Direction]), can be calculated as nZ = n − (Bnd + Lin). A zero value of nZ implies that x lies at a vertex of the feasible region.
Norm Gz is ‖ZRᵀgFR‖, the Euclidean norm of the reduced gradient with respect to ZR (see [Definition of the Search Direction] and [Choosing the Initial Working Set]). During the optimality phase, this norm will be approximately zero after a unit step.
    NOpt is the number of nonoptimal Lagrange-multipliers at the current point. NOpt is not printed if the current x is infeasible or no multipliers have been calculated. At a minimizer, NOpt will be zero.
    Min Lm is the value of the Lagrange-multiplier associated with the deleted constraint. If Min Lm is negative, a lower bound constraint has been deleted, if Min Lm is positive, an upper bound constraint has been deleted. If no multipliers are calculated during a given iteration, Min Lm will be zero.
    Cond T is a lower bound on the condition number of the working set.
    Cond Rz is a lower bound on the condition number of the triangular factor R (the Cholesky factor of the current reduced Hessian; see [Definition of the Search Direction]). If the problem is specified to be of type LP, Cond Rz is not printed.
Rzz is the last diagonal element μ of the matrix D associated with the RᵀDR factorization of the reduced Hessian HR (see [Definition of the Search Direction]). Rzz is only printed if HR is not positive definite (in which case μ ≠ 1). If the printed value of Rzz is small in absolute value, then HR is approximately singular. A negative value of Rzz implies that the objective function has negative curvature on the current working set.

    See Also