
NAG Library Routine Document


Note:  before using this routine, please read the Users' Note for your implementation to check the interpretation of bold italicised terms and other implementation-dependent details.



1  Purpose

E04SVF is a solver from the NAG optimization modelling suite for problems such as quadratic programming (QP), linear semidefinite programming (SDP) and semidefinite programming with bilinear matrix inequalities (BMI-SDP).

2  Specification

SUBROUTINE E04SVF (HANDLE, NVAR, X, NNZU, U, NNZUC, UC, NNZUA, UA, RINFO, STATS, INFORM, IFAIL)
INTEGER            NVAR, NNZU, NNZUC, NNZUA, INFORM, IFAIL
REAL (KIND=nag_wp) X(NVAR), U(NNZU), UC(NNZUC), UA(NNZUA), RINFO(32), STATS(32)
TYPE (C_PTR)       HANDLE

3  Description

E04SVF serves as a solver for compatible problems stored as a handle. The handle points to an internal data structure which defines the problem and serves as a means of communication for routines in the suite. First, the problem handle is initialized by E04RAF. Then some of the routines E04REF, E04RFF, E04RHF, E04RJF, E04RNF or E04RPF may be used to formulate the objective function, (standard) constraints and matrix constraints of the problem. Once the problem is fully set, the handle may be passed to the solver. When the handle is no longer needed, E04RZF should be called to destroy it and deallocate the memory held within. See E04RAF for more details.
Problems which can be defined this way are, for example, (generally nonconvex) quadratic programming (QP)
  minimize_{x∈ℝ^n}   (1/2) x^T H x + c^T x        (a)
  subject to         l_B ≤ Bx ≤ u_B               (b)
                     l_x ≤ x ≤ u_x ,              (c)        (1)
linear semidefinite programming problems (SDP)
  minimize_{x∈ℝ^n}   c^T x                                          (a)
  subject to         Σ_{i=1}^{n} x_i A_i^k − A_0^k ⪰ 0 ,  k=1,…,m_A   (b)
                     l_B ≤ Bx ≤ u_B                                 (c)
                     l_x ≤ x ≤ u_x                                  (d)        (2)
or semidefinite programming problems with bilinear matrix inequalities (BMI-SDP)
  minimize_{x∈ℝ^n}   (1/2) x^T H x + c^T x                                                    (a)
  subject to         Σ_{i,j=1}^{n} x_i x_j Q_{ij}^k + Σ_{i=1}^{n} x_i A_i^k − A_0^k ⪰ 0 ,  k=1,…,m_A   (b)
                     l_B ≤ Bx ≤ u_B                                                           (c)
                     l_x ≤ x ≤ u_x .                                                          (d)        (3)
Here c, l_x and u_x are n-dimensional vectors, H is a symmetric n by n matrix, l_B, u_B are m_B-dimensional vectors, B is a general m_B by n rectangular matrix and A_i^k, Q_{ij}^k are symmetric matrices. The expression S ⪰ 0 stands for a constraint on the eigenvalues of a symmetric matrix S, namely, all the eigenvalues should be non-negative, i.e., the matrix should be positive semidefinite. See the relevant routines of the suite for more details on the problem formulation.
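For a small matrix the eigenvalue condition behind S ⪰ 0 can be checked in closed form. The sketch below (a pure-Python illustration, not part of the NAG interface; the helper names are hypothetical) tests positive semidefiniteness of a 2 by 2 symmetric matrix:

```python
import math

def eigvals_sym2(a, b, c):
    """Eigenvalues of the 2x2 symmetric matrix [[a, b], [b, c]]."""
    mean = (a + c) / 2.0
    r = math.hypot((a - c) / 2.0, b)   # half-spread of the eigenvalue pair
    return mean - r, mean + r

def is_psd2(a, b, c, tol=1e-12):
    """S >= 0 holds iff the smallest eigenvalue is non-negative."""
    lo, _ = eigvals_sym2(a, b, c)
    return lo >= -tol

# [[2, 1], [1, 2]] has eigenvalues 1 and 3 -> positive semidefinite
print(is_psd2(2, 1, 2))   # True
# [[1, 2], [2, 1]] has eigenvalues -1 and 3 -> not positive semidefinite
print(is_psd2(1, 2, 1))   # False
```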
The solver is based on a generalized Augmented Lagrangian method with a suitable choice of standard and matrix penalty functions. For a detailed description of the algorithm see Section 11. Under standard assumptions on the problem (Slater constraint qualification, boundedness of the objective function on the feasible set, see Stingl (2006) for details) the algorithm converges to a local solution. In case of convex problems such as linear SDP or convex QP, this is the global solution. The solver is suitable for both small dense and large-scale sparse problems.
The algorithm behaviour and solver strategy can be modified by various optional parameters (see Section 12) which can be set by E04ZMF and E04ZPF anytime between the initialization of the handle by E04RAF and a call to the solver. Once the solver has finished, options may be modified for the next solve. The solver may be called repeatedly with various starting points and/or optional parameters.
There are several optional parameters with a multiple choice whose default is AUTO (for example, Hessian Density). This value means that the decision is left to the solver based on the structure of the problem. The option getter E04ZNF can be called to retrieve the choice made for these options as well as the value of any other option.
Optional parameter Task may be used to switch the problem to maximization or to ignore the objective function and find only a feasible point.
Optional parameter Monitor Frequency may be used to turn on the monitor mode of the solver. The solver invoked in this mode pauses regularly even before the optimal point is found to allow monitoring the progress from the calling program. All the important error measures and statistics are available in the calling program which may terminate the solver early if desired (see argument INFORM).

3.1  Structure of the Lagrangian Multipliers

The algorithm works internally with estimates of both the decision variables, denoted by x, and the Lagrangian multipliers (dual variables) for standard and matrix constraints, denoted by u and U, respectively. You may provide initial estimates, request approximations during the run (if the monitor mode is turned on) and obtain the final values. The Lagrangian multipliers are split into two arrays: the multipliers u for (standard) constraints are stored in array U and the multipliers U for matrix constraints in array UA. Both arrays need to conform to the structure of the constraints.
If the simple bounds were defined (E04RHF was successfully called), the first 2n elements of U belong to the corresponding Lagrangian multipliers, interleaving a multiplier for the lower and for the upper bound for each xi. If any of the bounds were set to infinity, the corresponding Lagrangian multipliers are set to 0 and may be ignored.
Similarly, the following 2mB elements of U belong to multipliers for the linear constraints, if formulated by E04RJF. The organization is the same, i.e., the multipliers for each constraint for the lower and upper bounds are alternated and zeroes are used for any missing (infinite bound) constraint.
A Lagrangian multiplier for a matrix constraint (one block) of dimension d by d is a dense symmetric matrix of the same dimension. All multipliers U are stored next to each other in array UA in the same order as the matrix constraints were defined by E04RNF and E04RPF. The lower triangle of each is stored in packed column order (see Section 3.3.2 in the F07 Chapter Introduction). For example, if there are four matrix constraints of dimensions 7, 3, 1, 1, the dimension of array UA should be 36: the first 28 = d_1(d_1+1)/2 elements belong to the packed lower triangle of U_1, followed by six elements of U_2 and one element for each of U_3 and U_4. See, for example, Section 10 in E04RDF.
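The storage rule above can be sketched as follows (a pure-Python illustration of the indexing only; ua_dimension and packed_index are hypothetical helper names, not NAG routines):

```python
def ua_dimension(block_dims):
    """Total length of UA: each d-by-d matrix-constraint block contributes
    d*(d+1)//2 elements (its packed lower triangle)."""
    return sum(d * (d + 1) // 2 for d in block_dims)

def packed_index(d, i, j):
    """1-based position of element (i, j), i >= j, within one d-by-d block
    whose lower triangle is stored in packed column order."""
    # columns 1..j-1 occupy (j-1)*(2d-j)/2 entries in total
    return i + (j - 1) * (2 * d - j) // 2

# Four matrix constraints of dimensions 7, 3, 1, 1 (the example above)
print(ua_dimension([7, 3, 1, 1]))   # 36  (= 28 + 6 + 1 + 1)
# Element (3, 2) of the second (3-by-3) block sits after the first 28 entries
print(28 + packed_index(3, 3, 2))   # 33
```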

3.2  Approximation of the Lagrangian Multipliers

By the nature of the algorithm, all inequality Lagrangian multipliers u, U are kept strictly positive during the computational process. This applies even to Lagrangian multipliers of constraints which are inactive at the solution: they will only be close to zero, whereas they would normally be exactly zero. This is one of the major differences between results from solvers based on the active set method (such as E04NQF) and others, such as E04SVF or interior point methods. As a consequence, the initial estimate of the multipliers (if provided) might be adjusted by the solver to be sufficiently positive; moreover, the estimates returned during the intermediate exits might be only a very crude approximation to their final values as they do not satisfy all the Karush–Kuhn–Tucker (KKT) conditions.
Another difference is that E04NQF merges the multipliers for the lower and upper inequality into one element whose sign determines which bound it belongs to: there can be at most one active bound per constraint and the multiplier of an inactive bound is exactly zero. Negative multipliers are associated with the upper bounds and positive multipliers with the lower bounds. On the other hand, E04SVF works with both multipliers at the same time, so they are returned in two elements, one for the lower bound and the other for the upper bound (see Section 3.1). An equivalent result can be achieved by subtracting the upper bound multiplier from the lower one. This holds even when equalities are interpreted as two inequalities (see optional parameter Transform Constraints).
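The conversion described above can be sketched in a few lines (an illustrative Python snippet; merge_multipliers is a hypothetical helper, not a NAG routine):

```python
def merge_multipliers(u_pairs):
    """Convert (lower, upper) multiplier pairs, as returned in U by this
    solver, to the single signed convention used by active-set solvers:
    positive for an active lower bound, negative for an active upper bound."""
    return [lo - up for lo, up in u_pairs]

# first constraint active at its lower bound, second at its upper bound
print(merge_multipliers([(1.5, 0.0), (0.0, 2.0)]))   # [1.5, -2.0]
```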

4  References

Ben–Tal A and Zibulevsky M (1997) Penalty/barrier multiplier methods for convex programming problems SIAM Journal on Optimization 7 347–366
Fujisawa K, Kojima M, Nakata K (1997) Exploiting sparsity in primal-dual interior-point method for semidefinite programming Math. Programming 79 235–253
Hogg J D and Scott J A (2011) HSL MA97: a bit-compatible multifrontal code for sparse symmetric systems RAL Technical Report. RAL-TR-2011-024
Kočvara M and Stingl M (2003) PENNON – a code for convex nonlinear and semidefinite programming Optimization Methods and Software 18(3) 317–333
Kočvara M and Stingl M (2007) On the solution of large-scale SDP problems by the modified barrier method using iterative solvers Math. Programming (Series B) 109(2–3) 413–444
Mittelmann H D (2003) An independent benchmarking of SDP and SOCP solvers Math. Programming 95 407–430
Stingl M (2006) On the Solution of Nonlinear Semidefinite Programs by Augmented Lagrangian Methods, PhD thesis Institute of Applied Mathematics II, Friedrich–Alexander University of Erlangen–Nuremberg

5  Arguments

1:     HANDLE – TYPE (C_PTR)   Input
On entry: the handle to the problem. It needs to be initialized by E04RAF and the problem formulated by some of the routines E04REF, E04RFF, E04RHF, E04RJF, E04RNF and E04RPF. It must not be changed between the calls.
2:     NVAR – INTEGER   Input
On entry: n, the number of decision variables x in the problem. It must be unchanged from the value set during the initialization of the handle by E04RAF.
3:     X(NVAR) – REAL (KIND=nag_wp) array   Input/Output
Note: intermediate stops take place only if Monitor Frequency>0.
On entry: if Initial X=USER (the default), x0, the initial estimate of the variables x, otherwise X need not be set.
On intermediate exit: the value of the variables x at the end of the current outer iteration.
On intermediate re-entry: the input is ignored.
On final exit: the final value of the variables x.
4:     NNZU – INTEGER   Input
On entry: the dimension of array U.
If NNZU=0, U will not be referenced; otherwise it needs to match the dimension of constraints defined by E04RHF and E04RJF as explained in Section 3.1.
Constraint: NNZU ≥ 0.
5:     U(NNZU) – REAL (KIND=nag_wp) array   Input/Output
Note: intermediate stops take place only if Monitor Frequency>0.
If NNZU>0, U holds the Lagrangian multipliers (dual variables) for (standard) constraints, i.e., the simple bounds defined by E04RHF and the set of m_B linear constraints defined by E04RJF; it contains either their initial estimates, intermediate approximations or final values, see Section 3.1.
If NNZU=0, U will not be referenced.
On entry: if Initial U=USER (the default is AUTOMATIC), u0, the initial estimate of the Lagrangian multipliers u, otherwise U need not be set.
On intermediate exit: the estimate of the multipliers u at the end of the current outer iteration.
On intermediate re-entry: the input is ignored.
On final exit: the final value of the multipliers u.
6:     NNZUC – INTEGER   Input
On entry: the dimension of array UC. If NNZUC=0, UC will not be referenced. This argument is reserved for future releases of the NAG Library which will allow definition of second order cone constraints. It needs to be set to 0 at the moment.
Constraint: NNZUC=0.
7:     UC(NNZUC) – REAL (KIND=nag_wp) array   Input/Output
UC is reserved for future releases of the NAG Library which will allow definition of second order cone constraints. It is not referenced at the moment.
8:     NNZUA – INTEGER   Input
On entry: the dimension of array UA. If NNZUA=0, UA will not be referenced; otherwise it needs to match the total number of nonzeros in all matrix Lagrangian multipliers (constraints defined by E04RNF and E04RPF) as explained in Section 3.1.
Constraint: NNZUA ≥ 0.
9:     UA(NNZUA) – REAL (KIND=nag_wp) array   Input/Output
Note: intermediate stops take place only if Monitor Frequency>0.
If NNZUA>0, UA holds the Lagrangian multipliers for matrix constraints defined by E04RNF and E04RPF, see Section 3.1.
If NNZUA=0, UA will not be referenced.
On entry: if Initial U=USER (the default is AUTOMATIC), U0, the initial estimate of the matrix Lagrangian multipliers U, otherwise UA need not be set.
On intermediate exit: the estimate of the matrix multipliers U at the end of the outer iteration.
On intermediate re-entry: the input is ignored.
On final exit: the final estimate of the multipliers U.
10:   RINFO(32) – REAL (KIND=nag_wp) array   Output
On intermediate or final exit: error measures and various indicators (see Section 11 for details) at the end of the current (or final) outer iteration as given in the table below:
1      objective function value f(x)
2      optimality (12)
3      feasibility (13)
4      complementarity (14)
5      minimum penalty
6      relative precision (11)
7      relative duality gap (10)
8      precision, f(x_i) − f(x_{i+1})
9      duality gap
10     minimum penalty for (standard) inequalities p
11     minimum penalty for matrix inequalities P
12     feasibility of equality constraints
13     feasibility of (standard) inequalities
14     feasibility of matrix inequalities
15     complementarity of equality constraints
16     complementarity of (standard) inequalities
17     complementarity of matrix inequalities
18–23  DIMACS error measures (16) (only if turned on by DIMACS Measures)
24–32  reserved for future use
11:   STATS(32) – REAL (KIND=nag_wp) array   Output
On intermediate or final exit: solver statistics at the end of the current (or final) outer iteration as given in the table below. Note that time statistics are provided only if Stats Time is set (the default is NO); the measured time is returned in seconds.
1      number of the outer iterations
2      total number of the inner iterations
3      total number of the linesearch steps
4      number of evaluations of the augmented Lagrangian F (see (8))
5      number of evaluations of ∇F
6      number of evaluations of ∇²F
7      reserved for future use
8      total running time of the solver
9      total running time of the solver without evaluations of the user's functions and monitoring stops
10     time spent in the inner iterations
11     time spent in Lagrangian multipliers updates
12     time spent in penalty parameters updates
13     time spent in matrix feasibility computation
14     time of evaluations of F
15     time of evaluations of ∇F
16     time of evaluations of ∇²F
17     time of factorizations of the Newton system
18     time of factorizations of the matrix constraints
19–32  reserved for future use
12:   INFORM – INTEGER   Input/Output
Note: intermediate stops take place only if Monitor Frequency>0.
On initial entry: no effect.
On intermediate exit: INFORM=1.
On intermediate re-entry: if set to 0, solving the current problem is terminated and the routine returns IFAIL=20; otherwise the routine continues.
On final exit: INFORM=0.
13:   IFAIL – INTEGER   Input/Output
On entry: IFAIL must be set to 0, −1 or 1. If you are unfamiliar with this argument you should refer to Section 3.4 in How to Use the NAG Library and its Documentation for details.
For environments where it might be inappropriate to halt program execution when an error is detected, the value −1 or 1 is recommended. If the output of error messages is undesirable, then the value 1 is recommended. Otherwise, because for this routine the values of the output arguments may be useful even if IFAIL ≠ 0 on exit, the recommended value is −1. When the value −1 or 1 is used it is essential to test the value of IFAIL on exit.
On exit: IFAIL=0 unless the routine detects an error or a warning has been flagged (see Section 6).
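The intermediate-exit protocol for INFORM described above can be pictured with a small mock-up. This is purely illustrative Python, not the NAG interface: MockSolver is a hypothetical stand-in for the solver, reproducing only the documented convention (intermediate exits return INFORM=1; re-entering with INFORM=0 terminates with IFAIL=20):

```python
# Illustrative mock of the monitor-mode protocol; 'MockSolver' is a
# hypothetical stand-in for the solver, not NAG code.
class MockSolver:
    def __init__(self, outer_iters=5):
        self.remaining = outer_iters

    def call(self, inform):
        # On intermediate re-entry: INFORM=0 requests termination (IFAIL=20).
        if inform == 0:
            return 20, 0            # (ifail, inform)
        self.remaining -= 1
        if self.remaining > 0:
            return 0, 1             # intermediate exit: INFORM=1
        return 0, 0                 # final exit: INFORM=0

solver = MockSolver()
ifail, inform = solver.call(inform=1)
while inform == 1:                  # monitoring loop in the calling program
    # ... inspect X, U, RINFO, STATS here; set inform = 0 to stop early ...
    ifail, inform = solver.call(inform)
print(ifail, inform)                # 0 0  (ran to completion)
```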

6  Error Indicators and Warnings

If on entry IFAIL=0 or -1, explanatory error messages are output on the current error message unit (as defined by X04AAF).
Note: E04SVF may return useful information for one or more of the following detected errors or warnings.
Errors or warnings detected by the routine:
The supplied HANDLE does not define a valid handle to the data structure for the NAG optimization modelling suite. It has not been initialized by E04RAF or it has been corrupted.
This solver does not support general nonlinear objective and constraints.
A different solver from the suite has already been used.
On entry, NVAR=value, expected value=value.
Constraint: NVAR must match the value given during initialization of HANDLE.
On entry, NNZU=value.
NNZU does not match the size of the Lagrangian multipliers for (standard) constraints.
The correct value is 0 for no (standard) constraints.
On entry, NNZU=value.
NNZU does not match the size of the Lagrangian multipliers for (standard) constraints.
The correct value is either 0 or value.
On entry, NNZUA=value.
NNZUA does not match the size of the Lagrangian multipliers for matrix constraints.
The correct value is 0 for no matrix constraints.
On entry, NNZUA=value.
NNZUA does not match the size of the Lagrangian multipliers for matrix constraints.
The correct value is either 0 or value.
On entry, NNZUC=value.
NNZUC does not match the size of the Lagrangian multipliers for second order cone constraints.
The correct value is 0 for no such constraints.
User requested termination during a monitoring step.
The current starting point is unusable.
The starting point x_0, either provided by the user (if Initial X=USER, the default) or the automatic estimate (if Initial X=AUTOMATIC), must not be extremely infeasible in the matrix constraints (infeasibility of order 10⁶ or higher) and all the functions used in the problem formulation must be evaluatable.
In the unlikely case that this error is triggered, it is necessary to provide a better estimate of the initial values.
Outer iteration limit has been reached.
The requested accuracy is not achieved.
If Outer Iteration Limit is left to the default, this error indicates numerical difficulties. Consider whether the stopping tolerances (Stop Tolerance 1, Stop Tolerance 2, Stop Tolerance Feasibility) are set too low or optional parameters affecting the behaviour of the penalty updates (P Update Speed, P Min or Pmat Min) have been modified inadvisedly. The iteration log should reveal more about the misbehaviour. Providing a different starting point might be of help in certain situations.
The inner subproblem could not be solved to the required accuracy.
Inner iteration limit has been reached.
The inner subproblem could not be solved to the required accuracy.
Limited progress in the inner subproblem triggered a stop (heuristic inner stop criteria).
The inner subproblem could not be solved to the required accuracy.
Line search or another internal component failed.
A problem with the convergence of the inner subproblem is typically a sign of numerical difficulties in the whole algorithm. The inner subproblem might be stopped before reaching the required accuracy because the Inner Iteration Limit was reached, because a heuristic detected no progress in the inner iterations (if Inner Stop Criteria=HEURISTIC, the default) or because an internal component failed (for example, the line search was unable to find a suitable step). The algorithm tries to recover; however, it might give up after several attempts with one of these error messages. If this occurs in the very early iterations, consider increasing Inner Stop Tolerance and possibly Init Value P or Init Value Pmat, which should ease the first iterations. An occurrence in later iterations indicates numerical difficulties, typically due to scaling and/or ill-conditioning, or the problem is close to infeasible. Reducing the tolerance on the stopping criteria or increasing P Update Speed might be of limited help.
Unable to make progress, the algorithm was stopped.
This error is returned if the solver cannot decrease the duality gap over a range of iterations. This can be due to the scaling of the problem or the problem might be close to primal or dual infeasibility.
The algorithm converged to a suboptimal solution.
The full accuracy was not achieved. The solution should still be usable.
This error may be reported only if Stop Criteria=SOFT (default). The solver predicted that it is unable to reach a better estimate of the solution. However, the error measures indicate that the point is a reasonable approximation. Typically, only the norm of the gradient of the Lagrangian (optimality) does not fully satisfy the requested tolerance whereas the others are well below the tolerance.
Setting Stop Criteria=STRICT will disallow this error but it is unlikely that the algorithm would reach a better solution.
The problem was found to be infeasible during preprocessing.
One or more of the constraints (or a part of one after preprocessing) is violated by more than ε_feas (Stop Tolerance Feasibility).
The problem was found unbounded during preprocessing.
The objective function is unbounded along an unrestricted ray and thus the problem does not have a solution.
The problem seems to be infeasible, the algorithm was stopped.
Whilst the algorithm cannot definitively detect that the problem is infeasible, several indirect indicators suggest that it might be the case.
The problem seems to be unbounded, the algorithm was stopped.
Whilst the algorithm cannot definitively detect that the problem is unbounded, several indirect indicators (such as a rapid decrease in the objective function and a lack of convergence in the inner subproblem) suggest that this might be the case. A good scaling of the objective function is always highly recommended to avoid situations where unusual behaviour falsely triggers this error exit.
An unexpected error has been triggered by this routine. Please contact NAG.
See Section 3.9 in How to Use the NAG Library and its Documentation for further information.
Your licence key may have expired or may not have been installed correctly.
See Section 3.8 in How to Use the NAG Library and its Documentation for further information.
Dynamic memory allocation failed.
See Section 3.7 in How to Use the NAG Library and its Documentation for further information.

7  Accuracy

The accuracy of the solution is driven by optional parameters Stop Tolerance 1, Stop Tolerance 2, Stop Tolerance Feasibility and Stop Criteria and in certain cases DIMACS Measures.
If IFAIL=0 on the final exit, the returned point satisfies the Karush–Kuhn–Tucker (KKT) conditions to the requested accuracy (under the default settings, close to ε) and thus it is a good estimate of a local solution. If IFAIL=50, some of the convergence conditions were not fully satisfied but the point still seems to be a reasonable estimate and should be usable. Please refer to Section 11.2 and the description of the particular options.

8  Parallelism and Performance

E04SVF is threaded by NAG for parallel execution in multithreaded implementations of the NAG Library.
E04SVF makes calls to BLAS and/or LAPACK routines, which may be threaded within the vendor library used by this implementation. Consult the documentation for the vendor library for further information.
Please consult the X06 Chapter Introduction for information on how to control and interrogate the OpenMP environment used within this routine. Please also consult the Users' Note for your implementation for any additional implementation-specific information.

9  Further Comments

9.1  Description of the Printed Output

The solver can print information to give an overview of the problem and of the progress of the computation. The output may be sent to two independent unit numbers which are set by the optional parameters Print File and Monitoring File. Optional parameters Print Level, Print Options and Monitoring Level determine the exposed level of detail. This allows you, for example, to generate a detailed log in a file while condensed information is displayed on the screen.
By default (Print File=6, Print Level=2), four sections are printed to the standard output: a header, a list of options, an iteration log and a summary.
The header contains statistics about the size of the problem as represented internally, i.e., it reflects any changes imposed by preprocessing and problem transformations (see, for example, Presolve Block Detect and Transform Constraints). The header may look like:
E04SV, NLP-SDP Solver (Pennon)
Number of variables             2                 [eliminated            0]
                           simple  linear  nonlin
(Standard) inequalities         3       0       0
(Standard) equalities                   0       0
Matrix inequalities                     1       1 [dense    2, sparse    0]
                                                  [max dimension         3] 
It shows the total number of variables and how many of them were eliminated (e.g., fixed by a simple equality). A constraint with both a lower and an upper bound counts as 2 inequalities. Simple bounds are set by E04RHF, matrix inequalities by E04RNF and E04RPF and standard equalities and inequalities by E04RJF. Note that matrix constraints of dimension 1 are extracted and treated as (standard) inequalities as well. The header report concludes with the number of matrix constraints factorized as dense and sparse matrices, together with the largest dimension of the matrix inequalities.
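The counting convention can be illustrated with a small sketch (a hypothetical Python helper, not NAG code): each finite bound contributes one (standard) inequality, so a constraint with both bounds finite contributes two.

```python
import math

def count_inequalities(bounds):
    """bounds: list of (lower, upper) pairs; use -math.inf / math.inf
    for an absent bound. Equal bounds form an equality, counted apart."""
    count = 0
    for lo, up in bounds:
        if lo == up:
            continue                       # an equality, not an inequality
        count += (lo > -math.inf) + (up < math.inf)
    return count

# e.g. 0 <= x1 <= 1 (two inequalities) and x2 >= 0 (one inequality)
print(count_inequalities([(0.0, 1.0), (0.0, math.inf)]))   # 3
```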
Optional parameters list
The list shows all options of the solver, each displayed on one line. The line contains the option name, its current value and an indicator of how it was set: options left at their defaults are marked by (d), those set by the user by (U) and those reset by the solver by (S). The solver will automatically overwrite options which are set to AUTO or whose requested value cannot be satisfied in the given context (e.g., requesting DIMACS Measures for a nonlinear problem). Note that the output format is compatible with the file format expected by E04ZPF. The output might look as follows:
Outer Iteration Limit         =                  20 * U
Stop Tolerance 1              =         1.00000E-06 * d
Stop Tolerance 2              =         1.00000E-07 * d
Hessian Density               =               Dense * S
Iteration log
If Print Level=2, the status of each major iteration is condensed to one line. The line shows the major iteration number (0 represents the starting point), the current objective value, KKT measures (optimality, feasibility and complementarity), the minimal penalty and the number of inner iterations performed. Note that all these values are also available in RINFO(1),…,RINFO(5) and STATS(1). The output might look as follows:
 it |  objective |  optim  |   feas  |  compl  | pen min  | inner
  0   0.00000E+00  7.34E+00  1.23E-01  4.41E+01  1.00E+00   0
  1  -3.01998E-01  2.54E-03  0.00E+00  1.89E+00  1.00E+00   6
  2  -2.53008E+00  1.06E-03  1.30E-01  3.22E-01  3.17E-01   8
  3  -2.08172E+00  6.52E-03  1.85E-02  4.54E-02  1.01E-01   7
  4  -2.01060E+00  6.47E-03  4.10E-03  1.02E-02  3.19E-02   3
Occasionally, a one letter flag is printed at the end of the line indicating that the inner subproblem was not solved to the required accuracy. The possibilities are M for maximum number of inner iterations, L for difficulties in the line search and ! when a heuristic stop took place. Repeated troubles in the subproblems may lead to IFAIL=23. The output below had Inner Iteration Limit=5 which was not enough in the first subproblem (first outer iteration).
  it |  objective |  optim  |   feas  |  compl  | pen min | inner
   0  0.00000E+00  1.46E+03  5.01E+01  1.46E+03  6.40E+01   0
   1  3.78981E+02  3.86E+01  0.00E+00  1.21E+04  6.40E+01   5 M
   2  9.11724E+02  1.46E-02  0.00E+00  9.24E+02  4.45E+01   5
All KKT measures should normally converge to zero as the algorithm progresses and, once the requested accuracy (Stop Tolerance 2) is achieved, the solver stops. However, the convergence is not necessarily monotonic. The penalty parameters are decreased in each major iteration, which should improve the overall feasibility of the problem. This also increases the ill-conditioning, which might lead to a higher number of inner iterations. A very high number of inner iterations usually signals numerical difficulties. See Section 11 for the algorithmic details.
If Print Level>2, each major iteration produces significantly more detailed output comprising detailed error measures and the output from every inner iteration. The output is self-explanatory and is therefore not described here in detail.
Once the solver finishes, a detailed summary is produced. An example is shown below:
Status: converged, an optimal solution found
Final objective value                2.300000E+01
Relative precision                   5.873755E-09
Optimality                           1.756062E-10
Feasibility                          9.048738E-08
Complementarity                      1.855566E-08
DIMACS error 1                       8.780308E-11
DIMACS error 2                       0.000000E+00
DIMACS error 3                       0.000000E+00
DIMACS error 4                       4.524369E-08
DIMACS error 5                       4.065998E-10
DIMACS error 6                       3.948012E-10
Iteration counts
  Outer iterations                             13
  Inner iterations                             82
  Linesearch steps                             95
Evaluation counts
  Augm. Lagr. values                           96
  Augm. Lagr. gradient                         96
  Augm. Lagr. hessian                          82
  Total time                    0 h  0 min  3 sec
    Evaluations + monitoring             0.04 sec
    Solver itself                        3.09 sec
  Inner minimization step                2.72 sec   ( 87.1%)
    Augm. Lagr. value                    0.28 sec   (  9.0%)
    Augm. Lagr. gradient                 0.67 sec   ( 21.6%)
    Augm. Lagr. hessian                  1.11 sec   ( 35.4%)
    system matr. factor.                 0.64 sec   ( 20.5%)
    const. matr. factor.                 0.40 sec   ( 12.8%)
  Multiplier update                      0.01 sec   (  0.3%)
  Penalty update                         0.02 sec   (  0.5%)
  Feasibility check                      0.15 sec   (  4.7%)
It starts with the status line of the overall result which matches the IFAIL value. It is followed by the final objective value and the error measures (including DIMACS Measures if turned on). Iteration counters, numbers of evaluations of the Augmented Lagrangian function and timing of the routine conclude the section. The timing of the algorithm is displayed only if Stats Time is set.

10  Example

Semidefinite Programming has many applications in several fields of mathematics, such as combinatorial optimization, finance, statistics, control theory or structural optimization. However, these applications seldom come in the form of (2) or (3). Usually a reformulation is needed or even a relaxation is employed to achieve the desired formulation. This is also the case for the Lovász ϑ function computed in this example. See also Section 10 in E04RAF for links to further examples in the suite.
The Lovász ϑ function (also called the ϑ number) of an undirected graph G=(V,E) is an important quantity in combinatorial optimization. It gives an upper bound on the Shannon capacity of the graph G and is also related to the clique number and the chromatic number of the complement of G, which are NP-hard problems.
The ϑ function can be expressed in various ways; here we use the following:
  ϑ(G) = minimize λ_max(H) ,   H ∈ 𝕊^n ,  s_ij = 1 if i=j or if ij ∉ E
where n = |V| and 𝕊^n denotes the space of real symmetric n by n matrices. This eigenvalue optimization problem is easy to reformulate as an SDP problem by introducing an artificial variable t as follows:
  minimize_{t,H}  t   subject to   H ⪯ tI ,   H ∈ 𝕊^n ,  s_ij = 1 if i=j or if ij ∉ E .
Finally, this can be written as (2), which is formulated in the example:
  minimize_{t,x}  t   subject to   tI + Σ_{ij∈E} x_ij E_ij − J ⪰ 0
where J is the matrix of all ones and E_ij is the matrix of all zeros except for ones in positions (i,j) and (j,i).
The example also demonstrates how to set the optional parameters and how to retrieve them.
The data file stores the Petersen graph whose ϑ is 4.
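The value ϑ = 4 can be cross-checked outside the SDP solver by evaluating the eigenvalue formulation at a feasible H: any H with the prescribed unit entries gives λ_max(H) ≥ ϑ(G), and for the Petersen graph setting every free edge entry to −1 attains the bound. The pure-Python sketch below (illustrative only, not the example program; it uses a shifted power iteration) verifies λ_max(H) = 4:

```python
# Build a feasible H for the Petersen graph and compute lambda_max.
n = 10
# Petersen graph: outer 5-cycle 0..4, spokes i--i+5, inner pentagram 5..9
edges = {(i, (i + 1) % 5) for i in range(5)}
edges |= {(i, i + 5) for i in range(5)}
edges |= {(5 + i, 5 + (i + 2) % 5) for i in range(5)}
edges = {(min(i, j), max(i, j)) for i, j in edges}

# H: 1 on the diagonal and on non-edges; free entries on edges set to -1
H = [[-1.0 if (min(i, j), max(i, j)) in edges else 1.0 for j in range(n)]
     for i in range(n)]

# Power iteration on H + 2.5*I (the shift keeps the top eigenvalue dominant)
shift = 2.5
v = [float(i + 1) for i in range(n)]
for _ in range(200):
    w = [sum(H[i][j] * v[j] for j in range(n)) + shift * v[i] for i in range(n)]
    norm = max(abs(x) for x in w)
    v = [x / norm for x in w]
# Rayleigh quotient of the converged vector, shifted back
lam = sum(v[i] * (sum(H[i][j] * v[j] for j in range(n)) + shift * v[i])
          for i in range(n)) / sum(x * x for x in v) - shift
print(round(lam, 6))   # 4.0
```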

10.1  Program Text

Program Text (e04svfe.f90)

10.2  Program Data

Program Data (e04svfe.d)

10.3  Program Results

Program Results (e04svfe.r)

11  Algorithmic Details

This section contains a description of the algorithm used in E04SVF which is based on the implementation of the code called Pennon. For further details, see Kočvara and Stingl (2003), Stingl (2006) and Kočvara and Stingl (2007).
For simplicity, we will use the following problem formulation; its connection to (SDP) and (BMI-SDP) is easy to see:
  minimize_{x∈ℝ^n}   f(x)
  subject to         g_k(x) ≥ 0 ,   k=1,2,…,m_g
                     h_k(x) = 0 ,   k=1,2,…,m_h
                     A_k(x) ⪰ 0 ,   k=1,2,…,m_A ,        (4)
where f, g_k, h_k are C² functions from ℝⁿ to ℝ and A_k are C² matrix functions from ℝⁿ to 𝕊_{m_k}. Here 𝕊_m denotes the space of real symmetric m by m matrices and, for S ∈ 𝕊_m, S ⪰ 0 stands for a constraint on the eigenvalues of S, namely that the matrix S should be positive semidefinite. Furthermore, we define the inner product on 𝕊_m by ⟨A,B⟩_{𝕊_m} = trace(AB). The index 𝕊_m will be omitted whenever the dimension is clear from the context. Finally, for Φ: 𝕊_m → 𝕊_m and X,Y ∈ 𝕊_m, DΦ(X;Y) denotes the directional derivative of Φ with respect to X in direction Y.

11.1  Overview

The algorithm is based on a (generalized) augmented Lagrangian approach and on a suitable choice of smooth penalty/barrier functions φ_g: ℝ → ℝ for (standard) inequality constraints and φ_A: ℝ → ℝ for constraints on matrix eigenvalues. By means of φ_A we define a penalty/barrier function for matrix inequalities as follows.
Let A ∈ 𝕊_m have an eigenvalue decomposition A = SᵀΛS where Λ = diag(λ_1, λ_2, …, λ_m). We define the matrix function Φ_P: 𝕊_m → 𝕊_m for P > 0 as
Φ_P: A ↦ Sᵀ diag( Pφ_A(λ_1/P), Pφ_A(λ_2/P), …, Pφ_A(λ_m/P) ) S.   (5)
Both φg and φA satisfy a number of assumptions (see Kočvara and Stingl (2003)) guaranteeing, in particular, that for any p, P>0 
g_k(x) ≥ 0  ⟺  pφ_g(g_k(x)/p) ≤ 0,  k=1,2,…,m_g,
A_k(x) ⪰ 0  ⟺  Φ_P(A_k(x)) ⪯ 0,  k=1,2,…,m_A.   (6)
Further in the text, we use the simplified notation φ_p(·) = pφ_g(·/p).
Thus for any p, P>0, problem (4) has the same solution as the following augmented problem
minimize_{x∈ℝⁿ} f(x)
subject to  φ_p(g_k(x)) ≤ 0,  k=1,2,…,m_g,
            h_k(x) = 0,  k=1,2,…,m_h,
            Φ_P(A_k(x)) ⪯ 0,  k=1,2,…,m_A.   (7)
The Lagrangian of (7) can be viewed as a (generalized) augmented Lagrangian of (4):
F(x,u,v,U,p,P) = f(x) − Σ_{k=1}^{m_g} u_k φ_p(g_k(x)) + Σ_{k=1}^{m_h} v_k h_k(x) − Σ_{k=1}^{m_A} ⟨U_k, Φ_P(A_k(x))⟩;   (8)
where u ∈ ℝ^{m_g}, v ∈ ℝ^{m_h} and U = (U_1,…,U_{m_A}), U_k ∈ 𝕊_{m_k}, k=1,…,m_A, are Lagrange multipliers associated with the (standard) inequalities, the equalities and the matrix inequality constraints, respectively.
The algorithm combines ideas of the (exterior) penalty and (interior) barrier methods with the augmented Lagrangian method. Broadly, each outer iteration of Algorithm 1 consists of three steps:
(i) find an approximate minimizer x^{ℓ+1} (together with multipliers v^{ℓ+1}) of the augmented Lagrangian, i.e., solve the inner problem (9);
(ii) update the Lagrangian multipliers u, U;
(iii) update the penalty parameters p, P.
Step (i) of Algorithm 1, further referred to as the inner problem, is the most time-consuming part and thus the choice of the solver for (9) is critical for the overall efficiency of the method. See Section 11.4 below.
The update of the inequality Lagrangian multipliers in step (ii) is motivated by the fact that if x^{ℓ+1}, v^{ℓ+1} solve (9) exactly in iteration ℓ, we obtain
∇_x F(x^{ℓ+1}, u^{ℓ+1}, v^{ℓ+1}, U^{ℓ+1}, p, P) = 0.
Details can be found, for example, in Stingl (2006).
In practice, numerical studies showed that it is not advantageous to perform the full updates of the multipliers u, U. Firstly, large changes in the multipliers may lead to a large number of iterations in the subsequent solution of (9) and, secondly, the multipliers might become ill-conditioned after a few steps so that the algorithm suffers from numerical instabilities. To overcome these difficulties, a restricted update is performed instead.
New Lagrangian multipliers u_k^{ℓ+1} for the (standard) inequalities, k=1,2,…,m_g, are restricted so that
μ_g < u_k^{ℓ+1} / u_k^ℓ < 1/μ_g
for a given 0 < μ_g < 1 (see U Update Restriction).
A similar strategy is applied to the matrix multipliers U_k^{ℓ+1} as well. For 0 < μ_A < 1 (see Umat Update Restriction) set
U_k^{new} = U_k^{ℓ+1} + μ_A ( U_k^ℓ − U_k^{ℓ+1} ).
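The two restricted updates can be sketched in a few lines. The scalar clipping and the matrix convex combination below follow the formulas above (plain Python; the nested lists simply stand in for the solver's internal matrix storage):

```python
def restricted_update(u_old, u_new, mu_g=0.5):
    # Clip the proposed multiplier so that mu_g < u_new/u_old < 1/mu_g.
    return min(max(u_new, mu_g * u_old), u_old / mu_g)

def restricted_matrix_update(U_old, U_new, mu_A=0.3):
    # U_k_new = U_k^{l+1} + mu_A * (U_k^l - U_k^{l+1}), entrywise.
    return [[n + mu_A * (o - n) for o, n in zip(row_o, row_n)]
            for row_o, row_n in zip(U_old, U_new)]
```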
The penalty parameters p, P in step (iii) are updated by a constant factor dependent on the initial penalty parameters p_0, P_0 and P Update Speed. The update process is stopped when p_min and P_min are reached (see P Min, Pmat Min).
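The penalty update in step (iii) amounts to a capped geometric decrease. The sketch below uses an illustrative factor of 0.5; in E04SVF the actual factor is derived from the initial penalties and P Update Speed:

```python
def penalty_schedule(p0, p_min, factor=0.5):
    # Multiply the penalty by a constant factor until p_min is reached.
    values = [p0]
    p = p0
    while p > p_min:
        p = max(p * factor, p_min)
        values.append(p)
    return values
```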
Additional details about the multiplier and penalty update strategies, as well as local and global convergence properties under standard assumptions, can be found in the extensive study Stingl (2006).

11.2  Stopping Criteria

Algorithm 1 is stopped when all the stopping criteria are satisfied to the requested accuracy; these are:
|f(x) − F(x,u,v,U,p,P)| / (1 + |f(x)|) ≤ ε_1,   (relative duality gap)   (10)
|f(x^ℓ) − f(x^{ℓ−1})| / (1 + |f(x^ℓ)|) ≤ ε_1,   (relative precision)   (11)
and those based on Karush–Kuhn–Tucker (KKT) error measures. To keep the notation simple, formulation (4) is assumed and the iteration index ℓ is dropped:
‖ ∇f(x) − Σ_{k=1}^{m_g} u_k ∇g_k(x) + Σ_{k=1}^{m_h} v_k ∇h_k(x) − Σ_{k=1}^{m_A} ( ⟨U_k, ∂A_k(x)/∂x_i⟩ )_{i=1,…,n} ‖ ≤ ε_2,   (optimality)   (12)
g_k(x) ≥ −ε_feas,  |h_k(x)| ≤ ε_feas,  A_k(x) ⪰ −ε_feas I  for all k,   (feasibility)   (13)
|g_k(x) u_k| ≤ ε_2,  |h_k(x) v_k| ≤ ε_2,  |⟨A_k(x), U_k⟩| ≤ ε_2.   (complementarity)   (14)
Here ε1, ε2, εfeas may be set in the option settings as Stop Tolerance 1, Stop Tolerance 2 and Stop Tolerance Feasibility, respectively.
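The two relative criteria (10) and (11) are straightforward to state in code. A minimal sketch (plain Python, with hypothetical objective values):

```python
def rel_duality_gap(f_x, F_x):
    # |f(x) - F(x,u,v,U,p,P)| / (1 + |f(x)|), criterion (10).
    return abs(f_x - F_x) / (1.0 + abs(f_x))

def rel_precision(f_curr, f_prev):
    # |f(x^l) - f(x^{l-1})| / (1 + |f(x^l)|), criterion (11).
    return abs(f_curr - f_prev) / (1.0 + abs(f_curr))

eps1 = 1e-6  # Stop Tolerance 1
gap_ok = rel_duality_gap(3.0, 3.0000001) <= eps1
```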
Note that if Task=FEASIBLE POINT, only the feasibility is taken into account.
For linear SDP problems there is an option to switch from the stopping criteria based on the KKT conditions to DIMACS Measures (see Mittelmann (2003)); this is the default choice. To keep the notation readable, these are defined here only for the following simpler formulation of a linear SDP rather than (2):
minimize_{x∈ℝⁿ} cᵀx  subject to  A(x) = Σ_{i=1}^{n} x_i A_i − A_0 ⪰ 0.   (15)
In this case the algorithm stops when:
Derr_1 = ‖A*(U) − c‖ / (1 + ‖c‖)
Derr_2 = max(0, −λ_min(U)) / (1 + ‖c‖)
Derr_4 = max(0, −λ_min(Σ_{i=1}^{n} x_i A_i − A_0)) / (1 + ‖A_0‖)
Derr_5 = (⟨A_0, U⟩ − cᵀx) / (1 + |⟨A_0, U⟩| + |cᵀx|)
Derr_6 = ⟨Σ_{i=1}^{n} x_i A_i − A_0, U⟩ / (1 + |⟨A_0, U⟩| + |cᵀx|)   (16)
where A*(·) denotes the adjoint operator to A(·), i.e., (A*(U))_i = ⟨A_i, U⟩.
They can be viewed as a scaled version of the KKT conditions. Derr_1 represents the (scaled) norm of the gradient of the Lagrangian, Derr_2 and Derr_4 the dual and primal infeasibility, respectively, and Derr_5 and Derr_6 measure the duality gap and the complementary slackness. Note that in this solver Derr_2 = 0 by definition and Derr_3 is automatically zero because the original formulation involves slack variables which are not used here.

11.3  Choice of penalty functions φg and φA

To treat the (standard) inequality constraints g_k(x) ≥ 0, we use the penalty/barrier function proposed by Ben-Tal and Zibulevsky (1997):
φ_g(τ) = −τ + ½τ²,  if τ ≤ τ̄,
φ_g(τ) = −(1−τ̄)² log( (1−2τ̄+τ)/(1−τ̄) ) − τ̄ + ½τ̄²,  if τ > τ̄,
with the default τ̄ = ½.
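A direct transcription of φ_g makes it easy to check that the two branches join smoothly at τ̄ and that φ_g changes sign at zero, which is what makes the equivalence (6) work. A sketch (plain Python, assuming the reconstruction of the formula above):

```python
import math

TAU_BAR = 0.5  # the default value of tau-bar

def phi_g(tau, tb=TAU_BAR):
    # Quadratic branch on the infeasible side, logarithmic branch beyond tb.
    if tau <= tb:
        return -tau + 0.5 * tau * tau
    return (-(1.0 - tb) ** 2 * math.log((1.0 - 2.0 * tb + tau) / (1.0 - tb))
            - tb + 0.5 * tb * tb)

# The two branches meet at tau-bar with the same value (-3/8 for tb = 1/2).
join_gap = abs(phi_g(TAU_BAR) - phi_g(TAU_BAR + 1e-12))
```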
The choice of φA (and thus of ΦP) is motivated by the complexity of the evaluation of ΦP and its derivatives. If φA is defined as
φ_A(τ) = 1/(1+τ) − 1,
it is possible to avoid the explicit eigenvalue decomposition in (5), as can be seen in the formulae below (note that the index k is omitted):
Φ_P(A(x)) = P² Z(x) − P I,
∂/∂x_i Φ_P(A(x)) = −P² Z(x) (∂A(x)/∂x_i) Z(x),
∂²/∂x_i∂x_j Φ_P(A(x)) = P² Z(x) [ (∂A(x)/∂x_i) Z(x) (∂A(x)/∂x_j) − ∂²A(x)/∂x_i∂x_j + (∂A(x)/∂x_j) Z(x) (∂A(x)/∂x_i) ] Z(x),   (17)
Z(x) = (A(x) + P I)^{−1}.   (18)
For details, see Kočvara and Stingl (2003). Note that, in particular, formula (17) requires nontrivial computational resources even if careful handling of the sparsity of the partial derivatives of A(x) is implemented. E04SVF uses a set of strategies described in Fujisawa et al. (1997) adapted for parallel computation.
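For a diagonal matrix the equivalence of the closed form (17)–(18) with the eigenvalue definition (5) can be checked entry by entry, since Z(x) is then diagonal too. A small sketch (plain Python):

```python
def phi_A(tau):
    # The penalty/barrier function for matrix inequalities.
    return 1.0 / (1.0 + tau) - 1.0

P = 2.0
# For A = diag(lam), Phi_P(A) = P^2 (A + P I)^{-1} - P I acts on each
# eigenvalue as lam -> P^2/(lam + P) - P, which equals P * phi_A(lam/P).
checks = [abs((P * P / (lam + P) - P) - P * phi_A(lam / P))
          for lam in (0.0, 0.5, 3.0, -1.0)]
```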

11.4  Solution of the inner problem

This section describes the solution of the inner problem (step (i) of Algorithm 1). We attempt to find an approximate solution of the following system (in x and v) up to the given precision α:
∇_x F(x,u,v,U,p,P) = 0,
h(x) = 0,   (19)
where the penalty parameters p,P, as well as the Lagrangian multipliers u and U are fixed.
A linesearch SQP (sequential quadratic programming) framework, referred to as Algorithm 2, is used due to its desirable convergence properties: at each inner iteration the Newton-type system (20) associated with (19) is formed and solved, and a linesearch determines the step length.
System (20) is solved by the factorization routine MA97 (see Hogg and Scott (2011)) in combination with an inertia correction strategy described in Stingl (2006). The step length selection is guided by Linesearch Mode.
If there are no equality constraints in the problem, the unconstrained minimization in step (i) of Algorithm 1 simplifies to a modified Newton method with linesearch (for details, see Kočvara and Stingl (2003)). Alternatively, the equality constraints h_k(x) = 0 can be converted to two inequalities which are then treated together with the remaining constraints (see Transform Constraints).
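The conversion of an equality into two inequalities mentioned above is trivial to express. A sketch with a hypothetical constraint function (plain Python):

```python
def split_equality(h):
    # h(x) = 0 is replaced by the pair h(x) >= 0 and -h(x) >= 0.
    return h, (lambda x: -h(x))

h = lambda x: x - 2.0          # hypothetical equality constraint h(x) = 0
g1, g2 = split_equality(h)
# x = 2 satisfies both inequalities; x = 3 violates the second one.
feasible_at_2 = g1(2.0) >= 0.0 and g2(2.0) >= 0.0
```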

12  Optional Parameters

Several optional parameters in E04SVF define choices in the problem specification or the algorithm logic. In order to reduce the number of formal arguments of E04SVF these optional parameters have associated default values that are appropriate for most problems. Therefore, you need only specify those optional parameters whose values are to be different from their default values.
The remainder of this section can be skipped if you wish to use the default values for all optional parameters.
The optional parameters can be changed by calling E04ZMF at any time between the initialization of the handle by E04RAF and the call to the solver. Modification of the arguments during intermediate monitoring stops is not allowed. Once the solver finishes, the optional parameters can be altered again for the next solve.
If any options are set by the solver (typically those with the choice AUTO), their values can be retrieved by E04ZNF. If the solver is called again, any such arguments are reset to their default values and the decision is made again.
The following is a list of the optional parameters available. A full description of each optional parameter is provided in Section 12.1.

12.1  Description of the Optional Parameters

For each option, we give a summary line, a description of the optional parameter and details of constraints.
The summary line contains the keyword, a type indicator ((a) denotes a character value, (i) an integer value and (r) a real value) and the default value.
All options accept the value DEFAULT to return single options to their default states.
Keywords and character values are case and white space insensitive.
Defaults
This special keyword may be used to reset all optional parameters to their default values. Any argument value given with this keyword will be ignored.
DIMACS Measures  (a)  Default = CHECK
If the problem is a linear semidefinite programming problem, this argument specifies if DIMACS error measures (see Section 11.2) should be computed and/or checked. In other cases, this option reverts to NO automatically.
Constraint: DIMACS Measures=COMPUTE, CHECK or NO.
Hessian Density  (a)  Default = AUTO
This optional parameter guides the solver on how the Hessian matrix of augmented Lagrangian Fx,u,v,U,p,P should be built. Option AUTO leaves the decision to the solver and it is the recommended option. Setting it to DENSE bypasses the autodetection and the Hessian is always built as a dense matrix. Option SPARSE instructs the solver to use a sparse storage and factorization of the matrix if possible.
Constraint: Hessian Density=AUTO, DENSE or SPARSE
Infinite Bound Size  (r)  Default = 10^20
This defines the ‘infinite’ bound bigbnd in the definition of the problem constraints. Any upper bound greater than or equal to bigbnd will be regarded as +∞ (and similarly any lower bound less than or equal to −bigbnd will be regarded as −∞). Note that a modification of this optional parameter does not influence constraints which have already been defined; only the constraints formulated after the change will be affected.
Constraint: Infinite Bound Size ≥ 1000.
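The effect of the bigbnd threshold can be sketched as a simple clamping of user bounds to ±∞ (plain Python; the function name is illustrative, not part of the NAG interface):

```python
BIGBND = 1e20  # the default Infinite Bound Size

def effective_bounds(lx, ux, bigbnd=BIGBND):
    # Bounds at or beyond +/- bigbnd are treated as unbounded.
    lo = float('-inf') if lx <= -bigbnd else lx
    hi = float('inf') if ux >= bigbnd else ux
    return lo, hi
```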
Initial P  (a)  Default = AUTOMATIC
This optional parameter defines the choice of the penalty optional parameters p0, P0, see Algorithm 1.
Initial P = AUTOMATIC
The penalty optional parameters are chosen automatically as set by optional parameters Init Value P, Init Value Pmat and subject to automatic scaling. Note that P_0 might be increased so that the penalty function Φ_P is defined for all matrix constraints at the starting point.
Initial P = KEEP PREVIOUS
The penalty optional parameters are kept from the previous run of the solver if possible. If not, this option reverts to AUTOMATIC. Note that even if the matrix penalty optional parameters are the same as in the previous run, they are still subject to a possible increase so that the penalty function Φ_P is well defined at the starting point.
Constraint: Initial P=AUTOMATIC or KEEP PREVIOUS.
Initial U  (a)  Default = AUTOMATIC
This argument guides the solver on which initial Lagrangian multipliers are to be used.
Initial U = AUTOMATIC
The Lagrangian multipliers are chosen automatically as set by automatic scaling.
Initial U=USER
The values of arrays U and UA (if provided) are used as the initial Lagrangian multipliers subject to automatic adjustments. If one or the other array is not provided, the choice for missing data is as in AUTOMATIC.
Initial U = KEEP PREVIOUS
The Lagrangian multipliers are kept from the previous run of the solver. If this option is set for the first run or if optional parameters change the approach of the solver, the choice automatically reverts to AUTOMATIC. This might be useful if the solver is hot started, for example, to achieve higher precision of the solution.
Constraint: Initial U = AUTOMATIC, USER or KEEP PREVIOUS.
Initial X  (a)  Default = USER
This argument guides which starting point x0 is to be used.
Initial X = AUTOMATIC
The starting point is chosen automatically so that it satisfies the simple bounds on the variables or as a zero vector. Input of argument X is ignored.
Initial X=USER
Initial values of argument X are used as a starting point.
Constraint: Initial X=AUTOMATIC or USER.
Init Value P  (r)  Default = 1.0
This argument defines the value p0, the initial penalty optional parameter for (standard) inequalities. A low value of the penalty causes the solution of the inner problem to be closer to the feasible region and thus to the desirable result. However, it also increases ill-conditioning of the system. It is not advisable to set the penalty too low unless a good starting point is provided.
Constraint: ε^4 ≤ Init Value P ≤ 10^4.
Init Value Pmat  (r)  Default = 1.0
The value of this option suggests P_0, the initial penalty optional parameter for matrix inequalities. It is similar to Init Value P (and the same advice applies); however, P_0 is increased automatically if the matrix constraints are more infeasible than the current value of the penalty optional parameter.
Constraint: ε^4 ≤ Init Value Pmat ≤ 10^4.
Inner Iteration Limit  (i)  Default = 100
The maximum number of the inner iterations (Newton steps) to be performed by Algorithm 2 in each outer iteration. Setting the option too low might lead to IFAIL=23. Values higher than 100 are unlikely to improve convergence.
Constraint: Inner Iteration Limit>0.
Inner Stop Criteria  (a)  Default = HEURISTIC
The precision α for the solution of the inner subproblem is determined in Algorithm 1 and under typical circumstances Algorithm 2 is expected to reach this precision within the given Inner Iteration Limit. If any problems are detected and Inner Stop Criteria=HEURISTIC, Algorithm 2 is allowed to stop before reaching the requested precision or the Inner Iteration Limit. This usually saves many unfruitful iterations and the solver may recover in the following iterations. If you suspect that the heuristic problem detection is not suitable for your problem, setting Inner Stop Criteria=STRICT disallows such behaviour.
Constraint: Inner Stop Criteria=HEURISTIC or STRICT.
Inner Stop Tolerance  (r)  Default = 10^{-2}
This option sets the required precision α_0 for the first inner problem solved by Algorithm 2. The precision of the solution of the inner problem does not need to be very high in the first outer iterations and it is automatically adjusted through the outer iterations to reach the optimality limit ε_2 in the last one.
Setting α_0 too restrictive (too low) causes an increase in the number of inner iterations needed in the first outer iterations and might lead to IFAIL=23. In certain cases it might be helpful to use a more relaxed (higher) α_0 and increase P Update Speed, which should reduce the number of inner iterations needed at the beginning of the computation in exchange for a possibly higher number of outer iterations.
Constraint: ε < Inner Stop Tolerance ≤ 10^3.
Linesearch Mode  (a)  Default = AUTO
This controls the step size selection in Algorithm 2. If Linesearch Mode=FULLSTEP (the default for linear problems), unit steps are taken where possible and step shortening takes place only to avoid undefined regions for the matrix penalty function Φ_P (see (17)). This may be used for linear problems but it is not recommended for any nonlinear ones. If Linesearch Mode=ARMIJO, a fairly basic Armijo backtracking linesearch is used instead. If Linesearch Mode=GOLDSTEIN, a cubic safeguarded linesearch based on the Goldstein condition is employed; this is the recommended (and default) choice for nonlinear problems.
Constraint: Linesearch Mode=AUTO, FULLSTEP, ARMIJO or GOLDSTEIN.
List  (a)  Default = NO
This argument may be set to YES if you wish to turn on printing of each optional parameter specification as it is supplied.
Constraint: List=YES or NO
Monitor Frequency  (i)  Default = 0
If Monitor Frequency>0, the solver returns to you at the end of every ith outer iteration. During these intermediate exits, the current point X and Lagrangian multipliers U, UA (if requested) are provided as well as the statistics and error measures (RINFO, STATS). Argument INFORM helps to distinguish between intermediate and final exits and also allows immediate termination.
If Monitor Frequency=0, the solver stops only once on the final point and no intermediate exits are made.
Constraint: Monitor Frequency ≥ 0.
Monitoring File  (i)  Default = −1
If i ≥ 0, this is the unit number for the secondary (monitoring) output. If set to −1, no secondary output is provided. The following information is output to the unit:
Constraint: Monitoring File ≥ −1.
Monitoring Level  (i)  Default = 4
This argument sets the amount of information detail that will be printed by the solver to the secondary output. The meaning of the levels is the same as with Print Level.
Constraint: 0 ≤ Monitoring Level ≤ 5.
Outer Iteration Limit  (i)  Default = 100
The maximum number of the outer iterations to be performed by Algorithm 1. If Outer Iteration Limit=0, no iteration is performed, only quantities needed in the stopping criteria are computed and returned in RINFO. This might be useful in connection with Initial X=USER and Initial U=USER to check optimality of the given point. However, note that the rules for possible modifications of the starting point still apply, see U and UA. Setting the option too low might lead to IFAIL=22.
Constraint: Outer Iteration Limit ≥ 0.
P Min  (r)  Default = ε
This controls pmin, the lowest possible penalty value p used for (standard) inequalities. In general, very small values of the penalty optional parameters cause ill-conditioning which might lead to numerical difficulties. On the other hand, very high pmin prevents the algorithm from reaching the requested accuracy on the feasibility. Under normal circumstances, the default value is recommended.
Constraint: ε ≤ P Min ≤ 10^{-2}.
Pmat Min  (r)  Default = ε
This is an equivalent of P Min for the minimal matrix penalty optional parameter Pmin. The same advice applies.
Constraint: ε ≤ Pmat Min ≤ 10^{-2}.
Preference  (a)  Default = SPEED
This option affects how contributions from the matrix constraints (17) to the system Hessian matrix are computed. The default option of Preference=SPEED should be suitable in most cases. However, dealing with matrix constraints of a very high dimension may cause noticeable memory overhead and switching to Preference=MEMORY may be required.
Constraint: Preference=SPEED or MEMORY.
Presolve Block Detect  (a)  Default = YES
If Presolve Block Detect=YES, the matrix constraints are checked during preprocessing to determine if they can be split into smaller independent ones, thus speeding up the solver.
Constraint: Presolve Block Detect=YES or NO.
Print File  (i)  Default = 6
If i ≥ 0, this is the unit number for the primary output of the solver. If Print File=−1, the primary output is completely turned off independently of other settings. The following information is output to the unit:
Constraint: Print File ≥ −1.
Print Level  (i)  Default = 2
This argument defines how detailed information should be printed by the solver to the primary output.
i Output
0 No output from the solver
1 Only the final status and the objective value
2 Problem statistics, one line per outer iteration showing the progress of the solution, final status and statistics
3 As level 2 but detailed output of the outer iterations is provided and brief overview of the inner iterations
4, 5 As level 3 but details of the inner iterations are printed as well
Constraint: 0 ≤ Print Level ≤ 5.
Print Options  (a)  Default = YES
If Print Options=YES, a listing of optional parameters will be printed to the primary output.
Constraint: Print Options=YES or NO.
P Update Speed  (i)  Default = 12
This option affects the rate at which the penalty optional parameters p, P are updated (Algorithm 1, step (iii)) and thus indirectly influences the overall number of outer iterations. Its value can be interpreted as the typical number of outer iterations needed to get from the initial penalty values p_0, P_0 half-way to p_min and P_min. Values smaller than 3 cause a very aggressive penalty update strategy which might lead to an increased number of inner iterations and possibly to numerical difficulties. On the other hand, values higher than 15 produce a relatively conservative approach which leads to a higher number of outer iterations.
If the solver encounters difficulties on your problem, a higher value might help. If your problem is working fine, setting a lower value might increase the speed.
Constraint: 1 ≤ P Update Speed ≤ 100.
Stats Time  (a)  Default = NO
This argument turns on timings of various parts of the algorithm to give a better overview of where most of the time is spent. This might be helpful for a choice of different solving approaches. It is possible to choose between CPU and wall clock time. Choice YES is equivalent to wall clock.
Constraint: Stats Time=YES, NO, CPU or WALL CLOCK.
Stop Criteria  (a)  Default = SOFT
If Stop Criteria=SOFT, the solver is allowed to stop prematurely with a suboptimal solution, IFAIL=50, if it predicts that a better estimate of the solution cannot be reached. This is the recommended option.
Constraint: Stop Criteria=SOFT or STRICT.
Stop Tolerance 1  (r)  Default = max(10^{-6}, ε)
This option defines ε1 used as a tolerance for the relative duality gap (10) and the relative precision (11), see Section 11.2.
Constraint: Stop Tolerance 1>ε.
Stop Tolerance 2  (r)  Default = max(10^{-7}, ε)
This option sets the value ε_2 which is used for the optimality (12) and complementarity (14) tests from the KKT conditions or, if DIMACS Measures=CHECK, for all DIMACS error measures instead. See Section 11.2.
Constraint: Stop Tolerance 2>ε.
Stop Tolerance Feasibility  (r)  Default = max(10^{-7}, ε)
This argument places an acceptance limit on the feasibility of the solution (13), εfeas. See Section 11.2.
Constraint: Stop Tolerance Feasibility>ε.
Task  (a)  Default = MINIMIZE
This argument specifies the required direction of the optimization. If Task=FEASIBLE POINT, the objective function (if set) is ignored and the algorithm stops as soon as a feasible point is found with respect to the given tolerance. If no objective function was set, Task reverts to FEASIBLE POINT automatically.
Transform Constraints  (a)  Default = AUTO
This argument controls how equality constraints are treated by the solver. If Transform Constraints=EQUALITIES, all equality constraints h_k(x)=0 from (4) are treated as two inequalities h_k(x) ≥ 0 and h_k(x) ≤ 0, see Section 11.4. In this release this is the treatment to which AUTO resolves and the only option for equality constrained problems.
Constraint: Transform Constraints=AUTO, NO or EQUALITIES.
U Update Restriction  (r)  Default = 0.5
This defines the value μ_g giving the bounds on the updates of the Lagrangian multipliers for the (standard) inequalities between the outer iterations. Values close to 1 limit the changes of the multipliers and serve as a kind of smoothing; lower values allow more significant changes.
Based on numerical experience, large variation in the multipliers may lead to a large number of iterations in the subsequent step and might disturb the convergence due to ill-conditioning.
It might be worth experimenting with the value on your particular problem. Mid-range values are recommended over the more extreme ones.
Constraint: ε<U Update Restriction<1.
Umat Update Restriction  (r)  Default = 0.3
This is an equivalent of U Update Restriction for matrix constraints, denoted as μA in Section 11.1. The advice above applies equally.
Constraint: ε<Umat Update Restriction<1.


© The Numerical Algorithms Group Ltd, Oxford, UK. 2016