NAG Library Function Document

nag_opt_conj_grad (e04dgc)




Purpose

nag_opt_conj_grad (e04dgc) minimizes an unconstrained nonlinear function of several variables using a pre-conditioned, limited-memory quasi-Newton conjugate gradient method. The function is intended for use on large-scale problems.


Specification

#include <nag.h>
#include <nage04.h>

void nag_opt_conj_grad (Integer n,
            void (*objfun)(Integer n, const double x[], double *objf,
                           double g[], Nag_Comm *comm),
            double x[], double *objf, double g[], Nag_E04_Opt *options,
            Nag_Comm *comm, NagError *fail)


Description

nag_opt_conj_grad (e04dgc) uses a pre-conditioned conjugate gradient method and is based upon algorithm PLMA as described in Gill and Murray (1979) and Section 4.8.3 of Gill et al. (1981).
The algorithm proceeds as follows:
Let x_0 be a given starting point and let k denote the current iteration, starting with k = 0. The iteration requires g_k, the gradient vector evaluated at x_k, the kth estimate of the minimum. At each iteration a vector p_k (known as the direction of search) is computed and the new estimate x_{k+1} is given by x_k + α_k p_k, where α_k (the step length) minimizes the function F(x_k + α p_k) with respect to the scalar α. At the start of each line search an initial approximation α_0 to the step α_k is taken as:

    α_0 = min(1, 2|F_k - F_est| / (g_k^T g_k))

where F_est is a user-supplied estimate of the function value at the solution. If F_est is not specified, the software always chooses the unit step length for α_0. Subsequent step-length estimates are computed using cubic interpolation with safeguards.
A quasi-Newton method computes the search direction p_k by updating the inverse of the approximate Hessian, H_k, and computing

    p_{k+1} = -H_{k+1} g_{k+1}.   (1)

The updating formula for the approximate inverse is given by

    H_{k+1} = H_k - (1/(y_k^T s_k)) (H_k y_k s_k^T + s_k y_k^T H_k)
              + (1/(y_k^T s_k)) (1 + y_k^T H_k y_k / (y_k^T s_k)) s_k s_k^T   (2)

where y_k = g_{k+1} - g_k and s_k = x_{k+1} - x_k = α_k p_k.
The method used by nag_opt_conj_grad (e04dgc) to obtain the search direction is based upon computing p_{k+1} as -H_{k+1} g_{k+1}, where H_{k+1} is a matrix obtained by updating the identity matrix with a limited number of quasi-Newton corrections. The storage of an n by n matrix is avoided by storing only the vectors that define the rank-two corrections – hence the term limited-memory quasi-Newton method. The precise method depends upon the number of updating vectors stored. For example, the direction obtained with the 'one-step' limited-memory update is given by (1) using (2) with H_k equal to the identity matrix, viz.

    p_{k+1} = -g_{k+1} + (1/(y_k^T s_k)) ((s_k^T g_{k+1}) y_k + (y_k^T g_{k+1}) s_k)
              - (s_k^T g_{k+1} / (y_k^T s_k)) (1 + y_k^T y_k / (y_k^T s_k)) s_k
nag_opt_conj_grad (e04dgc) uses a two-step method, described in detail in Gill and Murray (1979), in which restarts and pre-conditioning are incorporated. Using a limited-memory quasi-Newton formula, such as the one above, guarantees that p_{k+1} is a descent direction if all the inner products y_k^T s_k are positive for all vectors y_k and s_k used in the updating formula.
The termination criteria of nag_opt_conj_grad (e04dgc) are as follows:
Let τ_F specify an argument that indicates the number of correct figures desired in F_k (τ_F is equivalent to options.optim_tol in the optional parameter list, see Section 11). If the following three conditions are satisfied:
(i) F_{k-1} - F_k < τ_F (1 + |F_k|);
(ii) ||x_{k-1} - x_k|| < τ_F^{1/2} (1 + ||x_k||);
(iii) ||g_k|| ≤ τ_F^{1/3} (1 + |F_k|), or ||g_k|| < ε_A, where ε_A is the absolute error associated with computing the objective function;
then the algorithm is considered to have converged. For a full discussion on termination criteria see Chapter 8 of Gill et al. (1981).


References

Gill P E and Murray W (1979) Conjugate-gradient methods for large-scale nonlinear optimization Technical Report SOL 79-15, Department of Operations Research, Stanford University
Gill P E, Murray W, Saunders M A and Wright M H (1983) Computing forward-difference intervals for numerical optimization SIAM J. Sci. Statist. Comput. 4 310–321
Gill P E, Murray W and Wright M H (1981) Practical Optimization Academic Press


Arguments

1:     n – Integer     Input
On entry: the number n of variables.
Constraint: n ≥ 1.
2:     objfun – function, supplied by the user     External Function
objfun must calculate the objective function F x  and its gradient at a specified point x .
The specification of objfun is:
void  objfun (Integer n, const double x[], double *objf, double g[], Nag_Comm *comm)
1:     n – Integer     Input
On entry: the number n of variables.
2:     x[n] – const double     Input
On entry: the point x at which the objective function is required.
3:     objf – double *     Output
On exit: the value of the objective function F at the current point x.
4:     g[n] – double     Output
On exit: g[i-1] must contain the value of ∂F/∂x_i at the point x, for i = 1,2,...,n.
5:     comm – Nag_Comm *
Pointer to structure of type Nag_Comm; the following members are relevant to objfun.
flag – Integer     Input/Output
On entry: comm->flag is always non-negative.
On exit: if objfun resets comm->flag to some negative number then nag_opt_conj_grad (e04dgc) will terminate immediately with the error indicator NE_USER_STOP. If fail is supplied to nag_opt_conj_grad (e04dgc), fail.errnum will be set to your setting of comm->flag.
first – Nag_Boolean     Input
On entry: will be set to Nag_TRUE on the first call to objfun and Nag_FALSE for all subsequent calls.
nf – Integer     Input
On entry: the number of calculations of the objective function; this value will be equal to the number of calls made to objfun including the current one.
user – double *
iuser – Integer *
p – Pointer
The type Pointer will be void * with a C compiler that defines void *, and char * otherwise. Before calling nag_opt_conj_grad (e04dgc) these pointers may be allocated memory and initialized with various quantities for use by objfun when called from nag_opt_conj_grad (e04dgc).
Note: objfun should not return floating-point NaN (Not a Number) or infinity values, since these are not handled by nag_opt_conj_grad (e04dgc). If your code inadvertently does return any NaNs or infinities, nag_opt_conj_grad (e04dgc) is likely to produce unexpected results.
Note: objfun should be tested separately before being used in conjunction with nag_opt_conj_grad (e04dgc). The array x must not be changed by objfun.
3:     x[n] – double     Input/Output
On entry: x 0 , an estimate of the solution point x * .
On exit: the final estimate of the solution.
4:     objf – double *     Output
On exit: the value of the objective function F x  at the final iterate.
5:     g[n] – double     Output
On exit: the objective gradient at the final iterate.
6:     options – Nag_E04_Opt *     Input/Output
On entry/exit: a pointer to a structure of type Nag_E04_Opt whose members are optional parameters for nag_opt_conj_grad (e04dgc). These structure members offer the means of adjusting some of the argument values of the algorithm and on output will supply further details of the results. A description of the members of options is given below in Section 11.
If any of these optional parameters are required then the structure options should be declared and initialized by a call to nag_opt_init (e04xxc) and supplied as an argument to nag_opt_conj_grad (e04dgc). However, if the optional parameters are not required the NAG defined null pointer, E04_DEFAULT, can be used in the function call.
7:     comm – Nag_Comm *     Input/Output
Note: comm is a NAG defined type (see How to Use the NAG Library and its Documentation).
On entry/exit: structure containing pointers for communication with user-supplied functions; see the above description of objfun for details. If you do not need to make use of this communication feature the null pointer NAGCOMM_NULL may be used in the call to nag_opt_conj_grad (e04dgc); comm will then be declared internally for use in calls to user-supplied functions.
8:     fail – NagError *     Input/Output
The NAG error argument (see Section 3.7 in How to Use the NAG Library and its Documentation).

Error Indicators and Warnings

NE_ALLOC_FAIL
Dynamic memory allocation failed.
NE_BAD_PARAM
On entry, argument options.print_level had an illegal value.
On entry, argument options.verify_grad had an illegal value.
NE_DERIV_ERRORS
Large errors were found in the derivatives of the objective function.
This value of fail will occur if the verification process indicated that at least one gradient component had no correct figures. You should refer to the printed output to determine which elements are suspected to be in error.
As a first step, you should check that the code for the objective values is correct – for example, by computing the function at a point where the correct value is known. However, care should be taken that the chosen point fully tests the evaluation of the function. It is remarkable how often the values x=0  or x=1  are used to test function evaluation procedures, and how often the special properties of these numbers make the test meaningless.
Errors in programming the function may be quite subtle in that the function value is ‘almost’ correct. For example, the function may not be accurate to full precision because of the inaccurate calculation of a subsidiary quantity, or the limited accuracy of data upon which the function depends.
The gradient at the starting point is too small; rerun the problem at a different starting point.
The value of g(x_0)^T g(x_0) is less than ε|F(x_0)|, where ε is the machine precision.
On entry, n=value.
Constraint: n ≥ 1.
Value value given to options.max_iter not valid. Correct range is options.max_iter ≥ 0.
Value value given to options.f_prec not valid. Correct range is ε ≤ options.f_prec < 1.0.
Value value given to options.optim_tol not valid. Correct range is options.f_prec ≤ options.optim_tol < 1.0.
Value value given to options.max_line_step not valid. Correct range is options.max_line_step > 0.0.
Value value given to options.linesearch_tol not valid. Correct range is 0.0 ≤ options.linesearch_tol < 1.0.
Cannot open file string  for appending.
Cannot close file string .
NE_OPT_NOT_INIT
Options structure not initialized.
NE_USER_STOP
User requested termination, user flag value = value.
This exit occurs if you set comm->flag to a negative value in objfun. If fail is supplied the value of fail.errnum will be the same as your setting of comm->flag.
Error occurred when writing to file string .
NW_NO_IMPROVEMENT
A sufficient decrease in the function value could not be attained during the final linesearch. The current point cannot be improved upon.
If objfun computes the function and gradients correctly, then this warning may occur because an overly stringent accuracy has been requested, i.e., options.optim_tol is too small, or if the minimum lies close to a step length of zero. In this case you should apply the tests described in Section 3 to determine whether or not the final solution is acceptable. For a discussion of attainable accuracy see Gill et al. (1981).
If many iterations have occurred in which essentially no progress has been made, or nag_opt_conj_grad (e04dgc) has failed to move from the initial point, then the function objfun may be incorrect. You should refer to the comments above under NE_DERIV_ERRORS and check the gradients using the options.verify_grad argument. Unfortunately, there may be small errors in the objective gradients that cannot be detected by the verification process. Finite difference approximations to first derivatives are catastrophically affected by even small inaccuracies.
NW_STEP_BOUND_TOO_SMALL
Computed upper bound on step length was too small.
The computed upper bound on the step length taken during the linesearch was too small. A rerun with an increased value of options.max_line_step (ρ, say) may be successful unless ρ ≥ 10^10 (the default value), in which case the current point cannot be improved upon.
NW_TOO_MANY_ITER
The maximum number of iterations, value, has been performed.
If the algorithm appears to be making progress, the value of options.max_iter may be too small (see Section 11); you should increase its value and rerun nag_opt_conj_grad (e04dgc). If the algorithm seems to be 'bogged down', you should check for incorrect gradients or ill-conditioning as described above under NW_NO_IMPROVEMENT.


Accuracy

On successful exit the accuracy of the solution will be as defined by the optional parameter options.optim_tol.

Parallelism and Performance

nag_opt_conj_grad (e04dgc) is not threaded in any implementation.

Further Comments


Problems whose Hessian matrices at the solution contain sets of clustered eigenvalues are likely to be minimized in significantly fewer than n iterations. Problems without this property may require anything between n and 5n iterations, with approximately 2n iterations being a common figure for moderately difficult problems.


Example

This example minimizes the function

    F(x) = exp(x_1) (4x_1^2 + 2x_2^2 + 4x_1x_2 + 2x_2 + 1).

The options structure is declared and initialized by nag_opt_init (e04xxc). Five option values are read from a data file by use of nag_opt_read (e04xyc).

Program Text

Program Text (e04dgce.c)

Program Options

Program Options (e04dgce.opt)

Program Results

Program Results (e04dgce.r)

Optional Parameters

A number of optional input and output arguments to nag_opt_conj_grad (e04dgc) are available through the structure argument options, type Nag_E04_Opt. An argument may be selected by assigning an appropriate value to the relevant structure member; those arguments not selected will be assigned default values. If no use is to be made of any of the optional parameters you should use the NAG defined null pointer, E04_DEFAULT, in place of options when calling nag_opt_conj_grad (e04dgc); the default settings will then be used for all arguments.
Before assigning values to options directly the structure must be initialized by a call to the function nag_opt_init (e04xxc). Values may then be assigned to the structure members in the normal C manner.
After return from nag_opt_conj_grad (e04dgc), the options structure may only be re-used for future calls of nag_opt_conj_grad (e04dgc) if the dimensions of the new problem are the same. Otherwise, the structure must be cleared by a call of nag_opt_free (e04xzc) and re-initialized by a call of nag_opt_init (e04xxc) before future calls. Failure to do this will result in unpredictable behaviour.
Option settings may also be read from a text file using the function nag_opt_read (e04xyc) in which case initialization of the options structure will be performed automatically if not already done. Any subsequent direct assignment to the options structure must not be preceded by initialization.
If assignment of functions and memory to pointers in the options structure is required, then this must be done directly in the calling program, they cannot be assigned using nag_opt_read (e04xyc).

Optional Parameter Checklist and Default Values

For easy reference, the following list shows the members of options which are valid for nag_opt_conj_grad (e04dgc) together with their default values where relevant. The number ε is a generic notation for machine precision (see nag_machine_precision (X02AJC)).

Boolean          list               Nag_TRUE
Nag_PrintType    print_level        Nag_Soln_Iter
char             outfile[512]       stdout
void             (*print_fun)()     NULL
Nag_GradChk      verify_grad        Nag_SimpleCheck
Boolean          print_gcheck       Nag_TRUE
Integer          obj_check_start    1
Integer          obj_check_stop     n
Integer          max_iter           max(50, 5n)
double           f_prec             ε^0.9
double           optim_tol          (options.f_prec)^0.8
double           linesearch_tol     0.9
double           max_line_step      10^10
double           f_est
Integer          iter
Integer          nf

Description of the Optional Parameters

list – Nag_Boolean Default =Nag_TRUE
On entry: if options.list=Nag_TRUE  the argument settings in the call to nag_opt_conj_grad (e04dgc) will be printed.
print_level – Nag_PrintType Default =Nag_Soln_Iter
On entry: the level of results printout produced by nag_opt_conj_grad (e04dgc). The following values are available:
Nag_NoPrint No output.
Nag_Soln The final solution.
Nag_Iter One line of output for each iteration.
Nag_Soln_Iter The final solution and one line of output for each iteration.
Constraint: options.print_level=Nag_NoPrint, Nag_Soln, Nag_Iter or Nag_Soln_Iter.
outfile – const char[512] Default = stdout
On entry: the name of the file to which results should be printed. If options.outfile[0] = ' \0 '  then the stdout stream is used.
print_fun – pointer to function Default =  NULL
On entry: printing function defined by you; the prototype of options.print_fun is
void (*print_fun)(const Nag_Search_State *st, Nag_Comm *comm);
See Section 11.3.1 below for further details.
verify_grad – Nag_GradChk Default =Nag_SimpleCheck
On entry: specifies the level of derivative checking to be performed by nag_opt_conj_grad (e04dgc) on the gradient elements defined in objfun.
options.verify_grad may have the following values:
Nag_NoCheck No derivative check is performed.
Nag_SimpleCheck Perform a simple check of the gradient.
Nag_CheckObj Perform a component check of the gradient elements.
If options.verify_grad=Nag_SimpleCheck then a simple ‘cheap’ test is performed, which requires only one call to objfun. If options.verify_grad=Nag_CheckObj then a more reliable (but more expensive) test will be made on individual gradient components. This component check will be made in the range specified by options.obj_check_start and options.obj_check_stop, default values being 1  and n respectively. The procedure for the derivative check is based on finding an interval that produces an acceptable estimate of the second derivative, and then using that estimate to compute an interval that should produce a reasonable forward-difference approximation. The gradient element is then compared with the difference approximation. (The method of finite difference interval estimation is based on Gill et al. (1983)). The result of the test is printed out by nag_opt_conj_grad (e04dgc) if options.print_gcheck=Nag_TRUE .
Constraint: options.verify_grad=Nag_NoCheck, Nag_SimpleCheck or Nag_CheckObj.
print_gcheck – Nag_Boolean Default =Nag_TRUE
On entry: if Nag_TRUE the result of any derivative check (see options.verify_grad) will be printed.
obj_check_start – Integer Default =1
obj_check_stop – Integer Default =n
On entry: these options take effect only when options.verify_grad=Nag_CheckObj. They may be used to control the verification of gradient elements computed by the function objfun. For example, if the first 30 variables appear linearly in the objective, so that the corresponding gradient elements are constant, then it is reasonable for options.obj_check_start to be set to 31.
Constraint: 1 ≤ options.obj_check_start ≤ options.obj_check_stop ≤ n.
max_iter – Integer Default = max(50, 5n)
On entry: the limit on the number of iterations allowed before termination.
Constraint: options.max_iter ≥ 0.
f_prec – double Default = ε^0.9
On entry: this argument defines ε_r, which is intended to be a measure of the accuracy with which the problem function F can be computed. The value of ε_r should reflect the relative precision of 1 + |F(x)|; i.e., ε_r acts as a relative precision when F is large, and as an absolute precision when F is small. For example, if F(x) is typically of order 1000 and the first six significant digits are known to be correct, an appropriate value for ε_r would be 1.0e−6. In contrast, if F(x) is typically of order 10^−4 and the first six significant digits are known to be correct, an appropriate value for ε_r would be 1.0e−10. The choice of ε_r can be quite complicated for badly scaled problems; see Chapter 8 of Gill et al. (1981) for a discussion of scaling techniques. The default value is appropriate for most simple functions that are computed with full accuracy. However, when the accuracy of the computed function values is known to be significantly worse than full precision, the value of ε_r should be large enough so that nag_opt_conj_grad (e04dgc) will not attempt to distinguish between function values that differ by less than the error inherent in the calculation.
Constraint: ε ≤ options.f_prec < 1.0.
optim_tol – double Default = (options.f_prec)^0.8
On entry: specifies the accuracy to which you wish the final iterate to approximate a solution of the problem. Broadly speaking, options.optim_tol indicates the number of correct figures desired in the objective function at the solution. For example, if options.optim_tol is 10^−6 and nag_opt_conj_grad (e04dgc) terminates successfully, the final value of F should have approximately six correct figures. nag_opt_conj_grad (e04dgc) will terminate successfully if the iterative sequence of x-values is judged to have converged and the final point satisfies the termination criteria (see Section 3, where τ_F represents options.optim_tol).
Constraint: options.f_prec ≤ options.optim_tol < 1.0.
linesearch_tol – double Default = 0.9
On entry: controls the accuracy with which the step α taken during each iteration approximates a minimum of the function along the search direction (the smaller the value of options.linesearch_tol, the more accurate the linesearch). The default value requests an inaccurate search, and is appropriate for most problems. A more accurate search may be appropriate when it is desirable to reduce the number of iterations – for example, if the objective function is cheap to evaluate.
Constraint: 0.0 ≤ options.linesearch_tol < 1.0.
max_line_step – double Default = 10^10
On entry: defines the maximum allowable step length for the line search.
Constraint: options.max_line_step > 0.0.
f_est – double
On entry: specifies the user-supplied guess of the optimum objective function value. This value is used by nag_opt_conj_grad (e04dgc) to calculate an initial step length (see Section 3). If no value is supplied then an initial step length of 1.0 will be used but it should be noted that for badly scaled functions a unit step along the steepest descent direction will often compute the function at very large values of x .
iter – Integer 
On exit: the number of iterations which have been performed in nag_opt_conj_grad (e04dgc).
nf – Integer 
On exit: the number of times the objective function has been evaluated (i.e., number of calls of objfun). The total excludes the calls made to objfun for purposes of derivative checking.

Description of Printed Output

The level of printed output can be controlled with the structure members options.list, options.print_gcheck and options.print_level (see Section 11.2). If options.list=Nag_TRUE  then the argument values to nag_opt_conj_grad (e04dgc) are listed, followed by the result of any derivative check if options.print_gcheck=Nag_TRUE . The printout of the optimization results is governed by the value of options.print_level. The default of options.print_level=Nag_Soln_Iter provides a single line of output at each iteration and the final result. This section describes all of the possible levels of results printout available from nag_opt_conj_grad (e04dgc).
If a simple derivative check, options.verify_grad=Nag_SimpleCheck, is requested then the directional derivative, g(x)^T p, of the objective gradient and its finite difference approximation are printed out, where p is a random vector of unit length.
When a component derivative check, options.verify_grad=Nag_CheckObj, is requested then the following results are supplied for each component:
x[i] the element of x .
dx[i] the optimal finite difference interval.
g[i] the gradient element.
Difference approxn. the finite difference approximation.
Itns the number of trials performed to find a suitable difference interval.
The indicator, OK or BAD?, states whether the gradient element and finite difference approximation are in agreement.
If the gradient is believed to be in error nag_opt_conj_grad (e04dgc) will exit with fail set to NE_DERIV_ERRORS.
When options.print_level=Nag_Iter or Nag_Soln_Iter a single line of output is produced on completion of each iteration, this gives the following values:
Itn the current iteration number k .
Nfun the cumulative number of calls to objfun. The evaluations needed for the estimation of the gradients by finite differences are not included in the total Nfun. The value of Nfun is a guide to the amount of work required for the linesearch. nag_opt_conj_grad (e04dgc) will perform at most 16 function evaluations per iteration.
Objective the current value of the objective function, F(x_k).
Norm g the Euclidean norm of the gradient vector, ||g(x_k)||.
Norm x the Euclidean norm of x_k.
Norm(x(k-1)-x(k)) the Euclidean norm of x_{k-1} - x_k.
Step the step α taken along the computed search direction p_k.
If options.print_level=Nag_Soln or Nag_Soln_Iter, the final result is printed out. This consists of:
x the final point, x*.
g the final gradient vector, g(x*).
If options.print_level=Nag_NoPrint then printout will be suppressed; you can print the final solution when nag_opt_conj_grad (e04dgc) returns to the calling program.

Output of results via a user-defined printing function

You may also specify your own print function for output of the results of any gradient check, the optimization results at each iteration and the final solution. The user-defined print function should be assigned to the options.print_fun function pointer, which has prototype
void (*print_fun)(const Nag_Search_State *st, Nag_Comm *comm);
The rest of this section can be skipped if the default printing facilities provide the required functionality.
When a user-defined function is assigned to options.print_fun this will be called in preference to the internal print function of nag_opt_conj_grad (e04dgc). Calls to the user-defined function are again controlled by means of the options.print_gcheck and options.print_level members. Information is provided through st and comm the two structure arguments to options.print_fun.
If comm->it_prt = Nag_TRUE then the results from the last iteration of nag_opt_conj_grad (e04dgc) are in the following members of st:
n – Integer
The number of variables.
x – double *
Points to the st->n memory locations holding the current point x_k.
f – double
The value of the current objective function.
g – double *
Points to the st->n memory locations holding the first derivatives of F at the current point x_k.
step – double
The step α taken along the search direction p_k.
xk_norm – double
The Euclidean norm of x_{k-1} - x_k.
iter – Integer
The number of iterations performed by nag_opt_conj_grad (e04dgc).
nf – Integer
The cumulative number of calls made to objfun.
If comm->g_prt = Nag_TRUE then the following members are set:
n – Integer
The number of variables.
x – double *
Points to the st->n memory locations holding the initial point x_0.
g – double *
Points to the st->n memory locations holding the first derivatives of F at the initial point x_0.
Details of any derivative check performed by nag_opt_conj_grad (e04dgc) are held in the following substructure of st:
gprint – Nag_GPrintSt *
Which in turn contains two substructures, g_chk and f_sim, and a pointer to an array of substructures, f_comp.
g_chk – Nag_Grad_Chk_St *
This substructure contains the members:
type – Nag_GradChk
The type of derivative check performed by nag_opt_conj_grad (e04dgc). This will be the same value as in options.verify_grad.
This member will be equal to one of the error codes NE_NOERROR or NE_DERIV_ERRORS according to whether the derivatives were found to be correct or not.
Specifies the gradient element at which any component check started. This value will be equal to options.obj_check_start.
Specifies the gradient element at which any component check ended. This value will be equal to options.obj_check_stop.
f_sim – Nag_SimSt *
The result of a simple derivative check, g_chk->type = Nag_SimpleCheck, will be held in this substructure, which has members:
If Nag_TRUE then the objective gradient is consistent with the finite difference approximation according to a simple check.
dir_deriv – double *
The directional derivative g(x)^T p, where p is a random vector of unit length with elements of approximately equal magnitude.
fd_approx – double *
The finite difference approximation, (F(x + hp) - F(x))/h, to the directional derivative.
f_comp – Nag_CompSt *
The results of a component derivative check, g_chk->type = Nag_CheckObj, will be held in the array of st->n substructures of type Nag_CompSt pointed to by f_comp. The procedure for the derivative check is based on finding an interval that produces an acceptable estimate of the second derivative, and then using that estimate to compute an interval that should produce a reasonable forward-difference approximation. The gradient element is then compared with the difference approximation. (The method of finite difference interval estimation is based on Gill et al. (1983).)
If Nag_TRUE then this objective gradient component is consistent with its finite difference approximation.
hopt – double *
The optimal finite difference interval. This is dx[i] in the NAG default printout.
gdiff – double *
The finite difference approximation for this gradient component.
The number of trials performed to find a suitable difference interval.
A character string which describes the possible nature of the reason for which an estimation of the finite difference interval failed to produce a satisfactory relative condition error of the second-order difference. Possible strings are: "Constant?", "Linear or odd?", "Too nonlinear?" and "Small derivative?".
The relevant members of the structure comm are:
g_prt – Nag_Boolean
Will be Nag_TRUE only when the print function is called with the result of the derivative check of objfun.
it_prt – Nag_Boolean
Will be Nag_TRUE when the print function is called with the result of the current iteration.
sol_prt – Nag_Boolean
Will be Nag_TRUE when the print function is called with the final result.
user – double *
iuser – Integer *
p – Pointer
Pointers for communication of user information. If used they must be allocated memory either before entry to nag_opt_conj_grad (e04dgc) or during a call to objfun or options.print_fun. The type Pointer will be void * with a C compiler that defines void *, and char * otherwise.
© The Numerical Algorithms Group Ltd, Oxford, UK. 2017