library.opt Submodule

Module Summary

Interfaces for the NAG Mark 30.0 opt Chapter.

opt - Minimizing or Maximizing a Function

This module provides functions for solving various mathematical optimization problems by solvers based on local stopping criteria. The main classes of problems covered in this module are:

Linear Programming (LP) – dense and sparse;

Quadratic Programming (QP) – convex and nonconvex, dense and sparse;

Quadratically Constrained Quadratic Programming (QCQP) – convex and nonconvex;

Nonlinear Programming (NLP) – dense and sparse, based on active-set SQP methods or interior point methods (IPM);

Second-order Cone Programming (SOCP);

Semidefinite Programming (SDP) – both linear matrix inequalities (LMI) and bilinear matrix inequalities (BMI);

Derivative-free Optimization (DFO);

Least Squares (LSQ) – linear and nonlinear, constrained and unconstrained;

General Nonlinear Data Fitting (NLDF) – nonlinear loss functions with regularization, constrained and unconstrained.

For a full overview of the functionality offered in this module, see the functionality index or the Module Contents (submodule opt).

See also other modules in the NAG Library relevant to optimization:

submodule glopt contains functions to solve global optimization problems;

submodule mip addresses problems arising in operational research and focuses on Mixed Integer Programming (MIP);

submodule lapacklin and submodule lapackeig include functions for linear algebra and in particular unconstrained linear least squares;

submodule fit focuses on curve and surface fitting, in which linear data fitting in the ℓ1 or ℓ∞ norm might be of interest;

submodule correg offers several regression (data fitting) functions, including linear, nonlinear and quantile regression, LARS, LASSO and others.

See Also

naginterfaces.library.examples.opt:

This subpackage contains examples for the opt module. See also the Examples subsection.
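
For instance, once the naginterfaces package is installed, any of the shipped examples can be imported and run directly; the following runs the LP interior point example whose output is reproduced in the Examples subsection:

>>> from naginterfaces.library.examples.opt import handle_solve_lp_ipm_ex
>>> handle_solve_lp_ipm_ex.main()  # prints the results shown in the Examples subsection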

Functionality Index

Linear programming (LP)

dense

active-set method/primal simplex

alternative 1: lp_solve()

alternative 2: lsq_lincon_solve()

sparse

interior point method (IPM): handle_solve_lp_ipm()

active-set method/primal simplex

recommended (see the E04 Introduction): qpconvex2_sparse_solve()

alternative: qpconvex1_sparse_solve()

Quadratic programming (QP)

dense

active-set method for (possibly nonconvex) QP problem: qp_dense_solve()

active-set method for convex QP problem: lsq_lincon_solve()

sparse

active-set method for sparse convex QP problems

recommended (see the E04 Introduction): qpconvex2_sparse_solve()

alternative: qpconvex1_sparse_solve()

interior point method (IPM) for (possibly nonconvex) QP problems: handle_solve_ipopt()

Second-order Cone Programming (SOCP)

dense or sparse

interior point method: handle_solve_socp_ipm()

Semidefinite programming (SDP)

generalized augmented Lagrangian method for SDP and SDP with bilinear matrix inequalities (BMI-SDP): handle_solve_pennon()

Nonlinear programming (NLP)

dense

active-set sequential quadratic programming (SQP)

direct communication

recommended (see the E04 Introduction): nlp1_solve()

alternative: nlp2_solve()

reverse communication: nlp1_rcomm()

sparse

active-set sequential quadratic programming (SQP): handle_solve_ssqp()

alternative: nlp2_sparse_solve()

alternative: nlp1_sparse_solve()

interior point method (IPM): handle_solve_ipopt()

Nonlinear programming (NLP) – derivative-free optimization (DFO)

model-based method for bound-constrained optimization: bounds_bobyqa_func()

Nelder–Mead simplex method for unconstrained optimization: uncon_simplex()

Nonlinear programming (NLP) – special cases

unidimensional optimization (one-dimensional) with bound constraints

method based on quadratic interpolation, no derivatives: one_var_func()

method based on cubic interpolation: one_var_deriv()

unconstrained

preconditioned conjugate gradient method: uncon_conjgrd_comp()

bound-constrained

first order active-set method (nonlinear conjugate gradient): handle_solve_bounds_foas()

quasi-Newton algorithm, no derivatives: bounds_quasi_func_easy()

quasi-Newton algorithm, first derivatives: bounds_quasi_deriv_easy()

modified Newton algorithm, first derivatives: bounds_mod_deriv_comp()

modified Newton algorithm, first derivatives, easy-to-use: bounds_mod_deriv_easy()

modified Newton algorithm, first and second derivatives: bounds_mod_deriv2_comp()

modified Newton algorithm, first and second derivatives, easy-to-use: bounds_mod_deriv2_easy()

Linear least squares, linear regression, data fitting

constrained

bound-constrained least squares problem: bnd_lin_lsq()

linearly-constrained active-set method: lsq_lincon_solve()

Data fitting

general loss functions (for sum of squares, see nonlinear least squares): handle_solve_nldf()

Nonlinear least squares, data fitting

unconstrained

combined Gauss–Newton and modified Newton algorithm

no derivatives: lsq_uncon_mod_func_comp()

no derivatives, easy-to-use: lsq_uncon_mod_func_easy()

first derivatives: lsq_uncon_mod_deriv_comp()

first derivatives, easy-to-use: lsq_uncon_mod_deriv_easy()

first and second derivatives: lsq_uncon_mod_deriv2_comp()

first and second derivatives, easy-to-use: lsq_uncon_mod_deriv2_easy()

combined Gauss–Newton and quasi-Newton algorithm

first derivatives: lsq_uncon_quasi_deriv_comp()

first derivatives, easy-to-use: lsq_uncon_quasi_deriv_easy()

covariance matrix for nonlinear least squares problem (unconstrained): lsq_uncon_covariance()

bound constrained

model-based derivative-free algorithm

direct communication: handle_solve_dfls()

reverse communication: handle_solve_dfls_rcomm()

trust region algorithm

first derivatives, optionally second derivatives: handle_solve_bxnl()

generic, including nonlinearly constrained

nonlinear constraints active-set sequential quadratic programming (SQP): lsq_gencon_deriv()

NAG optimization modelling suite (a usage sketch follows this subsection)

initialization of a handle for the suite

initialization as an empty problem: handle_init()

read a problem from a file to a handle: handle_read_file()

problem definition

define a linear objective function: handle_set_linobj()

define a linear or a quadratic objective function: handle_set_quadobj()

define nonlinear residual functions: handle_set_nlnls()

define a nonlinear objective function: handle_set_nlnobj()

define a second-order cone: handle_set_group()

define bounds of variables: handle_set_simplebounds()

define a block of linear constraints: handle_set_linconstr()

define a block of nonlinear constraints: handle_set_nlnconstr()

define the structure of the Hessian of the objective, constraints or the Lagrangian: handle_set_nlnhess()

add one or more linear matrix inequality constraints: handle_set_linmatineq()

define bilinear matrix terms: handle_set_quadmatineq()

define a quadratic objective or constraint, giving a factor of the quadratic coefficient matrix: handle_set_qconstr_fac()

define a quadratic objective or constraint, giving the full quadratic coefficient matrix: handle_set_qconstr()

set variable properties (e.g., integrality): handle_set_property()

problem editing

define new variables: handle_add_vars()

disable (temporarily remove) components of the model: handle_disable()

enable (bring back) previously disabled components of the model: handle_enable()

modify a single coefficient in a linear constraint: handle_set_linconstr_coeff()

modify a single coefficient in the linear objective function: handle_set_linobj_coeff()

modify bounds of an existing constraint or variable: handle_set_bound()

solvers

interior point method (IPM) for linear programming (LP): handle_solve_lp_ipm()

first order active-set method (nonlinear conjugate gradient): handle_solve_bounds_foas()

active-set sequential quadratic programming method (SQP) for nonlinear programming (NLP): handle_solve_ssqp()

interior point method (IPM) for nonlinear programming (NLP): handle_solve_ipopt()

generalized augmented Lagrangian method for SDP and SDP with bilinear matrix inequalities (BMI-SDP): handle_solve_pennon()

interior point method (IPM) for Second-order Cone programming (SOCP): handle_solve_socp_ipm()

constrained nonlinear data fitting (NLDF): handle_solve_nldf()

derivative-free optimization (DFO) for nonlinear least squares problems

direct communication: handle_solve_dfls()

reverse communication: handle_solve_dfls_rcomm()

trust region optimization for nonlinear least squares problems (BXNL): handle_solve_bxnl()

model-based method for bound-constrained optimization

direct communication: handle_solve_dfno()

reverse communication: handle_solve_dfno_rcomm()

deallocation

destroy the problem handle: handle_free()

service routines

print information about a problem handle: handle_print()

set/get real information in a problem handle: handle_set_get_real()

set/get integer information in a problem handle: handle_set_get_integer()

supply option values from a character string: handle_opt_set()

get the setting of an option: handle_opt_get()

supply option values from external file: handle_opt_set_file()
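
As a rough illustration of how these routines combine, the sketch below builds a tiny two-variable LP with the suite and passes it to the LP interior point solver. The problem data are invented for illustration, and the option string is only an example; the keyword arguments (nvar, cvec, bl, bu, irowb, icolb, b) and the .x field of the solver's return value are written as documented for the individual routines, whose documents should be consulted for the exact signatures.

from naginterfaces.library import opt

# Initialize an empty handle for a problem in two variables.
handle = opt.handle_init(nvar=2)

# Linear objective: minimize -x1 - 2*x2.
opt.handle_set_linobj(handle, cvec=[-1.0, -2.0])

# Simple bounds 0 <= x <= 10.
opt.handle_set_simplebounds(handle, bl=[0.0, 0.0], bu=[10.0, 10.0])

# One linear constraint, x1 + x2 <= 10, in sparse coordinate form
# (-1.e20 denotes 'no lower bound' under the default Infinite Bound Size).
opt.handle_set_linconstr(
    handle,
    bl=[-1.e20], bu=[10.0],
    irowb=[1, 1], icolb=[1, 2], b=[1.0, 1.0],
)

# Optional algorithmic parameters are supplied as option strings.
opt.handle_opt_set(handle, 'Print Options = No')

# Solve with the LP interior point solver; res.x holds the primal solution.
res = opt.handle_solve_lp_ipm(handle, x=[1.0, 1.0])
print(res.x)

# Destroy the handle once it is no longer needed.
opt.handle_free(handle)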

Service functions

input and output (I/O)

read MPS data file defining LP, QP, MILP or MIQP problem: miqp_mps_read()

write MPS data file defining LP, QP, MILP or MIQP problem: miqp_mps_write()

read sparse SDPA data files for linear SDP problems: sdp_read_sdpa()

read MPS data file defining LP or QP problem (deprecated): qpconvex1_sparse_mps()

read a problem from a file to a handle: handle_read_file()

derivative check and approximation

check user’s function for calculating first derivatives of function: check_deriv()

check user’s function for calculating second derivatives of function: check_deriv2()

check user’s function for calculating Jacobian of first derivatives: lsq_check_deriv()

check user’s function for calculating Hessian of a sum of squares: lsq_check_hessian()

estimate (using numerical differentiation) gradient and/or Hessian of a function: estimate_deriv()

determine the pattern of nonzeros in the Jacobian matrix for nlp2_sparse_solve(): nlp2_sparse_jacobian()

covariance matrix for nonlinear least squares problem (unconstrained): lsq_uncon_covariance()

option setting functions (a usage sketch follows this list)

NAG optimization modelling suite

supply option values from a character string: handle_opt_set()

get the setting of an option: handle_opt_get()

supply option values from external file: handle_opt_set_file()

uncon_conjgrd_comp()

supply option values from external file: uncon_conjgrd_option_file()

supply option values from a character string: uncon_conjgrd_option_string()

lp_solve()

supply option values from external file: lp_option_file()

supply option values from a character string: lp_option_string()

lsq_lincon_solve()

supply option values from external file: lsq_lincon_option_file()

supply option values from a character string: lsq_lincon_option_string()

qp_dense_solve()

supply option values from external file: qp_dense_option_file()

supply option values from a character string: qp_dense_option_string()

qpconvex1_sparse_solve()

supply option values from external file: qpconvex1_sparse_option_file()

supply option values from a character string: qpconvex1_sparse_option_string()

qpconvex2_sparse_solve()

initialization function: qpconvex2_sparse_init()

supply option values from external file: qpconvex2_sparse_option_file()

set a single option from a character string: qpconvex2_sparse_option_string()

set a single option from an integer argument: qpconvex2_sparse_option_integer_set()

set a single option from a real argument: qpconvex2_sparse_option_double_set()

get the setting of an integer valued option: qpconvex2_sparse_option_integer_get()

get the setting of a real valued option: qpconvex2_sparse_option_double_get()

nlp1_solve() and nlp1_rcomm()

initialization function for nlp1_solve() and nlp1_rcomm(): nlp1_init()

supply option values from external file: nlp1_option_file()

supply option values from a character string: nlp1_option_string()

nlp1_sparse_solve()

supply option values from external file: nlp1_sparse_option_file()

supply option values from a character string: nlp1_sparse_option_string()

lsq_gencon_deriv()

supply option values from external file: lsq_gencon_deriv_option_file()

supply option values from a character string: lsq_gencon_deriv_option_string()

nlp2_sparse_solve()

initialization function: nlp2_sparse_init()

supply option values from external file: nlp2_sparse_option_file()

set a single option from a character string: nlp2_sparse_option_string()

set a single option from an integer argument: nlp2_sparse_option_integer_set()

set a single option from a real argument: nlp2_sparse_option_double_set()

get the setting of an integer valued option: nlp2_sparse_option_integer_get()

get the setting of a real valued option: nlp2_sparse_option_double_get()

nlp2_solve()

initialization function: nlp2_init()

supply option values from external file: nlp2_option_file()

set a single option from a character string: nlp2_option_string()

set a single option from an integer argument: nlp2_option_integer_set()

set a single option from a real argument: nlp2_option_double_set()

get the setting of an integer valued option: nlp2_option_integer_get()

get the setting of a real valued option: nlp2_option_double_get()
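
These per-solver option routines follow a common pattern: an initialization call creates a communication structure, individual option strings are loaded into it, and the structure is then passed to the solver. A minimal sketch of that pattern for nlp1_solve() is shown below; the option strings are illustrative and the solver call itself (which needs the full problem data) is omitted.

from naginterfaces.library import opt

# Create the communication structure for nlp1_solve()/nlp1_rcomm().
comm = opt.nlp1_init('nlp1_solve')

# Load individual options as character strings before the solve.
opt.nlp1_option_string('Verify Level = -1', comm)
opt.nlp1_option_string('Major Iteration Limit = 50', comm)

# The populated structure is then passed to the solver, e.g.
# opt.nlp1_solve(..., comm=comm), together with the problem data.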

For full information please refer to the NAG Library document

https://support.nag.com/numeric/nl/nagdoc_30/flhtml/e04/e04intro.html

Examples

naginterfaces.library.examples.opt.handle_add_vars_ex.main()

Example for naginterfaces.library.opt.handle_add_vars()

NAG Optimization Modelling suite: adding variables to an optimization model.

This example program demonstrates how to edit an LP model using the NAG Optimization Modelling Suite (NOMS) functionality.

>>> main()
naginterfaces.library.opt.handle_add_vars Python Example Results

Solve the first LP

 E04MT, Interior point method for LP problems
 Status: converged, an optimal solution found
 Final primal objective value  8.500000E+02
 Final dual objective value    8.500000E+02

 Primal variables:
   idx   Lower bound       Value       Upper bound
     1   0.00000E+00    2.00000E+02         inf
     2   0.00000E+00    1.00000E+02    1.00000E+02

The new variable has been added, solve the handle again

 E04MT, Interior point method for LP problems
 Status: converged, an optimal solution found
 Final primal objective value  9.000000E+02
 Final dual objective value    9.000000E+02

 Primal variables:
   idx   Lower bound       Value       Upper bound
     1   0.00000E+00    5.00000E+01         inf
     2   0.00000E+00    1.00000E+02    1.00000E+02
     3   0.00000E+00    5.00000E+01    5.00000E+01

The new constraint has been added, solve the handle again

 E04MT, Interior point method for LP problems
 Status: converged, an optimal solution found
 Final primal objective value  8.750000E+02
 Final dual objective value    8.750000E+02

 Primal variables:
   idx   Lower bound       Value       Upper bound
     1   0.00000E+00    1.50000E+02         inf
     2   0.00000E+00    5.00000E+01    1.00000E+02
     3   0.00000E+00    5.00000E+01    5.00000E+01
naginterfaces.library.examples.opt.handle_disable_ex.main()

Example for naginterfaces.library.opt.handle_disable()

NAG Optimization Modelling suite: disabling a residual from a nonlinear least squares problem.

>>> main()
naginterfaces.library.opt.handle_disable Python Example Results
First solve the problem with the outliers
--------------------------------------------------------
 E04GG, Nonlinear least squares method for bound-constrained problems
 Status: converged, an optimal solution was found
 Value of the objective             1.05037E+00
 Norm of gradient                   8.78014E-06
 Norm of scaled gradient            6.05781E-06
 Norm of step                       1.47886E-01

 Primal variables:
   idx   Lower bound       Value       Upper bound
     1       -inf        3.61301E-01        inf
     2       -inf        9.10227E-01        inf
     3       -inf        3.42138E-03        inf
     4       -inf       -6.08965E+00        inf
     5       -inf        6.24881E-04        inf

Now remove the outlier residuals from the problem handle

 E04GG, Nonlinear least squares method for bound-constrained problems
 Status: converged, an optimal solution was found
 Value of the objective             5.96811E-02
 Norm of gradient                   1.19914E-06
 Norm of scaled gradient            3.47087E-06
 Norm of step                       3.49256E-06

 Primal variables:
   idx   Lower bound       Value       Upper bound
     1       -inf        3.53888E-01        inf
     2       -inf        1.06575E+00        inf
     3       -inf        1.91383E-03        inf
     4       -inf        2.17299E-01        inf
     5       -inf        5.17660E+00        inf
--------------------------------------------------------

Assuming the outliers points are measured again
we can enable the residuals and adjust the values

--------------------------------------------------------
 E04GG, Nonlinear least squares method for bound-constrained problems
 Status: converged, an optimal solution was found
 Value of the objective             6.51802E-02
 Norm of gradient                   2.57338E-07
 Norm of scaled gradient            7.12740E-07
 Norm of step                       1.56251E-05

 Primal variables:
   idx   Lower bound       Value       Upper bound
     1   3.00000E-01    3.00000E-01    3.00000E-01
     2       -inf       1.06039E+00         inf
     3       -inf       2.11765E-02         inf
     4       -inf       2.11749E-01         inf
     5       -inf       5.16415E+00         inf
naginterfaces.library.examples.opt.handle_solve_bounds_foas_ex.main()

Example for naginterfaces.library.opt.handle_solve_bounds_foas().

Large-scale first order active set bound-constrained nonlinear programming.

Demonstrates using the FileObjManager class.

>>> main()
naginterfaces.library.opt.handle_solve_bounds_foas Python Example Results.
Minimizing a bound-constrained Rosenbrock problem.
 E04KF, First order method for bound-constrained problems
 Begin of Options
...
 End of Options


 Status: converged, an optimal solution was found
 Value of the objective             4.00000E-02
...
naginterfaces.library.examples.opt.handle_solve_dfls_ex.main()

Example for naginterfaces.library.opt.handle_solve_dfls().

Derivative-free solver for a nonlinear least squares objective function.

Demonstrates handling optional algorithmic parameters in the NAG optimization modelling suite.

>>> main()
naginterfaces.library.opt.handle_solve_dfls Python Example Results.
Minimizing the Kowalik and Osborne function.
...
  Status: Converged, small trust region size
...
  Value of the objective                    4.02423E-04
  Number of objective function evaluations           27
  Number of steps                                    10
...
naginterfaces.library.examples.opt.handle_solve_dfno_ex.main()

Example for naginterfaces.library.opt.handle_solve_dfno().

Derivative-free solver for nonlinear problems.

Demonstrates terminating early in a callback function and how to silence the subsequent NagCallbackTerminateWarning.

>>> main()
naginterfaces.library.opt.handle_solve_dfno Python Example Results.
Minimizing a bound-constrained nonlinear problem.
...
Terminating early because rho is small enough.
Function value at lowest point found is 2.43383.
The corresponding x is (1.0000, -0.0858, 0.4097, 1.0000).
naginterfaces.library.examples.opt.handle_solve_ipopt_ex.main()

Example for naginterfaces.library.opt.handle_solve_ipopt().

Interior-point solver for sparse NLP.

>>> main()
naginterfaces.library.opt.handle_solve_ipopt Python Example Results.
Solving a problem based on Hock and Schittkowski Problem 73.
Solving with a nonlinear objective.
At the solution the objective function is 2.9894378e+01.
naginterfaces.library.examples.opt.handle_solve_lp_ipm_ex.main()

Example for naginterfaces.library.opt.handle_solve_lp_ipm().

Large-scale linear programming based on an interior point method.

>>> main()
naginterfaces.library.opt.handle_solve_lp_ipm Python Example Results.
Solve a small LP problem.
 E04MT, Interior point method for LP problems
 Status: converged, an optimal solution found
 Final primal objective value  2.359648E-02
 Final dual objective value    2.359648E-02
naginterfaces.library.examples.opt.handle_solve_nldf_ex.main()

Example for naginterfaces.library.opt.handle_solve_nldf().

General nonlinear data-fitting with constraints.

Solve a nonlinear regression problem using both least squares and robust regression.

>>> main()
naginterfaces.library.opt.handle_solve_nldf Python Example Results.
First solve the problem using least squares loss function
 E04GN, Nonlinear Data-Fitting
 Status: converged, an optimal solution found
 Final objective value  4.590715E+01

 Primal variables:
   idx   Lower bound       Value       Upper bound
     1  -1.00000E+00    9.43732E-02         inf
     2   0.00000E+00    7.74046E-01    1.00000E+00
---------------------------------------------------------
Now solve the problem using SmoothL1 loss function
 E04GN, Nonlinear Data-Fitting
 Status: converged, an optimal solution found
 Final objective value  1.294635E+01

 Primal variables:
   idx   Lower bound       Value       Upper bound
     1  -1.00000E+00    9.69201E-02         inf
     2   0.00000E+00    7.95110E-01    1.00000E+00
naginterfaces.library.examples.opt.handle_solve_pennon_bmi_ex.main()

Example for naginterfaces.library.opt.handle_solve_pennon(), with bilinear matrix inequality constraints.

Semidefinite programming using Pennon.

>>> main()
naginterfaces.library.opt.handle_solve_pennon Python Example Results.
BMI-SDP.
Final objective value is 2.000000
at the point:
(1.000e+00, 1.029e-15, 1.000e+00, 1.314e+03, 1.311e+03).
naginterfaces.library.examples.opt.handle_solve_pennon_lmi_ex.main()

Example for naginterfaces.library.opt.handle_solve_pennon(), with linear matrix inequality constraints.

Semidefinite programming using Pennon.

>>> main()
naginterfaces.library.opt.handle_solve_pennon Python Example Results.
Find the Lovasz theta number for a Petersen Graph.
Lovasz theta number of the given graph is    4.00.
naginterfaces.library.examples.opt.handle_solve_socp_ipm_ex.main()

Example for naginterfaces.library.opt.handle_solve_socp_ipm().

Second order cone programming based on an interior point method.

>>> main()
naginterfaces.library.opt.handle_solve_socp_ipm Python Example Results.
Solve a small SOCP problem.
 E04PT, Interior point method for SOCP problems
 Status: converged, an optimal solution found
 Final primal objective value -1.951817E+01
 Final dual objective value   -1.951817E+01
naginterfaces.library.examples.opt.handle_solve_ssqp_ex.main()

Example for naginterfaces.library.opt.handle_solve_ssqp().

SQP solver for sparse NLP.

NLP example: quadratic objective, linear constraint and two nonlinear constraints. For illustrative purposes, the quadratic objective is coded as a nonlinear function to show the usage of the objfun and objgrd user callbacks.

>>> main()
naginterfaces.library.opt.handle_solve_ssqp Python Example Results.
At the solution the objective function is 1.900124e+00.
naginterfaces.library.examples.opt.lsq_gencon_deriv_ex.main()

Example for naginterfaces.library.opt.lsq_gencon_deriv().

Minimum of a sum of squares, nonlinear constraints, dense, active-set SQP method, using function values and optionally first derivatives.

>>> main()
naginterfaces.library.opt.lsq_gencon_deriv Python Example Results.
Minimizing Problem 57 from Hock and Schittkowski's
'Test Examples for Nonlinear Programming Codes'.
Final function value is 1.42298348615e-02.
naginterfaces.library.examples.opt.lsq_uncon_mod_func_comp_ex.main()

Example for naginterfaces.library.opt.lsq_uncon_mod_func_comp().

Find an unconstrained minimum of a sum of squares of m nonlinear functions in n variables (m at least n). No derivatives are required.

>>> main()
naginterfaces.library.opt.lsq_uncon_mod_func_comp Python Example Results.
Find an unconstrained minimum of a sum of squares.

Best fit model parameters are:
        x_0 =      0.082
        x_1 =      1.133
        x_2 =      2.344

Residuals for observed data:
   -0.0059   -0.0003    0.0003    0.0065   -0.0008
   -0.0013   -0.0045   -0.0200    0.0822   -0.0182
   -0.0148   -0.0147   -0.0112   -0.0042    0.0068

Sum of squares of residuals: 0.0082
naginterfaces.library.examples.opt.nlp1_rcomm_ex.main()

Example for naginterfaces.library.opt.nlp1_rcomm().

Dense NLP.

>>> main()
naginterfaces.library.opt.nlp1_rcomm Python Example Results.
Solve Hock and Schittkowski Problem 71.
Final objective value is 1.7014017e+01
naginterfaces.library.examples.opt.nlp1_solve_ex.main()

Example for naginterfaces.library.opt.nlp1_solve().

Dense NLP.

Demonstrates handling optional algorithmic parameters.

>>> main()
naginterfaces.library.opt.nlp1_solve Python Example Results.
Solve Hock and Schittkowski Problem 71.
Final objective value is 1.7014017e+01
naginterfaces.library.examples.opt.nlp1_sparse_solve_ex.main()

Example for naginterfaces.library.opt.nlp1_sparse_solve().

Sparse NLP.

>>> main()
naginterfaces.library.opt.nlp1_sparse_solve Python Example Results.
Solve Hock and Schittkowski Problem 74.
Final objective value is 5.1264981e+03
naginterfaces.library.examples.opt.nlp2_solve_ex.main()

Example for naginterfaces.library.opt.nlp2_solve().

Dense NLP.

>>> main()
naginterfaces.library.opt.nlp2_solve Python Example Results.
Solve Hock and Schittkowski Problem 71.
Final objective value is 1.7014017e+01
naginterfaces.library.examples.opt.nlp2_sparse_solve_ex.main()

Example for naginterfaces.library.opt.nlp2_sparse_solve().

Sparse NLP.

>>> main()
naginterfaces.library.opt.nlp2_sparse_solve Python Example Results.
Solve Hock and Schittkowski Problem 74.
Final objective value is 5.1264981e+03
naginterfaces.library.examples.opt.qpconvex2_sparse_solve_ex.main()

Example for naginterfaces.library.opt.qpconvex2_sparse_solve().

Sparse QP/LP.

>>> main()
naginterfaces.library.opt.qpconvex2_sparse_solve Python Example Results.
Sparse QP/LP.
Function value at lowest point found is -1847784.67712.
The corresponding x is:
(0.00, 349.40, 648.85, 172.85, 407.52, 271.36, 150.02).
naginterfaces.library.examples.opt.uncon_simplex_ex.main()

Example for naginterfaces.library.opt.uncon_simplex().

Unconstrained minimum, Nelder–Mead simplex algorithm.

>>> main()
naginterfaces.library.opt.uncon_simplex Python Example Results.
Nelder--Mead simplex algorithm.
The final function value is 0.0000
at the point x = (0.5000, -0.9999)