Optimization in the NAG Library

Extensively tested and documented optimization routines

One of the most popular sections in the NAG Library is the Optimization set of routines. It's not difficult to understand why; after all, there aren't many organisations or researchers that don't need to be continually achieving optimal results!

Tried and Trusted

NAG optimization experts have developed, extensively tested and documented a wide range of routines to ensure the accuracy and speed of computation and give peace of mind when results matter. Like all NAG software, the Optimization routines are highly flexible - callable from various mathematical packages, including MATLAB®, and usable from many programming languages.

Many of the world's most prominent and respected finance houses and institutions rely on NAG’s mathematical and statistical software components for their numerical applications because of the quality and accuracy the software gives to their work.

By utilizing NAG’s software in financial modelling, critical business applications can be built more quickly - essential in today’s ever-changing and volatile markets. And by embedding NAG software within financial applications, analysts and software engineers are able to spend more time on other areas of their work, and so ultimately improve productivity and time management.

Mathematically, optimization is the term used for the minimization or maximization of a function.

This function is called the objective function. We will denote the objective function as F(x), where x is, in general, an n-vector of variables that we are trying to determine.
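As a concrete illustration (this is plain Python, not a NAG routine), an objective function can be coded as a function of an n-vector; the two-variable Rosenbrock function shown here is a classic test problem in unconstrained optimization.

```python
def rosenbrock(x):
    """Classic two-variable test objective:
    F(x) = (1 - x1)^2 + 100*(x2 - x1^2)^2.
    Its minimum value, 0, is attained at x = (1, 1).
    """
    x1, x2 = x
    return (1 - x1) ** 2 + 100 * (x2 - x1 ** 2) ** 2

print(rosenbrock([1.0, 1.0]))  # 0.0 at the minimizer
print(rosenbrock([0.0, 0.0]))  # 1.0 elsewhere
```

An optimization routine would be handed a function like this and asked to find the x that minimizes it.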

The variables denoted by x may have no limitations on their value, in which case the problem is called unconstrained.

A simple bound constrained problem merely imposes bounds on the range of values that the variables may take. For example, the variables might be constrained to lie within the interval [-1,1], or may be constrained so that the values of x are all positive.
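Bound constraints are easy to state in code. The following sketch (illustrative only, not a NAG interface) checks whether a point satisfies per-variable lower and upper bounds:

```python
def within_bounds(x, lower, upper):
    """Check the simple bound constraints lower[i] <= x[i] <= upper[i]."""
    return all(lo <= xi <= hi for xi, lo, hi in zip(x, lower, upper))

# Variables constrained to lie within the interval [-1, 1]:
print(within_bounds([0.5, -0.9], [-1, -1], [1, 1]))  # True
print(within_bounds([1.2, 0.0], [-1, -1], [1, 1]))   # False

# Variables constrained to be positive (no upper limit):
print(within_bounds([2.0, 3.0], [0, 0], [float("inf")] * 2))  # True
```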

More often the values of x are constrained in some other way; typically there are constraint functions G(x) such that A <= G(x) <= B. If the functions G(x) are linear (i.e. a weighted sum of the variables x) then the problem is linearly constrained; otherwise the problem is nonlinearly constrained.
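A linear constraint function is just a weighted sum of the variables, so checking A <= G(x) <= B reduces to a few multiplications and comparisons. A minimal sketch (illustrative names, not NAG's):

```python
def linear_constraints_satisfied(x, weights, lower, upper):
    """Evaluate each linear constraint function
    G_j(x) = sum_i weights[j][i] * x[i]
    and check that lower[j] <= G_j(x) <= upper[j]."""
    for w_row, lo, hi in zip(weights, lower, upper):
        g = sum(w * xi for w, xi in zip(w_row, x))
        if not (lo <= g <= hi):
            return False
    return True

# A single linear constraint: 0 <= x1 + 2*x2 <= 4
print(linear_constraints_satisfied([1.0, 1.0], [[1, 2]], [0], [4]))  # True: G = 3
print(linear_constraints_satisfied([3.0, 1.0], [[1, 2]], [0], [4]))  # False: G = 5
```

A nonlinear constraint would simply replace the weighted sum with a more general function of x.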

It is also possible to meet constraints that require the solution to be integral, i.e. only integer values are allowed. Such problems are termed integer programming problems. Whilst the majority of NAG's optimization routines are found in the E04 chapter, integer programming routines are located in the H chapter.
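To see what an integer constraint means in practice, the toy sketch below minimizes an objective by exhaustively enumerating the integer points in a box. This brute-force approach is only viable for tiny problems; real integer-programming routines (such as those in NAG's H chapter) use far more sophisticated searches.

```python
from itertools import product

def integer_grid_minimize(f, lower, upper):
    """Exhaustively minimize f over the integer points x with
    lower[i] <= x[i] <= upper[i]. Illustrative only: the search
    space grows exponentially with the number of variables."""
    best_x, best_f = None, float("inf")
    ranges = [range(lo, hi + 1) for lo, hi in zip(lower, upper)]
    for x in product(*ranges):
        fx = f(x)
        if fx < best_f:
            best_x, best_f = x, fx
    return best_x, best_f

# Minimize (x1 - 2.4)^2 + (x2 - 1.6)^2 over integer points in [0, 5]^2.
# The continuous minimizer (2.4, 1.6) is infeasible; the best integer
# point is (2, 2).
x, fx = integer_grid_minimize(
    lambda x: (x[0] - 2.4) ** 2 + (x[1] - 1.6) ** 2, [0, 0], [5, 5])
print(x, fx)  # (2, 2) 0.32...
```

Note that the integer solution is not simply the continuous solution rounded component-wise, which is part of what makes these problems hard.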

A point which satisfies the constraints is called a feasible point.

The type of objective function and the types of constraint together determine which routine is best suited to the problem.

Local and Global Optimization

In practical terms most numerical techniques can only readily compute a local minimum or a local maximum. At a local minimum all neighbouring feasible points give a greater (or, for a local maximum, smaller) value of the objective function.

Unless the objective function has a special form, it is likely that a local minimum will not give rise to the smallest (or largest) possible value of the objective function throughout the feasible region. Consider, say, F(x) = x sin(x) within the region [0,4pi]. There is a local minimum near x = 1.5pi, where F(x) is approximately -1.5pi, but a smaller value is attained near x = 3.5pi, where F(x) is approximately -3.5pi.
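A crude grid scan over [0,4pi] makes the distinction concrete: searching only the first part of the region finds the shallower local minimum, while scanning the whole region finds the deeper, global one. (This is a sketch for illustration; a grid scan is not how practical global optimizers work.)

```python
import math

def f(x):
    """F(x) = x * sin(x), the example objective."""
    return x * math.sin(x)

# Sample [0, 4*pi] on a fine grid.
xs = [i * 4 * math.pi / 10000 for i in range(10001)]

# Searching only [0, 2*pi] finds the first valley (a local minimum):
local_min_x = min((x for x in xs if x <= 2 * math.pi), key=f)

# Searching the whole region finds the deeper, global minimum:
global_min_x = min(xs, key=f)

print(round(local_min_x, 2), round(f(local_min_x), 2))    # roughly 4.91, -4.81
print(round(global_min_x, 2), round(f(global_min_x), 2))  # roughly 11.09, -11.04
```

Note that the true minimizers sit slightly past 1.5pi and 3.5pi (where sin(x) = -1), because the growing factor x pulls the stationary points outward; the grid scan picks this up automatically.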

When the user is unsatisfied with a local minimum (or maximum), it is necessary to search for a global minimum or global maximum. For a general problem this is much more difficult and computationally far more expensive.