(a)  the value of a one-dimensional definite integral of the form
Some methods are specially designed for integrands of the form


(b)  the values of the one-dimensional indefinite integrals arising from (1) where the ranges of integration are interior to the interval $\left[a,b\right]$.  
(c)  the value of a multidimensional definite integral of the form
The simplest form of ${R}_{n}$ is the $n$-rectangle defined by

$$\underset{a}{\overset{b}{\int}}f\left(x\right)dx\simeq \sum _{i=1}^{N}{w}_{i}f\left({x}_{i}\right)\text{.}$$  (5) 
$$\underset{a}{\overset{b}{\int}}w\left(x\right)g\left(x\right)dx\simeq \sum _{i=1}^{N}{w}_{i}g\left({x}_{i}\right)\text{.}$$  (6) 
(a)  Single rule evaluation procedures
A fixed number of abscissae, $N$, is used. This number and the particular rule chosen uniquely determine the weights and abscissae. No estimate is made of the accuracy of the result. 
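A single-rule evaluation procedure of this kind can be sketched in Python (this is an illustration using NumPy's Gauss–Legendre nodes, not the NAG interface; the function name is hypothetical):

```python
import numpy as np

def gauss_legendre_fixed(f, a, b, n):
    """Approximate integral_a^b f(x) dx with an n-point Gauss-Legendre rule.

    The rule and N uniquely determine the weights and abscissae; no error
    estimate is produced, matching a single-rule evaluation procedure.
    """
    t, w = np.polynomial.legendre.leggauss(n)  # nodes/weights on [-1, 1]
    x = 0.5 * (b - a) * t + 0.5 * (a + b)      # affine map to [a, b]
    return 0.5 * (b - a) * np.sum(w * f(x))

# A 5-point Gauss rule integrates polynomials up to degree 9 exactly.
approx = gauss_legendre_fixed(lambda x: x**4, 0.0, 1.0, 5)
```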

(b)  Automatic procedures
The number of abscissae, $N$, within $\left[a,b\right]$ is gradually increased until consistency is achieved to within a level of accuracy (absolute or relative) you requested. There are essentially two ways of doing this; hybrid forms of these two methods are also possible:
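A crude automatic whole-interval strategy of the kind described above, doubling $N$ until two successive estimates agree, might look like this in Python (an illustrative sketch, not the NAG algorithm):

```python
import numpy as np

def automatic_gauss(f, a, b, tol=1e-8, n0=4, n_max=1024):
    """Increase the number of abscissae until two successive Gauss-Legendre
    estimates are consistent to within the requested (relative) accuracy."""
    def rule(n):
        t, w = np.polynomial.legendre.leggauss(n)
        x = 0.5 * (b - a) * t + 0.5 * (a + b)
        return 0.5 * (b - a) * np.sum(w * f(x))

    prev = rule(n0)
    n = 2 * n0
    while n <= n_max:
        cur = rule(n)
        if abs(cur - prev) <= tol * max(1.0, abs(cur)):
            return cur, abs(cur - prev)  # estimate and consistency gap
        prev, n = cur, 2 * n
    raise RuntimeError("requested accuracy not achieved")

val, gap = automatic_gauss(np.sin, 0.0, np.pi)  # exact value is 2
```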

(a)  Products of one-dimensional rules
Using a two-dimensional integral as an example, we have
A different one-dimensional rule may be used for each dimension, as appropriate to the range and any weight function present, and a different strategy may be used, as appropriate to the integrand behaviour as a function of each independent variable.
For a rule-evaluation strategy in all dimensions, the formula (8) is applied in a straightforward manner. For automatic strategies (i.e., attempting to attain a requested accuracy), there is a problem in deciding what accuracy must be requested in the inner integral(s). Reference to formula (7) shows that the presence of a limited but random error in the $y$-integration for different values of ${x}_{i}$ can produce a ‘jagged’ function of $x$, which may be difficult to integrate to the desired accuracy; for this reason products of automatic one-dimensional functions should be used with caution (see Lyness (1983)). 
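Applying formula (8) with a rule-evaluation strategy in both dimensions amounts to a tensor product of two one-dimensional rules, which can be sketched as follows (illustrative Python, not a NAG function):

```python
import numpy as np

def product_rule_2d(f, ax, bx, ay, by, nx, ny):
    """Tensor product of two one-dimensional Gauss-Legendre rules:
    sum_i sum_j wx_i * wy_j * f(x_i, y_j), as in formula (8)."""
    tx, wx = np.polynomial.legendre.leggauss(nx)
    ty, wy = np.polynomial.legendre.leggauss(ny)
    x = 0.5 * (bx - ax) * tx + 0.5 * (ax + bx)
    y = 0.5 * (by - ay) * ty + 0.5 * (ay + by)
    X, Y = np.meshgrid(x, y, indexing="ij")        # all (x_i, y_j) pairs
    W = np.outer(wx, wy) * 0.25 * (bx - ax) * (by - ay)
    return np.sum(W * f(X, Y))

# integral of x*y over the unit square is 1/4
val = product_rule_2d(lambda x, y: x * y, 0.0, 1.0, 0.0, 1.0, 8, 8)
```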

(b)  Monte Carlo methods
These are based on estimating the mean value of the integrand sampled at points chosen from an appropriate statistical distribution function. Usually a variance-reducing procedure is incorporated to combat the fundamentally slow rate of convergence of the rudimentary form of the technique. These methods can be effective by comparison with alternative methods when the integrand contains singularities or is erratic in some way, but they are of quite limited accuracy. 
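The rudimentary form of the technique, with its $N^{-1/2}$ standard error, can be sketched as follows (an illustrative Python fragment without variance reduction, not a NAG function):

```python
import numpy as np

def monte_carlo(f, a, b, n, seed=0):
    """Plain Monte Carlo: estimate the mean of f over [a, b] from uniform
    samples; the standard error decays only like n**-0.5."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(a, b, size=n)
    fx = f(x)
    mean = fx.mean()
    stderr = fx.std(ddof=1) / np.sqrt(n)        # statistical error estimate
    return (b - a) * mean, (b - a) * stderr

# integral of sqrt(x) over [0, 1] is 2/3
est, err = monte_carlo(np.sqrt, 0.0, 1.0, 100_000)
```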

(c)  Number theoretic methods
These are based on the work of Korobov and Conroy and operate by exploiting implicitly the properties of the Fourier expansion of the integrand. Special rules, constructed from so-called optimal coefficients, give a particularly uniform distribution of the points throughout $n$-dimensional space and, from their number theoretic properties, minimize the error on a prescribed class of integrals. The method can be combined with the Monte Carlo procedure. 

(d)  Sag–Szekeres method
By transformation this method seeks to induce properties into the integrand which make it accurately integrable by the trapezoidal rule. The transformation also allows effective control over the number of integrand evaluations. 
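The idea of inducing properties that make the trapezoidal rule accurate can be illustrated with a simple periodizing transformation whose derivative vanishes at both end points (a sketch in the spirit of the method, not the actual Sag–Szekeres transformation):

```python
import numpy as np

def periodized_trapezoid(f, a, b, n):
    """Trapezoidal rule after the map x(t) = a + (b-a)(t - sin(2*pi*t)/(2*pi)),
    whose derivative vanishes at t = 0 and t = 1; the transformed integrand
    then behaves like a smooth periodic function, for which the trapezoidal
    rule converges very rapidly."""
    t = np.linspace(0.0, 1.0, n + 1)
    x = a + (b - a) * (t - np.sin(2.0 * np.pi * t) / (2.0 * np.pi))
    dxdt = (b - a) * (1.0 - np.cos(2.0 * np.pi * t))   # Jacobian of the map
    g = f(x) * dxdt
    return (0.5 * g[0] + g[1:-1].sum() + 0.5 * g[-1]) / n  # step h = 1/n

# integral of 1/(1+x^2) over [0, 1] is pi/4
val = periodized_trapezoid(lambda x: 1.0 / (1.0 + x**2), 0.0, 1.0, 128)
```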

(e)  Sparse grid methods
Given a set of one-dimensional quadrature rules of increasing levels of accuracy, the sparse grid method constructs an approximation to a multidimensional integral using $d$-dimensional tensor products of the differences between rules of adjacent levels. This provides lower theoretical accuracy than the full grid approach of (a), which is nonetheless still sufficient for various classes of sufficiently smooth integrands. Furthermore, it requires substantially fewer evaluations than the full grid approach. Specifically, if a one-dimensional quadrature rule has $N\sim \mathit{O}\left({2}^{\ell}\right)$ points, the full grid will require $\mathit{O}\left({2}^{\ell d}\right)$ function evaluations, whereas the sparse grid of level $\ell$ will require $\mathit{O}\left({2}^{\ell}{d}^{\ell -1}\right)$. Hence a sparse grid approach is computationally feasible even for integrals over $d\sim \mathit{O}\left(100\right)$.
Sparse grid methods are deterministic, and may be viewed as automatic whole domain procedures if their level $\ell $ is allowed to increase. 
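The operation counts above can be made concrete with a small calculation (illustrative Python; the sparse grid figure is the standard order-of-magnitude bound, not an exact point count):

```python
def full_grid_points(level, d):
    """Points in a full tensor-product grid whose one-dimensional rule at
    `level` has 2**level + 1 points (a nested rule family)."""
    return (2**level + 1) ** d

def sparse_grid_points_bound(level, d):
    """The O(2**level * d**(level-1)) bound on the size of a level-`level`
    sparse (Smolyak) grid, up to a constant factor."""
    return 2**level * d ** max(level - 1, 0)

# At level 5 in 10 dimensions the full grid is astronomically larger:
full = full_grid_points(5, 10)         # 33**10, about 1.5e15 points
sparse = sparse_grid_points_bound(5, 10)   # 32 * 10**4 = 320000
```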

(f)  Automatic adaptive procedures
An automatic adaptive strategy in several dimensions normally involves division of the region into subregions, concentrating the divisions in those parts of the region where the integrand is worst behaved. It is difficult to arrange with any generality for variable limits in the inner integral(s). For this reason, some methods use a region where all the limits are constants; this is called a hyper-rectangle. Integrals over regions defined by variable or infinite limits may be handled by transformation to a hyper-rectangle. Integrals over regions so irregular that such a transformation is not feasible may be handled by surrounding the region by an appropriate hyper-rectangle and defining the integrand to be zero outside the desired region. Such a technique should always be used with a Monte Carlo method for integration.
The method used locally in each subregion produced by the adaptive subdivision process is usually one of three types: Monte Carlo, number theoretic or deterministic. Deterministic methods are usually the most rapidly convergent but are often expensive to use for high dimensionality and not as robust as the other techniques. 
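The bounding hyper-rectangle device can be sketched for the unit disc, paired with plain Monte Carlo sampling as recommended above, since a deterministic rule would see an artificial discontinuity at the boundary (illustrative Python, not a NAG function):

```python
import numpy as np

def integrate_over_disc(f, n, seed=0):
    """Surround the unit disc with the square [-1, 1]^2 and define the
    integrand to be zero outside the disc, then apply plain Monte Carlo."""
    rng = np.random.default_rng(seed)
    pts = rng.uniform(-1.0, 1.0, size=(n, 2))
    inside = (pts**2).sum(axis=1) <= 1.0            # indicator of the disc
    vals = np.where(inside, f(pts[:, 0], pts[:, 1]), 0.0)
    return 4.0 * vals.mean()                        # 4 = area of the square

# f = 1 recovers the disc's area, pi
area = integrate_over_disc(lambda x, y: np.ones_like(x), 200_000)
```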
(a)  Single abscissa interfaces
The algorithm will provide a single abscissa at which information is required. These are typically the simplest to use, although they may be significantly less efficient than a vectorized equivalent. Most of the algorithms in this chapter are of this type.
Examples of this include
nag_quad_1d_fin_bad (d01aj) and nag_quad_md_gauss (d01fb).

(b)  Vectorized abscissae interfaces
The algorithm will return a set of abscissae, at all of which information is required. While these are more complicated to use, they are typically more efficient than a non-vectorized equivalent. They reduce the overhead of function calls, allow the avoidance of repetition of computations common to each of the integrand evaluations, and offer greater scope for vectorization and parallelization of your code.
Examples include
nag_quad_1d_fin_gonnet_vec (d01rg), nag_quad_1d_gauss_vec (d01ua), and the functions nag_quad_1d_fin_bad_vec (d01at) and nag_quad_1d_fin_osc_vec (d01au), which are vectorized equivalents of nag_quad_1d_fin_bad (d01aj) and nag_quad_1d_fin_osc (d01ak).
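The difference between the two interface styles can be seen in a toy weighted sum (illustrative Python, not the NAG calling sequences): the single-abscissa driver calls the integrand once per point, while the vectorized driver makes one call with the whole array of abscissae.

```python
import numpy as np

def quad_scalar(f, x, w):
    """Single-abscissa style: the integrand is called once per point."""
    return sum(wi * f(xi) for xi, wi in zip(x, w))

def quad_vectorized(f, x, w):
    """Vectorized style: one call receives the whole array of abscissae, so
    per-call overhead is paid once and NumPy evaluates in bulk."""
    return float(np.dot(w, f(x)))

t, w = np.polynomial.legendre.leggauss(64)   # nodes/weights on [-1, 1]
a = quad_scalar(np.exp, t, w)                # integral of e^x over [-1, 1]
b = quad_vectorized(np.exp, t, w)            # same result, one call to f
```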

(c)  Multiple integral interfaces
These are functions which allow multiple integrals to be estimated simultaneously. As with (b) above, these are more complicated to use than single-integral functions; however, they can provide higher efficiency, particularly if several integrals require the same sub-calculations at the same abscissae. They are most efficient if integrals which are supplied together are expected to have similar behaviour over the domain, particularly when the algorithm is adaptive.
Examples include nag_quad_md_adapt_multi (d01ea) and nag_quad_1d_gen_vec_multi_rcomm (d01ra).
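The efficiency gain from shared sub-calculations can be sketched as follows (illustrative Python, not the NAG interface; the shared factor `exp(x)` is an assumed example):

```python
import numpy as np

def gauss_multi(fs, a, b, n=64):
    """Estimate several integrals at once: the abscissae, weights and any
    work common to the integrands (here the factor exp(x)) are computed
    once and reused for every integrand."""
    t, w = np.polynomial.legendre.leggauss(n)
    x = 0.5 * (b - a) * t + 0.5 * (a + b)
    common = np.exp(x)                      # sub-calculation shared by all fs
    vals = np.stack([f(x, common) for f in fs])
    return 0.5 * (b - a) * (vals @ w)

ints = gauss_multi([lambda x, e: e,         # integral of e^x over [0, 1]
                    lambda x, e: x * e],    # integral of x e^x over [0, 1]
                   0.0, 1.0)
```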

(a)  Integrand defined at a set of points
If $f\left(x\right)$ is defined numerically at four or more points, then the Gill–Miller finite difference method (nag_quad_1d_data (d01ga)) should be used. The interval of integration is taken to coincide with the range of $x$ values of the points supplied. It is in the nature of this problem that any function may be unreliable. In order to check results independently and to provide an alternative technique, you may fit the integrand with a Chebyshev series using nag_fit_1dcheb_arb (e02ad) and then use nag_fit_1dcheb_integ (e02aj) to evaluate its integral (which need not be restricted to the range of the integration points, as is the case for nag_quad_1d_data (d01ga)). A further alternative is to fit a cubic spline to the data using nag_fit_1dspline_knots (e02ba) and then to evaluate its integral using nag_fit_1dspline_integ (e02bd). 
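Integrating tabulated data can be illustrated with a self-contained composite Simpson rule over the supplied points (a simple stand-in sketch; the Gill–Miller method itself uses a different finite difference formula):

```python
import numpy as np

def simpson_from_samples(x, y):
    """Composite Simpson's rule for integrand values y at equally spaced
    points x (an odd number of points, i.e. an even number of intervals)."""
    n = len(x) - 1
    if n % 2 or len(x) != len(y):
        raise ValueError("need an odd number of equally spaced samples")
    h = (x[-1] - x[0]) / n
    return h / 3.0 * (y[0] + y[-1]
                      + 4.0 * y[1:-1:2].sum()    # odd-index points
                      + 2.0 * y[2:-1:2].sum())   # interior even-index points

# "measured" values of sin at 17 points; exact integral over [0, pi] is 2
x = np.linspace(0.0, np.pi, 17)
est = simpson_from_samples(x, np.sin(x))
```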

(b)  Integrand defined as a function
If the functional form of $f\left(x\right)$ is known, then one of the following approaches should be taken. They are arranged in order from most specific to most general; hence the first applicable procedure in the list will be the most efficient.
However, if you do not wish to make any assumptions about the integrand, the most reliable functions to use will be
nag_quad_1d_fin_bad_vec (d01at) (or nag_quad_1d_fin_bad (d01aj)), nag_quad_1d_fin_osc_vec (d01au) (or nag_quad_1d_fin_osc (d01ak)), nag_quad_1d_fin_sing (d01al), nag_quad_1d_fin_gonnet_vec (d01rg) or nag_quad_1d_gen_vec_multi_rcomm (d01ra), although these will in general be less efficient for simple integrals.

(a)  Integrand defined at a set of points
If $f\left(x\right)$ is defined numerically at four or more points, and the portion of the integral lying outside the range of the points supplied may be neglected, then the Gill–Miller finite difference method, nag_quad_1d_data (d01ga), should be used. 

(b)  Integrand defined as a function

(a)  Products of one-dimensional rules (suitable for up to about $5$ dimensions)
If $f\left({x}_{1},{x}_{2},\dots ,{x}_{n}\right)$ is known to be a sufficiently well-behaved function of each variable ${x}_{i}$, apart possibly from weight functions of the types provided, a product of Gaussian rules may be used. These are provided by
nag_quad_1d_gauss_wgen (d01bc) or nag_quad_1d_gauss_wres (d01tb)
with nag_quad_md_gauss (d01fb). Rules for finite, semi-infinite and infinite ranges are included.
For two-dimensional integrals only, unless the integrand is very badly behaved, the automatic whole-interval product procedure of nag_quad_2d_fin (d01da) may be used. The limits of the inner integral may be user-specified functions of the outer variable. Infinite limits may be handled by transformation (see One-dimensional Integrals over a Semi-infinite or Infinite Interval); end-point singularities introduced by transformation should not be troublesome, as the integrand value will not be required on the boundary of the region.
If none of these functions proves suitable and convenient, the one-dimensional functions may be used recursively. For example, the two-dimensional integral
Note that for reasons of efficiency and robustness the integration functions are not defined as recursive, and thus a different library integration function must be used for each dimension. Apart from this restriction, the following combinations are not permitted:
nag_quad_1d_fin_bad (d01aj) and nag_quad_1d_fin_sing (d01al), nag_quad_1d_fin_wtrig (d01an) and nag_quad_1d_fin_wsing (d01ap), nag_quad_1d_fin_wsing (d01ap) and nag_quad_1d_fin_wcauchy (d01aq), nag_quad_1d_fin_wtrig (d01an) and nag_quad_1d_fin_wcauchy (d01aq), nag_quad_1d_fin_wtrig (d01an) and nag_quad_1d_inf_wtrig (d01as), nag_quad_1d_inf (d01am) and nag_quad_1d_inf_wtrig (d01as), nag_quad_1d_fin_bad_vec (d01at) and nag_quad_1d_fin_osc_vec (d01au).
Otherwise the full range of one-dimensional functions is available, for finite/infinite intervals, constant/variable limits, rule evaluation/automatic strategies, etc.
The reverse communication function nag_quad_1d_gen_vec_multi_rcomm (d01ra) may be used by itself in a pseudo-recursive manner, in that it may be called to evaluate an inner integral for the integrand value of an outer integral also being calculated by nag_quad_1d_gen_vec_multi_rcomm (d01ra). 
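Recursive use of one-dimensional rules can be sketched as follows (illustrative Python with a plain Gauss rule standing in for the library functions): the inner integral becomes the outer rule's integrand, which naturally accommodates inner limits that depend on the outer variable.

```python
import numpy as np

def gauss(f, a, b, n=32):
    """n-point Gauss-Legendre approximation to integral_a^b f(x) dx."""
    t, w = np.polynomial.legendre.leggauss(n)
    x = 0.5 * (b - a) * t + 0.5 * (a + b)
    return 0.5 * (b - a) * sum(wi * f(xi) for xi, wi in zip(x, w))

def double_integral(f, ax, bx, c, d, n=32):
    """Outer rule over [ax, bx]; its integrand is itself an inner integral
    over [c(x), d(x)], so the inner limits may vary with x."""
    inner = lambda x: gauss(lambda y: f(x, y), c(x), d(x), n)
    return gauss(inner, ax, bx, n)

# area of the triangle 0 <= y <= x <= 1 is 1/2
val = double_integral(lambda x, y: 1.0, 0.0, 1.0, lambda x: 0.0, lambda x: x)
```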

(b)  Sag–Szekeres method
Two functions are based on this method.
nag_quad_md_sphere (d01fd) is particularly suitable for integrals of very large dimension, although the accuracy is generally not high. It allows integration over either the general product region (with built-in transformation to the $n$-cube) or the $n$-sphere. Although no error estimate is provided, two adjustable arguments may be varied for checking purposes or may be used to tune the algorithm to particular integrals.
nag_quad_md_sphere_bad (d01ja) is also based on the Sag–Szekeres method and integrates over the $n$-sphere. It uses improved transformations which may be varied according to the behaviour of the integrand. Although it can yield very accurate results, it can only practically be employed for dimensions not exceeding $4$. 

(c)  Number Theoretic method
Two functions are based on this method, nag_quad_md_numth (d01gc) and a vectorized equivalent nag_quad_md_numth_vec (d01gd).
Algorithms of this type carry out multidimensional integration using the Korobov–Conroy method over a product region, with built-in transformation to the $n$-cube. A stochastic modification of this method is incorporated into the functions in this library, hybridising the technique with the Monte Carlo procedure. An error estimate is provided in terms of the statistical standard error. A number of precomputed optimal coefficient rules for up to $20$ dimensions are provided; others can be computed using nag_quad_md_numth_coeff_prime (d01gy) and nag_quad_md_numth_coeff_2prime (d01gz). Like the Sag–Szekeres method, it is suitable for large dimensional integrals although the accuracy is not high.
nag_quad_md_numth (d01gc) requires a function to be provided to evaluate the value of the integrand at a single abscissa, and a function to return the upper and lower limits of integration in a given dimension.
nag_quad_md_numth_vec (d01gd) has a vectorized interface which can result in faster execution, especially on vector-processing machines. You are required to provide two functions, the first to return an array of values of the integrand at each of an array of points, and the second to evaluate the limits of integration at each of an array of points. This reduces the overhead of function calls, avoids repetition of computations common to each of the evaluations of the integral and limits of integration, and offers greater scope for vectorization of your code. 
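A rank-1 lattice rule with random shifts illustrates both ingredients above: Korobov-style equidistributed points and a statistical standard error from the stochastic modification (illustrative Python; the Fibonacci generating vector here is a textbook example, not one of the library's optimal-coefficient sets):

```python
import numpy as np

def shifted_lattice_rule(f, z, n, n_shifts=8, seed=0):
    """Rank-1 lattice rule on [0,1]^d with generating vector z: the points
    are x_i = frac(i * z / n). Averaging over random shifts gives an
    unbiased estimate with a statistical standard error."""
    rng = np.random.default_rng(seed)
    d = len(z)
    i = np.arange(n)[:, None]
    base = (i * np.asarray(z)[None, :] / n) % 1.0   # the lattice points
    estimates = []
    for _ in range(n_shifts):
        shift = rng.uniform(size=d)
        pts = (base + shift) % 1.0                  # randomly shifted copy
        estimates.append(f(pts).mean())
    estimates = np.asarray(estimates)
    stderr = estimates.std(ddof=1) / np.sqrt(n_shifts)
    return estimates.mean(), stderr

# integral of x0*x1 over [0,1]^2 is 1/4; n, z are a 2-D Fibonacci lattice
val, err = shifted_lattice_rule(lambda p: p[:, 0] * p[:, 1], z=[1, 610], n=987)
```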

(d)  A combinatorial extrapolation method
nag_quad_md_simplex (d01pa) computes a sequence of approximations and an error estimate to the integral of a function over a multidimensional simplex using a combinatorial method with extrapolation. 

(e)  Sparse Grid method
nag_quad_md_sgq_multi_vec (d01es) implements a sparse grid quadrature scheme for the integration of a vector of multidimensional integrals over the unit hypercube,
The function uses a vectorized interface, which returns a set of points at which the integrands must be evaluated in a sparse storage format for efficiency.
Other domains can readily be integrated over by using an appropriate mapping inside the provided function for evaluating the integrands. It is suitable for $d$ up to $\mathit{O}\left(100\right)$, although no upper bound on the number of dimensions is enforced. It will also evaluate one-dimensional integrals, although in this case the sparse grid used is in fact the full grid.
The function uses optional parameters, set and queried using the functions nag_quad_opt_set (d01zk) and nag_quad_opt_get (d01zl) respectively. Amongst other options, these allow the parallelization of the function to be controlled. 

(f)  Automatic functions
(nag_quad_md_adapt (d01fc) and nag_quad_md_mcarlo (d01gb))
Both functions are for integrals of the form
nag_quad_md_mcarlo (d01gb)
is an adaptive Monte Carlo function. This function is usually slow and not recommended for high-accuracy work. It is a robust function that can often be used for low-accuracy results with highly irregular integrands or when $n$ is large.
nag_quad_md_adapt (d01fc)
is an adaptive deterministic function. Convergence is fast for well-behaved integrands. Highly accurate results can often be obtained for $n$ between $2$ and $5$, using significantly fewer integrand evaluations than would be required by
nag_quad_md_mcarlo (d01gb).
The function will usually work when the integrand is mildly singular and for $n\le 10$ should be used before
nag_quad_md_mcarlo (d01gb).
If it is known in advance that the integrand is highly irregular, it is best to compare results from at least two different functions.
There are many problems for which one or both of the functions will require large amounts of computing time to obtain even moderately accurate results. The amount of computing time is controlled by the number of integrand evaluations you have allowed, and you should set this argument carefully, with reference to the time available and the accuracy desired.
nag_quad_md_adapt_multi (d01ea) extends the technique of nag_quad_md_adapt (d01fc) to integrate adaptively more than one integrand, that is to calculate the set of integrals

Is the functional form of the integrand known?  Is indefinite integration required?  nag_quad_1d_indef (d01ar)  
yes  yes  
no  no  
Do you require reverse communication?  nag_quad_1d_gen_vec_multi_rcomm (d01ra)  
yes  
no  
Are you concerned with efficiency for simple integrals?  Is the integrand smooth (polynomial-like) apart from weight function ${\left|x-\left(a+b\right)/2\right|}^{c}$ or ${\left(b-x\right)}^{c}{\left(x-a\right)}^{d}$?  nag_quad_1d_indef (d01ar), nag_quad_1d_gauss_vec (d01ua), nag_quad_1d_gauss_wres (d01tb) or nag_quad_1d_gauss_wgen (d01bc) and nag_quad_md_gauss (d01fb), or nag_quad_md_numth (d01gc)  
yes  yes  
no  no  
Is the integrand reasonably smooth and the required accuracy not too great?  nag_quad_1d_indef (d01ar), nag_quad_1d_fin_smooth (d01bd), nag_quad_md_sgq_multi_vec (d01es) or nag_quad_1d_gauss_vec (d01ua)  
yes  
no  
Are multiple integrands to be integrated simultaneously?  nag_quad_md_sgq_multi_vec (d01es) or nag_quad_1d_gen_vec_multi_rcomm (d01ra)  
yes  
no  
Has the integrand discontinuities, sharp peaks or singularities at known points other than the end points?  Split the range and begin again; or use nag_quad_1d_fin_sing (d01al) or nag_quad_1d_fin_gonnet_vec (d01rg)  
yes  
no  
Is the integrand free of singularities, sharp peaks and violent oscillations apart from weight function ${\left(b-x\right)}^{\alpha}{\left(x-a\right)}^{\beta}{\left(\mathrm{log}\left(b-x\right)\right)}^{k}{\left(\mathrm{log}\left(x-a\right)\right)}^{l}$?  nag_quad_1d_fin_wsing (d01ap)  
yes  
no  
Is the integrand free of singularities, sharp peaks and violent oscillations apart from weight function ${\left(x-c\right)}^{-1}$?  nag_quad_1d_fin_wcauchy (d01aq)  
yes  
no  
Is the integrand free of violent oscillations apart from weight function $\mathrm{cos}\left(\omega x\right)$ or $\mathrm{sin}\left(\omega x\right)$?  nag_quad_1d_fin_wtrig (d01an)  
yes  
no  
Is the integrand free of singularities?  nag_quad_1d_fin_bad (d01aj), nag_quad_1d_fin_osc (d01ak), nag_quad_1d_fin_osc_vec (d01au) or nag_quad_md_sgq_multi_vec (d01es)  
yes  
no  
Is the integrand free of discontinuities and of singularities except possibly at the end points?  nag_quad_1d_fin_well (d01ah)  
yes  
no  
nag_quad_1d_fin_bad (d01aj), nag_quad_1d_fin_bad_vec (d01at), nag_quad_1d_gen_vec_multi_rcomm (d01ra) or nag_quad_1d_fin_gonnet_vec (d01rg)  
nag_quad_1d_fin_well (d01ah), nag_quad_1d_fin_bad (d01aj), nag_quad_1d_fin_bad_vec (d01at), nag_quad_1d_gen_vec_multi_rcomm (d01ra) or nag_quad_1d_fin_gonnet_vec (d01rg)  
nag_quad_1d_data (d01ga)  
Is the functional form of the integrand known?  Are you concerned with efficiency for simple integrands?  Is the integrand smooth (polynomial-like) with no exceptions?  nag_quad_1d_gauss_vec (d01ua), nag_quad_1d_fin_smooth (d01bd), nag_quad_1d_indef (d01ar) or nag_quad_md_sgq_multi_vec (d01es) with transformation. See One-dimensional Integrals over a Semi-infinite or Infinite Interval (b)(ii).  
yes  yes  yes  
no  no  no  
Is the integrand smooth (polynomial-like) apart from weight function ${e}^{-\beta x}$ (semi-infinite range) or ${e}^{-\beta {\left(x-a\right)}^{2}}$ (infinite range), or is the integrand polynomial-like in $\frac{1}{x+b}$ (semi-infinite range)?  nag_quad_1d_gauss_vec (d01ua), nag_quad_1d_gauss_wres (d01tb) and nag_quad_md_gauss (d01fb) or nag_quad_1d_gauss_wgen (d01bc) and nag_quad_md_gauss (d01fb)  
yes  
no  
Has the integrand discontinuities, sharp peaks or singularities at known points other than a finite limit?  Split the range; begin again using the finite- or infinite-range trees  
yes  
no  
Does the integrand oscillate over the entire range?  Does the integrand decay rapidly towards an infinite limit?  Use nag_quad_1d_inf (d01am); or set cutoff and use finite range tree  
yes  yes  
no  no  
Is the integrand free of violent oscillations apart from weight function $\mathrm{cos}\left(\omega x\right)$ or $\mathrm{sin}\left(\omega x\right)$ (semi-infinite range)?  nag_quad_1d_inf_wtrig (d01as)  
yes  
no  
Use finiterange integration between the zeros and extrapolate (see nag_sum_accelerate (c06ba))  
nag_quad_1d_inf (d01am)  
nag_quad_1d_inf (d01am)  
nag_quad_1d_data (d01ga) (integrates over the range of the points supplied)  