This chapter is concerned with the interpolation of a function of one or more variables. When provided with the value of the function (and possibly one or more of its lowest-order derivatives) at each of a number of values of the variable(s), the NAG Library methods provide either an interpolating function or an interpolated value. For some of the interpolating functions, there are supporting NAG Library methods to evaluate, differentiate or integrate them.
In motivation and in some of its numerical processes, this chapter has much in common with the E02 class (Curve and Surface Fitting). For this reason, we shall adopt the same terminology and refer to the dependent variable and independent variable(s) instead of the function and its variable(s). Where there is only one independent variable, we shall denote it by x and the dependent variable by y. Thus, in the basic problem considered in this chapter, we are given a set of m distinct values x_1, x_2, ..., x_m of x and a corresponding set of values y_1, y_2, ..., y_m of y, and we shall describe the problem as being one of interpolating the data points (x_r, y_r), rather than interpolating a function. In modern usage, however, interpolation can have either of two rather different meanings, both relevant to methods in this chapter. They are
(a) the determination of a function of x which takes the value y_r at x = x_r, for r = 1, 2, ..., m (an interpolating function or interpolant);
(b) the determination of the value (interpolated value or interpolate) of an interpolating function at any given value, say x̂, of x within the range of the x_r (so as to estimate the value at x̂ of the function underlying the data).
The latter is the older meaning, associated particularly with the use of mathematical tables. The term ‘function underlying the data’, like the other terminology described above, is used so as to cover situations additional to those in which the data points have been computed from a known function, as with a mathematical table. In some contexts, the function may be unknown, perhaps representing the dependency of one physical variable on another, say temperature upon time.
Whether the underlying function is known or unknown, the object of interpolation will usually be to approximate it to acceptable accuracy by a function which is easy to evaluate anywhere in some range of interest. Polynomials, rational functions (ratios of two polynomials) and piecewise polynomials, such as cubic splines (see the E02 class Chapter Introduction for definitions of terms in the latter case), being easy to evaluate and also capable of approximating a wide variety of functions, are the types of function mostly used in this chapter as interpolating functions. An interpolating polynomial is taken to have degree m − 1 when there are m data points, and so it is unique. It is called the Lagrange interpolating polynomial. The rational function, in the special form used, is also unique. An interpolating spline, on the other hand, depends on the choice made for the knots.
One way of achieving the objective in (b) above is, of course, through (a), but there are also methods which do not involve the explicit computation of the interpolating function. Everett's formula and Aitken's successive linear interpolation (see Dahlquist and Björck (1974)) provide two such methods. Both are used in this chapter and determine a value of the Lagrange interpolating polynomial.
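Aitken's successive linear interpolation can be sketched in a few lines. The following is an illustrative implementation (not a NAG Library method): it evaluates the unique Lagrange interpolating polynomial at a single point by repeatedly combining pairs of lower-degree interpolated values, without ever forming the polynomial's coefficients.

```python
# Aitken/Neville successive linear interpolation: an illustrative sketch,
# not the NAG routine. Returns the value at x of the unique Lagrange
# interpolating polynomial through the data points (xs[r], ys[r]).
def aitken(xs, ys, x):
    p = list(ys)               # p[i] starts as the degree-0 interpolant y_i
    n = len(xs)
    for k in range(1, n):      # k = degree of the interpolants in this pass
        for i in range(n - k):
            # Linear interpolation between two overlapping lower-degree values
            p[i] = ((x - xs[i + k]) * p[i] + (xs[i] - x) * p[i + 1]) \
                   / (xs[i] - xs[i + k])
    return p[0]

# Example: three points taken from y = x^2; the quadratic interpolant
# reproduces the function exactly.
print(aitken([0.0, 1.0, 2.0], [0.0, 1.0, 4.0], 1.5))   # 2.25
```

Each pass raises the degree of the stored interpolants by one, so the scheme needs only the data values and the evaluation point, and no explicit polynomial coefficients.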
It is important to appreciate, however, that the Lagrange interpolating polynomial often exhibits unwanted fluctuations between the data points. These tend to occur particularly towards the ends of the data range, and to get larger as the number of data points increases. In severe cases, such as with several tens of equally spaced values of x, the polynomial can take on values several orders of magnitude larger than the data values. (Closer spacing near the ends of the range tends to improve the situation, and wider spacing tends to make it worse.) Clearly, therefore, the Lagrange polynomial often gives a very poor approximation to the function underlying the data. On the other hand, it can be perfectly satisfactory when its use is restricted to providing interpolated values away from the ends of the data range from a reasonably small number of data values.
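These fluctuations are easy to demonstrate with the classical Runge example, 1/(1 + 25x²). The sketch below (using NumPy's polynomial fit rather than any NAG method; the function choice and point count are illustrative) interpolates the function at equally spaced points and measures how far the interpolant strays between them.

```python
import numpy as np

# Illustration of unwanted fluctuations in the Lagrange interpolating
# polynomial on equally spaced points (the classical Runge example).
def runge(x):
    return 1.0 / (1.0 + 25.0 * x * x)

m = 15                                       # number of data points
xs = np.linspace(-1.0, 1.0, m)
coeffs = np.polyfit(xs, runge(xs), m - 1)    # unique degree m-1 interpolant

xx = np.linspace(-1.0, 1.0, 1001)
err = np.max(np.abs(np.polyval(coeffs, xx) - runge(xx)))
print(err)   # much larger than any data value, with the worst error near +-1
```

Even though the polynomial passes through every data point exactly, its maximum deviation between the points exceeds the largest data value (which is 1), and the deviation grows rapidly as m increases.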
In contrast, a cubic spline which interpolates a large number of data points can often be used satisfactorily over the whole of the data range. Unwanted fluctuations can still arise but much less frequently and much less severely than with polynomials. Rational functions, when appropriate, would also be used over the whole data range. The main danger with these functions is that their polynomial denominators may take zero values within that range. Unwanted fluctuations are avoided altogether by a method using piecewise cubic polynomials having only first derivative continuity. It is designed especially for monotonic data, but for other data still provides an interpolant which increases, or decreases, over the same intervals as the data.
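The contrast between an ordinary cubic spline and a monotonicity-preserving piecewise cubic can be seen on step-like monotonic data. The sketch below uses SciPy's CubicSpline and PchipInterpolator as stand-ins (they are not the NAG routines, and the data are invented for illustration): the spline overshoots the data range near the jump, while the monotone piecewise cubic, which has only first-derivative continuity, does not.

```python
import numpy as np
from scipy.interpolate import CubicSpline, PchipInterpolator

# Monotonic (non-decreasing) data with a sharp change.
xs = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
ys = np.array([0.0, 0.0, 0.0, 1.0, 1.0, 1.0])

spline = CubicSpline(xs, ys)        # C2 cubic spline: overshoots near the jump
pchip = PchipInterpolator(xs, ys)   # monotone piecewise cubic: C1 only

xx = np.linspace(0.0, 5.0, 501)
print(float(spline(xx).min()), float(spline(xx).max()))  # dips below 0 / above 1
print(float(pchip(xx).min()), float(pchip(xx).max()))    # stays within [0, 1]
```

The price of the monotone interpolant is reduced smoothness (continuous first, but not second, derivatives), which matches the trade-off described above.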
The concept of interpolation can be generalized in a number of ways. Firstly, at each x_r, the interpolating function may be required to take on not only a given value but also given values for all its derivatives up to some specified order (which can vary with r). This is the Hermite–Birkhoff interpolation problem. Secondly, we may be required to estimate the value of the underlying function at a value of x outside the range of the data. This is the process of extrapolation. In general, it is a good deal less accurate than interpolation and is to be avoided whenever possible.
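A small sketch of the Hermite-type generalization, using SciPy's BPoly.from_derivatives rather than a NAG method (the function and points are illustrative): the interpolant is made to match both the value and the first derivative of sin(x) at two points, but only the value at a third, so the number of matched derivatives varies from point to point.

```python
import numpy as np
from scipy.interpolate import BPoly

xs = [0.0, 1.0, 2.0]
# At each x_r, a list [value, first derivative, ...]; the lists may have
# different lengths, so the matched order can vary with r.
data = [[np.sin(0.0), np.cos(0.0)],
        [np.sin(1.0)],                  # value only at this point
        [np.sin(2.0), np.cos(2.0)]]
p = BPoly.from_derivatives(xs, data)    # piecewise-polynomial interpolant

print(float(p(0.5)), float(np.sin(0.5)))   # close agreement between the knots
```

Because derivative information is supplied, the interpolant tracks the underlying function more closely than value-only interpolation on the same points would.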
Interpolation can also be extended to the case of two or more independent variables. If the data values are given at the intersections of a regular two-dimensional mesh, bicubic splines (see the E02 class Chapter Introduction) are very suitable and usually very effective for the problem. For other cases, perhaps where the data values are quite arbitrarily scattered, polynomials and splines are not at all appropriate and special forms of interpolating function have to be employed. Many such forms have been devised and two of the most successful are used in methods in this chapter. Both have continuity in first, but not higher, derivatives.
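To illustrate the scattered-data case, the sketch below uses SciPy's griddata (not the NAG methods; the surface and sample sites are invented). Its two-dimensional 'cubic' option is the Clough–Tocher interpolant, which, like the scattered-data interpolants described above, is continuous in first but not higher derivatives.

```python
import numpy as np
from scipy.interpolate import griddata

# Recover a smooth surface from arbitrarily scattered (x, y, f) values.
rng = np.random.default_rng(0)
pts = rng.uniform(-1.0, 1.0, size=(200, 2))      # scattered (x, y) sites
vals = np.cos(pts[:, 0]) * np.sin(pts[:, 1])     # data values at the sites

# Interpolate at a point well inside the convex hull of the data;
# 'cubic' in 2-D is the C1 Clough-Tocher piecewise-cubic interpolant.
est = griddata(pts, vals, [(0.2, 0.3)], method='cubic')[0]
print(float(est), float(np.cos(0.2) * np.sin(0.3)))
```

Points outside the convex hull of the data sites cannot be interpolated this way; griddata returns NaN there, which reflects the general warning above about extrapolation.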
Dahlquist G and Björck Å (1974) Numerical Methods. Prentice–Hall.
Fröberg C E (1970) Introduction to Numerical Analysis. Addison–Wesley.