
NAG Library Function Document

nag_1d_cheb_interp (e01aec)

 Contents

    1  Purpose
    2  Specification
    3  Description
    4  References
    5  Arguments
    6  Error Indicators and Warnings
    7  Accuracy
    8  Parallelism and Performance
    9  Further Comments
    10  Example

1  Purpose

nag_1d_cheb_interp (e01aec) constructs the Chebyshev series representation of a polynomial interpolant to a set of data which may contain derivative values.

2  Specification

#include <nag.h>
#include <nage01.h>
void  nag_1d_cheb_interp (Integer m, double xmin, double xmax, const double x[], const double y[], const Integer p[], Integer itmin, Integer itmax, double a[], double perf[], Integer *num_iter, NagError *fail)

3  Description

Let $m$ distinct values $x_i$ of an independent variable $x$ be given, with $x_{\min} \le x_i \le x_{\max}$, for $i = 1, 2, \ldots, m$. For each value $x_i$, suppose that the value $y_i$ of the dependent variable $y$ together with the first $p_i$ derivatives of $y$ with respect to $x$ are given. Each $p_i$ must therefore be a non-negative integer, with the total number of interpolating conditions, $n$, equal to $m + \sum_{i=1}^{m} p_i$.
nag_1d_cheb_interp (e01aec) calculates the unique polynomial $q(x)$ of degree $n-1$ (or less) which is such that $q^{(k)}(x_i) = y_i^{(k)}$, for $i = 1, 2, \ldots, m$ and $k = 0, 1, \ldots, p_i$. Here $q^{(0)}(x_i)$ means $q(x_i)$. This polynomial is represented in Chebyshev series form in the normalized variable $\bar{x}$, as follows:
$$q(x) = \tfrac{1}{2} a_0 T_0(\bar{x}) + a_1 T_1(\bar{x}) + \cdots + a_{n-1} T_{n-1}(\bar{x}),$$
where
$$\bar{x} = \frac{2x - x_{\min} - x_{\max}}{x_{\max} - x_{\min}},$$
so that $-1 \le \bar{x} \le 1$ for $x$ in the interval $x_{\min}$ to $x_{\max}$, and where $T_i(\bar{x})$ is the Chebyshev polynomial of the first kind of degree $i$ with argument $\bar{x}$.
(The polynomial interpolant can subsequently be evaluated for any value of $x$ in the given range by using nag_1d_cheb_eval2 (e02akc). Chebyshev series representations of the derivative(s) and integral(s) of $q(x)$ may be obtained by (repeated) use of nag_1d_cheb_deriv (e02ahc) and nag_1d_cheb_intg (e02ajc).)
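As an aside, the following minimal sketch shows what this representation means in code: it maps $x$ to $\bar{x}$ and evaluates the series by the Clenshaw recurrence. It is for illustration only (the name cheb_eval is not part of the library); in practice nag_1d_cheb_eval2 (e02akc), mentioned above, should be used.

/* Illustration only: evaluate q(x) from Chebyshev coefficients a[0..n-1]
   on [xmin, xmax] using the Clenshaw recurrence (assumes n >= 1). */
#include <stddef.h>

static double cheb_eval(const double a[], size_t n,
                        double xmin, double xmax, double x)
{
    /* Map x to the normalized variable xbar in [-1, 1]. */
    double xbar = (2.0 * x - xmin - xmax) / (xmax - xmin);
    double b1 = 0.0, b2 = 0.0;

    if (n == 0)
        return 0.0;

    /* Clenshaw recurrence for the series with the first coefficient halved. */
    for (size_t i = n; i-- > 0; )
    {
        double b0 = 2.0 * xbar * b1 - b2 + a[i];
        b2 = b1;
        b1 = b0;
    }
    return b1 - xbar * b2 - 0.5 * a[0];
}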
The method used consists first of constructing a divided-difference table from the normalized $\bar{x}$ values and the given values of $y$ and its derivatives with respect to $\bar{x}$. The Newton form of $q(x)$ is then obtained from this table, as described in Huddleston (1974) and Krogh (1970), with the modification described in Section 9.2. The Newton form of the polynomial is then converted to Chebyshev series form as described in Section 9.3.
Since the errors incurred by these stages can be considerable, a form of iterative refinement is used to improve the solution. This refinement is particularly useful when derivatives of rather high order are given in the data. In reasonable examples, the refinement will usually terminate with a certain accuracy criterion satisfied by the polynomial (see Section 7). In more difficult examples, the criterion may not be satisfied and refinement will continue until the maximum number of iterations (as specified by the input argument itmax) is reached.
In extreme examples, the iterative process may diverge (even though the accuracy criterion is satisfied): if a certain divergence criterion is satisfied, the process terminates at once. In all cases the function returns the ‘best’ polynomial achieved before termination. For the definition of ‘best’ and details of iterative refinement and termination criteria, see Section 9.4.

4  References

Huddleston R E (1974) CDC 6600 routines for the interpolation of data and of data with derivatives SLL-74-0214 Sandia Laboratories (Reprint)
Krogh F T (1970) Efficient algorithms for polynomial interpolation and numerical differentiation Math. Comput. 24 185–190

5  Arguments

1:     m – Integer                                         Input
On entry: m, the number of given values of the independent variable x.
Constraint: m ≥ 1.
2:     xmin – double                                       Input
3:     xmax – double                                       Input
On entry: the lower and upper end points, respectively, of the interval [xmin, xmax]. If they are not determined by your problem, it is recommended that they be set respectively to the smallest and largest values among the $x_i$.
Constraint: xmin < xmax.
4:     x[m] – const double                                 Input
On entry: x[i-1] must be set to the value of $x_i$, for i = 1, 2, …, m. The x[i-1] need not be ordered.
Constraint: xmin ≤ x[i-1] ≤ xmax, and the x[i-1] must be distinct.
5:     y[dim] – const double                               Input
Note: the dimension, dim, of the array y must be at least m + p[0] + p[1] + … + p[m-1].
On entry: the given values of the dependent variable, and derivatives, as follows:
The first $p_1 + 1$ elements contain $y_1, y_1^{(1)}, \ldots, y_1^{(p_1)}$ in that order.
The next $p_2 + 1$ elements contain $y_2, y_2^{(1)}, \ldots, y_2^{(p_2)}$ in that order.
The last $p_m + 1$ elements contain $y_m, y_m^{(1)}, \ldots, y_m^{(p_m)}$ in that order.
6:     p[m] – const Integer                                Input
On entry: p[i-1] must be set to $p_i$, the order of the highest-order derivative whose value is given at $x_i$, for i = 1, 2, …, m. If only the value of $y$ is given for some $x_i$, then the corresponding value of p[i-1] must be zero.
Constraint: p[i-1] ≥ 0, for i = 1, 2, …, m.
7:     itmin – Integer                                     Input
8:     itmax – Integer                                     Input
On entry: respectively the minimum and maximum number of iterations to be performed by the function (for full details see Section 9.4). Setting itmin and/or itmax negative or zero invokes the default value of 2 and/or 10, respectively.
The default values will be satisfactory for most problems, but occasionally significant improvement will result from using higher values.
Suggested value: itmin = 0 and itmax = 0.
9:     a[dim] – double                                     Output
Note: the dimension, dim, of the array a must be at least m + p[0] + p[1] + … + p[m-1].
On exit: a[i] contains the coefficient $a_i$ in the Chebyshev series representation of $q(x)$, for i = 0, 1, …, n-1.
10:   perf[dim] – double                                   Output
Note: the dimension, dim, of the array perf must be at least ipmax + m + p[0] + p[1] + … + p[m-1] + 1, where ipmax denotes the largest element of p (that is, $\max_i p_i$).
On exit: perf[k], for k = 0, 1, …, ipmax, contains the ratio of $P_k$, the performance index relating to the $k$th derivative of the $q(x)$ finally provided, to 8 times the machine precision.
perf[ipmax + j], for j = 1, 2, …, n, contains the $j$th residual, i.e., the value of $y_i^{(k)} - q^{(k)}(x_i)$, where $i$ and $k$ are the appropriate values corresponding to the $j$th element in the array y (see the description of y above).
This information is also output if fail.code = NE_ITER_FAIL or NE_NOT_ACC.
11:   num_iter – Integer *                                 Output
On exit: num_iter contains the number of iterations actually performed in deriving $q(x)$.
This information is also output if fail.code = NE_ITER_FAIL or NE_NOT_ACC.
12:   fail – NagError *                                    Input/Output
The NAG error argument (see Section 2.7 in How to Use the NAG Library and its Documentation).

6  Error Indicators and Warnings

NE_ALLOC_FAIL
Dynamic memory allocation failed.
See Section 2.3.1.2 in How to Use the NAG Library and its Documentation for further information.
NE_BAD_PARAM
On entry, argument ⟨value⟩ had an illegal value.
NE_INT
On entry, m=⟨value⟩.
Constraint: m ≥ 1.
NE_INT_ARRAY
On entry, p[⟨value⟩]=⟨value⟩.
Constraint: p[i-1] ≥ 0, for i = 1, 2, …, m.
NE_INTERNAL_ERROR
An internal error has occurred in this function. Check the function call and any array sizes. If the call is correct then please contact NAG for assistance.
An unexpected error has been triggered by this function. Please contact NAG.
See Section 2.7.6 in How to Use the NAG Library and its Documentation for further information.
NE_ITER_FAIL
Iteration is divergent. Problem is ill-conditioned.
NE_NO_LICENCE
Your licence key may have expired or may not have been installed correctly.
See Section 2.7.5 in How to Use the NAG Library and its Documentation for further information.
NE_NOT_ACC
Not all performance indices are small enough. Try increasing itmax: itmax=⟨value⟩.
NE_REAL_2
On entry, xmin=⟨value⟩ and xmax=⟨value⟩.
Constraint: xmin<xmax.
NE_REAL_ARRAY
On entry, I=⟨value⟩, J=⟨value⟩ and x[I-1]=⟨value⟩.
Constraint: x[I-1] ≠ x[J-1].
On entry, I=⟨value⟩, x[I-1]=⟨value⟩, xmin=⟨value⟩ and xmax=⟨value⟩.
Constraint: xmin ≤ x[I-1] ≤ xmax.

7  Accuracy

A complete error analysis is not currently available, but the method gives good results for reasonable problems.
It is important to realise that for some sets of data, the polynomial interpolation problem is ill-conditioned. That is, a small perturbation in the data may induce large changes in the polynomial, even in exact arithmetic. Though by no means the worst example, interpolation by a single polynomial to a large number of function values given at points equally spaced across the range is notoriously ill-conditioned and the polynomial interpolating such a dataset is prone to exhibit enormous oscillations between the data points, especially near the ends of the range. These will be reflected in the Chebyshev coefficients being large compared with the given function values. A more familiar example of ill-conditioning occurs in the solution of certain systems of linear algebraic equations, in which a small change in the elements of the matrix and/or in the components of the right-hand side vector induces a relatively large change in the solution vector. The best that can be achieved in these cases is to make the residual vector small in some sense. If this is possible, the computed solution is exact for a slightly perturbed set of data. Similar considerations apply to the interpolation problem.
The residuals $y_i^{(k)} - q^{(k)}(x_i)$ are available for inspection. To assess whether these are reasonable, however, it is necessary to relate them to the largest function and derivative values taken by $q(x)$ over the interval $[x_{\min}, x_{\max}]$. The following performance indices aim to do this. Let the $k$th derivative of $q$ with respect to the normalized variable $\bar{x}$ be given by the Chebyshev series
$$\tfrac{1}{2} a_0^{(k)} T_0(\bar{x}) + a_1^{(k)} T_1(\bar{x}) + \cdots + a_{n-1-k}^{(k)} T_{n-1-k}(\bar{x}).$$
Let $A_k$ denote the sum of the moduli of these coefficients (this is an upper bound on the $k$th derivative in the interval and is taken as a measure of the maximum size of this derivative), and define
$$S_k = \max_{i \ge k} A_i.$$
Then if the root-mean-square value of the residuals of $q^{(k)}$, scaled so as to relate to the normalized variable $\bar{x}$, is denoted by $r_k$, the performance indices are defined by
$$P_k = r_k / S_k, \quad \text{for } k = 0, 1, \ldots, \max_i p_i.$$
It is expected that, in reasonable cases, they will all be less than (say) 8 times the machine precision (this is the accuracy criterion mentioned in Section 3), and in many cases will be of the order of machine precision or less.
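To make these definitions concrete, the following sketch computes the $P_k$ from derivative coefficients and residuals. It only restates the formulae above: the arrays ak (holding the coefficients $a_i^{(k)}$ for each derivative order) and rms_resid (holding the $r_k$) are hypothetical inputs, not part of the library interface, and the maximum defining $S_k$ is taken over the derivative orders 0, …, kmax considered.

#include <math.h>

/* Illustration of P_k = r_k / S_k as defined above.  ak[k][i] is assumed to
   hold the Chebyshev coefficient a_i^(k) of the kth derivative
   (i = 0,...,n-1-k) and rms_resid[k] the rms residual r_k; both are
   hypothetical inputs, not library output. */
static void performance_indices(double **ak, const double *rms_resid,
                                int n, int kmax, double *P)
{
    double Smax = 0.0;
    int    i, k;

    /* Working downwards in k makes S_k = max over orders i >= k of A_i
       a running maximum. */
    for (k = kmax; k >= 0; k--)
    {
        double A = 0.0;                           /* A_k: sum of |a_i^(k)| */
        for (i = 0; i < n - k; i++)
            A += fabs(ak[k][i]);
        if (A > Smax)
            Smax = A;                             /* S_k */
        P[k] = (Smax > 0.0) ? rms_resid[k] / Smax : 0.0;
    }
}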

8  Parallelism and Performance

nag_1d_cheb_interp (e01aec) is not threaded in any implementation.

9  Further Comments

9.1  Timing

Computation time is approximately proportional to $it \times n^3$, where $it$ is the number of iterations actually used.

9.2  Divided-difference Strategy

In constructing each new coefficient in the Newton form of the polynomial, a new $x_i$ must be brought into the computation. The $x_i$ chosen is that which yields the smallest new coefficient. This strategy increases the stability of the divided-difference technique, sometimes quite markedly, by reducing errors due to cancellation.
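As an illustration of this selection rule, the following sketch applies it in the simpler case of function values only (no derivative data); the function newton_min_coeff is hypothetical and not part of the library. At each step the candidate coefficient for each remaining point is computed from the current partial Newton interpolant, and the point giving the smallest coefficient in modulus is brought in next.

#include <math.h>
#include <stdlib.h>

/* Illustration only (function values, distinct nodes).  On exit, order[k]
   is the index of the kth point chosen and c[k] the corresponding Newton
   coefficient, so that
   p(x) = c[0] + c[1](x - x[order[0]]) + c[2](x - x[order[0]])(x - x[order[1]]) + ... */
static void newton_min_coeff(const double x[], const double y[], int n,
                             int order[], double c[])
{
    int *used = calloc((size_t) n, sizeof *used);

    for (int k = 0; k < n; k++)
    {
        int    best = -1;
        double best_coeff = 0.0;

        for (int j = 0; j < n; j++)
        {
            if (used[j]) continue;

            /* Evaluate the current partial interpolant and the Newton basis
               product at the candidate point x[j]. */
            double p = 0.0, prod = 1.0;
            for (int t = 0; t < k; t++)
            {
                p += c[t] * prod;
                prod *= x[j] - x[order[t]];
            }
            double coeff = (y[j] - p) / prod;   /* candidate new coefficient */

            if (best < 0 || fabs(coeff) < fabs(best_coeff))
            {
                best = j;
                best_coeff = coeff;
            }
        }
        order[k] = best;
        c[k] = best_coeff;
        used[best] = 1;
    }
    free(used);
}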

9.3  Conversion to Chebyshev Form

Conversion from the Newton form to Chebyshev series form is effected by evaluating the former at the $n$ values of $\bar{x}$ at which $T_{n-1}(\bar{x})$ takes the value $\pm 1$, and then interpolating these $n$ function values by a call of nag_1d_cheb_interp_fit (e02afc), which provides the Chebyshev series representation of the polynomial with very small additional relative error.
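For reference, these $n$ evaluation points are the extrema of $T_{n-1}$ on $[-1, 1]$ (assuming $n \ge 2$):
$$\bar{x}_j = \cos\left(\frac{j\pi}{n-1}\right), \quad j = 0, 1, \ldots, n-1,$$
at which $T_{n-1}(\bar{x}_j) = (-1)^j$.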

9.4  Iterative Refinement

The iterative refinement process is performed as follows.
Firstly, an initial approximation, $q_1(x)$ say, is found by the technique described in Section 3. The $r$th step of the refinement process then consists of evaluating the residuals of the $r$th approximation $q_r(x)$, and constructing an interpolant, $dq_r(x)$, to these residuals. The next approximation $q_{r+1}(x)$ to the interpolating polynomial is then obtained as
$$q_{r+1}(x) = q_r(x) + dq_r(x).$$
This completes the description of the $r$th step.
The iterative process is terminated according to the following criteria. When a polynomial is found whose performance indices (as defined in Section 7) are all less than 8 times the machine precision, the process terminates after itmin further iterations (or after a total of itmax iterations if that occurs earlier). This will occur in most reasonable problems. The extra iterations are to allow for the possibility of further improvement. If no such polynomial is found, the process terminates after a total of itmax iterations. Both these criteria are over-ridden, however, in two special cases. Firstly, if for some value of $r$ the sum of the moduli of the Chebyshev coefficients of $dq_r(x)$ is greater than that of $q_r(x)$, it is concluded that the process is diverging, and it is terminated at once ($q_{r+1}(x)$ is not computed).
Secondly, if at any stage the performance indices are all computed as zero, again the process is terminated at once.
As the iterations proceed, a record is kept of the best polynomial. Subsequently, at the end of each iteration, the new polynomial replaces the current best polynomial if it satisfies two conditions (otherwise the best polynomial remains unchanged). The first condition is that at least one of its root-mean-square residual values, $r_k$ (see Section 7), is smaller than the corresponding value for the current best polynomial. The second condition takes two different forms according to whether or not the performance indices (see Section 7) of the current best polynomial are all less than 8 times the machine precision. If they are, then the largest performance index of the new polynomial is required to be less than that of the current best polynomial. If they are not, the number of indices which are less than 8 times the machine precision must not be smaller than for the current best polynomial. When the iterative process is terminated, it is the polynomial then recorded as best which is returned to you as $q(x)$.
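The test for replacing the current best polynomial can be summarized by the following sketch. It simply restates the two conditions above in code; the function replaces_best and its arguments are hypothetical and not part of the library interface (eps8 stands for 8 times the machine precision).

#include <stdbool.h>

/* new_r/best_r hold the rms residuals r_k and new_p/best_p the performance
   indices P_k, for k = 0,...,kmax, of the new and current best polynomials. */
static bool replaces_best(const double new_r[], const double new_p[],
                          const double best_r[], const double best_p[],
                          int kmax, double eps8)
{
    /* Condition 1: at least one rms residual improves. */
    bool improves = false;
    for (int k = 0; k <= kmax; k++)
        if (new_r[k] < best_r[k]) { improves = true; break; }
    if (!improves)
        return false;

    /* Does the current best already satisfy the accuracy criterion? */
    bool best_ok = true;
    int  best_small = 0, new_small = 0;
    double best_max = 0.0, new_max = 0.0;
    for (int k = 0; k <= kmax; k++)
    {
        if (best_p[k] >= eps8) best_ok = false;
        if (best_p[k] < eps8)  best_small++;
        if (new_p[k] < eps8)   new_small++;
        if (best_p[k] > best_max) best_max = best_p[k];
        if (new_p[k] > new_max)   new_max = new_p[k];
    }

    /* Condition 2, in the two forms described above. */
    return best_ok ? (new_max < best_max) : (new_small >= best_small);
}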

10  Example

This example constructs an interpolant $q(x)$ to the following data:
$$m = 4, \quad x_{\min} = 2, \quad x_{\max} = 6;$$
$$x_1 = 2, \; p_1 = 0, \; y_1 = 1; \qquad x_2 = 4, \; p_2 = 1, \; y_2 = 2, \; y_2^{(1)} = -1;$$
$$x_3 = 5, \; p_3 = 0, \; y_3 = 1; \qquad x_4 = 6, \; p_4 = 2, \; y_4 = 2, \; y_4^{(1)} = 4, \; y_4^{(2)} = -2.$$
The coefficients in the Chebyshev series representation of qx are printed, and also the residuals corresponding to each of the given function and derivative values.
This program is written in a generalized form which can read any number of data-sets.
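For orientation, a minimal sketch of how this dataset might be passed to nag_1d_cheb_interp (e01aec) is shown below; it is not the distributed example program (e01aece.c, listed in Section 10.1), and it omits the generalized data-reading and the printing of residuals.

#include <stdio.h>
#include <nag.h>
#include <nage01.h>

int main(void)
{
    /* Data from the example above. */
    Integer  m = 4;
    double   xmin = 2.0, xmax = 6.0;
    double   x[4] = { 2.0, 4.0, 5.0, 6.0 };
    Integer  p[4] = { 0, 1, 0, 2 };
    /* y holds, for each x_i, the value followed by its p_i derivatives:
       y_1; y_2, y_2^(1); y_3; y_4, y_4^(1), y_4^(2). */
    double   y[7] = { 1.0, 2.0, -1.0, 1.0, 2.0, 4.0, -2.0 };

    Integer  n = 7;          /* m + p[0] + ... + p[3] interpolating conditions */
    double   a[7];           /* Chebyshev coefficients a_0, ..., a_{n-1} */
    double   perf[10];       /* ipmax + n + 1 elements, with ipmax = 2 */
    Integer  num_iter, i;
    NagError fail;

    INIT_FAIL(fail);
    /* itmin = itmax = 0 selects the default iteration limits. */
    nag_1d_cheb_interp(m, xmin, xmax, x, y, p, 0, 0, a, perf, &num_iter, &fail);

    if (fail.code != NE_NOERROR)
        printf("Error from nag_1d_cheb_interp (e01aec): %s\n", fail.message);
    else
        for (i = 0; i < n; i++)
            printf("a[%" NAG_IFMT "] = %13.4e\n", i, a[i]);

    return 0;
}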

10.1  Program Text

Program Text (e01aece.c)

10.2  Program Data

Program Data (e01aece.d)

10.3  Program Results

Program Results (e01aece.r)



© The Numerical Algorithms Group Ltd, Oxford, UK. 2016