g02fc calculates the Durbin–Watson statistic for a set of residuals, and the upper and lower bounds for its significance.

# Syntax

C#
```
public static void g02fc(
    int n,
    int ip,
    double[] res,
    out double d,
    out double pdl,
    out double pdu,
    out int ifail
)
```
Visual Basic
```
Public Shared Sub g02fc ( _
    n As Integer, _
    ip As Integer, _
    res As Double(), _
    <OutAttribute> ByRef d As Double, _
    <OutAttribute> ByRef pdl As Double, _
    <OutAttribute> ByRef pdu As Double, _
    <OutAttribute> ByRef ifail As Integer _
)
```
Visual C++
```
public:
static void g02fc(
    int n,
    int ip,
    array<double>^ res,
    [OutAttribute] double% d,
    [OutAttribute] double% pdl,
    [OutAttribute] double% pdu,
    [OutAttribute] int% ifail
)
```
F#
```
static member g02fc :
    n : int *
    ip : int *
    res : float[] *
    d : float byref *
    pdl : float byref *
    pdu : float byref *
    ifail : int byref -> unit
```

#### Parameters

n
Type: System.Int32
On entry: $n$, the number of residuals.
Constraint: ${\mathbf{n}}>{\mathbf{ip}}$.
ip
Type: System.Int32
On entry: $p$, the number of independent variables in the regression model, including the mean.
Constraint: ${\mathbf{ip}}\ge 1$.
res
Type: System.Double[]
An array of size [n]
On entry: the residuals, ${r}_{1},{r}_{2},\dots ,{r}_{n}$.
Constraint: the mean of the residuals $\text{}\le \sqrt{\epsilon }$, where $\epsilon$ is the machine precision.
d
Type: System.Double%
On exit: the Durbin–Watson statistic, $d$.
pdl
Type: System.Double%
On exit: lower bound for the significance of the Durbin–Watson statistic, ${p}_{\mathrm{l}}$.
pdu
Type: System.Double%
On exit: upper bound for the significance of the Durbin–Watson statistic, ${p}_{\mathrm{u}}$.
ifail
Type: System.Int32%
On exit: ${\mathbf{ifail}}={0}$ unless the method detects an error or a warning has been flagged (see [Error Indicators and Warnings]).
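The entry constraints above map directly onto the ifail codes listed in [Error Indicators and Warnings]. As a rough standalone illustration (in C++ rather than C#, with a hypothetical function name; this is a sketch, not NAG's internal code), the checks could look like:

```cpp
#include <cmath>
#include <limits>
#include <vector>

// Hedged sketch of the documented entry constraints, returning the
// corresponding ifail code (0 on success). Not the NAG implementation.
int validate_g02fc_inputs(int n, int ip, const std::vector<double>& res)
{
    // ifail = 1: n <= ip, or ip < 1
    if (n <= ip || ip < 1) return 1;

    // ifail = 2: the mean of the residuals exceeds sqrt(epsilon);
    // epsilon is taken here as machine precision for double (an assumption).
    double mean = 0.0;
    for (double r : res) mean += r;
    mean /= static_cast<double>(n);
    if (std::fabs(mean) > std::sqrt(std::numeric_limits<double>::epsilon()))
        return 2;

    // ifail = 3: all residuals are identical
    bool all_identical = true;
    for (double r : res)
        if (r != res.front()) { all_identical = false; break; }
    if (all_identical) return 3;

    return 0; // constraints satisfied
}
```

Here `std::numeric_limits<double>::epsilon()` stands in for the $\epsilon$ referred to in the constraint on res.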

# Description

For the general linear regression model
 $y = X\beta + \epsilon,$
 where $y$ is a vector of length $n$ of observations of the dependent variable, $X$ is an $n$ by $p$ matrix of the independent variables, $\beta$ is a vector of length $p$ of unknown parameters, and $\epsilon$ is a vector of length $n$ of unknown random errors.
The residuals are given by
 $r = y - \hat{y} = y - X\hat{\beta}$
and the fitted values, $\stackrel{^}{y}=X\stackrel{^}{\beta }$, can be written as $Hy$ for an $n$ by $n$ matrix $H$. Note that when a mean term is included in the model the sum of the residuals is zero. If the observations have been taken serially, that is ${y}_{1},{y}_{2},\dots ,{y}_{n}$ can be considered as a time series, the Durbin–Watson test can be used to test for serial correlation in the ${\epsilon }_{i}$; see Durbin and Watson (1950), Durbin and Watson (1951) and Durbin and Watson (1971).
The Durbin–Watson statistic is
 $d = \frac{\sum_{i=1}^{n-1}\left(r_{i+1}-r_i\right)^2}{\sum_{i=1}^{n}r_i^2}.$
Positive serial correlation in the ${\epsilon }_{i}$ will lead to a small value of $d$ while for independent errors $d$ will be close to $2$. Durbin and Watson show that the exact distribution of $d$ depends on the eigenvalues of the matrix $HA$ where the matrix $A$ is such that $d$ can be written as
 $d = \frac{r^{\mathrm{T}}Ar}{r^{\mathrm{T}}r}$
and the nonzero eigenvalues of the matrix $A$ are ${\lambda }_{j}=2\left(1-\mathrm{cos}\left(\pi j/n\right)\right)$, for $j=1,2,\dots ,n-1$.
However, bounds on the distribution can be obtained, the lower bound being
 $d_l = \frac{\sum_{i=1}^{n-p}\lambda_i u_i^2}{\sum_{i=1}^{n-p}u_i^2}$
and the upper bound being
 $d_u = \frac{\sum_{i=1}^{n-p}\lambda_{i-1+p} u_i^2}{\sum_{i=1}^{n-p}u_i^2},$
where the ${u}_{i}$ are independent standard Normal variables. The lower tail probabilities associated with these bounds, ${p}_{\mathrm{l}}$ and ${p}_{\mathrm{u}}$, are computed by g01ep. The interpretation of the bounds is that, for a test of size (significance) $\alpha$, if ${p}_{l}\le \alpha$ the test is significant, if ${p}_{u}>\alpha$ the test is not significant, while if ${p}_{\mathrm{l}}>\alpha$ and ${p}_{\mathrm{u}}\le \alpha$ no conclusion can be reached.
The above probabilities are for the usual test of positive auto-correlation. If the alternative of negative auto-correlation is required, then a call to g01ep should be made with the parameter d taking the value of $4-d$; see Newbold (1988).
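The decision rule described above can be sketched as follows (an illustrative C++ fragment with a hypothetical function name, not part of the NAG API); for the negative autocorrelation alternative, the same rule would be applied to the bounds returned for $4-d$:

```cpp
#include <string>

// Documented interpretation of the bounds p_l (pdl) and p_u (pdu) for a
// test of size alpha against positive autocorrelation. Illustrative sketch.
std::string dw_decision(double pdl, double pdu, double alpha)
{
    if (pdl <= alpha) return "significant";     // p_l <= alpha
    if (pdu > alpha)  return "not significant"; // p_l > alpha and p_u > alpha
    return "no conclusion";                     // p_l > alpha and p_u <= alpha
}
```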

# References

Durbin J and Watson G S (1950) Testing for serial correlation in least squares regression. I Biometrika 37 409–428
Durbin J and Watson G S (1951) Testing for serial correlation in least squares regression. II Biometrika 38 159–178
Durbin J and Watson G S (1971) Testing for serial correlation in least squares regression. III Biometrika 58 1–19
Granger C W J and Newbold P (1986) Forecasting Economic Time Series (2nd Edition) Academic Press
Newbold P (1988) Statistics for Business and Economics Prentice–Hall

# Error Indicators and Warnings

Errors or warnings detected by the method:
${\mathbf{ifail}}=1$
 On entry, ${\mathbf{n}}\le {\mathbf{ip}}$, or ${\mathbf{ip}}<1$.
${\mathbf{ifail}}=2$
 On entry, the mean of the residuals was $\text{}>\sqrt{\epsilon }$, where $\epsilon$ is the machine precision.
${\mathbf{ifail}}=3$
 On entry, all residuals are identical.
${\mathbf{ifail}}=-9000$
An error occurred, see message report.
${\mathbf{ifail}}=-8000$
Negative dimension for array $〈\mathit{\text{value}}〉$
${\mathbf{ifail}}=-6000$
Invalid Parameters $〈\mathit{\text{value}}〉$

# Accuracy

The probabilities are computed to an accuracy of at least $4$ decimal places.


# Further Comments

If the exact probabilities are required, then the first $n-p$ eigenvalues of $HA$ can be computed and g01jd used to compute the required probabilities with the parameter c set to $0.0$ and the parameter d set to the Durbin–Watson statistic $d$.

# Example

A set of $10$ residuals is read in, and the Durbin–Watson statistic and the probability bounds are computed and printed.

Example program (C#): g02fce.cs

Example program data: g02fce.d

Example program results: g02fce.r