g02kb calculates a ridge regression, with ridge parameters supplied by you.

# Syntax

C#
```
public static void g02kb(
    int n,
    int m,
    double[,] x,
    int[] isx,
    int ip,
    double[] y,
    int lh,
    double[] h,
    double[] nep,
    int wantb,
    double[,] b,
    int wantvf,
    double[,] vf,
    int lpec,
    string[] pec,
    double[,] pe,
    out int ifail
)
```
Visual Basic
```
Public Shared Sub g02kb ( _
    n As Integer, _
    m As Integer, _
    x As Double(,), _
    isx As Integer(), _
    ip As Integer, _
    y As Double(), _
    lh As Integer, _
    h As Double(), _
    nep As Double(), _
    wantb As Integer, _
    b As Double(,), _
    wantvf As Integer, _
    vf As Double(,), _
    lpec As Integer, _
    pec As String(), _
    pe As Double(,), _
    <OutAttribute> ByRef ifail As Integer _
)
```
Visual C++
```
public:
static void g02kb(
    int n,
    int m,
    array<double,2>^ x,
    array<int>^ isx,
    int ip,
    array<double>^ y,
    int lh,
    array<double>^ h,
    array<double>^ nep,
    int wantb,
    array<double,2>^ b,
    int wantvf,
    array<double,2>^ vf,
    int lpec,
    array<String^>^ pec,
    array<double,2>^ pe,
    [OutAttribute] int% ifail
)
```
F#
```
static member g02kb :
    n : int *
    m : int *
    x : float[,] *
    isx : int[] *
    ip : int *
    y : float[] *
    lh : int *
    h : float[] *
    nep : float[] *
    wantb : int *
    b : float[,] *
    wantvf : int *
    vf : float[,] *
    lpec : int *
    pec : string[] *
    pe : float[,] *
    ifail : int byref -> unit
```

#### Parameters

n
Type: System.Int32
On entry: $n$, the number of observations.
Constraint: ${\mathbf{n}}\ge 1$.
m
Type: System.Int32
On entry: the number of independent variables available in the data matrix $X$.
Constraint: ${\mathbf{m}}\le {\mathbf{n}}$.
x
Type: System.Double[,]
An array of size [dim1, m]
Note: dim1 must satisfy the constraint: $\mathrm{dim1}\ge {\mathbf{n}}$
On entry: the values of independent variables in the data matrix $X$.
isx
Type: System.Int32[]
An array of size [m]
On entry: indicates which $m$ independent variables are included in the model.
${\mathbf{isx}}\left[j-1\right]=1$
The $j$th variable in x will be included in the model.
${\mathbf{isx}}\left[j-1\right]=0$
Variable $j$ is excluded.
Constraint: ${\mathbf{isx}}\left[\mathit{j}-1\right]=0\text{​ or ​}1$, for $\mathit{j}=1,2,\dots ,{\mathbf{m}}$.
ip
Type: System.Int32
On entry: the number of independent variables included in the model.
Constraints:
• $1\le {\mathbf{ip}}\le {\mathbf{m}}$;
• Exactly ip elements of isx must be equal to $1$.
y
Type: System.Double[]
An array of size [n]
On entry: the $n$ values of the dependent variable $y$.
lh
Type: System.Int32
On entry: the number of supplied ridge parameters.
Constraint: ${\mathbf{lh}}>0$.
h
Type: System.Double[]
An array of size [lh]
On entry: ${\mathbf{h}}\left[j-1\right]$ is the value of the $j$th ridge parameter $h$.
Constraint: ${\mathbf{h}}\left[\mathit{j}-1\right]\ge 0.0$, for $\mathit{j}=1,2,\dots ,{\mathbf{lh}}$.
nep
Type: System.Double[]
An array of size [lh]
On exit: ${\mathbf{nep}}\left[\mathit{j}-1\right]$ is the number of effective parameters, $\gamma$, in the $\mathit{j}$th model, for $\mathit{j}=1,2,\dots ,{\mathbf{lh}}$.
wantb
Type: System.Int32
On entry: defines the options for parameter estimates.
${\mathbf{wantb}}=0$
Parameter estimates are not calculated and b is not referenced.
${\mathbf{wantb}}=1$
Parameter estimates $b$ are calculated for the original data.
${\mathbf{wantb}}=2$
Parameter estimates $\stackrel{~}{b}$ are calculated for the standardized data.
Constraint: ${\mathbf{wantb}}=0$, $1$ or $2$.
b
Type: System.Double[,]
An array of size [dim1, _tdb]
Note: dim1 must satisfy the constraint:
• if ${\mathbf{wantb}}\ne 0$, $\mathrm{dim1}\ge {\mathbf{ip}}+1$;
• otherwise $\mathrm{dim1}\ge 1$.
Note: the second dimension of the array b must be at least ${\mathbf{lh}}$ if ${\mathbf{wantb}}\ne 0$, and at least $1$ otherwise.
On exit: if ${\mathbf{wantb}}\ne 0$, b contains the intercept and parameter estimates for the fitted ridge regression model in the order indicated by isx. ${\mathbf{b}}\left[0,\mathit{j}-1\right]$, for $\mathit{j}=1,2,\dots ,{\mathbf{lh}}$, contains the estimate for the intercept; ${\mathbf{b}}\left[\mathit{i},j-1\right]$ contains the parameter estimate for the $\mathit{i}$th independent variable in the model fitted with ridge parameter ${\mathbf{h}}\left[j-1\right]$, for $\mathit{i}=1,2,\dots ,{\mathbf{ip}}$.
wantvf
Type: System.Int32
On entry: defines the options for variance inflation factors.
${\mathbf{wantvf}}=0$
Variance inflation factors are not calculated and the array vf is not referenced.
${\mathbf{wantvf}}=1$
Variance inflation factors are calculated.
Constraints:
• ${\mathbf{wantvf}}=0$ or $1$;
• if ${\mathbf{wantb}}=0$, ${\mathbf{wantvf}}=1$.
vf
Type: System.Double[,]
An array of size [dim1, _tdv]
Note: dim1 must satisfy the constraint:
• if ${\mathbf{wantvf}}\ne 0$, $\mathrm{dim1}\ge {\mathbf{ip}}$;
• otherwise $\mathrm{dim1}\ge 1$.
Note: the second dimension of the array vf must be at least ${\mathbf{lh}}$ if ${\mathbf{wantvf}}\ne 0$, and at least $1$ otherwise.
On exit: if ${\mathbf{wantvf}}=1$, the variance inflation factors. For the $\mathit{i}$th independent variable in a model fitted with ridge parameter ${\mathbf{h}}\left[j-1\right]$, ${\mathbf{vf}}\left[\mathit{i}-1,j-1\right]$ is the value of ${v}_{\mathit{i}}$, for $\mathit{i}=1,2,\dots ,{\mathbf{ip}}$.
lpec
Type: System.Int32
On entry: the number of prediction error statistics to return; set ${\mathbf{lpec}}\le 0$ for no prediction error estimates.
pec
Type: System.String[]
An array of size [lpec]
On entry: if ${\mathbf{lpec}}>0$, ${\mathbf{pec}}\left[\mathit{j}-1\right]$ defines the $\mathit{j}$th prediction error, for $\mathit{j}=1,2,\dots ,{\mathbf{lpec}}$; otherwise pec is not referenced.
${\mathbf{pec}}\left[j-1\right]=\text{"B"}$
Bayesian information criterion (BIC).
${\mathbf{pec}}\left[j-1\right]=\text{"F"}$
Future prediction error (FPE).
${\mathbf{pec}}\left[j-1\right]=\text{"G"}$
Generalized cross-validation (GCV).
${\mathbf{pec}}\left[j-1\right]=\text{"L"}$
Leave-one-out cross-validation (LOOCV).
${\mathbf{pec}}\left[j-1\right]=\text{"U"}$
Unbiased estimate of variance (UEV).
Constraint: if ${\mathbf{lpec}}>0$, ${\mathbf{pec}}\left[\mathit{j}-1\right]=\text{"B"}$, $\text{"F"}$, $\text{"G"}$, $\text{"L"}$ or $\text{"U"}$, for $\mathit{j}=1,2,\dots ,{\mathbf{lpec}}$.
pe
Type: System.Double[,]
An array of size [dim1, _tdpe]
Note: dim1 must satisfy the constraint:
• if ${\mathbf{lpec}}>0$, $\mathrm{dim1}\ge {\mathbf{lpec}}$;
• otherwise $\mathrm{dim1}\ge 1$.
Note: the second dimension of the array pe must be at least ${\mathbf{lh}}$ if ${\mathbf{lpec}}>0$, and at least $1$ otherwise.
On exit: if ${\mathbf{lpec}}\le 0$, pe is not referenced; otherwise ${\mathbf{pe}}\left[\mathit{i}-1,\mathit{j}-1\right]$ contains the prediction error of criterion ${\mathbf{pec}}\left[\mathit{i}-1\right]$ for the model fitted with ridge parameter ${\mathbf{h}}\left[\mathit{j}-1\right]$, for $\mathit{i}=1,2,\dots ,{\mathbf{lpec}}$ and $\mathit{j}=1,2,\dots ,{\mathbf{lh}}$.
ifail
Type: System.Int32%
On exit: ${\mathbf{ifail}}={0}$ unless the method detects an error or a warning has been flagged (see [Error Indicators and Warnings]).

# Description

A linear model has the form:
 $y = c + X\beta + \epsilon,$
where
• $y$ is an $n$ by $1$ matrix of values of a dependent variable;
• $c$ is a scalar intercept term;
• $X$ is an $n$ by $m$ matrix of values of independent variables;
• $\beta$ is an $m$ by $1$ matrix of unknown values of parameters;
• $\epsilon$ is an $n$ by $1$ matrix of unknown random errors such that the variance of $\epsilon$ is ${\sigma}^{2}I$.
Let $\stackrel{~}{X}$ be the mean-centred $X$ and $\stackrel{~}{y}$ the mean-centred $y$. Furthermore, $\stackrel{~}{X}$ is scaled such that the diagonal elements of the cross product matrix ${\stackrel{~}{X}}^{\mathrm{T}}\stackrel{~}{X}$ are one. The linear model now takes the form:
 $\tilde{y} = \tilde{X}\tilde{\beta} + \epsilon.$
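The centring and scaling step can be sketched column by column. The fragment below is a plain-Python illustration of the standardization described above, not part of the NAG interface:

```python
# Illustrative sketch: mean-centre a column of X and scale it so the
# corresponding diagonal element of the cross-product matrix X~'X~ is one.
# Plain Python stand-in for the standardization described above; it is
# not part of the NAG g02kb interface.

def standardize(col):
    n = len(col)
    mean = sum(col) / n
    centred = [v - mean for v in col]
    # After centring, the diagonal element of X~'X~ for this column is
    # its squared 2-norm, so dividing by the norm makes it exactly one.
    norm = sum(v * v for v in centred) ** 0.5
    return [v / norm for v in centred]
```

The standardized column then sums to zero and has unit sum of squares.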
Ridge regression estimates the parameters $\stackrel{~}{\beta }$ in a penalised least squares sense by finding the $\stackrel{~}{b}$ that minimizes
 $\|\tilde{X}\tilde{b} - \tilde{y}\|^2 + h\|\tilde{b}\|^2, \quad h > 0,$
where $‖·‖$ denotes the ${\ell }_{2}$-norm and $h$ is a scalar regularization or ridge parameter. For a given value of $h$, the parameter estimates $\stackrel{~}{b}$ are found by evaluating
 $\tilde{b} = (\tilde{X}^{\mathrm{T}}\tilde{X} + hI)^{-1}\tilde{X}^{\mathrm{T}}\tilde{y}.$
Note that if $h=0$ the ridge regression solution is equivalent to the ordinary least squares solution.
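The closed-form estimate above can be illustrated on a tiny two-variable problem. This is a plain-Python sketch with made-up data, solving the 2 by 2 system directly rather than via the SVD that g02kb actually uses:

```python
# Illustrative sketch of the ridge estimate b~ = (X~'X~ + hI)^{-1} X~'y~
# for a two-variable problem, solving the 2-by-2 system by Cramer's rule.
# The data are made up for illustration; this is not the NAG g02kb interface.

def ridge_2var(X, y, h):
    """Ridge estimates for an n-by-2 design matrix X given as row lists."""
    a11 = sum(r[0] * r[0] for r in X) + h       # (X'X + hI)[0][0]
    a22 = sum(r[1] * r[1] for r in X) + h       # (X'X + hI)[1][1]
    a12 = sum(r[0] * r[1] for r in X)           # symmetric off-diagonal
    c1 = sum(r[0] * yi for r, yi in zip(X, y))  # (X'y)[0]
    c2 = sum(r[1] * yi for r, yi in zip(X, y))  # (X'y)[1]
    det = a11 * a22 - a12 * a12
    return ((c1 * a22 - c2 * a12) / det, (a11 * c2 - a12 * c1) / det)

X = [[1.0, 0.5], [0.5, 1.0], [-1.0, 0.2], [-0.5, -1.7]]
y = [1.2, 0.9, -0.8, -1.3]

b_ols = ridge_2var(X, y, 0.0)    # h = 0 recovers ordinary least squares
b_ridge = ridge_2var(X, y, 0.1)  # h > 0 shrinks the l2-norm of the estimates
```

For any $h>0$ the ridge solution has a smaller ${\ell }_{2}$-norm than the least squares solution.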
Rather than calculate the inverse of (${\stackrel{~}{X}}^{\mathrm{T}}\stackrel{~}{X}+hI$) directly, g02kb uses the singular value decomposition (SVD) of $\stackrel{~}{X}$. After decomposing $\stackrel{~}{X}$ into $UD{V}^{\mathrm{T}}$ where $U$ and $V$ are orthogonal matrices and $D$ is a diagonal matrix, the parameter estimates become
 $\tilde{b} = V(D^{\mathrm{T}}D + hI)^{-1}D\,U^{\mathrm{T}}\tilde{y}.$
A consequence of introducing the ridge parameter is that the effective number of parameters, $\gamma$, in the model is given by the sum of diagonal elements of
 $D^{\mathrm{T}}D\,(D^{\mathrm{T}}D + hI)^{-1},$
see Moody (1992) for details.
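In terms of the singular values of the standardized design matrix, the trace above reduces to a simple sum. A plain-Python sketch (the singular values below are made up; this is not the NAG interface):

```python
# Illustrative sketch: the effective number of parameters is the trace of
# D'D (D'D + hI)^{-1}, i.e. the sum of d_i^2 / (d_i^2 + h) over the
# singular values d_i of the standardized design matrix.

def effective_parameters(sing_vals, h):
    return sum(d * d / (d * d + h) for d in sing_vals)

d = [2.0, 1.0, 0.1]
gamma0 = effective_parameters(d, 0.0)  # h = 0: gamma equals ip (here 3)
gamma1 = effective_parameters(d, 0.5)  # h > 0: each term < 1, so gamma < ip
```

Increasing $h$ monotonically decreases $\gamma$, reflecting the stronger shrinkage.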
Any multi-collinearity in the design matrix $X$ may be highlighted by calculating the variance inflation factors for the fitted model. The $j$th variance inflation factor, ${v}_{j}$, is a scaled version of the multiple correlation coefficient between independent variable $j$ and the other independent variables, ${R}_{j}$, and is given by
 $v_j = \frac{1}{1 - R_j}, \quad j = 1, 2, \ldots, m.$
The $m$ variance inflation factors are calculated as the diagonal elements of the matrix:
 $(\tilde{X}^{\mathrm{T}}\tilde{X} + hI)^{-1}\tilde{X}^{\mathrm{T}}\tilde{X}\,(\tilde{X}^{\mathrm{T}}\tilde{X} + hI)^{-1},$
which, using the SVD of $\stackrel{~}{X}$, is equivalent to the diagonal elements of the matrix:
 $V(D^{\mathrm{T}}D + hI)^{-1}D^{\mathrm{T}}D\,(D^{\mathrm{T}}D + hI)^{-1}V^{\mathrm{T}}.$
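The diagonal elements of that matrix expand into a weighted sum over the singular values. A plain-Python sketch, assuming the caller supplies $V$ (row-indexed) and the singular values; this is not the NAG interface:

```python
# Illustrative sketch: given the SVD X~ = U D V', the j-th variance
# inflation factor is the j-th diagonal element of
# V (D'D + hI)^{-1} D'D (D'D + hI)^{-1} V', which expands to
# sum_k V[j][k]^2 * d_k^2 / (d_k^2 + h)^2.

def vif(V, sing_vals, h, j):
    return sum(V[j][k] ** 2 * d * d / (d * d + h) ** 2
               for k, d in enumerate(sing_vals))
```

For perfectly uncorrelated standardized columns ($V=I$, unit singular values) and $h=0$ each factor is exactly one; any $h>0$ reduces it.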
Given a value of $h$, any or all of the following prediction criteria are available:
(a) Generalized cross-validation (GCV):
 $\frac{ns}{(n - \gamma)^2};$
(b) Unbiased estimate of variance (UEV):
 $\frac{s}{n - \gamma};$
(c) Future prediction error (FPE):
 $\frac{1}{n}\left(s + \frac{2\gamma s}{n - \gamma}\right);$
(d) Bayesian information criterion (BIC):
 $\frac{1}{n}\left(s + \log(n)\frac{\gamma s}{n - \gamma}\right);$
(e) Leave-one-out cross-validation (LOOCV),
where $s$ is the sum of squares of residuals.
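The first four criteria depend only on $s$, $n$ and $\gamma$, and can be sketched directly (plain Python, not the NAG interface; LOOCV is omitted because it needs the individual leverages, not just these three scalars):

```python
# Illustrative sketch of the prediction error criteria above in terms of
# the residual sum of squares s, the number of observations n and the
# effective number of parameters gamma.
import math

def gcv(s, n, gamma):
    return n * s / (n - gamma) ** 2

def uev(s, n, gamma):
    return s / (n - gamma)

def fpe(s, n, gamma):
    return (s + 2.0 * gamma * s / (n - gamma)) / n

def bic(s, n, gamma):
    return (s + math.log(n) * gamma * s / (n - gamma)) / n
```

When $\gamma =0$ all four reduce to $s/n$; as $\gamma$ grows towards $n$ each criterion increases, penalising over-flexible fits.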
Although parameter estimates $\stackrel{~}{b}$ are calculated by using $\stackrel{~}{X}$, it is usual to report the parameter estimates $b$ associated with $X$. These are calculated from $\stackrel{~}{b}$, and the means and scalings of $X$. Optionally, either $\stackrel{~}{b}$ or $b$ may be calculated.

# References

Hastie T, Tibshirani R and Friedman J (2003) The Elements of Statistical Learning: Data Mining, Inference and Prediction Springer Series in Statistics
Moody J.E. (1992) The effective number of parameters: An analysis of generalisation and regularisation in nonlinear learning systems In: Neural Information Processing Systems (eds J E Moody, S J Hanson, and R P Lippmann) 4 847–854 Morgan Kaufmann San Mateo CA

# Error Indicators and Warnings

Errors or warnings detected by the method:
Some error messages may refer to parameters that are dropped from this interface (LDX, LDB, LDVF, LDPE). In these cases, an error in another parameter has usually caused an incorrect value to be inferred.
${\mathbf{ifail}}=1$
 On entry, ${\mathbf{n}}<1$, or ${\mathbf{h}}\left[j-1\right]<0.0$, or ${\mathbf{lh}}\le 0$, or ${\mathbf{wantb}}\ne 0$, $1$ or $2$, or ${\mathbf{wantvf}}\ne 0$ or $1$, or an element of pec is not defined.
${\mathbf{ifail}}=2$
 On entry, ${\mathbf{m}}>{\mathbf{n}}$, or ${\mathbf{ip}}<1$ or ${\mathbf{ip}}>{\mathbf{m}}$, or an element of ${\mathbf{isx}}\ne 0$ or $1$, or ip does not equal the sum of elements in isx.
${\mathbf{ifail}}=3$
Both wantb and wantvf are zero.
${\mathbf{ifail}}=4$
${\mathbf{ifail}}=-999$
Internal memory allocation failed.
${\mathbf{ifail}}=-9000$
An error occurred, see message report.
${\mathbf{ifail}}=-6000$
Invalid Parameters $〈\mathit{\text{value}}〉$
${\mathbf{ifail}}=-4000$
Invalid dimension for array $〈\mathit{\text{value}}〉$
${\mathbf{ifail}}=-8000$
Negative dimension for array $〈\mathit{\text{value}}〉$

# Accuracy

The accuracy of g02kb is closely related to that of the singular value decomposition.

# Parallelism and Performance

None.

g02kb allocates internally $\mathrm{max}\phantom{\rule{0.125em}{0ex}}\left(5×\left({\mathbf{n}}-1\right),2×{\mathbf{ip}}×{\mathbf{ip}}\right)+\left({\mathbf{n}}+3\right)×{\mathbf{ip}}+{\mathbf{n}}$ elements of double precision storage.

# Example

This example reads in data from an experiment to model body fat, and a selection of ridge regression models are calculated.

Example program (C#): g02kbe.cs

Example program data: g02kbe.d

Example program results: g02kbe.r