g03aa performs a principal component analysis on a data matrix; both the principal component loadings and the principal component scores are returned.

# Syntax

C#

```csharp
public static void g03aa(
    string matrix,
    string std,
    string weight,
    int n,
    int m,
    double[,] x,
    int[] isx,
    double[] s,
    double[] wt,
    int nvar,
    double[,] e,
    double[,] p,
    double[,] v,
    out int ifail
)
```

Visual Basic

```vb
Public Shared Sub g03aa ( _
    matrix As String, _
    std As String, _
    weight As String, _
    n As Integer, _
    m As Integer, _
    x As Double(,), _
    isx As Integer(), _
    s As Double(), _
    wt As Double(), _
    nvar As Integer, _
    e As Double(,), _
    p As Double(,), _
    v As Double(,), _
    <OutAttribute> ByRef ifail As Integer _
)
```

Visual C++

```cpp
public:
static void g03aa(
    String^ matrix,
    String^ std,
    String^ weight,
    int n,
    int m,
    array<double,2>^ x,
    array<int>^ isx,
    array<double>^ s,
    array<double>^ wt,
    int nvar,
    array<double,2>^ e,
    array<double,2>^ p,
    array<double,2>^ v,
    [OutAttribute] int% ifail
)
```

F#

```fsharp
static member g03aa :
    matrix : string *
    std : string *
    weight : string *
    n : int *
    m : int *
    x : float[,] *
    isx : int[] *
    s : float[] *
    wt : float[] *
    nvar : int *
    e : float[,] *
    p : float[,] *
    v : float[,] *
    ifail : int byref -> unit
```

#### Parameters

- matrix
- Type: System.String
*On entry*: indicates the type of matrix for which the principal component analysis is to be carried out.
- ${\mathbf{matrix}}=\text{"C"}$
- The analysis is carried out for the correlation matrix.
- ${\mathbf{matrix}}=\text{"S"}$
- The analysis is carried out for a standardized matrix, with standardizations given by s.
- ${\mathbf{matrix}}=\text{"U"}$
- The analysis is carried out for the sums of squares and cross-products matrix.
- ${\mathbf{matrix}}=\text{"V"}$
- The analysis is carried out for the variance-covariance matrix.

*Constraint*: ${\mathbf{matrix}}=\text{"C"}$, $\text{"S"}$, $\text{"U"}$ or $\text{"V"}$.

- std
- Type: System.String
*On entry*: indicates if the principal component scores are to be standardized.
- ${\mathbf{std}}=\text{"S"}$
- The principal component scores are standardized so that ${F}^{\prime}F=I$, i.e., $F={X}_{s}P{\Lambda}^{-1}=V$.
- ${\mathbf{std}}=\text{"U"}$
- The principal component scores are unstandardized, i.e., $F={X}_{s}P=V\Lambda $.
- ${\mathbf{std}}=\text{"Z"}$
- The principal component scores are standardized so that they have unit variance.
- ${\mathbf{std}}=\text{"E"}$
- The principal component scores are standardized so that they have variance equal to the corresponding eigenvalue.

*Constraint*: ${\mathbf{std}}=\text{"E"}$, $\text{"S"}$, $\text{"U"}$ or $\text{"Z"}$.
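The four options differ only in how the columns of $V$ are rescaled. The following pure-Python sketch of the relationships is illustrative only (the function name and the list-of-lists representation are not part of the library interface; it assumes zero-mean scores whose variance is the column sum of squares divided by $n-1$):

```python
import math

def rescale_scores(v, lam, n, std):
    """Illustrative sketch: rescale the columns of V (with V'V = I) to
    obtain the four score standardizations, given the singular values
    lam[i] = lambda_i and the number of observations n."""
    factors = {
        "S": [1.0] * len(lam),                     # F = V, so F'F = I
        "U": list(lam),                            # F = V * Lambda
        "Z": [math.sqrt(n - 1)] * len(lam),        # unit variance
        "E": [l * math.sqrt(n - 1) for l in lam],  # variance = eigenvalue
    }
    f = factors[std]
    return [[row[j] * f[j] for j in range(len(lam))] for row in v]
```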

- weight
- Type: System.String
*On entry*: indicates if weights are to be used.
- ${\mathbf{weight}}=\text{"U"}$
- No weights are used.
- ${\mathbf{weight}}=\text{"W"}$
- Weights are used and must be supplied in wt.

*Constraint*: ${\mathbf{weight}}=\text{"U"}$ or $\text{"W"}$.

- n
- Type: System.Int32
*On entry*: $n$, the number of observations.

*Constraint*: ${\mathbf{n}}\ge 2$.

- m
- Type: System.Int32
*On entry*: $m$, the number of variables in the data matrix.

*Constraint*: ${\mathbf{m}}\ge 1$.

- x
- Type: System.Double[,] — an array of size [dim1, m]
**Note:** dim1 must satisfy the constraint $\mathrm{dim1}\ge {\mathbf{n}}$.

*On entry*: ${\mathbf{x}}[\mathit{i}-1,\mathit{j}-1]$ must contain the $\mathit{i}$th observation for the $\mathit{j}$th variable, for $\mathit{i}=1,2,\dots ,n$ and $\mathit{j}=1,2,\dots ,m$.

- isx
- Type: System.Int32[] — an array of size [m]
*On entry*: ${\mathbf{isx}}\left[j-1\right]$ indicates whether or not the $j$th variable is to be included in the analysis. If ${\mathbf{isx}}\left[\mathit{j}-1\right]>0$, the variable contained in the $\mathit{j}$th column of x is included in the principal component analysis, for $\mathit{j}=1,2,\dots ,m$.

- s
- Type: System.Double[] — an array of size [m]
*On entry*: the standardizations to be used, if any. If ${\mathbf{matrix}}=\text{"S"}$, the first $m$ elements of s must contain the standardization coefficients, the diagonal elements of $\sigma$.

*Constraint*: if ${\mathbf{isx}}\left[\mathit{j}-1\right]>0$, ${\mathbf{s}}\left[\mathit{j}-1\right]>0.0$, for $\mathit{j}=1,2,\dots ,m$.

- wt
- Type: System.Double[] — an array of size [dim1]
**Note:** the dimension of the array wt must be at least ${\mathbf{n}}$ if ${\mathbf{weight}}=\text{"W"}$, and at least $1$ otherwise.

*On entry*: if ${\mathbf{weight}}=\text{"W"}$, the first $n$ elements of wt must contain the weights to be used in the principal component analysis. If ${\mathbf{wt}}\left[i-1\right]=0.0$, the $i$th observation is not included in the analysis. The effective number of observations is the sum of the weights. If ${\mathbf{weight}}=\text{"U"}$, wt is not referenced and the effective number of observations is $n$.

*Constraints*:
- ${\mathbf{wt}}\left[\mathit{i}-1\right]\ge 0.0$, for $\mathit{i}=1,2,\dots ,n$;
- the sum of the weights $\ge {\mathbf{nvar}}+1$.

- nvar
- Type: System.Int32
*On entry*: $p$, the number of variables in the principal component analysis.

*Constraint*: $1\le {\mathbf{nvar}}\le \mathrm{min}\left({\mathbf{n}}-1,{\mathbf{m}}\right)$.

- e
- Type: System.Double[,] — an array of size [dim1, $6$]
**Note:** dim1 must satisfy the constraint $\mathrm{dim1}\ge {\mathbf{nvar}}$.

*On exit*: the statistics of the principal component analysis.
- ${\mathbf{e}}[\mathit{i}-1,0]$
- The eigenvalue associated with the $\mathit{i}$th principal component, ${\lambda}_{\mathit{i}}^{2}$, for $\mathit{i}=1,2,\dots ,p$.
- ${\mathbf{e}}[\mathit{i}-1,1]$
- The proportion of variation explained by the $\mathit{i}$th principal component, for $\mathit{i}=1,2,\dots ,p$.
- ${\mathbf{e}}[\mathit{i}-1,2]$
- The cumulative proportion of variation explained by the first $\mathit{i}$ principal components, for $\mathit{i}=1,2,\dots ,p$.
- ${\mathbf{e}}[\mathit{i}-1,3]$
- The ${\chi}^{2}$ statistic, for $\mathit{i}=1,2,\dots ,p$.
- ${\mathbf{e}}[\mathit{i}-1,4]$
- The degrees of freedom for the ${\chi}^{2}$ statistic, for $\mathit{i}=1,2,\dots ,p$.

If ${\mathbf{matrix}}\ne \text{"C"}$, ${\mathbf{e}}[\mathit{i}-1,5]$ contains the significance level for the ${\chi}^{2}$ statistic, for $\mathit{i}=1,2,\dots ,p$. If ${\mathbf{matrix}}=\text{"C"}$, ${\mathbf{e}}[\mathit{i}-1,5]$ is returned as zero.
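Columns 2 and 3 of e follow directly from the eigenvalues in column 1. A small pure-Python sketch of that relationship (the helper name is illustrative only):

```python
def variation_explained(eigenvalues):
    """Given the eigenvalues lambda_i^2 (column 1 of e), return the
    proportion of variation explained by each principal component and
    the cumulative proportion (columns 2 and 3 of e)."""
    total = sum(eigenvalues)
    proportions = [ev / total for ev in eigenvalues]
    cumulative, running = [], 0.0
    for prop in proportions:
        running += prop
        cumulative.append(running)
    return proportions, cumulative
```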

- p
- Type: System.Double[,] — an array of size [dim1, nvar]
**Note:** dim1 must satisfy the constraint $\mathrm{dim1}\ge {\mathbf{nvar}}$.

*On exit*: the principal component loadings: the $\mathit{j}$th column of p contains the elements of the vector ${a}_{\mathit{j}}$, for $\mathit{j}=1,2,\dots ,{\mathbf{nvar}}$.

- v
- Type: System.Double[,] — an array of size [dim1, nvar]
**Note:** dim1 must satisfy the constraint $\mathrm{dim1}\ge {\mathbf{n}}$.

*On exit*: the principal component scores: the $\mathit{j}$th column of v contains the scores for the $\mathit{j}$th principal component, standardized as requested by std, for $\mathit{j}=1,2,\dots ,{\mathbf{nvar}}$.

- ifail
- Type: System.Int32
*On exit*: ${\mathbf{ifail}}={0}$ unless the method detects an error or a warning has been flagged (see [Error Indicators and Warnings]).

# Description

Let $X$ be an $n$ by $p$ data matrix of $n$ observations on $p$ variables ${x}_{1},{x}_{2},\dots ,{x}_{p}$ and let the $p$ by $p$ variance-covariance matrix of ${x}_{1},{x}_{2},\dots ,{x}_{p}$ be $S$. A vector ${a}_{1}$ of length $p$ is found such that:

$${a}_{1}^{\mathrm{T}}S{a}_{1}\text{ is maximized subject to }{a}_{1}^{\mathrm{T}}{a}_{1}=1\text{.}$$

The variable ${z}_{1}={\displaystyle \sum _{i=1}^{p}}{a}_{1i}{x}_{i}$ is known as the first principal component and gives the linear combination of the variables with the maximum variation. A second principal component, ${z}_{2}={\displaystyle \sum _{i=1}^{p}}{a}_{2i}{x}_{i}$, is found such that:

$${a}_{2}^{\mathrm{T}}S{a}_{2}\text{ is maximized subject to }{a}_{2}^{\mathrm{T}}{a}_{2}=1\text{ and }{a}_{2}^{\mathrm{T}}{a}_{1}=0\text{.}$$

This gives the linear combination of the variables, orthogonal to the first principal component, with the maximum variation. Further principal components are derived in a similar way.

The vectors ${a}_{1},{a}_{2},\dots ,{a}_{p}$ are the eigenvectors of the matrix $S$, and associated with each eigenvector is the eigenvalue ${\lambda}_{i}^{2}$. The value of ${\lambda}_{i}^{2}/\sum {\lambda}_{i}^{2}$ gives the proportion of variation explained by the $i$th principal component. Alternatively, the ${a}_{i}$'s can be considered as the right singular vectors in a singular value decomposition, with singular values ${\lambda}_{i}$, of the data matrix centred about its mean and scaled by $1/\sqrt{n-1}$, denoted ${X}_{s}$. This latter approach is used in g03aa, with

$${X}_{s}=V\Lambda {P}^{\prime}$$

where $\Lambda$ is a diagonal matrix with elements ${\lambda}_{i}$, $P$ is the $p$ by $p$ matrix with columns ${a}_{i}$, and $V$ is an $n$ by $p$ matrix with ${V}^{\prime}V=I$, which gives the principal component scores.

Principal component analysis is often used to reduce the dimension of a dataset, replacing a large number of correlated variables with a smaller number of orthogonal variables that still contain most of the information in the original dataset.

The choice of the number of dimensions required is usually based on the amount of variation accounted for by the leading principal components. If $k$ principal components are selected, then a test of the equality of the remaining $p-k$ eigenvalues is

$$\left(n-\left(2p+5\right)/6\right)\left\{-\sum _{i=k+1}^{p}\mathrm{log}\left({\lambda}_{i}^{2}\right)+\left(p-k\right)\mathrm{log}\left(\sum _{i=k+1}^{p}{\lambda}_{i}^{2}/\left(p-k\right)\right)\right\}$$

which has, asymptotically, a ${\chi}^{2}$-distribution with $\frac{1}{2}\left(p-k-1\right)\left(p-k+2\right)$ degrees of freedom.

Equality of the remaining eigenvalues indicates that if any more principal components are to be considered then they all should be considered.
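The statistic and its degrees of freedom can be computed directly from the eigenvalues. A pure-Python sketch (the function name is illustrative; recall that the approximation is invalid when the analysis is based on the correlation matrix):

```python
import math

def equality_of_eigenvalues(eigenvalues, n, k):
    """Chi-squared statistic and degrees of freedom for testing the
    equality of the p - k smallest eigenvalues lambda_i^2, after the
    first k principal components have been selected."""
    p = len(eigenvalues)
    rest = eigenvalues[k:]          # lambda_{k+1}^2, ..., lambda_p^2
    r = p - k
    stat = (n - (2 * p + 5) / 6.0) * (
        -sum(math.log(ev) for ev in rest)
        + r * math.log(sum(rest) / r)
    )
    dof = 0.5 * (r - 1) * (r + 2)
    return stat, dof
```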

Instead of the variance-covariance matrix the correlation matrix, the sums of squares and cross-products matrix or a standardized sums of squares and cross-products matrix may be used. In the last case $S$ is replaced by ${\sigma}^{-\frac{1}{2}}S{\sigma}^{-\frac{1}{2}}$ for a diagonal matrix $\sigma $ with positive elements. If the correlation matrix is used, the ${\chi}^{2}$ approximation for the statistic given above is not valid.
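For the ${\mathbf{matrix}}=\text{"S"}$ case, the standardization amounts to dividing each element ${S}_{ij}$ by $\sqrt{{\sigma}_{i}{\sigma}_{j}}$. A pure-Python sketch (the helper name is illustrative; when $\sigma$ holds the variances this reproduces the correlation matrix):

```python
import math

def standardize(s_mat, sigma):
    """Form sigma^{-1/2} S sigma^{-1/2} for a diagonal matrix sigma,
    given as the list of its positive diagonal elements."""
    d = [1.0 / math.sqrt(v) for v in sigma]
    m = len(sigma)
    return [[d[i] * s_mat[i][j] * d[j] for j in range(m)] for i in range(m)]
```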

The principal component scores, $F$, are the values of the principal component variables for the observations. These can be standardized so that the variance of these scores for each principal component is $1.0$ or equal to the corresponding eigenvalue.

Weights can be used with the analysis, in which case the matrix $X$ is first centred about the weighted means then each row is scaled by an amount $\sqrt{{w}_{i}}$, where ${w}_{i}$ is the weight for the $i$th observation.
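A pure-Python sketch of this weighted pre-processing (the function name and list-of-lists form are illustrative only; the additional overall $1/\sqrt{n-1}$-type scaling used to form ${X}_{s}$ is omitted):

```python
import math

def weighted_centre_scale(x, w):
    """Centre each column of x about its weighted mean, then scale
    row i by sqrt(w_i); rows with zero weight contribute nothing."""
    n, m = len(x), len(x[0])
    wsum = sum(w)
    means = [sum(w[i] * x[i][j] for i in range(n)) / wsum for j in range(m)]
    return [[math.sqrt(w[i]) * (x[i][j] - means[j]) for j in range(m)]
            for i in range(n)]
```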

# References

- Chatfield C and Collins A J (1980) *Introduction to Multivariate Analysis* Chapman and Hall
- Cooley W C and Lohnes P R (1971) *Multivariate Data Analysis* Wiley
- Hammarling S (1985) The singular value decomposition in multivariate statistics *SIGNUM Newsl.* **20(3)** 2–25
- Kendall M G and Stuart A (1969) *The Advanced Theory of Statistics (Volume 1)* (3rd Edition) Griffin
- Morrison D F (1967) *Multivariate Statistical Methods* McGraw–Hill

# Error Indicators and Warnings

Errors or warnings detected by the method:

Some error messages may refer to parameters that are dropped from this interface (LDX, LDE, LDP, LDV). In these cases, an error in another parameter has usually caused an incorrect value to be inferred.

- ${\mathbf{ifail}}=1$
On entry, ${\mathbf{m}}<1$, or ${\mathbf{n}}<2$, or ${\mathbf{nvar}}<1$, or ${\mathbf{nvar}}>{\mathbf{m}}$, or ${\mathbf{nvar}}\ge {\mathbf{n}}$, or ${\mathbf{matrix}}\ne \text{"C"}$, $\text{"S"}$, $\text{"U"}$ or $\text{"V"}$, or ${\mathbf{std}}\ne \text{"S"}$, $\text{"U"}$, $\text{"Z"}$ or $\text{"E"}$, or ${\mathbf{weight}}\ne \text{"U"}$ or $\text{"W"}$.

- ${\mathbf{ifail}}=2$
On entry, ${\mathbf{weight}}=\text{"W"}$ and a value of ${\mathbf{wt}}<0.0$.

- ${\mathbf{ifail}}=3$
On entry, there are not nvar values of ${\mathbf{isx}}>0$, or ${\mathbf{weight}}=\text{"W"}$ and the effective number of observations is less than ${\mathbf{nvar}}+1$.

- ${\mathbf{ifail}}=4$
On entry, ${\mathbf{s}}\left[j-1\right]\le 0.0$ for some $j=1,2,\dots ,m$, when ${\mathbf{matrix}}=\text{"S"}$ and ${\mathbf{isx}}\left[j-1\right]>0$.

- ${\mathbf{ifail}}=5$
- The singular value decomposition has failed to converge. This is an unlikely error exit.

- ${\mathbf{ifail}}=6$
- All eigenvalues/singular values are zero. This will be caused by all the variables being constant.

- ${\mathbf{ifail}}=-9000$
- An error occurred, see message report.
- ${\mathbf{ifail}}=-6000$
- Invalid Parameters $\langle\mathit{value}\rangle$
- ${\mathbf{ifail}}=-4000$
- Invalid dimension for array $\langle\mathit{value}\rangle$
- ${\mathbf{ifail}}=-8000$
- Negative dimension for array $\langle\mathit{value}\rangle$

# Accuracy

As g03aa uses a singular value decomposition of the data matrix, it will be less affected by ill-conditioned problems than traditional methods using the eigenvalue decomposition of the variance-covariance matrix.

# Parallelism and Performance

None.

# Further Comments

None.

# Example

A dataset is taken from Cooley and Lohnes (1971); it consists of ten observations on three variables. The unweighted principal components based on the variance-covariance matrix are computed and the principal component scores are requested. The scores are standardized so that they have variance equal to the corresponding eigenvalue.

Example program (C#): g03aae.cs