
$$y=X\beta +\epsilon \text{,}$$ 
where  $y$ is the vector of length $n$ of values of the dependent variable, 
$X$ is the $n$ by $p$ matrix of values of the independent variables, 
$\beta $ is a vector of length $p$ of unknown parameters, 
and  $\epsilon $ is a vector of length $n$ of unknown random errors such that $\mathrm{var}\left(\epsilon \right)={\sigma}^{2}I$. 
$$r=y-\hat{y}=y-X\hat{\beta }\text{.}$$
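The residuals and the leverages (the diagonal elements $h_i$ of the hat matrix $H=X{\left({X}^{\mathrm{T}}X\right)}^{-1}{X}^{\mathrm{T}}$, on which the standardizations below depend) can be sketched in NumPy. This is an illustrative sketch, not the routine's actual implementation; the function name is assumed.

```python
import numpy as np

def residuals_and_leverage(X, y):
    """Least squares fit: return the residuals r = y - X @ beta_hat
    and the leverages h_i (diagonal of the hat matrix)."""
    beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
    r = y - X @ beta_hat
    # H = X (X'X)^{-1} X' = Q Q', so the leverages are the
    # squared row norms of the reduced QR factor Q.
    Q, _ = np.linalg.qr(X)
    h = np.sum(Q**2, axis=1)
    return r, h
```

The QR route avoids forming ${\left({X}^{\mathrm{T}}X\right)}^{-1}$ explicitly, which is numerically preferable when $X$ is ill-conditioned.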
(i)  The $i$th residual is standardized using the estimate of its variance computed with ${s}^{2}$, the estimate of ${\sigma}^{2}$ calculated from all the data; this is known as internal Studentization.


(ii)  The $i$th residual is standardized using the estimate of its variance computed with ${s}_{i}^{2}$, the estimate of ${\sigma}^{2}$ calculated from the data excluding the $i$th observation; this is known as external Studentization.
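The two standardizations can be sketched as follows, using the standard formulas ${r}_{i}/\left(s\sqrt{1-{h}_{i}}\right)$ and ${r}_{i}/\left({s}_{i}\sqrt{1-{h}_{i}}\right)$, where the leave-one-out estimate ${s}_{i}^{2}$ has a closed form that avoids refitting for each $i$. This is an assumed illustration, not the library's code.

```python
import numpy as np

def studentized_residuals(r, h, p):
    """Internally and externally Studentized residuals.
    r : raw residuals, h : leverages, p : number of parameters."""
    n = len(r)
    s2 = (r @ r) / (n - p)            # s^2: estimate of sigma^2 from all data
    internal = r / np.sqrt(s2 * (1.0 - h))
    # Closed form for s_i^2, the variance estimate with the i-th
    # observation excluded (no per-observation refit needed).
    s2_i = ((n - p) * s2 - r**2 / (1.0 - h)) / (n - p - 1)
    external = r / np.sqrt(s2_i * (1.0 - h))
    return internal, external
```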

(i)  Cook's $D$


(ii)  Atkinson's $T$
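A sketch of the two influence measures, using the usual textbook forms built from the Studentized residuals above: Cook's ${D}_{i}={t}_{i}^{2}{h}_{i}/\left(p\left(1-{h}_{i}\right)\right)$ with ${t}_{i}$ internally Studentized, and Atkinson's ${T}_{i}=\left|{t}_{i}^{*}\right|\sqrt{\left(\left(n-p\right)/p\right)\left({h}_{i}/\left(1-{h}_{i}\right)\right)}$ with ${t}_{i}^{*}$ externally Studentized. The formulas are assumed; they are not taken from this document.

```python
import numpy as np

def influence_measures(internal, external, h, n, p):
    """Cook's D (from internally Studentized residuals) and
    Atkinson's T (from externally Studentized residuals)."""
    ratio = h / (1.0 - h)                 # leverage ratio h_i / (1 - h_i)
    cooks_d = internal**2 * ratio / p
    atkinsons_t = np.abs(external) * np.sqrt((n - p) / p * ratio)
    return cooks_d, atkinsons_t
```

Both statistics grow with the leverage ratio, so a moderate residual at a high-leverage point can still flag an influential observation.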

On entry,  ${\mathbf{IP}}<1$, 
or  ${\mathbf{N}}\le {\mathbf{IP}}+1$, 
or  ${\mathbf{NRES}}<1$, 
or  ${\mathbf{NRES}}>{\mathbf{N}}$, 
or  ${\mathbf{LDSRES}}<{\mathbf{NRES}}$, 
or  ${\mathbf{RMS}}\le 0.0$. 
On entry,  ${\mathbf{H}}\left(i\right)\le 0.0$ or $\text{}\ge 1.0$, for some $i=1,2,\dots ,{\mathbf{NRES}}$. 
On entry,  the value of a residual is too large for the given value of RMS. 
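The on-entry checks listed above can be mirrored in a small validation sketch. The argument names follow the listing (IP, N, NRES, LDSRES, RMS, H); the function itself is hypothetical, shown only to make the conditions concrete.

```python
def check_args(n, ip, nres, ldsres, rms, h):
    """Mirror the on-entry error conditions; raise ValueError
    on the first condition that fails."""
    if ip < 1:
        raise ValueError("IP < 1")
    if n <= ip + 1:
        raise ValueError("N <= IP + 1")
    if nres < 1 or nres > n:
        raise ValueError("NRES must satisfy 1 <= NRES <= N")
    if ldsres < nres:
        raise ValueError("LDSRES < NRES")
    if rms <= 0.0:
        raise ValueError("RMS <= 0.0")
    # Each leverage must lie strictly between 0 and 1.
    for i, hi in enumerate(h[:nres], start=1):
        if hi <= 0.0 or hi >= 1.0:
            raise ValueError(f"H({i}) <= 0.0 or >= 1.0")
```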