g02qg performs a multiple linear quantile regression. Parameter estimates and, if required, confidence limits, covariance matrices and residuals are calculated. g02qg may be used to perform a weighted quantile regression. A simplified interface for g02qg is provided by g02qf.

Syntax

C#
```public static void g02qg(
int sorder,
int ic1,
int n,
int m,
double[,] dat,
int[] isx,
int ip,
double[] y,
double[] wt,
int ntau,
double[] tau,
out double df,
double[,] b,
double[,] bl,
double[,] bu,
double[,,] ch,
double[,] res,
G02.g02qgOptions options,
G05.G05State g05state,
int[] info,
out int ifail
)```
Visual Basic
```Public Shared Sub g02qg ( _
sorder As Integer, _
ic1 As Integer, _
n As Integer, _
m As Integer, _
dat As Double(,), _
isx As Integer(), _
ip As Integer, _
y As Double(), _
wt As Double(), _
ntau As Integer, _
tau As Double(), _
<OutAttribute> ByRef df As Double, _
b As Double(,), _
bl As Double(,), _
bu As Double(,), _
ch As Double(,,), _
res As Double(,), _
options As G02.g02qgOptions, _
g05state As G05.G05State, _
info As Integer(), _
<OutAttribute> ByRef ifail As Integer _
)```
Visual C++
```public:
static void g02qg(
int sorder,
int ic1,
int n,
int m,
array<double,2>^ dat,
array<int>^ isx,
int ip,
array<double>^ y,
array<double>^ wt,
int ntau,
array<double>^ tau,
[OutAttribute] double% df,
array<double,2>^ b,
array<double,2>^ bl,
array<double,2>^ bu,
array<double,3>^ ch,
array<double,2>^ res,
G02::g02qgOptions^ options,
G05::G05State^ g05state,
array<int>^ info,
[OutAttribute] int% ifail
)```
F#
```static member g02qg :
sorder : int *
ic1 : int *
n : int *
m : int *
dat : float[,] *
isx : int[] *
ip : int *
y : float[] *
wt : float[] *
ntau : int *
tau : float[] *
df : float byref *
b : float[,] *
bl : float[,] *
bu : float[,] *
ch : float[,,] *
res : float[,] *
options : G02.g02qgOptions *
g05state : G05.G05State *
info : int[] *
ifail : int byref -> unit
```

Parameters

sorder
Type: System.Int32
On entry: determines the storage order of variates supplied in dat.
Constraint: ${\mathbf{sorder}}=1$ or $2$.
ic1
Type: System.Int32
On entry: indicates whether an intercept will be included in the model. The intercept is included by adding a column of ones as the first column in the design matrix, $X$.
${\mathbf{ic1}}=1$
An intercept will be included in the model.
${\mathbf{ic1}}=0$
An intercept will not be included in the model.
Constraint: ${\mathbf{ic1}}=0$ or $1$.
n
Type: System.Int32
On entry: the total number of observations in the dataset. If no weights are supplied, no zero weights are supplied, or observations with zero weights are included in the model, then ${\mathbf{n}}=n$. Otherwise ${\mathbf{n}}=n+$ the number of observations with zero weights.
Constraint: ${\mathbf{n}}\ge 2$.
m
Type: System.Int32
On entry: $m$, the total number of variates in the dataset.
Constraint: ${\mathbf{m}}\ge 0$.
dat
Type: array<System.Double,2>
An array of size [dim1, _sddat]
Note: dim1 must satisfy the constraint:
• if ${\mathbf{sorder}}=1$, $\mathrm{dim1}\ge {\mathbf{n}}$;
• otherwise $\mathrm{dim1}\ge {\mathbf{m}}$.
On entry: the $\mathit{i}$th value for the $\mathit{j}$th variate, for $\mathit{i}=1,2,\dots ,{\mathbf{n}}$ and $\mathit{j}=1,2,\dots ,{\mathbf{m}}$, must be supplied in
• ${\mathbf{dat}}\left[i-1,j-1\right]$ if ${\mathbf{sorder}}=1$, and
• ${\mathbf{dat}}\left[j-1,i-1\right]$ if ${\mathbf{sorder}}=2$.
The design matrix $X$ is constructed from dat, isx and ic1.
isx
Type: array<System.Int32>
An array of size [m]
On entry: indicates which independent variables are to be included in the model.
${\mathbf{isx}}\left[j-1\right]=0$
The $j$th variate, supplied in dat, is not included in the regression model.
${\mathbf{isx}}\left[j-1\right]=1$
The $j$th variate, supplied in dat, is included in the regression model.
Constraints:
• ${\mathbf{isx}}\left[\mathit{j}-1\right]=0$ or $1$, for $\mathit{j}=1,2,\dots ,{\mathbf{m}}$;
• if ${\mathbf{ic1}}=1$, exactly ${\mathbf{ip}}-1$ values of isx must be set to $1$;
• if ${\mathbf{ic1}}=0$, exactly ip values of isx must be set to $1$.
ip
Type: System.Int32
On entry: $p$, the number of independent variables in the model, including the intercept (see ic1) if present.
Constraints:
• $1\le {\mathbf{ip}}<{\mathbf{n}}$;
• if ${\mathbf{ic1}}=1$, $1\le {\mathbf{ip}}\le {\mathbf{m}}+1$;
• if ${\mathbf{ic1}}=0$, $1\le {\mathbf{ip}}\le {\mathbf{m}}$.
y
Type: array<System.Double>
An array of size [n]
On entry: $y$, observations on the dependent variable.
wt
Type: array<System.Double>
An array of size [_lwt]
On entry: if $\mathbf{_weight}=\text{"W"}$, wt must contain the diagonal elements of the weight matrix $W$. Otherwise wt is not referenced.
When
${\mathbf{Drop Zero Weights}}='\mathrm{YES}'$
If ${\mathbf{wt}}\left[i-1\right]=0.0$, the $i$th observation is not included in the model, in which case the effective number of observations, $n$, is the number of observations with nonzero weights. If ${\mathbf{Return Residuals}}='\mathrm{YES}'$, the values of res will be set to zero for observations with zero weights.
${\mathbf{Drop Zero Weights}}='\mathrm{NO}'$
All observations are included in the model and the effective number of observations is n, i.e., $n={\mathbf{n}}$.
Constraints:
• If $\mathbf{_weight}=\text{"W"}$, ${\mathbf{wt}}\left[\mathit{i}-1\right]\ge 0.0$, for $\mathit{i}=1,2,\dots ,{\mathbf{n}}$;
• The effective number of observations $\text{}\ge 2$.
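The interplay between wt, the optional parameter Drop Zero Weights and the effective number of observations can be sketched as follows. This is illustrative Python only; `effective_n` is a hypothetical helper, not part of the library interface:

```python
# Hypothetical helper illustrating the "effective number of observations"
# rule described above; not part of the NAG interface.

def effective_n(wt, n, drop_zero_weights=True):
    """Count the observations that contribute to the fit."""
    if wt is None:                    # unweighted: all n observations count
        return n
    if drop_zero_weights:             # zero-weight rows are excluded
        return sum(1 for w in wt if w != 0.0)
    return n                          # zero-weight rows kept in the model
```

The constraint above then reads `effective_n(wt, n) >= 2`.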
ntau
Type: System.Int32
On entry: the number of quantiles of interest.
Constraint: ${\mathbf{ntau}}\ge 1$.
tau
Type: array<System.Double>
An array of size [ntau]
On entry: the vector of quantiles of interest. A separate model is fitted to each quantile.
Constraint: $\sqrt{\epsilon }<{\mathbf{tau}}\left[\mathit{j}-1\right]<1-\sqrt{\epsilon }$ where $\epsilon$ is the machine precision returned by x02aj, for $\mathit{j}=1,2,\dots ,{\mathbf{ntau}}$.
df
Type: System.Double%
On exit: the degrees of freedom given by $n-k$, where $n$ is the effective number of observations and $k$ is the rank of the cross-product matrix ${X}^{\mathrm{T}}X$.
b
Type: array<System.Double,2>
An array of size [ip, ntau]
On entry: if ${\mathbf{Calculate Initial Values}}='\mathrm{NO}'$, ${\mathbf{b}}\left[\mathit{i}-1,\mathit{l}-1\right]$ must hold an initial estimate for ${\stackrel{^}{\beta }}_{\mathit{i}}$, for $\mathit{i}=1,2,\dots ,{\mathbf{ip}}$ and $\mathit{l}=1,2,\dots ,{\mathbf{ntau}}$. If ${\mathbf{Calculate Initial Values}}='\mathrm{YES}'$, b need not be set.
On exit: ${\mathbf{b}}\left[\mathit{i}-1,\mathit{l}-1\right]$, for $\mathit{i}=1,2,\dots ,{\mathbf{ip}}$, contains the estimates of the parameters of the regression model, $\stackrel{^}{\beta }$, estimated for $\tau ={\mathbf{tau}}\left[\mathit{l}-1\right]$.
If ${\mathbf{ic1}}=1$, ${\mathbf{b}}\left[0,l-1\right]$ will contain the estimate corresponding to the intercept and ${\mathbf{b}}\left[i,l-1\right]$ will contain the coefficient of the $j$th variate contained in dat, where ${\mathbf{isx}}\left[j-1\right]$ is the $i$th nonzero value in the array isx.
If ${\mathbf{ic1}}=0$, ${\mathbf{b}}\left[i-1,l-1\right]$ will contain the coefficient of the $j$th variate contained in dat, where ${\mathbf{isx}}\left[j-1\right]$ is the $i$th nonzero value in the array isx.
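The mapping from rows of b to columns of dat described above can be sketched in Python (illustrative only; the helper name is invented):

```python
# Hypothetical helper mapping rows of b to variates in dat, following the
# description above: with an intercept, row 0 of b is the intercept and the
# remaining rows follow the nonzero entries of isx in order.

def coef_to_variate(isx, ic1):
    """Return, for each row of b, the 1-based variate index in dat
    (None marks the intercept row)."""
    mapping = [None] if ic1 == 1 else []
    mapping.extend(j + 1 for j, flag in enumerate(isx) if flag == 1)
    return mapping
```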
bl
Type: array<System.Double,2>
An array of size [dim1, ntau]
Note: dim1 must satisfy the constraint: $\mathrm{dim1}\ge {\mathbf{ip}}$ if ${\mathbf{Interval Method}}\ne '\mathrm{NONE}'$.
Note: the second dimension of the array bl must be at least ${\mathbf{ntau}}$ if ${\mathbf{Interval Method}}\ne '\mathrm{NONE}'$.
On exit: if ${\mathbf{Interval Method}}\ne '\mathrm{NONE}'$, ${\mathbf{bl}}\left[i-1,l-1\right]$ contains the lower limit of an $\left(100×\alpha \right)%$ confidence interval for ${\mathbf{b}}\left[\mathit{i}-1,\mathit{l}-1\right]$, for $\mathit{i}=1,2,\dots ,{\mathbf{ip}}$ and $\mathit{l}=1,2,\dots ,{\mathbf{ntau}}$.
If ${\mathbf{Interval Method}}='\mathrm{NONE}'$, bl is not referenced.
The method used for calculating the interval is controlled by the optional parameters Interval Method and Bootstrap Interval Method. The size of the interval, $\alpha$, is controlled by the optional parameter Significance Level.
bu
Type: array<System.Double,2>
An array of size [dim1, ntau]
Note: dim1 must satisfy the constraint: $\mathrm{dim1}\ge {\mathbf{ip}}$ if ${\mathbf{Interval Method}}\ne '\mathrm{NONE}'$.
Note: the second dimension of the array bu must be at least ${\mathbf{ntau}}$ if ${\mathbf{Interval Method}}\ne '\mathrm{NONE}'$.
On exit: if ${\mathbf{Interval Method}}\ne '\mathrm{NONE}'$, ${\mathbf{bu}}\left[i-1,l-1\right]$ contains the upper limit of an $\left(100×\alpha \right)%$ confidence interval for ${\mathbf{b}}\left[\mathit{i}-1,\mathit{l}-1\right]$, for $\mathit{i}=1,2,\dots ,{\mathbf{ip}}$ and $\mathit{l}=1,2,\dots ,{\mathbf{ntau}}$.
If ${\mathbf{Interval Method}}='\mathrm{NONE}'$, bu is not referenced.
The method used for calculating the interval is controlled by the optional parameters Interval Method and Bootstrap Interval Method. The size of the interval, $\alpha$ is controlled by the optional parameter Significance Level.
ch
Type: array<System.Double,3>
An array of size [dim1, dim2, dim3]
Note: dim1 must satisfy the constraint: $\mathrm{dim1}\ge {\mathbf{ip}}$ if ${\mathbf{Interval Method}}\ne '\mathrm{NONE}'$ and ${\mathbf{Matrix Returned}}\ne '\mathrm{NONE}'$.
Note: dim2 must satisfy the constraint: $\mathrm{dim2}\ge {\mathbf{ip}}$ if ${\mathbf{Interval Method}}\ne '\mathrm{NONE}'$ and ${\mathbf{Matrix Returned}}\ne '\mathrm{NONE}'$.
Note: the last dimension of the array ch must be at least ${\mathbf{ntau}}$ if ${\mathbf{Interval Method}}\ne '\mathrm{NONE}'$ and ${\mathbf{Matrix Returned}}='\mathrm{COVARIANCE}'$ and at least ${\mathbf{ntau}}+1$ if ${\mathbf{Interval Method}}\ne '\mathrm{NONE}'$, $'\mathrm{IID}'$ or $'\mathrm{BOOTSTRAP XY}'$ and ${\mathbf{Matrix Returned}}='\mathrm{H INVERSE}'$.
On exit: depending on the supplied optional parameters, ch will either not be referenced, hold an estimate of the upper triangular part of the covariance matrix, $\Sigma$, or an estimate of the upper triangular parts of $n{J}_{n}$ and ${n}^{-1}{H}_{n}^{-1}$.
If ${\mathbf{Interval Method}}='\mathrm{NONE}'$ or ${\mathbf{Matrix Returned}}='\mathrm{NONE}'$, ch is not referenced.
If ${\mathbf{Interval Method}}='\mathrm{BOOTSTRAP XY}'$ or $'\mathrm{IID}'$ and ${\mathbf{Matrix Returned}}='\mathrm{H INVERSE}'$, ch is not referenced.
Otherwise, for $i,j=1,2,\dots ,{\mathbf{ip}},j\ge i$ and $l=1,2,\dots ,{\mathbf{ntau}}$:
• If ${\mathbf{Matrix Returned}}='\mathrm{COVARIANCE}'$, ${\mathbf{ch}}\left[i-1,j-1,l-1\right]$ holds an estimate of the covariance between ${\mathbf{b}}\left[i-1,l-1\right]$ and ${\mathbf{b}}\left[j-1,l-1\right]$.
• If ${\mathbf{Matrix Returned}}='\mathrm{H INVERSE}'$, ${\mathbf{ch}}\left[i-1,j-1,0\right]$ holds an estimate of the $\left(i,j\right)$th element of $n{J}_{n}$ and ${\mathbf{ch}}\left[i-1,j-1,l\right]$ holds an estimate of the $\left(i,j\right)$th element of ${n}^{-1}{H}_{n}^{-1}$, for $\tau ={\mathbf{tau}}\left[l-1\right]$.
The method used for calculating $\Sigma$ and ${H}_{n}^{-1}$ is controlled by the optional parameter Interval Method.
res
Type: array<System.Double,2>
An array of size [n, dim2]
Note: dim2 must satisfy the constraint: $\mathrm{dim2}\ge {\mathbf{ntau}}$ if ${\mathbf{Return Residuals}}='\mathrm{YES}'$.
On exit: if ${\mathbf{Return Residuals}}='\mathrm{YES}'$, ${\mathbf{res}}\left[\mathit{i}-1,\mathit{l}-1\right]$ holds the (weighted) residuals, ${r}_{\mathit{i}}$, for $\tau ={\mathbf{tau}}\left[\mathit{l}-1\right]$, for $\mathit{i}=1,2,\dots ,{\mathbf{n}}$ and $\mathit{l}=1,2,\dots ,{\mathbf{ntau}}$.
If ${\mathbf{wt}}\phantom{\rule{0.25em}{0ex}}\text{is not}\phantom{\rule{0.25em}{0ex}}\mathbf{NULL}$ and ${\mathbf{Drop Zero Weights}}='\mathrm{YES}'$, the value of res will be set to zero for observations with zero weights.
If ${\mathbf{Return Residuals}}='\mathrm{NO}'$, res is not referenced.
options
Type: NagLibrary.G02.g02qgOptions
An Object of type G02.g02qgOptions. Used to configure optional parameters to this method.
g05state
Type: NagLibrary.G05.G05State
An Object of type G05.G05State.
info
Type: array<System.Int32>
An array of size [${\mathbf{ntau}}$]
On exit: ${\mathbf{info}}\left[i\right]$ holds additional information concerning the model fitting and confidence limit calculations when $\tau ={\mathbf{tau}}\left[i\right]$.
The possible warning codes are:
$0$: Model fitted and confidence limits (if requested) calculated successfully.
$1$: The method did not converge. The returned values are based on the estimate at the last iteration. Try increasing Iteration Limit whilst calculating the parameter estimates or relaxing the definition of convergence by increasing Tolerance.
$2$: A singular matrix was encountered during the optimization. The model was not fitted for this value of $\tau$.
$4$: Some truncation occurred whilst calculating the confidence limits for this value of $\tau$. See [Algorithmic Details] for details. The returned upper and lower limits may be narrower than specified.
$8$: The method did not converge whilst calculating the confidence limits. The returned limits are based on the estimate at the last iteration. Try increasing Iteration Limit.
$16$: Confidence limits for this value of $\tau$ could not be calculated. The returned upper and lower limits are set to a large positive and large negative value respectively as defined by the optional parameter Big.
It is possible for multiple warnings to be applicable to a single model. In these cases the value returned in info is the sum of the corresponding individual nonzero warning codes.
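Since the warning codes are distinct powers of two, a summed value in info can be decoded by testing individual bits, for example (illustrative Python, not part of the library interface; the message wording is abbreviated):

```python
# Decode a summed warning code from info into its individual flags.
# The bit values follow the table above.

WARNINGS = {
    1: "no convergence (parameter estimates)",
    2: "singular matrix; model not fitted",
    4: "truncation while calculating confidence limits",
    8: "no convergence (confidence limits)",
    16: "confidence limits could not be calculated",
}

def decode_info(code):
    """Return the list of warning messages encoded in `code`."""
    return [msg for bit, msg in WARNINGS.items() if code & bit]
```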
ifail
Type: System.Int32%
On exit: ${\mathbf{ifail}}={0}$ unless the method detects an error or a warning has been flagged (see [Error Indicators and Warnings]).

Description

Given a vector of $n$ observed values, $y=\left\{{y}_{i}:i=1,2,\dots ,n\right\}$, an $n×p$ design matrix $X$ whose $i$th row is given by the column vector ${x}_{i}$ of length $p$, and a quantile $\tau \in \left(0,1\right)$, g02qg estimates the $p$-element vector $\beta$ as the solution to
 $\underset{\beta \in \mathbb{R}^{p}}{\mathrm{minimize}} \sum_{i=1}^{n} \rho_{\tau}\left(y_{i} - x_{i}^{\mathrm{T}}\beta\right)$ (1)
where ${\rho }_{\tau }$ is the piecewise linear loss function ${\rho }_{\tau }\left(z\right)=z\left(\tau -I\left(z<0\right)\right)$, and $I\left(z<0\right)$ is an indicator function taking the value $1$ if $z<0$ and $0$ otherwise. Weights can be incorporated by replacing $X$ and $y$ with $WX$ and $Wy$ respectively, where $W$ is an $n×n$ diagonal matrix. Observations with zero weights can either be included or excluded from the analysis; this is in contrast to least squares regression where such observations do not contribute to the objective function and are therefore always dropped.
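The loss function and objective above can be written down concretely. The following is an illustrative Python sketch (not the library's implementation; the function names are invented for this example):

```python
# Illustrative sketch of the quantile-regression ("check") loss
# rho_tau(z) = z * (tau - I(z < 0)) and the objective sum it defines.
# Not the NAG implementation; names are invented for the example.

def rho(tau, z):
    """Piecewise linear loss: z * (tau - I(z < 0))."""
    return z * (tau - (1.0 if z < 0 else 0.0))

def objective(tau, y, X, beta):
    """Sum of check losses over all observations; X is a list of rows."""
    return sum(
        rho(tau, yi - sum(xij * bj for xij, bj in zip(xi, beta)))
        for yi, xi in zip(y, X)
    )
```

For $\tau = 0.5$ the objective is proportional to the sum of absolute residuals, so the fit reduces to median regression.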
g02qg uses the interior point algorithm of Portnoy and Koenker (1997), described briefly in [Algorithmic Details], to obtain the parameter estimates $\stackrel{^}{\beta }$, for a given value of $\tau$.
Under the assumption of Normally distributed errors, Koenker (2005) shows that the limiting covariance matrix of $\stackrel{^}{\beta }-\beta$ has the form
 $\Sigma = \frac{\tau\left(1-\tau\right)}{n} H_{n}^{-1} J_{n} H_{n}^{-1}$
where ${J}_{n}={n}^{-1}\sum _{\mathit{i}=1}^{n}{x}_{i}{x}_{i}^{\mathrm{T}}$ and ${H}_{n}$ is a function of $\tau$, as described below. Given an estimate of the covariance matrix, $\stackrel{^}{\Sigma }$, lower (${\stackrel{^}{\beta }}_{L}$) and upper (${\stackrel{^}{\beta }}_{U}$) limits for an $\left(100×\alpha \right)%$ confidence interval can be calculated for each of the $p$ parameters, via
 $\hat{\beta}_{L_{i}} = \hat{\beta}_{i} - t_{n-p,\left(1+\alpha\right)/2} \sqrt{\hat{\Sigma}_{ii}}, \quad \hat{\beta}_{U_{i}} = \hat{\beta}_{i} + t_{n-p,\left(1+\alpha\right)/2} \sqrt{\hat{\Sigma}_{ii}}$
where ${t}_{n-p,\left(1+\alpha \right)/2}$ is the $\left(1+\alpha \right)/2$ percentage point of the Student's $t$ distribution with $n-k$ degrees of freedom, where $k$ is the rank of the cross-product matrix ${X}^{\mathrm{T}}X$.
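The interval construction can be sketched with the standard library alone. This illustration substitutes the normal quantile for the Student $t$ percentage point (a reasonable stand-in only when the degrees of freedom are large), and all names are invented:

```python
from statistics import NormalDist

def confidence_limits(beta_hat, var_ii, alpha=0.95):
    """Limits beta_hat -/+ q * sqrt(var_ii), where q is the (1 + alpha)/2
    normal quantile, used here in place of the Student t percentage point
    (large-df approximation)."""
    q = NormalDist().inv_cdf((1 + alpha) / 2)
    half_width = q * var_ii ** 0.5
    return beta_hat - half_width, beta_hat + half_width
```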
Four methods for estimating the covariance matrix, $\Sigma$, are available:
(i) Independent, identically distributed (IID) errors
Under an assumption of IID errors the asymptotic relationship for $\Sigma$ simplifies to
 $\Sigma = \frac{\tau\left(1-\tau\right)}{n} s\left(\tau\right)^{2} \left(n^{-1} X^{\mathrm{T}} X\right)^{-1}$
where $s\left(\tau \right)$ is the sparsity function. g02qg estimates $s\left(\tau \right)$ from the residuals, ${r}_{i}={y}_{i}-{x}_{i}^{\mathrm{T}}\stackrel{^}{\beta }$, and a bandwidth ${h}_{n}$.
(ii) Powell Sandwich
Powell (1991) suggested estimating the matrix ${H}_{n}$ by a kernel estimator of the form
 $\hat{H}_{n} = \left(n c_{n}\right)^{-1} \sum_{i=1}^{n} K\left(\frac{r_{i}}{c_{n}}\right) x_{i} x_{i}^{\mathrm{T}}$
where $K$ is a kernel function and ${c}_{n}$ satisfies ${c}_{n}\to 0$ and $\sqrt{n}{c}_{n}\to \infty$ as $n\to \infty$. When the Powell method is chosen, g02qg uses a Gaussian kernel (i.e., $K=\varphi$) and sets
 $c_{n} = \min\left(\sigma_{r}, \left(q_{r3} - q_{r1}\right)/1.34\right) \times \left(\Phi^{-1}\left(\tau + h_{n}\right) - \Phi^{-1}\left(\tau - h_{n}\right)\right)$
where ${h}_{n}$ is a bandwidth, ${\sigma }_{r},{q}_{r1}$ and ${q}_{r3}$ are, respectively, the standard deviation and the $25%$ and $75%$ quantiles for the residuals, ${r}_{i}$.
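As a concrete illustration of the $c_n$ formula (Python, standard library only; the quartiles below are crude order statistics, not the library's exact definition):

```python
from statistics import NormalDist

def powell_cn(residuals, tau, h_n):
    """Sketch of c_n: a robust spread estimate times a normal-quantile
    difference. Crude sample quartiles are used purely for illustration."""
    r = sorted(residuals)
    n = len(r)
    mean = sum(r) / n
    sigma_r = (sum((x - mean) ** 2 for x in r) / (n - 1)) ** 0.5
    q_r1, q_r3 = r[n // 4], r[(3 * n) // 4]      # rough 25% / 75% quantiles
    spread = min(sigma_r, (q_r3 - q_r1) / 1.34)
    nd = NormalDist()
    return spread * (nd.inv_cdf(tau + h_n) - nd.inv_cdf(tau - h_n))
```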
(iii) Hendricks–Koenker Sandwich
Koenker (2005) suggested estimating the matrix ${H}_{n}$ using
 $\hat{H}_{n} = n^{-1} \sum_{i=1}^{n} \left(\frac{2 h_{n}}{x_{i}^{\mathrm{T}}\left(\hat{\beta}\left(\tau + h_{n}\right) - \hat{\beta}\left(\tau - h_{n}\right)\right)}\right) x_{i} x_{i}^{\mathrm{T}}$
where ${h}_{n}$ is a bandwidth and $\stackrel{^}{\beta }\left(\tau +{h}_{n}\right)$ denotes the parameter estimates obtained from a quantile regression using the $\left(\tau +{h}_{n}\right)$th quantile. Similarly with $\stackrel{^}{\beta }\left(\tau -{h}_{n}\right)$.
(iv) Bootstrap
The last method uses bootstrapping to either estimate a covariance matrix or obtain confidence intervals for the parameter estimates directly. This method therefore does not assume Normally distributed errors. Samples of size $n$ are taken from the paired data $\left\{{y}_{i},{x}_{i}\right\}$ (i.e., the independent and dependent variables are sampled together). A quantile regression is then fitted to each sample resulting in a series of bootstrap estimates for the model parameters, $\beta$. A covariance matrix can then be calculated directly from this series of values. Alternatively, confidence limits, ${\stackrel{^}{\beta }}_{L}$ and ${\stackrel{^}{\beta }}_{U}$, can be obtained directly from the $\left(1-\alpha \right)/2$ and $\left(1+\alpha \right)/2$ sample quantiles of the bootstrap estimates.
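The bootstrap scheme can be sketched as follows. This is illustrative Python only: `fit` stands in for a quantile-regression solver returning a single coefficient, and the percentile indexing is simplified compared with any production implementation:

```python
import random

def bootstrap_limits(y, X, fit, tau, alpha=0.95, n_boot=200, seed=0):
    """Percentile bootstrap for one coefficient: resample (y_i, x_i) pairs,
    refit, and read limits off the sample quantiles of the estimates."""
    rng = random.Random(seed)
    n = len(y)
    estimates = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]   # paired resampling
        estimates.append(fit([y[i] for i in idx], [X[i] for i in idx], tau))
    estimates.sort()
    lo = estimates[int((1 - alpha) / 2 * (n_boot - 1))]
    hi = estimates[int((1 + alpha) / 2 * (n_boot - 1))]
    return lo, hi
```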
Further details of the algorithms used to calculate the covariance matrices can be found in [Algorithmic Details].
All three asymptotic estimates of the covariance matrix require a bandwidth, ${h}_{n}$. Two alternative methods for determining this are provided:
(i) Sheather–Hall
 $h_{n} = \left(\frac{1.5 \Phi^{-1}\left(\alpha_{b}\right) \phi\left(\Phi^{-1}\left(\tau\right)\right)^{2}}{n\left(2 \Phi^{-1}\left(\tau\right)^{2} + 1\right)}\right)^{1/3}$
for a user-supplied value ${\alpha }_{b}$,
(ii) Bofinger
 $h_{n} = \left(\frac{4.5 \phi\left(\Phi^{-1}\left(\tau\right)\right)^{4}}{n\left(2 \Phi^{-1}\left(\tau\right)^{2} + 1\right)^{2}}\right)^{1/5}$
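Both bandwidths can be evaluated with the standard library. The sketch below assumes ${\alpha}_{b} > 0.5$ so that $\Phi^{-1}\left(\alpha_b\right) > 0$ and the cube root stays real; the default value is invented for the example:

```python
from statistics import NormalDist

def sheather_hall(tau, n, alpha_b=0.95):
    """Sheather-Hall bandwidth, following the formula above."""
    nd = NormalDist()
    q = nd.inv_cdf(tau)
    num = 1.5 * nd.inv_cdf(alpha_b) * nd.pdf(q) ** 2
    den = n * (2.0 * q ** 2 + 1.0)
    return (num / den) ** (1.0 / 3.0)

def bofinger(tau, n):
    """Bofinger bandwidth, following the formula above."""
    nd = NormalDist()
    q = nd.inv_cdf(tau)
    return (4.5 * nd.pdf(q) ** 4 / (n * (2.0 * q ** 2 + 1.0) ** 2)) ** 0.2
```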
g02qg allows optional parameters to be supplied via the options object (see [Optional Parameters] for details of the available options). If bootstrap confidence limits are required (${\mathbf{Interval Method}}='\mathrm{BOOTSTRAP XY}'$) then one of the random number initialization methods (G05KFF not in this release) (for a repeatable analysis) or (G05KGF not in this release) (for an unrepeatable analysis) must also have been previously called.

References

Koenker R (2005) Quantile Regression Econometric Society Monographs, Cambridge University Press, New York
Mehrotra S (1992) On the implementation of a primal-dual interior point method SIAM J. Optim. 2 575–601
Nocedal J and Wright S J (1999) Numerical Optimization Springer Series in Operations Research, Springer, New York
Portnoy S and Koenker R (1997) The Gaussian hare and the Laplacian tortoise: computability of squared-error versus absolute error estimators Statistical Science 12 (4) 279–300
Powell J L (1991) Estimation of monotonic regression models under quantile restrictions Nonparametric and Semiparametric Methods in Econometrics Cambridge University Press, Cambridge

Error Indicators and Warnings

Errors or warnings detected by the method:
Some error messages may refer to parameters that are dropped from this interface (LDDAT, RIP, TDCH, SDRES, LIOPTS, LOPTS, LSTATE). In these cases, an error in another parameter has usually caused an incorrect value to be inferred.
${\mathbf{ifail}}=11$
On entry, ${\mathbf{sorder}}\ne 1$ or $2$.
${\mathbf{ifail}}=21$
On entry, ${\mathbf{ic1}}\ne 1$ or $0$.
${\mathbf{ifail}}=31$
On entry, $\mathbf{_weight}\ne \text{"U"}$ or $\text{"W"}$.
${\mathbf{ifail}}=41$
On entry, ${\mathbf{n}}<2$.
${\mathbf{ifail}}=51$
On entry, ${\mathbf{m}}<0$.
${\mathbf{ifail}}=71$
On entry, ${\mathbf{sorder}}=1$, ${\mathbf{lddat}}<{\mathbf{n}}$.
${\mathbf{ifail}}=72$
On entry, ${\mathbf{sorder}}=2$, ${\mathbf{lddat}}<{\mathbf{m}}$.
${\mathbf{ifail}}=81$
On entry, ${\mathbf{isx}}\left[\mathit{j}-1\right]\ne 0$ or $1$.
${\mathbf{ifail}}=91$
On entry, ${\mathbf{ip}}<1$ or ${\mathbf{ip}}\ge {\mathbf{n}}$.
${\mathbf{ifail}}=92$
On entry, ip is not consistent with isx and ic1.
${\mathbf{ifail}}=111$
On entry, $\mathbf{_weight}=\text{"W"}$ and ${\mathbf{wt}}\left[i-1\right]<0.0$ for at least one $i$.
${\mathbf{ifail}}=112$
On entry, the effective number of observations is less than two.
${\mathbf{ifail}}=121$
On entry, ${\mathbf{ntau}}<1$.
${\mathbf{ifail}}=131$
On entry, tau is invalid.
${\mathbf{ifail}}=201$
On entry, one or more of the optional parameter arrays iopts and opts have not been initialized or have been corrupted.
${\mathbf{ifail}}=221$
On entry, ${\mathbf{Interval Method}}='\mathrm{BOOTSTRAP XY}'$ and state was not initialized or has been corrupted.
${\mathbf{ifail}}=231$
On exit, problems were encountered whilst fitting at least one model. Additional information has been returned in info.
${\mathbf{ifail}}=-4000$
Invalid dimension for array $〈\mathit{\text{value}}〉$
${\mathbf{ifail}}=-8000$
Negative dimension for array $〈\mathit{\text{value}}〉$
${\mathbf{ifail}}=-6000$
Invalid Parameters $〈\mathit{\text{value}}〉$

Accuracy

Not applicable.

Parallelism and Performance

None.

Further Comments

g02qg allocates internally approximately the following elements of real storage: $13n+np+3{p}^{2}+6p+3\left(p+1\right)×{\mathbf{ntau}}$. If ${\mathbf{Interval Method}}='\mathrm{BOOTSTRAP XY}'$ then a further $np$ elements are required, and this increases by $p×{\mathbf{ntau}}×{\mathbf{Bootstrap Iterations}}$ if ${\mathbf{Bootstrap Interval Method}}='\mathrm{QUANTILE}'$. Where possible, any user-supplied output arrays are used as workspace and so the amount actually allocated may be less. If ${\mathbf{sorder}}=2$, ${\mathbf{wt}}\phantom{\rule{0.25em}{0ex}}\text{is}\phantom{\rule{0.25em}{0ex}}\mathbf{NULL}$, ${\mathbf{ic1}}=0$ and ${\mathbf{ip}}={\mathbf{m}}$ an internal copy of the input data is avoided and the amount of locally allocated memory is reduced by $np$.
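The storage estimate above can be written down directly (illustrative Python; the function and flag names are invented):

```python
# Approximate count of internally allocated real-storage elements, following
# the formula in the text. Names are invented for this sketch.

def approx_real_storage(n, p, ntau, bootstrap_xy=False,
                        quantile_intervals=False, bootstrap_iterations=100):
    total = 13 * n + n * p + 3 * p ** 2 + 6 * p + 3 * (p + 1) * ntau
    if bootstrap_xy:                      # Interval Method = 'BOOTSTRAP XY'
        total += n * p
        if quantile_intervals:            # Bootstrap Interval Method = 'QUANTILE'
            total += p * ntau * bootstrap_iterations
    return total
```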

Example

A quantile regression model is fitted to Engel's 1857 study of household expenditure on food. The model regresses the dependent variable, household food expenditure, against two explanatory variables: a column of ones and household income. The model is fitted for five different values of $\tau$ and the covariance matrix is estimated assuming Normal IID errors. Both the covariance matrix and the residuals are returned.

Example program (C#): g02qge.cs

Example program data: g02qge.d

Example program results: g02qge.r

Algorithmic Details

By the addition of slack variables the minimization (1) can be reformulated into the linear programming problem
 $\underset{\left(u,v,\beta\right) \in \mathbb{R}_{+}^{n} \times \mathbb{R}_{+}^{n} \times \mathbb{R}^{p}}{\mathrm{minimize}} \left\{\tau e^{\mathrm{T}} u + \left(1-\tau\right) e^{\mathrm{T}} v\right\} \text{ subject to } y = X\beta + u - v$ (2)
and its associated dual
 $\underset{d}{\mathrm{maximize}}\; y^{\mathrm{T}} d \text{ subject to } X^{\mathrm{T}} d = 0, \; d \in \left[\tau - 1, \tau\right]^{n}$ (3)
where $e$ is a vector of $n$ $1$s. Setting $a=d+\left(1-\tau \right)e$ gives the equivalent formulation
 $\underset{a}{\mathrm{maximize}}\; y^{\mathrm{T}} a \text{ subject to } X^{\mathrm{T}} a = \left(1-\tau\right) X^{\mathrm{T}} e, \; a \in \left[0,1\right]^{n}.$ (4)
The algorithm introduced by Portnoy and Koenker (1997), and used by g02qg, uses the primal-dual formulation expressed in equations (2) and (4) along with a logarithmic barrier function to obtain estimates for $\beta$. The algorithm is based on the predictor-corrector algorithm of Mehrotra (1992); further details can be obtained from Portnoy and Koenker (1997) and Koenker (2005). A good description of linear programming, interior point algorithms, barrier functions and Mehrotra's predictor-corrector algorithm can be found in Nocedal and Wright (1999).
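The equivalence between (1) and the primal problem (2) rests on splitting each residual into its positive part ${u}_{i}$ and negative part ${v}_{i}$. A small Python check (illustrative only; names invented):

```python
# Split residuals r = y - X beta into u = max(r, 0) and v = max(-r, 0),
# so that y = X beta + u - v, and evaluate the LP objective
# tau * e'u + (1 - tau) * e'v, which equals the check-loss sum in (1).

def slack_decomposition(y, X, beta):
    fitted = [sum(a * b for a, b in zip(xi, beta)) for xi in X]
    r = [yi - fi for yi, fi in zip(y, fitted)]
    u = [max(ri, 0.0) for ri in r]
    v = [max(-ri, 0.0) for ri in r]
    return u, v

def lp_objective(tau, u, v):
    return tau * sum(u) + (1.0 - tau) * sum(v)
```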

Interior Point Algorithm

In this section a brief description of the interior point algorithm used to estimate the model parameters is presented. It should be noted that there are some differences in the equations given here – particularly (7) and (9) – compared to those given in Koenker (2005) and Portnoy and Koenker (1997).

Central path

Rather than optimize (4) directly, an additional slack variable $s$ is added and the constraint $a\in {\left[0,1\right]}^{n}$ is replaced with $a+s=e,{a}_{i}\ge 0,{s}_{i}\ge 0$, for $i=1,2,\dots ,n$.
The positivity constraint on $a$ and $s$ is handled using the logarithmic barrier function
 $B\left(a,s,\mu\right) = y^{\mathrm{T}} a + \mu \sum_{i=1}^{n} \left(\log a_{i} + \log s_{i}\right).$
The primal-dual form of the problem is used giving the Lagrangian
 $L\left(a,s,\beta,u,\mu\right) = B\left(a,s,\mu\right) - \beta^{\mathrm{T}}\left(X^{\mathrm{T}} a - \left(1-\tau\right) X^{\mathrm{T}} e\right) - u^{\mathrm{T}}\left(a + s - e\right)$
whose central path is described by the following first order conditions
 $\begin{array}{rl} X^{\mathrm{T}} a &= \left(1-\tau\right) X^{\mathrm{T}} e \\ a + s &= e \\ X\beta + u - v &= y \\ S U e &= \mu e \\ A V e &= \mu e \end{array}$ (5)
where $A$ denotes the diagonal matrix with diagonal elements given by $a$, and similarly for $S$, $U$ and $V$. By enforcing the inequalities on $s$ and $a$ strictly, i.e., ${a}_{i}>0$ and ${s}_{i}>0$ for all $i$, we ensure that $A$ and $S$ are positive definite diagonal matrices and hence ${A}^{-1}$ and ${S}^{-1}$ exist.
Rather than applying Newton's method to the system of equations given in (5) to obtain the step directions ${\delta }_{\beta },{\delta }_{a},{\delta }_{s},{\delta }_{u}$ and ${\delta }_{v}$, Mehrotra substituted the steps directly into (5) giving the augmented system of equations
 $\begin{array}{rl} X^{\mathrm{T}}\left(a + \delta_{a}\right) &= \left(1-\tau\right) X^{\mathrm{T}} e \\ \left(a + \delta_{a}\right) + \left(s + \delta_{s}\right) &= e \\ X\left(\beta + \delta_{\beta}\right) + \left(u + \delta_{u}\right) - \left(v + \delta_{v}\right) &= y \\ \left(S + \Delta_{s}\right)\left(U + \Delta_{u}\right) e &= \mu e \\ \left(A + \Delta_{a}\right)\left(V + \Delta_{v}\right) e &= \mu e \end{array}$ (6)
where ${\Delta }_{a},{\Delta }_{s},{\Delta }_{u}$ and ${\Delta }_{v}$ denote the diagonal matrices with diagonal elements given by ${\delta }_{a},{\delta }_{s},{\delta }_{u}$ and ${\delta }_{v}$ respectively.

Affine scaling step

The affine scaling step is constructed by setting $\mu =0$ in (5) and applying Newton's method to obtain an intermediate set of step directions
 $\begin{array}{rl} X^{\mathrm{T}} W X \delta_{\beta} &= X^{\mathrm{T}} W \left(y - X\beta\right) + \left(\tau - 1\right) X^{\mathrm{T}} e + X^{\mathrm{T}} a \\ \delta_{a} &= W\left(y - X\beta - X\delta_{\beta}\right) \\ \delta_{s} &= -\delta_{a} \\ \delta_{u} &= S^{-1} U \delta_{a} - U e \\ \delta_{v} &= A^{-1} V \delta_{s} - V e \end{array}$ (7)
where $W={\left({S}^{-1}U+{A}^{-1}V\right)}^{-1}$.
Initial step sizes for the primal (${\stackrel{^}{\gamma }}_{P}$) and dual (${\stackrel{^}{\gamma }}_{D}$) parameters are constructed as
 $\begin{array}{rl} \hat{\gamma}_{P} &= \sigma \times \min\left(\underset{i,\delta_{a_{i}}<0}{\min}\left(a_{i}/\delta_{a_{i}}\right), \underset{i,\delta_{s_{i}}<0}{\min}\left(s_{i}/\delta_{s_{i}}\right)\right) \\ \hat{\gamma}_{D} &= \sigma \times \min\left(\underset{i,\delta_{u_{i}}<0}{\min}\left(u_{i}/\delta_{u_{i}}\right), \underset{i,\delta_{v_{i}}<0}{\min}\left(v_{i}/\delta_{v_{i}}\right)\right) \end{array}$ (8)
where $\sigma$ is a user-supplied scaling factor. If ${\stackrel{^}{\gamma }}_{P}×{\stackrel{^}{\gamma }}_{D}\ge 1$ then the nonlinearity adjustment, described in [Nonlinearity Adjustment], is not made and the model parameters are updated using the current step size and directions.

Nonlinearity adjustment

In the nonlinearity adjustment step a new estimate of $\mu$ is obtained by letting
 $\hat{g}\left(\hat{\gamma}_{P},\hat{\gamma}_{D}\right) = \left(s + \hat{\gamma}_{P} \delta_{s}\right)^{\mathrm{T}}\left(u + \hat{\gamma}_{D} \delta_{u}\right) + \left(a + \hat{\gamma}_{P} \delta_{a}\right)^{\mathrm{T}}\left(v + \hat{\gamma}_{D} \delta_{v}\right)$
and estimating $\mu$ as
 $\mu = \left(\frac{\hat{g}\left(\hat{\gamma}_{P},\hat{\gamma}_{D}\right)}{\hat{g}\left(0,0\right)}\right)^{3} \frac{\hat{g}\left(0,0\right)}{2n}.$
This estimate, along with the nonlinear terms ($\Delta_{a}$, $\Delta_{s}$, $\Delta_{u}$ and $\Delta_{v}$) from (6), is calculated using the values of ${\delta }_{a},{\delta }_{s},{\delta }_{u}$ and ${\delta }_{v}$ obtained from the affine scaling step.
Given an updated estimate for $\mu$ and the nonlinear terms the system of equations
 $\begin{array}{rl} X^{\mathrm{T}} W X \delta_{\beta} &= X^{\mathrm{T}} W \left(y - X\beta + \mu\left(S^{-1} - A^{-1}\right) e + S^{-1} \Delta_{s} \Delta_{u} e - A^{-1} \Delta_{a} \Delta_{v} e\right) + \left(\tau - 1\right) X^{\mathrm{T}} e + X^{\mathrm{T}} a \\ \delta_{a} &= W\left(y - X\beta - X\delta_{\beta} + \mu\left(S^{-1} - A^{-1}\right) e\right) \\ \delta_{s} &= -\delta_{a} \\ \delta_{u} &= \mu S^{-1} e + S^{-1} U \delta_{a} - U e - S^{-1} \Delta_{s} \Delta_{u} e \\ \delta_{v} &= \mu A^{-1} e + A^{-1} V \delta_{s} - V e - A^{-1} \Delta_{a} \Delta_{v} e \end{array}$ (9)
are solved and updated values for ${\delta }_{\beta },{\delta }_{a},{\delta }_{s},{\delta }_{u},{\delta }_{v},{\stackrel{^}{\gamma }}_{P}$ and ${\stackrel{^}{\gamma }}_{D}$ calculated.

Update and convergence

At each iteration the model parameters $\left(\beta ,a,s,u,v\right)$ are updated using step directions, $\left({\delta }_{\beta },{\delta }_{a},{\delta }_{s},{\delta }_{u},{\delta }_{v}\right)$ and step lengths $\left({\stackrel{^}{\gamma }}_{P},{\stackrel{^}{\gamma }}_{D}\right)$.
Convergence is assessed using the duality gap, that is, the difference between the objective function values of the primal and dual formulations. For any feasible point $\left(u,v,s,a\right)$ the duality gap can be calculated from equations (2) and (3) as
 $\tau e^{\mathrm{T}} u + \left(1-\tau\right) e^{\mathrm{T}} v - d^{\mathrm{T}} y = \tau e^{\mathrm{T}} u + \left(1-\tau\right) e^{\mathrm{T}} v - \left(a - \left(1-\tau\right) e\right)^{\mathrm{T}} y = s^{\mathrm{T}} u + a^{\mathrm{T}} v = e^{\mathrm{T}} u - a^{\mathrm{T}} y + \left(1-\tau\right) e^{\mathrm{T}} X \beta$
and the optimization terminates if the duality gap is smaller than the tolerance supplied in the optional parameter Tolerance.
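In terms of the expressions above, the stopping test amounts to something like the following (illustrative Python; names invented):

```python
# Duality gap s'u + a'v (one of the equivalent forms above) and the
# resulting stopping test against a user-supplied tolerance.

def duality_gap(s, u, a, v):
    return (sum(si * ui for si, ui in zip(s, u))
            + sum(ai * vi for ai, vi in zip(a, v)))

def converged(s, u, a, v, tol):
    return duality_gap(s, u, a, v) < tol
```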

Initial values

Initial values are required for the parameters $a,s,u,v$ and $\beta$. If not supplied by the user, initial values for $\beta$ are calculated from a least squares regression of $y$ on $X$. This regression is carried out by first constructing the cross-product matrix ${X}^{\mathrm{T}}X$ and then using a pivoted $QR$ decomposition as performed by f08bf. In addition, if the cross-product matrix is not of full rank, a rank reduction is carried out and, rather than using the full design matrix, $X$, a matrix formed from the first $p$-rank columns of $XP$ is used instead, where $P$ is the pivot matrix used during the $QR$ decomposition. Parameter estimates, confidence intervals and the rows and columns of the matrices returned in the parameter ch (if any) are set to zero for variables dropped during the rank-reduction. The rank reduction step is performed irrespective of whether initial values are supplied by the user.
Once initial values have been obtained for $\beta$, the initial values for $u$ and $v$ are calculated from the residuals. If $\left|{r}_{i}\right|<{\epsilon }_{u}$ then a value of $±{\epsilon }_{u}$ is used instead, where ${\epsilon }_{u}$ is supplied in the optional parameter Epsilon. The initial values for the $a$ and $s$ are always set to $1-\tau$ and $\tau$ respectively.
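The initialization described above can be sketched as follows (illustrative Python; not the library's code):

```python
# Build the initial point (u, v, a, s) from the residuals, flooring any
# residual smaller in magnitude than eps_u at +/- eps_u, with
# a = 1 - tau and s = tau for every observation.

def initial_point(residuals, tau, eps_u):
    u, v = [], []
    for r in residuals:
        if abs(r) < eps_u:
            r = eps_u if r >= 0 else -eps_u
        u.append(max(r, 0.0))
        v.append(max(-r, 0.0))
    n = len(residuals)
    return u, v, [1.0 - tau] * n, [tau] * n
```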
The solution for ${\delta }_{\beta }$ in both (7) and (9) is obtained using a Bunch–Kaufman decomposition, as implemented in (F07MDF not in this release).

Calculation of Covariance Matrix

g02qg supplies four methods to calculate the covariance matrices associated with the parameter estimates for $\beta$. This section gives some additional detail on three of the algorithms; the fourth (which uses bootstrapping) is described in [Description].
(i) Independent, identically distributed (IID) errors
When assuming IID errors, the covariance matrices depend on the sparsity, $s\left(\tau \right)$, which g02qg estimates as follows:
(a) Let ${r}_{i}$ denote the residuals from the original quantile regression, that is ${r}_{i}={y}_{i}-{x}_{i}^{\mathrm{T}}\stackrel{^}{\beta }$.
(b) Drop any residual where $\left|{r}_{i}\right|$ is less than ${\epsilon }_{u}$, supplied in the optional parameter Epsilon.
(c) Sort and relabel the remaining residuals in ascending order, by absolute value, so that ${\epsilon }_{u}<\left|{r}_{1}\right|<\left|{r}_{2}\right|<\dots$.
(d) Select the first $l$ values where $l={h}_{n}n$, for some bandwidth ${h}_{n}$.
(e) Sort and relabel these $l$ residuals again, so that ${r}_{1}<{r}_{2}<\dots <{r}_{l}$ and regress them against a design matrix with two columns ($p=2$) and rows given by ${x}_{i}=\left\{1,i/\left(n-p\right)\right\}$ using quantile regression with $\tau =0.5$.
(f) Use the resulting estimate of the slope as an estimate of the sparsity.
(ii) Powell Sandwich
When using the Powell Sandwich to estimate the matrix ${H}_{n}$, the quantity
 $c_{n} = \min\left(\sigma_{r}, \left(q_{r3} - q_{r1}\right)/1.34\right) \times \left(\Phi^{-1}\left(\tau + h_{n}\right) - \Phi^{-1}\left(\tau - h_{n}\right)\right)$
is calculated. Depending on the value of $\tau$ and the method used to calculate the bandwidth (${h}_{n}$), it is possible for the quantities $\tau ±{h}_{n}$ to be too large or small, compared to machine precision ($\epsilon$). More specifically, when $\tau -{h}_{n}\le \sqrt{\epsilon }$ or $\tau +{h}_{n}\ge 1-\sqrt{\epsilon }$, a warning flag is raised in info, the value is truncated to $\sqrt{\epsilon }$ or $1-\sqrt{\epsilon }$ respectively and the covariance matrix is calculated as usual.
(iii) Hendricks–Koenker Sandwich
The Hendricks–Koenker Sandwich requires the calculation of the quantity ${d}_{i}={x}_{i}^{\mathrm{T}}\left(\stackrel{^}{\beta }\left(\tau +{h}_{n}\right)-\stackrel{^}{\beta }\left(\tau -{h}_{n}\right)\right)$. As with the Powell Sandwich, in cases where $\tau -{h}_{n}\le \sqrt{\epsilon }$, or $\tau +{h}_{n}\ge 1-\sqrt{\epsilon }$, a warning flag is raised in info, the value truncated to $\sqrt{\epsilon }$ or $1-\sqrt{\epsilon }$ respectively and the covariance matrix calculated as usual.
In addition, this method requires ${d}_{i}>0$. Hence, instead of using $2{h}_{n}/{d}_{i}$ in the calculation of ${H}_{n}$, $\mathrm{max}\phantom{\rule{0.125em}{0ex}}\left(2{h}_{n}/\left({d}_{i}+{\epsilon }_{u}\right),0\right)$ is used instead, where ${\epsilon }_{u}$ is supplied in the optional parameter Epsilon.

Description of Monitoring Information

See the description of the optional argument Monitoring.