NAG Toolbox: nag_correg_lars (g02ma)

 Contents

    1  Purpose
    2  Syntax
    3  Description
    4  References
    5  Parameters
    6  Error Indicators and Warnings
    7  Accuracy
    8  Further Comments
    9  Example

Purpose

nag_correg_lars (g02ma) performs Least Angle Regression (LARS), forward stagewise linear regression or Least Absolute Shrinkage and Selection Operator (LASSO).

Syntax

[b, fitsum, ifail] = g02ma(mtype, d, y, 'm', m, 'pred', pred, 'prey', prey, 'n', n, 'isx', isx, 'mnstep', mnstep, 'ropt', ropt)
[b, fitsum, ifail] = nag_correg_lars(mtype, d, y, 'm', m, 'pred', pred, 'prey', prey, 'n', n, 'isx', isx, 'mnstep', mnstep, 'ropt', ropt)

Description

nag_correg_lars (g02ma) implements the LARS algorithm of Efron et al. (2004) as well as the modifications needed to perform forward stagewise linear regression and fit LASSO and positive LASSO models.
Given a vector of $n$ observed values, $y = \{y_i : i = 1,2,\ldots,n\}$, and an $n \times p$ design matrix $X$, where the $j$th column of $X$, denoted $x_j$, is a vector of length $n$ representing the $j$th independent variable, standardized such that $\sum_{i=1}^{n} x_{ij} = 0$ and $\sum_{i=1}^{n} x_{ij}^2 = 1$, and a set of model parameters $\beta$ to be estimated from the observed values, the LARS algorithm can be summarised as follows (an illustrative MATLAB sketch is given after the steps):
1. Set $k = 1$ and all coefficients to zero, that is $\beta = 0$.
2. Find the variable most correlated with $y$, say $x_{j_1}$. Add $x_{j_1}$ to the ‘most correlated’ set $A$. If $p = 1$ go to 8.
3. Take the largest possible step in the direction of $x_{j_1}$ (i.e., increase the magnitude of $\beta_{j_1}$) until some other variable, say $x_{j_2}$, has the same correlation with the current residual, $y - x_{j_1} \beta_{j_1}$.
4. Increment $k$ and add $x_{j_k}$ to $A$.
5. If $|A| = p$ go to 8.
6. Proceed in the ‘least angle direction’, that is, the direction which is equiangular between all variables in $A$, altering the magnitude of the parameter estimates of those variables in $A$, until the $k$th variable, $x_{j_k}$, has the same correlation with the current residual.
7. Go to 4.
8. Let $K = k$.
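The following is a minimal, unoptimized MATLAB sketch of this iteration, under the assumptions stated in the comments; the function and variable names (lars_sketch, X, y) are illustrative only, the sketch omits the tie-breaking and variable-dropping details handled by the library, and in practice nag_correg_lars (g02ma) should be called instead.

function B = lars_sketch(X, y)
% Naive LARS iteration (no LASSO modification, no variable dropping).
% Assumes each column of X has zero mean and unit sum of squares and
% that y has been mean centred.
[n, p] = size(X);
beta = zeros(p, 1);
mu = zeros(n, 1);                     % current fitted values
A = false(1, p);                      % active ('most correlated') set
K = min(p, n - 1);
B = zeros(p, K);                      % parameter estimates at each step
for k = 1:K
  c = X' * (y - mu);                  % correlations with current residual
  cmask = c; cmask(A) = 0;            % consider only inactive variables
  [~, j] = max(abs(cmask));
  A(j) = true;                        % steps 2 and 4: grow the active set
  C = max(abs(c));                    % common correlation of the active set
  s = sign(c(A));
  XA = X(:, A) * diag(s);             % sign-adjusted active variables
  w = (XA' * XA) \ ones(nnz(A), 1);
  AA = 1 / sqrt(sum(w));
  u = XA * (AA * w);                  % equiangular direction (step 6)
  if nnz(A) < p
    a = X' * u;
    I = ~A;
    gam = [(C - c(I)) ./ (AA - a(I)); (C + c(I)) ./ (AA + a(I))];
    gamma = min(gam(gam > eps));      % smallest positive step (steps 3 and 6)
  else
    gamma = C / AA;                   % final step: full least squares fit
  end
  mu = mu + gamma * u;
  beta(A) = beta(A) + gamma * AA * (w .* s);
  B(:, k) = beta;
end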
As well as being a model selection process in its own right, with a small number of modifications the LARS algorithm can be used to fit the LASSO model of Tibshirani (1996), a positive LASSO model, where the independent variables enter the model in their defined direction (i.e., $\beta_{kj} \geq 0$), forward stagewise linear regression (Hastie et al. (2001)) and forward selection (Weisberg (1985)). Details of the required modifications in each of these cases are given in Efron et al. (2004).
The LASSO model of Tibshirani (1996) is given by
$$ \min_{\alpha, \beta_k \in \mathbb{R}^p} \left\| y - \alpha - X^T \beta_k \right\|^2 \quad \text{subject to} \quad \left\| \beta_k \right\|_1 \leq t_k $$
for all values of $t_k$, where $\alpha = \bar{y} = n^{-1} \sum_{i=1}^{n} y_i$. The positive LASSO model is the same as the standard LASSO model, given above, with the added constraint that
$$ \beta_{kj} \geq 0 , \quad j = 1,2,\ldots,p . $$
Unlike the standard LARS algorithm, when fitting either of the LASSO models, variables can be dropped as well as added to the set A. Therefore the total number of steps K is no longer bounded by p.
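Since these modifications are internal to the routine, fitting a LASSO path rather than a LAR path requires only a change of mtype; a minimal sketch, assuming d and y hold the data as in the Example below:

% Fit a LASSO model (mtype = 3) instead of a LAR model (mtype = 1)
[b, fitsum, ifail] = g02ma(int64(3), d, y);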
Forward stagewise linear regression is an iterative procedure of the form:
1. Initialize $k = 1$ and the vector of residuals $r_0 = y - \alpha$.
2. For each $j = 1,2,\ldots,p$ calculate $c_j = x_j^T r_{k-1}$. The value $c_j$ is therefore proportional to the correlation between the $j$th independent variable and the vector of previous residuals, $r_{k-1}$.
3. Calculate $j_k = \operatorname{argmax}_j \left| c_j \right|$, the value of $j$ with the largest absolute value of $c_j$.
4. If $\left| c_{j_k} \right| < \epsilon$ then go to 7.
5. Update the residual values, with
$$ r_k = r_{k-1} - \delta \, \operatorname{sign}(c_{j_k}) \, x_{j_k} $$
where $\delta$ is a small constant and $\operatorname{sign}(c_{j_k}) = -1$ when $c_{j_k} < 0$ and $1$ otherwise.
6. Increment $k$ and go to 2.
7. Set $K = k$.
If the largest possible step were to be taken, that is $\delta = \left| c_{j_k} \right|$, then forward stagewise linear regression reverts to the standard forward selection method as implemented in nag_correg_linregm_fit_onestep (g02ee).
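A minimal MATLAB sketch of the stagewise iteration above, assuming X is standardized and y is mean centred; the names (stagewise_sketch, delta, eps_tol, maxit) are illustrative, and in practice g02ma with mtype=2 performs this fit.

function beta = stagewise_sketch(X, y, delta, eps_tol, maxit)
% Forward stagewise linear regression as described in steps 1 to 7 above.
% Assumes the columns of X are standardized and y is mean centred.
p = size(X, 2);
beta = zeros(p, 1);
r = y;                                     % step 1: r_0 = y - alpha
for k = 1:maxit
  c = X' * r;                              % step 2: c_j = x_j' * r_{k-1}
  [cmax, jk] = max(abs(c));                % step 3: most correlated variable
  if cmax < eps_tol                        % step 4: convergence test
    break
  end
  beta(jk) = beta(jk) + delta * sign(c(jk));
  r = r - delta * sign(c(jk)) * X(:, jk);  % step 5: update the residual
end                                        % step 6: increment k and repeat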
The LARS procedure results in $K$ models, one for each step of the fitting process. In order to aid in choosing which is the most suitable, Efron et al. (2004) introduced a $C_p$-type statistic given by
$$ C_p^{(k)} = \frac{\left\| y - X^T \beta_k \right\|^2}{\sigma^2} - n + 2 \nu_k , $$
where $\nu_k$ is the approximate degrees of freedom for the $k$th step and
$$ \sigma^2 = \frac{\left\| y - X^T \beta_K \right\|^2}{n - \nu_K} . $$
One way of choosing a model is therefore to take the one with the smallest value of $C_p^{(k)}$.
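After a call to nag_correg_lars (g02ma), this selection can be made directly from the returned fitsum array, since fitsum(4,k) holds the $C_p$-type statistic (see Output Parameters). A minimal sketch, assuming d and y hold the data as in the Example below:

% Fit a LAR model, then select the step with the smallest Cp-type statistic
[b, fitsum, ifail] = g02ma(int64(1), d, y);
K = size(b, 2) - 2;                 % number of steps actually carried out
[~, kbest] = min(fitsum(4, 1:K));   % step minimizing the Cp-type statistic
beta_best = b(:, kbest);            % parameter estimates at that step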

References

Efron B, Hastie T, Johnstone I and Tibshirani R (2004) Least Angle Regression The Annals of Statistics 32(2) 407–499
Hastie T, Tibshirani R and Friedman J (2001) The Elements of Statistical Learning: Data Mining, Inference and Prediction Springer (New York)
Tibshirani R (1996) Regression Shrinkage and Selection via the Lasso Journal of the Royal Statistical Society, Series B (Methodological) 58(1) 267–288
Weisberg S (1985) Applied Linear Regression Wiley

Parameters

Compulsory Input Parameters

1:     mtype – int64 scalar
Indicates the type of model to fit.
mtype=1
LARS is performed.
mtype=2
Forward linear stagewise regression is performed.
mtype=3
LASSO model is fit.
mtype=4
A positive LASSO model is fit.
Constraint: mtype=1, 2, 3 or 4.
2:     d(ldd,:) – double array
The first dimension of the array d must be at least n.
The second dimension of the array d must be at least m.
d, the data which, along with pred and isx, defines the design matrix X. The ith observation for the jth variable must be supplied in d(i,j), for i = 1,2,…,n and j = 1,2,…,m.
3:     y(n) – double array
y, the observations on the dependent variable.

Optional Input Parameters

1:     m – int64 scalar
Default: the second dimension of the array d.
m, the total number of independent variables; m = p when all variables are included.
Constraint: m ≥ 1.
2:     pred – int64 scalar
Default: pred=3
Indicates the type of data preprocessing to perform on the independent variables supplied in d to comply with the standardized form of the design matrix.
pred=0
No preprocessing is performed.
pred=1
Each of the independent variables, $x_j$, for $j = 1,2,\ldots,p$, is mean centred prior to fitting the model. The means of the independent variables, $\bar{x}$, are returned in b, with $\bar{x}_j = $ b(j,nstep+2), for $j = 1,2,\ldots,p$.
pred=2
Each independent variable is normalized, with the $j$th variable scaled by $1 / \sqrt{x_j^T x_j}$. The scaling factor used by variable $j$ is returned in b(j,nstep+1).
pred=3
As with pred=1 and pred=2: all of the independent variables are mean centred prior to being normalized.
Constraint: pred=0, 1, 2 or 3.
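If pred=0 is chosen, d must already comply with the standardized form of the design matrix described in Description. The following sketch reproduces the effect of the default pred=3 preprocessing (dz is an illustrative name):

% Mean centre each column of d, then scale each column to unit sum of squares
dz = bsxfun(@minus, d, mean(d));
dz = bsxfun(@rdivide, dz, sqrt(sum(dz.^2, 1)));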
3:     prey – int64 scalar
Default: prey=1
Indicates the type of data preprocessing to perform on the dependent variable supplied in y.
prey=0
No preprocessing is performed; this is equivalent to setting $\alpha = 0$.
prey=1
The dependent variable, $y$, is mean centred prior to fitting the model, so $\alpha = \bar{y}$. This is equivalent to fitting a non-penalized intercept to the model, and the degrees of freedom etc. are adjusted accordingly.
The value of $\alpha$ used is returned in fitsum(1,nstep+1).
Constraint: prey=0 or 1.
4:     n – int64 scalar
Default: the dimension of the array y and the first dimension of the array d. (An error is raised if these dimensions are not equal.)
n, the number of observations.
Constraint: n ≥ 1.
5:     isx(lisx) – int64 array
Indicates which independent variables from d will be included in the design matrix, X.
If isx is not supplied, all variables are included in the design matrix.
Otherwise, for j = 1,2,…,m, isx(j) must be set as follows:
isx(j)=1
To indicate that the jth variable, as supplied in d, is included in the design matrix;
isx(j)=0
To indicate that the jth variable, as supplied in d, is not included in the design matrix;
and $p = \sum_{j=1}^{m} \mathrm{isx}(j)$.
Constraint: isx(j) = 0 or 1 and at least one value of isx(j) ≠ 0, for j = 1,2,…,m.
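For example, the following hypothetical sketch includes only the first, third and fourth columns of a six-column d in the design matrix, giving p=3:

% Select variables 1, 3 and 4 of d for the design matrix
isx = int64([1; 0; 1; 1; 0; 0]);
[b, fitsum, ifail] = g02ma(int64(1), d, y, 'isx', isx);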
6:     mnstep – int64 scalar
Default:
  • if mtype=1, mnstep=m;
  • otherwise mnstep=200×m.
The maximum number of steps to carry out in the model fitting process.
If mtype=1, i.e., a LARS is being performed, the maximum number of steps the algorithm will take is min(p,n) if prey=0, otherwise min(p,n−1).
If mtype=2, i.e., a forward linear stagewise regression is being performed, the maximum number of steps the algorithm will take is likely to be several orders of magnitude more and is no longer bounded by p or n.
If mtype=3 or 4, i.e., a LASSO or positive LASSO model is being fit, the maximum number of steps the algorithm will take lies somewhere between that of the LARS and forward linear stagewise regression; again, it is no longer bounded by p or n.
Constraint: mnstep ≥ 1.
7:     ropt(lropt) – double array
Optional parameters to control various aspects of the LARS algorithm.
The default value will be used for ropt(i) if lropt < i. The default value will also be used if an invalid value is supplied for a particular argument; for example, setting ropt(i)=−1 will use the default value for argument i.
ropt(1)
The minimum step size that will be taken.
Default is 100×eps, where eps is the machine precision returned by nag_machine_precision (x02aj).
ropt(2)
General tolerance, used, amongst other things, for comparing correlations.
Default is ropt(1).
ropt(3)
If set to 1, parameter estimates are rescaled before being returned.
If set to 0, no rescaling is performed.
This argument has no effect when pred=0 or 1.
Default is for the parameter estimates to be rescaled.
ropt(4)
If set to 1, it is assumed that the model contains an intercept during the model fitting process and when calculating the degrees of freedom.
If set to 0, no intercept is assumed.
This has no effect on the amount of preprocessing performed on y.
Default is to treat the model as having an intercept when prey=1 and as not having an intercept when prey=0.
ropt(5)
As implemented, the LARS algorithm can either work directly with $y$ and $X$, or it can work with the cross-product matrices, $X^T y$ and $X^T X$. In most cases it is more efficient to work with the cross-product matrices. This flag allows you direct control over which method is used; however, the default value will usually be the best choice.
If ropt(5)=1, $y$ and $X$ are worked with directly.
If ropt(5)=0, the cross-product matrices are used.
Default is 1 when $p \geq 500$ and $n < p$, and 0 otherwise.
Constraints:
  • ropt(1) > machine precision;
  • ropt(2) > machine precision;
  • ropt(3) = 0 or 1;
  • ropt(4) = 0 or 1;
  • ropt(5) = 0 or 1.
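For example, the following sketch requests parameter estimates on the normalized scale by setting ropt(3)=0, while leaving ropt(1) and ropt(2) at their defaults via the documented −1 convention:

% Leave ropt(1:2) at their defaults, disable rescaling of the estimates
ropt = [-1; -1; 0];
[b, fitsum, ifail] = g02ma(int64(1), d, y, 'ropt', ropt);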

Output Parameters

1:     b(p,:) – double array
The second dimension of the array b will be nstep+2.
Note: nstep is equal to K, the actual number of steps carried out in the model fitting process. See Description for further information.
$\beta$, the parameter estimates, with b(j,k) $= \beta_{kj}$, the parameter estimate for the $j$th variable, $j = 1,2,\ldots,p$, at the $k$th step of the model fitting process, $k = 1,2,\ldots,\mathrm{nstep}$.
By default, when pred=2 or 3 the parameter estimates are rescaled prior to being returned. If the parameter estimates are required on the normalized scale, then this can be overridden via ropt.
The values held in the remaining part of b depend on the type of preprocessing performed:
if pred=0, b(j,nstep+1) $= 1$ and b(j,nstep+2) $= 0$;
if pred=1, b(j,nstep+1) $= 1$ and b(j,nstep+2) $= \bar{x}_j$;
if pred=2, b(j,nstep+1) $= 1 / \sqrt{x_j^T x_j}$ and b(j,nstep+2) $= 0$;
if pred=3, b(j,nstep+1) $= 1 / \sqrt{(x_j - \bar{x}_j)^T (x_j - \bar{x}_j)}$ and b(j,nstep+2) $= \bar{x}_j$;
for $j = 1,2,\ldots,p$.
The number of parameter estimates, p, is the number of columns in d when isx is not supplied, i.e., p=m. Otherwise p is the number of nonzero values in isx.
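As an illustration of this layout, the following hedged sketch forms fitted values at a chosen step after a call to g02ma, assuming the default preprocessing (pred=3, prey=1), default rescaling and that all columns of d were included, so that the estimates in b are on the original scale of the centred data:

K     = size(b, 2) - 2;            % nstep, the number of steps taken
xbar  = b(:, K + 2);               % column means, b(j,nstep+2)
alpha = fitsum(1, K + 1);          % intercept, fitsum(1,nstep+1)
k     = K;                         % choose a step, here the last one
yhat  = alpha + bsxfun(@minus, d, xbar') * b(:, k);   % fitted values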
2:     fitsum(6,mnstep+1) – double array
The second dimension of the array fitsum will be nstep+1.
Note: nstep is equal to K, the actual number of steps carried out in the model fitting process. See Description for further information.
Summaries of the model fitting process. When $k = 1,2,\ldots,\mathrm{nstep}$:
fitsum(1,k)
$\left\| \beta_k \right\|_1$, the sum of the absolute values of the parameter estimates for the $k$th step of the model fitting process. If pred=2 or 3, the scaled parameter estimates are used in the summation.
fitsum(2,k)
$\mathrm{RSS}_k$, the residual sums of squares for the $k$th step, where $\mathrm{RSS}_k = \left\| y - X^T \beta_k \right\|^2$.
fitsum(3,k)
$\nu_k$, approximate degrees of freedom for the $k$th step.
fitsum(4,k)
$C_p^{(k)}$, a $C_p$-type statistic for the $k$th step, where $C_p^{(k)} = \frac{\mathrm{RSS}_k}{\sigma^2} - n + 2 \nu_k$.
fitsum(5,k)
$\hat{C}_k$, correlation between the residual at step $k-1$ and the most correlated variable not yet in the active set $A$, where the residual at step $0$ is $y$.
fitsum(6,k)
$\hat{\gamma}_k$, the step size used at step $k$.
In addition:
fitsum(1,nstep+1)
$\alpha$, with $\alpha = \bar{y}$ if prey=1 and $0$ otherwise.
fitsum(2,nstep+1)
$\mathrm{RSS}_0$, the residual sums of squares for the null model, where $\mathrm{RSS}_0 = y^T y$ when prey=0 and $\mathrm{RSS}_0 = (y - \bar{y})^T (y - \bar{y})$ otherwise.
fitsum(3,nstep+1)
$\nu_0$, the degrees of freedom for the null model, where $\nu_0 = 0$ if prey=0 and $\nu_0 = 1$ otherwise.
fitsum(4,nstep+1)
$C_p^{(0)}$, a $C_p$-type statistic for the null model, where $C_p^{(0)} = \frac{\mathrm{RSS}_0}{\sigma^2} - n + 2 \nu_0$.
fitsum(5,nstep+1)
$\sigma^2$, where $\sigma^2 = \frac{\mathrm{RSS}_K}{n - \nu_K}$ and $K = \mathrm{nstep}$.
Although the $C_p$ statistics described above are returned when ifail=112, they may not be meaningful due to the estimate $\sigma^2$ not being based on the saturated model.
3:     ifail – int64 scalar
ifail=0 unless the function detects an error (see Error Indicators and Warnings).

Error Indicators and Warnings

Note: nag_correg_lars (g02ma) may return useful information for one or more of the following detected errors or warnings.
Errors or warnings detected by the function:

Cases prefixed with W are classified as warnings and do not generate an error of type NAG:error_n. See nag_issue_warnings.

   ifail=11
Constraint: mtype=1, 2, 3 or 4.
   ifail=21
Constraint: pred=0, 1, 2 or 3.
   ifail=31
Constraint: prey=0 or 1.
   ifail=41
Constraint: n ≥ 1.
   ifail=51
Constraint: m ≥ 1.
   ifail=81
Constraint: isx(i) = 0 or 1 for all i.
   ifail=82
On entry, all values of isx are zero.
Constraint: at least one value of isx must be nonzero.
   ifail=111
Constraint: mnstep ≥ 1.
W  ifail=112
Fitting process did not finish in mnstep steps. Try increasing the size of mnstep.
All output is returned as documented, up to step mnstep; however, $\sigma^2$ and the $C_p$ statistics may not be meaningful.
W  ifail=161
$\sigma^2$ is approximately zero and hence the $C_p$-type criterion cannot be calculated. All other output is returned as documented.
W  ifail=162
$\nu_K = n$, therefore $\sigma^2$ has been set to a large value. Output is returned as documented.
W  ifail=163
Degenerate model, no variables added and nstep=0. Output is returned as documented.
   ifail=181
Constraint: 0 ≤ lropt ≤ 5.
   ifail=-99
An unexpected error has been triggered by this routine. Please contact NAG.
   ifail=-399
Your licence key may have expired or may not have been installed correctly.
   ifail=-999
Dynamic memory allocation failed.

Accuracy

Not applicable.

Further Comments

nag_correg_lars (g02ma) returns the parameter estimates at various points along the solution path of a LARS, LASSO or stagewise regression analysis. If the solution is required at a different set of points, for example when performing cross-validation, then nag_correg_lars_param (g02mc) can be used.
For datasets with a large number of observations, $n$, it may be impractical to store the full $X$ matrix in memory in one go. In such instances the cross-product matrices $X^T y$ and $X^T X$ can be calculated, using, for example, multiple calls to nag_correg_ssqmat (g02bu) and nag_correg_ssqmat_combine (g02bz), and nag_correg_lars_xtx (g02mb) called to perform the analysis.
The amount of workspace used by nag_correg_lars (g02ma) depends on whether the cross-product matrices are being used internally (as controlled by ropt). If the cross-product matrices are being used then nag_correg_lars (g02ma) internally allocates approximately $2p^2 + 4p + \max(n,p)$ elements of real storage, compared to $p^2 + 3p + \max(n,p) + 2n + n \times p$ elements when $X$ and $y$ are used directly. In both cases approximately $5p$ elements of integer storage are also used. If a forward linear stagewise analysis is performed then an additional $p^2 + 5p$ elements of real storage are required.
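The following sketch illustrates accumulating the cross-product matrices over blocks of rows in plain MATLAB; load_block, nblocks and p are hypothetical names, and the NAG routines named above are the recommended route for a numerically stable version of this accumulation and the subsequent analysis:

% Accumulate X'X, X'y and y'y one block of rows at a time
XtX = zeros(p, p);  Xty = zeros(p, 1);  yty = 0;
for blk = 1:nblocks
  [Xb, yb] = load_block(blk);    % hypothetical reader for one block of rows
  XtX = XtX + Xb' * Xb;
  Xty = Xty + Xb' * yb;
  yty = yty + yb' * yb;
end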

Example

This example performs a LARS on a simulated dataset with 20 observations and 6 independent variables.
function g02ma_example


fprintf('g02ma example results\n\n');

% Going to be fitting a LAR model via g02ma
mtype = int64(1);

% Independent variables
d = [10.28  1.77  9.69 15.58  8.23 10.44;
      9.08  8.99 11.53  6.57 15.89 12.58;
     17.98 13.10  1.04 10.45 10.12 16.68;
     14.82 13.79 12.23  7.00  8.14  7.79;
     17.53  9.41  6.24  3.75 13.12 17.08;
      7.78 10.38  9.83  2.58 10.13  4.25;
     11.95 21.71  8.83 11.00 12.59 10.52;
     14.60 10.09 -2.70  9.89 14.67  6.49;
      3.63  9.07 12.59 14.09  9.06  8.19;
      6.35  9.79  9.40 12.79  8.38 16.79;
      4.66  3.55 16.82 13.83 21.39 13.88;
      8.32 14.04 17.17  7.93  7.39 -1.09;
     10.86 13.68  5.75 10.44 10.36 10.06;
      4.76  4.92 17.83  2.90  7.58 11.97;
      5.05 10.41  9.89  9.04  7.90 13.12;
      5.41  9.32  5.27 15.53  5.06 19.84;
      9.77  2.37  9.54 20.23  9.33  8.82;
     14.28  4.34 14.23 14.95 18.16 11.03;
     10.17  6.80  3.17  8.57 16.07 15.93;
      5.39  2.67  6.37 13.56 10.68  7.35];

% Dependent variable
y = [-46.47; -35.80; -129.22;  -42.44; -73.51;
     -26.61; -63.90;  -76.73;  -32.64; -83.29;
     -16.31;  -5.82;  -47.75;   18.38; -54.71;
     -55.62; -45.28;  -22.76; -104.32; -55.94];

% g02ma can issue warnings, but return sensible results,
% so save current warning state and turn warnings on
warn_state = nag_issue_warnings();
nag_issue_warnings(true);

[b,fitsum,ifail] = g02ma(mtype,d,y);

% Reset the warning state to its initial value
nag_issue_warnings(warn_state);

% Print the results
ip = size(b,1);
K = size(b,2) - 2;

fprintf('  Step %s Parameter Estimate\n ',repmat(' ',1,max(ip-2,0)*5));
fprintf(repmat('-',1,5+ip*10));
fprintf('\n');
for k = 1:K
  fprintf('  %3d',k);
  for j = 1:ip
    fprintf(' %9.3f',b(j,k));
  end
  fprintf('\n');
end
fprintf('\n');
fprintf(' alpha: %9.3f\n', fitsum(1,K+1));
fprintf('\n');
fprintf('  Step     Sum      RSS       df       Cp       Ck     Step Size\n ');
fprintf(repmat('-',1,64));
fprintf('\n');
for k = 1:K
  fprintf('  %3d %9.3f %9.3f %6.0f  %9.3f %9.3f %9.3f\n', ...
          k,fitsum(1,k),fitsum(2,k),fitsum(3,k), ...
          fitsum(4,k),fitsum(5,k),fitsum(6,k));
end
fprintf('\n');
fprintf(' sigma^2: %9.3f\n', fitsum(5,K+1));

% Plot the parameter estimates
fig1 = figure;

% Extract the sum of the absolute parameter estimates
xpos = transpose(repmat(fitsum(1,1:K),ip,1));

% Extract the parameter estimates
ypos = transpose(b(1:ip,1:K));

% Start both xpos and ypos at zero
xpos = [zeros(1,ip);xpos];
ypos = [zeros(1,ip);ypos];

% Get min and max for X and Y
xmin = min(min(xpos));
xmax = max(max(xpos));
ymin = min(min(ypos));
ymax = max(max(ypos));

% Get a range that is 10% past the data
ext = 1 + [-0.1 0.1];
xrng = [min(xmin*ext),max(xmax*ext)];
yrng = [min(ymin*ext),max(ymax*ext)];

% Extend the end of the lines we plot to cover this range
xpos = [xpos;xrng(2)*ones(1,ip)];
ypos = [ypos;ypos(end,:)];

% Produce the plot
plot(xpos,ypos);

% Change the axis limits
xlim(xrng);
ylim(yrng);

% Add title and labels
title({'{\bf g02ma Example Plot}'; ...
       'Estimates for LAR model fitted to simulated dataset'});
xlabel('{\bf \Sigma_j |\beta_{kj} |}');
ylabel('{\bf Parameter Estimates (\beta_{kj})}');

% Add legend
label = [repmat('\beta_{k',ip,1) num2str(transpose(linspace(1,ip,ip))) ...
         repmat('}',ip,1)];
h = legend(label,'Location','SouthOutside','Orientation','Horizontal');
set(h,'FontSize',get(h,'FontSize')*0.8);


g02ma example results

  Step                      Parameter Estimate
 -----------------------------------------------------------------
    1     0.000     0.000     3.125     0.000     0.000     0.000
    2     0.000     0.000     3.792     0.000     0.000    -0.713
    3    -0.446     0.000     3.998     0.000     0.000    -1.151
    4    -0.628    -0.295     4.098     0.000     0.000    -1.466
    5    -1.060    -1.056     4.110    -0.864     0.000    -1.948
    6    -1.073    -1.132     4.118    -0.935    -0.059    -1.981

 alpha:   -50.037

  Step     Sum      RSS       df       Cp       Ck     Step Size
 ----------------------------------------------------------------
    1    72.446  8929.855      2     13.355   123.227    72.446
    2   103.385  6404.701      3      7.054    50.781    24.841
    3   126.243  5258.247      4      5.286    30.836    16.225
    4   145.277  4657.051      5      5.309    19.319    11.587
    5   198.223  3959.401      6      5.016    12.266    24.520
    6   203.529  3954.571      7      7.000     0.910     2.198

 sigma^2:   304.198
g02ma_fig1.png: this example plot shows the regression coefficients ($\beta_k$) plotted against the scaled absolute sum of the parameter estimates ($\left\| \beta_k \right\|_1$).


© The Numerical Algorithms Group Ltd, Oxford, UK. 2009–2015