benchmark.pls

Source: R/benchmark.pls.R
This function computes the test error over several runs for different model selection strategies.
Usage

benchmark.pls(X, y, m = ncol(X), R = 20, ratio = 0.8, verbose = TRUE,
              k = 10, ratio.samples = 1, use.kernel = FALSE,
              criterion = "bic", true.coefficients = NULL)
Arguments

X  matrix of predictor observations.

y  vector of response observations. The length of y must equal the number of rows of X.
m  maximal number of Partial Least Squares components. Default is m = ncol(X).
R  number of runs. Default is 20. 
ratio  ratio of the number of training examples to the total number of examples (training plus test). Default is 0.8.
verbose  If TRUE, a message is printed for each run. Default is TRUE.
k  number of cross-validation splits. Default is 10.
ratio.samples  ratio of the total number of examples (training plus test) to nrow(X). Default is 1.
use.kernel  Use the kernel representation of PLS? Default is FALSE.
criterion  Choice of the model selection criterion. One of the three options "aic", "bic", "gmdl". Default is "bic".
true.coefficients  The vector of true regression coefficients (without intercept), if available. Default is NULL.
Value

A list containing:

- a data frame of size R x 5 with the test error of the five different methods for each of the R runs.
- a data frame of size R x 5 with the optimal number of components selected by the five different methods for each of the R runs.
- a data frame of size R x 5 with the Degrees of Freedom (corresponding to M) of the five different methods for each of the R runs.
- a data frame of size R x 4 with the runtime of all methods (apart from the zero model) for each of the R runs.
- a data frame of size R x 2 with the number of components for which the Krylov representation and the Lanczos representation return negative Degrees of Freedom, hereby indicating numerical problems.
- if true.coefficients are available, a data frame of size R x 5 with the model error of the five different methods for each of the R runs.
- a data frame of size R x 5 with the estimate of the noise level provided by the five different methods for each of the R runs.
Details

The function estimates the optimal number of PLS components based on four different criteria: (1) cross-validation, (2) information criteria with the naive Degrees of Freedom DoF(m) = m + 1, (3) information criteria with the Degrees of Freedom computed via a Lanczos representation of PLS, and (4) information criteria with the Degrees of Freedom computed via a Krylov representation of PLS. Note that the latter two options only differ with respect to the estimation of the model error.
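As a rough illustration (not the package's actual implementation), selecting the number of components by an information criterion with the naive Degrees of Freedom DoF(m) = m + 1 can be sketched as follows; the function name `select.m.bic` and the toy RSS values are hypothetical:

```r
# Hypothetical sketch: choose the number of PLS components by BIC,
# using the naive Degrees of Freedom DoF(m) = m + 1.
# 'rss' is a vector of residual sums of squares for m = 1, ..., M
# on n training examples.
select.m.bic <- function(rss, n) {
  m <- seq_along(rss)
  dof <- m + 1                          # naive Degrees of Freedom
  bic <- n * log(rss / n) + dof * log(n)
  which.min(bic)                        # selected number of components
}

# toy usage: RSS drops sharply up to m = 2, then flattens out
rss <- c(100, 40, 38, 37.5, 37.4)
select.m.bic(rss, n = 50)               # selects m = 2
```

The penalty term log(n) * DoF(m) trades off fit against complexity; the Lanczos and Krylov variants replace the naive DoF(m) = m + 1 with computed Degrees of Freedom.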
In addition, the function computes the test error of the "zero model", i.e. the model that uses the mean of y on the training data for prediction.
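The test error of this zero model amounts to predicting every test response by the training mean. A minimal sketch (the function name `zero.model.error` is hypothetical):

```r
# Sketch: test error of the "zero model", which predicts each test
# observation by the mean of the training responses.
zero.model.error <- function(y.train, y.test) {
  mean((y.test - mean(y.train))^2)
}

# toy usage: training mean is 2, so errors are (2-2)^2 and (4-2)^2
zero.model.error(c(1, 2, 3), c(2, 4))   # returns 2
```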
If true.coefficients
are available, the function also computes the
model error for the different methods, i.e. the sum of squared differences
between the true and the estimated regression coefficients.
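This model error is straightforward to write down; a one-line sketch (the function name `model.error` is hypothetical):

```r
# Sketch: model error = sum of squared differences between the true
# and the estimated regression coefficients (intercept excluded).
model.error <- function(beta.true, beta.hat) {
  sum((beta.true - beta.hat)^2)
}

model.error(c(1, 2), c(1.5, 1.5))   # 0.25 + 0.25 = 0.5
```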
References

Kraemer, N., Sugiyama, M. (2011). "The Degrees of Freedom of Partial Least Squares Regression". Journal of the American Statistical Association, 106(494). https://www.tandfonline.com/doi/abs/10.1198/jasa.2011.tm10107
Author

Nicole Kraemer
Examples

# generate artificial data
n <- 50   # number of examples
p <- 5    # number of variables
X <- matrix(rnorm(n * p), ncol = p)
true.coefficients <- runif(p, 1, 3)
y <- X %*% true.coefficients + rnorm(n, 0, 5)
my.benchmark <- benchmark.pls(X, y, R = 10, true.coefficients = true.coefficients)
#> iteration no 1
#> iteration no 2
#> iteration no 3
#> iteration no 4
#> iteration no 5
#> iteration no 6
#> iteration no 7
#> iteration no 8
#> iteration no 9
#> iteration no 10