Available regression methods#
The following linear regression methods are supported via the trainstation interface. They can be selected using the `fit_method` keyword of the available optimizers, as illustrated in the sketch following the method overview below.
- Ordinary Least-Squares (OLS)
- Least-Squares with regularization matrix
- Least Absolute Shrinkage and Selection Operator (LASSO)
- Adaptive-LASSO
- Ridge and Bayesian-ridge
- Elasticnet
- Recursive Feature Elimination (RFE)
- Automatic Relevance Determination Regression (ARDR)
- Orthogonal Matching Pursuit (OMP)
- L1-regularization with split-Bregman
The most commonly used fit methods for constructing cluster expansions and force constant expansions are automatic relevance determination regression (ARDR), recursive feature elimination with \(\ell_2\)-fitting (RFE-L2), LASSO, and ordinary least-squares optimization (OLS). A short summary of the main algorithms follows below. More information about the available linear models can be found in the scikit-learn documentation.
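The sketch below illustrates how a fit method is selected in practice. It assumes the `Optimizer` class of trainstation, a `fit_data` tuple of sensing matrix and target values, and a `summary` attribute exposing fit metrics; these names reflect the trainstation interface but the data and specific values are purely illustrative.

```python
import numpy as np
from trainstation import Optimizer  # assumes trainstation is installed

# Synthetic sensing matrix A and target vector y for illustration only.
rng = np.random.default_rng(42)
A = rng.normal(size=(200, 50))
x_true = np.zeros(50)
x_true[:5] = [1.0, -0.5, 0.3, 0.8, -1.2]   # sparse "true" parameters
y = A @ x_true + 0.01 * rng.normal(size=200)

# Select the regression method via the fit_method keyword;
# any of the strings discussed in this section can be used here.
opt = Optimizer(fit_data=(A, y), fit_method='least-squares')
opt.train()
print(opt.summary)   # assumed to expose fit metrics such as RMSE
```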
Least-squares#
Ordinary least-squares (OLS) optimization provides a solution to the linear problem

\[\boldsymbol{A}\boldsymbol{x} = \boldsymbol{y},\]

where \(\boldsymbol{A}\) is the sensing matrix, \(\boldsymbol{y}\) is the vector of target values, and \(\boldsymbol{x}\) is the solution (parameter vector) that one seeks to obtain. The objective is given by

\[\left\Vert \boldsymbol{A}\boldsymbol{x} - \boldsymbol{y} \right\Vert^2_2.\]
The OLS method is chosen by setting the `fit_method` keyword to `least-squares`.
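For illustration, the OLS objective above can be minimized directly with NumPy; this is only a sketch of the underlying linear algebra on synthetic data, not the trainstation implementation.

```python
import numpy as np

# Hypothetical over-determined problem: more target values than parameters.
rng = np.random.default_rng(0)
A = rng.normal(size=(100, 20))       # sensing matrix
x_true = rng.normal(size=20)
y = A @ x_true + 0.05 * rng.normal(size=100)

# Minimize ||A x - y||_2^2 via the least-squares solver.
x_ols, residuals, rank, _ = np.linalg.lstsq(A, y, rcond=None)
print('residual sum of squares:', residuals)
```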
Least-squares with regularization matrix#
Similar to OLS, least-squares with regularization matrix optimization solves the linear problem

\[\boldsymbol{A}\boldsymbol{x} = \boldsymbol{y},\]

with the difference that in this case an explicit regularization matrix \(\boldsymbol{\Lambda}\) is present, such that the objective becomes

\[\left\Vert \boldsymbol{A}\boldsymbol{x} - \boldsymbol{y} \right\Vert^2_2 + \boldsymbol{x}^\top \boldsymbol{\Lambda} \boldsymbol{x}.\]
From a Bayesian perspective, the regularization matrix \(\boldsymbol{\Lambda}\) can be thought of as the inverse of the covariance matrix for the prior probability of the features [MueCed09]. It can be used to scale and couple features based on prior beliefs.
The least-squares with regularization matrix method is chosen by setting the `fit_method` keyword to `least-squares-with-reg-matrix`. The regularization matrix is set via the `reg_matrix` keyword.
If no matrix is specified, all elements will be set to zero and the OLS result is recovered.
Note that if standardization is used, the regularization matrix should still be designed with respect to the input sensing matrix and not the standardized one.
| Parameter | Type | Description | Default |
|-----------|------|-------------|---------|
| `reg_matrix` | | regularization matrix | |
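To make the role of \(\boldsymbol{\Lambda}\) concrete, the following NumPy sketch solves the regularized normal equations corresponding to the objective above. The closed-form expression is standard linear algebra rather than the trainstation implementation, and the diagonal regularization matrix is an assumed prior used purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(80, 30))       # sensing matrix
y = rng.normal(size=80)             # target values

# Diagonal regularization matrix: stronger damping of the last 10 features
# (an assumed prior belief used purely for illustration).
Lambda = np.diag(np.concatenate([np.full(20, 1e-3), np.full(10, 1e1)]))

# Minimizing ||A x - y||_2^2 + x^T Lambda x leads to the normal equations
# (A^T A + Lambda) x = A^T y.
x = np.linalg.solve(A.T @ A + Lambda, A.T @ y)
print(np.linalg.norm(A @ x - y))
```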
LASSO#
The least absolute shrinkage and selection operator (LASSO) is a method for performing variable selection and regularization in problems in statistics and machine learning. The optimization objective is given by

\[\frac{1}{2N} \left\Vert \boldsymbol{A}\boldsymbol{x} - \boldsymbol{y} \right\Vert^2_2 + \alpha \left\Vert \boldsymbol{x} \right\Vert_1,\]

where \(N\) is the number of target values.
While the first term ensures that \(\boldsymbol{x}\) is a solution to the linear problem at hand, the second term introduces regularization and guides the algorithm toward finding sparse solutions, in the spirit of compressive sensing. In general, LASSO is suited for solving strongly underdetermined problems.
The LASSO optimizer is chosen by setting the `fit_method` keyword to `lasso`. The \(\alpha\) parameter is set via the `alpha` keyword. If no value is specified, a line scan will be carried out automatically to determine the optimal value.
| Parameter | Type | Description | Default |
|-----------|------|-------------|---------|
| `alpha` | | controls the sparsity of the solution vector | |
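Since the underlying linear model comes from scikit-learn, the sparsifying effect of the \(\alpha\) parameter can be illustrated directly with `sklearn.linear_model.Lasso`. The data below are synthetic and only meant to show that a larger \(\alpha\) yields fewer non-zero parameters.

```python
import numpy as np
from sklearn.linear_model import Lasso

# Underdetermined toy problem: 40 observations, 100 candidate features,
# of which only 5 actually contribute.
rng = np.random.default_rng(3)
A = rng.normal(size=(40, 100))
x_true = np.zeros(100)
x_true[[3, 17, 42, 66, 91]] = [1.5, -2.0, 0.7, 1.1, -0.9]
y = A @ x_true + 0.01 * rng.normal(size=40)

for alpha in (1e-4, 1e-2, 1e-1):
    lasso = Lasso(alpha=alpha, fit_intercept=False, max_iter=100000)
    lasso.fit(A, y)
    n_nonzero = np.count_nonzero(lasso.coef_)
    print(f'alpha={alpha:g}: {n_nonzero} non-zero parameters')
```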
Automatic relevance determination regression (ARDR)#
Automatic relevance determination regression (ARDR) is an optimization algorithm provided by scikit-learn that is similar to Bayesian ridge regression and provides a probabilistic model of the regression problem at hand. The method is also known as sparse Bayesian learning and relevance vector machine.
The ARDR optimizer is chosen by setting the `fit_method` keyword to `ardr`. The threshold-lambda parameter, which controls the sparsity of the solution vector, is set via the `threshold_lambda` keyword (default: 1e4).
| Parameter | Type | Description | Default |
|-----------|------|-------------|---------|
| `threshold_lambda` | `float` | controls the sparsity of the solution vector | `1e4` |
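The corresponding scikit-learn estimator can also be used directly to see how `threshold_lambda` prunes parameters; smaller values prune more aggressively. The synthetic data and specific values below are for illustration only.

```python
import numpy as np
from sklearn.linear_model import ARDRegression

rng = np.random.default_rng(5)
A = rng.normal(size=(60, 30))
x_true = np.zeros(30)
x_true[[2, 9, 21]] = [1.0, -1.5, 0.5]
y = A @ x_true + 0.02 * rng.normal(size=60)

# threshold_lambda controls when a feature is pruned from the model;
# smaller values lead to sparser solutions.
ardr = ARDRegression(threshold_lambda=1e4, fit_intercept=False)
ardr.fit(A, y)
print(np.count_nonzero(ardr.coef_), 'non-zero parameters')
```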
Split-Bregman#
The split-Bregman method [GolOsh09] is designed to solve a broad class of \(\ell_1\)-regularized problems. The solution vector \(\boldsymbol{x}\) is given by

\[\boldsymbol{x} = \arg\min_{\boldsymbol{x}, \boldsymbol{d}} \frac{1}{2} \left\Vert \boldsymbol{A}\boldsymbol{x} - \boldsymbol{y} \right\Vert^2_2 + \mu \left\Vert \boldsymbol{d} \right\Vert_1 + \frac{\lambda}{2} \left\Vert \boldsymbol{d} - \boldsymbol{x} \right\Vert^2_2,\]

where \(\boldsymbol{d}\) is an auxiliary quantity, while \(\mu\) and \(\lambda\) are hyperparameters that control the sparseness of the solution and the efficiency of the algorithm.
The approach can be sped up by the addition of a preconditioning step [ZhoSadAbe19], which enables efficient optimization of the \(\mu\) hyperparameter. By default, the `split-bregman` fit method will trial a range of \(\mu\) values and choose the optimal one based on cross-validation.
The split-Bregman implementation supports the following keywords.
| Parameter | Type | Description | Default |
|-----------|------|-------------|---------|
| | | sparseness parameter | |
| | | weight of additional L2-norm in split-Bregman | |
| | | maximal number of split-Bregman iterations | |
| | | convergence criterion of iterative minimization | |
| | | convergence criterion of conjugate gradient step | |
| | | maximal number of conjugate gradient iterations | |
| | | number of CV splits for finding optimal mu value | |
| | | how often to print fitting information to stdout | |
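The core of the algorithm can be summarized in a few lines of NumPy. The sketch below implements plain split-Bregman iterations for the objective above, without the preconditioning, conjugate-gradient solver, and cross-validation features of the actual fit method; the function name and the values of \(\mu\) and \(\lambda\) are illustrative.

```python
import numpy as np

def split_bregman(A, y, mu=1e-3, lmbda=100.0, n_iters=1000, tol=1e-6):
    """Minimize 0.5*||A x - y||^2 + mu*||x||_1 by split-Bregman iteration."""
    n = A.shape[1]
    d = np.zeros(n)          # auxiliary variable coupled to x
    b = np.zeros(n)          # Bregman variable
    Aty = A.T @ y
    # The x-update solves (A^T A + lambda*I) x = A^T y + lambda*(d - b).
    M = A.T @ A + lmbda * np.eye(n)
    x = np.zeros(n)
    for _ in range(n_iters):
        x_new = np.linalg.solve(M, Aty + lmbda * (d - b))
        # Soft-thresholding (shrinkage) step enforcing sparsity on d.
        z = x_new + b
        d = np.sign(z) * np.maximum(np.abs(z) - mu / lmbda, 0.0)
        b = b + x_new - d
        if np.linalg.norm(x_new - x) < tol:
            x = x_new
            break
        x = x_new
    return x

# Toy usage on a strongly underdetermined problem with a sparse solution.
rng = np.random.default_rng(7)
A = rng.normal(size=(50, 120))
x_true = np.zeros(120)
x_true[[5, 30, 77]] = [1.0, -0.8, 0.6]
y = A @ x_true
print(np.count_nonzero(np.abs(split_bregman(A, y)) > 1e-6))
```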
Recursive feature elimination#
Recursive feature elimination (RFE) is a feature selection algorithm that obtains the optimal features by carrying out a series of fits, starting with the full set of parameters and then iteratively eliminating the less important ones. RFE needs to be combined with a specific fit method. Since RFE may require many hundreds of single fits, it is often advisable to use ordinary least-squares as the fit method, which is the default behavior. The present implementation is based on the implementation of feature selection in scikit-learn.
The RFE optimizer is chosen by setting the `fit_method` keyword to `rfe`. The `n_features` keyword allows one to specify the number of features to select. If this parameter is left unspecified, RFE with cross-validation will be used to determine the optimal number of features. After the optimal number of features has been determined, the final model is trained. The fit method for the final fit can be controlled via `final_estimator`. Here, `estimator` and `final_estimator` can be set to any of the fit methods described in this section. For example, `estimator='lasso'` implies that a LASSO CV scan is carried out for each fit in the RFE algorithm.
| Parameter | Type | Description | Default |
|-----------|------|-------------|---------|
| `n_features` | | number of features to select | |
| | | number or percentage of parameters to eliminate | |
| | | number of CV splits (90/10) used when optimizing | |
| `estimator` | | fit method to be used in RFE algorithm | |
| `final_estimator` | | fit method to be used in the final fit | |
| | | keyword arguments for fit method defined by `estimator` | |
| | | keyword arguments for fit method defined by `final_estimator` | |
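To illustrate the recursive elimination itself, the scikit-learn classes on which such an implementation can be based are shown below with an ordinary least-squares estimator. This is a generic sketch on synthetic data, not the trainstation `rfe` fit method.

```python
import numpy as np
from sklearn.feature_selection import RFE, RFECV
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(11)
A = rng.normal(size=(120, 40))
x_true = np.zeros(40)
x_true[[1, 8, 15, 33]] = [2.0, -1.0, 0.5, 1.5]
y = A @ x_true + 0.05 * rng.normal(size=120)

# Fixed number of features (analogous to specifying n_features).
rfe = RFE(LinearRegression(fit_intercept=False), n_features_to_select=4)
rfe.fit(A, y)
print('selected features:', np.flatnonzero(rfe.support_))

# Cross-validated selection of the number of features
# (analogous to leaving n_features unspecified).
rfecv = RFECV(LinearRegression(fit_intercept=False), cv=5, n_jobs=1)
rfecv.fit(A, y)
print('optimal number of features:', rfecv.n_features_)
```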
Note
When running on multi-core systems please be mindful of memory consumption.
By default all CPUs will be used (`n_jobs=-1`), which will duplicate data and can require a lot of memory, potentially giving rise to errors. To prevent this behavior you can set the `n_jobs` parameter explicitly, which is handed over directly to scikit-learn.
Other methods#
The optimizers furthermore support the ridge method (`ridge`), the elastic net method (`elasticnet`), as well as Bayesian ridge regression (`bayesian-ridge`).