I’m currently working on building a predictive model for a binary outcome on a dataset with ~300 variables and 800 observations. I’ve read much on this site about the problems associated with stepwise regression and why not to use it.
I’ve been reading into LASSO regression and its ability to perform feature selection, and I have successfully implemented it using the “caret” package together with “glmnet”.
I am able to extract the coefficients of the model with the optimal alpha from “caret”; however, I’m unfamiliar with how to interpret those coefficients.
- Are the LASSO coefficients interpreted in the same method as logistic regression?
- Would it be appropriate to use the features selected from LASSO in logistic regression?
By interpretation of the coefficients I mean, for example, reading an exponentiated LASSO coefficient as the odds ratio for a 1-unit change in that predictor, holding all other predictors constant.
Are the LASSO coefficients interpreted in the same method as logistic regression?
Let me rephrase: Are the LASSO coefficients interpreted in the same way as, for example, maximum likelihood coefficients in a logistic regression?
LASSO (a penalized estimation method) aims at estimating the same quantities (model coefficients) as, say, maximum likelihood (an unpenalized method). The model is the same, and the interpretation remains the same. The numerical values from LASSO will normally differ from those from maximum likelihood: some will be closer to zero, others will be exactly zero. If a sensible amount of penalization has been applied, the LASSO estimates will tend to lie closer to the true values than the maximum likelihood estimates, which is a desirable result.
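To make the interpretation point concrete, here is a minimal sketch in Python using scikit-learn's L1-penalized logistic regression (an analog of glmnet with a binomial family; the toy data and the `C` value are illustrative assumptions, not anything from the question's actual dataset). The fitted coefficients are on the log-odds scale, just like unpenalized logistic regression coefficients, so exponentiating them gives odds ratios:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy data: 200 observations, 10 predictors, binary outcome.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
# Only the first two predictors truly affect the outcome.
log_odds = 1.5 * X[:, 0] - 2.0 * X[:, 1]
y = rng.binomial(1, 1 / (1 + np.exp(-log_odds)))

# L1-penalized (LASSO) logistic regression; C is the inverse of the
# penalty strength, so a smaller C means more shrinkage.
lasso = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
lasso.fit(X, y)

# Coefficients are log odds ratios, exactly as in an unpenalized
# logistic regression; exponentiate them to get odds ratios.
coefs = lasso.coef_.ravel()
odds_ratios = np.exp(coefs)
for j, (b, orat) in enumerate(zip(coefs, odds_ratios)):
    print(f"x{j}: coef = {b:+.3f}, odds ratio = {orat:.3f}")
```

Note that some coefficients come out exactly zero (those predictors are dropped), while the surviving ones are shrunk toward zero relative to a maximum likelihood fit; their meaning is unchanged.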
Would it be appropriate to use the features selected from LASSO in logistic regression?
There is no inherent problem with that, but you could use LASSO not only for feature selection but also for coefficient estimation. As I mention above, LASSO estimates may be more accurate than, say, maximum likelihood estimates.