I am new to the `glmnet` package, and I am still unsure of how to interpret the results. Could anyone please help me read the following trace plot? The graph was obtained by running the following:

```r
library(glmnet)
return <- matrix(ret.ff.zoo[which(index(ret.ff.zoo) == beta.df$date[2]), ])
data <- matrix(unlist(beta.df[which(beta.df$date == beta.df$date[2]), ][, -1]), ncol = num.factors)
model <- cv.glmnet(data, return, standardize = TRUE)
op <- par(mfrow = c(1, 2))
plot(model$glmnet.fit, "norm", label = TRUE)
plot(model$glmnet.fit, "lambda", label = TRUE)
par(op)
```

**Answer**

In both plots, each colored line represents the value taken by a different coefficient in your model. **Lambda** is the weight given to the regularization term (the L1 norm), so as lambda approaches zero, the loss function of your model approaches the OLS loss function. Here’s one way you could specify the LASSO loss function to make this concrete:

$$\beta_{lasso} = \underset{\beta}{\text{argmin}} \left[ \, RSS(\beta) + \lambda \cdot \|\beta\|_1 \, \right]$$

Therefore, when lambda is very small, the LASSO solution should be very close to the OLS solution, and all of your coefficients are in the model. As lambda grows, the regularization term has greater effect and you will see fewer variables in your model (because more and more coefficients will be zero valued).
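You can check this shrinkage-to-zero behaviour directly. Here is a minimal sketch on simulated data (the data and variable names below are made up for illustration, not the asker's `ret.ff.zoo`/`beta.df`): at the largest lambda on glmnet's path every coefficient is exactly zero, and as lambda shrinks, variables re-enter the model.

```r
library(glmnet)

# Simulated example: 10 predictors, only the first two truly matter
set.seed(1)
n <- 100; p <- 10
x <- matrix(rnorm(n * p), n, p)
y <- 3 * x[, 1] - 2 * x[, 2] + rnorm(n)

fit <- glmnet(x, y, standardize = TRUE)

# Count non-zero coefficients (excluding the intercept) at the two
# extremes of the lambda path
n_small <- sum(coef(fit, s = min(fit$lambda))[-1] != 0)
n_large <- sum(coef(fit, s = max(fit$lambda))[-1] != 0)

n_small  # many variables survive at small lambda
n_large  # 0: glmnet's largest lambda zeroes out every coefficient
```

By construction, glmnet starts its path at the smallest lambda for which all coefficients are zero, so the count at `max(fit$lambda)` is always 0.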

As I mentioned above, the **L1 norm** is the regularization term for LASSO. Perhaps a better way to look at it is that in the left-hand plot the x-axis is the *maximum permissible value the L1 norm can take*. So a small L1 norm corresponds to a lot of regularization. Therefore, an L1 norm of zero gives an empty model, and as you increase the L1 norm, variables will “enter” the model as their coefficients take non-zero values.
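The two scales move in opposite directions: as lambda decreases along the path, the L1 norm of the fitted coefficient vector can only grow (loosening the penalty never shrinks the constrained norm). A quick sketch, again on simulated data rather than the asker's:

```r
library(glmnet)

set.seed(1)
x <- matrix(rnorm(100 * 10), 100, 10)
y <- 3 * x[, 1] - 2 * x[, 2] + rnorm(100)

fit <- glmnet(x, y)

# fit$beta holds the coefficients (no intercept), one column per lambda;
# lambda values are stored in decreasing order
l1 <- colSums(abs(fit$beta))

l1[1]                       # 0: at the largest lambda the model is empty
all(diff(l1) >= -1e-8)      # TRUE: L1 norm is non-decreasing as lambda falls
```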

The plot on the left and the plot on the right are showing you the same coefficient paths, just indexed on different scales: the L1 norm of the coefficient vector on the left and lambda on the right.

**Attribution**
*Source: Link, Question Author: Mayou, Answer Author: David Marx*