# Difference between ep-SVR and nu-SVR (and least squares SVR)

I am trying to find out which kind of SVR is best suited to my data.

I know four types of SVR:

• epsilon-SVR
• nu-SVR
• least squares SVR
• linear SVR

I understand linear SVR is more or less like lasso with L1 regularization, but what is the difference between the remaining three techniques?

In $\nu$-SVR, the parameter $\nu$ controls the proportion of support vectors you keep in your solution relative to the total number of samples in the dataset: it is a lower bound on the fraction of samples that become support vectors and an upper bound on the fraction of samples that lie outside the tube. In $\nu$-SVR, the parameter $\epsilon$ is not specified in advance; instead it is introduced into the optimization problem as a variable and estimated automatically (optimally) for you.
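The effect of $\nu$ is easy to see empirically. A minimal sketch using scikit-learn's `NuSVR` (the synthetic dataset and parameter values here are purely illustrative): as $\nu$ grows, the fraction of training points kept as support vectors grows with it.

```python
import numpy as np
from sklearn.svm import NuSVR

# Illustrative 1-D regression data: noisy sine wave
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X).ravel() + rng.normal(scale=0.1, size=200)

for nu in (0.1, 0.5, 0.9):
    model = NuSVR(nu=nu, C=1.0).fit(X, y)
    # nu acts as a lower bound on the fraction of support vectors
    frac = len(model.support_) / len(X)
    print(f"nu={nu}: support vector fraction = {frac:.2f}")
```

Note there is no `epsilon` argument to set here: the width of the tube is found by the optimizer itself.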
In $\epsilon$-SVR, by contrast, you have no direct control over how many data vectors become support vectors: it could be a few, it could be many. However, you have full control over how much error you allow your model per sample: any deviation beyond the specified $\epsilon$-tube is penalized in proportion to $C$, the regularization parameter.
Depending on what I want, I choose between the two. If I really need a small solution (fewer support vectors), I choose $\nu$-SVR and hope to obtain a decent model. But if I really want to control the amount of error in my model and go for the best performance, I choose $\epsilon$-SVR and hope the model does not become too complex (lots of support vectors).
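The flip side of that control can also be demonstrated with scikit-learn's `SVR` (again, the dataset and values are illustrative, not a recipe): fixing $\epsilon$ directly fixes the error tolerance, and the number of support vectors then falls out of the optimization, shrinking as the tube widens.

```python
import numpy as np
from sklearn.svm import SVR

# Same style of illustrative data: noisy sine wave
rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X).ravel() + rng.normal(scale=0.1, size=200)

for eps in (0.01, 0.1, 0.5):
    model = SVR(epsilon=eps, C=1.0).fit(X, y)
    # A wider tube leaves more points inside it, so fewer support vectors
    print(f"epsilon={eps}: {len(model.support_)} support vectors")
```

Here the model complexity (support vector count) is whatever the data dictates for the chosen tube width, which is exactly the trade-off described above.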