When you are the one doing the work, and are aware of your own model-building process, you develop a sense of when you have overfit the model. For one thing, you can track deterioration in the adjusted R-squared of the model as you add variables. You can also track a similar deterioration in the p-values of the regression coefficients of the main variables.
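To make that symptom concrete, here is a toy sketch with simulated data (the data-generating process, the variable counts, and the helper name `r2_and_adjusted_r2` are all made up for illustration). Plain R-squared can only rise as you stack on junk predictors, while adjusted R-squared applies a degrees-of-freedom penalty, so the gap between the two widens:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 60
x1 = rng.normal(size=n)
y = 2.0 * x1 + rng.normal(size=n)          # y truly depends only on x1

def r2_and_adjusted_r2(X, y):
    # OLS fit via least squares, then R^2 and adjusted R^2.
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    ss_res = np.sum((y - X1 @ beta) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    p = X.shape[1]                          # number of predictors
    adj = 1.0 - (1.0 - r2) * (len(y) - 1) / (len(y) - p - 1)
    return r2, adj

# Add batches of pure-noise predictors and watch the two statistics diverge.
results = {}
for k in (0, 10, 20, 30):
    Xk = np.hstack([x1[:, None], rng.normal(size=(n, k))])
    r2, adj = r2_and_adjusted_r2(Xk, y)
    results[k] = (r2, adj)
    print(f"{k:2d} noise predictors: R^2 = {r2:.3f}, adjusted R^2 = {adj:.3f}")
```

The same pattern shows up in real model building: if adding a variable nudges R-squared up but pulls adjusted R-squared down, the variable is probably fitting noise.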
But when you just read someone else's study, and have no insight into their internal model-development process, how can you clearly detect whether a model is overfit or not?
Cross-validation is a fairly common way to detect overfitting, while regularization is a technique to prevent it. For a quick take, I'd recommend Andrew Moore's tutorial slides on the use of cross-validation (mirror); pay particular attention to the caveats. For more detail, definitely read chapters 3 and 7 of The Elements of Statistical Learning (ESL), which cover the topic and related material in good depth.
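As a minimal sketch of how cross-validation flags overfitting (the sine-plus-noise data, the degree range, and the helper names `fit_predict` and `cv_mse` are assumptions for the example): fit polynomials of increasing degree and compare training error with k-fold held-out error. Training error keeps shrinking as the model gets more flexible, but once the polynomial starts chasing noise, the cross-validated error blows up:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 25
x = np.sort(rng.uniform(-1.0, 1.0, size=n))
y = np.sin(2.5 * x) + rng.normal(scale=0.3, size=n)

def fit_predict(x_tr, y_tr, x_te, degree):
    # Least-squares polynomial fit; vander builds the design matrix.
    X_tr = np.vander(x_tr, degree + 1, increasing=True)
    beta, *_ = np.linalg.lstsq(X_tr, y_tr, rcond=None)
    return np.vander(x_te, degree + 1, increasing=True) @ beta

def cv_mse(x, y, degree, folds):
    # Average held-out MSE over a fixed set of k folds.
    errs = []
    for test_idx in folds:
        train = np.setdiff1d(np.arange(len(x)), test_idx)
        pred = fit_predict(x[train], y[train], x[test_idx], degree)
        errs.append(np.mean((y[test_idx] - pred) ** 2))
    return float(np.mean(errs))

folds = np.array_split(rng.permutation(n), 5)   # one fixed 5-fold split
degrees = list(range(1, 15))
train_mse = [float(np.mean((y - fit_predict(x, y, x, d)) ** 2)) for d in degrees]
cv_err = [cv_mse(x, y, d, folds) for d in degrees]

print("degree with lowest CV error:", degrees[int(np.argmin(cv_err))])
print("train MSE at degree 14:", train_mse[-1])
print("CV MSE    at degree 14:", cv_err[-1])
```

This is exactly the published-study situation: you cannot see the author's fitting process, but if they report only in-sample fit and no held-out or cross-validated performance, you have no way to rule out the degree-14 scenario above.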