# Why doesn’t measurement error in the dependent variable bias the results?

When there is measurement error in the independent variable, I understand that the estimates will be biased toward zero (attenuation bias). But when the dependent variable is measured with error, people say it only affects the standard errors. This doesn't make much sense to me, because we are then estimating the effect of $X$ not on the original variable $Y$ but on $Y$ plus an error. How does this not affect the estimates? And can I use instrumental variables to remove this problem, as in the independent-variable case?

Suppose the true model is
$$Y_i = \alpha + \beta X_i + \epsilon_i$$
and instead of the true $Y_i$ you only observe it with some error, $\widetilde{Y}_i = Y_i + \nu_i$, where $\nu_i$ is uncorrelated with $X_i$ and $\epsilon_i$. If you regress $\widetilde{Y}_i$ on $X_i$,
$$\widetilde{Y}_i = \alpha + \beta X_i + (\epsilon_i + \nu_i),$$
your estimated $\beta$ is
$$\widehat{\beta} \xrightarrow{p} \frac{\operatorname{Cov}(\widetilde{Y}_i, X_i)}{\operatorname{Var}(X_i)} = \frac{\operatorname{Cov}(\alpha + \beta X_i + \epsilon_i + \nu_i,\; X_i)}{\operatorname{Var}(X_i)} = \beta,$$
because the covariance between a random variable and a constant ($\alpha$) is zero, as are the covariances between $X_i$ and $\epsilon_i$, $\nu_i$, since we assumed that they are uncorrelated.
So you see that your coefficient is consistently estimated. The only worry is that $\widetilde{Y}_i = Y_i + \nu_i = \alpha + \beta X_i + \epsilon_i + \nu_i$ gives you an additional term in the error, which inflates the residual variance and thus reduces the power of your statistical tests. In very bad cases of such measurement error in the dependent variable you may fail to find a significant effect even though it exists in reality. Instrumental variables generally will not help here: IV estimates tend to be even less precise than OLS, and instruments can only address measurement error in the explanatory variable.
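A quick simulation makes this concrete (my own illustrative example, with made-up parameter values): the slope estimated from the noisily measured $\widetilde{Y}_i$ stays centered on the true $\beta$, while the residual noise, and hence the standard error, grows.

```python
import numpy as np

# Illustrative sketch: classical measurement error in the dependent variable.
# True model: Y = alpha + beta*X + eps; we observe Y_tilde = Y + nu,
# with nu independent of X and eps.
rng = np.random.default_rng(0)
n = 100_000
alpha, beta = 1.0, 2.0

x = rng.normal(size=n)
eps = rng.normal(size=n)
y_true = alpha + beta * x + eps
nu = rng.normal(scale=3.0, size=n)   # measurement error in Y (assumed sd=3)
y_obs = y_true + nu

# OLS slope via the covariance formula: Cov(Y, X) / Var(X)
def ols_slope(y, x):
    return np.cov(y, x)[0, 1] / np.var(x, ddof=1)

b_true = ols_slope(y_true, x)
b_obs = ols_slope(y_obs, x)

# Residual standard deviation drives the standard error of the slope
resid_sd_true = np.std(y_true - b_true * x, ddof=1)
resid_sd_obs = np.std(y_obs - b_obs * x, ddof=1)

print(f"slope with true Y:  {b_true:.3f}")
print(f"slope with noisy Y: {b_obs:.3f}")   # both close to beta = 2
print(f"residual sd: {resid_sd_true:.2f} vs {resid_sd_obs:.2f}")
```

Both slopes hover near 2, but the residual standard deviation roughly triples with the noisy outcome, so confidence intervals widen by the same factor even though the point estimate is unaffected.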