# Random Forest can’t overfit?

I’ve read in some of the literature that random forests cannot overfit. While this sounds great, it seems too good to be true. Is it possible for RFs to overfit?

Try, for example, to estimate the model $y = \log(x) + \epsilon$ with a random forest: you will get an almost-zero training error but a bad prediction error.
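A minimal sketch of this experiment using scikit-learn (the exact data ranges and noise level here are my own assumptions, chosen just to illustrate the gap): fully grown trees fit the noise in the training sample, so the training error ends up far below the test error, whose floor is the irreducible noise variance.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Simulate y = log(x) + eps on an assumed range x in [1, 100], noise sd 0.5
n = 500
x_train = rng.uniform(1, 100, size=(n, 1))
x_test = rng.uniform(1, 100, size=(n, 1))
y_train = np.log(x_train.ravel()) + rng.normal(0, 0.5, size=n)
y_test = np.log(x_test.ravel()) + rng.normal(0, 0.5, size=n)

# Default settings grow each tree until leaves are pure,
# so in-bag points are essentially memorized
rf = RandomForestRegressor(n_estimators=100, random_state=0)
rf.fit(x_train, y_train)

train_mse = np.mean((rf.predict(x_train) - y_train) ** 2)
test_mse = np.mean((rf.predict(x_test) - y_test) ** 2)
print(f"train MSE: {train_mse:.3f}, test MSE: {test_mse:.3f}")
```

The training MSE comes out well below the noise variance of 0.25, while the test MSE cannot go below it, which is exactly the train/test gap described above.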