In section 3.2 of Bishop’s Pattern Recognition and Machine Learning, he discusses the bias-variance decomposition, stating that for a squared loss function, the expected loss can be decomposed into a squared bias term (which describes how far the average predictions are from the true model), a variance term (which describes the spread of the predictions around the average), and a noise term (which gives the intrinsic noise of the data).

- Can bias-variance decomposition be performed with loss functions other than squared loss?
- For a given model and dataset, is there more than one model whose expected loss is the minimum over all models, and if so, does that mean that there could be different combinations of bias and variance that yield the same minimum expected loss?
- If a model involves regularization, is there a mathematical relationship between bias, variance, and the regularization coefficient λ?
- How can you calculate bias if you don’t know the true model?
- Are there situations in which it makes more sense to minimize bias or variance rather than expected loss (the sum of squared bias and variance)?

**Answer**

> …the expected [squared error] loss can be decomposed into a squared bias term (which describes how far the average predictions are from the true model), a variance term (which describes the spread of the predictions around the average), and a noise term (which gives the intrinsic noise of the data).

When looking at the squared error loss decomposition

$$\mathbb{E}_\theta[(\theta-\delta(X_{1:n}))^2]=(\theta-\mathbb{E}_\theta[\delta(X_{1:n})])^2+\mathbb{E}_\theta[(\mathbb{E}_\theta[\delta(X_{1:n})]-\delta(X_{1:n}))^2]$$

I only see two terms: one for the bias and another for the variance of the estimator or predictor, $\delta(X_{1:n})$. There is no additional noise term in the expected loss. As it should be, since the variability here is the variability of $\delta(X_{1:n})$, not of the sample itself.
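The identity can be checked numerically. The sketch below (the shrinkage estimator $0.8\,\bar X$ and all constants are illustrative choices, not from the text) verifies that the Monte Carlo mean squared error equals squared bias plus variance:

```python
import numpy as np

# Monte Carlo check of the decomposition
#   E[(theta - delta)^2] = (theta - E[delta])^2 + Var(delta)
# for a deliberately biased shrinkage estimator delta(X) = 0.8 * mean(X).
rng = np.random.default_rng(0)
theta, sigma, n, reps = 2.0, 1.0, 20, 200_000

samples = rng.normal(theta, sigma, size=(reps, n))
delta = 0.8 * samples.mean(axis=1)      # one estimate per replication

mse = np.mean((theta - delta) ** 2)     # left-hand side
bias_sq = (theta - delta.mean()) ** 2   # squared bias
variance = delta.var()                  # variance of the estimator

print(mse, bias_sq + variance)          # the two sides agree
```

The agreement is exact (up to floating point), since the decomposition is an algebraic identity once the expectation is replaced by an empirical average over replications.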

> Can bias-variance decomposition be performed with loss functions other than squared loss?

My interpretation of the squared bias plus variance decomposition [and the way I teach it] is that it is the statistical equivalent of Pythagoras' theorem: the squared distance between an estimator and a point within a certain set is the sum of the squared distance between the estimator and the set, plus the squared distance between the orthogonal projection onto the set and the point within the set. Any loss based on a distance with a notion of orthogonal projection, i.e., an inner product, i.e., essentially a Hilbert space, satisfies this decomposition.
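The Pythagorean view can be made concrete: the decomposition holds exactly because the inner-product cross term between the bias direction and the centered estimator vanishes. A quick numerical check (the estimator and constants are illustrative):

```python
import numpy as np

# The decomposition works because the cross term vanishes: with
# m = E[delta],  E[(theta - m)(m - delta)] = (theta - m) * E[m - delta] = 0,
# i.e. the two "legs" of the triangle are orthogonal in L^2.
rng = np.random.default_rng(1)
theta = 2.0
delta = 0.8 * rng.normal(theta, 1.0, size=(100_000, 20)).mean(axis=1)

m = delta.mean()
cross = np.mean((theta - m) * (m - delta))
print(cross)  # zero up to floating-point error
```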

> For a given model and dataset, is there more than one model whose expected loss is the minimum over all models, and if so, does that mean that there could be different combinations of bias and variance that yield the same minimum expected loss?

The question is unclear: if by minimum over models you mean
$$\min_\delta \mathbb{E}_\theta[(\theta-\delta(X_{1:n}))^2]$$
then there are many examples of statistical models and associated decisions with a *constant* expected loss (or risk). Take for instance the MLE of a Normal mean.
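The constant-risk example can be sketched numerically (parameter values are illustrative): the risk of the sample mean, the MLE of a Normal mean, is $\sigma^2/n$ under squared error whatever the value of $\theta$:

```python
import numpy as np

# The risk of the MLE of a Normal mean (the sample mean, under squared
# error) is sigma^2 / n for every theta -- a constant-risk example.
rng = np.random.default_rng(2)
sigma, n, reps = 1.0, 25, 400_000

for theta in (-3.0, 0.0, 7.0):
    xbar = rng.normal(theta, sigma, size=(reps, n)).mean(axis=1)
    risk = np.mean((xbar - theta) ** 2)
    print(theta, risk)  # ~ sigma^2 / n = 0.04 in every case
```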

> How can you calculate bias if you don’t know the true model?

In a generic sense, the bias is the distance between the true model and the closest model within the assumed family of distributions. If the true model is unknown, the bias can be approximated by the bootstrap.
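A minimal sketch of the bootstrap bias estimate, $\widehat{\text{bias}} = \tfrac{1}{B}\sum_b t(X^*_b) - t(X)$, illustrated on the plug-in variance estimator, whose true bias is $-\sigma^2/n$ (the statistic and all settings are illustrative choices):

```python
import numpy as np

# Bootstrap bias estimate: resample the data with replacement, recompute
# the statistic on each resample, and compare the average to the
# statistic on the original sample.
rng = np.random.default_rng(3)
n, B, sigma = 50, 5_000, 2.0
x = rng.normal(0.0, sigma, size=n)

t = lambda s: s.var(ddof=0)  # the (biased) plug-in variance estimator
boot = np.array([t(rng.choice(x, size=n)) for _ in range(B)])
bias_hat = boot.mean() - t(x)

print(bias_hat, -x.var(ddof=0) / n)  # bootstrap estimate vs plug-in bias
```

Both printed values are negative and of comparable magnitude; no knowledge of the true distribution was used in `bias_hat`.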

> Are there situations in which it makes more sense to minimize bias or variance rather than expected loss (the sum of squared bias and variance)?

When considering another loss function like
$$(\theta-\mathbb{E}_\theta[\delta(X_{1:n})])^2+\alpha\,\mathbb{E}_\theta[(\mathbb{E}_\theta[\delta(X_{1:n})]-\delta(X_{1:n}))^2],\qquad 0<\alpha,$$
pushing $\alpha$ to zero puts most of the evaluation on the bias, while pushing $\alpha$ to infinity switches the focus to the variance.
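One way to see the effect of $\alpha$ is on a shrinkage family $\delta_c = c\,\bar X$ for a Normal mean, where bias and variance are available in closed form, $(1-c)^2\theta^2$ and $c^2\sigma^2/n$ (the family and parameter values are illustrative, not from the answer):

```python
import numpy as np

# Weighted loss bias^2 + alpha * variance for delta_c = c * xbar:
#   (1 - c)^2 theta^2 + alpha * c^2 sigma^2 / n,
# minimized at c* = theta^2 / (theta^2 + alpha * sigma^2 / n).
theta, sigma, n = 1.0, 1.0, 10
c = np.linspace(0.0, 1.0, 10_001)

for alpha in (0.01, 1.0, 100.0):
    loss = (1 - c) ** 2 * theta**2 + alpha * c**2 * sigma**2 / n
    print(alpha, round(c[loss.argmin()], 3))

# c* -> 1 as alpha -> 0   (all weight on the bias, favor unbiasedness)
# c* -> 0 as alpha -> inf (all weight on the variance, favor shrinkage)
```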

**Attribution**
*Source: Link, Question Author: Vivek Subramanian, Answer Author: Ben*