# Intuition Behind Completeness

The definition of completeness is that a statistic $s(x)$ is complete if, for every measurable $g$,
$$E_\theta\big(g(s(x))\big) = 0 \ \ \forall\,\theta \quad\Rightarrow\quad g(s(x)) = 0 \ \text{a.s.}$$

I’ve heard that we can think of completeness as saying that if we wanted to estimate the zero function using a complete $s(x)$, then among all unbiased estimators of zero that are functions of the statistic, the only one is the function that is 0 almost surely. This seems like a bizarre notion: why would we want to estimate the zero function?
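To make the condition concrete, here is a small simulation sketch of a statistic that *fails* it (my own toy example, not from any text): with $X_1, X_2 \sim N(\theta, 1)$ i.i.d., the pair $(X_1, X_2)$ is sufficient but not complete, because $g(x_1, x_2) = x_1 - x_2$ has $E_\theta(g) = 0$ for every $\theta$ yet is not 0 almost surely.

```python
import numpy as np

rng = np.random.default_rng(0)

# For X1, X2 i.i.d. N(theta, 1), the statistic (X1, X2) is NOT complete:
# g(x1, x2) = x1 - x2 has E_theta[g] = 0 for every theta,
# yet g is not 0 almost surely.
n_sims = 200_000
means = {}
for theta in [-3.0, 0.0, 2.5]:
    x1 = rng.normal(theta, 1.0, n_sims)
    x2 = rng.normal(theta, 1.0, n_sims)
    means[theta] = (x1 - x2).mean()   # ~0 for every theta

print(means)  # each value is near zero, though x1 - x2 is almost never zero
```

So a nonzero "estimator of the zero function" exists here, which is exactly what completeness rules out.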

I’ve also heard that in estimating the parameters of a probability model $P_\theta$, one never needs more than a sufficient statistic: anything beyond the sufficient statistic provides no additional information. How does this connect to the definition of completeness above (via Basu’s theorem, maybe)?

Is there some better intuition for the (seemingly) bizarre condition above?

**For completeness**

Recall that if $E_\theta(f(x)) = u(\theta)$ for all $\theta$, then $f(x)$ is an unbiased estimate of $u(\theta)$.

But so is $g(x) = f(x) + h(x)$ whenever $E_\theta(h(x)) = 0$ for all $\theta$.

Completeness rules out every such nonzero $h$ that is a function of the statistic, so it assures that $f(x)$ is the *unique* unbiased estimate based on that statistic.
(The rest and many fine details I have forgotten.)
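The non-uniqueness point above can be checked by simulation (a toy sketch I made up, assuming $X_i \sim N(\mu, 1)$): the sample mean is unbiased for $\mu$, but so is the sample mean plus $x_1 - x_2$, since that difference has expectation zero; unbiasedness alone does not single out one estimator.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: X_i ~ N(mu, 1), i = 1..n.
# f = sample mean is unbiased for mu; h = x[0] - x[1] has E[h] = 0,
# so g = f + h is ALSO unbiased -- just noisier.
mu, n, n_sims = 1.7, 10, 100_000
x = rng.normal(mu, 1.0, size=(n_sims, n))

f = x.mean(axis=1)            # the usual unbiased estimate
g = f + (x[:, 0] - x[:, 1])   # also unbiased, with larger variance

print(f.mean(), g.mean())     # both ~ mu
print(f.var(), g.var())       # g is strictly noisier
```

Completeness of the underlying statistic is what forbids such a nonzero $h$ among functions of that statistic.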

**For sufficiency**

The easiest way to appreciate sufficiency is via the relative belief ratio, which is just the posterior probability divided by the prior probability (this is $k \times$ likelihood), calculated via ABC or two-stage simulation.

After conditioning on a sufficient statistic, conditioning on anything further does not change the posterior distribution. If two different statistics are both sufficient, conditioning on either will give the same posterior distribution, and the minimal sufficient statistic will give the best approximation of that posterior distribution.

But if this is for a course, this will likely just be distracting, as you will probably need to answer in terms of the statistic indexing the likelihood function.
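The ABC idea can be sketched concretely (a minimal toy example of my own, assuming Bernoulli data with a uniform prior, where the sum is sufficient): accepting prior draws whose simulated sufficient statistic matches the observed one recovers the same posterior as conditioning on the full data.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical data: x ~ Bernoulli(theta)^n with a Uniform(0, 1) prior.
# s = x.sum() is sufficient, so ABC that matches s exactly targets the
# same posterior as the full data: Beta(s + 1, n - s + 1).
n = 10
x_obs = np.array([1, 1, 0, 1, 0, 0, 1, 1, 1, 0])
s_obs = x_obs.sum()                    # 6 successes

n_draws = 400_000
theta = rng.uniform(0, 1, n_draws)     # draw from the prior
s_sim = rng.binomial(n, theta)         # simulate the sufficient statistic
accepted = theta[s_sim == s_obs]       # keep draws that match s exactly

exact_mean = (s_obs + 1) / (n + 2)     # Beta(7, 5) posterior mean
print(accepted.mean(), exact_mean)     # the two agree closely
```

Matching any finer summary of the data (or the full data vector) would only shrink the acceptance rate without changing the accepted distribution, which is the sufficiency claim above in simulation form.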