Suppose X1,…,Xn are a random sample from a normal distribution with mean θ and variance 1. Find the maximum likelihood estimator of θ, under the restriction that θ≥0.

I have found the MLE when there is no restriction on θ.

$$L(\theta \mid \mathbf{x}) = (2\pi)^{-n/2}\exp\left(-\frac{1}{2}\sum_{i=1}^{n}(x_i-\theta)^2\right)$$

Differentiating the log-likelihood with respect to $\theta$ and setting the derivative equal to zero yields
$$\sum_{i=1}^{n}(x_i-\theta)=0 \implies \hat\theta = \frac{1}{n}\sum_{i=1}^{n}x_i = \bar{x}$$

The second derivative is negative, hence the MLE of $\theta$ is $\hat\theta = \bar{X}$. How would I incorporate the restriction $\theta \ge 0$?

**Answer**

Although the OP did not respond, I am answering this to showcase the method I proposed (and indicate what statistical intuition it may contain).

First, it is important to distinguish *on which entity the constraint is imposed*. In a deterministic optimization setting there is no such issue: there is no "true value" and, separately, an estimator of it; we just have to find the optimizer. But in a stochastic setting, there are conceivably two different cases:

a) "Estimate the parameter given a sample generated by a population that has a non-negative mean" (i.e. $\theta \ge 0$), and

b) "Estimate the parameter under the constraint that your estimator cannot take negative values" (i.e. $\hat\theta \ge 0$).

In the first case, imposing the constraint is including prior *knowledge* on the unknown parameter. In the second case, the constraint can be seen as reflecting a prior *belief* on the unknown parameter (or some technical, or “strategic”, limitation of the estimator).

The mechanics of the solution are the same, though. The objective function (the log-likelihood augmented by the non-negativity constraint on $\theta$) is

$$\tilde{L}(\theta \mid \mathbf{x}) = -\frac{n}{2}\ln(2\pi) - \frac{1}{2}\sum_{i=1}^{n}(x_i-\theta)^2 + \xi\theta, \qquad \xi \ge 0$$

Given concavity, the first-order condition is also sufficient for a global maximum. We have

$$\frac{\partial}{\partial\theta}\tilde{L}(\theta \mid \mathbf{x}) = \sum_{i=1}^{n}(x_i-\theta) + \xi = 0 \;\Rightarrow\; \hat\theta = \bar{x} + \frac{\xi}{n}$$
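As a quick numerical sanity check of this first-order condition in the unconstrained case ($\xi = 0$), here is a minimal Python sketch; the sample, its "true" mean of 2, and the function names are all illustrative, not part of the original answer:

```python
import random
import statistics

random.seed(0)

# Illustrative sample from N(theta=2, 1); the true mean here is hypothetical.
sample = [random.gauss(2.0, 1.0) for _ in range(200)]

def log_lik(theta, xs):
    """Log-likelihood of N(theta, 1), dropping the additive constant -n/2*ln(2*pi)."""
    return -0.5 * sum((x - theta) ** 2 for x in xs)

# With xi = 0 the stationary point should be the sample mean; check by grid search.
grid = [i / 1000 for i in range(-5000, 5001)]  # theta in [-5, 5], step 0.001
theta_hat = max(grid, key=lambda t: log_lik(t, sample))

# The grid maximizer agrees with x_bar up to the grid resolution.
assert abs(theta_hat - statistics.mean(sample)) < 1e-2
```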

1) If the solution lies at an interior point ($\Rightarrow \hat\theta > 0$), then $\xi^* = 0$ and so the solution is $\{\hat\theta = \bar{x} > 0,\ \xi^* = 0\}$.

2) If the solution lies on the boundary ($\Rightarrow \hat\theta = 0$), then we obtain the value of the multiplier at the solution, $\xi^* = -n\bar{x}$, and so the full solution is $\{\hat\theta = 0,\ \xi^* = -n\bar{x}\}$. But since the multiplier must be non-negative, this necessarily implies that in this case we would have $\bar{x} \le 0$.

*(There is nothing special about setting the constraint at zero. If, say, the constraint were $\theta \ge -2$, and the solution lay on the boundary, $\hat\theta = -2$, it would imply, in order for the multiplier to have a non-negative value, that $\bar{x} \le -2$.)*
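The two cases together give the closed form $\hat\theta = \max(\bar{x}, 0)$. A small Python sketch can verify this against brute-force maximization of the log-likelihood over $\theta \ge 0$; the samples and helper names are illustrative assumptions:

```python
import random
import statistics

random.seed(1)

def log_lik(theta, xs):
    """Log-likelihood of N(theta, 1), up to an additive constant."""
    return -0.5 * sum((x - theta) ** 2 for x in xs)

def constrained_mle(xs):
    """Closed-form solution of the two KKT cases: theta_hat = max(x_bar, 0)."""
    return max(statistics.mean(xs), 0.0)

def grid_mle(xs):
    """Brute-force maximization of the log-likelihood over theta >= 0."""
    grid = [i / 1000 for i in range(0, 5001)]  # theta in [0, 5], step 0.001
    return max(grid, key=lambda t: log_lik(t, xs))

# Case 1: interior solution (x_bar > 0), so theta_hat = x_bar and xi* = 0.
pos_sample = [random.gauss(1.5, 1.0) for _ in range(100)]
assert abs(constrained_mle(pos_sample) - grid_mle(pos_sample)) < 1e-2

# Case 2: boundary solution (x_bar < 0), so theta_hat = 0 and xi* = -n*x_bar > 0.
neg_sample = [random.gauss(-1.0, 1.0) for _ in range(100)]
assert constrained_mle(neg_sample) == 0.0
assert grid_mle(neg_sample) == 0.0
```

By concavity of the log-likelihood, when $\bar{x} < 0$ the objective is decreasing on $[0, \infty)$, which is why the grid search also lands exactly on the boundary.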

So, if the optimizer is 0, what are we facing here?

If we are in "constraint type-a", i.e. we have been told that the sample comes from a population that has a non-negative mean, then with $\hat\theta = 0$ chances are that the sample is not representative of this population.

If we are in "constraint type-b", i.e. we had the *belief* that the population has a non-negative mean, then with $\hat\theta = 0$ this belief is called into question.

(This is essentially an alternative way to deal with prior beliefs, outside the formal Bayesian approach.)

Regarding the properties of the estimator, one should carefully distinguish this constrained-estimation case from the case where the *true* parameter lies on the boundary of the parameter space.

**Attribution**
*Source: Link, Question Author: user30438, Answer Author: Alecos Papadopoulos*