Maximum value of coefficient of variation for bounded data set

In the discussion following a recent question about whether the standard deviation can exceed the mean, one question was raised briefly but never fully answered. So I am asking it here.

Consider a set of $n$ nonnegative numbers
$x_i$ where $0 \leq x_i \leq c$ for $1 \leq i \leq n$. It is not
required that the $x_i$ be distinct, that is, the set could be a multiset.
The mean and variance of the set are defined as

$$\bar{x} = \frac{1}{n}\sum_{i=1}^n x_i \quad\text{and}\quad \sigma_x^2 = \frac{1}{n}\sum_{i=1}^n \left(x_i - \bar{x}\right)^2,$$
and the standard deviation is $\sigma_x$. Note that the set
of numbers is not a sample from a population and we are
not estimating a population mean or a population variance.
The question then is:

What is the maximum
value of $\dfrac{\sigma_x}{\bar{x}}$, the coefficient of variation, over
all choices of the $x_i$'s in the interval $[0,c]$?

The maximum value that I can find for $\frac{\sigma_x}{\bar{x}}$ is $\sqrt{n-1}$,
which is achieved when $n-1$ of the $x_i$ have value $0$ and the remaining
(outlier) $x_i$ has value $c$, giving

$$\bar{x} = \frac{c}{n}, \qquad \sigma_x^2 = \frac{(n-1)c^2}{n^2}, \qquad \frac{\sigma_x}{\bar{x}} = \sqrt{n-1}.$$
But this does not depend on $c$ at all, and I am wondering if larger
values, possibly dependent on both $n$ and $c$, can be achieved.
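As a numerical sanity check of this configuration, here is a minimal Python sketch (using `statistics.pstdev`, which matches the population definition of $\sigma_x$ in the question; the helper `cv` is just shorthand for $\sigma_x/\bar{x}$):

```python
from math import isclose, sqrt
from statistics import mean, pstdev  # pstdev = population standard deviation

def cv(xs):
    """Coefficient of variation sigma_x / x-bar, with population sigma."""
    return pstdev(xs) / mean(xs)

# Extremal configuration: n - 1 zeros plus a single outlier equal to c.
n = 5
for c in (1.0, 10.0, 100.0):
    xs = [0.0] * (n - 1) + [c]
    assert isclose(cv(xs), sqrt(n - 1))  # equals sqrt(n - 1), independent of c
```

The ratio is scale-invariant, which is why $c$ drops out.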

Any ideas? I am sure that this question has been studied in the statistical literature before, and so references, if not the actual results, would be much
appreciated.

Geometric solution

We know, from the geometry of least squares, that $\mathbf{\bar{x}} = (\bar{x}, \bar{x}, \ldots, \bar{x})$ is the orthogonal projection of the vector of data $\mathbf{x}=(x_1, x_2, \ldots, x_n)$ onto the linear subspace generated by the constant vector $(1,1,\ldots,1)$, and that $\sigma_x$ is directly proportional to the (Euclidean) distance between $\mathbf{x}$ and $\mathbf{\bar{x}}.$ Because $\sigma_x/\bar{x}$ is scale-invariant, we may hold $\bar{x}$ fixed. The non-negativity constraints are linear and distance is a convex function, whence the extremes of distance must be attained at the edges of the cone determined by the constraints. This cone is the positive orthant in $\mathbb{R}^n$ and its edges are the coordinate axes, whence it immediately follows that all but one of the $x_i$ must be zero at the maximum distances. For such a set of data, a direct (simple) calculation, with $\sigma_x$ as defined in the question (dividing by $n$), shows $\sigma_x/\bar{x}=\sqrt{n-1}.$
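A crude randomized search (a sketch, not a proof) is consistent with no interior configuration beating the bound $\sqrt{n-1}$ under the question's population definition of $\sigma_x$:

```python
import random
from math import sqrt
from statistics import mean, pstdev

random.seed(0)
n = 6
bound = sqrt(n - 1)  # the conjectured maximum of sigma_x / x-bar
for _ in range(10_000):
    # random nonnegative data in [0, 1]; mean is positive almost surely
    xs = [random.uniform(0.0, 1.0) for _ in range(n)]
    assert pstdev(xs) / mean(xs) <= bound + 1e-12
```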

Solution exploiting classical inequalities

$\sigma_x/\bar{x}$ is optimized simultaneously with any monotonic transformation thereof. In light of this, let's maximize

$$f(x_1, x_2, \ldots, x_n) = \frac{x_1^2+x_2^2+\cdots+x_n^2}{\left(x_1+x_2+\cdots+x_n\right)^2} = \frac{1}{n}\left(\left(\frac{\sigma_x}{\bar{x}}\right)^2 + 1\right).$$
(The formula for $f$ may look mysterious until you realize it just records the steps one would take in algebraically manipulating $\sigma_x/\bar{x}$ to get it into a simple looking form, which is the left hand side.)
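The relationship between $f$ and $\sigma_x/\bar{x}$ can be spot-checked numerically; a small sketch, with $f$ written out as the sum of squares divided by the squared sum:

```python
from math import isclose
from statistics import mean, pstdev

xs = [0.5, 1.0, 2.5, 4.0]  # arbitrary nonnegative data, not all zero
n = len(xs)
f = sum(x * x for x in xs) / sum(xs) ** 2
cv = pstdev(xs) / mean(xs)  # population sigma over the mean
assert isclose(f, (cv ** 2 + 1) / n)  # f is a monotone transform of cv
```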

An easy way begins with Hölder's Inequality,

$$x_1^2+x_2^2+\cdots+x_n^2 \le \left(x_1+x_2+\cdots+x_n\right)\max\left(\{x_i\}\right).$$
(This needs no special proof in this simple context: merely replace one factor of each term $x_i^2 = x_i \times x_i$ by the maximum component $\max(\{x_i\})$: obviously the sum of squares will not decrease. Factoring out the common term $\max(\{x_i\})$ yields the right hand side of the inequality.)
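The replace-one-factor argument is easy to spot-check numerically (a sketch; the inequality says the sum of squares is at most the sum times the maximum):

```python
import random

random.seed(1)
for _ in range(1_000):
    xs = [random.uniform(0.0, 5.0) for _ in range(random.randint(1, 10))]
    # each x_i * x_i <= x_i * max(xs), so summing over i gives the bound
    assert sum(x * x for x in xs) <= sum(xs) * max(xs) + 1e-9
```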

Because the $x_i$ are not all $0$ (that would leave $\sigma_x/\bar{x}$ undefined), division by the square of their sum is valid and gives the equivalent inequality

$$f(x_1, x_2, \ldots, x_n) \le \frac{\max\left(\{x_i\}\right)}{x_1+x_2+\cdots+x_n}.$$
Because the denominator cannot be less than the numerator (which itself is just one of the terms in the denominator), the right hand side is dominated by the value $1$, which is achieved only when all but one of the $x_i$ equal $0$. Whence

$$\frac{\sigma_x}{\bar{x}} \le \sqrt{n \cdot 1 - 1} = \sqrt{n-1}.$$
Alternative approach

Because the $x_i$ are nonnegative and cannot sum to $0$, the values $p(i) = x_i/(x_1+x_2+\ldots+x_n)$ determine a probability distribution $F$ on $\{1,2,\ldots,n\}$. Writing $s$ for the sum of the $x_i$, we recognize

$$f(x_1, x_2, \ldots, x_n) = \sum_{i=1}^n \left(\frac{x_i}{s}\right)^2 = \sum_{i=1}^n p(i)\,p(i) = \mathbb{E}_F\left[p\right].$$
The axiomatic fact that no probability can exceed $1$ implies this expectation cannot exceed $1$, either, but it's easy to make it equal to $1$ by setting all but one of the $p(i)$ equal to $0$, so that exactly one of the $x_i$ is nonzero. Compute the coefficient of variation as in the last line of the geometric solution above.
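A small numeric sketch of this identification, with $f$ again written out explicitly as the sum of squares over the squared sum:

```python
from math import isclose

xs = [1.0, 2.0, 3.0, 4.0]               # arbitrary nonnegative data, not all zero
s = sum(xs)
p = [x / s for x in xs]                 # the probabilities p(i)
f = sum(x * x for x in xs) / (s * s)
expectation = sum(pi * pi for pi in p)  # E_F[p] = sum_i p(i) * p(i)
assert isclose(f, expectation)
assert expectation <= 1.0               # no probability exceeds 1, so neither does E_F[p]
```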