# I don’t understand the variance of the binomial

I feel really dumb even asking such a basic question but here goes:

If I have a random variable $X$ that can take values $0$ and $1$, with $P(X=1) = p$ and $P(X=0) = 1-p$, then if I draw $n$ samples from it, I’ll get a binomial distribution.

The mean of the distribution is

$\mu = np = E(X)$

The variance of the distribution is

$\sigma^2 = np(1-p)$

Here is where my trouble begins:

Variance is defined by $\sigma^2 = E(X^2) - E(X)^2$. Because squaring the two possible outcomes of $X$ changes nothing ($0^2 = 0$ and $1^2 = 1$), we have $E(X^2) = E(X)$, so that means

$\sigma^2 = E(X^2) - E(X)^2 = E(X) - E(X)^2 = np - n^2p^2 = np(1-np) \neq np(1-p)$

Where does the extra $n$ go? As you can probably tell I am not very good at stats so please don’t use complicated terminology :s

---

A random variable $X$ taking values $0$ and $1$ with probabilities $P(X=1)=p$ and $P(X=0)=1-p$ is called a Bernoulli random variable with parameter $p$. This random variable has

$E(X) = p, \qquad \operatorname{Var}(X) = E(X^2) - E(X)^2 = p - p^2 = p(1-p).$

Suppose you have a random sample $X_{1},X_{2},\cdots,X_{n}$ of size $n$ from $\mathrm{Bernoulli}(p)$, and define a new random variable $Y=X_{1}+X_{2}+\cdots +X_{n}$. Then the distribution of $Y$ is called Binomial, with parameters $n$ and $p$, and because the $X_{i}$ are independent,

$E(Y) = np, \qquad \operatorname{Var}(Y) = \operatorname{Var}(X_{1}) + \cdots + \operatorname{Var}(X_{n}) = np(1-p).$

Your trouble comes from mixing up $X$ and $Y$. The identity $E(X^2) = E(X)$ holds for the Bernoulli variable $X$, whose mean is $p$, not $np$. Plugging in the right mean gives $\sigma^2 = p - p^2 = p(1-p)$ for a single draw, and summing $n$ independent draws multiplies that by $n$, giving $np(1-p)$. There is no extra $n$: the formula $np(1-np)$ comes from using the binomial mean $np$ inside a calculation that only applies to a single Bernoulli draw.
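If it helps, here is a quick sanity check you can run yourself. It is just a sketch with illustrative values of $n$, $p$, and the trial count: it simulates the sum of $n$ Bernoulli($p$) draws many times, compares the empirical variance with $np(1-p)$, and also evaluates the formula $np(1-np)$ from your derivation, which can even go negative, so it cannot be a variance.

```python
import random

# Illustrative parameters (assumptions, not from the question itself)
random.seed(0)
n, p, trials = 20, 0.3, 20000

# Each trial: draw n Bernoulli(p) values and record their sum Y
sums = []
for _ in range(trials):
    y = sum(1 if random.random() < p else 0 for _ in range(n))
    sums.append(y)

# Empirical mean and variance of Y
mean = sum(sums) / trials
var = sum((y - mean) ** 2 for y in sums) / trials

print(mean)                 # should be close to n*p = 6.0
print(var)                  # should be close to n*p*(1-p) = 4.2
print(n * p * (1 - n * p))  # the formula np(1-np) gives -30.0 here
```

With these parameters $np(1-np) = 6 \cdot (1 - 6) = -30$, while the simulated variance lands near $np(1-p) = 4.2$, matching the correct formula.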