What is the importance of the function $e^{-x^2}$ in statistics?

In my calculus class, we encountered the function $e^{-x^2}$, or the “bell curve”, and I was told that it has frequent applications in statistics.

Out of curiosity, I want to ask: Is the function $e^{-x^2}$ truly important in statistics? If so, what is it about $e^{-x^2}$ that makes it useful, and what are some of its applications?

I couldn’t find much information about the function on the internet, but after doing some research I found a link between bell curves in general and something called the normal distribution. A Wikipedia page connects these types of functions to applications in statistics, stating:

“The normal distribution is considered the most prominent probability distribution in statistics. There are several reasons for this. First, the normal distribution arises from the central limit theorem, which states that under mild conditions the sum of a large number of random variables drawn from the same distribution is distributed approximately normally, irrespective of the form of the original distribution.”

So, if I gather a large amount of data from some kind of survey or the like, could it be distributed approximately according to a function like $e^{-x^2}$? The function is symmetric, so is its symmetry (i.e., what makes it useful for the normal distribution) also what makes it so useful in statistics? I’m merely speculating.

In general, what makes $e^{-x^2}$ useful in statistics? If the normal distribution is the only area, then what makes $e^{-x^2}$ unique or specifically useful among other Gaussian-type functions in the normal distribution?

The reason that this function is important is indeed the normal distribution and its closely linked companion, the central limit theorem (we have some good explanations of the CLT in other questions here).

In statistics, the CLT can typically be used to compute probabilities approximately, making statements like “we are 95 % confident that…” possible (the meaning of “95 % confident” is often misunderstood, but that’s a different matter).
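To make the quoted CLT statement concrete, here is a minimal simulation sketch (the sample sizes and variable names are my own choices, not from the answer): the mean of many Uniform(0, 1) draws is standardized and checked against the central 95 % region of the standard normal distribution.

```python
import math
import random

# Sketch: even though Uniform(0, 1) is not bell-shaped at all, the
# standardized mean of many draws behaves like a standard normal.
random.seed(0)

n = 1000        # draws per sample mean
trials = 1000   # number of sample means

mu, sigma = 0.5, math.sqrt(1 / 12)   # mean and std. dev. of Uniform(0, 1)

inside = 0
for _ in range(trials):
    xbar = sum(random.random() for _ in range(n)) / n
    z = (xbar - mu) / (sigma / math.sqrt(n))   # standardized sample mean
    if abs(z) <= 1.96:                         # central 95 % of the normal
        inside += 1

coverage = inside / trials
print(coverage)   # roughly 0.95
```

The bound 1.96 here is exactly the normal quantile behind “95 % confident” statements.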

The function $\exp\Big(-\frac{(x-\mu)^2}{2\sigma^2}\Big)$ is (a scaled version of) the density function of the normal distribution. If a random quantity can be modelled using the normal distribution, this function describes how likely different possible values of said quantity are. Outcomes in regions with high density are more likely than outcomes in regions with low density.

$\mu$ and $\sigma$ are parameters that determine the location and scale of the density function. It is symmetric about $\mu$, so changing $\mu$ means that you shift the function to the right or to the left. $\sigma$ determines the value of the density function at its maximum ($x=\mu$) and how quickly it goes to 0 as $x$ moves away from $\mu$. In that sense, changing $\sigma$ changes the scale of the function.
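A tiny numerical sketch (the function name `bell` is my own) makes the roles of $\mu$ and $\sigma$ visible:

```python
import math

# Unnormalized normal density from the answer: exp(-(x - mu)^2 / (2 sigma^2)).
def bell(x, mu=0.0, sigma=1.0):
    return math.exp(-(x - mu) ** 2 / (2 * sigma ** 2))

# Symmetry about mu: equal values at mu - d and mu + d.
print(bell(2 - 0.7, mu=2.0), bell(2 + 0.7, mu=2.0))

# Larger sigma -> slower decay away from the peak at x = mu.
print(bell(1.0, sigma=1.0), bell(1.0, sigma=2.0))
```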

For the particular choice $\mu=0$ and $\sigma=1/\sqrt{2}$ the density is (proportional to) $e^{-x^2}$. This is not a particularly interesting choice of these parameters, but it has the benefit of yielding a density function that looks slightly simpler than all others.
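Substituting these values shows the simplification directly:

$$\exp\Big(-\frac{(x-0)^2}{2\,(1/\sqrt{2})^2}\Big) = \exp\Big(-\frac{x^2}{2\cdot\frac{1}{2}}\Big) = e^{-x^2}.$$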

On the other hand, we can go from $e^{-x^2}$ to any other normal density by the change of variables $x=\frac{u-\mu}{\sqrt{2}\sigma}$. The reason that your textbook says that $e^{-x^2}$, and not $\exp\Big(-\frac{(x-\mu)^2}{2\sigma^2}\Big)$, is a very important function is that $e^{-x^2}$ is simpler to write.
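As a quick numerical sanity check (the parameter values $\mu=3$, $\sigma=1.5$ are arbitrary), the substitution can be verified pointwise:

```python
import math

# Substituting x = (u - mu) / (sqrt(2) * sigma) into e^{-x^2} should
# reproduce exp(-(u - mu)^2 / (2 sigma^2)) for every u.
mu, sigma = 3.0, 1.5
for u in [-1.0, 0.0, 2.5, 4.0]:
    x = (u - mu) / (math.sqrt(2) * sigma)
    assert abs(math.exp(-x ** 2)
               - math.exp(-(u - mu) ** 2 / (2 * sigma ** 2))) < 1e-12
print("substitution checks out")
```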