# Comparing two distributions in Fourier space

There exist a number of tools that provide a distance between two continuous probability distributions. Most (semi)distances, like the Kullback-Leibler divergence, use probability density functions. However, the literature is quite sparse when it comes to comparing two distributions in Fourier space, i.e. via their characteristic function. Is there an elegant way to do so?

One notable distance that can be considered in Fourier space is the maximum mean discrepancy (MMD). One first selects a positive semi-definite kernel $$k : \mathcal X \times \mathcal X \to \mathbb R$$ corresponding to a reproducing kernel Hilbert space (RKHS) $$\mathcal H_k$$. The MMD is then

$$\begin{align} \operatorname{MMD}_k(\mathbb P, \mathbb Q) &= \sup_{f \in \mathcal H_k : \lVert f \rVert_{\mathcal H_k} \le 1} \mathbb E_{X \sim \mathbb P}[ f(X)] - \mathbb E_{Y \sim \mathbb Q}[ f(Y)] \\&= \left\lVert \mathbb E_{X \sim \mathbb P}[k(X, \cdot)] - \mathbb E_{Y \sim \mathbb Q}[k(Y, \cdot)] \right\rVert_{\mathcal H_k} \\&= \sqrt{ \mathbb{E}_{\substack{X, X' \sim \mathbb P\\Y, Y' \sim \mathbb Q}}\left[ k(X, X') + k(Y, Y') - 2 k(X, Y) \right] } .\end{align}$$
You might be familiar with the energy distance; it is a special case of the MMD for a particular choice of kernel.
The MMD is a proper metric for many choices of kernel, called characteristic kernels; for any positive semi-definite kernel it is at least a pseudometric (it satisfies the triangle inequality, but distinct distributions can be at distance zero).
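The third form above gives an easy plug-in estimator: replace the expectations with sample averages of kernel evaluations. Here is a minimal sketch (the Gaussian kernel and the biased V-statistic version are my choices for illustration; the function name is hypothetical):

```python
import numpy as np

def mmd_squared(X, Y, sigma=1.0):
    """Biased (V-statistic) estimate of squared MMD with a Gaussian kernel:
    mean k(X, X') + mean k(Y, Y') - 2 mean k(X, Y)."""
    def gram(A, B):
        # Pairwise squared distances, then the Gaussian kernel.
        d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
        return np.exp(-d2 / (2 * sigma**2))

    return gram(X, X).mean() + gram(Y, Y).mean() - 2 * gram(X, Y).mean()

rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, size=(500, 2))
Y = rng.normal(1.0, 1.0, size=(500, 2))  # shifted mean: clearly positive MMD
Z = rng.normal(0.0, 1.0, size=(500, 2))  # same distribution: near zero
```

An unbiased U-statistic version (dropping the diagonal terms of the within-sample Gram matrices) is usually preferred for testing, but the V-statistic keeps the sketch short.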

What does this have to do with the Fourier transform? Well, if $$\mathcal X = \mathbb R^d$$ and $$k(x, y) = \psi(x - y)$$, so that $$\psi : \mathbb R^d \to \mathbb R$$ is a positive-definite function, then it turns out the MMD can also be written as
$$\operatorname{MMD}_k(\mathbb P, \mathbb Q) = \sqrt{\int \left\lvert \varphi_{\mathbb P}(\omega) - \varphi_{\mathbb Q}(\omega) \right\rvert^2 \, \mathrm{d}\hat\psi(\omega)}$$
where $$\varphi$$ denotes the characteristic function, and $$\hat\psi$$ is the Fourier transform of $$\psi$$ in the measure sense. (It will always be a finite nonnegative measure; you can see from this definition that a translation-invariant kernel is characteristic iff its Fourier transform is everywhere positive.)
For a proof, see Corollary 4 of

Sriperumbudur et al., Hilbert space embeddings and metrics on probability measures, JMLR 2010.

The MMD, which you can easily estimate via the third form above, thus compares distributions by the $$L_2$$ distance between their full characteristic functions, with frequencies weighted according to the choice of kernel. For example, the common Gaussian kernel $$k(x, y) = \exp\left( -\frac{1}{2\sigma^2} \lVert x - y \rVert^2 \right)$$ will weight the frequencies with a Gaussian with mean 0 and variance $$1/\sigma^2$$.
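You can check this frequency-space view numerically: for the Gaussian kernel with bandwidth $$\sigma$$, the normalized spectral measure $$\hat\psi$$ is (I'm assuming the standard convention here) the Gaussian $$N(0, \sigma^{-2} I)$$, so squared MMD is the average of $$\lvert \varphi_{\mathbb P}(\omega) - \varphi_{\mathbb Q}(\omega) \rvert^2$$ over frequencies drawn from it. A Monte Carlo sketch:

```python
import numpy as np

def mmd_squared_fourier(X, Y, sigma=1.0, n_freqs=2000, seed=0):
    """Monte Carlo estimate of squared MMD in Fourier space: sample
    frequencies from the (normalized) spectral measure of the Gaussian
    kernel, N(0, I / sigma^2), and average the squared gap between the
    empirical characteristic functions."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    W = rng.normal(0.0, 1.0 / sigma, size=(n_freqs, d))  # omega ~ d psi-hat
    phi_P = np.exp(1j * X @ W.T).mean(axis=0)            # empirical char. fn. of P
    phi_Q = np.exp(1j * Y @ W.T).mean(axis=0)
    return np.mean(np.abs(phi_P - phi_Q) ** 2)
```

This is essentially the random Fourier features estimate of the MMD; with enough sampled frequencies it agrees with the kernel-space computation up to Monte Carlo error.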

Sometimes it’s computationally faster, and sometimes even more statistically informative, to instead compare the characteristic functions at a few particular locations rather than everywhere. It turns out a slight tweak works better: evaluating differences of smoothed characteristic functions at random locations:

Chwialkowski et al., Fast Two-Sample Testing with Analytic Representations of Probability Measures, NeurIPS 2015.
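To give the flavor of this approach (this is a simplified sketch, not the paper's exact statistic; the smoothing step and the careful null calibration are omitted, and equal sample sizes are assumed so that paired differences can be formed):

```python
import numpy as np

def cf_test_statistic(X, Y, n_freqs=5, seed=0):
    """Hotelling-type two-sample statistic built from empirical
    characteristic-function differences at a few random frequencies.
    Large values indicate the CFs differ at those frequencies."""
    rng = np.random.default_rng(seed)
    n, d = X.shape  # equal sample sizes assumed for the paired form
    T = rng.normal(size=(n_freqs, d))  # random test frequencies

    def feat(A):
        # Real and imaginary parts of e^{i t.x} as features, shape (n, 2J).
        return np.concatenate([np.cos(A @ T.T), np.sin(A @ T.T)], axis=1)

    Z = feat(X) - feat(Y)                                # paired differences
    mean = Z.mean(axis=0)
    cov = np.cov(Z, rowvar=False) + 1e-8 * np.eye(Z.shape[1])
    return n * mean @ np.linalg.solve(cov, mean)
```

Normalizing by the covariance is what makes the null distribution tractable (asymptotically chi-squared with $$2J$$ degrees of freedom in the paper's setup), which is why this family of tests runs in linear time.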

A follow-up work finds the most informative frequencies to test, rather than random:

Jitkrittum et al., Interpretable Distribution Features with Maximum Testing Power, NeurIPS 2016.
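A crude way to see what "informative frequencies" means (the paper optimizes a test-power criterion on held-out data; this toy sketch just keeps the random candidate where the empirical characteristic functions differ most, and the function name is hypothetical):

```python
import numpy as np

def best_frequency(X, Y, n_candidates=100, seed=0):
    """Among random candidate frequencies, return the one with the largest
    gap between the two empirical characteristic functions."""
    rng = np.random.default_rng(seed)
    T = rng.normal(size=(n_candidates, X.shape[1]))      # candidate frequencies
    diff = np.exp(1j * X @ T.T).mean(0) - np.exp(1j * Y @ T.T).mean(0)
    return T[np.argmax(np.abs(diff))]
```

For a mean-shifted Gaussian, the selected frequency tends to point along the shift direction at a moderate magnitude, which is also the kind of interpretable output the paper's optimization produces.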

These are all closely connected to classical tests based on empirical characteristic functions, as mentioned by kjetil in the comments.