# A random variable that induces a $\sigma$-algebra the same as the one in the sample space

Consider a probability space $(\Omega, \mathcal{F}, P)$ where $\Omega$ is the sample space, $\mathcal{F}$ is a $\sigma$-algebra on $\Omega$, and $P$ is the probability measure. Let $X:\Omega\to\mathbb{R}$ be a random variable, inducing a $\sigma$-algebra $\mathcal{F}_X=\{X^{-1}(B)\mid B\in\mathcal{B}\}$, where $\mathcal{B}$ is the Borel $\sigma$-algebra on $\mathbb{R}$.

Suppose we have another random variable $Y$ with its induced $\sigma$-algebra $\mathcal{F}_Y$. The conditional expectation can then be defined as $E[Y|X]\stackrel{\text{a.s.}}{=}g$, where $g$ is an $\mathcal{F}_X$-measurable function determined by $E[YI_A]=\int_A g\,\mathrm{d}P$ for all $A\in \mathcal{F}_X$; such a $g$ exists by the Radon–Nikodym theorem.

I wonder what happens if $\mathcal{F}_X=\mathcal{F}$ already. It looks like $E[Y|X]=Y$, so $X$ in a sense carries complete information about any other random variable, but I am not sure. Or, can anybody give an example of this kind of $X$? Thank you.

Some references: Theory of Statistics by Mark J. Schervish.

When you chase the definitions, the issues become trivial (although perhaps still unintuitive):

• $\mathbb{E}(Y\mid X) = \mathbb{E}(Y\mid \mathcal{F}_X)$ by definition.

• For any sub-$\sigma$-algebra $\mathcal{G}\subset \mathcal{F}$, $\mathbb{E}(Y\mid \mathcal{G})$ is defined to be any $\mathcal{G}$-measurable function for which

$$\int_G \mathbb{E}(Y\mid \mathcal{G})(\omega) \,\mathrm d \mathbb{P}(\omega) = \int_G Y(\omega) \,\mathrm d \mathbb{P}(\omega)$$

for every $G\in \mathcal{G}$.

Therefore, whenever $\mathcal{F}_X = \mathcal{F}$, it must be the case that

1. $\mathbb{E}(Y\mid X)$ is $\mathcal{F}$-measurable and

2. $\int_F \mathbb{E}(Y\mid X)(\omega) \,\mathrm d \mathbb{P}(\omega) = \int_F Y(\omega) \,\mathrm d \mathbb{P}(\omega)$ for every $F\in \mathcal{F}$.

The equality of the integrals for all measurable sets implies (as is well known and established early in any account of Lebesgue integration) that $\mathbb{E}(Y\mid X)$ must equal $Y$ almost surely (a.s.): they can differ only on a set of measure zero.
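This fact is easy to check by brute force on a finite space, where every subset is measurable. Here is a minimal Python sketch; the space, the measure, and the two functions are made up purely for illustration:

```python
from itertools import chain, combinations

# A finite probability space: if two functions have equal integrals over
# EVERY measurable set (here: every subset of Omega), they can differ
# only where the probability is zero.  All values below are hypothetical.
Omega = [0, 1, 2, 3]
p = {0: 0.4, 1: 0.3, 2: 0.3, 3: 0.0}
Y = {0: 1.0, 1: 2.0, 2: 3.0, 3: 4.0}
g = {0: 1.0, 1: 2.0, 2: 3.0, 3: -99.0}  # agrees with Y except on the null set {3}

def integral(f, A):
    """Integral of f over the set A with respect to the measure p."""
    return sum(f[w] * p[w] for w in A)

subsets = list(chain.from_iterable(combinations(Omega, r) for r in range(5)))
assert all(abs(integral(Y, A) - integral(g, A)) < 1e-12 for A in subsets)
assert g != Y  # yet they are equal almost surely, since P({3}) = 0
```

The point of the last two assertions: equal integrals over all $2^4 = 16$ measurable sets do not force pointwise equality, only equality off a null set.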

The second part of the question requests an example. Let’s construct a very simple but not entirely trivial one. It concerns a finite binomial process used to model (among other things) changes in prices of a financial asset over time. For simplicity, I have restricted it to a sequence of two times during which the price could go up ($+$) or down ($-$), whence

• $\Omega$ can be identified with the set $\{++, +-, -+, --\}$.

• $\mathcal{F}$ consists of all subsets of $\Omega$ (the discrete $\sigma$-algebra).

• $\mathbb{P}$ is determined by its values on the atoms, written $p_{++}=\mathbb{P}(\{++\})$, etc.

Let $Y$ be the price of the asset after the first time and $X$ be its price after the second time. (These natural and meaningful descriptions show this is not some pathological construction we’re about to review.)

The figure displays this model as a binary tree in which the individual (conditional!) probabilities label the branches, the elements of $\Omega$ are the four possible paths from left to right through the tree, and the values of $Y$ and $X$ are indicated at the points where they are determined.

Suppose all four prices assigned by $X$ are distinct. Then, since each singleton price $\{x\}$ belongs to $\mathcal{B}(\mathbb{R})$, $\mathcal{F}_X$ contains all the atoms, whence it is $\mathcal{F}$ itself. But $Y$ can assign at most two distinct prices, $Y_{+} = Y(++) = Y(+-)$ and $Y_{-} = Y(-+) = Y(--)$. The inverse images of these two prices are the sets $+_1=\{++,+-\}$ and $-_1=\{-+,--\}$. They generate a strict sub-$\sigma$-algebra of $\mathcal{F}$: it has four measurable sets and does not include any of the atoms. It describes what is “known” after the first time but before the second one.
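On a four-point space, both induced $\sigma$-algebras can be enumerated directly. A Python sketch; the specific prices are hypothetical, chosen only so that $X$ takes four distinct values while $Y$ depends only on the first move:

```python
from itertools import combinations

# The four outcomes, written as strings of up/down moves.
Omega = ["++", "+-", "-+", "--"]

# Hypothetical prices: X (after time 2) takes four distinct values,
# Y (after time 1) depends only on the first move.
X = {"++": 4.0, "+-": 1.0, "-+": 2.0, "--": 0.5}
Y = {"++": 2.0, "+-": 2.0, "-+": 1.0, "--": 1.0}

def induced_sigma_algebra(f):
    """The sets f^{-1}(B): all unions of the level sets (atoms) of f."""
    atoms = {}
    for w in Omega:
        atoms.setdefault(f[w], set()).add(w)
    atoms = list(atoms.values())
    sets = set()
    for r in range(len(atoms) + 1):
        for combo in combinations(atoms, r):
            sets.add(frozenset().union(*combo) if combo else frozenset())
    return sets

F_X = induced_sigma_algebra(X)
F_Y = induced_sigma_algebra(Y)
print(len(F_X), len(F_Y))  # 16 and 4: F_X is the full power set, F_Y is strictly smaller
```

Since $X$ has four distinct values, its atoms are the singletons and their $2^4=16$ unions exhaust $\mathcal{F}$; $Y$ has only the two atoms $+_1$ and $-_1$, giving the four-set subalgebra described above.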

The definition of conditional expectation needs to be checked only on a basis for $\mathcal{F}_X$. The set of its atoms is most convenient. Here is an example of a calculation for the atom $\{-+\}$:

$$\mathbb{E}(Y\mid X)(-+)\,p_{-+} = \int_{\{-+\}} \mathbb{E}(Y\mid X)(\omega) \,\mathrm d \mathbb{P}(\omega) = \int_{\{-+\}} Y(\omega) \,\mathrm d \mathbb{P}(\omega) = Y(-+)\,p_{-+} = Y_{-}\,p_{-+}.$$

The parallel calculations for the other atoms make it clear that for all $\omega\in\Omega$,

$$\mathbb{E}(Y\mid X)(\omega)\,p_\omega = \int_{\{\omega\}} Y \,\mathrm d \mathbb{P} = Y(\omega)\,p_\omega,$$

where the second equality computes the integral directly. From this we can construct two interesting examples:

1. Suppose every outcome has nonzero probability. Then we may always divide both sides by $p_\omega$, no matter what $\omega$ may be, and obtain

$$\mathbb{E}(Y\mid X)(\omega) = Y(\omega).$$

The conditional expectation of $Y$ is just $Y$ itself.

2. Suppose $p_{++}=p_{+-}=1/2$ and $p_{--}=p_{-+}=0$. (This models a situation where an initial decrease is impossible.) Then we may define $Y_{-}$ to be any value, since it does not matter (due to the impossibility of this event): the defining equality for $\omega={--}$,

$$\mathbb{E}(Y\mid X)(--)\,p_{--} = Y(--)\,p_{--},$$

and its counterpart for $\omega=-+$ automatically hold, because both sides are zero. Thus, it is not necessarily the case that $\mathbb{E}(Y\mid X) = Y$ everywhere, but the set of $\omega$ where the two sides differ must have zero probability (and, of course, be measurable).
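Both examples can be worked out numerically on the tree. A Python sketch, with hypothetical prices and probabilities; the helper solves the defining equality atom by atom, arbitrarily picking $0$ where the equality is vacuous:

```python
# E[Y|X] computed atom-by-atom on the two-period tree.
# The prices below are hypothetical; X takes four distinct values,
# so the atoms of F_X are the singletons {w}.
Omega = ["++", "+-", "-+", "--"]
X = {"++": 4.0, "+-": 1.0, "-+": 2.0, "--": 0.5}
Y = {"++": 2.0, "+-": 2.0, "-+": 1.0, "--": 1.0}

def cond_exp_given_X(p):
    """Solve E[Y|X](w) * p_w = Y(w) * p_w on each singleton atom.
    Where p_w = 0 the equality is vacuous, so ANY value works; pick 0."""
    return {w: (Y[w] if p[w] > 0 else 0.0) for w in Omega}

# Example 1: every outcome has positive probability -> E[Y|X] = Y exactly.
p1 = {"++": 0.3, "+-": 0.2, "-+": 0.1, "--": 0.4}
assert cond_exp_given_X(p1) == Y

# Example 2: an initial decrease is impossible -> this version of E[Y|X]
# agrees with Y only off the null set {-+, --}.
p2 = {"++": 0.5, "+-": 0.5, "-+": 0.0, "--": 0.0}
g = cond_exp_given_X(p2)
assert all(g[w] == Y[w] for w in Omega if p2[w] > 0)
assert sum(p2[w] for w in Omega if g[w] != Y[w]) == 0.0  # differ only on a null set
```

The choice of $0$ on the null atoms is arbitrary by design: any other value would give an equally valid version of the conditional expectation.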

Looking back at the tree might supply some intuition: in conditioning $Y$ on $X$, whose values are determined later, we thereby have complete information about $Y$ along any set of paths having nonzero probability of occurring.