Change of Random Variables

I always forget the rules for transforming random variables, so I’ve made this page to show two simple examples, both involving the humble Gaussian.

Generic Theory

In general, suppose we are given a random variable X with density p_X( x ), for x \in \mathcal{X}.

Suppose we have a quantity of interest Y = \phi( X ).

We assume that the function \phi is invertible, so each x maps to exactly one y and vice versa:

    \begin{align*} x = \phi^{-1}( \phi( x ) ) \hspace{1cm}  y = \phi( \phi^{-1}( y) ) \end{align*}

To help us remember the input/output of the transformation function, sometimes I’ll write

    \begin{align*} \phi(x) &= g_{x \rightarrow y}( x) \\ \phi^{-1}(y) &= g_{y\rightarrow x}( y ) \end{align*}

We wish to find the density p_Y(y) of random variable Y, for y \in \mathcal{Y}.

    \begin{align*} p_Y( y ) = p_X(  g_{y\rightarrow x}( y)  ) |J[ g_{y\rightarrow x} ] | \end{align*}

There are two pieces here on the RHS, which together create a density over the variable Y.

  • the density function p_X(): it takes a transformed version of y as input instead of x, and it determines the general shape of the resulting density.

  • the Jacobian determinant |J|: the determinant of the Jacobian of the transformation from Y to X captures how the change of variables “warps” the density function. For example, if Y is a tiny fraction of X, the Jacobian scales the density up so that probability mass is conserved: a small interval of y values must carry the same mass as the corresponding wider interval of x values.

For 1D variables x,y, the Jacobian is simply the first derivative:

    \begin{align*} J[ g_{y\rightarrow x} ] = \frac{\partial}{\partial y} g_{y\rightarrow x}( y ) = \frac{\partial}{\partial y} \phi^{-1}( y ) \end{align*}

For multivariate x,y, let g_{y\rightarrow x}(d) denote the d-th dimension of the resulting vector. Then

    \begin{align*} | J[ g_{y\rightarrow x} ] | = \left| \det \left[ \begin{array}{ c c c c} \frac{\partial}{\partial y_1} g_{y\rightarrow x}(1) & \frac{\partial}{\partial y_1} g_{y\rightarrow x}(2) & \ldots & \frac{\partial}{\partial y_1} g_{y\rightarrow x}(D)\\ \frac{\partial}{\partial y_2} g_{y\rightarrow x}(1) & \frac{\partial}{\partial y_2} g_{y\rightarrow x}(2) & \ldots & \frac{\partial}{\partial y_2} g_{y\rightarrow x}(D) \\ \vdots & \vdots & \ddots & \vdots \\ \frac{\partial}{\partial y_D} g_{y\rightarrow x}(1) & \frac{\partial}{\partial y_D} g_{y\rightarrow x}(2) & \ldots & \frac{\partial}{\partial y_D} g_{y\rightarrow x}(D)\\ \end{array} \right] \right| \end{align*}
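If you'd rather not push the partial derivatives around by hand, here is a minimal numerical sketch of this machinery (just an illustration, assuming NumPy is available; jacobian_det is a made-up helper name) that estimates |J| with central finite differences:

    import numpy as np

    def jacobian_det(g, y, eps=1e-6):
        """Estimate |det J[g]| at a point y via central finite differences.

        g maps a length-D vector y to a length-D vector x = g_{y->x}(y).
        """
        y = np.asarray(y, dtype=float)
        D = y.size
        J = np.empty((D, D))
        for i in range(D):
            step = np.zeros(D)
            step[i] = eps
            # Row i holds the partial derivatives of g with respect to y_i.
            J[i] = (np.asarray(g(y + step)) - np.asarray(g(y - step))) / (2 * eps)
        return abs(np.linalg.det(J))

    # Sanity check on the linear map g(y) = 2y in 2D, where |det J| = 4 exactly.
    print(jacobian_det(lambda y: 2 * y, np.array([0.3, -1.2])))   # ~ 4.0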

Simple Multiplication in 1D

Consider the standard 1D Gaussian random variable: X \sim \mathcal{N}( 0, 1 ).

    \begin{align*} p_X( x ) = \frac{1}{\sqrt{2\pi}} e^{ -\frac{1}{2} x^2 } \end{align*}

Now, what is the distribution of Y = \phi(X) = \frac{1}{2} X?

First, we use simple algebra to identify the reverse transform: X = g_{y \rightarrow x}(Y) = 2Y. So we’ll replace x with 2y in the density function p_X().

Next, we compute the Jacobian term: |J| = \frac{\partial}{\partial y} 2y = 2.

Finally, we plug these results into the final density equation:

    \begin{align*} p_Y( y ) &= p_X( g_{y\rightarrow x} (y) ) |J| \\ &= \frac{1}{\sqrt{2\pi}} e^{ -\frac{1}{2} (2y)^2 } \cdot |2| \\ &= \frac{1}{\sqrt{2\pi}} \frac{1}{\sigma}  e^{ -\frac{1}{2} \frac{ y^2 }{\sigma^2} } \end{align*}

where \sigma = \frac{1}{2}. So we recover the density for Y \sim \mathcal{N}(0, \sigma^2=\frac{1}{4} ), as expected!
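As a quick sanity check, here is a small Monte Carlo sketch (assuming NumPy and SciPy are available) that compares samples of Y = \frac{1}{2} X against the derived density:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    x = rng.standard_normal(200_000)   # X ~ N(0, 1)
    y = 0.5 * x                        # Y = phi(X) = X / 2

    # The derived density says Y ~ N(0, 1/4), i.e. standard deviation 1/2.
    print(y.std())                     # ~ 0.5

    # Compare an empirical histogram of Y against the derived pdf at a few points.
    hist, edges = np.histogram(y, bins=200, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    grid = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])
    print(np.interp(grid, centers, hist))   # empirical density
    print(stats.norm(0, 0.5).pdf(grid))     # derived density: close match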

2D Gaussian Transformation from Cartesian to Polar

We start with a simple 2D standard spherical Gaussian (zero mean, unit variance). Let \vec{x} = [x_1, x_2] and

    \begin{align*} p_X( \vec{x} ) = \frac{1}{2\pi} e^{ -\frac{1}{2} (x_1^2 + x_2^2) } \end{align*}

We wish to transform to new variables \vec{y} = [ r, \theta ], where the domain changes to 0 \le r < \infty and 0 \le \theta < 2\pi.

Step 1: Find the inverse mapping g_{y \rightarrow x}(). Standard results show this to be:

    \begin{align*} x_1 = r \cos( \theta )  \hspace{2cm} x_2 = r \sin(\theta) \end{align*}

Step 2: Compute the Jacobian matrix and find its determinant:

    \begin{align*} J &=  \left[ \begin{array}{c c} \frac{\partial}{\partial r} x_1( \vec{y} ) & \frac{\partial}{\partial r} x_2( \vec{y} ) \\ \frac{\partial}{\partial \theta} x_1( \vec{y} ) & \frac{\partial}{\partial \theta} x_2( \vec{y} ) \\ \end{array} \right] \\ &= \left[ \begin{array}{c c} \cos \theta & \sin \theta \\ -r \sin \theta & r \cos \theta \\ \end{array} \right] \\ \end{align*}

Using the simple rule for 2×2 determinants, we find |J| = r \cos^2( \theta ) + r \sin^2( \theta ) = r.
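If you don't feel like trusting my calculus, a short symbolic sketch (assuming SymPy is available) verifies the determinant:

    import sympy as sp

    r, theta = sp.symbols('r theta', positive=True)
    x1 = r * sp.cos(theta)   # g_{y->x} component 1
    x2 = r * sp.sin(theta)   # g_{y->x} component 2

    # Rows are derivatives with respect to r and theta, matching the text.
    J = sp.Matrix([[sp.diff(x1, r),     sp.diff(x2, r)],
                   [sp.diff(x1, theta), sp.diff(x2, theta)]])
    print(sp.simplify(J.det()))   # r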

Finally, we come to Step 3: plug these results into the general formula.

    \begin{align*} p_Y( \vec{y} ) &= p_X( g_{y\rightarrow x}(\vec{y}) ) |J| \\ &= \frac{1}{2\pi} e^{ -\frac{1}{2} (r^2 \cos^2 \theta + r^2 \sin^2 \theta) } |r| \\ &= \frac{1}{2\pi} r e^{ -\frac{1}{2} r^2 } \end{align*}

where the last step uses the fact that r is always non-negative, so |r| = r.

Importantly, keep in mind that this density is a function of both r and \theta, even though \theta does not appear in the density function. This implies the distribution is uniform over all possible angles \theta, which makes sense because the Gaussian is spherically symmetric.
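Both claims are easy to confirm empirically with a Monte Carlo sketch (assuming NumPy): \theta should be uniform on [0, 2\pi), and r should follow the density r e^{-\frac{1}{2} r^2}, which is a Rayleigh distribution with unit scale:

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.standard_normal((200_000, 2))   # 2D standard spherical Gaussian

    r = np.hypot(x[:, 0], x[:, 1])                            # radius
    theta = np.mod(np.arctan2(x[:, 1], x[:, 0]), 2 * np.pi)   # angle in [0, 2pi)

    # Uniform on [0, 2pi): mean ~ pi ~ 3.1416, variance ~ (2pi)^2 / 12 ~ 3.29.
    print(theta.mean(), theta.var())

    # Rayleigh(1): the sample mean of r should be sqrt(pi / 2) ~ 1.2533.
    print(r.mean(), np.sqrt(np.pi / 2))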

Aside: Calculating the normalization constant of the Gaussian

We can use the polar form of the 2D Gaussian to figure out why the pesky \sqrt{2\pi} normalization constant always appears in every Gaussian PDF formula.

We can show using high-school calculus that the generic density shape p_Y( \vec{y} = [r, \theta] ) \propto r e^{-0.5 r^2} integrates to 2\pi, which is why the normalized density carries the constant \frac{1}{2\pi}.

Integrating over both r and \theta, we find

    \begin{align*} \mathbf{Z}_{2D} &= \int_{\theta=0}^{2\pi} \int_{r=0}^{\infty} r e^{-0.5 r^2} \, dr \, d\theta \\ &= \int_{\theta=0}^{2\pi}d\theta  \cdot \int_{r=0}^{\infty} r e^{-0.5 r^2} \, dr \\ &= 2\pi \int_{r=0}^{\infty} r e^{-0.5 r^2} \, dr  \end{align*}

Using the substitution w = 0.5 r^2, which enforces dw = r dr but leaves the bounds the same, we have

    \begin{align*} \mathbf{Z}_{2D} &= 2\pi \int_{w=0}^{\infty} e^{-w} dw \\ &= 2\pi \left[ -e^{-w} \Big|_{0}^{\infty} \right] = 2\pi (  0 - (-1) ) = 2\pi  \end{align*}

Thus, the normalization constant \mathbf{Z}_{2D} for the 2D standard Gaussian is just 2\pi.

Finally, since the 2D spherical Gaussian is the product of two independent 1D Gaussians, we have \mathbf{Z}_{2D} = \mathbf{Z}_{1D}^2, so the normalization constant for the 1D case must be simply

    \begin{align*} \mathbf{Z}_{1D} = \sqrt{2\pi} = \int_{-\infty}^{\infty} e^{-\frac{1}{2} x^2} dx \end{align*}
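As one last check, here is a numerical sketch (assuming SciPy is available) that evaluates the radial integral with quadrature and recovers both constants:

    import numpy as np
    from scipy.integrate import quad

    # Radial piece: int_0^inf r * exp(-r^2 / 2) dr, which the substitution shows is 1.
    radial, _ = quad(lambda r: r * np.exp(-0.5 * r**2), 0, np.inf)

    Z_2d = 2 * np.pi * radial
    print(Z_2d)             # ~ 6.2832 = 2 pi
    print(np.sqrt(Z_2d))    # ~ 2.5066 = sqrt(2 pi), the 1D normalization constant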
