# Characteristic Function

Let’s derive the characteristic function of Arithmetic Brownian motion. Recall that the characteristic function of a real-valued random variable \(Y\) with distribution function \(P(y)\) is defined as follows,

$$\Psi_Y \left(t\right)=E \left[ e^{itY}\right]=\int_{-\infty}^{\infty}{e^{ity}\,dP\left(y\right)}$$

If the distribution function is absolutely continuous, meaning a density, which we denote by \(p(y)\), exists, then we can write the characteristic function in terms of the density,

$$\Psi_Y \left(t\right)=E \left[ e^{itY}\right]=\int_{-\infty}^{\infty}{e^{ity}p\left(y\right)dy}$$

The characteristic function is essentially the Fourier transform of the probability distribution. It is an important concept because it uniquely characterises a distribution - i.e., two distributions are identical if and only if their characteristic functions are identical. This alternative characterisation of a distribution in terms of its characteristic function can simplify calculations and proofs of theorems. For example, the characteristic function of the sum of two independent random variables is the product of their characteristic functions, which is quite useful when one is dealing with sums of random variables, a nasty but common problem in stochastic calculus. For another example, to prove the existence of the moments of a random variable, one just has to show that the characteristic function is differentiable a sufficient number of times. There are numerous other applications of the characteristic function, some of which will show up later on in these notes.
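The product property for sums of independent variables is easy to see numerically. Below is a quick Monte Carlo sketch, assuming NumPy is available; the two distributions and the evaluation point are arbitrary choices for the demo,

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
t = 1.3  # arbitrary point at which to evaluate the CFs

# Two independent random variables with different distributions
x = rng.exponential(scale=2.0, size=n)
y = rng.uniform(-1.0, 1.0, size=n)

def cf_estimate(sample, t):
    """Monte Carlo estimate of E[exp(itY)] from a sample of Y."""
    return np.mean(np.exp(1j * t * sample))

lhs = cf_estimate(x + y, t)                   # CF of the sum
rhs = cf_estimate(x, t) * cf_estimate(y, t)   # product of the individual CFs
print(abs(lhs - rhs))  # small: only Monte Carlo error remains
```

The two estimates agree up to sampling noise of order \(1/\sqrt{n}\).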

Now back to the problem at hand! To derive an expression for the characteristic function of Arithmetic Brownian motion, it is easiest to start with the characteristic function of a standard normal variable \(Z\) with density,

$$f_Z\left(z\right)=\frac{1}{\sqrt{2 \pi}}e^{-\frac{1}{2}z^2}$$

Then apply the definition of the characteristic function,

$$\Psi_Z \left(t\right)=E \left[ e^{itZ}\right]=\int_{-\infty}^{\infty} {e^{itz}\frac{1}{\sqrt{2 \pi}}e^{-\frac{1}{2}z^2}dz}$$

We can combine the exponential terms and then factor out \(-\frac{1}{2}\),

$$\Psi_Z \left(t\right)=\int_{-\infty}^{\infty} {\frac{1}{\sqrt{2 \pi}}e^{itz-\frac{1}{2}z^2}dz}$$ $$=\int_{-\infty}^{\infty} {\frac{1}{\sqrt{2 \pi}}e^{-\frac{1}{2} \left(z^2 -2itz\right)}dz}$$

We then add and subtract the missing term to complete the square in the exponent,

$$\Psi_Z \left(t\right)=\int_{-\infty}^{\infty} {\frac{1}{\sqrt{2 \pi}}e^{-\frac{1}{2} \left(z^2 -2itz+\left(it\right)^2-\left(it\right)^2\right)}dz}$$

Recognising the completed square, and noting that \(i^2=-1\), this simplifies to,

$$\Psi_Z \left(t\right)=e^{-\frac{1}{2} t^2}\int_{-\infty}^{\infty} {\frac{1}{\sqrt{2 \pi}}e^{-\frac{1}{2} \left(z -it\right)^2}dz}$$ $$=e^{-\frac{1}{2} t^2}$$

Where in the last equality we used the fact that the integrand is the density of a normal with mean \(it\) and variance 1, so the total area under it is equal to 1.
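The closed form \(e^{-\frac{1}{2}t^2}\) is easy to check by simulation. A minimal sketch, assuming NumPy; the evaluation points are arbitrary,

```python
import numpy as np

rng = np.random.default_rng(1)
z = rng.standard_normal(1_000_000)  # samples of a standard normal Z

ts = np.array([0.5, 1.0, 2.0])
mc = np.array([np.mean(np.exp(1j * t * z)) for t in ts])  # Monte Carlo E[e^{itZ}]
closed = np.exp(-0.5 * ts**2)                             # derived closed form
print(np.abs(mc - closed))  # all small: only Monte Carlo error remains
```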

It is now easy to derive the characteristic function (CF) of Arithmetic Brownian motion,

$$X_T=X_0 + \mu \, T +\sigma \, B_T$$

We can write \(B_T\) in terms of a standard normal variable, since \(B_T=\sqrt{T}\,Z\) in distribution,

$$X_T=X_0 + \mu \, T +\sigma \sqrt{T} \,Z$$

And let’s derive its CF using some basic properties of expectation,

$$\Psi_{X_T} \left(t\right)=\Psi_{X_0 + \mu \, T +\sigma \sqrt{T} \,Z} \left(t\right)$$ $$=E \left[ e^{it \left(X_0+\mu T+\sigma \sqrt{T}\,Z\right)}\right]$$ $$=e^{it \left(X_0+\mu T\right)}E \left[ e^{it\,\sigma \sqrt{T}\,Z}\right]$$ $$=e^{it \left(X_0+\mu T\right)}e^{-\frac{1}{2} t^2\,\sigma^2 T}$$

Where in the last equality, we use the previously derived relationship,

$$E \left[ e^{itZ}\right]=e^{-\frac{1}{2} t^2}$$

which, when \(t\) is replaced by \(t\,\sigma \sqrt{T}\), becomes,

$$E \left[ e^{it\,\sigma \sqrt{T}\,Z}\right]=e^{-\frac{1}{2} t^2\,\sigma^2 T}$$
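The resulting CF of \(X_T\) can also be verified numerically. A sketch assuming NumPy, with hypothetical parameter values \(X_0, \mu, \sigma, T\) chosen only for the demo,

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1_000_000

# Hypothetical parameter values, chosen only for the demo
x0, mu, sigma, T = 1.0, 0.05, 0.2, 2.0
t = 0.7  # arbitrary evaluation point

# Simulate X_T = X_0 + mu*T + sigma*sqrt(T)*Z
xT = x0 + mu * T + sigma * np.sqrt(T) * rng.standard_normal(n)

mc = np.mean(np.exp(1j * t * xT))                              # Monte Carlo E[e^{itX_T}]
closed = np.exp(1j * t * (x0 + mu * T) - 0.5 * t**2 * sigma**2 * T)  # derived CF
print(abs(mc - closed))  # small: only Monte Carlo error remains
```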

This is the characteristic function of the distribution at a single observation point. Let’s say, as before, we are interested in the finite-dimensional distribution at four arbitrary times \(s, t, u, v\). So our \(X\) is a vector,

$$X=\left[\begin{matrix} X_s\\X_t\\X_u\\X_v\end{matrix}\right]$$

And the characteristic function would then be,

$$\Psi_X \left(\phi_s,\phi_t,\phi_u,\phi_v\right)=E \left[ e^{i\left(\phi_sX_s+\phi_tX_t+\phi_uX_u+\phi_vX_v\right)}\right]$$

Where we changed the dummy variable in the definition of the characteristic function from \(t\) to \(\phi\), so as not to cause confusion with time \(t\). The density or distribution will be quadrivariate, which is a long expression, but if one expresses it using vectors and matrices, then it is not much different from the one-dimensional case,

$$\Psi_X \left(\phi\right)=E \left[ e^{i\,\phi^T X}\right]=e^{i\,\phi^T M-\frac{1}{2}\phi^T\Sigma \phi}$$

It is just that we have vectors and matrices in place of the scalars,

$$\phi=\left[\begin{matrix} \phi_s\\\phi_t\\\phi_u\\\phi_v\end{matrix}\right]$$

$$M=\left[\begin{matrix} X_0+\mu s\\X_0+\mu t\\X_0+\mu u\\X_0+\mu v\end{matrix}\right]$$

And \(\Sigma\) is the same variance-covariance matrix we saw earlier,

$$\Sigma =\sigma^2\left[\begin{matrix} s & \min (s,t) & \min (s,u) & \min (s,v) \\ \min (t,s) & t & \min (t,u) & \min (t,v) \\ \min (u,s) & \min (u,t) & u & \min (u,v)\\ \min (v,s) & \min (v,t) & \min (v,u) & v \end{matrix} \right]$$
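The vector form of the CF can also be checked by simulating Brownian paths at the four times. A sketch assuming NumPy; the times, parameters, starting value, and \(\phi\) below are hypothetical choices for the demo,

```python
import numpy as np

rng = np.random.default_rng(3)
n = 500_000
x0, mu, sigma = 1.0, 0.1, 0.3            # hypothetical parameters
times = np.array([0.5, 1.0, 1.5, 2.0])   # the four times s, t, u, v
phi = np.array([0.4, -0.2, 0.3, 0.1])    # arbitrary evaluation point

# Simulate Brownian motion at the four times via independent increments
dt = np.diff(times, prepend=0.0)
B = np.cumsum(rng.standard_normal((n, 4)) * np.sqrt(dt), axis=1)
X = x0 + mu * times + sigma * B          # ABM observed at s, t, u, v

mc = np.mean(np.exp(1j * X @ phi))       # Monte Carlo E[e^{i phi^T X}]

M = x0 + mu * times                                # mean vector
Sigma = sigma**2 * np.minimum.outer(times, times)  # covariance matrix
closed = np.exp(1j * phi @ M - 0.5 * phi @ Sigma @ phi)
print(abs(mc - closed))  # small: only Monte Carlo error remains
```

Note that the covariance entries \(\sigma^2\min(\cdot,\cdot)\) are produced in one step by `np.minimum.outer`, mirroring the matrix \(\Sigma\) above.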