Arithmetic Brownian Motion Process and SDEs

We discuss various topics related to the Arithmetic Brownian Motion process: the solution of its SDE, the derivation of its characteristic function and moment generating function, the derivation of its mean, variance, and covariance, and the calibration and simulation of the process.

Mean, Variance, Covariance, Probability

We saw that the solution of the Arithmetic Brownian motion SDE,

$$dX_t=\mu \,dt+\sigma \,dB_t$$

is as follows,

$$X_T=X_0 + \mu \, T +\sigma \, B_T$$

We can interpret this solution as giving the value of the process at time T, or as giving the change in its value from time 0 to T. The solution depends on the Brownian motion, which is random, and hence it is not as prescriptive as a solution in deterministic calculus. This was to be expected: the whole point of stochastic calculus is that it deals with random quantities.
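Before characterising the distribution, here is a minimal simulation sketch of this solution. It builds a path from independent Brownian increments \(\Delta B \sim N(0, \Delta t)\); all parameter values (the drift, volatility, initial value, horizon, and step count) are illustrative assumptions, not taken from the text.

```python
import numpy as np

# A minimal sketch of simulating one path of dX_t = mu dt + sigma dB_t
# using its exact solution. All parameter values below are assumed
# purely for illustration.
rng = np.random.default_rng(seed=42)

mu, sigma, X0 = 0.05, 0.20, 1.0   # drift, volatility, initial value (assumed)
T, n_steps = 1.0, 252             # horizon and number of steps (assumed)
dt = T / n_steps

# Brownian increments dB ~ N(0, dt); each step adds mu*dt + sigma*dB.
dB = rng.normal(loc=0.0, scale=np.sqrt(dt), size=n_steps)
X = X0 + np.cumsum(mu * dt + sigma * dB)

print(X[-1])   # one realisation of X_T
```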

Though it is stochastic, we can still give it a meaningful characterisation. For example, we can easily derive its distribution. To see that, recall that if Z is normally distributed with some given mean and variance,

$$Z \thicksim N \left[\mu,\sigma^2 \right] $$

then a linear transformation of Z is also normally distributed,

$$Y=a+bZ \thicksim N \left[a+b \mu,b^2\sigma^2 \right]$$

The mean and variance of the transformed variable can be easily derived. Let’s start with the mean, which is the expected value,

$$E \left[Y \right]=E \left[a+bZ \right]\overset{1}{=} a+b E\left[ Z \right]\overset{2}{=}a+b\mu$$

Where we just used the facts that 1) the expected value of a sum is the sum of the expected values, and a constant can be taken out of the expectation, and 2) the expected value of Z is equal to \(\mu\).

Variance is also easy,

$$V \left[Y \right]=V \left[a+bZ \right]\overset{3}{=} 0+b^2 \, V\left[ Z \right]\overset{4}{=}b^2 \sigma^2$$

Where we used the facts that 3) the variance of a constant is zero, and taking a constant out of the variance operator produces the square of the constant, and 4) the variance of Z is equal to \(\sigma^2\).
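These two identities are easy to check numerically. Here is a quick Monte Carlo sanity check; the values of a, b, \(\mu\), and \(\sigma\) are arbitrary illustrative choices.

```python
import numpy as np

# Monte Carlo check of Y = a + b*Z with Z ~ N(mu, sigma^2).
# The numbers below are arbitrary illustrative choices.
rng = np.random.default_rng(seed=0)

a, b, mu, sigma = 2.0, 3.0, 1.0, 0.5
Z = rng.normal(loc=mu, scale=sigma, size=1_000_000)
Y = a + b * Z

print(Y.mean(), a + b * mu)       # sample mean vs a + b*mu
print(Y.var(), b**2 * sigma**2)   # sample variance vs b^2 * sigma^2
```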

The reason for recalling these properties of the normal distribution is that the Brownian motion appearing in our solution is normally distributed, with mean zero and variance T, the length of the time interval (the observation period),

$$B_T \thicksim N[0,T]$$

Our solution \(X_T\) is just a linear function of the Brownian motion,

$$X_T=X_0+\mu \, T +\sigma \, B_T $$

so we can immediately see that it follows a normal distribution. We can derive its mean by taking expected values of both sides,

$$E \left[X_T \right]=E \left[ X_0+\mu \, T +\sigma \, B_T \right]$$ $$\overset{1}{=} X_0+\mu \, T +\sigma \, E\left[B_T \right]$$ $$\overset{2}{=}X_0+\mu \, T$$

and variance by taking variances of both sides,

$$V \left[X_T \right]=V\left[ X_0+\mu \, T +\sigma \, B_T \right]$$ $$\overset{3}{=} 0+ \sigma^2 V \left[B_T \right]\overset{4}{=}\sigma^2 \, T$$

Where we used the facts that 1) the expected value of a sum is the sum of the expected values, and a constant can be taken out of the expectation, 2) the mean of Brownian motion is zero, 3) the variance of a constant is zero, and taking a constant out of the variance produces its square, and 4) the variance of Brownian motion is equal to the length of the interval.
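Again, a quick Monte Carlo check, this time sampling \(B_T \thicksim N[0,T]\) directly (standard deviation \(\sqrt{T}\)); the parameter values are assumed for illustration.

```python
import numpy as np

# Check of E[X_T] = X0 + mu*T and V[X_T] = sigma^2 * T by sampling
# B_T ~ N(0, T) directly. Parameter values are assumed for illustration.
rng = np.random.default_rng(seed=1)

mu, sigma, X0, T = 0.05, 0.20, 1.0, 2.0
B_T = rng.normal(loc=0.0, scale=np.sqrt(T), size=1_000_000)
X_T = X0 + mu * T + sigma * B_T

print(X_T.mean(), X0 + mu * T)    # ~ X0 + mu*T
print(X_T.var(), sigma**2 * T)    # ~ sigma^2 * T
```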

Now X is a stochastic process defined over time, so it is a collection of random variables, and you know a collection introduces covariance, so let's also calculate its covariance. The concept of covariance here is similar to the concept of auto-covariance (or its cousin, auto-correlation) in time series analysis. So we are aiming to calculate the auto-covariance between the value of the process at time T and its value at another time S, where the time difference between T and S can be interpreted as the equivalent of the lag in time series analysis.

$$C \left[X_T, X_S \right]=C\left[ X_0+\mu \, T +\sigma \, B_T, X_0+\mu \, S+\sigma \, B_S\right]$$ $$\overset{1}{=} C\left[ \sigma B_T, \sigma B_S\right]$$ $$\overset{2}{=}\sigma^2 C\left[ B_T,B_S\right]$$ $$\overset{3}{=}\sigma^2 \min (T,S)$$

Where we have used the facts that 1) the covariance of a deterministic quantity with anything is zero, so the \(X_0+\mu T\) and \(X_0+\mu S\) terms drop out, 2) we can take the constant sigmas out of the covariance, and 3) the covariance of Brownian motion evaluated at two times is simply the length of the overlapping time, \(\min(T,S)\).

Does this covariance formula look a bit strange? It should not, really. Say you have two variables with volatilities \(\sigma \sqrt{S} \, {\rm and} \, \sigma \sqrt{T}\) respectively. The covariance between the two would then be the correlation times the two standard deviations, \(C=\rho\, \sigma \sqrt{S}\,\sigma \sqrt{T}\). Our covariance formula is this very same formula, except that the correlation is equal to 1, and the two processes are simultaneously alive only over the overlapping period, which is the minimum of S and T.
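The covariance formula can also be checked by simulation. The sketch below builds \(B_T\) from \(B_S\) plus an independent increment (assuming S < T) and compares the sample covariance with \(\sigma^2 \min(T,S)\); the parameter values are illustrative assumptions.

```python
import numpy as np

# Check of C[X_T, X_S] = sigma^2 * min(T, S), assuming S < T.
# B_S ~ N(0, S); B_T = B_S + independent increment ~ N(0, T - S).
rng = np.random.default_rng(seed=2)

mu, sigma, X0 = 0.05, 0.20, 1.0   # assumed for illustration
S, T, n = 1.0, 2.0, 1_000_000

B_S = rng.normal(0.0, np.sqrt(S), size=n)
B_T = B_S + rng.normal(0.0, np.sqrt(T - S), size=n)

X_S = X0 + mu * S + sigma * B_S
X_T = X0 + mu * T + sigma * B_T

# Sample covariance vs the theoretical value sigma^2 * min(S, T).
print(np.cov(X_S, X_T)[0, 1], sigma**2 * min(S, T))
```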

We will often be interested in the values of the process at more than two points. But we don't need any more derivations, just a convenient representation. To see this, let's consider four points, \(s,t,u,v\), with the process taking the random values \(X_s\,,\,X_t\,,\,X_u\,,\,X_v\) at these four points. We chose four points just for illustration; you can change it to any number of points you like. It is easier with vectors, so let's switch to a representation in terms of vectors and matrices. The four random values become a random vector,

$$X=\begin{bmatrix} X_s\\X_t\\X_u\\X_v \end{bmatrix}$$

And so does the mean,

$$M =\begin{bmatrix} X_0+\mu s \\ X_0+\mu t \\ X_0+\mu u \\ X_0+\mu v \end{bmatrix}$$

But the variance-covariance representation will now require a matrix,

$$ \Sigma =\begin{bmatrix} C \left[X_s, X_s \right] & C \left[X_s, X_t \right] & C \left[X_s, X_u \right] & C \left[X_s, X_v \right] \\ C \left[X_t, X_s \right] & C \left[X_t, X_t \right] & C \left[X_t, X_u \right] & C \left[X_t, X_v \right] \\ C \left[X_u, X_s \right] & C \left[X_u, X_t \right] & C \left[X_u, X_u \right] & C \left[X_u, X_v \right] \\ C \left[X_v, X_s \right] & C \left[X_v, X_t \right] & C \left[X_v, X_u \right] & C \left[X_v, X_v \right] \end{bmatrix} $$

And we then just write the individual variances and covariances of pairs in terms of the formulae we derived above,

$$ \Sigma =\begin{bmatrix} \sigma^2 s & \sigma^2 \min (s,t) & \sigma^2 \min (s,u) & \sigma^2 \min (s,v) \\ \sigma^2 \min (t,s) & \sigma^2 t & \sigma^2 \min (t,u) & \sigma^2 \min (t,v) \\ \sigma^2 \min (u,s) & \sigma^2 \min (u,t) & \sigma^2 u & \sigma^2 \min (u,v) \\ \sigma^2 \min (v,s) & \sigma^2 \min (v,t) & \sigma^2 \min (v,u) & \sigma^2 v \end{bmatrix}$$

Which, after factoring out \(\sigma^2\), becomes

$$\Sigma =\sigma^2\begin{bmatrix} s & \min (s,t) & \min (s,u) & \min (s,v) \\ \min (t,s) & t & \min (t,u) & \min (t,v) \\ \min (u,s) & \min (u,t) & u & \min (u,v) \\ \min (v,s) & \min (v,t) & \min (v,u) & v \end{bmatrix}$$

And the joint probability distribution/density of X will then just be the multivariate normal distribution/density,

$$ X\thicksim N \left[ M, \Sigma \right]$$
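This representation also gives us an exact way to simulate the process at several dates at once: build M and \(\Sigma\) and draw from the multivariate normal. A minimal sketch, where the dates \(s,t,u,v\) and the parameters are illustrative assumptions:

```python
import numpy as np

# Exact joint simulation of the process at several dates by sampling
# the multivariate normal N[M, Sigma] derived above. The dates and
# parameter values are assumed for illustration.
rng = np.random.default_rng(seed=3)

mu, sigma, X0 = 0.05, 0.20, 1.0
times = np.array([0.5, 1.0, 1.5, 2.0])             # s, t, u, v

M = X0 + mu * times                                 # mean vector
Sigma = sigma**2 * np.minimum.outer(times, times)   # sigma^2 * min(t_i, t_j)

X = rng.multivariate_normal(mean=M, cov=Sigma, size=100_000)
print(X.mean(axis=0))            # ~ M
print(np.cov(X, rowvar=False))   # ~ Sigma
```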

And you know that when all the finite-dimensional distributions of a process are Gaussian, the process is called a Gaussian process; the Arithmetic Brownian motion is thus a Gaussian process.

We explore alternative characterisations of the process, in terms of the moment generating and characteristic functions, in the other tabs.