\[ \newcommand{\bs}{\boldsymbol} \newcommand{\bsX}{\boldsymbol{X}} \newcommand{\bf}{\mathbf} \newcommand{\msc}{\mathscr} \newcommand{\mca}{\mathcal} \newcommand{\T}{\text{T}} \newcommand{\rme}{\mathrm{e}} \newcommand{\rmi}{\mathrm{i}} \newcommand{\rmj}{\mathrm{j}} \newcommand{\rmd}{\mathrm{d}} \newcommand{\rmm}{\mathrm{m}} \newcommand{\rmb}{\mathrm{b}} \newcommand{\and}{\land} \newcommand{\or}{\lor} \newcommand{\exist}{\exists} \newcommand{\sube}{\subseteq} \newcommand{\lr}[3]{\left#1 #2 \right#3} \newcommand{\intfy}{\int_{-\infty}^{+\infty}} \newcommand{\sumfy}[1]{\sum_{#1=-\infty}^{+\infty}} \newcommand{\vt}{\vartheta} \newcommand{\ve}{\varepsilon} \newcommand{\vp}{\varphi} \newcommand{\Var}{\text{Var}} \newcommand{\Cov}{\text{Cov}} \newcommand{\edef}{\xlongequal{def}} \newcommand{\prob}{\text{P}} \newcommand{\Exp}{\text{E}} \newcommand{\t}[1]{\text#1} \newcommand{\N}{\mathbb{N}} \newcommand{\Z}{\mathbb{Z}} \newcommand{\Q}{\mathbb{Q}} \newcommand{\R}{\mathbb{R}} \newcommand{\C}{\mathbb{C}} \newcommand{\versionofnewcommand}{\text{260125}} \]

3. Stochastic Differential Equations

3.1 Examples and Some Solution Methods

We have questions about a given stochastic differential equation from two aspects: (i) the theoretical one: can we obtain existence and uniqueness theorems for such equations? What are the properties of the solutions? (ii) the practical one: how can we solve a given equation?

The following examples may give us some intuitive hints.

Example 3.1.1:

Consider a population growth model described as

\[ \frac{\rmd}{\rmd t} N_t = (r_t+\alpha W_t)N_t\ , \]

where \(W_t\) is white noise and \(\alpha\) is a constant. Assume that \(r_t = r\) is also a constant. Then

\[ \rmd N_t = r N_t\ \rmd t + \alpha N_t\ \rmd B_t \]

or equivalently

\[ \frac{\rmd N_t}{N_t} = r\ \rmd t + \alpha\ \rmd B_t\ . \]

There must be some relationship between \(\ln N_t\) and \({\rmd N_t}/{N_t}\). So we put \(g(t,x) = \ln x\); by Itô's formula we have

\[ \begin{aligned} &&\rmd (\ln N_t) =&\ \frac{1}{N_t}\ \rmd N_t - \frac{1}{2{N_t}^2}(\rmd N_t)^2 \\ &&=& \ \frac{\rmd N_t}{N_t} - \frac{\alpha^2}{2}\ \rmd t\\ &\Leftrightarrow& \quad \frac{\rmd N_t}{N_t} = &\ \rmd(\ln N_t) + \frac{\alpha^2}{2} \rmd t\\ &\Leftrightarrow& \quad r\ \rmd t+\alpha\ \rmd B_t =&\ \rmd(\ln N_t) +\frac{\alpha^2}{2}\ \rmd t\\ &\Leftrightarrow& \quad \rmd (\ln N_t) =&\ \lr({r-\frac{\alpha^2}{2}})\rmd t + \alpha\ \rmd B_t\\ &\Leftrightarrow& \quad \ln N_t-\ln N_0 =&\ \lr({r-\frac{\alpha^2}{2}}) t + \alpha B_t \\ &\Leftrightarrow& \quad N_t = & \ N_0\exp\lr[{\lr({r-\frac{\alpha^2}{2}})t+\alpha B_t}]\ . \end{aligned} \]
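The closed-form solution above can be checked numerically: an Euler–Maruyama discretization of \(\rmd N_t = r N_t\, \rmd t + \alpha N_t\, \rmd B_t\), driven by the same Brownian increments, should land close to \(N_0\exp\lr[{\lr({r-\alpha^2/2})t+\alpha B_t}]\). The parameter values below (\(r\), \(\alpha\), \(N_0\), the step count) are illustrative choices, not part of the text; this is a minimal sketch, not a production solver.

```python
import numpy as np

# Illustrative parameters (not from the text).
rng = np.random.default_rng(0)
r, alpha, N0 = 0.5, 0.3, 1.0
T, n = 1.0, 100_000
dt = T / n

# Brownian increments dB_k ~ N(0, dt); B_T is their sum.
dB = rng.normal(0.0, np.sqrt(dt), size=n)
B_T = dB.sum()

# Euler-Maruyama: N_{k+1} = N_k + r N_k dt + alpha N_k dB_k,
# written as a single product since each step multiplies N_k.
N_euler = N0 * np.prod(1.0 + r * dt + alpha * dB)

# Exact solution from Example 3.1.1.
N_exact = N0 * np.exp((r - alpha**2 / 2) * T + alpha * B_T)

print(N_euler, N_exact)  # the two agree up to discretization error
```

Because both trajectories use the same noise path, the gap between them is pure discretization error and shrinks as the step size \(\rmd t\) decreases.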

We say a process \(X_t\) is Geometric Brownian Motion if \(X_t\) is of the form

\[ X_t = X_0 \exp (\mu t+\alpha B_t) \quad \mu,\alpha \text{ const.} \]

Proposition 3.1.2:

Suppose \(X_t\) is a geometric Brownian motion described by

\[ X_t = X_0 \exp (\mu t + \alpha B_t)\ . \]

If \(B_t\) is independent of \(X_0\) then

\[ \Exp[X_t] = \Exp[X_0]\cdot \exp\lr({\mu t+\frac{1}{2}\alpha^2 t})\ , \]

which is the same as when there is no noise term: for the population model of Example 3.1.1 we have \(\mu = r - \alpha^2/2\), so \(\Exp[N_t] = \Exp[N_0]\,\rme^{rt}\), exactly the deterministic growth law.

Proof: Let \(Y_t = \exp(\mu t+\alpha B_t)\) and apply Itô's formula:

\[ \rmd Y_t =\mu\cdot Y_t\ \rmd t + \alpha\cdot Y_t\ \rmd B_t + \frac{1}{2}\alpha^2 Y_t\ \rmd t \]

or in integral form

\[ Y_t = Y_0 + \mu \int_0^t Y_s\ \rmd s + \alpha \int_0^t Y_s\ \rmd B_s + \frac{\alpha^2}{2}\int_0^t Y_s\ \rmd s\ . \]

By Theorem 1.2.1,

\[ \Exp \lr[{ \int_0^t Y_s\ \rmd B_s }] = 0\ . \]

Thus we get

\[ \begin{aligned} &&\Exp[Y_t] =&\ \Exp[Y_0] + \lr({\mu + \frac{\alpha^2}{2}})\int_0^t \Exp[Y_s]\ \rmd s\\ &\Leftrightarrow& \frac{\rmd }{\rmd t} \Exp[Y_t] = &\ \lr({\mu + \frac{\alpha^2}{2}})\Exp[Y_t]\\ &\Leftrightarrow& \Exp[Y_t] =&\ \exp\lr({\mu t+\frac{1}{2}\alpha^2 t})\ , \end{aligned} \]

where the last step uses \(\Exp[Y_0] = Y_0 = 1\).

Since \(B_t\) is independent of \(X_0\) we have

\[ \Exp[X_t] = \Exp[X_0\cdot Y_t] = \Exp[X_0] \cdot \Exp[Y_t] = \Exp[X_0]\cdot \exp\lr({\mu t+\frac{1}{2}\alpha^2 t})\ . \]
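Proposition 3.1.2 is easy to verify by Monte Carlo: sample \(B_t \sim N(0,t)\) and an independent \(X_0\), form \(X_t = X_0\exp(\mu t + \alpha B_t)\), and compare the sample mean with \(\Exp[X_0]\exp(\mu t + \tfrac{1}{2}\alpha^2 t)\). The parameters, the uniform law for \(X_0\), and the sample size below are illustrative assumptions.

```python
import numpy as np

# Illustrative parameters (not from the text).
rng = np.random.default_rng(1)
mu, alpha, t = 0.1, 0.4, 1.0
M = 1_000_000  # number of Monte Carlo samples

B_t = rng.normal(0.0, np.sqrt(t), size=M)   # B_t ~ N(0, t)
X0 = rng.uniform(0.5, 1.5, size=M)          # X_0 independent of B_t (assumed law)
X_t = X0 * np.exp(mu * t + alpha * B_t)

mc = X_t.mean()
exact = X0.mean() * np.exp((mu + 0.5 * alpha**2) * t)

print(mc, exact)  # sample mean vs. the formula of Proposition 3.1.2
```

With \(10^6\) samples the two numbers agree to roughly three decimal places; the residual gap is ordinary Monte Carlo error of order \(M^{-1/2}\).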

Theorem 5.1.2 (The Law of Iterated Logarithm):

\[ \limsup_{t\to\infty} \frac{B_t}{\sqrt{2t\log\log t}} = 1 \quad \text{a.s.} \]