Stochastic Integration Note

June 15, 2008

If X is a measurable, adapted process, then in order to define the stochastic integral \int_0^1 X_s dW_s, we require that \int_0^1 X_s^2 ds < \infty almost surely. Here is why this requirement is necessary.

Suppose that P \left [ \int_0^t X_s^2 ds < \infty \right ] = 1 for every t < 1, and let E = \left \{ \int_0^1 X_s^2 ds = \infty \right \}. Then

\limsup_{t \uparrow 1} \int_0^t X_s dW_s = - \liminf_{t \uparrow 1} \int_0^t X_s dW_s = +\infty

almost surely on E. In other words, the value of the integral oscillates wildly as we try to extend it to the whole interval. To prove this fact, the big tools are a representation (time-change) result, which reduces the question to one about Brownian motion, and the Law of the Iterated Logarithm, which is used in a weakened form to show that Brownian motion almost surely oscillates between arbitrarily large positive and negative values.

Let \phi be a strictly increasing function from [0,\infty) onto [0,1). Define M_t = \int_0^{\phi(t)} X_s dW_s. Note that M \in \mathcal{M}^{\text{c, loc}} and that the quadratic variation process is \langle M \rangle_t = \int_0^{\phi(t)} X_s^2 ds. On the set E, since \phi(t) \uparrow 1, we have \lim_{t \rightarrow \infty} \langle M \rangle_t = \infty. By the Time-Change Theorem for Martingales, there is a Brownian motion B with M_t = B_{\langle M \rangle_t}, and hence \int_0^t X_s dW_s = B_{p(t)}, where p(t) = \langle M \rangle_{\phi^{-1}(t)} = \int_0^t X_s^2 ds. Note that p(t) is random, since it depends on the path of X.
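To spell out the last step (writing I_u as shorthand for \int_0^u X_s dW_s), the substitution u = \phi(t) gives

I_u = M_{\phi^{-1}(u)} = B_{\langle M \rangle_{\phi^{-1}(u)}} = B_{p(u)}, \qquad p(u) = \int_0^{\phi(\phi^{-1}(u))} X_s^2 ds = \int_0^u X_s^2 ds,

so the behavior of the stochastic integral as u \uparrow 1 is exactly the behavior of the Brownian motion B run along the (random) clock p(u).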

Since p(t) = \int_0^t X_s^2 ds, we have \lim_{t \uparrow 1} p(t) = \infty on E. Finally, by the Law of the Iterated Logarithm for Brownian motion, \limsup_{t \uparrow 1} B_{p(t)} = - \liminf_{t \uparrow 1} B_{p(t)} = +\infty almost surely on E.
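Here is a minimal numerical sketch of the oscillation, in Python. The integrand X_s = 1/(1-s) is my own choice, not part of the argument above: it is deterministic, satisfies \int_0^t X_s^2 ds = t/(1-t) < \infty for every t < 1, and has \int_0^1 X_s^2 ds = \infty, so E is the whole space. The integral is approximated by a left-endpoint Ito sum on a grid that stops just short of 1.

import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
s = np.linspace(0.0, 1.0 - 1e-6, n + 1)   # time grid, stopping just short of 1
ds = np.diff(s)
dW = rng.normal(0.0, np.sqrt(ds))          # independent Brownian increments
X = 1.0 / (1.0 - s[:-1])                   # integrand at the left endpoints
I = np.cumsum(X * dW)                      # running value of \int_0^t X_s dW_s

print("running max of the partial integral:", I.max())
print("running min of the partial integral:", I.min())

Pushing the right endpoint closer to 1 (and refining the grid along with it) makes both printed numbers blow up, which is exactly the oscillation described above.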

Benois – The Italian Comedy

May 22, 2008

The painting is by Alexander Benois, a Russian of French descent, from 1906. Not really sure what this painting means or anything; there's just this huge and awesome sense of foreboding I get whenever I see it. I like the sense of depth in the painting…there are figures at three different levels. Part of the allure is that it's hard to tell what's going on. Is the man in the background attacking the woman? When I see him, he makes me think that he's conjuring or performing some kind of black incantation. At the same time, it's not exactly certain that the two characters are even interacting. It's also easy to quickly dismiss the female character. It's interesting that she seems to explicitly be an object, to be acted upon, in contrast to the male figures. The two characters in the middle ground are pretty similar. They both seem to be sitting back on their heels, simultaneously like gentlemen aloof from the violence and jackals waiting for the killing to happen. Especially the man in the white suit: his posture is amazing, it's full of tension; you can feel him pushing backwards while wanting to be drawn in. And finally, you have the harlequin-type guy in the foreground. When I saw this, my immediate comparison was to Seurat's "Circus" (1891), where a clown-type figure in the foreground sets the stage, so to speak, for the action of the painting. His checkered clothing is also suggestive of a matador, and looking quickly at the painting, you might easily see the middle-ground black-suited man as a cape that the harlequin is waving. That last part is probably not true, but it makes sense if it's referencing the killing that's going on in the background. The harlequin's pose is just ridiculous; it's impossible for someone to stand like that unless they're in the process of moving. So basically I have no idea.

Malevich – Red Cavalry Riding

May 22, 2008

The painting is by Kazimir Malevich, from 1928-1932. Malevich was Russian, and the cavalry depicted probably refers to the Red Army. Malevich was at the forefront of the Communist movement in Russia, but he was one of the early communist idealists, and when Stalin came to power he was persecuted pretty badly. No one really knows what Malevich thought of Stalinist Russia, but paintings from this period tend to project loneliness, of distant images that are too far to reach. The obvious comparison is to "Red House", dating from around the same time; he uses the same sort of muted color scheme in that painting. There is just this immense and endless sky above the riders; it seems to stretch all the way up to space. Not sure what the meaning of the striped foreground is, but it's something that appears in almost all of his late pieces. As for the cavalry figures, you could probably draw some sort of comparison to earlier Suprematist pieces like "Suprematism with Eight Red Triangles", and a lot of Malevich's late compositions have some sort of synthesis of his earlier Suprematism. The cavalry seem indomitable, incapable of having their progress slowed. I think Malevich generally portrays these figures in a positive light. The period when the White Army was being vanquished was, for him, the glory days of the movement, and the red cavalry marching to battle represent that. I think there is a small element of nostalgia for the old imagery of Cossack warriors marching across the steppes; it seems like nostalgia always creeps into the works of Russian modernist painters. At this point, Malevich still feels proud of these riders, but he knows that he does not belong; he is an outsider.

Continuous Time Filtration Pathologies

May 22, 2008

Let (\Omega, \mathcal{F}, P) be a probability space. A stochastic process is a (Borel) measurable map X: \Omega \times [0,\infty) \rightarrow \mathbb{R}^d. The elements \omega \in \Omega can be thought of as experiments, each of which yields as output some path or function into \mathbb{R}^d, while t \in [0,\infty) parametrizes time. A process X is said to be continuous if for each \omega, the path X(\omega): [0,\infty) \rightarrow \mathbb{R}^d is continuous in the ordinary sense of functions. Also, for each t \in [0, \infty), the map X_t : \Omega \rightarrow \mathbb{R}^d is an ordinary random variable.

Basically, a filtration is an ascending chain of \sigma-algebras indexed by time. So, formally, it is a collection of \sigma-algebras \{\mathcal{F}_t\}_{t \in [0,\infty)} such that if a < b, then \mathcal{F}_a \subset \mathcal{F}_b. Every \sigma-algebra in the filtration is supposed to be contained in the "universal" \sigma-algebra, which is \mathcal{F}.

After creating a filtration in continuous time, it is possible to create two auxiliary filtrations; note that both constructions would be trivial in the discrete time case. For each t, define \mathcal{F}_{t+} = \cap_{\epsilon > 0} \mathcal{F}_{t + \epsilon}. If you philosophically consider \mathcal{F}_t to be the information accrued up to time t, then \mathcal{F}_{t+} represents the information you can obtain if you are allowed to peek infinitesimally into the future at time t. A filtration is said to be right continuous if \mathcal{F}_t = \mathcal{F}_{t+} for all t. You can also define a filtration \{\mathcal{F}_{t-}\}, with \mathcal{F}_{t-} = \sigma \left ( \cup_{s < t} \mathcal{F}_s \right ), which represents the information that you have just before time t.

When you are considering a stochastic process, if you want it to model anything, it has to be true that by time t you know the value of X at time t. Technically, you require that for all t \in [0, \infty), X_t should be measurable with respect to \mathcal{F}_t. If this condition holds, people say that the process X is adapted (to the filtration \{\mathcal{F}_t\}).

If you're given a stochastic process X, there is a canonical way to make a filtration so that X is adapted with respect to it. This construction is basically the same as defining the \sigma-algebra generated by a random variable. What you do is define \mathcal{F}^X_t = \sigma(X_s : 0 \leq s \leq t). You might initially be tempted to say that this is overkill, and that one should define \mathcal{F}^X_t = \sigma(X_t). This doesn't work, though, because to have a filtration you need the containment property to hold, and in general there is no reason for \sigma(X_a) to be contained in \sigma(X_b) when a < b. Keeping up the information theme, filtrations have this rule because as time progresses, you should never lose information about what your process has done up to that point.

So the point of all this is that it is not necessarily true that a continuous stochastic process generates a right-continuous filtration. This fact fails for a pretty simple but interesting reason. Let \Omega = C[0,\infty), the space of continuous functions into \mathbb{R}^d. There is a natural way to put a Borel-type measure on this space, but it's not necessary for this problem, so I'm not going to talk about it. The process we will consider on \Omega is pretty awesome in its simplicity, and it's very important. It is called the coordinate-mapping process, and it is defined by X(\omega, t) = \omega(t). Remember that here \omega represents a continuous function.
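Just to make the definition concrete, here is a toy sketch in Python (the particular path is my own made-up example): an \omega is literally a continuous function, and the process just evaluates it.

import numpy as np

def X(omega, t):
    # Coordinate-mapping process: the value of the process at time t, for the
    # experiment omega, is the path omega evaluated at t.
    return omega(t)

omega = np.cos              # one particular experiment: the path t -> cos(t)
print(X(omega, 0.0))        # 1.0
print(X(omega, np.pi))      # approximately -1.0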

Now consider the following set: F = \{\omega : X(\omega) \text{ has a local maximum at } t \}. We will show that for any t, F \in \mathcal{F}_{t+}, but that F is not in \mathcal{F}_t. First let me give an intuitive reason why this is true, and then a formal justification. Basically, suppose that you are traveling along a function and you arrive at a point which happens to be a local maximum. Until you move forward some small amount, you have no idea that it's a local maximum, because you have no way of knowing whether the function is about to decrease or keep on getting bigger. Since you don't know at time t, F shouldn't be in \mathcal{F}_t. Since you will know whether you're in F by stepping infinitesimally far into the future, F should be in \mathcal{F}_{t+}.

Formal proofs are from Karatzas/Shreve. To show that F is in \mathcal{F}_{t+}: write, for any n \geq 1,

F = \cup_{m=n}^\infty \cap_{r \in \mathbb{Q} \cap [0,\infty),\ |t - r | < 1/m} \{\omega : \omega(t) \geq \omega(r)\} \in \mathcal{F}_{t + 1/n}.

This equation just says that t is a local maximum if and only if, in some small enough neighborhood, all the rational points sit no higher. We can get away with the rational-points chicanery because our paths are continuous, and we need it because the set-theoretic business has to be countable for the resulting set to be measurable. Since the identity holds for every n, we get F \in \cap_{n \geq 1} \mathcal{F}_{t + 1/n} = \mathcal{F}_{t+}.

To show that F \not \in \mathcal{F}_t, we do sabotage. Let G \in \mathcal{F}_t, and suppose that F \cap G \neq \emptyset. What we'll do is take something in their intersection and tweak it so that it is no longer in F but is still in G. For such an \omega, define \tilde{\omega}(s) to be equal to \omega(s) for 0 \leq s \leq t, and \tilde{\omega}(s) = \omega(t) + s - t for s \geq t. What we've done is destroy the local maximum without changing the function before time t. Since \tilde{\omega} doesn't have a local maximum at t, it is not a member of F. To be slightly informal, G only cares about what happens up to time t: whether or not a function belongs to G depends only on what it does by time t. Since we didn't change \tilde{\omega} before that time, it must still be a member of G. Hence F \neq G, and so F can't be an element of \mathcal{F}_t.
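Here is a tiny numerical sketch of the sabotage step in Python (the particular path is my own made-up example): take a path with a local maximum at t, splice in the line of slope one after time t, and check that the two paths agree up to time t while the modified one keeps climbing.

import numpy as np

# Times 0.00, 0.01, ..., 2.00; index 100 corresponds to t = 1.0 exactly.
grid = np.arange(201) / 100.0
t_index = 100
t = grid[t_index]

# A path with a local maximum at t = 1.
omega = np.sin(np.pi * grid / 2.0)

# The sabotaged path: identical up to time t, then the line omega(t) + (s - t).
omega_tilde = np.where(grid <= t, omega, omega[t_index] + (grid - t))

# The two paths agree on [0, t] ...
assert np.allclose(omega[:t_index + 1], omega_tilde[:t_index + 1])

# ... but omega turns back down after t (so it keeps its local maximum there),
# while omega_tilde climbs past omega(t) (so it has no local maximum at t).
print(omega[t_index + 1:].max() < omega[t_index])              # True
print(omega_tilde[t_index + 1:].min() > omega_tilde[t_index])  # True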