Example 1. Each sample point ω represents a path through the tree. Thus, we can think of the sample space as either the set of all possible outcomes from three coin tosses or as the set of all possible paths through the tree.

Introduction to Probability Theory

We assume the same interest rate r for both borrowing and lending. This is not as ridiculous as it first seems, because in many applications of the model, an agent is either borrowing or lending, not both, and knows in advance which she will be doing; in such an application, she should take r to be the rate of interest for her activity.
One should borrow money at interest rate r and invest in the stock, since even in the worst case, the stock price rises at least as fast as the debt used to buy it. This option confers the right to buy the stock at time 1 for K dollars, and so is worth S1 − K at time 1 if S1 − K is positive and is otherwise worth zero. We denote by V1(ω) the value of the option at time 1 when the coin toss results in ω; of course, V1 is random, since it depends on the toss. Our first task is to compute the arbitrage price of this option at time zero.
Suppose at time zero you sell the call for V0 dollars, where V0 is still to be determined. To hedge the resulting short position, you buy some number Δ0 of shares of stock. You can use the proceeds V0 of the sale of the option for this purpose, and then borrow if necessary at interest rate r to complete the purchase. Depending on how uncertainty enters the model, there can be cases in which the number of shares of stock a hedge should hold is not the calculus derivative of the value of the derivative security with respect to the price of the underlying asset.
To complete the solution, we must determine Δ0 and V0; solving the two equations (one for each outcome of the toss) yields both. The numbers p̃ = (1 + r − d)/(u − d) and q̃ = (u − 1 − r)/(u − d) are the risk-neutral probabilities. They appeared when we solved the two equations for Δ0 and V0; in fact, at this point, they are nothing more than a convenient tool for writing the pricing formula V0 = [p̃ V1(H) + q̃ V1(T)]/(1 + r). We now consider a European call which pays off (S2 − K)+ dollars at time 2. We want to determine the arbitrage price for this option at time zero. Suppose an agent sells the option at time zero for V0 dollars, where V0 is still to be determined.
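The one-period recipe just described can be sketched in code. The numbers S0 = 4, u = 2, d = 1/2, r = 1/4, K = 5 used in the usage line are illustrative assumptions, chosen to satisfy the no-arbitrage condition d < 1 + r < u.

```python
# A sketch of one-period risk-neutral pricing, under assumed parameters.

def risk_neutral_probs(u, d, r):
    """p-tilde = (1 + r - d)/(u - d) and q-tilde = 1 - p-tilde."""
    p = (1 + r - d) / (u - d)
    return p, 1 - p

def call_price_one_period(S0, u, d, r, K):
    """V0 = [p V1(H) + q V1(T)] / (1 + r), with V1 = max(S1 - K, 0)."""
    p, q = risk_neutral_probs(u, d, r)
    v_h = max(S0 * u - K, 0.0)   # option value if the toss is H
    v_t = max(S0 * d - K, 0.0)   # option value if the toss is T
    return (p * v_h + q * v_t) / (1 + r)

price = call_price_one_period(4.0, 2.0, 0.5, 0.25, 5.0)
```

With these parameters, p̃ = q̃ = 1/2 and the time-zero price works out to 1.2.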
Thus, there are really two equations implicit here, one for each outcome of the first toss. In the next period, her wealth will be given by the right-hand side of the following equation, and she wants it to be V2. Considering all four possible outcomes, we can write the corresponding system of equations. We define this quantity to be the arbitrage value of the option at time 1 if the first toss results in the given outcome.
This formula is analogous to the one-period pricing formula above. The first two equations implicit in the system determine the hedge and value after the first toss. The pattern emerging here persists, regardless of the number of periods: if Vk denotes the value at time k of a derivative security, and this depends on the first k coin tosses ω1, …, ωk, then Vk(ω1, …, ωk) = [p̃ Vk+1(ω1, …, ωk, H) + q̃ Vk+1(ω1, …, ωk, T)]/(1 + r). Let F be the set of all subsets of Ω. How many sets are there in F? Definition. Let A be a set in F. Imagine that Ω is the set of all possible outcomes of some random experiment.
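The multi-period pattern just stated is a backward induction: the time-k value is the discounted risk-neutral average of the two possible time-(k+1) values. A minimal sketch, assuming illustrative parameters and a recombining tree indexed by the number of heads so far:

```python
# Backward induction on a recombining binomial tree; parameters assumed.

def binomial_value(payoff, n, S0, u, d, r):
    """Time-0 value of a claim paying payoff(S_n) at time n."""
    p = (1 + r - d) / (u - d)          # risk-neutral probability of H
    q = 1 - p
    # terminal values, indexed by the number h of heads among n tosses
    v = [payoff(S0 * u**h * d**(n - h)) for h in range(n + 1)]
    for _ in range(n):                 # step back one period at a time
        v = [(p * v[h + 1] + q * v[h]) / (1 + r) for h in range(len(v) - 1)]
    return v[0]
```

For instance, with S0 = 4, u = 2, d = 1/2, r = 1/4 (assumed values), a call with strike 5 expiring at time 2 is valued by `binomial_value(lambda s: max(s - 5, 0.0), 2, 4, 2, 0.5, 0.25)`.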
There is a certain probability, between 0 and 1, that when that experiment is performed, the outcome will lie in the set A.
We think of IP(A) as this probability. For the individual elements ω of Ω, we write IP(ω). As in the above example, it is generally the case that we specify a probability measure on only some of the subsets of Ω and then use property (ii) of Definition 1. to determine the probability of the remaining sets.
In the above example, we specified the probability measure only for the sets containing a single element, and then used additivity to extend it. Definition 1. Suppose the coin is tossed three times, and you are not told the outcome, but you are told, for every set in F1, whether or not the outcome is in that set. For example, you would be told that the outcome is not in ∅ and is in Ω. Moreover, you might be told that the outcome is not in AH but is in AT. In effect, you have been told that the first toss was a T, and nothing more.
Knowing whether the outcome is in each set of F1 is the same as knowing the result of the first toss. A random variable is a function mapping Ω into IR. Then S0, S1, S2 and S3 are all random variables. S0 is not really random, since its value is known at time zero; nonetheless, it is a function mapping Ω into IR, and thus technically a random variable, albeit a degenerate one. A random variable maps Ω into IR, and we can look at the preimage under the random variable of sets in IR.
Consider, for example, the random variable S2 of Example 1. The preimage under S2 of an interval is defined to be the set of all ω ∈ Ω for which S2(ω) lies in that interval. More specifically, suppose the coin is tossed three times and you do not know the outcome ω. You might be told, for example, that ω is in the set {HTH, HTT, THH, THT}.
Then you know that in the first two tosses there was a head and a tail, and you know nothing more. This information is the same as you would have gotten by being told the value of S2(ω). In general, though, the information in the first two tosses is greater than the information in S2.
In particular, if you see the first two tosses, you can distinguish AHT from ATH , but you cannot make this distinction from knowing the value of S2 alone.
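Enumerating the eight outcomes of three tosses makes this point concrete. Here S0 = 4, u = 2, d = 1/2 are illustrative assumptions; the identity S2(HT…) = S2(TH…) holds for any choice of u and d.

```python
from itertools import product

# Assumed illustrative parameters.
S0, u, d = 4.0, 2.0, 0.5

def S(k, omega):
    """Stock price after the first k tosses of the sequence omega."""
    price = S0
    for toss in omega[:k]:
        price *= u if toss == "H" else d
    return price

omegas = ["".join(t) for t in product("HT", repeat=3)]
# Seeing the first two tosses distinguishes AHT from ATH, but S2 does not:
# S(2, "HTH") equals S(2, "THH") even though the sequences differ.
```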
Let X be a random variable on (Ω, F). Note: we normally write simply {X ∈ A} rather than {ω ∈ Ω : X(ω) ∈ A}. In the case of S2 above, with the probability measure of Example 1., the distribution places mass at the possible values of S2. If X is discrete, as in the case of S2 above, we can either tell where the masses are and how large they are, or tell what the cumulative distribution function is.
Important Note. In order to work through the concept of a risk-neutral measure, we set up the definitions to make a clear distinction between random variables and their distributions. A random variable is a mapping from Ω to IR, nothing more; it has an existence quite apart from any discussion of probabilities. The distribution LX, on the other hand, depends on the random variable X and the probability measure IP we use on Ω. If we set the probability of H to be 1/3, then LS2 assigns mass 1/9 to the number 16. If we set the probability of H to be 1/2, then LS2 assigns mass 1/4 to the number 16. The distribution of S2 has changed, but the random variable has not.
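This distinction can be checked directly: the map S2 below never changes, while its law does as IP(H) varies. The parameters S0 = 4, u = 2, d = 1/2 are illustrative assumptions consistent with the masses quoted here.

```python
from itertools import product
from fractions import Fraction

def law_of_S2(p):
    """Return {value of S2: probability} when IP(H) = p on each toss."""
    law = {}
    for w in product("HT", repeat=2):
        prob, value = Fraction(1), Fraction(4)   # assumed S0 = 4
        for toss in w:
            prob *= p if toss == "H" else 1 - p
            value *= 2 if toss == "H" else Fraction(1, 2)   # u = 2, d = 1/2
        law[value] = law.get(value, Fraction(0)) + prob
    return law
```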
In a similar vein, two different random variables can have the same distribution. Suppose, in the binomial model of Example 1., that the probability of H is 1/2. Consider a European call with strike price 14 expiring at time 2. The probability the payoff is 2 is 1/4, and the probability it is zero is 3/4. Consider also a European put with strike price 3 expiring at time 2.
Like the payoff of the call, the payoff of the put is 2 with probability 1/4 and 0 with probability 3/4. The payoffs of the call and the put are different random variables having the same distribution. Since Ω is a finite set, X can take only finitely many values, which we label x1, …, xn. To make the above set of equations absolutely clear, we consider S2 with the distribution just given. We define the Lebesgue measure of intervals in IR to be their length.
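The call/put comparison can be computed exactly, assuming the illustrative parameters S0 = 4, u = 2, d = 1/2 and IP(H) = 1/2:

```python
from itertools import product
from fractions import Fraction

def payoff_law(payoff):
    """Distribution of payoff(S2) over the four equally likely outcomes."""
    law = {}
    for w in product("HT", repeat=2):
        s = Fraction(4)                               # assumed S0 = 4
        for toss in w:
            s *= 2 if toss == "H" else Fraction(1, 2)  # u = 2, d = 1/2
        v = payoff(s)
        law[v] = law.get(v, Fraction(0)) + Fraction(1, 4)
    return law

call_law = payoff_law(lambda s: max(s - 14, Fraction(0)))     # strike 14
put_law = payoff_law(lambda s: max(Fraction(3) - s, Fraction(0)))  # strike 3
# Different random variables (the call pays off on HH, the put on TT),
# identical laws: mass 1/4 at 2 and mass 3/4 at 0.
```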
This definition and the properties of measure determine the Lebesgue measure of many, but not all, subsets of IR. The collection of subsets of IR we consider, and for which Lebesgue measure is defined, is the collection of Borel sets defined below. We use Lebesgue measure to construct the Lebesgue integral, a generalization of the Riemann integral.
We need this integral because, unlike the Riemann integral, it can be defined on abstract spaces, such as the space of infinite sequences of coin tosses or the space of paths of Brownian motion. This section concerns the Lebesgue integral on the space IR only; the generalization to other spaces will be given later. Definition. The smallest σ-algebra containing all the open intervals in IR is denoted B(IR); the sets in B(IR) are called Borel sets.
Every set which can be written down, and just about every set imaginable, is in B(IR). By definition, every open interval (a, b) is in B(IR), where a and b are real numbers.
Half-open and half-closed intervals are also Borel, since they can be written as intersections of open half-lines and closed half-lines. There are, however, sets which are not Borel. We have just seen that any non-Borel set must have uncountably many points. The following example gives a hint of how complicated a Borel set can be. We use it later when we discuss the sample space for an infinite sequence of coin tosses. Consider the unit interval [0, 1], and remove the middle half, i.e., the open interval (1/4, 3/4).
From each of the two remaining pieces, remove the middle half, i.e., the open intervals (1/16, 3/16) and (13/16, 15/16). Continue this process, so that at stage k the set Ck has 2^k pieces, each of length 1/4^k. Note that the length of A1, the first set removed, is 1/2. In particular, none of the endpoints of the pieces of the sets C1, C2, … is ever removed; this is a countably infinite set of points. We shall see eventually that the Cantor set has uncountably many points. Definition 1. A measure has all the properties of a probability measure given in Problem 1., except that the total measure need not be one.
We specify that the Lebesgue measure of each interval is its length, and that determines the Lebesgue measure of all other Borel sets.
For example, the Lebesgue measure of the Cantor set constructed in the example above is zero. The Lebesgue measure of a set containing only one point must be zero. We may not compute the Lebesgue measure of an uncountable set by adding up the Lebesgue measure of its individual members, because there is no way to add up uncountably many numbers. The integral was invented to get around this problem. In order to think about Lebesgue integrals, we must first consider the functions to be integrated.
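The measure-zero claim for the middle-half construction can be verified by summing the removed lengths: at each stage half of every remaining piece is removed, so the removed lengths sum to 1.

```python
from fractions import Fraction

def removed_length(stages):
    """Exact total length removed after the given number of stages."""
    total, pieces, piece_len = Fraction(0), 1, Fraction(1)
    for _ in range(stages):
        total += pieces * piece_len / 2   # middle half of every piece
        pieces *= 2                        # each piece splits in two
        piece_len /= 4                     # each new piece has 1/4 the length
    return total                           # equals 1 - (1/2)**stages
```

Since the removed length after k stages is 1 − (1/2)^k, the remaining set has Lebesgue measure (1/2)^k → 0.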
Definition 3. It is possible that this integral is infinite. If it is finite, we say that f is integrable. Finally, let f be a function defined on IR, possibly taking the value +∞ at some points and the value −∞ at other points.
Let A be a subset of IR. The Lebesgue integral has two important advantages over the Riemann integral. The first is that the Lebesgue integral is defined for more functions, as we show in the following examples.
Since these two do not converge to a common value as the partition becomes finer, the Riemann integral is not defined. For the second example, when we partition [−1, 1] into subintervals, one of these will contain the point 0, and when we compute the upper approximating sum for the integral of f over [−1, 1], this point will contribute +∞ times the length of the subinterval containing it. Thus the upper approximating sum is +∞. On the other hand, the lower approximating sum is 0, and again the Riemann integral does not exist.
There are three convergence theorems satisfied by the Lebesgue integral: the monotone convergence theorem, Fatou's lemma, and the dominated convergence theorem. In each of these, a sequence of functions fn converges pointwise to a limiting function f; pointwise convergence just means that lim fn(x) = f(x) for every x. Before we state the theorems, we give two examples of pointwise convergence which arise in probability theory. The limit function f is not the Dirac delta; the Lebesgue integral of this function was already seen in an earlier example. We could modify either example so that the integrals of the fn do not converge to the integral of the limit. By definition, the smaller of the two limits is lim inf of the integrals of fn. Theorem 3. Assume that fn → f pointwise and that there is a nonnegative integrable function g (i.e., the integral of g is finite) such that |fn| ≤ g for every n; then the integrals of fn converge to the integral of f.
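A standard example of the gap Fatou's lemma allows (an assumption here; the text's example may differ in detail) is fn = n on (0, 1/n) and 0 elsewhere: every integral equals 1, yet fn → 0 pointwise, so the lim inf of the integrals (1) strictly exceeds the integral of the limit (0).

```python
from fractions import Fraction

def fn(n, x):
    """The spike function: n on the interval (0, 1/n), zero elsewhere."""
    return n if 0 < x < Fraction(1, n) else 0

def integral_fn(n):
    """Lebesgue integral of f_n: height n times interval length 1/n."""
    return Fraction(n) * Fraction(1, n)

def pointwise_limit(x):
    """For each fixed x, fn(n, x) = 0 once n > 1/x, so the limit is 0."""
    return 0
```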
Remark 1. We repeat these examples and give some examples of infinite probability spaces as well. Toss a coin n times, so that Ω is the set of all sequences of H and T which have n components. We will use this space quite a bit, and so give it a name: Ωn. Let F be the collection of all subsets of Ωn.
Suppose the probability of H on each toss is p, a number between zero and one. For each ω in Ωn, define IP(ω) to be the product of p for each H and q = 1 − p for each T appearing in ω. Toss a coin repeatedly without stopping, so that Ω is the set of all nonterminating sequences of H and T; we call this space Ω∞. However, for each positive integer n, the set {H on the first n tosses} is in Fn and hence in F.
Let A ∈ F be given. If there is a positive integer n such that A ∈ Fn, then the description of A depends on only the first n tosses, and it is clear how to define IP(A). For example, if A is the set of sequences whose first toss is H and whose second toss is T, then A is in F2. Some sets in F do not depend on finitely many tosses; such is the case for the set {H on every toss}. To determine the probability of these sets, we write them in terms of sets which are in Fn for positive integers n, and then use the properties of probability measures listed in Remark 1. We are in a case very similar to Lebesgue measure: every point has measure zero, but sets can have positive measure.
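The set {H on every toss} illustrates this: it is the decreasing intersection of the sets An = {H on the first n tosses}, each of which is in Fn with IP(An) = p^n, so continuity of the measure forces IP(H on every toss) = lim p^n = 0 whenever p < 1. A numeric sketch with an illustrative p:

```python
def prob_first_n_heads(p, n):
    """IP(A_n) for the set A_n = {H on the first n tosses}."""
    return p ** n

# With p = 1/2 (an assumed value), the probabilities shrink to zero.
probs = [prob_first_n_heads(0.5, n) for n in (1, 10, 50)]
```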
Of course, the only sets which can have positive probability in Ω∞ are those which contain uncountably many elements. For example, 3/4 is a dyadic rational.
Every dyadic rational in (0, 1) corresponds to two sequences ω. The only way this can be is for LX to be Lebesgue measure.
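The two-sequence phenomenon can be illustrated by mapping toss sequences to [0, 1] via X(ω) = Σ ωk 2^−k, with H contributing 1 and T contributing 0 (an assumed convention consistent with the discussion here). The dyadic rational 3/4 is hit exactly by HHTTT…, while HTHHH… approaches it, the infinite tail of H's supplying the missing 1/4.

```python
from fractions import Fraction

def X(tosses):
    """Partial sum of the binary expansion determined by the tosses."""
    return sum(Fraction(1, 2 ** k)
               for k, t in enumerate(tosses, start=1) if t == "H")

a = X("HH" + "T" * 30)   # binary 0.110000...: exactly 3/4
b = X("HT" + "H" * 30)   # binary 0.101111...: 3/4 minus 2**-32
```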
It is interesting to consider what LX would look like if we take a value of p other than 1/2 when we construct the probability measure IP on Ω∞. We conclude this example with another look at the Cantor set of Example 3. Let Ω_pairs be the subset of Ω∞ in which every even-numbered toss is the same as the odd-numbered toss immediately preceding it.
Therefore, none of these numbers is in C′. Similarly, the numbers in the other removed intervals can be excluded in the same way. Continuing this process, we see that C′ will not contain any of the numbers which were removed in the construction of the Cantor set C in Example 3. In addition to tossing a coin, another common random experiment is to pick a number, perhaps using a random number generator.
Here are some probability spaces which correspond to different ways of picking a number at random. In the first, we construct the experiment so that the probability of getting 1 is 4/9, the probability of getting 4 is 4/9, and the probability of getting 16 is 1/9. For example, the probability of the interval (0, 5] is 8/9, because this interval contains the numbers 1 and 4, but not the number 16. This distribution was discussed immediately following Definition 2.
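For a discrete law like this one, the probability of any set is the sum of the masses it contains. A sketch, assuming the masses 4/9 at 1, 4/9 at 4, and 1/9 at 16 (consistent with the interval probability 8/9 quoted here):

```python
from fractions import Fraction

# Assumed three-point law.
law = {1: Fraction(4, 9), 4: Fraction(4, 9), 16: Fraction(1, 9)}

def prob_interval(a, b):
    """Probability of the half-open interval (a, b]."""
    return sum(m for x, m in law.items() if a < x <= b)
```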
Since there are infinitely many numbers in [0, 1], this requires that every number have probability zero of being chosen. Nonetheless, we can speak of the probability that the number chosen lies in a particular set, and if the set has uncountably many points, then this probability can be positive.
Nonetheless, both of the earlier examples fit this construction, which we repeat below. When a condition holds with probability one, we say it holds almost surely. Theorem 4. In fact, the market measure and the risk-neutral measures in financial markets are related this way. We say that the function φ in the theorem is the Radon–Nikodym derivative.
We write dĨP/dIP for φ in the theorem. The standard machine argument proceeds in four steps. Step 1. Assume that f is an indicator function, i.e., f = 1_A for a Borel set A; in that case, the desired equation follows directly from the definitions. Step 2. Now that we know the equation holds for indicator functions, extend it by linearity to simple functions, i.e., finite linear combinations of indicator functions. Step 3. Pass to nonnegative functions by approximating them from below with simple functions and invoking the monotone convergence theorem. Step 4. In the last step, we consider an integrable function f, which can take both positive and negative values, and write it as the difference of its positive and negative parts.
The probability that ω lies in both A and B is then the product of the probabilities. Suppose you are not told ω, but you are told that ω ∈ A; conditional on this information, the probability that ω ∈ B is unchanged. This discussion is symmetric with respect to A and B: if A and B are independent and you know that ω ∈ B, the conditional probability of A is likewise unchanged. Whether two sets are independent depends on the probability measure IP. If you are told that the coin tosses resulted in a head on the first toss, the probability of B, which is now the probability of a T on the second toss, is still what it was before the conditioning. However, if the coin is biased toward T and you are told that the first toss resulted in H, it becomes very likely that the two tosses result in one head and one tail.
In fact, conditioned on getting an H on the first toss, the probability of one H and one T is the probability of a T on the second toss. We say that G and H are independent if every set in G is independent of every set in H, i.e., IP(A ∩ B) = IP(A) IP(B) for every A ∈ G and B ∈ H.
We say that the different tosses are independent when we construct probabilities this way. It is also possible to construct probabilities such that the different tosses are not independent, as shown by the following example. These sets are not independent.
In the probability space of three independent coin tosses, the price S2 of the stock at time 2 is independent of the ratio S3/S2. This is because S2 depends on only the first two coin tosses, whereas S3/S2 is either u or d, depending on whether the third coin toss is H or T. Suppose X and Y are independent random variables.
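The independence of S2 and S3/S2 can be checked by brute force over the eight outcomes: the joint mass of the pair factors into the product of the marginal masses. The parameters S0, u, d and IP(H) = p below are illustrative assumptions.

```python
from itertools import product
from fractions import Fraction

S0, u, d, p = Fraction(4), Fraction(2), Fraction(1, 2), Fraction(1, 3)

joint, law_s2, law_ratio = {}, {}, {}
for w in product("HT", repeat=3):
    prob = Fraction(1)
    for t in w:
        prob *= p if t == "H" else 1 - p
    s2 = S0 * (u if w[0] == "H" else d) * (u if w[1] == "H" else d)
    ratio = u if w[2] == "H" else d           # S3/S2 depends only on toss 3
    joint[(s2, ratio)] = joint.get((s2, ratio), Fraction(0)) + prob
    law_s2[s2] = law_s2.get(s2, Fraction(0)) + prob
    law_ratio[ratio] = law_ratio.get(ratio, Fraction(0)) + prob

factorizes = all(joint[(x, y)] == law_s2[x] * law_ratio[y]
                 for x in law_s2 for y in law_ratio)
```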
Then X and Y are independent if and only if the joint density is the product of the marginal densities; this follows from the definition of independence applied to the joint distribution. Theorem 5. Let g and h be functions from IR to IR, and let X and Y be independent. Then g(X) and h(Y) are also independent random variables. To prove this, verify the defining identity for indicator functions and then use the standard machine to get the result for general functions g and h. Unfortunately, two random variables can have zero correlation and still not be independent.
Consider the following example. We show that Y is also a standard normal random variable, that X and Y are uncorrelated, but that X and Y are not independent. The last claim is easy to see. We next check that Y is standard normal. Being standard normal, both X and Y have expected value zero. We conclude this section with the observation that for independent random variables, the variance of their sum is the sum of their variances.
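A Monte Carlo sketch of this phenomenon, using the construction Y = X·Z with Z = ±1 independent of X (an assumed construction; the text's example may differ in detail): Y is then standard normal and uncorrelated with X, yet |Y| = |X| always, so X and Y are far from independent.

```python
import random

random.seed(0)
n = 200_000
xs = [random.gauss(0.0, 1.0) for _ in range(n)]
ys = [x * random.choice((-1.0, 1.0)) for x in xs]

# Empirical E[XY]; since both means are ~0 and variances ~1, this is
# essentially the correlation, and it comes out close to zero.
mean_xy = sum(x * y for x, y in zip(xs, ys)) / n
# Dependence is plain: the magnitudes agree on every single draw.
same_magnitude = all(abs(x) == abs(y) for x, y in zip(xs, ys))
```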
We now return to property (k) for conditional expectations, presented in the lecture dated October 19. The partial averaging equation for general X independent of H follows by the standard machine. Here is the first one. We are not going to give the proof of this theorem, but here is an argument which makes it plausible.
We will use this argument later when developing stochastic calculus. The argument proceeds in two steps. We next check that Var(Yn) converges to zero.
This is because the denominator in the definition of Yn is so large that the variance of Yn converges to zero. If we want to prevent this, we should divide by √n rather than n.
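The two normalizations can be compared exactly. For Mn = number of heads in n fair tosses, Var(Mn) = n/4, so dividing the centered sum by n drives the variance to zero, while dividing by √n keeps it at 1/4, which is the normalization the Central Limit Theorem uses.

```python
from fractions import Fraction

def var_over_n(n):
    """Var[(M_n - n/2) / n] = (n/4) / n**2 = 1/(4n)."""
    return Fraction(n, 4) / n ** 2

def var_over_sqrt_n(n):
    """Var[(M_n - n/2) / sqrt(n)] = (n/4) / n = 1/4 for every n."""
    return Fraction(n, 4) / n
```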
The Central Limit Theorem asserts that as n grows, the distribution of the normalized sum approaches the standard normal distribution. At each time step, the stock price either goes up by a factor of u or down by a factor of d. Note that we are not specifying the probability of heads here. Consider a sequence of 3 tosses of the coin (see Fig.). We write Sk(ω) for the stock price at time k determined by the tosses ω.
Note that Sk(ω) depends only on the first k tosses. Thus in the 3-coin-toss example we write, for instance, S1(ω) = S1(ω1) and S2(ω) = S2(ω1, ω2). Each Sk is an F-measurable function from Ω to IR; that is, the preimage map of Sk carries B(IR) into F. In general we denote the collection of sets determined by the first k tosses by Fk.

Conditional Expectation

Example 2. The collection F2 of sets determined by the first two tosses consists of: the sets determined by fixing the first two tosses;
the complements of the above sets; and any union of the above sets (including the complements). Definition 2. Let X be a random variable mapping Ω into IR. In general, if X is a random variable mapping Ω into IR, we can speak of the collection of sets determined by X.
This collection consists of the preimages of values of X, the complements of those sets, and any union of them. Denote the estimate by IE(S1|S2). In particular: if we know that S2(ω) = 16, then we know the first two tosses, and hence we know S1(ω); if we know that S2(ω) = 4, we know only that one of the first two tosses was H and the other was T, and we estimate S1 by averaging its values over these outcomes. Problem 4 is the Dirichlet problem.
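This averaging can be sketched directly: compute IE(S1 | S2) by averaging S1 over each set {S2 = y}. The parameters S0 = 4, u = 2, d = 1/2 and the fair coin are illustrative assumptions; the text's example may use other probabilities.

```python
from itertools import product
from fractions import Fraction

S0, u, d = Fraction(4), Fraction(2), Fraction(1, 2)

def price(k, w):
    """Stock price after the first k tosses of the sequence w."""
    s = S0
    for t in w[:k]:
        s *= u if t == "H" else d
    return s

def cond_exp_S1_given_S2():
    """Return {value y of S2: IE(S1 | S2 = y)} for three fair tosses."""
    num, den = {}, {}
    for w in product("HT", repeat=3):
        prob = Fraction(1, 8)
        y = price(2, w)
        num[y] = num.get(y, Fraction(0)) + prob * price(1, w)
        den[y] = den.get(y, Fraction(0)) + prob
    return {y: num[y] / den[y] for y in num}
```

On the set {S2 = 4} the conditional expectation averages the two possible S1 values; on {S2 = 16} and {S2 = 1} it is determined outright.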