Probability and Distribution Theory

1. How many different 5 card poker hands can be dealt from a deck of 52 cards?

There are (52 choose 5) = 52!/(5!47!) = (52x51x50x...x1)/[(5x4x3x2x1)(47x46x...x1)] = 2,598,960 possible hands.

2. Compute the probability of being dealt 4 of a kind in a poker hand.

There are 48(13) possible hands containing 4 of a kind and any of the remaining 48 cards. Thus, given the answer to the previous problem, the probability of being dealt one of these hands is 48(13)/2,598,960 = .00024, or less than one chance in 4000.
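As a quick numeric check of these two counts, here is a small Python sketch using only the standard library (the variable names are illustrative):

from math import comb

hands = comb(52, 5)            # number of distinct 5-card hands
four_kind = 13 * 48            # 13 ranks times 48 choices for the fifth card
print(hands)                   # 2598960
print(four_kind / hands)       # approximately 0.00024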

3. Suppose a lottery ticket costs $1 per play. The game is played by drawing 6 numbers without replacement from the numbers 1 to 48. If you guess all six numbers, you win the prize. Now, suppose that N = the number of tickets sold and P = the size of the prize. N and P are related by

N = 5 + 1.2P
P = 1 + 0.4N

N and P are in millions. What is the expected value of a ticket in this game? (Don't forget that you might have to share the prize with other winners.)

The size of the prize and number of tickets sold are jointly determined. The solutions to the two equations are N = 11.92 million tickets and P = $5.77 million. The number of possible combinations of 48 numbers taken 6 at a time without replacement is (48 choose 6) = 48!/(6!42!) = (48x47x46x...x1)/[(6x5x4x3x2x1)(42x41x...x1)] = 12,271,512, so the probability of making the right choice is 1/12,271,512 = .000000081. The expected number of winners is the expected value of a binomial random variable with N trials and this success probability, which is N times the probability, or 11.92/12.27 = .97, or roughly 1. Thus, one would not expect to have to share the prize. Now, the expected value of a ticket is Prob[win](5.77 million - 1) + Prob[lose](-1), or about -53 cents.
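The arithmetic can be verified with a short Python sketch (the ticket and prize figures are the ones obtained above):

from math import comb

N = 11.92e6                        # tickets sold (from the solution)
P = 5.77e6                         # prize (from the solution)
p_win = 1 / comb(48, 6)            # one chance in 12,271,512
expected_winners = N * p_win       # roughly 0.97
value = p_win * (P - 1) + (1 - p_win) * (-1)
print(expected_winners, value)     # about 0.97 and about -0.53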

4. If x has a normal distribution with mean 1 and standard deviation 3, what are

(b) Prob[x > -1 | x < 1.5]. Using the normal table,

(b) Prob[x > -1 | x < 1.5] = Prob[-1 < x < 1.5] / Prob[x < 1.5]. Prob[-1 < x < 1.5] = Prob[(-1 - 1)/3 < z < (1.5 - 1)/3]

= Prob[z < 1/6] - Prob[z < -2/3] = .5662 - .2525 = .3137. The conditional probability is .3137/.5662 = .5540.
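A check of this conditional probability using scipy's normal distribution (a sketch, assuming scipy is available):

from scipy.stats import norm

x = norm(loc=1, scale=3)
joint = x.cdf(1.5) - x.cdf(-1)     # Prob[-1 < x < 1.5], about 0.3137
cond = joint / x.cdf(1.5)          # divide by Prob[x < 1.5] = .5662
print(joint, cond)                 # about 0.3137 and 0.5540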

5. Approximately what is the probability that a random variable with chi-squared distribution with 264 degrees of freedom is less than 297?

We use the approximation in (3-37), z = [2(297)]^(1/2) - [2(264) - 1]^(1/2) = 1.4155, so the probability is approximately .9215. To six digits, the approximation is .921539 while the correct value is .921559.
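The approximation and the exact value can be compared with a brief Python sketch (assuming scipy is available):

from math import sqrt
from scipy.stats import norm, chi2

z = sqrt(2 * 297) - sqrt(2 * 264 - 1)    # about 1.4155
print(norm.cdf(z))                        # approximation, about .9215
print(chi2.cdf(297, df=264))              # exact, about .9216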

6. Chebychev Inequality For the following two probability distributions, find the lower limit of the probability of the indicated event using the Chebychev inequality (3-18) and the exact probability using the appropriate table:

(b) x ~ chi-squared, 8 degrees of freedom, 0 < x < 16.

The inequality given in (3-18) states that Prob[|x - μ| < kσ] ≥ 1 - 1/k². Note that the result is not informative if k is less than or equal to 1.

(a) The range is 4/3 standard deviations, so the lower limit is 1 - (3/4)² or 7/16 = .4375. From the standard normal table, the actual probability is 1 - 2Prob[z < -4/3] = .8175.

(b) The mean of the distribution is 8 and the standard deviation is 4. The range is, therefore, μ ± 2σ. The lower limit according to the inequality is 1 - (1/2)² = .75. The actual probability is the cumulative chi-squared(8) at 16, which is a bit larger than .95. (The actual value is .9576.)
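A short Python sketch comparing the Chebychev bound with the exact chi-squared probability in part (b) (assuming scipy is available):

from scipy.stats import chi2

mean, sd = 8, 4                         # chi-squared(8): mean 8, variance 16
k = (16 - mean) / sd                    # the interval 0 < x < 16 is mean +/- 2 sd
bound = 1 - 1 / k**2                    # Chebychev lower bound, .75
exact = chi2.cdf(16, df=8) - chi2.cdf(0, df=8)
print(bound, exact)                     # 0.75 and about 0.9576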

7. Given the following joint probability distribution,

(a) Compute the following probabilities: Prob[Y < 2], Prob[Y < 2, X > 0], Prob[Y = 1, X ≥ 1].

(b) Find the marginal distributions of X and Y.

(c) Calculate E[X], E[Y], Var[X], Var[Y], Cov[X,Y], and E[X²Y³].

(d) Calculate E[YX²] and Cov[Y,X²].

(e) What are the conditional distributions of Y given X = 2 and of X given Y > 0?

(f) Find E[Y|X] and Var[Y|X]. Obtain the two parts of the variance decomposition Var[Y] = E_X[Var[Y|X]] + Var_X[E[Y|X]].

We first obtain the marginal probabilities. The joint distribution and its marginals are

          X = 0   X = 1   X = 2   Total
  Y = 0    .05     .10     .03     .18
  Y = 1    .21     .11     .19     .51
  Y = 2    .08     .15     .08     .31
  Total    .34     .36     .30    1.00

so that for X: P(0) = .34, P(1) = .36, P(2) = .30, and for Y: P(0) = .18, P(1) = .51, P(2) = .31.

(a) Prob[Y < 2] = .18 + .51 = .69. Prob[Y < 2, X > 0] = .1 + .03 + .11 + .19 = .43. Prob[Y = 1, X ≥ 1] = .11 + .19 = .30.

(b) They are shown above.

(c) E[X] = 0(.34) + 1(.36) + 2(.30) = .96 E[Y] = 0(.18) + 1(.51) + 2(.31) = 1.13

Var[X] = 1.56 - .96² = .6384 Var[Y] = 1.75 - 1.13² = .4731

E[XY] = 1(1)(.11) + 1(2)(.15) + 2(1)(.19) + 2(2)(.08) = 1.11, so Cov[X,Y] = 1.11 - (.96)(1.13) = .0252.

(d) E[YX²] = 1(1²)(.11) + 1(2²)(.19) + 2(1²)(.15) + 2(2²)(.08) = 1.81. Cov[Y,X²] = 1.81 - 1.13(1.56) = .0472.

(e) Prob[Y = 0 | X = 2] = .03/.3 = .1, Prob[Y = 1 | X = 2] = .19/.3 = .633, Prob[Y = 2 | X = 2] = .08/.3 = .267.

Prob[X = 0 | Y > 0] = (.21 + .08)/(.51 + .31) = .3537, Prob[X = 1 | Y > 0] = (.11 + .15)/(.51 + .31) = .3171, Prob[X = 2 | Y > 0] = (.19 + .08)/(.51 + .31) = .3292.

(f) E[Y | X=0] = 0(.05/.34) + 1(.21/.34) + 2(.08/.34) = 1.088

E[Y | X=1] = 0(.10/.36) + 1(.11/.36) + 2(.15/.36) = 1.139

E[Y | X=2] = 0(.03/.30) + 1(.19/.30) + 2(.08/.30) = 1.167

The conditional variances are Var[Y | X=0] = (.21 + 4(.08))/.34 - 1.088² = .3751, Var[Y | X=1] = (.11 + 4(.15))/.36 - 1.139² = .6749, and Var[Y | X=2] = (.19 + 4(.08))/.30 - 1.167² = .3381. Then E[Var[Y|X]] = .34(.3751) + .36(.6749) + .30(.3381) = .4719 and Var[E[Y|X]] = .34(1.088²) + .36(1.139²) + .30(1.167²) - 1.13² = 1.2781 - 1.2769 = .0012. E[Var[Y|X]] + Var[E[Y|X]] = .4719 + .0012 = .4731 = Var[Y].
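The marginals, conditional moments, and the decomposition can be reproduced numerically; the following Python sketch (array names are illustrative) uses the joint table given above:

import numpy as np

p = np.array([[.05, .10, .03],      # rows: Y = 0, 1, 2
              [.21, .11, .19],      # columns: X = 0, 1, 2
              [.08, .15, .08]])
vals = np.arange(3)
y, x = vals[:, None], vals[None, :]
px, py = p.sum(axis=0), p.sum(axis=1)     # marginals of X and Y
Ey = py @ vals                            # 1.13
Ey_x = (p * y).sum(axis=0) / px           # conditional means 1.088, 1.139, 1.167
Vy_x = (p * y**2).sum(axis=0) / px - Ey_x**2
EVar = px @ Vy_x                          # about .472
VarE = px @ Ey_x**2 - Ey**2               # about .001
VarY = py @ vals**2 - Ey**2               # .4731
print(EVar + VarE, VarY)                  # the two agree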

8. Minimum mean squared error predictor. For the joint distribution in Exercise 7, compute

E[y - E[y|x]]². Now, find the a and b which minimize the function E[(y - a - bx)²]. Given the solutions, verify that E[y - E[y|x]]² ≤ E[(y - a - bx)²]. The result is fundamental in least squares theory. Verify that the a and b which you found satisfy (3-68) and (3-69).

E[y - E[y|x]]² = (y=0) .05(0 - 1.088)² + .10(0 - 1.139)² + .03(0 - 1.167)²
              (y=1) + .21(1 - 1.088)² + .11(1 - 1.139)² + .19(1 - 1.167)²
              (y=2) + .08(2 - 1.088)² + .15(2 - 1.139)² + .08(2 - 1.167)²
            = .4719 = E[Var[y|x]].

The necessary conditions for minimizing the function with respect to a and b are

∂E[(y - a - bx)²]/∂a = 2E{[y - a - bx](-1)} = 0
∂E[(y - a - bx)²]/∂b = 2E{[y - a - bx](-x)} = 0.

Dividing each by -2 and distributing the expectation produces E[y] - a - bE[x] = 0 and E[xy] - aE[x] - bE[x²] = 0. Solve the first for a = E[y] - bE[x] and substitute this in the second to obtain

E[xy] - E[x](E[y] - bE[x]) - bE[x²] = 0, or (E[xy] - E[x]E[y]) = b(E[x²] - (E[x])²), so that b = Cov[x,y]/Var[x],

which gives b = .0252/.6384 = .0395 and a = E[y] - bE[x] = 1.13 - .0395(.96) = 1.092.

The linear function compared to the conditional mean produces

            x = 0    x = 1    x = 2
  E[y|x]    1.088    1.139    1.167
  a + bx    1.092    1.132    1.171

The conditional mean function is nearly linear in x, so the two functions are close. Now, repeating the calculation above using a + bx instead of E[y|x] produces

E[y - a - bx]² = (y=0) .05(0 - 1.092)² + .10(0 - 1.132)² + .03(0 - 1.171)²
              (y=1) + .21(1 - 1.092)² + .11(1 - 1.132)² + .19(1 - 1.171)²
              (y=2) + .08(2 - 1.092)² + .15(2 - 1.132)² + .08(2 - 1.171)²
            = .4721, which is (if only slightly) larger than .4719, as the theory requires.
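The least squares coefficients and the two mean squared errors can be checked with a similar sketch (again using the joint table from Exercise 7):

import numpy as np

p = np.array([[.05, .10, .03],
              [.21, .11, .19],
              [.08, .15, .08]])      # rows: y = 0, 1, 2; columns: x = 0, 1, 2
vals = np.arange(3)
y, x = vals[:, None], vals[None, :]
px, py = p.sum(axis=0), p.sum(axis=1)
Ex, Ey = px @ vals, py @ vals
b = ((p * x * y).sum() - Ex * Ey) / (px @ vals**2 - Ex**2)   # about .0395
a = Ey - b * Ex                                              # about 1.092
Ey_x = (p * y).sum(axis=0) / px
mse_cond = (p * (y - Ey_x)**2).sum()       # E[y - E[y|x]]^2
mse_lin = (p * (y - a - b * x)**2).sum()   # the (slightly) larger of the two
print(a, b, mse_cond, mse_lin)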

9. Suppose x has an exponential distribution, f(x) = θe^(-θx), x > 0. (For an application, see Examples 3.5, 3.8, and 3.10.) Find the mean, variance, skewness, and kurtosis of x. (Hints: The latter two are defined in Section 3.3. The gamma integral in Section 5.4.2b will be useful for finding the raw moments.)

The raw moments are E[x^r] = ∫₀^∞ θ x^r e^(-θx) dx. These can be obtained by using the gamma integral. Making the appropriate substitutions, we have

E[x^r] = [θ Γ(r+1)]/θ^(r+1) = r!/θ^r.

The first four moments are E[x] = 1/θ, E[x²] = 2/θ², E[x³] = 6/θ³, and E[x⁴] = 24/θ⁴. The mean is, thus, 1/θ and the variance is 2/θ² - (1/θ)² = 1/θ². For the skewness and kurtosis coefficients, we have E[x - 1/θ]³ = E[x³] - 3E[x²]/θ + 3E[x]/θ² - 1/θ³ = 2/θ³.

The normalized skewness coefficient is (2/θ³)/(1/θ²)^(3/2) = 2. The fourth central moment is

E[x - 1/θ]⁴ = E[x⁴] - 4E[x³]/θ + 6E[x²]/θ² - 4E[x]/θ³ + 1/θ⁴ = 9/θ⁴,

so the kurtosis coefficient is (9/θ⁴)/(1/θ²)² = 9 and the degree of excess is 6.
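These four results can be confirmed against scipy's exponential distribution (a sketch; scipy parameterizes the distribution by scale = 1/θ and reports excess kurtosis):

from scipy.stats import expon

theta = 2.0                                   # any positive value
mean, var, skew, exkurt = expon(scale=1/theta).stats(moments='mvsk')
print(mean, var, skew, exkurt)                # 1/theta, 1/theta**2, 2.0, 6.0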

10. For the random variable in Exercise 9, what is the probability distribution of the random variable y = e^(-x)? What is E[y]? Prove that the distribution of this y is a special case of the beta distribution in (3-40).

If y = e^(-x), then x = -ln y, so the Jacobian is |dx/dy| = 1/y. The distribution of y is, therefore, f(y) = θe^(-θ(-ln y))(1/y) = θy^θ/y = θy^(θ-1) for 0 < y < 1.

This is in the form of (3-40) with y instead of x, c = 1, β = 1, and α = θ. The mean is therefore E[y] = α/(α + β) = θ/(θ + 1).

11. If the probability density of y is ay²(1 - y)³ for y between 0 and 1, what is a? What is the probability that y is between .25 and .75?

This is a beta distribution of the form in (3-40) with α = 3 and β = 4. Therefore, the constant is a = Γ(3+4)/[Γ(3)Γ(4)] = 60. The probability is

∫ from .25 to .75 of 60y²(1 - y)³ dy = 60 ∫ from .25 to .75 of (y² - 3y³ + 3y⁴ - y⁵) dy = 60(y³/3 - 3y⁴/4 + 3y⁵/5 - y⁶/6) evaluated from .25 to .75 = .79296.
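A numeric check with scipy's beta distribution (a sketch):

from scipy.stats import beta

print(beta(3, 4).pdf(0.5) / (0.5**2 * 0.5**3))      # the constant, 60.0
print(beta(3, 4).cdf(0.75) - beta(3, 4).cdf(0.25))  # about 0.79296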

12. Suppose x has the following discrete probability distribution:

  x       1     2     3     4
  f(x)   .1    .2    .4    .3

Find the exact mean and variance of X. Now, suppose Y = 1/X. Find the exact mean and variance of Y. Find the mean and variance of the linear and quadratic approximations to Y = f(X). Are the mean and variance of the quadratic approximation closer to the true mean than those of the linear approximation?

We will require a number of moments of x, which we derive first:

E[x] = .1(1) + .2(2) + .4(3) + .3(4) = 2.9 = μ
E[x²] = .1(1) + .2(4) + .4(9) + .3(16) = 9.3
Var[x] = 9.3 - 2.9² = .89 = σ²

For later use, we also obtain

E[x " |]3 = .1(1 " 2.9)3 + ... = ".432

The function is y = 1/x. The exact mean and variance are E[y] = .1(1) + .2(1/2) + .4(1/3) + .3(1/4) = .40833 and Var[y] = .1(1²) + .2(1/2²) + .4(1/3²) + .3(1/4²) - .40833² = .04645. The linear Taylor series approximation around μ is y ≈ 1/μ + (-1/μ²)(x - μ). The mean of the linear approximation is 1/μ = .3448 while its variance is (1/μ⁴)Var[x - μ] = σ²/μ⁴ = .01258. The quadratic approximation is

y ≈ 1/μ + (-1/μ²)(x - μ) + (1/2)(2/μ³)(x - μ)² = 1/μ - (1/μ²)(x - μ) + (1/μ³)(x - μ)².

The mean of this approximation is E[y] ≈ 1/μ + σ²/μ³ = .3813 while the variance is approximated by the variance of the right hand side,

(1/μ⁴)Var[x - μ] + (1/μ⁶)Var[(x - μ)²] - (2/μ⁵)Cov[(x - μ), (x - μ)²]

= (1/μ⁴)σ² + (1/μ⁶)(E[x - μ]⁴ - σ⁴) - (2/μ⁵)E[x - μ]³ = .01498.

Neither approximation provides a close estimate of the variance. Note that in both cases, it would be possible simply to evaluate the approximations at the four values of x and compute the means and variances directly. The virtue of the approach above is that it can be applied when there are many values of x, and is necessary when the distribution of x is continuous.

13. Interpolation in the chi-squared table. In order to find a percentage point in the chi-squared table which is between two values, we interpolate linearly between the reciprocals of the degrees of freedom. The chi-squared distribution is defined for noninteger values of the degrees of freedom parameter [see (3-39)], but your table does not contain critical values for noninteger values. Using linear interpolation, find the 99% critical value for a chi-squared variable with degrees of freedom parameter 11.3. (For an application of this calculation, see Section 8.5.1. and Example 8.6.)

The 99% critical values for 11 and 12 degrees of freedom are 24.725 and 26.217. To interpolate linearly in the reciprocals of the degrees of freedom for the value corresponding to 11.3 degrees of freedom, we use c = 26.217 + [(1/11.3 - 1/12)/(1/11 - 1/12)](24.725 - 26.217) = 25.2009.
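The interpolation, and a comparison with the exact noninteger-degrees-of-freedom quantile, can be done in a few lines (a sketch assuming scipy, which accepts noninteger df):

from scipy.stats import chi2

c11, c12 = 24.725, 26.217                     # 99% points for 11 and 12 df
w = (1/11.3 - 1/12) / (1/11 - 1/12)           # interpolation weight
approx = c12 + w * (c11 - c12)                # about 25.20
print(approx, chi2.ppf(0.99, df=11.3))        # the exact value is close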

14. Suppose x has a standard normal distribution. What is the pdf of the following random variable?

y = [1/√(2π)] e^(-x²/2), 0 < y < 1/√(2π). [Hints: You know the distribution of z = x² from (3-30). The density of this z is given in (3-39). Solve the problem in terms of y = g(z).]

We know that z = x² is distributed as chi-squared with 1 degree of freedom. We seek the density of y = ke^(-z/2) where k = (2π)^(-1/2). The inverse transformation is z = 2ln k - 2ln y, so the Jacobian is |dz/dy| = |-2/y| = 2/y. The density of z is that of a gamma with parameters 1/2 and 1/2 [see (3-39) and the succeeding discussion],

f(z) = [(1/2)^(1/2)/Γ(1/2)] e^(-z/2) z^(-1/2).

Note, Γ(1/2) = √π. Making the substitution for z and multiplying by the Jacobian produces

f(y) = [(1/2)^(1/2)/Γ(1/2)] e^(-(1/2)(2ln k - 2ln y)) (2ln k - 2ln y)^(-1/2) (2/y).

The exponential term reduces to y/k and the scale factor is equal to 2k/y, so these combine to the constant 2. Therefore, the density is simply f(y) = 2(2ln k - 2ln y)^(-1/2) = √2 (ln k - ln y)^(-1/2) = √2 {ln[1/(y(2π)^(1/2))]}^(-1/2), 0 < y < (2π)^(-1/2).

15. The fundamental probability transformation. Suppose that the continuous random variable x has cumulative distribution F(x). What is the probability distribution of the random variable y = F(x)? (Observation: This result forms the basis of the simulation of draws from many continuous distributions.)

The inverse transformation is x(y) = F⁻¹(y), so the Jacobian is dx/dy = dF⁻¹(y)/dy = 1/f(x(y)), where f(.) is the density of x. The density of y is f(y) = f[F⁻¹(y)] × 1/f(x(y)) = 1, 0 < y < 1. Thus, y has a continuous uniform distribution. Note, then, that for purposes of obtaining a random sample from the distribution, we can sample y1,...,yn from the continuous uniform distribution, then obtain x1 = x(y1), ..., xn = x(yn).

16. Random number generators. Suppose x is distributed uniformly between 0 and 1, so f(x) = 1, 0 < x < 1. Let θ be some positive constant. What is the pdf of y = -(1/θ)ln x? (Hint: See Section 3.5.) Does this suggest a means of simulating draws from this distribution if one has a random number generator which will produce draws from the uniform distribution? To continue, suggest a means of simulating draws from a logistic distribution, f(x) = e^(-x)/(1 + e^(-x))².

The inverse transformation is x = e^(-θy), so the Jacobian is |dx/dy| = θe^(-θy). Since f(x) = 1, this Jacobian is also the density of y, f(y) = θe^(-θy), the exponential density. One can simulate draws y from any exponential distribution with parameter θ by drawing observations x from the uniform distribution and computing y = -(1/θ)ln x. Likewise, for the logistic distribution, the CDF is F(x) = 1/(1 + e^(-x)). Thus, draws y from the uniform distribution may be taken as draws on F(x). Then, we may obtain x as x = ln[F(x)/(1 - F(x))] = ln[y/(1 - y)].
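Both devices are easy to try numerically; the following Python sketch (θ and the sample size are arbitrary choices) draws exponential and logistic samples from uniforms:

import numpy as np

rng = np.random.default_rng(0)
u = rng.uniform(size=100_000)
theta = 2.0
y_expon = -np.log(u) / theta            # exponential draws, mean 1/theta
y_logit = np.log(u / (1 - u))           # logistic draws, mean 0
print(y_expon.mean(), y_logit.mean())   # about 0.5 and about 0.0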

17. Suppose that x1 and x2 are distributed as independent standard normal. What is the joint distribution of y1 = 2 + 3x1 + 2x2 and y2 = 4 + 5x1? Suppose you were able to obtain two samples of observations from independent standard normal distributions. How would you obtain a sample from the bivariate normal distribution with means 1 and 2, variances 4 and 9, and covariance 3? We may write the pair of transformations as

y = b + Ax, with b = (2, 4)′ and A having rows (3, 2) and (5, 0). The problem also states that x ~ N[0, I]. From (3-103), therefore, we have y ~ N[b + A0, AIA′] = N[b, AA′], where AA′ has rows (13, 15) and (15, 25). That is, y1 and y2 are jointly normally distributed with means 2 and 4, variances 13 and 25, and covariance 15.

For the second part of the problem, using our result above, we would require the A and b such that

"4 3l

3 9 . The vector is obviously b = (i,2)'. In order to find the elements of A, b + AO = (i,2)' and AA' =

there are a few ways to proceed. The Cholesky factorization used in Exercise 9 is probably the simplest. Let y1 = 1 + 2x1. Thus, y1 has mean 1 and variance 4 as required. Now, let y2 = 2 + w1x1 + w2x2. The covariance between y1 and y2 is 2w1, since x1 and x2 are uncorrelated. Thus, 2w1 = 3, or w1 = 1.5. Now, Var[y2] = w1² + w2² = 9, so w2² = 9 - 1.5² = 6.75 and w2 = 2.598. The transformation matrix is, therefore, A with rows (2, 0) and (1.5, 2.598).

This is the Cholesky factorization of the desired AA′ above. It is worth noting that this provides a simple method of finding the requisite A matrix for any number of variables. Finally, an alternative method would be to use the characteristic roots and vectors of AA′. The inverse square root defined in Section 2.7.12 would also provide a method of transforming x to obtain the desired covariance matrix.
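A numeric sketch of this construction (using numpy's Cholesky routine; the seed and sample size are arbitrary):

import numpy as np

target = np.array([[4.0, 3.0],
                   [3.0, 9.0]])
A = np.linalg.cholesky(target)            # rows (2, 0) and (1.5, 2.598...)
b = np.array([1.0, 2.0])
rng = np.random.default_rng(0)
x = rng.standard_normal((100_000, 2))
y = b + x @ A.T                           # each row is one draw of (y1, y2)
print(y.mean(axis=0), np.cov(y, rowvar=False))   # close to b and to target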

18. The density of the standard normal distribution, denoted φ(x), is given in (3-28). The function based on the ith derivative of the density given by Hi = [(-1)^i d^iφ(x)/dx^i]/φ(x), i = 0, 1, 2, ..., is called a Hermite polynomial. By definition, H0 = 1.

(a) Find the next three Hermite polynomials.

(b) A useful device in this context is the differential equation d^rφ(x)/dx^r + x d^(r-1)φ(x)/dx^(r-1) + (r - 1) d^(r-2)φ(x)/dx^(r-2) = 0. Use this result and the results of part (a) to find H4 and H5.

The crucial result to be used in the derivations is dφ(x)/dx = -xφ(x). Therefore,

d²φ(x)/dx² = (x² - 1)φ(x)  and  d³φ(x)/dx³ = (3x - x³)φ(x).

The polynomials are H1 = x, H2 = x² - 1, and H3 = x³ - 3x. For part (b), the differential equation gives

d^rφ(x)/dx^r = -x d^(r-1)φ(x)/dx^(r-1) - (r - 1) d^(r-2)φ(x)/dx^(r-2).

Therefore,

d⁴φ(x)/dx⁴ = -x(3x - x³)φ(x) - 3(x² - 1)φ(x) = (x⁴ - 6x² + 3)φ(x)
d⁵φ(x)/dx⁵ = -x(x⁴ - 6x² + 3)φ(x) - 4(3x - x³)φ(x) = (-x⁵ + 10x³ - 15x)φ(x).

Thus, H4 = x⁴ - 6x² + 3 and H5 = x⁵ - 10x³ + 15x.

19. Continuation: orthogonal polynomials. The Hermite polynomials are orthogonal if x has a standard normal distribution. That is, E[HiHj] = 0 if i ≠ j. Prove this for the H1, H2, and H3 which you obtained above.

E[H1(x)H2(x)] = E[x(x² - 1)] = E[x³ - x] = 0 since the normal distribution is symmetric. Then,

E[H1(x)H3(x)] = E[x(x³ - 3x)] = E[x⁴ - 3x²] = 0, because the fourth moment of the standard normal distribution is 3 times the variance. Finally, E[H2(x)H3(x)] = E[(x² - 1)(x³ - 3x)] = E[x⁵ - 4x³ + 3x] = 0 because all odd order moments of the normal distribution are zero. The general result for extending the preceding is that in a product of Hermite polynomials, if the sum of the subscripts is odd, the product will be a sum of odd powers of x, and if even, a sum of even powers. This provides a method of determining the higher moments of the normal distribution if they are needed. (For example, E[H1H3] = 0 implies that E[x⁴] = 3E[x²].)
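The polynomials and the orthogonality conditions can be verified symbolically (a sketch assuming sympy):

import sympy as sp

x = sp.symbols('x')
phi = sp.exp(-x**2 / 2) / sp.sqrt(2 * sp.pi)
H = [sp.simplify((-1)**i * sp.diff(phi, x, i) / phi) for i in range(6)]
print(H)   # 1, x, x^2 - 1, x^3 - 3x, x^4 - 6x^2 + 3, x^5 - 10x^3 + 15x

# E[H1 H3] under the standard normal density is zero
print(sp.integrate(H[1] * H[3] * phi, (x, -sp.oo, sp.oo)))   # 0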

20. If x and y have means μx and μy, variances σx² and σy², and covariance σxy, what is the approximation of the covariance matrix of the two random variables f1 = x/y and f2 = xy?

The linear approximations are f1 ≈ μx/μy + (1/μy)(x - μx) - (μx/μy²)(y - μy) and f2 ≈ μxμy + μy(x - μx) + μx(y - μy). Collecting the derivatives in the matrix J with rows (1/μy, -μx/μy²) and (μy, μx), the approximate covariance matrix of (f1, f2) is JΣJ′, where Σ is the covariance matrix of (x, y). The elements are

Var[f1] ≈ σx²/μy² - 2μxσxy/μy³ + μx²σy²/μy⁴,
Var[f2] ≈ μy²σx² + 2μxμyσxy + μx²σy²,
Cov[f1, f2] ≈ σx² - μx²σy²/μy².

21. Factorial Moments. For finding the moments of a distribution such as the Poisson, a useful device is the factorial moment. (The Poisson distribution is given in Example 3.1.) The density is f(x) = e^(-λ)λ^x / x!, x = 0, 1, 2, ...

To find the mean, we can use

E[x] = Σ_{x=0}^∞ x f(x) = Σ_{x=1}^∞ x e^(-λ)λ^x / x! = λ Σ_{x=1}^∞ e^(-λ)λ^(x-1) / (x - 1)! = λ,

since the probabilities sum to 1. To find the variance, we will extend this method by finding E[x(x - 1)], and likewise for other moments. Use this method to find the variance and third central moment of the Poisson distribution. (Note that this device is used to transform the factorial in the denominator in the probability.)

Using the same technique,

Jx=0"

Since E[x] = X, it follows that Var[x] = (X2 + X) - X2 = X. Following the same pattern, the preceding produces

Then, E[x - E[x]]3 = E[x3] - 3XE[x2] + 3X2E[x] - X3

22. If x has a normal distribution with mean μ and standard deviation σ, what is the probability distribution of y = e^x?

If y = e^x, then x = ln y and the Jacobian is dx/dy = 1/y. Making the substitution,

f(y) = [1/(σy√(2π))] exp[-(ln y - μ)²/(2σ²)], y > 0.

This is the density of the lognormal distribution.

23. If y has a lognormal distribution, what is the probability distribution of y²?

Let z = y². Then, y = √z and dy/dz = 1/(2√z). Inserting these in the density above, we find

f(z) = [1/(σ√z√(2π))] exp[-(ln √z - μ)²/(2σ²)] × 1/(2√z) = [1/(2σz√(2π))] exp[-(ln z - 2μ)²/(2(2σ)²)], z > 0.

Thus, z has a lognormal distribution with parameters 2μ and 2σ. The general result is that if y has a lognormal distribution with parameters μ and σ, y^r has a lognormal distribution with parameters rμ and rσ.

24. Suppose y, x1, and x2 have a joint normal distribution with mean vector μ′ = [1, 2, 4]

and covariance matrix Σ =

(a) Compute the intercept and slope in the function E[y|x1], Var[y|x1], and the coefficient of determination in this regression. (Hint: See Section 3.10.1.)

(b) Compute the intercept and slopes in the conditional mean function E[y|x1, x2]. What is E[y|x1 = 2.5, x2 = 3.3]? What is Var[y|x1 = 2.5, x2 = 3.3]?

First, for normally distributed variables, we have from (3-102) that the conditional mean is

E[y|x] = μy + Cov[y,x]{Var[x]}⁻¹(x - μx),

the conditional variance is Var[y|x] = Var[y] - Cov[y,x]{Var[x]}⁻¹Cov[x,y], and the coefficient of determination in the regression is

Cov[y,x]{Var[x]}⁻¹Cov[x,y] / Var[y].

We may just insert the figures above to obtain the results. For part (b), inserting the relevant submatrices gives the slope vector {Var[(x1, x2)]}⁻¹Cov[(x1, x2), y] = (.6154, -.03846)′, so that

E[y|x1, x2] = -.4615 + .6154x1 - .03846x2,

Var[y|x1, x2] = 2 - (.6154, -.03846)(3, 1)′ = .1923, and E[y|x1 = 2.5, x2 = 3.3] = 1.3017. The conditional variance is not a function of x1 or x2.
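A sketch of the calculation in matrix form follows. The covariance matrix display is garbled in the original, so the values below are an assumption chosen to reproduce the slope vector and conditional variance shown above:

import numpy as np

mu_y, mu_x = 1.0, np.array([2.0, 4.0])     # means of y and (x1, x2) from the problem
var_y = 2.0                                 # Var[y] (assumed)
cov_yx = np.array([3.0, 1.0])               # Cov[y, (x1, x2)] (assumed)
var_x = np.array([[5.0, 2.0],               # Var[(x1, x2)] (assumed)
                  [2.0, 6.0]])

beta = np.linalg.solve(var_x, cov_yx)       # slopes, about [.6154, -.0385]
alpha = mu_y - beta @ mu_x                  # intercept implied by these assumptions
cond_var = var_y - cov_yx @ beta            # about .1923
r_squared = (cov_yx @ beta) / var_y         # coefficient of determination
print(alpha, beta, cond_var, r_squared)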

25. What is the density of y = 1/x if x has a chi-squared distribution?

The density of a chi-squared variable is that of a gamma variable with parameters 1/2 and n/2, where n is the degrees of freedom of the chi-squared variable. Thus,

f(x) = [(1/2)^(n/2)/Γ(n/2)] e^(-x/2) x^(n/2 - 1), x > 0.

If y = 1/x then x = 1/y and |dx/dy| = 1/y². Therefore, after multiplying by the Jacobian,

f(y) = [(1/2)^(n/2)/Γ(n/2)] e^(-1/(2y)) (1/y)^(n/2 + 1), y > 0.

26. What is the density and what are the mean and variance of y = 1/x if x has the gamma distribution described in Section 3.4.5?

The density of x is f(x) = [λ^P/Γ(P)] e^(-λx) x^(P-1), x > 0. If y = 1/x, then x = 1/y, and the Jacobian is |dx/dy| = 1/y². Using the change of variable formula, as usual, the density of y is

f(y) = [λ^P/Γ(P)] e^(-λ/y) (1/y)^(P-1) (1/y²) = [λ^P/Γ(P)] e^(-λ/y) (1/y)^(P+1), y > 0.

The mean is E[y] = ∫₀^∞ y [λ^P/Γ(P)] e^(-λ/y) (1/y)^(P+1) dy = ∫₀^∞ [λ^P/Γ(P)] e^(-λ/y) (1/y)^P dy. In order to use the results for the gamma integral, we will have to make a change of variable. Let z = 1/y, so |dy/dz| = 1/z². Making the change of variable, we find

E[y] = ∫₀^∞ [λ^P/Γ(P)] e^(-λz) z^P (1/z²) dz = ∫₀^∞ [λ^P/Γ(P)] e^(-λz) z^(P-2) dz.

Now, we can use the gamma integral directly to find E[y] = [λ^P/Γ(P)] × Γ(P-1)/λ^(P-1) = λ/(P-1). Note that for this to exist, P must be greater than one. We can use the same approach to find the variance. We start by finding E[y²]. First,

E[y²] = ∫₀^∞ y² [λ^P/Γ(P)] e^(-λ/y) (1/y)^(P+1) dy = ∫₀^∞ [λ^P/Γ(P)] e^(-λ/y) (1/y)^(P-1) dy.

Once again, this is a gamma integral, which we can evaluate by first making the change of variable to z = 1/y. The integral is

E[y²] = ∫₀^∞ [λ^P/Γ(P)] e^(-λz) z^(P-1) (1/z²) dz = ∫₀^∞ [λ^P/Γ(P)] e^(-λz) z^(P-3) dz.

This is [λ^P/Γ(P)] × Γ(P-2)/λ^(P-2) = λ²/[(P-1)(P-2)], which requires P > 2. The variance is then Var[y] = λ²/[(P-1)(P-2)] - [λ/(P-1)]² = λ²/[(P-1)²(P-2)].
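These results can be checked against scipy's inverse gamma distribution (a sketch; scipy's invgamma uses shape a = P and scale = λ):

from scipy.stats import invgamma

P, lam = 5.0, 2.0                              # arbitrary values with P > 2
m, v = invgamma(a=P, scale=lam).stats(moments='mv')
print(m, lam / (P - 1))                        # both 0.5
print(v, lam**2 / ((P - 1)**2 * (P - 2)))      # both about 0.0833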

27. Suppose xi and x2 have the bivariate normal distribution described in Section 3.8. Consider an extension of Example 3.4, where the bivariate normal distribution is obtained by transforming two independent standard normal variables. Obtain the distribution of z = exp(y1)exp(y2) where y1 and y2 have a bivariate normal distribution and are correlated. Solve this problem in two ways. First, use the transformation approach described in Section 3.6.4. Second, note that z = exp(y1+y2) = exp(w), so you can first find the distribution of w, then use the results of Section 3.5 (and, in fact, Section 3.4.4 as well).

The (extremely) hard way to proceed is to define the joint transformations z1 = exp(y1)exp(y2) and z2 = exp(y2). The Jacobian is 1/(z1z2). The joint distribution is the Jacobian times the bivariate normal distribution, evaluated at y1 = log z1 - log z2 and y2 = log z2, from which it is now necessary to integrate out z2. Obviously, this is going to be tedious, but the hint gives a much simpler way to proceed. The variable w = y1 + y2 has a normal distribution with mean μ = μ1 + μ2 and variance σ² = σ1² + σ2² + 2σ12. We already have a simple result for exp(w) in Exercise 22; this has a lognormal distribution.

28. Probability Generating Function. For a discrete random variable x, the function

P(t) = E[t^x] = Σ_x t^x Prob[X = x]

is called the probability generating function because the coefficient on t^i in the expansion is Prob[X = i]. Suppose that x is the number of the repetitions of an experiment with probability π of success upon which the first success occurs. The density of x is the geometric distribution,

Prob[X = x] = (1 - π)^(x-1) π, x = 1, 2, .... What is the probability generating function?

The probability generating function is

P(t) = Σ_{x=1}^∞ t^x (1 - π)^(x-1) π = πt Σ_{x=1}^∞ [t(1 - π)]^(x-1) = πt / [1 - t(1 - π)],

provided |t(1 - π)| < 1.

29. Moment Generating Function. For the random variable X, with probability density function f(x), if the function M(t) = E[e^(tx)] exists, it is the moment generating function. Assuming the function exists, it can be shown that d^r M(t)/dt^r evaluated at t = 0 equals E[x^r]. Find the moment generating functions for

(a) The Exponential distribution of Exercise 9.

(b) The Poisson distribution of Exercise 21.

(a) For the exponential distribution,

M(t) = E[e^(tx)] = ∫₀^∞ θ e^(tx) e^(-θx) dx = θ ∫₀^∞ e^(-(θ - t)x) dx.

This is θ times a gamma integral (see Section 5.4.2b) with P = 1, c = 1, and a = (θ - t). Therefore, M(t) = θ/(θ - t).

For the Poisson distribution,

M(t) = Σ_{x=0}^∞ e^(tx) e^(-λ) λ^x / x! = e^(-λ) e^(λe^t) Σ_{x=0}^∞ e^(-λe^t) (λe^t)^x / x!.

The sum is the sum of probabilities for a Poisson distribution with parameter λe^t, which equals 1, so the term before the summation sign is the moment generating function, M(t) = exp[λ(e^t - 1)].

30. Moment generating function for a sum of variables. When it exists, the moment generating function has a one to one correspondence with the distribution. Thus, for example, if we begin with some random variable and find that a transformation of it has a particular MGF, we may infer that the function of the random variable has the distribution associated with that MGF. A useful application is the following: If x and y are independent, the MGF of x + y is Mx(t)My(t).

(a) Use this result to prove that the sum of Poisson random variables has a Poisson distribution.

(b) Use the result to prove that the sum of chi-squared variables has a chi-squared distribution. [Note, you must first find the MGF for a chi-squared variate. The density is given in (3-39).]

(c) The MGF for the standard normal distribution is Mz(t) = exp(t²/2). Find the MGF for the N[μ, σ²] distribution, then find the distribution of a sum of normally distributed variables.

(a) From the previous problem, Mx(t) = exp[λ(e^t - 1)]. Suppose y is distributed as Poisson with parameter μ. Then, My(t) = exp[μ(e^t - 1)]. The product of these two moment generating functions is Mx(t)My(t) = exp[λ(e^t - 1)]exp[μ(e^t - 1)] = exp[(λ + μ)(e^t - 1)], which is the moment generating function of the Poisson distribution with parameter λ + μ. Therefore, on the basis of the theorem given in the problem, it follows that x + y has a Poisson distribution with parameter λ + μ.

(b) The density of the chi-squared distribution with n degrees of freedom is [from (3-39)]

f(x) = [(1/2)^(n/2)/Γ(n/2)] e^(-x/2) x^(n/2 - 1), x > 0.

Let the constant term be k for the present. The moment generating function is

M(t) = k ∫₀^∞ e^(tx) e^(-x/2) x^(n/2 - 1) dx = k ∫₀^∞ e^(-(1/2 - t)x) x^(n/2 - 1) dx.

This is a gamma integral which reduces to M(t) = k (1/2 - t)^(-n/2) Γ(n/2). Now, reinserting the constant k and simplifying produces the moment generating function M(t) = (1 - 2t)^(-n/2). Suppose that x_i is distributed as chi-squared with n_i degrees of freedom. The moment generating function of Σ_i x_i is

Π_i M_i(t) = (1 - 2t)^(-Σ_i n_i / 2),

which is the MGF of a chi-squared variable with n = Σ_i n_i degrees of freedom.

(c) We let y = σz + μ. Then My(t) = E[exp(ty)] = E[exp(t(σz + μ))] = e^(tμ) E[exp((σt)z)] = e^(tμ) exp(σ²t²/2) = exp(μt + σ²t²/2). Using the same approach as in part (b), it follows that the moment generating function for a sum of independent normal random variables with means μ_i and standard deviations σ_i is

M(t) = exp[(Σ_i μ_i)t + (Σ_i σ_i²)t²/2],

which is the MGF of a normal distribution with mean Σ_i μ_i and variance Σ_i σ_i². The sum is therefore normally distributed.
