## A Note on Monte Carlo Experiments

In this chapter we showed that under the assumptions of the CLRM the least-squares estimators have certain desirable statistical properties summarized in the BLUE property. In the appendix to this chapter we prove this property more formally. But in practice how does one know that the BLUE property holds? For example, how does one find out whether the OLS estimators are unbiased? The answer is provided by so-called Monte Carlo experiments, which are essentially computer simulation, or sampling, experiments.

²⁷Every field of study has its jargon. The expression "regress Y on X" simply means treat Y as the regressand and X as the regressor.

To introduce the basic ideas, consider our two-variable PRF:

$$Y_i = \beta_1 + \beta_2 X_i + u_i \qquad (3.8.1)$$

A Monte Carlo experiment proceeds as follows:

1. Suppose the true values of the parameters are as follows: $\beta_1 = 20$ and $\beta_2 = 0.6$.

2. You choose the sample size, say $n = 25$.

3. You fix the values of $X$ for each observation. In all you will have 25 $X$ values.

4. Suppose you go to a random number table, choose 25 values, and call them $u_i$ (these days most statistical packages have built-in random number generators).²⁸

5. Since you know $\beta_1$, $\beta_2$, $X_i$, and $u_i$, using (3.8.1) you obtain 25 $Y_i$ values.

6. Now using the 25 $Y_i$ values thus generated, you regress them on the 25 $X$ values chosen in step 3, obtaining $\hat{\beta}_1$ and $\hat{\beta}_2$, the least-squares estimates.

7. Suppose you repeat this experiment 99 times, each time using the same $\beta_1$, $\beta_2$, and $X$ values. Of course, the $u_i$ values will vary from experiment to experiment. Therefore, in all you have 100 experiments, thus generating 100 values each of $\hat{\beta}_1$ and $\hat{\beta}_2$. (In practice, many such experiments are conducted, sometimes 1000 to 2000.)

8. You take the averages of these 100 estimates and call them $\bar{\hat{\beta}}_1$ and $\bar{\hat{\beta}}_2$.

9. If these average values are about the same as the true values of $\beta_1$ and $\beta_2$ assumed in step 1, this Monte Carlo experiment "establishes" that the least-squares estimators are indeed unbiased. Recall that under the CLRM $E(\hat{\beta}_1) = \beta_1$ and $E(\hat{\beta}_2) = \beta_2$.
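The nine steps above can be sketched as a short simulation. In this sketch, $\beta_1 = 20$, $n = 25$, and the 100 replications follow the text; the slope value $\beta_2 = 0.6$, the particular fixed $X$ values, and the standard deviation of the disturbances are illustrative assumptions, not values prescribed by the steps.

```python
import random
from statistics import mean

random.seed(42)  # make the sampling experiment reproducible

beta1, beta2 = 20.0, 0.6                  # step 1: assumed true parameter values
n = 25                                    # step 2: sample size
X = [80.0 + 10.0 * i for i in range(n)]   # step 3: fixed X values (illustrative)

x_bar = mean(X)
sxx = sum((x - x_bar) ** 2 for x in X)

b1_hats, b2_hats = [], []
for _ in range(100):                      # step 7: 100 repetitions in all
    # step 4: draw 25 fresh random disturbances (sd of 2 is an assumption)
    u = [random.gauss(0.0, 2.0) for _ in range(n)]
    # step 5: generate the 25 Y values via the PRF (3.8.1)
    Y = [beta1 + beta2 * x + e for x, e in zip(X, u)]
    y_bar = mean(Y)
    # step 6: OLS slope and intercept from the usual least-squares formulas
    b2 = sum((x - x_bar) * (y - y_bar) for x, y in zip(X, Y)) / sxx
    b1 = y_bar - b2 * x_bar
    b1_hats.append(b1)
    b2_hats.append(b2)

# steps 8-9: average the 100 estimates and compare with the true values
print(round(mean(b1_hats), 3), round(mean(b2_hats), 4))
```

Running this prints averages very close to 20 and 0.6, which is the sense in which the experiment "establishes" unbiasedness; increasing the number of replications pulls the averages closer still.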

These steps characterize the general nature of Monte Carlo experiments. Such experiments are often used to study the statistical properties of various methods of estimating population parameters. They are particularly useful for studying the behavior of estimators in small, or finite, samples. These experiments are also an excellent means of driving home the concept of repeated sampling that underlies most classical statistical inference, as we shall see in Chapter 5. We shall provide several examples of Monte Carlo experiments by way of exercises for classroom assignment. (See exercise 3.27.)