In summary, Gauss-Hermite quadrature is easy to apply when one assumes a small number of uncorrelated, normally distributed random parameters, a common assumption in many applications.
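As a minimal sketch of the technique (not the paper's code), the expectation of a function of a normally distributed parameter can be computed with Gauss-Hermite nodes and weights after a change of variables; the function name and test values here are illustrative assumptions:

```python
import numpy as np
from numpy.polynomial.hermite import hermgauss

def normal_expectation(g, mu, sigma, n_points=9):
    """Approximate E[g(X)] for X ~ N(mu, sigma^2) by Gauss-Hermite quadrature.

    The substitution x = mu + sqrt(2)*sigma*v turns the normal expectation into
    the Gauss-Hermite form (1/sqrt(pi)) * sum_i w_i * g(mu + sqrt(2)*sigma*v_i).
    """
    nodes, weights = hermgauss(n_points)
    return float(np.sum(weights * g(mu + np.sqrt(2.0) * sigma * nodes)) / np.sqrt(np.pi))

# E[X^2] for X ~ N(1, 2^2) is mu^2 + sigma^2 = 5; with n points the rule is
# exact for polynomials up to degree 2n - 1, so this is exact up to rounding
print(normal_expectation(lambda x: x**2, mu=1.0, sigma=2.0))
```

With n evaluation points the rule is exact for polynomial integrands of degree up to 2n − 1, which is why a single-digit number of points often suffices for smooth integrands.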
5. An application: Green Bay fishing under different conditions
The sample consists of 647 randomly-sampled anglers who purchased licenses in eight counties near the bay of Green Bay, a large bay on Lake Michigan, and who fished the bay at least once in 1998. Each angler was presented eight pairs of Green Bay alternatives. Anglers made their selections on the basis of the average time to catch a fish for four species (PERCH, TROUT, WALLEYE, and BASS), a fish consumption advisory index (FCA), and a boat launch fee (FEE). The FCA index takes one of nine discrete levels, the first being no advisory, and contains combined information on separate advisories for the four species.
The four parameters on the catch rates and the eight parameters on the FCA dummy variables are all assumed to be random and normally distributed.
Specifically, for each angler i and each pair, it is assumed that the conditional indirect utility function for alternative k is

v_k = (β_c + u_ci) × [β_p PERCH_k + β_t TROUT_k + β_w WALLEYE_k + β_b BASS_k]
      + (β_FCA + u_FCAi) × [β_FCA2 FCA2_k + … + β_FCA9 FCA9_k] − β_y FEE_k,

where u_ci ~ N(0, σ_c²), u_FCAi ~ N(0, σ_FCA²), and FCA2_k, …, FCA9_k are dummy variables for the FCA regimes.
The parameter β_y is the marginal utility of money. The parameter β_c is the mean catch parameter; β_p, β_t, β_w and β_b allow the means and variances of the catch parameters to vary by species; β_FCA is the mean FCA parameter; and β_FCA2, …, β_FCA9 allow the means and variances of the FCA parameters to differ by FCA regime. With this specification, the ratio of the mean parameter to the standard deviation is the same for each of the four catch rates, and for each of the eight FCA levels, so only two standard-deviation parameters need to be estimated, σ_c and σ_FCA. Assuming that the standard deviation varies in proportion to the mean is a common way of dealing with heteroskedasticity and a reasonable way to limit the number of random parameters that need to be estimated. It also causes an individual's four catch parameters to be correlated with one another, and his eight FCA parameters to be correlated with one another, something one would expect, and it accomplishes this without requiring one to assume separate, correlated species-specific error components such as u_cp and u_ct. Of course, this specification for the random structure is not necessarily appropriate for other applications and is in no way required for quadrature to work. β_p is set to one to identify the catch parameters, and β_FCA2 is set to one to identify the FCA parameters.11
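The proportional-standard-deviation structure can be checked numerically. In this sketch the parameter values are hypothetical (they are not the paper's estimates): with a single angler-level draw u_ci scaling all four species loadings, every implied catch coefficient has the same mean-to-standard-deviation ratio β_c/σ_c, and the four coefficients are perfectly correlated.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical values, for illustration only (not the paper's estimates)
beta_c, sigma_c = -1.0, 0.4                 # mean and std dev of the common catch factor
beta_s = np.array([1.0, 0.6, 1.8, 0.3])     # loadings: PERCH (normalized to 1), TROUT, WALLEYE, BASS

u = rng.normal(0.0, sigma_c, size=100_000)        # angler-level draws u_ci
coefs = (beta_c + u)[:, None] * beta_s[None, :]   # implied catch coefficients, one column per species

ratio = coefs.mean(axis=0) / coefs.std(axis=0)    # ~ beta_c / sigma_c = -2.5 for every species
corr = np.corrcoef(coefs.T)                       # any two species' coefficients: correlation ~ 1
print(ratio)
print(corr[0, 1])
```

A single common random factor thus delivers both the constant mean/standard-deviation ratio and the within-angler correlation, without estimating separate correlated error components.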
This model was estimated using both Hermite quadrature and simulation with pseudo-random draws. Parameter estimates are reported in Table 17.1. Results from various model runs show that 500 draws in simulation and 9 evaluation points in quadrature are sufficient for the parameter estimates to be stable. That is, for the quadrature estimates, when more than 9 evaluation points are used, the individual parameter estimates never differ by more than 2% from the estimates obtained with 9 evaluation points. When at least 500 draws are used in simulation, parameter estimates vary by at most 2% across runs. An important finding is that simulation took almost three times longer than quadrature to reach this level of stability.
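The comparison can be illustrated on a much smaller problem. This sketch (an assumed one-attribute probit probability, not the paper's model) evaluates the same random-coefficient integral both ways: a 9-point Gauss-Hermite rule and 500 pseudo-random draws.

```python
import numpy as np
from numpy.polynomial.hermite import hermgauss
from math import erf, sqrt, pi

def Phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

# Hypothetical choice probability P = E[Phi((b + u) * x)] with random
# coefficient u ~ N(0, sigma^2); values are illustrative assumptions
b, sigma, x = 0.5, 1.0, 1.2

# Gauss-Hermite with 9 evaluation points (deterministic)
nodes, weights = hermgauss(9)
p_quad = sum(w * Phi((b + sqrt(2.0) * sigma * v) * x)
             for v, w in zip(nodes, weights)) / sqrt(pi)

# Simulation with 500 pseudo-random draws (varies with the seed)
u = np.random.default_rng(42).normal(0.0, sigma, size=500)
p_sim = float(np.mean([Phi((b + ui) * x) for ui in u]))

print(p_quad, p_sim)
```

In a full likelihood, this integral is evaluated once per angler per iteration, so the 9 quadrature evaluations versus 500 simulation draws is where the speed difference comes from.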
11 Code for both estimation techniques can be found at http://www.colorado.edu/Economics/morey/dataset.html.
Results are also reported for 100 draws using simulation, and for 3 and 6 evaluation points using quadrature, but one should not make too much of these. Comparing the properties of estimates obtained with too few quadrature points to estimates obtained with too few random draws is a questionable endeavor: one would never present either as one's parameter estimates.
In conclusion, this paper provides an example of a choice-question probit model with two random parameters. We demonstrate that, to obtain a high level of accuracy, quadrature is faster than simulation with pseudo-random draws. What should be made of this? When it comes to estimating random-parameter models, there is an alternative to the ubiquitous method of simulation with random draws, and it can be faster and more accurate. It is best suited to applications with a small number of uncorrelated random parameters.
An interesting issue is what happens to the magnitudes of the simulation errors as one increases the number of draws and what happens to the approximation errors as one increases the number of evaluation points. One would hope that simulation noise always decreases as the number of draws increases; that is, it would be comforting to those doing simulation to know that if they re-estimate their model with more draws, there will be less simulation error in the parameter estimates. Unfortunately, this is not always the case as we demonstrate below.
Those using quadrature would hope that re-estimating their model with more evaluation points will always decrease the approximation errors in the parameter estimates. This is the case.
To investigate, we define the bias in parameter β as Bias_β = β̂ − β̄, where β̄ is our stable estimate of β (obtained with either 1,000 draws or 10 evaluation points) and β̂ is the estimate being evaluated. To make this measure of bias comparable across parameters (by accounting for the relative flatness of the likelihood function), we divide each by the standard error, s.e._β, of the stable β̄, giving Bias_β/(s.e._β). Averaging these over all of the parameters provides one possible measure of aggregate bias, denoted Bias. By definition, this measure of aggregate bias is, in our example, zero for simulation with 1,000 draws and for quadrature with 10 evaluation points.
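A minimal sketch of this aggregate measure follows; the numbers are hypothetical, and averaging absolute values of the scaled biases is an assumption made here (the text leaves the exact averaging unstated):

```python
import numpy as np

def aggregate_bias(estimates, stable, stable_se):
    """Average of per-parameter biases, each scaled by the stable estimate's
    standard error. Taking absolute values before averaging is an assumption."""
    b = (np.asarray(estimates) - np.asarray(stable)) / np.asarray(stable_se)
    return float(np.mean(np.abs(b)))

# Hypothetical numbers for illustration, not the paper's estimates
stable    = np.array([0.50, -1.20, 2.00])   # e.g., 1,000 draws or 10 evaluation points
stable_se = np.array([0.10,  0.30, 0.50])   # standard errors of the stable estimates
cheap     = np.array([0.52, -1.26, 1.90])   # e.g., a run with too few draws or points

print(aggregate_bias(cheap, stable, stable_se))
```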
Note that this measure of aggregate bias, Bias, will change every time one reruns the simulation program, even if one takes the same number of draws in each run, because the draws are random. There is no comparable variation in Bias when the parameters are estimated with quadrature: Bias does not vary across runs when the number of evaluation points is held constant, because the parameter estimates are always the same.
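The contrast can be seen on a toy integral (an assumed stand-in, not the paper's likelihood): five simulation runs with 500 draws each give five different answers, while the 9-point quadrature answer is the same on every rerun.

```python
import numpy as np
from numpy.polynomial.hermite import hermgauss

# Target integral: E[exp(U)] for U ~ N(0, 1); the exact value is exp(1/2)
sim = [float(np.exp(np.random.default_rng(seed).normal(size=500)).mean())
       for seed in range(5)]

nodes, weights = hermgauss(9)
quad = float(np.sum(weights * np.exp(np.sqrt(2.0) * nodes)) / np.sqrt(np.pi))

print(sim)   # five 500-draw runs: five different answers
print(quad)  # quadrature returns the same number on every rerun
```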
Our measure of aggregate bias is plotted in Figure 17.1 for the parameter estimates obtained with 100, 300, 500 and 1,000 draws. This plot demonstrates, by example, that Bias does not always decrease as the number of random draws increases (the biggest "error" for these three sets of estimates is with 500 draws). Every time one re-estimates the model with 100, 300 and 500 draws, Figure 17.1 will change; while one would expect it to usually be monotonically decreasing, it does not have to be, as our example demonstrates.12
Figure 17.2 plots Bias for 3, 6, 9 and 10 evaluation points, and this plot is invariant to reruns. Aggregate bias is monotonically decreasing in the number of evaluation points, as it must be when the integral evaluated is of the form ∫_{−∞}^{+∞} e^{−v²} h(v) dv. Note the big drop from 9 to 10 evaluation points.
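This shrinking approximation error can be checked directly on an integral of that form; the choice h(v) = cos(v) here is an illustrative assumption with a known closed-form answer, √π·e^(−1/4).

```python
import numpy as np
from numpy.polynomial.hermite import hermgauss

def gh_integral(h, n_points):
    """Gauss-Hermite approximation of the integral of exp(-v^2) * h(v) over the real line."""
    v, w = hermgauss(n_points)
    return float(np.sum(w * h(v)))

# A smooth illustrative h; the exact value of the integral of exp(-v^2) * cos(v)
# is sqrt(pi) * exp(-1/4)
exact = float(np.sqrt(np.pi) * np.exp(-0.25))
errs = {n: abs(gh_integral(np.cos, n) - exact) for n in (3, 6, 9, 10)}

for n, e in errs.items():
    print(n, e)  # the approximation error shrinks as evaluation points are added
```

For smooth integrands the error falls very quickly in the number of points, which is consistent with the big drops seen in Figure 17.2.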
What should be concluded? Use enough pseudo-random draws or evaluation points to get stable parameter estimates. Determining whether one's parameter estimates are stable is more difficult with simulation than with quadrature. With quadrature, just keep increasing the number of evaluation points until the estimates with e evaluation points and e + 1 evaluation points differ by less than some predetermined percentage. With simulation, increase the number of draws until the parameter estimates differ by less than that predetermined percentage.
12 Consider another measure of bias, |aveβ̂ − β̄|, where aveβ̂ = (1/R)(β̂¹ + … + β̂ᴿ) and β̂ʳ is the estimate of β obtained the rth time the parameters are simulated, holding constant the number of draws (Hess et al., 2004; Sandor and Train, 2004). |aveβ̂ − β̄| = |β̂ − β̄| for quadrature but not for simulation; for simulation there is less randomness in |aveβ̂ − β̄| than in |β̂ − β̄|. Therefore, |aveβ̂ − β̄| is likely to be monotonically decreasing in the number of draws, particularly for large R. However, there is no great practical significance to this likelihood unless the researcher plans to run the simulation program multiple times with each number of random draws.
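The quadrature stopping rule described above can be sketched as follows. Here `estimate` is an assumed stand-in for a full maximum-likelihood run with a given number of evaluation points (it just computes a single Gauss-Hermite integral), and the tolerance and limits are illustrative.

```python
import numpy as np
from numpy.polynomial.hermite import hermgauss

def estimate(n_points):
    """Stand-in for a quadrature-based estimator: a Gauss-Hermite approximation
    of E[exp(U)], U ~ N(0, 1). In practice this would be a full maximum-
    likelihood run using n_points evaluation points."""
    v, w = hermgauss(n_points)
    return float(np.sum(w * np.exp(np.sqrt(2.0) * v)) / np.sqrt(np.pi))

def stable_points(tol_pct=2.0, start=3, max_points=30):
    """Increase the number of evaluation points until the estimates with e and
    e + 1 points differ by less than tol_pct percent."""
    e = start
    while e < max_points:
        a, b = estimate(e), estimate(e + 1)
        if abs(b - a) / abs(a) * 100.0 < tol_pct:
            return e
        e += 1
    return e

print(stable_points())  # number of points at which the stand-in estimate stabilizes
```

With simulation the analogous check requires rerunning with fresh draws, so stability must be judged across random reruns rather than against a deterministic next step.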