
From (1) it is easy to verify that

$$1 - P_i = \frac{1}{1 + e^{(\beta_1 + \beta_2 X_i)}}$$

as well as

$$\ln\!\left(\frac{P_i}{1 - P_i}\right) = \beta_1 + \beta_2 X_i$$

Using these two results, the log-likelihood function can be written as
$$\ln f(Y_1, Y_2, \ldots, Y_n) = \sum_{i=1}^{n} Y_i(\beta_1 + \beta_2 X_i) - \sum_{i=1}^{n} \ln\left[1 + e^{(\beta_1 + \beta_2 X_i)}\right] \tag{8}$$

As you can see from (8), the log-likelihood function is a function of the parameters $\beta_1$ and $\beta_2$, since the $X_i$ are known.

In ML our objective is to maximize the LF (or LLF), that is, to obtain the values of the unknown parameters in such a manner that the probability of observing the given Y's is as high (maximum) as possible. For this purpose, we differentiate (8) partially with respect to each unknown, set the resulting expressions to zero, and solve them. One can then apply the second-order condition of maximization to verify that the values of the parameters so obtained do in fact maximize the LF.
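To make that step concrete, here is a sketch of the two first-order conditions obtained by differentiating (8) partially with respect to $\beta_1$ and $\beta_2$, writing $P_i$ for the logistic probability from (1):

$$\frac{\partial \ln f}{\partial \beta_1} = \sum_{i=1}^{n} Y_i - \sum_{i=1}^{n} \frac{e^{(\beta_1 + \beta_2 X_i)}}{1 + e^{(\beta_1 + \beta_2 X_i)}} = \sum_{i=1}^{n} (Y_i - P_i) = 0$$

$$\frac{\partial \ln f}{\partial \beta_2} = \sum_{i=1}^{n} X_i Y_i - \sum_{i=1}^{n} X_i \, \frac{e^{(\beta_1 + \beta_2 X_i)}}{1 + e^{(\beta_1 + \beta_2 X_i)}} = \sum_{i=1}^{n} X_i (Y_i - P_i) = 0$$

Because each $P_i$ depends nonlinearly on $\beta_1$ and $\beta_2$, these two equations cannot be solved in closed form, which is the point taken up next.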


So, you have to differentiate (8) with respect to $\beta_1$ and $\beta_2$ and proceed as indicated. As you will quickly realize, the resulting expressions become highly nonlinear in the parameters and no explicit solutions can be obtained. That is why we will have to use one of the methods of nonlinear estimation discussed in the previous chapter to obtain numerical solutions. Once the numerical values of $\beta_1$ and $\beta_2$ are obtained, we can easily estimate (1).
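As an illustration of such a numerical solution, the following is a minimal sketch in Python that maximizes (8) directly. The data, the variable names, and the use of SciPy's general-purpose BFGS minimizer (rather than the specific nonlinear-estimation methods of the previous chapter) are assumptions made purely for illustration.

```python
# Minimal sketch: maximize the logit log-likelihood (8) numerically.
# The data below are hypothetical; substitute your own observations.
import numpy as np
from scipy.optimize import minimize

X = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0])  # known regressor values
Y = np.array([0,   0,   0,   1,   0,   1,   1,   1])      # observed binary outcomes

def neg_log_likelihood(beta):
    """Negative of (8): -[ sum Y_i(b1 + b2 X_i) - sum ln(1 + e^(b1 + b2 X_i)) ]."""
    b1, b2 = beta
    z = b1 + b2 * X
    # np.logaddexp(0, z) computes ln(1 + e^z) in a numerically stable way.
    return -(np.sum(Y * z) - np.sum(np.logaddexp(0.0, z)))

# Minimizing the negative log-likelihood is equivalent to maximizing (8).
result = minimize(neg_log_likelihood, x0=np.zeros(2), method="BFGS")
b1_hat, b2_hat = result.x
print("beta_1 estimate:", b1_hat)
print("beta_2 estimate:", b2_hat)

# With the estimates in hand, recover the fitted probabilities from (1):
# P_i = 1 / (1 + exp(-(b1 + b2 X_i)))
P_hat = 1.0 / (1.0 + np.exp(-(b1_hat + b2_hat * X)))
print("fitted probabilities:", P_hat)
```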

The ML procedure for the probit model is similar to that for the logit model, except that in (1) we use the normal CDF rather than the logistic CDF. The resulting expression becomes rather complicated, but the general idea is the same. So, we will not pursue it any further.
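For reference, here is a sketch of what the probit log-likelihood looks like once the standard normal CDF $\Phi(\cdot)$ replaces the logistic CDF in (1), so that $P_i = \Phi(\beta_1 + \beta_2 X_i)$:

$$\ln f(Y_1, Y_2, \ldots, Y_n) = \sum_{i=1}^{n} \Big[\, Y_i \ln \Phi(\beta_1 + \beta_2 X_i) + (1 - Y_i) \ln\big(1 - \Phi(\beta_1 + \beta_2 X_i)\big) \Big]$$

Since $\Phi$ has no closed-form expression, the simplification that produced (8) is not available here, but the same numerical maximization procedure applies.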
