Second Order Conditions for Constrained Optimization

We saw in the previous section that the first-order conditions for a maximum and a minimum of a constrained problem are identical, as in the unconstrained case, and so it again becomes necessary to look at second-order conditions. One approach to these is global: assumptions are built into the economic model to ensure that the objective function and the constraint function(s) have the right general shape. As we will see in section 13.3, it is sufficient for a maximum (minimum) that the objective function be quasiconcave (quasiconvex) and that the constraint function(s) define a convex set. However, for some purposes, particularly comparative statics (discussed in the next chapter), it is useful to have the second-order conditions in local form, in terms of small deviations around the optimal point.

In section 12.2 we saw that the local second-order conditions in the unconstrained case could be expressed in terms of the signs of leading principal minors of the Hessian determinant of the function being optimized. In the constrained case there is a similar, though more complex, procedure. We will not derive the conditions rigorously here but simply state and explain them.

Take first the simplest possible case of a two-variable, one-constraint problem

max f(x1, x2)  subject to  g(x1, x2) = 0

The Lagrange function is

ℒ(x1, x2, λ) = f(x1, x2) + λg(x1, x2)

Suppose that the point (x1*, x2*, λ*) yields a stationary value of the Lagrange function, so we have that

ℒ1 = f1 + λ*g1 = 0
ℒ2 = f2 + λ*g2 = 0
ℒλ = g(x1*, x2*) = 0

We now derive the Hessian matrix of the Lagrange function, bordered by the first derivatives of the constraint:

H̄* = | f11 + λ*g11   f12 + λ*g12   g1 |
     | f21 + λ*g21   f22 + λ*g22   g2 |
     | g1            g2            0  |

It is to be understood that the partial derivatives in this matrix are all evaluated at the point (x1*, x2*, λ*), which is why the matrix has a superscript. In the case where the constraint function is linear (i.e., the g_ij are zero), as in the standard consumer problem, this matrix is simply the Hessian of the objective function bordered by the vector [g1 g2 0].
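The construction of this matrix can be checked symbolically. The sketch below (using Python's sympy; the helper name and the illustrative choices of f and g are mine, not from the text) assembles the bordered Hessian row by row. With a linear constraint the g_ij terms vanish, so the result is just the Hessian of f bordered by the constraint gradient, as noted above.

```python
import sympy as sp

x1, x2, lam = sp.symbols("x1 x2 lam")

def bordered_hessian(f, g):
    # Second derivatives of L = f + lam*g, bordered by the constraint
    # gradient [g1, g2, 0], matching the layout of H-bar* in the text.
    L = f + lam * g
    return sp.Matrix([
        [sp.diff(L, x1, x1), sp.diff(L, x1, x2), sp.diff(g, x1)],
        [sp.diff(L, x2, x1), sp.diff(L, x2, x2), sp.diff(g, x2)],
        [sp.diff(g, x1),     sp.diff(g, x2),     0],
    ])

# Illustrative choice (not from the text): f = x1*x2 with the linear
# constraint g = 1 - x1 - x2 = 0, so all g_ij vanish and the matrix is
# the Hessian of f bordered by [-1, -1, 0].
H = bordered_hessian(x1 * x2, 1 - x1 - x2)
print(H)
```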

Theorem 13.3 If (x1*, x2*, λ*) gives a stationary value of the Lagrange function ℒ(x1, x2, λ) = f(x1, x2) + λg(x1, x2), then

(i) it yields a maximum if the determinant of the bordered Hessian |H̄*| > 0, and

(ii) it yields a minimum if the determinant of the bordered Hessian |H̄*| < 0.
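To see the theorem in action, here is a sketch applying it to a hypothetical problem, max f = x1·x2 subject to x1 + x2 = 1 (chosen for illustration; this is not example 13.1 from the text). The first-order conditions are solved symbolically, the bordered Hessian is evaluated at the stationary point, and the sign of its determinant identifies the point as a maximum.

```python
import sympy as sp

x1, x2, lam = sp.symbols("x1 x2 lam")

# Hypothetical problem (not example 13.1): max x1*x2 s.t. x1 + x2 = 1.
f = x1 * x2
g = 1 - x1 - x2          # constraint written in the form g(x1, x2) = 0
L = f + lam * g          # Lagrange function

# First-order conditions: all partials of L vanish at the stationary point.
sol = sp.solve([sp.diff(L, v) for v in (x1, x2, lam)], [x1, x2, lam],
               dict=True)[0]

# Bordered Hessian evaluated at the stationary point.
H = sp.Matrix([
    [sp.diff(L, x1, x1), sp.diff(L, x1, x2), sp.diff(g, x1)],
    [sp.diff(L, x2, x1), sp.diff(L, x2, x2), sp.diff(g, x2)],
    [sp.diff(g, x1),     sp.diff(g, x2),     0],
]).subs(sol)

# A positive determinant indicates a maximum by Theorem 13.3 (i).
print(sol, H.det())
```

Here the stationary point is x1 = x2 = λ = 1/2 and the determinant works out positive, so condition (i) of the theorem applies.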

Example 13.7

Establish that the solution to example 13.1 is a true maximum.

Solution The bordered Hessian is

0 0
