21.2 The Hamiltonian and the Necessary Conditions for Maximization in Optimal Control Theory

Dynamic optimization of a functional subject to a constraint on the state variable in optimal control involves a Hamiltonian function H similar to the Lagrangian function in concave programming. In terms of (21.1), the Hamiltonian is defined as

H[x(t), y(t), λ(t), t] = f[x(t), y(t), t] + λ(t)g[x(t), y(t), t]        (21.2)

where λ(t) is called the costate variable. Similar to the Lagrange multiplier, the costate variable λ(t) estimates the marginal value or shadow price of the associated state variable x(t). Working from (21.2), formation of the Hamiltonian is easy: simply take the integrand under the integral sign and add to it the product of the costate variable λ(t) times the constraint.

Assuming the Hamiltonian is differentiable in y and strictly concave, so that there is an interior solution and not an endpoint solution, the necessary conditions for maximization are

1. ∂H/∂y = 0

2. ẋ = ∂H/∂λ        λ̇ = -∂H/∂x

3. x(0) = x₀        x(T) = x_T    (fixed endpoints)

The first two conditions are known as the maximum principle and the third is called the boundary condition. The two equations of motion in the second condition are generally referred to as the Hamiltonian system or the canonical system. For minimization, the objective functional can simply be multiplied by -1, as in concave programming. If the solution is an endpoint solution rather than an interior one, ∂H/∂y need not equal zero in the first condition, but H must still be maximized with respect to y. See Chapter 13, Example 9, and Fig. 13-1, for clarification. We shall generally assume interior solutions.
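As a quick check, the Hamiltonian and the derivatives in the conditions above can be set up symbolically. The sketch below uses Python's sympy library (an assumed dependency; any CAS works the same way), with the integrand f = 4x - 5y² and constraint g = 8y taken from Example 1 that follows:

```python
import sympy as sp

# State x, control y, and costate lam, treated as symbols at a point in time
x, y, lam = sp.symbols('x y lam')

# Integrand and constraint from Example 1 below; any differentiable f, g work alike
f = 4*x - 5*y**2      # integrand of the objective functional
g = 8*y               # constraint: x-dot = g

# Hamiltonian as in (21.2): H = f + lambda * g
H = f + lam*g

dH_dy   = sp.diff(H, y)      # maximum principle: set dH/dy equal to 0
lam_dot = -sp.diff(H, x)     # equation of motion: lambda-dot = -dH/dx
x_dot   = sp.diff(H, lam)    # equation of motion: x-dot = dH/dlambda

print(dH_dy, lam_dot, x_dot)
```

Setting dH_dy to zero gives the interior maximum of H in y; the other two derivatives give the canonical system.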

EXAMPLE 1. The conditions in Section 21.2 are used below to solve the following optimal control problem:

Maximize  ∫₀³ (4x - 5y²) dt

subject to  ẋ = 8y

x(0) = 2        x(3) = 117.2

A. From (21.2), form the Hamiltonian:

H = 4x - 5y² + λ(8y)

B. Assuming an interior solution, apply the maximum principle,

∂H/∂y = -10y + 8λ = 0        y = 0.8λ        (21.3)

and the equations of motion,

λ̇ = -∂H/∂x = -4        (21.4)

ẋ = ∂H/∂λ = 8y

But from (21.3), y = 0.8λ. So, ẋ = 8(0.8λ) = 6.4λ        (21.5)

Having employed the maximum principle, we are left with two differential equations, which we now solve for the state variable x(t) and the costate variable λ(t). By integrating (21.4) we find the costate variable,

λ(t) = -4t + c₁        (21.6)

Substituting (21.6) in (21.5) and integrating,

ẋ = 6.4(-4t + c₁) = -25.6t + 6.4c₁

x(t) = ∫(-25.6t + 6.4c₁) dt = -12.8t² + 6.4c₁t + c₂        (21.7)

C. The boundary conditions can now be used to solve for the constants of integration. Applying x(0) = 2 and x(3) = 117.2 successively to (21.7),

x(0) = c₂ = 2        c₂ = 2

x(3) = -12.8(3)² + 6.4c₁(3) + 2 = 117.2        c₁ = 12

Then by substituting c₁ = 12 and c₂ = 2 in (21.7) and (21.6), we have

x(t) = -12.8t² + 76.8t + 2        state variable

λ(t) = -4t + 12        costate variable
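The constants of integration can be double-checked symbolically; a minimal sketch with sympy (an assumed dependency), imposing the boundary conditions on the general solution x(t) from (21.7):

```python
import sympy as sp

t, c1, c2 = sp.symbols('t c1 c2')

# General solution x(t) from (21.7)
x = -12.8*t**2 + 6.4*c1*t + c2

# Impose the boundary conditions x(0) = 2 and x(3) = 117.2
sol = sp.solve([sp.Eq(x.subs(t, 0), 2),
                sp.Eq(x.subs(t, 3), 117.2)], [c1, c2])

print(sol)  # c1 = 12, c2 = 2 (up to floating-point rounding)
```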

D. Lastly, we can find the final solution for the control variable y(t) in either of two ways.

1. From (21.3), y(t) = 0.8λ, so

y(t) = 0.8(-4t + 12) = -3.2t + 9.6        control variable

2. Or, taking the derivative of (21.7),

ẋ = -25.6t + 76.8

we substitute for ẋ in the equation of motion in the constraint, ẋ = 8y:

-25.6t + 76.8 = 8y

y(t) = -3.2t + 9.6        control variable

Evaluated at the endpoints,

y(0) = -3.2(0) + 9.6 = 9.6        y(3) = -3.2(3) + 9.6 = 0

the optimal path of the control variable is linear, starting at (0, 9.6) and ending at (3, 0), with a slope of -3.2. For similar problems involving fixed endpoints, see also Problems 21.1 to 21.3.
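As a final sanity check, the solved paths from Example 1 can be verified numerically against the boundary conditions, the equation of motion ẋ = 8y, and the maximum-principle relation y = 0.8λ; a short self-contained Python sketch:

```python
# Solved paths from Example 1
def x(t):   return -12.8*t**2 + 76.8*t + 2   # state variable
def lam(t): return -4*t + 12                 # costate variable
def y(t):   return -3.2*t + 9.6              # control variable

# Boundary conditions x(0) = 2 and x(3) = 117.2
assert abs(x(0) - 2) < 1e-9 and abs(x(3) - 117.2) < 1e-9

# Equation of motion x-dot = 8y, checked with a centered finite difference
h = 1e-6
for t in (0.5, 1.5, 2.5):
    x_dot = (x(t + h) - x(t - h)) / (2 * h)
    assert abs(x_dot - 8 * y(t)) < 1e-4

# Maximum principle relation y = 0.8 * lambda holds along the whole path
assert all(abs(y(t) - 0.8 * lam(t)) < 1e-9 for t in (0, 1, 2, 3))

# Control path endpoints: (0, 9.6) and (3, 0)
assert abs(y(0) - 9.6) < 1e-9 and abs(y(3)) < 1e-9
```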
