Conditions for Profit Maximization

We shall now present an economic example of extreme-value problems, i.e., problems of optimization.

One of the first things that a student of economics learns is that, in order to maximize profit, a firm must equate marginal cost and marginal revenue. Let us show the mathematical derivation of this condition. To keep the analysis on a general level, we shall work with the total-revenue function R = R(Q) and the total-cost function C = C(Q), both of which are functions of a single variable Q. From these it follows that a profit function (the objective function) may also be formulated in terms of Q (the choice variable):

π = π(Q) = R(Q) − C(Q)    (9.1)
To find the profit-maximizing output level, we must satisfy the first-order necessary condition for a maximum: dπ/dQ = 0. Accordingly, let us differentiate (9.1) with respect to Q and set the resulting derivative equal to zero. The result is

dπ/dQ = R'(Q) − C'(Q) = 0    (9.2)

Thus the optimum output (equilibrium output) Q* must satisfy the equation R'(Q*) = C'(Q*), or MR = MC. This condition constitutes the first-order condition for profit maximization.

However, the first-order condition may lead to a minimum rather than a maximum; thus we must check the second-order condition next. We can obtain the second derivative by differentiating the first derivative in (9.2) with respect to Q:

d²π/dQ² = R''(Q) − C''(Q)

which, evaluated at Q*, is nonpositive if and only if R''(Q*) ≤ C''(Q*).

This last inequality is the second-order necessary condition for maximization. If it is not met, then Q* cannot possibly maximize profit; in fact, it minimizes profit. If R''(Q*) = C''(Q*), then we are unable to reach a definite conclusion. The best scenario is to find R''(Q*) < C''(Q*), which satisfies the second-order sufficient condition for a maximum. In that case, we can conclusively take Q* to be a profit-maximizing output. Economically this would mean that, if the rate of change of MR is less than the rate of change of MC at the output where MC = MR, then that output will maximize profit.

These conditions are illustrated in Fig. 9.7. In Fig. 9.7a we have drawn a total-revenue and a total-cost curve, which are seen to intersect twice, at output levels Q2 and Q4. In the open interval (Q2, Q4), total revenue R exceeds total cost C, and thus π is positive. But in the intervals [0, Q2) and (Q4, Q5], where Q5 represents the upper limit of the firm's productive capacity, π is negative. This fact is reflected in Fig. 9.7b, where the profit curve (obtained by plotting the vertical distance between the R and C curves for each level of output) lies above the horizontal axis only in the interval (Q2, Q4).

When we set dπ/dQ = 0, in line with the first-order condition, it is our intention to locate the peak point K on the profit curve, at output Q3, where the slope of the curve is zero. However, the relative-minimum point M (output Q1) will also offer itself as a candidate, because it, too, meets the zero-slope requirement. Below, we shall resort to the second-order condition to eliminate the "wrong" kind of extremum.

The first-order condition dπ/dQ = 0 is equivalent to the condition R'(Q) = C'(Q). In Fig. 9.7a, output Q3 satisfies this, because the R and C curves do have the same slope at Q3 (the tangent lines drawn to the two curves at H and J are parallel to each other). The same is true for output Q1. Since the equality of the slopes of R and C means the equality of MR and MC, outputs Q1 and Q3 must obviously be where the MR and MC curves intersect, as illustrated in Fig. 9.7c.

How does the second-order condition enter into the picture? Let us first look at Fig. 9.7b. At point K, the second derivative of the π function will (barring the exceptional zero-value case) have a negative value, π''(Q3) < 0, because the curve is inverse U-shaped around K; this means that Q3 will maximize profit. At point M, on the other hand, we would expect that π''(Q1) > 0; thus Q1 provides a relative minimum for π instead. The second-order sufficient condition for a maximum can, of course, be stated alternatively as R''(Q) < C''(Q), that is, that the slope of the MR curve be less than the slope of the MC curve. From Fig. 9.7c, it is immediately apparent that output Q3 satisfies this condition, since the slope of MR is negative while that of MC is positive at point L. But output Q1 violates this condition because both MC and MR have negative slopes, and that of MR is numerically smaller than that of MC at point N, which implies that R''(Q1) is greater than C''(Q1) instead. In fact, therefore, output Q1 also violates the second-order necessary condition for a relative maximum, but satisfies the second-order sufficient condition for a relative minimum.

FIGURE 9.7

Example 3

Let the R(Q) and C(Q) functions be

R(Q) = 1,200Q − 2Q²    C(Q) = Q³ − 61.25Q² + 1,528.5Q + 2,000

Then the profit function is

π(Q) = R(Q) − C(Q) = −Q³ + 59.25Q² − 328.5Q − 2,000

where R, C, and π are all in dollar units and Q is in units of (say) tons per week. This profit function has two critical values, Q = 3 and Q = 36.5, because

dπ/dQ = −3Q² + 118.5Q − 328.5 = 0    when Q = 3 or Q = 36.5

But since the second derivative is

d²π/dQ² = −6Q + 118.5

which is positive at Q = 3 but negative at Q = 36.5, the profit-maximizing output is Q* = 36.5 (tons per week). (The other output minimizes profit.) By substituting Q* into the profit function, we can find the maximized profit to be π* = π(36.5) = 16,318.44 (dollars per week).

As an alternative approach to the preceding, we can first find the MR and MC functions and then equate the two, i.e., find their intersection. Since

R'(Q) = 1,200 − 4Q    C'(Q) = 3Q² − 122.5Q + 1,528.5

equating the two functions will result in a quadratic equation identical with dπ/dQ = 0, which has yielded the two critical values of Q cited previously.
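Example 3's arithmetic is easy to verify numerically. The sketch below (helper names are ours) recomputes the critical values from dπ/dQ = 0, checks their second-derivative signs, and evaluates the maximized profit:

```python
# Numerical check of Example 3 (function names are ours; coefficients
# come from the profit function pi(Q) = -Q^3 + 59.25Q^2 - 328.5Q - 2000).
import math

def profit(q):
    return -q**3 + 59.25 * q**2 - 328.5 * q - 2000

def d_profit(q):                      # first derivative, d(pi)/dQ
    return -3 * q**2 + 118.5 * q - 328.5

def d2_profit(q):                     # second derivative
    return -6 * q + 118.5

# Critical values from d_profit(q) = 0, via the quadratic formula
a, b, c = -3, 118.5, -328.5
root = math.sqrt(b**2 - 4 * a * c)
q1, q2 = sorted([(-b + root) / (2 * a), (-b - root) / (2 * a)])
print(q1, q2)                         # 3.0 36.5
print(d2_profit(q1), d2_profit(q2))   # 100.5 (a minimum), -100.5 (a maximum)
print(round(profit(q2), 2))           # 16318.44
```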

Coefficients of a Cubic Total-Cost Function

In Example 3, a cubic function is used to represent the total-cost function. The traditional total-cost curve C = C(Q), as illustrated in Fig. 9.7a, is supposed to contain two wiggles that form a concave segment (decreasing marginal cost) and a subsequent convex segment (increasing marginal cost). Since the graph of a cubic function always contains exactly two wiggles, as illustrated in Fig. 9.4, it should suit that role well. However, Fig. 9.4 immediately alerts us to a problem: the cubic function can possibly produce a downward-sloping segment in its graph, whereas the total-cost function, to make economic sense, should be upward-sloping everywhere (a larger output always entails a higher total cost). If we wish to use a cubic total-cost function such as

C = C(Q) = aQ³ + bQ² + cQ + d    (9.3)

therefore, it is essential to place appropriate restrictions on the parameters so as to prevent the C curve from ever bending downward.

An equivalent way of stating this requirement is that the MC function should be positive throughout, and this can be ensured only if the absolute minimum of the MC function turns out to be positive. Differentiating (9.3) with respect to Q, we obtain the MC function

MC = C'(Q) = 3aQ² + 2bQ + c    (9.4)

which, because it is a quadratic, plots as a parabola as in Fig. 9.7c. In order for the MC curve to stay positive (above the horizontal axis) everywhere, it is necessary that the parabola be U-shaped (otherwise, with an inverse U, the curve is bound to extend itself into the second quadrant). Hence the coefficient of the Q² term in (9.4) has to be positive; i.e., we must impose the restriction a > 0. This restriction, however, is by no means sufficient, because the minimum value of a U-shaped MC curve (call it MC_min, a relative minimum which also happens to be an absolute minimum) may still occur below the horizontal axis. Thus we must next find MC_min and ascertain the parameter restrictions that would make it positive.

According to our knowledge of relative extrema, the minimum of MC will occur where

d(MC)/dQ = 6aQ + 2b = 0

The output level that satisfies this first-order condition is

Q* = −b/(3a)

This minimizes (rather than maximizes) MC because the second derivative d²(MC)/dQ² = 6a is assuredly positive in view of the restriction a > 0. The knowledge of Q* now enables us to calculate MC_min, but we may first infer the sign of coefficient b from it. Inasmuch as negative output levels are ruled out, we see that b can never be positive (given a > 0). Moreover, since the law of diminishing returns is assumed to set in at a positive output level (that is, MC is assumed to have an initial declining segment), Q* should be positive (rather than zero). Consequently, we must impose the restriction b < 0.

It is a simple matter now to substitute the MC-minimizing output Q* into (9.4) to find that

MC_min = 3a(−b/3a)² + 2b(−b/3a) + c = c − b²/(3a)

Thus, to guarantee the positivity of MC_min, we must impose the restriction† b² < 3ac. This last restriction, we may add, in effect also implies the restriction c > 0. (Why?)

The preceding discussion has involved the three parameters a, b, and c. What about the other parameter, d? The answer is that there is need for a restriction on d also, but that has nothing to do with the problem of keeping the MC positive. If we let Q = 0 in (9.3), we find that C(0) = d. The role of d is thus to determine the vertical intercept of the C curve only, with no bearing on its slope. Since the economic meaning of d is the fixed cost of a firm, the appropriate restriction (in the short-run context) would be d > 0.

† This restriction may also be obtained by the method of completing the square. The MC function can be successively transformed as follows:

MC = 3aQ² + 2bQ + c = 3a(Q + b/3a)² + c − b²/(3a)

Since the squared expression can possibly be zero, we must, in order to ensure the positivity of MC, require that b² < 3ac, on the knowledge that a > 0.

In sum, the coefficients of the total-cost function (9.3) should be restricted as follows (assuming the short-run context):

a, c, d > 0    b < 0    and    b² < 3ac    (9.5)

As you can readily verify, the C(Q) function in Example 3 does satisfy (9.5).
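The verification suggested above can be mechanized. A minimal sketch (the helper `satisfies_95` is our own name) that tests the restrictions in (9.5) for a cubic cost C(Q) = aQ³ + bQ² + cQ + d:

```python
# Test the coefficient restrictions (9.5): a, c, d > 0; b < 0; b^2 < 3ac.
def satisfies_95(a, b, c, d):
    return a > 0 and c > 0 and d > 0 and b < 0 and b**2 < 3 * a * c

# Coefficients of the Example 3 cost function C(Q) = Q^3 - 61.25Q^2 + 1528.5Q + 2000
print(satisfies_95(1, -61.25, 1528.5, 2000))  # True: b^2 = 3751.5625 < 3ac = 4585.5
```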

Upward-Sloping Marginal-Revenue Curve

The marginal-revenue curve in Fig. 9.7c is shown to be downward-sloping throughout. This, of course, is how the MR curve is traditionally drawn for a firm under imperfect competition. However, the possibility of the MR curve being partially, or even wholly, upward-sloping can by no means be ruled out a priori.†

Given an average-revenue function AR = f(Q), the marginal-revenue function can be expressed by

MR = f(Q) + Qf'(Q)    [from (7.7)]

The slope of the MR curve can thus be ascertained from the derivative

d(MR)/dQ = 2f'(Q) + Qf''(Q)

As long as the AR curve is downward-sloping (as it would be under imperfect competition), the 2f'(Q) term is assuredly negative. But the Qf''(Q) term can be negative, zero, or positive, depending on the sign of the second derivative of the AR function, i.e., depending on whether the AR curve is strictly concave, linear, or strictly convex. If the AR curve is strictly convex either in its entirety (as illustrated in Fig. 7.2) or along a specific segment, the possibility will exist that the (positive) Qf''(Q) term may dominate the (negative) 2f'(Q) term, thereby causing the MR curve to be wholly or partially upward-sloping.

Example 4

Let the average-revenue function be

AR = 8,000 − 23Q + 1.1Q² − 0.018Q³

As can be verified (see Exercise 9.4-7), this function gives rise to a downward-sloping AR curve, as is appropriate for a firm under imperfect competition. Since

MR = 8,000 − 46Q + 3.3Q² − 0.072Q³

it follows that the slope of MR is

d(MR)/dQ = −46 + 6.6Q − 0.216Q²

Because this is a quadratic function, and since the coefficient of the Q² term is negative, d(MR)/dQ must plot as an inverse-U-shaped curve against Q, such as shown in Fig. 9.5a. If a segment of this curve happens to lie above the horizontal axis, the slope of MR will take positive values.

† This point is emphatically brought out in John P. Formby, Stephen Layson, and W. James Smith, "The Law of Demand, Positive Sloping Marginal Revenue, and Multiple Profit Equilibria," Economic Inquiry, April 1982, pp. 303-311.


Setting d(MR)/dQ = 0 and applying the quadratic formula, we find the two zeros of the quadratic function to be Q1 = 10.76 and Q2 = 19.79 (approximately). This means that, for values of Q in the open interval (Q1, Q2), the d(MR)/dQ curve does lie above the horizontal axis. Thus the marginal-revenue curve indeed is positively sloped for output levels between Q1 and Q2.

The presence of a positively sloped segment on the MR curve has interesting implications. Such an MR curve may produce more than one intersection with the MC curve satisfying the second-order sufficient condition for profit maximization. While all such intersections constitute local optima, only one of them is the global optimum that the firm is seeking.
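For the slope function d(MR)/dQ = −46 + 6.6Q − 0.216Q² quoted in Example 4, the two zeros can be recomputed with the quadratic formula; a quick sketch:

```python
# Zeros of d(MR)/dQ = -46 + 6.6Q - 0.216Q^2 (coefficients from Example 4).
import math

a, b, c = -0.216, 6.6, -46.0
root = math.sqrt(b**2 - 4 * a * c)
q_lo, q_hi = sorted([(-b + root) / (2 * a), (-b - root) / (2 * a)])
print(round(q_lo, 2), round(q_hi, 2))          # approximately 10.76 and 19.8

# Between the two zeros the slope of MR is positive
mid = (q_lo + q_hi) / 2
print(-46 + 6.6 * mid - 0.216 * mid**2 > 0)    # True
```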

EXERCISE 9.4

1. Find the relative maxima and minima of y by the second-derivative test:

2. Mr. Greenthumb wishes to mark out a rectangular flower bed, using a wall of his house as one side of the rectangle. The other three sides are to be marked by wire netting, of which he has only 64 ft available. What are the length L and width W of the rectangle that would give him the largest possible planting area? How do you make sure that your answer gives the largest, not the smallest, area?

3. A firm has the following total-cost and demand functions:
(a) Does the total-cost function satisfy the coefficient restrictions of (9.5)?
(b) Write out the total-revenue function R in terms of Q.
(c) Formulate the total-profit function π in terms of Q.
(d) Find the profit-maximizing level of output Q*.
(e) What is the maximum profit?

4. If coefficient b in (9.3) were to take a zero value, what would happen to the marginal-cost and total-cost curves?

5. A quadratic profit function π(Q) = hQ² + jQ + k is to be used to reflect the following assumptions:
(a) If nothing is produced, the profit will be negative (because of fixed costs).
(b) The profit function is strictly concave.
(c) The maximum profit occurs at a positive output level Q*.
What parameter restrictions are called for?

6. A purely competitive firm has a single variable input L (labor), with the wage rate W0 per period. Its fixed inputs cost the firm a total of F dollars per period. The price of the product is P0.
(a) Write the production function, revenue function, cost function, and profit function of the firm.
(b) What is the first-order condition for profit maximization? Give this condition an economic interpretation.
(c) What economic circumstances would ensure that profit is maximized rather than minimized?

7. Use the following procedure to verify that the AR curve in Example 4 is negatively sloped:
(a) Denote the slope of AR by S. Write an expression for S.
(b) Find the maximum value of S by using the second-derivative test.
(c) Then deduce from the value of S_max that the AR curve is negatively sloped throughout.

9.5 Maclaurin and Taylor Series

The time has now come for us to develop a test for relative extrema that can apply even when the second derivative turns out to have a zero value at the stationary point. Before we can do that, however, it is first necessary to discuss the so-called expansion of a function y = f(x) into what are known, respectively, as a Maclaurin series (expansion around the point x = 0) and a Taylor series (expansion around any point x = x0).

To expand a function y = f(x) around a point x0 means, in the present context, to transform that function into a polynomial form, in which the coefficients of the various terms are expressed in terms of the derivative values f'(x0), f''(x0), etc., all evaluated at the point of expansion x0. In the Maclaurin series, these will be evaluated at x = 0; thus we have f'(0), f''(0), etc., in the coefficients. The result of expansion is a power series because, being a polynomial, it consists of a sum of power functions.

Maclaurin Series of a Polynomial Function

Let us consider first the expansion of a polynomial function of the nth degree,

f(x) = a0 + a1x + a2x² + a3x³ + a4x⁴ + ··· + a_n x^n    (9.6)

into an equivalent nth-degree polynomial where the coefficients (a0, a1, etc.) are expressed instead in terms of the derivative values f'(0), f''(0), etc. Since this involves the transformation of one polynomial into another of the same degree, it may seem a sterile and purposeless exercise, but actually it will serve to shed much light on the whole idea of expansion.

Since the power series after expansion will involve the derivatives of various orders of the function f, let us first find these. By successive differentiation of (9.6), we can get the derivatives as follows:

f'(x) = a1 + 2a2x + 3a3x² + 4a4x³ + ··· + n a_n x^(n−1)

f''(x) = 2a2 + 3(2)a3x + 4(3)a4x² + ··· + n(n − 1)a_n x^(n−2)

f'''(x) = 3(2)a3 + 4(3)(2)a4x + ··· + n(n − 1)(n − 2)a_n x^(n−3)

f^(4)(x) = 4(3)(2)a4 + 5(4)(3)(2)a5x + ··· + n(n − 1)(n − 2)(n − 3)a_n x^(n−4)

···

f^(n)(x) = n(n − 1)(n − 2)(n − 3) ··· (3)(2)(1)a_n


Note that each successive differentiation reduces the number of terms by one (the additive constant in front drops out) until, in the nth derivative, we are left with a single product term (a constant term). These derivatives can be evaluated at various values of x; here we shall evaluate them at x = 0, with the result that all terms involving x will drop out. We are then left with the following exceptionally neat derivative values:

f'(0) = a1    f''(0) = 2a2    f'''(0) = 3(2)a3    f^(4)(0) = 4(3)(2)a4
···
f^(n)(0) = n(n − 1)(n − 2)(n − 3) ··· (3)(2)(1)a_n    (9.7)

If we now adopt the shorthand symbol n! (read: "n factorial"), defined as

n! = n(n − 1)(n − 2) ··· (3)(2)(1)    (n = a positive integer)

so that, for example, 2! = 2 × 1 = 2 and 3! = 3 × 2 × 1 = 6, etc. (with 0! defined as equal to 1), then the result in (9.7) can be rewritten as

f'(0) = 1! a1    f''(0) = 2! a2    f'''(0) = 3! a3    ···    f^(n)(0) = n! a_n

Substituting these into (9.6) and utilizing the obvious fact that f(0) = a0, we can now express the given function f(x) as a new, but equivalent, same-degree polynomial in which the coefficients are expressed in terms of derivatives evaluated at x = 0:†

f(x) = f(0)/0! + [f'(0)/1!]x + [f''(0)/2!]x² + [f'''(0)/3!]x³ + ··· + [f^(n)(0)/n!]x^n    (9.8)

This new polynomial, called the Maclaurin series of the polynomial function f(x), represents the expansion of the function f(x) around zero (x = 0). Note that the point of expansion (here, 0) is simply the value of x that will be used to evaluate f(x) and all its derivatives.

Example 1

Find the Maclaurin series for the function

f(x) = 2 + 4x + 3x²    (9.9)

This function has the derivatives

f'(x) = 4 + 6x    so that f'(0) = 4
f''(x) = 6        so that f''(0) = 6

Thus the Maclaurin series is

f(x) = f(0)/0! + [f'(0)/1!]x + [f''(0)/2!]x² = 2 + 4x + (6/2)x² = 2 + 4x + 3x²

The previous line verifies that the Maclaurin series does indeed correctly represent the given function.

† Since 0! = 1 and 1! = 1, the first two terms on the right of the equals sign in (9.8) can be written more simply as f(0) and f'(0)x, respectively. We have included the denominators 0! and 1! here to call attention to the symmetry among the various terms in the expansion.
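The relationship in (9.7) and (9.8), a_k = f^(k)(0)/k!, can be illustrated with a short script that differentiates a coefficient list repeatedly and reads off each derivative at zero (the coefficient-list helpers are our own):

```python
# Recover a polynomial's coefficients from its derivatives at x = 0,
# illustrating (9.7)-(9.8).
from math import factorial

def deriv(coeffs):
    """Differentiate a0 + a1*x + a2*x^2 + ..., given as [a0, a1, a2, ...]."""
    return [k * a for k, a in enumerate(coeffs)][1:]

coeffs = [2, 4, 3]                  # f(x) = 2 + 4x + 3x^2, the function of Example 1
recovered, current = [], coeffs
for k in range(len(coeffs)):
    recovered.append(current[0] / factorial(k))   # f^(k)(0) / k!
    current = deriv(current)
print(recovered)                    # [2.0, 4.0, 3.0] -- the original coefficients
```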

Taylor Series of a Polynomial Function

More generally, the polynomial function in (9.6) can be expanded around any point x0, not necessarily zero. In the interest of simplicity, we shall explain this by means of the specific quadratic function in (9.9) and generalize the result later.

For the purpose of expansion around a specific point x0, we may first interpret any given value of x as a deviation from x0. More specifically, we shall let x = x0 + δ, where δ represents the deviation of x from the value x0. Upon such interpretation, the given function

(9.9) and its derivatives now become

f(x) = 2 + 4(x0 + δ) + 3(x0 + δ)²    f'(x) = 4 + 6(x0 + δ)    f''(x) = 6    (9.10)

We know that the expression (x0 + δ) = x is a variable in the function, but since x0 in the present context is a fixed (chosen) number, only δ can be properly regarded as a variable in

(9.10). Consequently, f(x) is in fact a function of δ, say, g(δ):

g(δ) = 2 + 4(x0 + δ) + 3(x0 + δ)²

with derivatives

g'(δ) = 4 + 6(x0 + δ)    g''(δ) = 6

We already know how to expand g(δ) around zero (δ = 0). According to (9.8), such an expansion will yield the following Maclaurin series:

g(δ) = g(0)/0! + [g'(0)/1!]δ + [g''(0)/2!]δ²    (9.11)

But since we have let x = x0 + δ, the fact that δ = 0 implies x = x0; hence, on the basis of the identity g(δ) = f(x), we can write for the case of δ = 0:

g(0) = f(x0)    g'(0) = f'(x0)    g''(0) = f''(x0)

Upon substituting these into (9.11) and replacing δ by (x − x0), we find the result to represent the expansion of f(x) around the point x0, because the coefficients now involve the derivatives f'(x0), f''(x0), etc., all evaluated at x = x0:

f(x) = f(x0)/0! + [f'(x0)/1!](x − x0) + [f''(x0)/2!](x − x0)²    (9.12)

You should compare this result, the Taylor polynomial of f(x), with the Maclaurin polynomial of g(δ) in (9.11).

Since, for the specific function under consideration, (9.9), we have

f(x0) = 2 + 4x0 + 3x0²    f'(x0) = 4 + 6x0    f''(x0) = 6

the Taylor polynomial in (9.12) becomes

f(x) = 2 + 4x0 + 3x0² + (4 + 6x0)(x − x0) + (6/2)(x − x0)² = 2 + 4x + 3x²

This verifies that the Taylor polynomial does correctly represent the given function.

The expansion formula in (9.12) can be generalized to apply to the nth-degree polynomial of (9.6). The generalized formula is

f(x) = f(x0)/0! + [f'(x0)/1!](x − x0) + [f''(x0)/2!](x − x0)² + ··· + [f^(n)(x0)/n!](x − x0)^n    (9.13)

This differs from Maclaurin's formula in (9.8) only in the replacement of zero by x0 as the point of expansion, and in the replacement of x by the expression (x − x0). What (9.13) tells us is that, given an nth-degree polynomial f(x), if we let x = 7 (say) in the terms on the right of (9.13), select an arbitrary number x0, then evaluate and add these terms, we will end up exactly with f(7), the value of f(x) at x = 7.

Example 2  Taking x0 = 3 as the point of expansion, we can rewrite (9.6) equivalently as

f(x) = f(3) + f'(3)(x − 3) + [f''(3)/2!](x − 3)² + ··· + [f^(n)(3)/n!](x − 3)^n
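As a numerical check of (9.13), we can expand the quadratic 2 + 4x + 3x² around x0 = 3 and confirm that the Taylor polynomial reproduces the function exactly at any x; a small sketch:

```python
# Taylor expansion of f(x) = 2 + 4x + 3x^2 around x0 = 3, per (9.13).
def f(x):
    return 2 + 4 * x + 3 * x**2

def taylor(x, x0=3):
    f1 = 4 + 6 * x0          # f'(x0)
    f2 = 6                   # f''(x0)
    return f(x0) + f1 * (x - x0) + (f2 / 2) * (x - x0) ** 2

print(taylor(7), f(7))       # 177.0 177 -- identical, since f is a polynomial
```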

Expansion of an Arbitrary Function

Heretofore, we have shown how an nth-degree polynomial function can be expressed in another, equivalent, nth-degree polynomial form. As it turns out, it is also possible to express any arbitrary function φ(x), one that is not necessarily a polynomial, in a polynomial form similar to (9.13), provided φ(x) has finite, continuous derivatives up to the desired order at the expansion point x0.

According to a mathematical proposition known as Taylor's theorem, given an arbitrary function φ(x), if we know the value of the function at x = x0 [that is, φ(x0)] and the values of its derivatives at x0 [that is, φ'(x0), φ''(x0), etc.], then this function can be expanded around the point x0 as follows (n = a fixed positive integer arbitrarily chosen):

φ(x) = φ(x0)/0! + [φ'(x0)/1!](x − x0) + [φ''(x0)/2!](x − x0)² + ··· + [φ^(n)(x0)/n!](x − x0)^n + Rn
     = Pn + Rn    [Taylor's formula with remainder]    (9.14)

where Pn represents the (bracketed) nth-degree polynomial [the first (n + 1) terms on the right], and Rn denotes a remainder, to be explained on page 248.† The presence of Rn is what distinguishes (9.14) from Taylor's formula (9.13), and for this reason (9.14) is called Taylor's formula with remainder. The form of the polynomial Pn and the size of the remainder Rn will depend on the value of n we choose. The larger the n, the more terms there will be in Pn; accordingly, Rn will in general assume a different value for each different n. This fact explains the need for the subscript n in these two symbols. As a memory aid, we can identify n as the order of the highest derivative in Pn. (In the special case of n = 0, no derivative will appear in Pn at all.)

† The symbol Rn (remainder) is not to be confused with the symbol R^n (n-space).

The appearance of Rn in (9.14) is due to the fact that we are here dealing with an arbitrary function φ, which cannot always be transformed exactly into, but can only be approximated by, the polynomial form shown in (9.13). Therefore, a remainder term is included as a supplement to the Pn part, to represent the discrepancy between φ(x) and Pn. Thus, Pn constitutes a polynomial approximation to φ(x), with the term Rn as a measure of the error of approximation. If we choose n = 1, for example, we have

φ(x) = φ(x0) + φ'(x0)(x − x0) + R1 = P1 + R1

where P1 consists of n + 1 = 2 terms and constitutes a linear approximation to φ(x). If we choose n = 2, a second-power term will appear, so that

φ(x) = φ(x0) + φ'(x0)(x − x0) + [φ''(x0)/2!](x − x0)² + R2 = P2 + R2

where P2, consisting of n + 1 = 3 terms, is a quadratic approximation to φ(x). And so forth. The fact that we can create polynomial approximations to any arbitrary function (provided it has finite, continuous derivatives) is of great practical significance. Polynomial functions, even higher-degree ones, are relatively easy to work with, and if they can serve as good approximations to some difficult functions, much convenience is to be gained, as the next two examples will illustrate.

We should point out that the arbitrary function φ(x) could obviously encompass the nth-degree polynomial of (9.6) as a special case. For this latter case, if the expansion is into another nth-degree polynomial, the result of (9.13) will exactly apply; or in other words, we can use the result in (9.14), with Rn = 0. However, if the given nth-degree polynomial f(x) is to be expanded into a polynomial of a lesser degree, then the latter can only be considered an approximation to f(x), and a remainder must appear; in that case, the result in (9.14) can be applied with a nonzero remainder Rn. Thus Taylor's formula in the form of (9.14) is perfectly general.

Example 3  Expand the nonpolynomial function

φ(x) = 1/(1 + x)

around the point x0 = 1, with n = 4. We shall need the first four derivatives of φ(x), which are

φ'(x) = −(1 + x)⁻²       so that φ'(1) = −(2)⁻² = −1/4
φ''(x) = 2(1 + x)⁻³      so that φ''(1) = 2(2)⁻³ = 1/4
φ'''(x) = −6(1 + x)⁻⁴    so that φ'''(1) = −6(2)⁻⁴ = −3/8
φ^(4)(x) = 24(1 + x)⁻⁵   so that φ^(4)(1) = 24(2)⁻⁵ = 3/4

Also, we see that φ(1) = 1/2. Thus, setting x0 = 1 in (9.14) and utilizing the obtained derivatives, we arrive at the following Taylor series with remainder:

φ(x) = 1/2 − (1/4)(x − 1) + (1/8)(x − 1)² − (1/16)(x − 1)³ + (1/32)(x − 1)⁴ + R4

It is possible, of course, to choose x0 = 0 as the point of expansion here, too. In that case, with x0 set equal to zero in (9.14), the expansion will result in a Maclaurin series with remainder.
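The five-term polynomial P4 obtained in Example 3 can be compared with the true value of φ(x) = 1/(1 + x) at a few points; a sketch showing that the error R4 vanishes at the point of expansion and grows with the distance from it:

```python
# P4 of phi(x) = 1/(1 + x) around x0 = 1 (Example 3) versus phi itself.
def phi(x):
    return 1 / (1 + x)

def p4(x):
    d = x - 1
    return 0.5 - d / 4 + d**2 / 8 - d**3 / 16 + d**4 / 32

for x in (1.0, 0.9, 1.5):
    print(x, abs(phi(x) - p4(x)))   # error is 0 at x = 1 and grows with |x - 1|
```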

Example 4  Expand the quadratic function

φ(x) = 5 + 2x + x²

around x0 = 1, with n = 1. This function is, like (9.9) in Example 1, a second-degree polynomial. But since n = 1, our assigned task is to expand it into a first-degree polynomial, i.e., to find a linear approximation to the given quadratic function; thus a remainder term is bound to appear. For this reason, φ(x) should be viewed as an "arbitrary" function for the purpose of this Taylor expansion.

To carry out this expansion, we need only the first derivative φ'(x) = 2 + 2x. Evaluated at x0 = 1, the given function and its derivative yield

φ(1) = 8    φ'(1) = 4

Thus Taylor's formula with remainder gives us

φ(x) = φ(1) + φ'(1)(x − 1) + R1 = 8 + 4(x − 1) + R1 = 4 + 4x + R1

where the (4 + 4x) term is a linear approximation and the R1 term represents the error of approximation.

In Fig. 9.8, φ(x) plots as a parabola, and its linear approximation as a straight line tangent to the curve at the point (1, 8). The occurrence of the point of tangency at x = 1 is not a matter of coincidence; rather, it is the direct consequence of the fact that the point of expansion is set at that particular value of x. This suggests that, when an arbitrary function φ(x) is approximated by a polynomial, the latter will give the exact value of φ(x) at (and only at) the point of expansion, with zero error of approximation (R1 = 0). Elsewhere, R1 is strictly nonzero and, in fact, shows increasingly larger errors of approximation as we

FIGURE 9.8


try to approximate φ(x) for x values farther and farther away from the point of expansion. Thus, when attempting to approximate any function φ(x) by a polynomial, if we are most interested in obtaining an accurate approximation in the neighborhood of a specific value of x, say x0, then we ought to choose x0 as the point of expansion.

The construction of Fig. 9.8 is strongly reminiscent of Fig. 8.1. Indeed, both figures are concerned with "approximations." But there is a difference in the scope of approximation. In Fig. 8.1, we attempt to approximate Δy by the differential dy with the help of a tangent line drawn at a given starting value of x. In Fig. 9.8, on the other hand, we aim more broadly to approximate an entire curve by a particular straight line, i.e., to approximate the height of the curve at any value of x, say x1, by the corresponding height of the straight line at x1. Note that, in both cases, the error of approximation varies with the value of x. In Fig. 8.1, the error (the difference between dy and Δy) gets smaller as Δx gets smaller, or as x gets closer to the point at which the tangent line is drawn. In Fig. 9.8, the error (the vertical discrepancy between the straight line and the curve) gets smaller as x approaches the chosen point of expansion.

Lagrange Form of the Remainder

Now we must comment further on the remainder term. According to the Lagrange form of the remainder, we can express Rn as

Rn = [φ^(n+1)(p)/(n + 1)!](x − x0)^(n+1)    (9.15)

where p is some number between x (the point where we wish to evaluate the arbitrary function φ) and x0 (the point where we expand the function φ). Note that this expression closely resembles the term which should logically follow the last term in Pn in (9.14), except that the derivative involved is here to be evaluated at a point p instead of x0. Since the point p is, unfortunately, not otherwise specified, this formula does not really enable us to calculate Rn; nevertheless, it does have great analytical significance. Let us therefore illustrate its meaning graphically, although we shall do it only for the simple case of n = 0.

When n = 0, no derivatives whatever will appear in the polynomial part Pn; therefore (9.14) reduces to

φ(x) = P0 + R0 = φ(x0) + φ'(p)(x − x0)    or    φ(x) − φ(x0) = φ'(p)(x − x0)

This result, a simple version of the mean-value theorem, states that the difference between the value of the function φ at x0 and at any other x value can be expressed as the product of the difference (x − x0) and the derivative φ' evaluated at p (with p being some point between x and x0). Let us look at Fig. 9.9, where the function φ(x) is shown as a continuous curve with derivative values defined at all points. Let x0 be the chosen point of expansion, and let x be any point on the horizontal axis. If we try to approximate φ(x), or distance xB, by φ(x0), or distance x0A, it will involve an error equal to φ(x) − φ(x0), or the distance CB. What the mean-value theorem says is that the error CB, which constitutes the value of the remainder term in the expansion, can be expressed as φ'(p)(x − x0), where p is some point between x and x0. First we locate, on the curve between points

FIGURE 9.9


A and B, a point D such that the tangent line at D is parallel to line AB; such a point D must exist, since the curve passes from A to B in a continuous and smooth manner. Then, the remainder will be

R0 = CB = (slope of line AB) · AC = (slope of tangent at D) · AC = (slope of curve at x = p) · AC = φ'(p)(x − x0)

where the point p is between x and x0, as required. This demonstrates the rationale of the Lagrange form of the remainder for the case n = 0. We can always express R0 as φ'(p)(x − x0) because, even though p cannot be assigned a specific value, we can be sure that such a point p exists.
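For a concrete function, the point p of the mean-value theorem can be computed explicitly. A sketch with φ(x) = x² (our choice of example): the chord slope over [x0, x] equals φ'(p) = 2p, so p can be solved for directly:

```python
# Mean-value theorem check for phi(x) = x^2 on [1, 3]:
# phi(x) - phi(x0) = phi'(p)(x - x0) for some p between x0 and x.
def phi(t):
    return t * t

x0, x = 1.0, 3.0
chord_slope = (phi(x) - phi(x0)) / (x - x0)   # (9 - 1)/2 = 4.0
p = chord_slope / 2                           # solve phi'(p) = 2p = chord slope
print(p, x0 < p < x)                          # 2.0 True
```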

Equation (9.15) provides a way of expressing the remainder term Rn, but it does not eliminate Rn as a source of discrepancy between φ(x) and the polynomial Pn. However, if it happens that, as we increase n (thus raising the degree of the polynomial) indefinitely, we find that

Rn → 0 as n → ∞    so that    Pn → φ(x) as n → ∞

then the Taylor series is said to be convergent to φ(x) at the point of expansion, and the Taylor series can be written as a convergent infinite series as follows:

φ(x) = φ(x0) + [φ'(x0)/1!](x − x0) + [φ''(x0)/2!](x − x0)² + ···    (9.16)

Note that the Rn term is no longer shown; in its place is an ellipsis signifying that the polynomial contains an infinite number of subsequent terms whose mathematical structures follow the pattern indicated by the previous terms. In this (convenient) event, it will be possible to make Pn as accurate an approximation to φ(x) as we desire by choosing a large enough value for n, that is, by including a large enough number of terms in the polynomial Pn. An important example of this will be discussed in Sec. 10.2.
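Convergence of Pn to φ(x) can be watched numerically for the function of Example 3, φ(x) = 1/(1 + x) expanded around x0 = 1. From the derivative pattern computed there, the kth term of the series is (−1)^k (x − 1)^k / 2^(k+1); for x inside the interval of convergence |x − 1| < 2, the error shrinks as n grows. A sketch:

```python
# Watch Pn -> phi(x) as n grows, at x = 1.5.
def phi(x):
    return 1 / (1 + x)

def p_n(x, n):
    # nth-degree Taylor polynomial of phi around x0 = 1;
    # term k is (-1)^k (x - 1)^k / 2^(k+1), from the derivatives in Example 3
    return sum((-1) ** k * (x - 1) ** k / 2 ** (k + 1) for k in range(n + 1))

x = 1.5
errors = [abs(phi(x) - p_n(x, n)) for n in (1, 4, 8, 16)]
print(errors)                                  # falls steadily toward zero
print(errors == sorted(errors, reverse=True))  # True
```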

EXERCISE 9.5

1. Find the value of the following factorial expressions:

2. Find the first five terms of the Maclaurin series (i.e., choose n = 4 and let x0 = 0) for:

3. Find the Taylor series with n = 4 and x0 = −2, for the two functions in Prob. 2.

4. On the basis of Taylor's formula with the Lagrange form of the remainder [see (9.14) and (9.15)], show that at the point of expansion (x = x0) the Taylor series will always give exactly the value of the function at that point, φ(x0), not merely an approximation.
