Useful Generalizations, part I

Very often in mathematics we see a nice argument or proof and we realize that the same argument can prove more than what was originally intended. The purpose of this post is to do just that for two recent posts, “It’s a Mean Value Theorem” and “On the value of believing that you know the answer.”

In the post on the Mean Value Theorem (MVT), we proved as a warm-up example that:

7^(1/3)+9^(1/3) < 4

As a quick review, MVT states that if f(x) is a differentiable function on the open interval (a,b) and continuous on the closed interval [a,b], then there is at least one point z in (a,b) such that f′(z)=[f(b)−f(a)]/(b−a). We rewrite this as:

f(b)−f(a)=(b−a)f′(z) (MVT)

Now set f(x)=x^(1/3), observe that f′(x) is decreasing when x>0, and use MVT twice, once on [8,9] and once on [7,8], to get:

f(9)−f(8)<f(8)−f(7). Since f(8)=2, rearranging this last inequality gives f(7)+f(9)<4, which is the desired answer.
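
For readers who like to see the numbers, here is a minimal Python sanity check of both the inequality and the two MVT gaps (the code and the name f are just our illustration, not part of the original argument):

```python
# Sanity check for 7^(1/3) + 9^(1/3) < 4 and for the two MVT "gaps".

def f(x):
    return x ** (1 / 3)

gap_right = f(9) - f(8)   # equals f'(z1) for some z1 in (8, 9)
gap_left = f(8) - f(7)    # equals f'(z2) for some z2 in (7, 8)

assert gap_right < gap_left   # f' is decreasing, so the right-hand gap is smaller
assert f(7) + f(9) < 4        # the rearranged form of the inequality above
print(f(7) + f(9))            # about 3.993
```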

These two applications of MVT use different choices for b and a, but in both applications b−a=1. As a generalization, we imagine intervals of a common length d>0, with a+d≤b−d, and we consider any function whose derivative is decreasing:

Which is bigger, f(a)+f(b) or f(a+d)+f(b−d)?

By comparison, our original question had:

a=7, b=9 and d=1.

From two applications of MVT, one on [a, a+d] and one on [b−d, b], together with the fact that f′ is decreasing and a+d≤b−d, we see that

f(b)−f(b−d) < f(a+d)−f(a)

Hence we have our generalization:

f(a)+f(b)<f(a+d)+f(b−d)
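
A quick way to build confidence in this inequality is to test it numerically. Here is a minimal sketch, using f(x)=x^(1/3) as a sample function with decreasing derivative; the particular triples (a, b, d) are arbitrary choices of ours:

```python
# Check f(a) + f(b) < f(a+d) + f(b-d) whenever 0 < d and a + d <= b - d,
# for a sample function whose derivative is decreasing on x > 0.

def f(x):
    return x ** (1 / 3)

for a, b, d in [(7, 9, 1), (1, 10, 2), (2, 50, 0.5)]:
    assert 0 < d and a + d <= b - d
    assert f(a) + f(b) < f(a + d) + f(b - d)
    print((a, b, d), round(f(a) + f(b), 4), "<", round(f(a + d) + f(b - d), 4))
```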

But is this generalization useful?

Here’s where the post “On the value of believing that you know the answer” comes into play. The goal of that post was to prove the arithmetic-geometric mean inequality. However, instead of directly proving that the arithmetic average of n numbers is at least the n-th root of their product, we replaced two of the n numbers, say a and b, with two new numbers a+d and b−d (choosing d so that one of these new numbers was the arithmetic average). This change maintained the same arithmetic average, and we showed that the n-th root of the product increased. Finitely many of these two-at-a-time changes produce n equal numbers, and the result follows by comparing the product at each stage.
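
As an illustration of that two-at-a-time step, here is a small sketch of one replacement; it is our own reconstruction (in particular, picking the smallest and largest numbers is just one convenient choice), and it checks that the arithmetic mean stays fixed while the geometric mean goes up:

```python
# One "two-at-a-time" replacement: swap the smallest number a and a number b > A
# for a + d and b - d, with d chosen so one of the new numbers equals the average A.
import math

def replacement_step(xs):
    A = sum(xs) / len(xs)
    xs = sorted(xs)
    a, b = xs[0], xs[-1]              # smallest number, and one bigger than A
    d = min(A - a, b - A)
    return xs[1:-1] + [a + d, b - d]

xs = [1, 2, 3, 10]                    # arithmetic average A = 4
ys = replacement_step(xs)             # -> [2, 3, 4, 7]
print(sum(ys) / len(ys))              # still 4: the arithmetic average is unchanged
print(math.prod(xs) ** 0.25, "<", math.prod(ys) ** 0.25)   # geometric mean increased
```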

The same process works for a closely analogous application: given n numbers x_1, x_2, …, x_n whose (arithmetic) average is A, Jensen’s inequality states that

if f′(x) is decreasing, then f(A)≥Z,

where Z is the arithmetic average of the n outputs, i.e.

Z = [f(x_1)+f(x_2)+…+f(x_n)]/n and A = [x_1 + x_2 +…+x_n]/n.

If the n numbers are all equal, there is nothing to prove. Otherwise, let a be the smallest of them, so a<A, and there must be another one, say b, which is bigger than A. If A−a<b−A then set d=A−a; otherwise set d=b−A. This guarantees that either a+d=A or b−d=A, and also that d>0 and a+d≤A≤b−d, so our generalization applies. To prove Jensen’s inequality, replace the two numbers a and b with a+d and b−d. The average of the n new numbers will still be A, but the average of the n outputs will be larger. Our generalization is useful! Hooray!

After each replacement, fewer of the n numbers differ from A. We repeat, similarly altering numbers that differ from A, until none remain. At that point the average of the n outputs is exactly f(A), and since that average only increased along the way, the original average Z satisfies Z≤f(A), as claimed.
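
Putting the pieces together, here is a small sketch (again our own illustration, with the same arbitrary choices as above) that runs the replacement procedure to completion and confirms that the original output average Z is at most f(A):

```python
# Run the replacement procedure to completion and check Jensen's inequality
# f(A) >= Z for a sample function whose derivative is decreasing.

def f(x):
    return x ** (1 / 3)            # f'(x) is decreasing for x > 0

def jensen_by_replacement(xs, tol=1e-12):
    A = sum(xs) / len(xs)
    Z0 = sum(f(x) for x in xs) / len(xs)      # original average of outputs
    Z = Z0
    while any(abs(x - A) > tol for x in xs):
        a, b = min(xs), max(xs)               # a < A < b since not all equal A
        d = min(A - a, b - A)
        xs.remove(a)
        xs.remove(b)
        xs += [a + d, b - d]                  # one of these now equals A
        new_Z = sum(f(x) for x in xs) / len(xs)
        assert new_Z >= Z                     # the output average only goes up
        Z = new_Z
    return A, Z0, Z

A, Z0, Z_final = jensen_by_replacement([1, 2, 3, 10])
print(Z0, "<=", f(A))              # Jensen: the original Z is at most f(A)
print(Z_final)                     # equals f(A), up to rounding
```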

Remarks

  1. This proof is well-known, but this discussion suggests a natural way to find it.
  2. It is also well-known (and by now probably not surprising) that setting f(x)=log x in Jensen’s inequality immediately yields the AM-GM inequality: since (log x)′=1/x is decreasing for x>0, we get log A ≥ [log x_1+…+log x_n]/n = log (x_1 x_2 … x_n)^(1/n), and exponentiating gives A ≥ (x_1 x_2 … x_n)^(1/n).
  3. In “It’s a Mean Value Theorem” the main example was to show that in any acute triangle with angles A, B and C, sinA+sinB+sinC>2. Let f(x)=sin x and notice that f′(x)=cos x is decreasing for acute x. Hence by Jensen’s inequality, f((A+B+C)/3)≥(sinA+sinB+sinC)/3, and since A+B+C=π, rearranging slightly gives us:

sinA+sinB+sinC ≤ 3sin(π/3) = 3√3/2 ≈ 2.598.

This is a nice upper bound to go with the lower bound of the previous post.
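
For a quick numerical check of both bounds, one can sample random acute triangles; the sketch below is our own illustration:

```python
# Sample random acute triangles and check 2 < sin A + sin B + sin C <= 3*sin(pi/3).
import math
import random

upper = 3 * math.sin(math.pi / 3)        # = 3*sqrt(3)/2, about 2.598

random.seed(0)
checked = 0
while checked < 1000:
    A = random.uniform(0, math.pi / 2)
    B = random.uniform(0, math.pi / 2)
    C = math.pi - A - B
    if 0 < C < math.pi / 2:              # keep only acute triangles
        s = math.sin(A) + math.sin(B) + math.sin(C)
        assert 2 < s <= upper
        checked += 1
print("checked", checked, "acute triangles; upper bound =", upper)
```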

