real analysis – Proofs of AM-GM inequality

The Question :

108 people think this question is useful

The arithmetic – geometric mean inequality states that
$$\frac{x_1+ \ldots + x_n}{n} \geq \sqrt[n]{x_1 \cdots x_n}$$
I’m looking for some original proofs of this inequality. I can find the usual proofs on the internet but I was wondering if someone knew a proof that is unexpected in some way. e.g. can you link the theorem to some famous theorem, can you find a non-trivial geometric proof (I can find some of those), proofs that use theory that doesn’t link to this inequality at first sight (e.g. differential equations …)?

Induction, backward induction, use of Jensen inequality, swapping terms, use of Lagrange multiplier, proof using thermodynamics (yeah, I know, it’s rather some physical argument that this theorem might be true, not really a proof), convexity, … are some of the proofs I know.

The Question Comments :
  • What have you seen, so that we don’t repeat what you already know?
  • Would you include that in your post? (And, thermodynamics? ._. )
  • I’m curious about the thermodynamics proof $\ddot \smile$
  • Community wiki?
  • Please add the proofs you know as answers (just to get them all together).

The Answer 1

73 people think this answer is useful

Pólya’s Proof:

Let $f(x) = e^{x-1}-x$. The first derivative is $f'(x)=e^{x-1}-1$ and the second derivative is $f''(x) = e^{x-1}$.

$f$ is convex everywhere because $f''(x) > 0$, and it attains its minimum $f(1)=0$ at $x=1$. Therefore $x \le e^{x-1}$ for all $x$, with equality only when $x=1$.

Using this inequality we get

$$\frac{x_1}{a} \frac{x_2}{a} \cdots \frac{x_n}{a} \le e^{\frac{x_1}{a}-1} e^{\frac{x_2}{a}-1} \cdots e^{\frac{x_n}{a}-1}$$

with $a$ being the arithmetic mean. The right side simplifies

$$\exp \left(\frac{x_1}{a} -1 \ +\frac{x_2}{a} -1 \ + \cdots + \frac{x_n}{a} -1 \right)$$

$$=\exp \left(\frac{x_1 + x_2 + \cdots + x_n}{a} - n \right) = \exp(n - n) = e^0 = 1$$

Going back to the first inequality

$$\frac{x_1x_2\cdots x_n}{a^n} \le 1$$

So we end with

$$\sqrt[n]{x_1x_2\cdots x_n} \le a$$
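Pólya's argument is easy to sanity-check numerically. The sketch below (plain Python; the variable names are mine, not from the answer) verifies both the pointwise bound $x \le e^{x-1}$ and the resulting bound $\prod_i (x_i/a) \le 1$ on random data:

```python
import math
import random

random.seed(0)

# Pointwise bound: x <= e^(x-1) for all x, with equality only at x = 1.
for _ in range(1000):
    x = random.uniform(0.01, 10.0)
    assert x <= math.exp(x - 1) + 1e-12

# Polya's chain: with a the arithmetic mean, the product of the x_i / a
# is at most 1, which is exactly GM <= AM after taking n-th roots.
for _ in range(100):
    xs = [random.uniform(0.1, 10.0) for _ in range(8)]
    a = sum(xs) / len(xs)
    prod = math.prod(x / a for x in xs)
    assert prod <= 1 + 1e-9

print("checks passed")
```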

The Answer 2

43 people think this answer is useful

I shall provide a simple geometric proof of the inequality in the case of two variables (which I have not been able to find anywhere else – a proof involving a triangle in a circle seems to be popular).

Consider the square of side $a + b$ in the figure below.

The area of the square is $(a + b)^2$. But as it completely contains the four blue rectangles, each of area $ab$, it follows that

$$(a + b)^2 \ge 4ab \Rightarrow \dfrac{a + b}{2} \ge \sqrt{ab}$$

Further, note that there is a square in the middle, of side $(b - a)$, and hence area $(b - a)^2$. Therefore the inequality is strict except when $a = b$.

This proves the two-variable case. One would hope to extend the same idea to the $n$-variable case. I have tried extending it to three variables, but it is difficult to argue why exactly $27$ rectangular parallelepipeds (of sides $a, b, c$) fit in the cube (of side $a + b + c$), though I can see that it is so. Any suggestions?
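The picture rests on the algebraic identity $(a+b)^2 = 4ab + (b-a)^2$, which can be confirmed numerically (a sketch, not part of the original answer):

```python
import random

random.seed(1)

# Square of side a+b = four a-by-b rectangles + the central (b-a) square.
for _ in range(1000):
    a = random.uniform(0.0, 10.0)
    b = random.uniform(0.0, 10.0)
    assert abs((a + b) ** 2 - (4 * a * b + (b - a) ** 2)) < 1e-9

print("identity verified")
```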

The Answer 3

33 people think this answer is useful

As requested by dani_s, I will give the thermodynamic proof of the AM-GM inequality. This is certainly an example of an original proof, although you might argue about whether or not it’s rigorous.

Let’s start with a list of numbers $x_i$ for which we want to prove the inequality. Take $n$ identical heat reservoirs with the same heat capacity $c$, with reservoir $i$ at initial temperature $x_i$. Bring those reservoirs in contact with each other so that the system evolves to an equilibrium temperature $A$.

The first law of thermodynamics (conservation of energy) implies that $A$ equals the arithmetic mean of the $x_i$, which we denote AM.

The second law of thermodynamics states that the entropy increases until the equilibrium is reached, where the entropy has a maximum. The corresponding formula of change in entropy is
$$\Delta S=c \ln{\frac{T}{T_0}}$$
where $c$ is the heat capacity, $T_0$ the initial temperature and $T$ the end temperature.

In our case $T_i=A$ for all $i$ and $T_{0,i}=x_i$. The total entropy does not decrease, and therefore
$$\sum_{i=1}^n c \ln\frac{A}{x_i} \geq 0$$

By writing the sum of logarithms as a logarithm of a product, we recognize the geometric mean. Therefore (since $A=AM$):
$$\frac{AM^n}{GM^n} \geq 1$$
This proves the AM-GM inequality.
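A small numerical illustration of this argument (a sketch; the function name and units are my own):

```python
import math
import random

random.seed(2)

def entropy_change(temps, c=1.0):
    """Total entropy change when n identical reservoirs of heat capacity c,
    at the given initial temperatures, equilibrate with each other."""
    A = sum(temps) / len(temps)  # equilibrium temperature = AM (first law)
    return sum(c * math.log(A / t) for t in temps)

# Second law: the total entropy change is non-negative, i.e. AM >= GM.
for _ in range(200):
    temps = [random.uniform(1.0, 500.0) for _ in range(6)]
    assert entropy_change(temps) >= -1e-9

print("second law holds on all samples")
```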

The Answer 4

30 people think this answer is useful

LEMMA. If $a_1,\dots,a_n$ are positive numbers whose product is equal to $1,$ then $a_1+\dots+a_n\ge n,$ with equality only when $a_1=\cdots=a_n=1.$

Proof by induction on $n.$ The case $n=1$ being trivial, we suppose $n\ge2.$ Let $a_1,\ a_2,\ \dots,\ a_n$ be positive numbers with $a_1a_2\dots a_n=1.$ Without loss of generality, we can assume that $a_1=\max\{a_1,\dots,a_n\}\ge1$ and $a_2=\min\{a_1,\dots,a_n\}\le1.$ Thus $(a_1-1)(1-a_2)\ge0,$ that is,
$$a_1+a_2\ge1+a_1a_2,\tag1$$
with equality only when $a_1=1$ or $a_2=1.$
Since $a_1a_2,\ a_3,\ a_4,\ \dots,\ a_n$ are $n-1$ positive numbers with product equal to $1,$ by the inductive hypothesis we have
$$a_1a_2+a_3+\cdots+a_n\ge n-1.\tag2$$

Adding the inequalities (1) and (2) and cancelling the term $a_1a_2,$ we get
$$a_1+a_2+a_3+\cdots+a_n\ge n.\tag3$$
If equality holds in (3) then we must have equality in (1), that is, $a_1=1$ or $a_2=1,$ whence $a_1=\dots=a_n=1.$

THEOREM. If $x_1,\ x_2,\ \dots,\ x_n$ are positive numbers, then
$$\frac{x_1+x_2+\cdots+x_n}n\ge(x_1x_2\cdots x_n)^{\frac1n},$$
with equality only when $x_1=x_2=\cdots=x_n.$

Proof. Let $g=(x_1x_2\cdots x_n)^{\frac1n}.$ According to the lemma we have
$$\frac{x_1}g+\frac{x_2}g+\cdots+\frac{x_n}g\ge n,$$
that is,
$$\frac{x_1+x_2+\cdots+x_n}n\ge g=(x_1x_2\cdots x_n)^{\frac1n}.$$
Equality holds only if $\frac{x_1}g=\frac{x_2}g=\dots=\frac{x_n}g=1,$ that is, $x_1=x_2=\cdots=x_n.$
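The reduction in this proof, scaling by the geometric mean so the product becomes $1$ and then applying the lemma, can be checked numerically (a sketch; names are mine):

```python
import math
import random

random.seed(3)

for _ in range(200):
    xs = [random.uniform(0.1, 10.0) for _ in range(7)]
    g = math.prod(xs) ** (1 / len(xs))        # geometric mean
    scaled = [x / g for x in xs]              # product of scaled values is 1 ...
    assert abs(math.prod(scaled) - 1) < 1e-9
    assert sum(scaled) >= len(scaled) - 1e-9  # ... so the lemma gives sum >= n

print("lemma verified on all samples")
```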

The Answer 5

27 people think this answer is useful

As $(\sqrt{x_1}-\sqrt{x_2})^2 \geq 0$ we have $$\sqrt{x_1 \cdot x_2} \leq \frac{x_1+x_2}{2}.$$ Applying this inequality twice, we get $$(x_1 x_2 x_3 x_4)^{\frac{1}{4}} \leq \frac{\sqrt{x_1 x_2}+\sqrt{x_3 x_4}}{2} \leq \frac{x_1+x_2+x_3+x_4}{4}.$$ By induction, it is not difficult to see that $$(x_1 \cdots x_{2^k})^{\frac{1}{2^k}} \leq \frac{x_1+\ldots+x_{2^k}}{2^k} \tag{1}$$ for all $k \geq 1$.

It remains to fill the gaps between the powers of two. So let $x_1,\ldots,x_n$ be arbitrary positive numbers and choose $k$ such that $n\leq 2^k$. We set

$$\alpha_i := \begin{cases} x_i & i \leq n \\ A & n< i \leq 2^k \end{cases}$$

where $A:= \frac{x_1+\ldots+x_n}{n}$. Applying $(1)$ to the $(\alpha_1,\ldots,\alpha_{2^k})$ yields

$$\bigg( x_1 \ldots x_n A^{2^k-n} \bigg)^{\frac{1}{2^k}} \leq \frac{x_1+\ldots+x_n+(2^k-n) A}{2^k} = A.$$

Raising both sides to the power $2^k$ and dividing by $A^{2^k-n}$ gives $x_1 \ldots x_n \leq A^n$, that is,

$$(x_1 \ldots x_n)^{1/n} \leq A = \frac{x_1+\ldots+x_n}{n}.$$
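The padding construction translates directly into code. A sketch (function name my own) that pads a list with its mean up to the next power of two and checks that the mean is unchanged and bounds the geometric mean:

```python
import math
import random

random.seed(4)

def amgm_via_padding(xs):
    """Pad xs with its arithmetic mean A up to a power-of-two length,
    then check GM(padded) <= A, which implies GM(xs) <= AM(xs)."""
    n = len(xs)
    A = sum(xs) / n
    k = 0
    while 2 ** k < n:
        k += 1
    padded = xs + [A] * (2 ** k - n)              # the alpha_i of the answer
    assert abs(sum(padded) / 2 ** k - A) < 1e-9   # padding preserves the mean
    return math.prod(padded) ** (1 / 2 ** k) <= A + 1e-9

for _ in range(100):
    xs = [random.uniform(0.1, 10.0) for _ in range(random.randint(1, 20))]
    assert amgm_via_padding(xs)

print("padding argument verified")
```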

The Answer 6

23 people think this answer is useful

Bernoulli’s Inequality says that for $u\ge-1$ and $0\le r\le1$,
$$(1+u)^r\le1+ru\tag{1}$$
Setting $u=\frac xy-1$ in $(1)$ says that for $x,y\gt0$,
$$\left(\frac xy\right)^r\le(1-r)+r\frac xy\tag{2}$$
If we multiply $(2)$ by $y$, we get
$$x^ry^{1-r}\le rx+(1-r)y\tag{3}$$
Now $(3)$ can be used inductively to get
$$x_1^{r_1}x_2^{r_2}x_3^{r_3}\dots x_n^{r_n}\le r_1x_1+r_2x_2+r_3x_3+\dots+r_nx_n\tag{4}$$
where $r_1,r_2,r_3,\dots,r_n\ge0$ and $r_1+r_2+r_3+\dots+r_n=1$.

Inductive step:

Suppose that $(4)$ holds for $n$. Then we can use $(3)$, with $r=r_{n+1}$, to get
$$\begin{aligned}
\left(x_1^{r_1}x_2^{r_2}x_3^{r_3}\dots x_n^{r_n}\right)^{1-r_{n+1}}x_{n+1}^{r_{n+1}}
&\le(1-r_{n+1})\,x_1^{r_1}x_2^{r_2}x_3^{r_3}\dots x_n^{r_n}+r_{n+1}x_{n+1}\\
&\le(1-r_{n+1})\left(r_1x_1+r_2x_2+r_3x_3+\dots+r_nx_n\right)+r_{n+1}x_{n+1}
\end{aligned}$$
where the second inequality uses $(4)$. Since $(1-r_{n+1})(r_1+r_2+r_3+\dots+r_n)+r_{n+1}=1$, the exponents on the left, $(1-r_{n+1})r_1,\dots,(1-r_{n+1})r_n,r_{n+1}$, again sum to $1$, completing the induction.

The Answer 7

13 people think this answer is useful

There is a more direct induction proof.

We can assume that $0<a_1\leq \dots \leq a_n$ holds. Let us set $A=\frac{a_1+ \dots +a_n}{n}$.
First notice that $(a_n-A)(A-a_1)\geq 0$, which (after expanding and dividing by $A$) can be rewritten as $\frac{a_1a_n}{A}\leq a_1+a_n-A$.
Now we assume that the property holds for $n-1$; we will apply it to the positive numbers $a_2,\ldots , a_{n-1},\,a_1+a_n-A$. This gives: $$\left(\frac{\prod_{k=1}^na_k}{A}\right)^{1/(n-1)}\leq \left[\left(\prod _{k=2}^{n-1}a_k\right)(a_1+a_n-A)\right]^{1/(n-1)}\leq \frac{a_1+\dots +a_n-A}{n-1}= A.$$

So : $$\prod_{k=1}^na_k\leq (A^{1+1/(n-1)})^{n-1}=A^n.$$
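The key smoothing inequality $(a_n - A)(A - a_1) \ge 0$ is easy to confirm on random sorted data (a sketch, not from the answer):

```python
import random

random.seed(5)

for _ in range(500):
    a = sorted(random.uniform(0.1, 10.0) for _ in range(6))
    A = sum(a) / len(a)
    # a_1 <= A <= a_n, so (a_n - A)(A - a_1) >= 0, equivalently:
    assert a[0] * a[-1] / A <= a[0] + a[-1] - A + 1e-9

print("key inequality verified")
```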

The Answer 8

10 people think this answer is useful

This is a proof using the Buffalo Way (see this link for what the Buffalo Way is). I could not find this proof written down anywhere; I just heard it is possible, so I tried it. Hopefully there are no mistakes, but even if there are, I think they will be easy to correct; the main idea should still be right.

We proceed by strong induction. The base case holds trivially from the inequality $(x_1 - x_2)^2 \geq 0$. Assume that AM-GM holds for all $k$ such that $1 < k < n$. Then we wish to prove that $$x_1^n + x_2^n + \cdots + x_n^n - nx_1x_2\cdots x_n \geq 0$$ This inequality is clearly symmetric, hence we may WLOG suppose $x_1 = \min\{x_1,x_2,\cdots,x_n\} = x$; therefore there exist $y_1,y_2,\cdots,y_{n - 1} \geq 0$ such that $x_1 = x$ and $x_i = x + y_{i - 1}$ for all $i \in \{2,\cdots,n\}$. Therefore the inequality can be rewritten as $$x^n + (x + y_1)^n + (x + y_2)^n + \cdots + (x + y_{n - 1})^n - nx(x + y_1)(x + y_2) \cdots (x + y_{n - 1}) \geq 0$$

This is a polynomial in $x$, and expanding it we find that the coefficient of $x^{n - k}$, where $1 < k < n$, is $$\binom{n}{k}p_k(y_1,y_2,\cdots,y_{n - 1}) - ne_k(y_1,y_2,\cdots,y_{n - 1})$$ where $p_k$ is the $k$-th power sum and $e_k$ is the $k$-th elementary symmetric polynomial. So it suffices to prove that $$\binom{n}{k}p_k(y_1,y_2,\cdots,y_{n - 1}) - ne_k(y_1,y_2,\cdots,y_{n - 1}) \geq 0$$ But by the inductive hypothesis (AM-GM for $k$ numbers, applied to $y_{i_1}^k,\dots,y_{i_k}^k$) we get $$\frac{n}{k} \cdot \left(y^k_{i_1} + y^k_{i_2} + \cdots + y^k_{i_k}\right) - ny_{i_1}y_{i_2} \cdots y_{i_k} \geq 0,$$ where $i_1,i_2,\cdots,i_k \in \{1,2,3,\cdots,n - 1\}$ are pairwise distinct. Summing this over all the possible combinations of indices $i_1,i_2, \cdots, i_k$, we in fact get the stronger inequality $$\frac{n}{k} \cdot \binom{n - 2}{k - 1} p_k(y_1,y_2,\cdots,y_{n - 1}) - ne_k(y_1,y_2,\cdots,y_{n - 1}) \geq 0,$$ which is stronger since $\binom{n}{k}\ge\frac{n}{k}\binom{n-2}{k-1}$. Also, clearly the coefficients of $x^n$ and $x^{n - 1}$ are $0$, and the coefficient of $x^0$ is just $p_n(y_1,y_2,\cdots,y_{n - 1})$.
Hence all the coefficients of our polynomial are non-negative, therefore the polynomial is non-negative; thus the inductive step is proved and the whole proof is finished.

The Answer 9

10 people think this answer is useful

Draw graph of $y= \ln x$.

Choose $n$ points $(x_i, \ln x_i)$ on it. A polygon of $n$ sides is thus formed, whose centroid is $$G \left (\frac{x_1+x_2+\cdots+x_n}n, \frac{y_1 + \cdots + y_n}n\right ) = \left (\frac{x_1+x_2+\cdots+x_n}n, \frac{\ln x_1 + \ln x_2 + \cdots + \ln x_n}n\right )$$
From the centroid we drop a perpendicular to the $x$ axis; it meets the graph of $y= \ln x$ at $C$ and the $x$ axis at $D$.

Now the coordinates of the point $C$ are $\left (\frac{x_1+x_2+\cdots+x_n}n, \ln \left (\frac{x_1+x_2+\cdots+x_n}n\right )\right )$.
Now $$CD= \ln {\left (\frac{x_1+x_2+\cdots+x_n}n\right )}$$ (the $y$ coordinate of $C$), and
$$GD = \frac{\ln x_1 + \ln x_2 + \cdots + \ln x_n}n = \frac{\ln (x_1 x_2 \cdots x_n)} n$$ (the $y$ coordinate of $G$).
Now $CD> GD$ (the centroid of a convex polygon always lies inside the polygon, and since $\ln$ is concave the polygon lies below the curve), so
$$\ln {\left (\frac{x_1+x_2+\cdots+x_n}n\right )} > \frac{\ln (x_1 x_2 \cdots x_n)} n = \ln \left ((x_1 x_2 \cdots x_n)^{\frac 1n} \right )$$ Since $\ln$ is increasing, we may exponentiate both sides and get $$\frac{x_1+x_2+\cdots+x_n}n > (x_1 x_2 \cdots x_n)^{1/n}$$
It is also possible that the selected $n$ points all coincide; then the centroid is that point itself, and $\frac{x_1+x_2+\cdots+x_n}n = (x_1 x_2 \cdots x_n)^{1/n}$.
This proves that $\frac{x_1+x_2+\cdots+x_n}n \geq (x_1 x_2 \cdots x_n)^{1/n}$, where equality holds when $x_1=x_2=\cdots= x_n$.

The Answer 10

10 people think this answer is useful

Let $S(n)$ denote the statement
$$S(n):\; \frac{x_1+x_2+\cdots+x_n}{n}\geq\sqrt[n]{x_1x_2\ldots x_n},\quad n\in\mathbb{N}.$$

Base step ($n=1$): The statement $S(1)$ says that $\frac{x_1}{1}\geq\sqrt[1]{x_1}$, which is true because $x_1 = x_1$.

Base step ($n=2$): The statement $S(2)$ says that
$$\frac{x_1+x_2}{2}\geq\sqrt{x_1x_2},$$
which is true because
$$a\leq x \leq b \longleftrightarrow a+b\geq x+\frac{ab}{x}, \qquad 0<a\leq b,\; x>0\tag{1*}$$
(take $a=x_1,$ $b=x_2$ and $x=\sqrt{x_1x_2}$ in $(1^*)$).

Inductive step ($S(k)\to S(k+1)$): Fix some $k\geq 1$, where $k\in\mathbb{N}$. Assume that
$$S(k):\; \frac{x_1+x_2+\cdots+x_k}{k} \geq \sqrt[k]{x_1x_2\ldots x_k}$$

holds. To be proved is that
$$\frac{x_1+x_2+\cdots+x_{k+1}}{k+1}\geq\sqrt[k+1]{x_1x_2\ldots x_{k+1}}$$

follows. If $x_1 = x_2 = \cdots = x_{k+1}$, then the proof is done. If not, let $x_1x_2\ldots x_{k+1} = \rho^{k+1}$. Without loss of generality, assume that $x_1\leq x_i\leq x_2$ for all $i$; since the $x_i$ are not all equal, it follows that $x_1 < \rho < x_2$. Beginning with the left side of $S(k+1)$ [excluding the $k+1$ divisor],
$$\begin{aligned}
x_1+x_2+\cdots+x_{k+1} &\,\geq\, \rho+\frac{x_1x_2}{\rho}+x_3+\cdots+x_k+x_{k+1} \quad\text{(by $(1^*)$)}\\
&\,\geq\, \rho+k\cdot\left(\sqrt[k]{\frac{x_1x_2}{\rho}x_3\ldots x_{k+1}}\right) \quad\text{(by $S(k)$)}\\
&\,=\, \rho+k\rho \,=\, (k+1)\rho,
\end{aligned}$$
one arrives at the right side of $S(k+1)$ [with a $k+1$ multiple], thereby showing that $S(k+1)$ is also true, completing the inductive step. Thus, by mathematical induction, $S(n)$ is true for all $n\geq 1$, where $n\in\mathbb{N}$.

The Answer 11

9 people think this answer is useful

This is not an original or different proof, but since any question asking for a proof of this inequality gets closed as a duplicate of this one … I feel this answer should be here too.

First, let me prove the following result: of two pairs of numbers with the same sum, the pair whose numbers are closer together has the greater product.

Suppose
$$ t + u = x + y $$
and let $t,u$ be further apart from each other than $x,y$; that is (taking $t \ge u$ and $x \ge y$),
$$ t - u \gt x - y $$

Consider the following identity, which writes the product of two numbers in terms of their sum and difference:
$$4ab = (a+b)^{2} - (a-b)^{2}$$

Since we are subtracting a positive quantity, and the two sums are the same, the greater product belongs to the pair with the lesser difference. From this, we clearly see that
$$xy \gt tu$$

Now, let’s consider a set of numbers

$$a, b, c {\dots} n$$

Let us choose its arithmetic mean, $M$ given by

$$M = \frac{a+b+c+{\dots}n}{n}$$

The product of this series is

$$P = a.b.c.{\dots}n$$

Now, we make another series of numbers
$$a_{1}, b_{1}, c_{1} {\dots} n_{1}$$

We choose this series in such a way that all elements are equal to their old counterparts, but we make one change. We choose one number less than $M$ (say $a$) and replace it by $a_{1} = M$.

However, we’d like to preserve the sum and the $AM$ of this series, so we choose a number greater than $M$, say $b$, and reduce its value to $b_{1}$ so that $a_{1} + b_{1} = a + b$. Since $a_{1}$ has been increased relative to $a$, $b_{1}$ must be decreased. All the other values remain as their older counterparts.

$c_{1} = c, d_{1} = d, {\dots} n_{1} = n$

Now we observe,

$P_{1} = a_{1}.b_{1}.c_{1}{\dots}n_{1}$

Since the product of the terms $c_{1}d_{1}{\dots}n_{1} = cd{\dots}n$, and $Mb_{1} \gt ab$,

$$\implies P_{1} \gt P$$

Now, we construct another series of terms, $a_{2}, b_{2},c_{2}{\dots}n_{2}$, in the same way, where all the terms are equal to their counterparts in the previous series. We write $b_{2} = M$, and adjust the value of another term not yet equal to $M$ (say $c_{2}$) so that the sum $b_{2}+c_{2} = b_{1} + c_{1}$ is preserved, but $b_{2}c_{2} \gt b_{1}c_{1}$, since $b_{2},c_{2}$ are closer together.

Now, we observe the product of this series.

$$P_{2}= a_{2}b_{2}c_{2}{\dots}n_{2}$$
$$\implies P_{2} = M^{2}c_{2}{\dots}n_{2}$$

So, by the similar logic, $$P_{2} \gt P_{1}$$

We go on constructing series in this way, making a new element equal to $M$ each time and increasing the product, until we get

$$P_{n} \gt {\dots} \gt P_{2} \gt P_{1} \gt P$$
$$\implies P_{n} = M^{n} \gt P$$
$$\implies ( {\frac{a+b+c+\dots+n}{n}})^{n} \gt {a.b.c.{\dots}n}$$

But both the $L.H.S.$ and the $R.H.S.$ are positive quantities, so we may take $n$-th roots:

$$\implies \frac{a+b+c\dots+n}{n} \gt {(a.b.c.{\dots}n)}^{\frac{1}{n}}$$

And, voilà!

$$\implies A.M. \gt G.M.$$

(If all the terms were equal to $M$ to begin with, no replacement can be made and $A.M. = G.M.$; together this gives $A.M. \ge G.M.$)
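The replacement process described above can be run explicitly. The sketch below (my own implementation of this idea; the names are not from the answer) repeatedly sets the smallest term to the mean $M$ while compensating with the largest, checking at each step that the sum is preserved and the product does not decrease:

```python
import math

def smooth_to_mean(vals):
    """Repeatedly replace the smallest term by the mean M, compensating
    with the largest term so the sum is preserved; the product never
    decreases, and after at most n-1 steps every term equals M."""
    xs = list(vals)
    M = sum(xs) / len(xs)
    prod = math.prod(xs)
    for _ in range(len(xs) - 1):
        i = min(range(len(xs)), key=lambda k: xs[k])  # a term <= M
        j = max(range(len(xs)), key=lambda k: xs[k])  # a term >= M
        if i == j:                                    # all terms already M
            break
        xs[i], xs[j] = M, xs[i] + xs[j] - M           # sum unchanged
        new_prod = math.prod(xs)
        assert new_prod >= prod - 1e-9                # product went up
        prod = new_prod
    return prod                                       # equals M ** n

vals = [1.0, 2.0, 3.0, 10.0]
M = sum(vals) / len(vals)                             # M = 4.0
assert abs(smooth_to_mean(vals) - M ** len(vals)) < 1e-6
assert math.prod(vals) <= M ** len(vals)              # i.e. GM <= AM
print("smoothing argument verified")
```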

The Answer 12

8 people think this answer is useful


In the diagrams, the circle is tangential to the horizontal line at $C$ and intersects the vertical line at $A$ and $B$, while the lines themselves intersect at $P$. In the figure on the left, let $a = PA$, $b = PB$, and $c = PC$. By the power of the point $P$, we know $c^2 = ab$ or $c = \sqrt{ab}$.

Then, we shift the circle to the right so that it is tangential to both lines. The circle being a symmetric figure, we know the new $PA$ is $\frac{a + b}{2}$ and therefore $PC$ is also $\frac{a + b}{2}$. But as we moved the circle to the right, the new $PC \geq c$ and hence
$$\frac{a + b}{2} \geq \sqrt{ab}$$
(Note that equality occurs when $A$ and $B$ are the same in the figure on the left.)

I am not sure about this, but I think the proof can be extended to the AM-GM inequality with 3 variables if we consider a sphere with one tangential plane and one intersecting plane, and we apply the above proof to each ‘slice’… by extension, one could consider the $n$-dimensional sphere intersecting with $(n-1)$-dimensional planes, but as I said, I am very unsure about this.

(The credit for this proof goes to Sujay Kazi, a friend at a math club I attend. He recently presented this proof and I cannot recall seeing it in the usual lists of AM-GM proofs, so I am including it here.)

The Answer 13

7 people think this answer is useful

Just consider a probabilistic approach of the proof using the convexity of the function $-\log x$
and using Jensen’s inequality.

Let $\{a_1,a_2,a_3,\ldots,a_n\}$ be a set of positive numbers. Consider a distribution for a random variable $X$ placing weight $1/n$ on each of these numbers. Then the mean of the random variable $X$ is the arithmetic mean (AM), $$E(X)=n^{-1}\sum_{i=1}^n a_i$$ Then, since $-\log x$ is a convex function, we have by Jensen’s inequality that

$$-\log\left(\frac {\sum_{i=1}^n a_i}{n}\right)\le E(-\log X)=-\frac{1}{n}\left(\sum_{i=1}^n \log a_i\right)=-\log(a_1a_2a_3\cdots a_n)^{1/n}$$
or equivalently $$\log\left(\frac {\sum_{i=1}^n a_i}{n}\right)\ge \log(a_1a_2a_3\cdots a_n)^{1/n}$$
and hence we can see that $$(a_1a_2a_3\cdots a_n)^{1/n}\le \frac{\sum_{i=1}^n a_i}{n}$$
As we can see, the left-hand side of this inequality is the geometric mean and the right-hand side is the arithmetic mean of the set of positive numbers.
Note: if we replace $a_i$ by $1/a_i$, we get the relationship between the harmonic mean, the geometric mean and the arithmetic mean.

The Answer 14

6 people think this answer is useful

Applying the Log sum inequality (direct consequence of Jensen inequality)

$$ \sum a_i \log \frac{ b_i}{a_i} \le a \log \frac{b}{a}$$
where $a=\sum a_i$ and $b=\sum b_i$, $a_i\ge 0$, $b_i\ge 0$; setting $b_i=x_i/n$ and $a_i=1/n$ we get

$$ \frac{1}{n}\sum \log x_i \le \log \left( \frac{\sum x_i}{n}\right) $$

that is,

$$\log\left (\frac{x_1+ \ldots + x_n}{n} \right) \geq \frac{ \log x_1 +\log x_2 +\cdots + \log x_n}{n}$$

Exponentiating both sides gives AM-GM.

The Answer 15

5 people think this answer is useful


Using $\color{blue}{\text{Forward-Backward Induction/Cauchy Induction}}$,

Our base case is $n=2$ as $n=1$ is trivial.

Base Case:
$$P(2):(\sqrt{x_1}-\sqrt{x_2})^2\geq0\implies x_1+x_2-2\sqrt{x_1x_2}\geq0\\
\implies x_1+x_2\geq2\sqrt{x_1x_2}\implies \frac{x_1+x_2}{2}\geq\sqrt{x_1x_2}$$
$P(2)$ is true.

Inductive Step:

1.) Forward Part:

Let the inequality hold for $n=k$, i.e. $P(k)$ is true. Splitting $2k$ numbers into two groups of $k$ and applying $P(k)$ to each group, then $P(2)$ to the two resulting means, gives
$$\frac{x_1+\cdots+x_{2k}}{2k}=\frac{\frac{x_1+\cdots+x_k}{k}+\frac{x_{k+1}+\cdots+x_{2k}}{k}}{2}\geq\sqrt{\sqrt[k]{x_1\cdots x_k}\,\sqrt[k]{x_{k+1}\cdots x_{2k}}}=\sqrt[2k]{x_1\cdots x_{2k}}$$
$P(2k)$ is true whenever $P(k)$ is true.

2.) Backward Part:

Substitute $x_k=\frac{x_1+x_2+\cdots+x_{k-1}}{k-1}$ in $P(k)$; the mean on the left is then unchanged, so
$$\bigg(\frac{x_1+x_2+\cdots+x_{k-1}}{k-1}\bigg)^k\geq x_1x_2\cdots x_{k-1}\cdot\frac{x_1+x_2+\cdots+x_{k-1}}{k-1}$$
$$\frac{x_1+x_2+\cdots+x_{k-1}}{k-1}\geq \sqrt[k-1]{x_1x_2\cdots x_{k-1}}$$
$P(k-1)$ is true whenever $P(k)$ is true.

$\implies$ By the forward-backward induction AM-GM inequality is true for all $n$.

The Answer 16

3 people think this answer is useful

This is a plausibility argument rather than a complete proof. Without loss of generality we may assume that $x_1,x_2,\dots,x_n\in[0,1].$ Let $p_i=x_i^{1/n}\in[0,1].$

Consider the following variant of Russian Roulette. You are faced with a panel of $n$ buttons. If you press the $i^{\text{th}}$ button, then with probability $1-p_i$ you will get a fatal electric shock (so a single press of button $i$ is survived with probability $p_i$). You are required to press a button $n$ times. The probabilities $p_1,\dots,p_n$ are known to you, but the buttons are not labeled; you don’t know which is which. You can choose between two strategies.

I. Press each button once.

II. Choose a button at random and press it $n$ times.

It is INTUITIVELY OBVIOUS that strategy II is better: you have a better chance of survival pressing the same button that you already pressed and lived, than pressing an untried button. (Prove this rigorously and you have a proof of the AM-GM inequality.)

Now, the probability of survival with strategy I is
$$p_1p_2\cdots p_n=(x_1x_2\cdots x_n)^{1/n}$$
and with strategy II it’s
$$\frac{p_1^n+p_2^n+\cdots+p_n^n}n=\frac{x_1+x_2+\cdots+x_n}n,$$
so the fact that strategy II dominates strategy I is expressed by the AM-GM inequality
$$(x_1x_2\cdots x_n)^{1/n}\le\frac{x_1+x_2+\cdots+x_n}n.$$
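One can at least confirm numerically that strategy II dominates strategy I for every choice of survival probabilities (a sketch; the function names are mine):

```python
import random

random.seed(6)

def survival_one_press_each(p):
    """Strategy I: press each of the n buttons once; p[i] is the
    probability of surviving a single press of button i."""
    prob = 1.0
    for q in p:
        prob *= q
    return prob

def survival_same_button(p):
    """Strategy II: pick a button uniformly at random, press it n times."""
    n = len(p)
    return sum(q ** n for q in p) / n

for _ in range(500):
    p = [random.random() for _ in range(5)]
    assert survival_one_press_each(p) <= survival_same_button(p) + 1e-12

print("strategy II dominates on all samples")
```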

The Answer 17

1 person thinks this answer is useful

If $a_1,\dots,a_n>0$ and $0<p\le q$, then by the Hölder inequality (the power mean inequality):
$$\left(\frac{1}{n}\sum_{k=1}^na_k^p\right)^{\frac{1}{p}}\le\left(\frac{1}{n}\sum_{k=1}^na_k^q\right)^{\frac{1}{q}}$$
Also, using de l’Hôpital’s rule we get:
$$\lim_{p\rightarrow 0^+}\left(\frac{1}{n}\sum_{k=1}^na_k^p\right)^{\frac{1}{p}}=\exp\left(\frac{1}{n}\sum_{k=1}^n \log(a_k)\right)=\left(\prod_{k=1}^na_k\right)^{\frac{1}{n}}$$
Combining these, $$\forall q>0, \left(\prod_{k=1}^na_k\right)^{\frac{1}{n}}\le\left(\frac{1}{n}\sum_{k=1}^na_k^q\right)^{\frac{1}{q}}.$$
In particular for $q=1$ we get AM-GM.

The Answer 18

1 person thinks this answer is useful

An alternate proof for the case $n=3$: it is equivalent to show that $a^3+b^3+c^3-3abc \geq 0$ for positive reals.

$$\begin{aligned}
a^3+b^3+c^3 - 3abc & = c^3 + (a+b)^3 -3ab(a+b)-3abc \\
& = c^3 + (a+b)^3 - 3ab(a+b+c) \\
& = (a+b+c)^3 - 3c(a+b)(a+b+c) - 3ab(a+b+c) \\
& = (a+b+c)^3 - 3(ab+ac+bc)(a+b+c) \\
& = (a+b+c) \cdot \left((a+b+c)^2 - 3(ab+ac+bc)\right) \\
& = (a+b+c) \cdot \left((a^2+b^2+c^2) - (ab+ac+bc)\right) \\
& = (a+b+c) \cdot \cfrac{(a-b)^2+(b-c)^2+(c-a)^2}{2}
\end{aligned}$$

And this last expression is obviously non-negative, since each factor is.

(P.S.: we can use the same technique of transfinite induction from here too!)

The Answer 19

0 people think this answer is useful

Just came up with this proof that uses only the Cauchy-Schwarz inequality. Not sure if it’s already known.

We first show the following result for polynomials.

Let $p$ be a polynomial with non-negative coefficients.
Then $\sqrt{p(a) \, p(b)} \geq p(\sqrt{ab})$
for all $a, b \geq 0$.

Proof. Let $p(x) = c_n x^n + c_{n-1} x^{n-1} + \cdots + c_0$. Then let $\vec{v} := (\sqrt{c_n a^n}, \sqrt{c_{n-1} a^{n-1}}, \dots, \sqrt{c_0 a^0})$,
and similarly define $\vec{w} = (\sqrt{c_n b^n}, \sqrt{c_{n-1} b^{n-1}}, \dots, \sqrt{c_0 b^0})$. The result is immediate from Cauchy-Schwarz. $\square$

We use this result to show AM-GM for two variables.

Let $a, b \geq 0$. Then $\frac{a+b}{2} \geq \sqrt{ab}$.

Proof. Let $P_n$ be the $n^\text{th}$-order Taylor polynomial of $e^x$. By the previous result, we have
$$\sqrt{P_n(a) \, P_n(b)} \geq P_n(\sqrt{ab})$$
for all $n \geq 0$. Taking $n \to \infty$ gives $\sqrt{e^a e^b} \geq e^{\sqrt{ab}}$, i.e. $e^{(a+b)/2} \geq e^{\sqrt{ab}}$, and the desired result follows since $e^x$ is increasing. $\square$

I am not sure whether this result can be extended to multiple variables.
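A numerical check of the polynomial lemma with the Taylor polynomials of $e^x$ (a sketch; the helper name is mine):

```python
import math

def taylor_exp(x, n):
    """n-th order Taylor polynomial of e^x; all coefficients 1/k! >= 0."""
    return sum(x ** k / math.factorial(k) for k in range(n + 1))

a, b = 3.0, 7.0
for n in range(30):
    lhs = math.sqrt(taylor_exp(a, n) * taylor_exp(b, n))
    rhs = taylor_exp(math.sqrt(a * b), n)
    assert lhs >= rhs - 1e-9      # sqrt(P_n(a) P_n(b)) >= P_n(sqrt(ab))

# In the limit this becomes e^{(a+b)/2} >= e^{sqrt(ab)}, i.e. AM >= GM.
assert (a + b) / 2 >= math.sqrt(a * b)
print("polynomial lemma verified")
```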

The Answer 20

0 people think this answer is useful

I think we can use Korovkin’s inequality :

Korovkin’s inequality :

Let $x_1,x_2,\cdots,x_n$ be positive numbers then :
$$\frac{x_1}{x_2}+\frac{x_2}{x_3}+\cdots+\frac{x_{n-1}}{x_n}+\frac{x_n}{x_1}\geq n$$

Proof: By induction on $n$. For $n=2$, $\frac{x_1}{x_2}+\frac{x_2}{x_1}\geq 2$ since $(x_1-x_2)^2\geq 0$. For the inductive step, assume the result for $n-1$ numbers and let $x_n=\max_i x_i$; then
$$\left(\frac{x_{n-1}}{x_n}+\frac{x_n}{x_1}\right)-\frac{x_{n-1}}{x_1}-1=\frac{(x_n-x_1)(x_n-x_{n-1})}{x_1x_n}\geq 0,$$
so the cyclic sum for $x_1,\dots,x_n$ exceeds the cyclic sum for $x_1,\dots,x_{n-1}$ by at least $1$, and the latter is at least $n-1$ by hypothesis.
Using Korovkin’s inequality and putting $y_i=\frac{x_{i}}{x_{i+1}}$ (cyclically, so $y_n=\frac{x_n}{x_1}$), we have:

$$y_1+y_2+\cdots+y_{n-1}+y_{n}\geq n$$

where $y_1y_2\cdots y_{n-1}y_n=1$. (Conversely, any positive numbers $y_i$ with product $1$ arise this way, e.g. from the partial products $x_i=y_iy_{i+1}\cdots y_n$.)

Putting $a_i\alpha=y_i$, where $\alpha$ is a positive number, we have:

$$a_1+a_2+\cdots+a_{n-1}+a_{n}\geq \frac{n}{\alpha}$$

Where $a_1a_2\cdots a_{n-1}a_n=\frac{1}{\alpha^n}$

Since $\frac{1}{\alpha}=\left(\frac{1}{\alpha^n}\right)^{\frac{1}{n}}=(a_1a_2\cdots a_{n-1}a_n)^{\frac{1}{n}}$, we get:

$$a_1+a_2+\cdots+a_{n-1}+a_{n}\geq n(a_1a_2\cdots a_{n-1}a_n)^{\frac{1}{n}}$$
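Korovkin's inequality itself is easy to test numerically (a sketch, not from the answer):

```python
import random

random.seed(7)

# Cyclic sum x1/x2 + x2/x3 + ... + x_{n-1}/x_n + x_n/x1 >= n.
for _ in range(500):
    x = [random.uniform(0.1, 10.0) for _ in range(6)]
    n = len(x)
    s = sum(x[i] / x[(i + 1) % n] for i in range(n))
    assert s >= n - 1e-9

print("Korovkin's inequality verified")
```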


The Answer 21

0 people think this answer is useful

By analyzing rectangles we discover the following identities, true for all real numbers $u$ and $w$,

$\tag 1 u(u+w) = \frac{u^2}{2} + \frac{u}{2} (u + w) + \frac{uw}{2}$

$\tag 2 \frac{u^2+(u+w)^2}{2} – u(u+w) = \frac{w^2}{2}$

For the impetus for deriving these identities, see this answer to

$\quad$ Intuitive understanding of $\sqrt{a b} \leq \frac{a+b}{2}, \: a,b \ge 0$

The second identity immediately yields the AM-GM inequality for two variables: writing $a=u^2$ and $b=(u+w)^2$ with $u,w\ge0$, it says $\frac{a+b}{2}-\sqrt{ab}=\frac{w^2}{2}\geq 0$.

It is important that any proof keeps track of the “equality holds if and only if …” statement for the AM-GM inequality.

Although two other answers discuss forward-backward induction, I must insist that nothing compares to the ‘algebraic crispness’ found in the Wikipedia proof’s subcase where $n \lt 2^k$. It is this subcase (the ‘go backward’ part of the induction) that is the most challenging, and I copy it directly from the source in the next section.

The subcase where $n \lt 2^k$

If $n$ is not a natural power of $2$, then it is certainly less than some natural power of $2$, since the sequence $2, 4, 8, . . . , 2^k, . . .$ is unbounded above. Therefore, without loss of generality, let $m$ be some natural power of $2$ that is greater than $n$.

So, if we have $n$ terms, then let us denote their arithmetic mean by $\alpha$, and expand our list of terms thus:
$$x_{n+1}=x_{n+2}=\cdots =x_{m}=\alpha .$$

We then have:

$$\begin{aligned}\alpha &={\frac {x_{1}+x_{2}+\cdots +x_{n}}{n}}\\[6pt]&={\frac {{\frac {m}{n}}\left(x_{1}+x_{2}+\cdots +x_{n}\right)}{m}}\\[6pt]&={\frac {x_{1}+x_{2}+\cdots +x_{n}+{\frac {m-n}{n}}\left(x_{1}+x_{2}+\cdots +x_{n}\right)}{m}}\\[6pt]&={\frac {x_{1}+x_{2}+\cdots +x_{n}+\left(m-n\right)\alpha }{m}}\\[6pt]&={\frac {x_{1}+x_{2}+\cdots +x_{n}+x_{n+1}+\cdots +x_{m}}{m}}\\[6pt]&>{\sqrt[{m}]{x_{1}x_{2}\cdots x_{n}x_{n+1}\cdots x_{m}}}\\[6pt]&={\sqrt[{m}]{x_{1}x_{2}\cdots x_{n}\alpha ^{m-n}}}\,,\end{aligned}$$


so

$$\alpha ^{m}>x_{1}x_{2}\cdots x_{n}\alpha ^{m-n}$$

and thus

$$\alpha >{\sqrt[{n}]{x_{1}x_{2}\cdots x_{n}}}$$

as desired.

The Answer 22

0 people think this answer is useful

AM-GM inequality says that for any $ a_1, \dots , a_n > 0 $, we have $ \dfrac{a_1 + \cdots + a_n}{n} \geq \sqrt[n]{a_1 \cdots a_n} $ with equality holding if and only if $ a_1 = \cdots = a_n $.

Here is Cauchy’s Backward Induction proof, but with the backward part slightly modified (and hopefully more intuitive). We’ll first prove the inequality, and then think about when equality holds. For brevity, let $ P_n $ mean “For any $a_1, \dots a_n > 0, \dfrac{a_1 + \cdots + a_n}{n} \geq \sqrt[n]{a_1 \cdots a_n}$ “.

$P_1$ holds trivially.
$P_2$ is true because $ \dfrac{a_1 + a_2}{2} \geq \sqrt{a_1 a_2} $ can be written as $ a_1 + a_2 – 2\sqrt{a_1 a_2} \geq 0 $, which is just $ (\sqrt{a_1} – \sqrt{a_2})^2 \geq 0 $.
$ P_4 $ comes from $ P_2 $, as $ \dfrac{a_1 + a_2 + a_3 + a_4}{4} = \dfrac{\dfrac{a_1 + a_2}{2} + \dfrac{a_3 + a_4}{2}}{2} \geq \dfrac{\sqrt{a_1 a_2} + \sqrt{a_3 a_4}}{2} \geq \sqrt{\sqrt{a_1 a_2}\sqrt{a_3 a_4}} = \sqrt[4]{a_1 \cdots a_4} $.
Similarly $P_8$ comes from $P_4$, $P_{16}$ comes from $P_8$, etc.
In general, if we know a $ P_{2^k} $ holds then $ P_{2^{k+1}} $ must hold as
$ \dfrac{ a_1 + \cdots +a_{2^{k+1}} }{2^{k+1}} = \dfrac{ \dfrac{ a_1 + \cdots a_{2^k} }{2^k} + \dfrac{ a_{2^{k} + 1} + \cdots a_{2^{k+1}} }{2^k} }{2} \geq \dfrac{(a_1 \cdots a_{2^k})^{ \frac{1}{2^k} } + (a_{2^k + 1} \cdots a_{2^{k+1}})^{ \frac{1}{2^k} } }{2} \geq \sqrt{ (a_1 \cdots a_{2^k})^{ \frac{1}{2^k} } (a_{2^k + 1} \cdots a_{2^{k+1}})^{ \frac{1}{2^k} } } = (a_1 \cdots a_{2^{k+1}})^{\frac{1}{2^{k+1}}} $.

So it is clear $P_k$ holds whenever $ k $ is a power of $2$.

It now suffices to show “If a $P_{k+1}$ holds, then $P_k$ must hold”. [Informally, we showed we can jump to powers of $2$, but there are gaps so now we are trying to show we can also take single steps backwards]. So we’ll try to prove this:

Fix any $ a_1, \dots, a_k > 0 $. Now we are to show $ \dfrac{a_1 + \cdots + a_k}{k} \geq (a_1 \cdots a_k)^{\frac{1}{k}} $ , but we have the information $ \dfrac{ a_1 + \cdots + a_k + t }{k+1} \geq (a_1 \cdots a_k t)^{\frac{1}{k+1}} $ for any $ t > 0 $. In order to match the info’s RHS to that of the goal we can try picking a $ t $ such that $ (a_1 \cdots a_k t)^{\frac{1}{k+1}} = (a_1 \cdots a_k)^{\frac{1}{k}} $, i.e we can try setting $ t = (a_1 \cdots a_k)^{\frac{1}{k}} $, and in fact it works :
$ \dfrac{ a_1 + \cdots a_k + (a_1 \cdots a_k)^{\frac{1}{k}} }{k+1} \geq (a_1 \cdots a_k)^{\frac{1}{k}} $, giving the required inequality $ \dfrac{a_1 + \cdots + a_k}{k} \geq (a_1 \cdots a_k)^{\frac{1}{k}} $.
[We could’ve also tried to match info’s LHS to that of the goal, i.e we could’ve tried setting $ t = \dfrac{a_1 + \cdots a_k}{k} $, and even this happens to work !].

So the inequality part is proved. Now we’ll think about when the equality is true. The $P_2$-inequality becomes equality if and only if $ \sqrt{a_1} – \sqrt{a_2} = 0 $, i.e if and only it $ a_1 = a_2 $. The $P_4$-inequality becomes equality if and only if $ \dfrac{a_1 + a_2}{2} = \dfrac{a_3 + a_4}{2}, a_1 = a_2 $ and $ a_3 = a_4 $, i.e. if and only if $ a_1 = \cdots = a_4 $. Proceeding so, it is clear $P_{2^k}$-inequality becomes equality if and only if $ a_1 = \cdots = a_{2^k} $. Now looking at the backwards induction step above, we see that in general equality holds if and only if all the numbers taken are equal.

The Answer 23

0 people think this answer is useful

Another answer. We can prove this alternative result:

If $a_1, \ldots, a_n$ are positive reals such that $a_1 + \dots + a_n = n$, then $a_1 \dots a_n \leq 1$. (AM-GM follows by applying this to $a_i = x_i/A$, where $A$ is the arithmetic mean of the $x_i$.)

We can suppose wlog that $a_1 \leq \dots \leq a_n$.

Notice that we can suppose $a_n \not= 1$, or else we could just solve the same problem for $n-1$.

By the “pigeonhole principle”, we know that $a_1 \leq 1 \leq a_n$. If we replace both of them by their arithmetic mean, the sum stays the same, but the product increases. Indeed, the product “increases” by $(\frac{a_1+a_n}{2})^2-a_1a_n = (\frac{a_1-a_n}{2})^2 \geq 0$, with equality if and only if $a_1=a_n$.

That way, we can always increase the product whenever there are at least two different terms, keeping the sum fixed. The only case where this can’t be done is when all terms are equal; and that is where the maximum value of the product is attained.

(Another possible solution would be to change $a_n$ to $1$ and $a_1$ to $(a_1+a_n)-1$; the sum stays the same but the product “increases” by $a_1+a_n-1-a_1a_n = (1-a_1)(a_n-1) \geq 0$; that way the maximum occurs when all terms are $1$.)

The Answer 24

-2 people think this answer is useful

My idea is to use only the Cauchy-Schwarz inequality, by induction.

Applying the Cauchy-Schwarz inequality to the vectors $(\sqrt{a},\sqrt{b})^T$ and $(\sqrt{b},\sqrt{a})^T$ we get:
$$\sqrt{ab} +\sqrt{ab} \le (a+b)^{1/2}(a+b)^{1/2} $$
that is $$\sqrt{ab} \le \frac{a+b}{2}~~~~~~\color{red}{\text{AM-GM for $n =2$}}$$

Hypothesis of induction: Assume that for all $p\in\{1,2,\cdots, n-1\}$ we have,
$$\color{blue}{(a_1a_2\cdots a_p)^{1/p}\leq \frac{a_1+a_2+\cdots+a_p}{p}}$$
We want to prove that it is true for $n$. We will discuss two cases: $n =2p$ and $n=2p-1$.

If $n =2p$, then we proceed as follows, using the case $n=2$ and the case $n=p$. We have,

$$ \begin{align}\color{blue}{\frac{a_1+a_2+\cdots+a_n}{n} }&= \frac{1}{2}\left(\overbrace{\frac{a_1+a_2+\cdots+a_p}{p}}^a+\overbrace{\frac{a_{p+1}+a_{p+2}+\cdots+a_n}{p}}^b \right) \\
&\ge\sqrt{\left(\frac{a_1+a_2+\cdots+a_p}{p}\right) \left(\frac{a_{p+1}+a_{p+2}+\cdots+a_n}{p}\right)}\\
&\ge\sqrt{(a_1a_2\cdots a_p)^{1/p}(a_{p+1}a_{p+2}\cdots a_n)^{1/p}}\\
&=\sqrt{(a_1a_2\cdots a_n)^{1/p}} = \color{blue}{(a_1a_2\cdots a_n)^{1/n} } ~~~~~~\color{red}{\text{AM-GM for $n =2p$}} \end{align}$$

If $n =2p-1$, we use the even case above applied to $n+1 =2p$ numbers (this is legitimate since it only relies on the cases $2$ and $p$, and $p<n$). Taking $$ a_{n+1} = \frac{a_1+a_2+\cdots+a_n}{n}$$
in the previous case we obtain,

\begin{align}&\frac{a_1+a_2+\cdots+a_{n+1}}{n+1} \ge (a_1a_2\cdots a_{n+1})^{\frac{1}{n+1}}\\
&\Longleftrightarrow\frac{1}{n+1}\left(a_1+a_2+\cdots+a_n+\overbrace{\frac{a_1+a_2+\cdots+a_n}{n}}^{a_{n+1}} \right) \ge \left(a_1a_2\cdots a_{n}\left(\overbrace{\frac{a_1+a_2+\cdots+a_n}{n}}^{a_{n+1}}\right) \right)^{\frac{1}{n+1}}\\
\\&\Longleftrightarrow\frac{a_1+a_2+\cdots+a_n}{n} \ge \left(a_1a_2\cdots a_{n}\left(\frac{a_1+a_2+\cdots+a_n}{n}\right) \right)^{\frac{1}{n+1}}\\
&\Longleftrightarrow\left(\frac{a_1+a_2+\cdots+a_n}{n}\right)^{\frac{n}{n+1}} \ge \left(a_1a_2\cdots a_{n}\right)^{\frac{1}{n+1}}
\\&\Longleftrightarrow \color{blue}{\frac{a_1+a_2+\cdots+a_n}{n} \ge \left(a_1a_2\cdots a_{n}\right)^{\frac{1}{n}}}~~~~\color{red}{\text{AM-GM for $n =2p-1$}}\end{align}
