An introduction to the Weierstrass M-test: Part IV

I think this will be the last post in this series. This time we will look at the special case of power series, using Theorem 5 from the previous part to help us justify term-by-term differentiation.

Notation: Recall that we denote the set $\N \cup \{0\}$ of non-negative integers by $\N_0$.

Recall the statement of our Theorem 5.

Theorem 5 (Term by term differentiation of series)

Let $I$ be a nondegenerate interval in $\R$, and let $M_n$ ($n \in \N_0$) be non-negative real numbers such that $\sum_{n=0}^\infty M_n < \infty\,.$

Let $f_n$ ($n \in \N_0$) be differentiable functions from $I$ to $\R,$ and suppose that, for all $n \in \N_0$ and all $x \in I,$ we have $|f_n'(x)|\leq M_n.$

Suppose further that, for  all $x \in I$, the series $\sum_{n=0}^\infty f_n(x)$ converges.

Define $f:I \to \R$ by

\[ f(x)=\sum_{n=0}^\infty f_n(x)\,.\]

Then $f$ is differentiable on $I$, and, for each $x \in I$,

\[f'(x)=\sum_{n=0}^\infty f_n'(x)\,.\]
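
Before turning to power series, here is a quick numerical sanity check of Theorem 5 (a sketch of my own in Python; the particular example $f_n(x)=\cos(nx)/n^3$ for $n \in \N$ is just a convenient choice, and nothing below depends on it). Here $|f_n'(x)| = |\sin(nx)|/n^2 \leq 1/n^2 = M_n$ and $\sum M_n < \infty$, so a difference quotient of a partial sum should agree closely with the partial sum of the differentiated series.

    import numpy as np

    # Illustrative check of Theorem 5 with f_n(x) = cos(n x)/n^3 (n >= 1):
    # here |f_n'(x)| = |sin(n x)|/n^2 <= 1/n^2 = M_n, and sum M_n converges.
    N = 10_000                     # truncation level (an arbitrary choice)
    n = np.arange(1, N + 1)
    x, h = 0.7, 1e-6               # a sample point and a small step

    f = lambda t: np.sum(np.cos(n * t) / n**3)   # partial sum of the series

    numerical = (f(x + h) - f(x - h)) / (2 * h)  # symmetric difference quotient
    termwise  = np.sum(-np.sin(n * x) / n**2)    # partial sum of the f_n'(x) series

    print(numerical, termwise)     # the two values agree closely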

We are now ready to prove our main result about power series.

Theorem 6 (Convergence and differentiability of power series)

Let $x_0\in \R$, and let $a_n$ ($n \in \N_0$) be real numbers. Let $t_0 \in \R$ with $t_0 \neq x_0$, and suppose that the series $\sum_{n=0}^\infty a_n (t_0-x_0)^n$ converges.

Set $R= |t_0-x_0|$, and set $J=(x_0-R,x_0+R).$ Then the following hold.

(a) For all $x \in J$, both of the series $\sum_{n=0}^\infty a_n (x-x_0)^n$ and $\sum_{n=1}^\infty  n a_n (x-x_0)^{n-1}$   converge absolutely (and hence they converge).

(b) Define $f:J\to \R$ by

\[f(x) = \sum_{n=0}^\infty a_n (x-x_0)^n\,.\]

Then $f$ is differentiable on $J$ and, for all $x \in J$,

\[f'(x) = \sum_{n=1}^\infty n a_n (x-x_0)^{n-1}\,.\]

Comments

  • We are investigating the convergence of the power series $\sum_{n=0}^\infty a_n (x-x_0)^n,$ and the properties of the resulting function.
  • We assume that the series converges when $x=t_0$ for some particular $t_0 \neq x_0$. The existence of such a $t_0$ is not guaranteed. (It depends on the sequence.) Of course the series does always converge when $x=x_0.$
  • If we actually know that the series converges for all $x \in \R$, then the rest of the theorem also applies for all $x \in \R.$ For example, this applies to the standard power series for $\exp$, $\sin$ and $\cos$.
  • Our proof of Theorem 6 uses Theorem 5, but we won't set $I=J,$ because we can't be sure that the conditions of Theorem 5 are satisfied on $J$. Fortunately it is enough to prove that, for all $r \in (0,R)$, Theorem 5 applies on $(x_0-r,x_0+r)\,.$ (Or we can work on $[x_0-r,x_0+r]$.)
  • All we really need to know about $R$ here is that the sequence $|a_n| R^n$ is bounded.
  • Several parts of our argument resemble the proof we gave in class of the Ratio Test for series.
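
Before the proof, here is a quick numerical illustration of Theorem 6 (a sketch of my own in Python). Take $x_0=0$ and $a_n=1$ for all $n$; the series converges at $t_0=0.9$, so the theorem applies with $R=0.9$, and on $J=(-0.9,0.9)$ we have $f(x)=\sum_{n=0}^\infty x^n = 1/(1-x)$ and $f'(x)=\sum_{n=1}^\infty n x^{n-1} = 1/(1-x)^2$.

    import numpy as np

    # Theorem 6 with a_n = 1 and x_0 = 0: the series converges at t_0 = 0.9,
    # so R = 0.9 and, on J = (-0.9, 0.9), f(x) = 1/(1-x).
    N = 2000                       # truncation level
    n = np.arange(N + 1)
    x = 0.5                        # any point of J will do

    f_partial  = np.sum(x**n)                    # partial sum of sum a_n x^n
    df_partial = np.sum(n[1:] * x**(n[1:] - 1))  # partial sum of the derived series

    print(f_partial,  1 / (1 - x))               # both approximately 2
    print(df_partial, 1 / (1 - x)**2)            # both approximately 4
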
Proof of Theorem 6

Since $\sum_{n=0}^\infty a_n (t_0-x_0)^n$ converges, we must have that $a_n (t_0-x_0)^n \to 0$ as $n \to \infty$, and so (in particular) this sequence is bounded. Since $R=|t_0-x_0|$, this tells us that the sequence $|a_n| R^n$ is bounded. Set
\[K=\sup\{|a_n| R^n: n \in \N_0\}\,.\]
(Or any other upper bound will do.)
This $K$ remains fixed throughout the proof. We may assume that $K>0$, as the whole result is trivial otherwise.

(a) Let $x \in J=(x_0-R,x_0+R).$ Then $|x-x_0|<R$. Set $\alpha = |x-x_0|/R \in [0,1)$. The case where $x=x_0$ is trivial, so we may assume that $\alpha \in (0,1)$.

We have \[|a_n (x-x_0)^n| = |a_n| R^n \alpha^n \leq K \alpha^n\,.\]
Since $\alpha<1$, $\sum_{n=0}^\infty K \alpha^n$ is a convergent geometric series. So, by the comparison test, the series 
\[\sum_{n=0}^\infty a_n (x-x_0)^n\,\]
is absolutely convergent, and hence convergent. 

Similarly, with $x$ and $\alpha$ as above, for each $n \in \N$, we have
\[|n a_n (x-x_0)^{n-1}| = n |a_n| R^{n-1} \alpha^{n-1} = \frac{n}{R} \alpha^{n-1} |a_n| R^n \leq \frac{K}{R} n \alpha^{n-1}\,.\]
Set $b_n=\frac{K}{R} n \alpha^{n-1}>0.$
We may now apply the ratio test to show that the series $\sum_{n=1}^\infty b_n$ converges (but see also the final comment at the very end of this post). For each $n \in \N$, we have 
\[\frac{|b_{n+1}|}{|b_n|}= \frac{b_{n+1}}{b_n} = \frac{n+1}{n} \alpha \to \alpha\]
as $n \to \infty$. Since $\alpha<1$, the series $\sum_{n=1}^\infty b_n$ converges.
By the comparison test, the series 
\[\sum_{n=1}^\infty n a_n (x-x_0)^{n-1}\]
is absolutely convergent, and hence convergent.
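
(As a small numerical sketch of these two comparisons, with an example of my own choosing: take $a_n=(-1)^n/(n+1)$ and $x_0=0$. The series converges at $t_0=1$ by the alternating series test, so $R=1$ and $K=\sup_n |a_n| R^n = 1$.)

    import numpy as np

    # Part (a) bounds with a_n = (-1)^n/(n+1), x_0 = 0, t_0 = 1, so R = K = 1.
    N = 5000
    n = np.arange(N + 1)
    a = (-1.0)**n / (n + 1)
    R, K = 1.0, 1.0
    x = 0.6
    alpha = abs(x) / R             # alpha = |x - x_0|/R, here in (0, 1)

    # |a_n (x-x_0)^n| <= K alpha^n, and the analogous bound for the derived series:
    assert np.all(np.abs(a * x**n) <= K * alpha**n + 1e-15)
    assert np.all(np.abs(n[1:] * a[1:] * x**(n[1:] - 1))
                  <= (K / R) * n[1:] * alpha**(n[1:] - 1) + 1e-15)

    print(np.sum(np.abs(a * x**n)))                        # finite: absolute convergence
    print(np.sum(np.abs(n[1:] * a[1:] * x**(n[1:] - 1))))  # finite as well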

(b) For each $n \in \N_0,$ define $f_n:J \to \R$ by $f_n(x)=a_n (x-x_0)^n.$ In particular, $f_0(x)=a_0$, a constant function.

We define $f:J\to \R$ by

\[f(x) = \sum_{n=0}^\infty a_n (x-x_0)^n = \sum_{n=0}^\infty f_n(x) \,.\]

(We know that this series converges for each $x \in J$ by (a).)

We now wish to apply Theorem 5. But to do this we have to work on slightly smaller intervals, in order to obtain the constants $M_n$ we need.

It is enough to prove that, for all $r \in (0,R)$, the result holds for all $x \in (x_0-r,x_0+r).$

Let  $r \in (0,R),$ and set $I=(x_0-r,x_0+r)$. Set $\beta = r/R \in (0,1).$

Each function $f_n$ is differentiable on $I$ and, for $n \in \N$ and $x \in I$, we have 

\[f_n'(x)= n a_n (x-x_0)^{n-1}\,.\]

For $n=0$ there could be an issue with this expression when $x=x_0$, but in any case \[f_0'(x)=0\,,\] so this won't affect anything.

Now, for $x \in I$ and $n \in \N$, we have

\[|f_n'(x)|= n |a_n| |x-x_0|^{n-1} \leq n |a_n| r^{n-1} = \frac{1}{r} n |a_n| r^n = \frac{1}{r} n \beta^n |a_n|R^n  \leq \frac{K}{r} n \beta^n\,.\]

For each $n \in \N_0,$ set $M_n=  \frac{K}{r} n \beta^n.$ Then, for all $x \in I$ and all $n\in \N_0$, we have

\[|f_n'(x)| \leq M_n\,.\]

(Here $n=0$ is a trivial special case.)

In order to apply Theorem 5, all we need now is to show that the series $\sum_{n=0}^\infty M_n$ converges. We can ignore the term $M_0=0$. 

Recall that $\beta \in (0,1)$ here. For each $n \in \N$, we have

\[M_n=  \frac{K}{r} n \beta^n > 0 \, .\]

(We eliminated the case $K=0$ as trivial earlier, but we could always work with $K+1$ instead. Similarly, we could use $n+1$ in place of $n$ if we wanted to include the case $n=0$.)

We may now apply the ratio test for series (but see also the final comment below). We have 

\[\frac{|M_{n+1}|}{|M_n|}=\frac{M_{n+1}}{M_n} = \frac{n+1}{n} \beta \to \beta < 1\,\]

as  $n \to \infty.$ Thus we have

\[\sum_{n=0}^\infty M_n < \infty\,,\]

and so we can apply Theorem 5 to see that $f$ is differentiable on $(x_0-r,x_0+r)$, with

\[f'(x)=\sum_{n=0}^\infty f_n'(x) = \sum_{n=1}^ \infty n a_n (x-x_0)^{n-1}\]

for all $x \in (x_0-r,x_0+r).$

Since this is true for all $r \in (0,R)$, the result holds for all $x \in J = (x_0-R,x_0+R)\,.$ $\qquad\square$

Comments

  • If we had tried to use $I=J = (x_0-R,x_0+R)$ directly, we would not have obtained suitable $M_n$ this way, because we would have had $r=R$, $\beta=1$, and then $\sum_{n=0}^\infty \frac{K}{r} n \beta^n$ diverges.
  • With only a little more work, we see that we can continue to differentiate our power series term by term on $J= (x_0-R,x_0+R)$ as often as we like. So $f$ is infinitely differentiable on $J$, and the derivatives are exactly what we expect. We can't necessarily use the same $t_0$ for the higher derivatives. But if we take any $R_1 \in (0,R)$, we can use the same interval $J_1=(x_0-R_1,x_0+R_1)$ for all of the derivatives. Indeed, we then have, for each $k \in \N$,
    \[n(n-1)\cdots(n-k+1) a_n R_1^{n-k} \to 0\] as $n\to \infty$, which is enough for us to obtain each successive derivative.
  • For each $t\in(-1,1)$ the series $\sum_{n=1}^\infty nt^{n-1}$ is actually a fairly standard convergent series, with sum $1/(1-t)^2.$ Given this, we can multiply by any constant $C \in \R$ to see that (for such $t$)
    \[\sum_{n=1}^\infty Cnt^{n-1} = \frac{C}{(1-t)^2}\,,\]
    and similarly
    \[\sum_{n=1}^\infty Cnt^n = \sum_{n=1}^\infty Ct nt^{n-1} = \frac{Ct}{(1-t)^2}\,.\]
    Quoting these (with $t=\alpha$ and $t=\beta$ above) would save us two applications of the ratio test, because we would already know those series converge. (A quick numerical check of these closed forms follows below.)
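
Here is that quick check of the two closed forms (my own sketch; any $t \in (-1,1)$ and any constant $C$ will do):

    import numpy as np

    # Check: sum_{n>=1} C n t^(n-1) = C/(1-t)^2 and sum_{n>=1} C n t^n = C t/(1-t)^2.
    N = 5000
    n = np.arange(1, N + 1)
    t, C = 0.3, 2.5                # sample values

    print(np.sum(C * n * t**(n - 1)), C / (1 - t)**2)      # the pair agrees
    print(np.sum(C * n * t**n),       C * t / (1 - t)**2)  # so does this pair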

