

10.3 Trigonometric Functions

10.45   Example. Suppose that there are real valued functions $S,C$ on $\mbox{{\bf R}}$ such that

\begin{eqnarray*}
&\;&S'=C,\qquad \phantom{S} S(0)=0,\\
&\;&C'=-S,\qquad C(0)=1.
\end{eqnarray*}



You have seen such functions in your previous calculus course. Let $H=S^2+C^2$. Then

\begin{displaymath}H'=2SS'+2CC'=2SC-2CS=0.\end{displaymath}

Hence, $H$ is constant on $\mbox{{\bf R}}$, and since $H(0)=S^2(0)+C^2(0)=0+1=1$, we have

\begin{displaymath}S^2+C^2=\tilde 1 \mbox{ on } \mbox{{\bf R}}.\end{displaymath}

In particular,

\begin{displaymath}\vert S(t)\vert\leq 1 \mbox{ and }\vert C(t)\vert\leq 1 \mbox{ for all }t\in\mbox{{\bf R}}.\end{displaymath}

Let $K(t)=\left(S(t)+S(-t)\right)^2+\left(C(t)-C(-t)\right)^2$. By the power rule and chain rule,

\begin{eqnarray*}
\lefteqn{K'(t)}\\
&=&2\left(S(t)+S(-t)\right)\left(S'(t)-S'(-t)\right)+2\left(C(t)-C(-t)\right)\left(C'(t)+C'(-t)\right)\\
&=&2\left(S(t)+S(-t)\right)\left(C(t)-C(-t)\right)+2\left(C(t)-C(-t)\right)\left(-S(t)-S(-t)\right)\\
&=&0.
\end{eqnarray*}



Hence $K$ is constant and since $K(0)=0$, we conclude that $K(t)=0$ for all $t$. Since a sum of squares in $\mbox{{\bf R}}$ is zero only when each summand is zero, we conclude that

\begin{eqnarray*}
S(-t)&=&-S(t) \mbox{ for all }t\in\mbox{{\bf R}}, \\
C(-t)&=&C(t) \mbox{ for all }t\in\mbox{{\bf R}}.
\end{eqnarray*}
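If you want a quick numerical sanity check of the two conclusions so far, the following minimal Python sketch does it, under the (so far unjustified) assumption that the sine and cosine from the standard math library play the roles of $S$ and $C$:

\begin{verbatim}
# A numerical spot check, assuming S = sin and C = cos from the standard
# library; the discussion below argues that these satisfy the hypotheses
# S' = C, C' = -S, S(0) = 0, C(0) = 1.
import math

S, C = math.sin, math.cos
for t in [-3.0, -1.0, 0.0, 0.5, 2.0, 10.0]:
    assert abs(S(t)**2 + C(t)**2 - 1) < 1e-12   # S^2 + C^2 = 1
    assert abs(S(-t) + S(t)) < 1e-12            # S is odd
    assert abs(C(-t) - C(t)) < 1e-12            # C is even
print("identities hold at the sample points")
\end{verbatim}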



Let

\begin{displaymath}F_0(t)=-C(t)+1 \mbox{ for all }t\in\mbox{{\bf R}}.\end{displaymath}

Then $F_0(t)\geq 0$ for all $t\in\mbox{{\bf R}}$ and $F_0(0)=0$. I will now construct a sequence $\{F_n\}$ of functions on $\mbox{{\bf R}}$ such that $F_n(0)=0$ for all $n\in\mbox{{\bf N}}$, and $F_{n+1}'(t)=F_n(t)$ for all $t\in\mbox{{\bf R}}$. I have

\begin{eqnarray*}
F_1(t)&=&-S(t)+t,\\
F_2(t)&=&C(t)+{{t^2}\over {2!}}-1,\\
F_3(t)&=&S(t)+{{t^3}\over {3!}}-t,\\
F_4(t)&=&-C(t)+{{t^4}\over {4!}}-{{t^2}\over {2!}}+1,\\
F_5(t)&=&-S(t)+{{t^5}\over {5!}}-{{t^3}\over {3!}}+t.
\end{eqnarray*}



It should be clear how this pattern continues. Since $F_1'(t)=F_0(t)\geq 0$, $F_1$ is increasing on $[0,\infty)$, and since $F_1(0)=0$, we have $F_1(t)\geq 0$ for $t\in[0,\infty)$. Since $F_2'(t)=F_1(t)\geq 0$ on $[0,\infty)$, $F_2$ is increasing on $[0,\infty)$, and since $F_2(0)=0$, we have $F_2(t)\geq 0$ for $t\in[0,\infty)$.
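A small symbolic check of the pattern is easy to carry out with the sympy package, under the assumption that the familiar sine and cosine play the roles of $S$ and $C$; the sketch verifies $F_{n+1}'=F_n$ and $F_{n+1}(0)=0$ for the functions listed above.

\begin{verbatim}
# A symbolic check of the pattern, assuming S = sin and C = cos
# (functions satisfying the hypotheses of this example).
import sympy as sp

t = sp.symbols('t')
S, C = sp.sin, sp.cos
F = [
    -C(t) + 1,                                                # F_0
    -S(t) + t,                                                # F_1
     C(t) + t**2/sp.factorial(2) - 1,                         # F_2
     S(t) + t**3/sp.factorial(3) - t,                         # F_3
    -C(t) + t**4/sp.factorial(4) - t**2/sp.factorial(2) + 1,  # F_4
    -S(t) + t**5/sp.factorial(5) - t**3/sp.factorial(3) + t,  # F_5
]
for n in range(len(F) - 1):
    # F_{n+1}' should equal F_n, and F_{n+1}(0) should be 0
    assert sp.simplify(sp.diff(F[n + 1], t) - F[n]) == 0
    assert F[n + 1].subs(t, 0) == 0
print("pattern verified for n = 0, ..., 4")
\end{verbatim}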

This argument continues (I'll omit the inductions), and I conclude that $F_n(t)\geq
0$ for all $t\in[0,\infty)$ and all $n\in\mbox{{\bf N}}$. Now

\begin{eqnarray*}
&\;&F_0(t)\geq 0\mbox{ and }F_2(t)\geq 0
\mbox{$\hspace{1ex}\Longrightarrow\hspace{1ex}$}
-{{t^2}\over {2!}}\leq C(t)-1\leq 0,\\
&\;&F_4(t)\geq 0\mbox{ and }F_6(t)\geq 0
\mbox{$\hspace{1ex}\Longrightarrow\hspace{1ex}$}
-{{t^6}\over {6!}}\leq C(t)-1+{{t^2}\over {2!}}-{{t^4}\over {4!}}\leq 0.
\end{eqnarray*}
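These two inequalities can be tested numerically at a few points $t\geq 0$; the sketch below does so, again under the (so far unproved) assumption that $C=\cos$.

\begin{verbatim}
# A numerical check of the two displayed inequalities, assuming C = cos.
import math

for t in [0.0, 0.5, 1.0, 2.0, 5.0, 10.0]:
    C = math.cos(t)
    assert -t**2/2 <= C - 1 <= 0
    assert (-t**6/math.factorial(6)
            <= C - 1 + t**2/2 - t**4/math.factorial(4) <= 0)
print("both inequalities hold at the sample points")
\end{verbatim}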



For each $n\in\mbox{{\bf N}}$, $t\in\mbox{{\bf C}}$, define

\begin{eqnarray*}
&\;&c_n(t)={{(-1)^nt^{2n}}\over {(2n)!}},\\
&\;&s_n(t)={{(-1)^nt^{2n+1}}\over {(2n+1)!}},\\
&\;&C_n(t)=\sum_{j=0}^nc_j(t)=\sum_{j=0}^n{{(-1)^jt^{2j}}\over {(2j)!}},\\
&\;&S_n(t)=\sum_{j=0}^ns_j(t)=\sum_{j=0}^n{{(-1)^jt^{2j+1}}\over {(2j+1)!}}.
\end{eqnarray*}



The equations above suggest that for all $n\in\mbox{{\bf N}}$, $t\in[0,\infty)$,
\begin{displaymath}
\vert C(t)-C_n(t)\vert\leq\vert c_{n+1}(t)\vert
\end{displaymath} (10.46)

and
\begin{displaymath}
\vert S(t)-S_n(t)\vert\leq\vert s_{n+1}(t)\vert
\end{displaymath} (10.47)

I will not write down the induction proof for this: I believe the examples make it clear how the proof goes, and the notation for a formal proof becomes complicated.

Since $C(t)=C(-t)$, $C_n(t)=C_n(-t)$ and $c_n(t)=c_n(-t)$, the relation (10.46) actually holds for all $t\in\mbox{{\bf R}}$ (not just for $t\in[0,\infty)$) and similarly relation (10.47) holds for all $t\in\mbox{{\bf R}}$. From (10.46) and (10.47), we see that if $\{c_n(t)\}$ is a null sequence, then the sequence $\{C_n(t)\}$ converges to $C(t)$, and if $\{s_n(t)\}$ is a null sequence, then $\{S_n(t)\}$ converges to $S(t)$.
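The partial sums $C_n$ and $S_n$ are easy to compute, and the error bounds (10.46) and (10.47) can be tested numerically. The sketch below assumes that the library functions cos and sin are the functions $C$ and $S$ of this example; the small slack term allows for rounding error.

\begin{verbatim}
# Partial sums C_n(t), S_n(t) and the error bounds (10.46), (10.47),
# checked against the library cos and sin (assumed to be C and S here).
import math

def c(n, t):
    return (-1)**n * t**(2*n) / math.factorial(2*n)

def s(n, t):
    return (-1)**n * t**(2*n + 1) / math.factorial(2*n + 1)

def C_n(n, t):
    return sum(c(j, t) for j in range(n + 1))

def S_n(n, t):
    return sum(s(j, t) for j in range(n + 1))

for t in [0.1, 1.0, 2.0, 4.0]:
    for n in range(8):
        assert abs(math.cos(t) - C_n(n, t)) <= abs(c(n + 1, t)) + 1e-12
        assert abs(math.sin(t) - S_n(n, t)) <= abs(s(n + 1, t)) + 1e-12
print("error bounds (10.46) and (10.47) hold at the sample points")
\end{verbatim}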

We will show later that both sequences $\{C_n(z)\}$ and $\{S_n(z)\}$ converge for all complex numbers $z$, and we will define

\begin{displaymath}
\cos(z)=\lim\{C_n(z)\}=\lim\left\{\sum_{j=0}^n{{(-1)^jz^{2j}}\over {(2j)!}}\right\}
\end{displaymath} (10.48)

\begin{displaymath}
\sin(z)=\lim\{S_n(z)\}=\lim\left\{\sum_{j=0}^n{{(-1)^jz^{2j+1}}\over {(2j+1)!}}\right\}
\end{displaymath} (10.49)

for all $z\in\mbox{{\bf C}}$. The discussion above is supposed to convince you that for real $z$ this definition agrees with whatever definition of sine and cosine you are familiar with. The figures show graphs of $C_n$ and $S_n$ for small $n$.
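Figures of this kind are easy to reproduce; here is one possible sketch using the numpy and matplotlib packages (only the $S_n$ are plotted, but the $C_n$ work the same way).

\begin{verbatim}
# A plotting sketch for the partial sums S_n, n = 1, ..., 10.
import numpy as np
import matplotlib.pyplot as plt
from math import factorial

t = np.linspace(-10, 10, 1000)
for n in range(1, 11):
    S_n = sum(((-1)**j / factorial(2*j + 1)) * t**(2*j + 1)
              for j in range(n + 1))
    plt.plot(t, S_n)
plt.plot(t, np.sin(t), 'k--')   # the limit function, for comparison
plt.ylim(-3, 3)
plt.show()
\end{verbatim}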

\psfig{file=sines.ps,angle=-90,width=5in}
Graphs of the polynomials $S_n$ for $1 \leq n \leq 10$

\psfig{file=cos.ps,angle=-90,width=5in}
Graphs of the polynomials $C_n$ for $1 \leq n \leq 10$

10.50   Exercise. A Show that $\{c_n(t)\}$ and $\{s_n(t)\}$ are null sequences for all complex $t$ with $\vert t\vert\leq 1$.

10.51   Exercise. A a) Using calculator arithmetic, calculate the limits of
$\displaystyle {\left\{C_n\left({1\over {10}}\right)\right\}}$ and $\displaystyle {\left\{S_n\left({1\over {10}}\right)\right\}}$ accurate to 8 decimal places. Compare your results with your calculator's values of $\displaystyle {\cos\left({1\over {10}}\right)}$ and $\displaystyle {\sin\left({1\over {10}}\right)}$. [Be sure to use radian mode.]

b) Calculate $\cos(i)$ to 3 or 4 decimal places of accuracy. Note that $\cos(i)$ is real.
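One way to check the calculator work is to compute the partial sums directly; the sketch below does this, and it accepts complex arguments such as $i$ as well (the choice $n=10$ is arbitrary but more than sufficient here).

\begin{verbatim}
# Partial sums C_n(z), S_n(z) for real or complex z.
import math

def C_n(n, z):
    return sum((-1)**j * z**(2*j) / math.factorial(2*j)
               for j in range(n + 1))

def S_n(n, z):
    return sum((-1)**j * z**(2*j + 1) / math.factorial(2*j + 1)
               for j in range(n + 1))

print(C_n(10, 0.1), S_n(10, 0.1))   # compare with cos(1/10), sin(1/10)
print(C_n(10, 1j))                  # approximates cos(i); imaginary part is 0
\end{verbatim}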




\psfig{file=sines0124x.ps,width=5.5in}

The figure shows graphical representations for $S_0$, $S_1$, $S_2$, and $S_4$. Note that $S_0$ is the identity function.

10.52   Entertainment. Show that for all $a,x\in\mbox{{\bf R}}$

\begin{displaymath}C(a+x)=C(a)C(x)-S(a)S(x)\end{displaymath}

and

\begin{displaymath}S(a+x)=S(a)C(x)+C(a)S(x).\end{displaymath}

Use a trick similar to the one used to show that $S(-x)=-S(x)$ and $C(-x)=C(x)$.

10.53   Entertainment. By using the definitions (10.48) and (10.49), show that

a) For all $a\in\mbox{{\bf R}}$, $\cos(ia)$ is real, and $\cos(ia) \geq 1$.

b) For all $a\in\mbox{{\bf R}}$, $\sin(ia)$ is pure imaginary, and $\sin(ia) = 0$ if and only if $a=0$.

c) Assuming that the identity

\begin{displaymath}\sin(z+w) = \sin(z)\cos(w) + \cos(z)\sin(w)\end{displaymath}

is valid for all complex numbers $z$ and $w$, show that if $a \in \mbox{{\bf R}}\setminus \{0\}$ then sin maps the horizontal line $y=a$ to the ellipse having the equation

\begin{displaymath}{x^2 \over \vert\cos(ia)\vert^2} + {y^2 \over \vert\sin(ia)\vert^2} = 1.\end{displaymath}

d) Describe where $\sin$ maps vertical lines. (Assume that the identity $\sin^2(z) + \cos^2(z) = 1$ holds for all $z \in \mbox{{\bf C}}$.)
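A numerical spot check of parts a), b), and c) is easy, assuming that the complex cosine and sine in Python's cmath module are the functions defined by (10.48) and (10.49).

\begin{verbatim}
# Numerical spot checks for parts a), b), and c), assuming cmath
# implements the cosine and sine of (10.48) and (10.49).
import cmath

a = 0.7                                    # any nonzero real number
print(cmath.cos(1j * a))                   # a) imaginary part 0, real part >= 1
print(cmath.sin(1j * a))                   # b) real part 0, nonzero imaginary part
A = abs(cmath.cos(1j * a))                 # semi-axis |cos(ia)|
B = abs(cmath.sin(1j * a))                 # semi-axis |sin(ia)|
for x in [-3.0, -1.0, 0.0, 0.4, 2.0]:
    w = cmath.sin(complex(x, a))           # image of a point on the line y = a
    print((w.real / A)**2 + (w.imag / B)**2)   # c) should be (close to) 1
\end{verbatim}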

10.54   Note. Rolle's theorem is named after Michel Rolle (1652-1719). An English translation of Rolle's original statement and proof can be found in [46, pages 253-260]. It takes considerable effort to see any relation between what Rolle says and what our form of his theorem says.


The series representations for sine and cosine (10.48) and (10.49) are usually credited to Newton, who discovered them some time around 1669. However, they were known in India centuries before this. Several sixteenth-century Indian writers quote the formulas and attribute them to Madhava of Sangamagramma (c. 1340-1425) [30, page 294].


The method used for finding the series for sine and cosine appears in the 1941 book ``What is Mathematics?'' by Courant and Robbins [17, page 474]. I expect that the method was well known at that time.

