
Series of Sines and Cosines


Defining Fourier Series

Under rather general conditions, a function \(f\) can be written as the following series of sines and cosines:$$f(x)=c+a_1\sin x+b_1\cos x+a_2\sin 2x+b_2\cos 2x+\ldots\tag{1}$$for a particular set of coefficients \(c,a_1,b_1,a_2,b_2,\ldots\). Given such a choice of coefficients, we call the series above a Fourier Series of \(f\).

For now, we'll assume that \(f\) is a real function, i.e., \(f:\mathbb{R}\to\mathbb{R}\). Since all of the summands in the Fourier Series have period \(2\pi\), we expect \(f\) to have period \(2\pi\) as well. (A function \(\phi\) has period \(a\) iff \(\phi(x+a)=\phi(x)\) for all \(x\).)

Given any function \(g:[0,2\pi)\to\mathbb{R}\), we can extend it to a function \(g^*:\mathbb{R}\to\mathbb{R}\) by letting \(g^*(x)=g(x\bmod{2\pi})\). Since \(g^*\) has period \(2\pi\), we expect there to be a Fourier Series for \(g^*\). If we don't concern ourselves with the value of this series outside of \([0,2\pi)\), it is also a Fourier Series for \(g\). Unfortunately, there are some conditions we must impose on \(g\) for a Fourier Series of it to exist.

Of course, this idea also works for a function \(h:[a,b)\to\mathbb{R}\), by dilating and translating \([a,b)\) onto \([0,2\pi)\). It may not be obvious at this point, but this operation adds no complexity to the problem of finding and evaluating Fourier Series.
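To make the dilate-and-translate idea concrete, here is a small sketch in Python. The helper names `rescale` and `periodize` are our own, chosen purely for illustration; `periodize` is exactly the \(g\mapsto g^*\) construction above.

```python
import numpy as np

def rescale(h, a, b):
    """Map h: [a, b) -> R to g: [0, 2*pi) -> R via g(t) = h(a + (b-a)*t/(2*pi))."""
    return lambda t: h(a + (b - a) * t / (2 * np.pi))

def periodize(g):
    """Extend g: [0, 2*pi) -> R to g*: R -> R by g*(x) = g(x mod 2*pi)."""
    return lambda x: g(np.mod(x, 2 * np.pi))

h = lambda x: x ** 2                      # an example function on [1, 3)
g_star = periodize(rescale(h, 1.0, 3.0))

# g* has period 2*pi, and t = 0 maps back to the left endpoint x = a = 1.
assert np.isclose(g_star(0.7), g_star(0.7 + 2 * np.pi))
assert np.isclose(g_star(0.0), h(1.0))
```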

Orthogonality

Determining the coefficients of the Fourier Series depends on evaluating particular definite integrals involving sines and cosines. Observe first that for \(n\in\mathbb{Z}\setminus\{0\}\),$$\int_{-\pi}^\pi\cos nx\,dx=\frac{\sin nx}n\Bigg\vert_{-\pi}^\pi=\frac{2\sin n\pi}n=0\tag{2}$$And likewise, for \(n\in\mathbb{Z}\), \(\int_{-\pi}^\pi\sin nx\,dx=0\).

However, \(\int_{-\pi}^\pi\cos 0x\,dx=\int_{-\pi}^\pi 1\,dx=2\pi\).
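These basic integrals are easy to check numerically. The sketch below uses a hand-rolled trapezoidal rule (the `integral` helper is ours, not a library routine) to verify equation 2 and the \(n=0\) case:

```python
import numpy as np

def integral(f, a=-np.pi, b=np.pi, n=100001):
    """Composite trapezoidal rule for the integral of f over [a, b]."""
    x = np.linspace(a, b, n)
    y = f(x)
    h = (b - a) / (n - 1)
    return h * (y.sum() - 0.5 * (y[0] + y[-1]))

# Equation (2): cos(nx) and sin(nx) integrate to 0 over [-pi, pi] for n != 0 ...
for n in [1, 2, 5, -3]:
    assert abs(integral(lambda x: np.cos(n * x))) < 1e-8
    assert abs(integral(lambda x: np.sin(n * x))) < 1e-8

# ... but cos(0x) = 1 integrates to 2*pi.
assert abs(integral(lambda x: np.ones_like(x)) - 2 * np.pi) < 1e-8
```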

This may seem obvious, but it has some rather interesting implications.

From the product-to-sum formulas $$\cos px\cos qx=\frac12(\cos(p-q)x+\cos(p+q)x)\tag{3}$$$$\sin px\sin qx=\frac12(\cos(p-q)x-\cos(p+q)x)\tag{4}$$$$\sin px\cos qx=\frac12(\sin(p-q)x+\sin(p+q)x)\tag{5}$$we see that for positive integers \(p\neq q\), $$\int_{-\pi}^\pi\cos px\cos qx\,dx=\frac12\left(\int_{-\pi}^\pi\cos(p-q)x\,dx+\int_{-\pi}^\pi\cos(p+q)x\,dx\right)=0\tag{6}$$And \(\int_{-\pi}^\pi\cos^2px\,dx=\pi\) by using equation 3 again: with \(p=q\), the \(\cos(p-q)x\) term is identically \(1\). We can apply equation 4 to show the same results hold if we replace \(\cos\) with \(\sin\) in equation 6. Finally, for any integers \(p,q\), we have $$\int_{-\pi}^\pi\sin px\cos qx\,dx=\frac12\left(\int_{-\pi}^\pi\sin(p-q)x\,dx+\int_{-\pi}^\pi\sin(p+q)x\,dx\right)=0\tag{7}$$by equation 5.
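As a sanity check, equations 6 and 7 and the squared norms can all be verified numerically. The `inner` helper below is our own trapezoidal-rule approximation of \(\int_{-\pi}^\pi fg\,dx\):

```python
import numpy as np

def inner(f, g, n=100001):
    """Approximate the integral of f(x)*g(x) over [-pi, pi] (trapezoidal rule)."""
    x = np.linspace(-np.pi, np.pi, n)
    y = f(x) * g(x)
    h = 2 * np.pi / (n - 1)
    return h * (y.sum() - 0.5 * (y[0] + y[-1]))

# Equation (6): cos(px) and cos(qx) integrate against each other to 0 for p != q,
assert abs(inner(lambda x: np.cos(2 * x), lambda x: np.cos(5 * x))) < 1e-8
# and likewise with cos replaced by sin.
assert abs(inner(lambda x: np.sin(3 * x), lambda x: np.sin(4 * x))) < 1e-8
# Equation (7): sin(px) against cos(qx) is 0 for all integers p, q.
assert abs(inner(lambda x: np.sin(3 * x), lambda x: np.cos(3 * x))) < 1e-8
# The squared norms: the integral of cos^2(px) is pi for p != 0.
assert abs(inner(lambda x: np.cos(4 * x), lambda x: np.cos(4 * x)) - np.pi) < 1e-8
```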

Why are these results important? As we saw in the Preliminaries, the inner product of two functions in an \(L_2\)-space is defined as the integral of their product. Clearly, \(\sin nx\) and \(\cos nx\) belong to \(\mathrm{L}=L_2((-\pi,\pi))\) for all \(n\in\mathbb{Z}\).

Given an \(L_2\) space, we say that elements \(x,y\) are orthogonal (notated as \(x\bot y\)) if \(x\cdot y=0\).

As we saw in the Preliminaries, the \(L_2\) space we'll be concerning ourselves with in these notes is \(\mathrm{L}\), the space of square integrable functions on \([-\pi,\pi)\).

For any \(n\neq m\in\mathbb N\), \(\sin nx\bot\sin mx\) and \(\cos nx\bot\cos mx\). And, for any \(n,m\in\mathbb N\), \(\sin nx\bot\cos mx\).


The proof is essentially given in the paragraphs above.

Odd and Even Functions

Odd and even functions have natural interpretations as Fourier Series. To see why, we first need a simple result.

The product of odd functions and the product of even functions is even. The product of an even and an odd function is odd.


As a quick reminder, an odd function is one satisfying \(f(-x)=-f(x)\) for all \(x\in\mathbb{R}\); similarly, an even function is one satisfying \(f(-x)=f(x)\) for all \(x\in\mathbb{R}\).

Let \(f,g\) be odd functions and \(i,j\) be even functions on \(\mathbb{R}\). Then, for any \(x\in\mathbb{R}\), we have $$f(-x)\cdot g(-x)=(-1)^2f(x)\cdot g(x)=f(x)\cdot g(x)$$$$f(-x)\cdot i(-x)=-f(x)\cdot i(x)$$$$i(-x)\cdot j(-x)=i(x)\cdot j(x)$$This completes the proof.
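The parity rules are easy to spot-check numerically. In the sketch below the sample choices \(f=\sin x\), \(g=x^3\) (odd) and \(i=\cos x\), \(j=x^2\) (even) are ours; the names mirror the proof above:

```python
import numpy as np

f, g = np.sin, lambda x: x ** 3       # odd functions
i, j = np.cos, lambda x: x ** 2       # even functions

x = np.linspace(-3.0, 3.0, 101)
assert np.allclose(f(-x) * g(-x), f(x) * g(x))      # odd * odd   is even
assert np.allclose(i(-x) * j(-x), i(x) * j(x))      # even * even is even
assert np.allclose(f(-x) * i(-x), -(f(x) * i(x)))   # odd * even  is odd
```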
If an odd function can be expressed as a Fourier Series, all of the cosine coefficients must be zero. Likewise, if an even function can be expressed as a Fourier Series, all of the sine coefficients must be zero.


First, let's prove a quick result. Suppose \(g\) is odd. Then \(\int_{-\pi}^\pi g=\int_{-\pi}^0g+\int_0^\pi g=0\), since substituting \(x\mapsto-x\) in the first integral gives \(\int_0^\pi g(-x)\,dx=-\int_0^\pi g(x)\,dx\).

Let \(f\) have the Fourier expansion $$f(x)=c+a_1\sin x+b_1\cos x+a_2\sin 2x+\ldots$$Suppose \(f\) is even and let \(k\in\mathbb{N}\). Then, by orthogonality, $$\int_{-\pi}^\pi f(x)\sin kx\,dx=a_k\int_{-\pi}^\pi\sin^2kx\,dx=a_k\pi$$But, by theorem 2, \(f(x)\sin kx\) is odd, and thus \(a_k\pi=0\). So, \(a_k=0\).

Suppose now that \(f\) is odd. Then, for any \(k\in\mathbb{N}\), orthogonality implies $$\int_{-\pi}^\pi f(x)\cos kx\,dx=b_k\int_{-\pi}^\pi\cos^2kx\,dx=b_k\pi$$By an identical argument, since \(f(x)\cos kx\) is odd, \(b_k=0\).
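We can watch this happen numerically. The sketch below computes Fourier coefficients of the even function \(f(x)=x^2\) (our example choice, not from the notes) via the usual formulas \(a_k=\frac1\pi\int_{-\pi}^\pi f(x)\sin kx\,dx\) and \(b_k=\frac1\pi\int_{-\pi}^\pi f(x)\cos kx\,dx\); every sine coefficient vanishes, and the cosine coefficients match the known closed form \(b_k=4(-1)^k/k^2\):

```python
import numpy as np

def coeff(f, basis, n=200001):
    """Fourier coefficient (1/pi) * integral of f * basis over [-pi, pi]."""
    x = np.linspace(-np.pi, np.pi, n)
    y = f(x) * basis(x)
    h = 2 * np.pi / (n - 1)
    return h * (y.sum() - 0.5 * (y[0] + y[-1])) / np.pi

f = lambda x: x ** 2                          # an even function
for k in [1, 2, 3]:
    a_k = coeff(f, lambda x: np.sin(k * x))
    b_k = coeff(f, lambda x: np.cos(k * x))
    assert abs(a_k) < 1e-8                    # sine coefficients vanish
    assert abs(b_k - 4 * (-1) ** k / k ** 2) < 1e-6
```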

In other words, odd functions are represented by a series of sines, and even functions by a series of cosines. That motivates the question: given a series of cosines, can we tell whether it converges to an even function (or, given a series of sines, to an odd function)?

The answer, in general, is no. However, there is a special series of cosines that we can evaluate easily, and that will motivate other convergence tests we'll see in the future.

Let \(G\) be the sum $$G(v)=\frac12+\sum\limits_{k=1}^n\cos kv$$We can use the relation $$2\sin\frac v2\cos kv=\sin\left(k+\frac12\right)v-\sin\left(k-\frac12\right)v\tag{8}$$which gives us the telescoping sum $$2\sin\left(\frac v2\right)G(v)=\sin\frac v2+\sum\limits_{k=1}^n\left[\sin\left(k+\frac12\right)v-\sin\left(k-\frac12\right)v\right]=\sin\left(n+\frac12\right)v$$Thus, whenever \(\sin\frac v2\neq0\), we have the relation $$G(v)=\frac{\sin\left(n+\frac12\right)v}{2\sin\frac v2}\tag{9}$$
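Equation 9 is easy to confirm numerically. The sketch below compares the raw cosine sum against the closed form for an arbitrary choice of \(n\) (we stay away from \(v=0\), where the right-hand side is \(0/0\) and the sum takes the limiting value \(n+\frac12\)):

```python
import numpy as np

n = 7
v = np.linspace(0.1, 3.0, 50)     # avoid v = 0, where sin(v/2) = 0

lhs = 0.5 + sum(np.cos(k * v) for k in range(1, n + 1))   # the sum G(v)
rhs = np.sin((n + 0.5) * v) / (2 * np.sin(v / 2))         # equation (9)
assert np.allclose(lhs, rhs)
```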