Sergei Yakovenko's blog: on Math and Teaching

Wednesday, November 11, 2015

Lecture 3, Nov 10, 2015

Filed under: Rothschild course "Analysis for high school teachers" — Sergei Yakovenko @ 5:18

Limits

First, what’s the problem?
Assume we want to calculate the derivative of the function f(x)=x^2, say, at the point x=2. This derivative is a number, defined using the divided difference \displaystyle\frac{(2+h)^2-4}{h}=4+h when h is “very small”. What does “very small” mean? We cannot let h be exactly zero, since division by zero is forbidden. On the other hand, if h\ne 0, then the above expression is never precisely equal to 4 (the expected answer), so in any case the “derivative” cannot be 4, as we want. To resolve this conflict, Leibniz introduced mysterious “differentials” which disappear when added to usual numbers, but whose ratio has a precise numerical meaning.
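As a quick numerical illustration (a Python sketch, not part of the lecture), one can watch the divided difference approach 4 as h shrinks, while never actually reaching it:

```python
# Sketch: the divided difference (f(2+h) - f(2)) / h equals 4 + h,
# so it approaches 4 as h shrinks, but never equals 4 for h != 0.
def f(x):
    return x ** 2

for k in range(1, 7):
    h = 10.0 ** (-k)
    print(h, (f(2 + h) - f(2)) / h)
```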

Leibniz’s approach can be worked out into a rigorous mathematical theory, called nonstandard analysis; historically, however, a different approach, based on the notion of limit (of a sequence, of a function, …), prevailed.

Limit of a sequence

Consider an (infinite) sequence \{a_n:n\in\mathbb N\}=\{a_1,a_2,a_3,\dots,a_n,\dots\} of real numbers. We say that it stabilizes (מתיצבת) at the value A\in\mathbb R if only finitely many terms of the sequence differ from A, while the remaining infinite “tail” consists of the repeated value A. Since among finitely many indices n\in\mathbb N one can always choose the maximal one (denote it by N), the sequence \{a_n\} stabilizes if and only if

\exists N\in\mathbb N\ \forall n>N\ a_n=A.
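In code one can only inspect a finite initial segment of a sequence, but the definition translates directly (a hypothetical helper, for illustration only):

```python
# Sketch: find the largest index at which a finite initial segment of a
# sequence differs from A; beyond that index all inspected terms equal A.
def last_exception(terms, A):
    N = -1
    for n, term in enumerate(terms, start=1):
        if term != A:
            N = n
    return N

terms = [3, 1, 4, 7, 7, 7, 7, 7]   # stabilizes at 7
print(last_exception(terms, 7))    # -> 3: a_n = 7 for all n > 3
```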

Of course, stabilizing sequences are not very interesting in themselves, but their basic properties can be immediately listed:

  1. Changing any finite number of terms of a stabilizing sequence keeps it stabilizing (at the same value), and vice versa;
  2. If a sequence \{a_n\} stabilizes at A, and another sequence \{b_n\} stabilizes at B, then the sum-sequence \{a_n+b_n\} stabilizes at A+B, and the product-sequence \{a_nb_n\} at AB;
  3. The fraction-sequence \{a_n/b_n\} may not be defined for all n, but if B\ne 0, then only finitely many terms b_n can be zero; after changing these to nonzero numbers, the fraction-sequence is defined for all n and stabilizes at A/B;
  4. Exercise: formulate properties of stabilizing sequences, involving inequalities.

Blurred vision

The way we have introduced the real numbers, testing their equality requires checking infinitely many “digits”, which is unfeasible. Instead, we can specify a given precision \varepsilon>0 (it can be chosen rational). Then one can replace the genuine equality by \varepsilon-equality, saying that two numbers a,b\in\mathbb R are \varepsilon-equal if |a-b|<\varepsilon. This is a bad notion that will be used only temporarily, since it is not transitive (for the same fixed value of \varepsilon); yet as a temporary notion it is very useful.

We say that an infinite sequence \{a_n\}\ \varepsilon-stabilizes at a value A for a given precision \varepsilon, if only finitely many terms in the sequence are not \varepsilon-equal to A. Formally, this means that

\exists N\in\mathbb N\ \forall n>N\ |a_n-A|<\varepsilon.
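For instance (an illustrative sketch; the sequence 1/n reappears among the examples below), for the precision \varepsilon=0.01 the sequence a_n=\frac1n\ \varepsilon-stabilizes at A=0, with all terms beyond N=100 being \varepsilon-equal to 0:

```python
# Sketch: the last index n at which |1/n - 0| >= eps; beyond it the
# sequence 1/n is eps-equal to A = 0, i.e. it eps-stabilizes at 0.
eps = 0.01
N = max(n for n in range(1, 10**4) if abs(1.0 / n) >= eps)
print(N)  # -> 100: for all n > 100, |1/n - 0| < 0.01
```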

Spectacles can improve your vision

The choice of the precision \varepsilon>0 is left open so far. In practice it may be set at a level determined by the imperfection of our measuring instruments, but since we strive for a mathematical definition, we should not fix any artificial threshold.

Definition. A sequence of real numbers \{a_n\} is said to converge to the limit A\in\mathbb R, if for any given precision \varepsilon>0 this sequence \varepsilon-stabilizes at A. The logical formula for the corresponding statement is obtained by adding one more quantifier to the left:

\forall\varepsilon>0\ \exists N\in\mathbb N\ \forall n>N\ |a_n-A|<\varepsilon.

If we want to claim that the sequence is converging without specifying what the limit is, one more quantifier is required:

\exists A\in\mathbb R\ \forall\varepsilon>0\ \exists N\in\mathbb N\ \forall n>N\ |a_n-A|<\varepsilon.

Of course, this formula is inaccessible to anybody not specially prepared for the job, which is why so many students have broken their heads over it.
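To see the quantifiers in action (a sketch for the concrete sequence a_n=\frac1n with limit A=0): for each \varepsilon one must exhibit a single witness N that serves all larger indices, e.g. N=\lceil 1/\varepsilon\rceil.

```python
import math

# Sketch: for a_n = 1/n and A = 0, the witness N = ceil(1/eps) works,
# since n > 1/eps implies |1/n - 0| < eps.
for eps in (0.1, 0.01, 0.001):
    N = math.ceil(1.0 / eps)
    assert all(abs(1.0 / n) < eps for n in range(N + 1, N + 1000))
    print(eps, N)
```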

Obvious examples

  1. a_n=\frac 1n, or, more generally, a_n=\frac1{n^p},\ p>0. The limit is zero.
  2. a_n=c,\ c\in\mathbb R. Joke.
  3. a_n =n^p,\ p>0. Diverges.
  4. These rules (plus the obvious rules concerning the arithmetic operations) allow one to decide the convergence of any sequence a_n whose general term is a rational function of n.
  5. Exceptional cases are very rare: e.g., a_n=\displaystyle\left(1+\frac 1n\right)^n, to which none of the above rules applies (see the numerical sketch below).
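A numerical sketch (illustration only): the sequence \left(1+\frac1n\right)^n creeps up towards the number e\approx 2.71828, although the base tends to 1 while the exponent grows without bound.

```python
# Sketch: values of (1 + 1/n)^n, which converge to e = 2.71828...
for n in (1, 10, 100, 1000, 10**6):
    print(n, (1 + 1 / n) ** n)
```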

Limits of functions

Let f\colon D\to\mathbb R be a function defined on a subset D\subseteq\mathbb R, and let a\in\mathbb R\smallsetminus D be a point outside the domain of f. We want to “extend” the function to this point if this makes sense.

For a given precision \varepsilon>0 we say that f is \varepsilon-constant on a set S\subseteq D, if there exists a constant A\in\mathbb R such that \forall x\in S\ |f(x)-A|<\varepsilon.

Definition. The function f\colon D\to\mathbb R is said to have a limit equal to A at a point a\notin D, if

  1. all intersections between D and small intervals I_\delta=\{|x-a|<\delta\} are non-empty and
  2. \forall\varepsilon>0\ \exists\delta>0 such that the function restricted on D\cap \{|x-a|<\delta\} is \varepsilon-constant.

In other words, the function is \varepsilon-indistinguishable from a constant on a sufficiently small open interval centered around a.
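As an illustration (a sketch; this standard example does not appear in the lecture): the function f(x)=\frac{\sin x}x is defined on D=\mathbb R\smallsetminus\{0\} and has the limit A=1 at a=0, being \varepsilon-constant on a small punctured interval around 0:

```python
import math

# Sketch: check that f(x) = sin(x)/x stays eps-close to A = 1 on the
# punctured interval 0 < |x| < delta (sampling finitely many points).
def eps_constant(f, A, delta, eps, samples=10**4):
    xs = [delta * k / samples for k in range(1, samples + 1)]
    return all(abs(f(x) - A) < eps and abs(f(-x) - A) < eps for x in xs)

def f(x):
    return math.sin(x) / x

print(eps_constant(f, A=1.0, delta=0.01, eps=1e-4))  # -> True
```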

Remark. One can encounter situations when the function is defined at some point a\in D, but if we delete this point from D, then the function has a limit A at this point. If this limit coincides with the original value f(a), then the function is well-behaved (we say that it is continuous at a). If the limit exists but is different from f(a), then we understand that the function was intentionally twisted, and if we change its value at just this one point, it becomes continuous. If f, restricted on D\smallsetminus\{a\}, has no limit at a, we say that f is discontinuous at a.

Clearly, such extension by taking limit is possible (if at all) only for points at “zero distance” from the domain of the function.

For more details, read the lecture notes from the past years.

Series

As was mentioned, the problem of calculating limits of explicitly given (i.e., elementary) functions is usually not very difficult. The real fun begins when there is no explicit formula for the terms of the sequence (or the function). This may happen if the sequence is produced by some recurrent (inductive) rule.

The simplest case occurs when the rule is simply summation (should we say “correction”?) of a known nature:

a_{n+1}=a_n+\text{something explicitly depending on }n.

If we denote the added value by b_n, then the sequence takes the form
a_1, a_1+b_1, a_1+b_1+b_2,a_1+b_1+b_2+b_3,a_1+b_1+b_2+b_3+b_4,\dots. If we could perform such summations explicitly and write down the answer as a function of n, that would be great.

Example. Consider the case where b_n=\frac1{n(n+1)}=\frac1n-\frac1{n+1}. Then we get a “telescopic” sum which can be immediately computed (spelled out below). But this is rather an exception…
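Indeed, all the intermediate terms cancel:

\displaystyle a_{n+1}=a_1+\sum_{k=1}^n\left(\frac1k-\frac1{k+1}\right)=a_1+1-\frac1{n+1}, which converges to a_1+1 as n\to\infty.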

Another example is the geometric progression, where b_n=cq^n for some constants c,q.
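Here too the partial sums admit a closed form, the standard formula for a geometric progression (for q\ne1):

\displaystyle \sum_{k=1}^n cq^k=cq\,\frac{1-q^n}{1-q},

so for |q|<1 the sums converge to \frac{cq}{1-q}.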

In general we cannot write down the sum as a function of n, which makes the task challenging.

Definition. Let b_n be a real sequence. We say that the infinite series \sum_{k=1}^\infty b_k converges, if the sequence of its finite sums a_n=\sum_{k=1}^n b_k has a (finite) limit.

Examples.

  1. The geometric series \sum_{k=1}^\infty q^k converges if |q|<1 and diverges if |q|\ge 1.
  2. The harmonic series \sum_{k=1}^\infty \frac1k diverges.
  3. The inverse square series \sum_{k=1}^\infty \frac1{k^2} converges.

The last two statements follow from the comparison of the series with patently diverging or patently converging series (fill in the details!).
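A numerical sketch (not a proof, but instructive): the partial sums of the harmonic series creep upward like \ln n, while those of \sum 1/k^2 visibly settle near \pi^2/6\approx 1.6449.

```python
import math

# Sketch: partial sums of the harmonic series (grow like log n)
# versus partial sums of sum 1/k^2 (approach pi^2/6 = 1.6449...).
for n in (10, 10**3, 10**6):
    harmonic = sum(1.0 / k for k in range(1, n + 1))
    squares = sum(1.0 / k**2 for k in range(1, n + 1))
    print(n, round(harmonic, 4), round(math.log(n), 4), round(squares, 4))
```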

Later on we will concentrate specifically on series of the form \sum_{k=0}^\infty c_k q^k with a given sequence of “Taylor coefficients” c_0,c_1,c_2,\dots and a parameter q. Considered as functions of q, these series exhibit fascinating properties which make them ubiquitous in math and its applications.


Wednesday, December 28, 2011

Lecture 10, December 27, 2011

Continuity and limits of functions of real variable

In this lecture we introduce the notion of continuity of a function at a given point of its domain, and the closely related notion of a limit at a point outside of its “natural” domain.

This notion is closely related to the notion of the sequential limit introduced earlier. This paves the way to immediately generalize all arithmetic and order results from numerical sequences to functions.

The novel features involve one-sided limits, limits “at infinity” and continuity of composition of functions.

The (unfinished) notes, eventually to be replaced by a more polished text, are available here; follow the updates, as this temporary link will eventually be erased.

Lectures 8-9, December 13 and 20

Limits of sequences

We spend some time considering different flavors of “limit behavior”: stabilization, approximate stabilization, etc. A sequence is called convergent if it \varepsilon-stabilizes for every positive accuracy \varepsilon>0.

To show that passing to a limit “respects” the arithmetic operations, we need to work out a bit of “interval arithmetic”, with special attention to division, which may cause problems.

We discuss the weaker notion of a partial limit (accumulation point) and study under what assumptions a unique partial limit is the genuine limit.

Finally, we show that monotone bounded sequences always converge. This is one of the most powerful tools for establishing that a limit exists when it cannot be computed explicitly.
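A standard illustration (a sketch; this particular recursion is chosen for illustration and is not from the notes): the sequence a_1=1,\ a_{n+1}=\sqrt{2+a_n} is increasing and bounded above by 2, hence convergent; its limit must satisfy A=\sqrt{2+A}, i.e. A=2.

```python
import math

# Sketch: a_1 = 1, a_{n+1} = sqrt(2 + a_n) is monotone increasing and
# bounded above by 2, hence convergent; numerically it approaches 2.
a = 1.0
for n in range(1, 9):
    print(n, a)
    a = math.sqrt(2 + a)
```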

The lecture notes (in pdf) are available here.
