The Peano curve: continuity can be counter-intuitive

The Peano curve is obtained as the limit of piecewise-linear continuous (even closed) curves $\gamma_n$. Denote by $K=\{|x|+|y|\le 1\}$ the square (rotated by $\frac\pi4$) and by $\mathbb Z^2=\{(x,y):x,y\in\mathbb Z\}$ the grid of horizontal and vertical lines at distance 1 from each other; then one can construct a family of piecewise-linear continuous curves $\gamma_n:[0,1]\to\mathbb R^2$ which visit all points of the intersection $K\cap\frac1{2^n}\mathbb Z^2$, in such a way that $|\gamma_{n+1}(t)-\gamma_n(t)|<\frac1{2^n}$ uniformly in $t\in[0,1]$.

This sequence of curves converges uniformly to a function (curve) $\gamma_*:[0,1]\to\mathbb R^2$ and this curve is closed and continuous for the same reasons that justify continuity of the Koch snowflake curve.

What are the properties of the images $C_n=\gamma_n([0,1])$ and of the limit curve $C_*=\gamma_*([0,1])$?

• Each curve $C_n$ for any finite $n$ is piecewise-linear. It has zero area in the sense that for any $\varepsilon > 0$ the curve $C_n$ can be covered by a finite union of (open) rectangles with the total area less than $\varepsilon$;
• Each curve $C_n$ has finite length (although this length grows to infinity as $n\to\infty$: check it!).
• The limit curve $C_*$ has no length (that’s the same as saying that it has infinite length). Moreover, unlike many other curves of infinite length (say, the straight line $\{y=0\}\subseteq\mathbb R^2$), no arc $\gamma_*([a,b])$, $a<b$, of $C_*$ has finite length!
• The limit curve $C_*$ coincides with the square $K$, hence fills the area equal to 2.

All these assertions are easy except for the last one. Let’s prove it.

Consider the grids $G_n=K\cap \frac1{2^n}\mathbb Z^2\subseteq C_n$. The union of these grids is dense in $K$: by definition, this means that any point $P\in K$ can be approximated by a sequence of points $P_n\in G_n$ which converge to $P$ as $n\to\infty$. Being in the image $\gamma_n([0,1])$, each point $P_n$ is the image of some point of $[0,1]$: $\exists a_n\in[0,1]:\ \gamma_n(a_n)=P_n$. Such a point may well be non-unique, and in any case we have absolutely no knowledge of how the points $a_1,a_2,\dots$ are distributed over $[0,1]$.

However, we know that the sequence $a_n\in [0,1]$ must have an accumulation point $a_*\in [0,1]$, which is by definition the limit of some infinite subsequence. (This would not necessarily be the case if, instead of $[0,1]$, we were dealing with curves defined on the entire real line!) Replacing the sequence by this subsequence, we see that it still converges to the same limit: $P_n=\gamma_n(a_n)\to \gamma_*(a_*)=P$. Thus we proved that an arbitrary point of $K$ lies in the image: $P\in C_*$.

Topology: the study of properties preserved by continuous maps (functions, applications, …)

Definition. A neighborhood of a point $a\in\mathbb R^n$ in the Euclidean space is any set of the form $\{x:|x-a|<\varepsilon\}$ for some $\varepsilon>0$, where $|\cdot|$ is a distance function satisfying the triangle inequality. Examples:

• $|x|=\sqrt{x_1^2+\cdots+x_n^2}$ (the usual Euclidean distance on the line, on the plane, …) for $x=(x_1,\dots,x_n)\in\mathbb R^n$;
• $|x|=\max\{|x_1|, \dots, |x_n|\}$ (in the above notation);
• $|x|=|x_1|+\cdots+|x_n|$.

Definition. A subset $A\subset\mathbb R^n$ of the Euclidean space (OK, plane) is called open, if together with any of its points $a\in A$ it contains some neighborhood of $a$.
A subset is called closed, if the limit of any converging infinite sequence $\{a_n\}\subset A$ again belongs to $A$.

Theorem. A subset $A$ is open if and only if its complement $\mathbb R^n\smallsetminus A$ is closed.

Theorem. The union of any family (infinite or even uncountable) of open sets is open. A finite intersection of open sets is also open (for infinite intersections this is wrong).
Corollary. The intersection of any family (infinite or even uncountable) of closed sets is closed. A finite union of closed sets is also closed (for infinite unions this is wrong).

One can immediately produce a lot of examples of open/closed subsets in $\mathbb R^n$. It turns out that any property that can be formulated using only these notions is preserved by maps which are continuous together with their inverses. The corresponding area of math is called topology.

Continuity of functions

Let $f\colon D\to\mathbb R,\ D\subseteq\mathbb R$ be a function of one variable, and $a\in D$ a point in its domain. The function is said to be continuous at $a$, if for any precision $\varepsilon>0$ the function is $\varepsilon$-constant (equal to its value $f(a)$) after restriction on a sufficiently short interval around $a$.

Formally, if we denote by $\bold I=\bold I^1$ the open unit interval $(-1,1)$, then continuity means that $\forall \varepsilon>0\ \exists\delta>0\ f(a+\delta\bold I)\subseteq f(a)+\varepsilon\bold I$ (check that you understand the meaning of the notation $u+v\bold I$ for a subset of $\mathbb R$).

A function $f\colon D\to\mathbb R$ is said to be continuous on a subset $D'\subset D$, if it is continuous at all points $a\in D'$ of this subset. Usually we consider the cases where $D'=D$, that is, functions continuous on their domain of definition.

Remarks.
1. If $a$ is an isolated point of the domain $D$, then any function is automatically continuous at $a$ for a simple reason: for all sufficiently small $\delta>0$ the intersection $(a+\delta\bold I)\cap D$ consists of a single point $a$, so the image is a single point $f(a)$.

2. If $a\notin D$ but $\inf_{x\in D} |x-a|=0$ and there exists a limit $A=\lim_{x\to a}f(x)$, then one can extend $f$ on $a$ by letting $f(a)=A$ and obtain a function defined on $D\cup\{a\}$ which is continuous at $a$.

Obvious properties of the continuity

The sum, difference and product of two functions continuous at a point $a$ are again continuous at $a$. The reciprocal $\frac1{f}$ of a continuous function is continuous at $a$, if $f(a)\ne 0$.

This is an obvious consequence of the rules of operations on “approximate numbers” (קירובים). When dealing with the sum/difference one has to work with absolute approximations, when dealing with the product/ratio with relative approximations, but ultimately it all boils down to the same argument: if two functions are almost constant on a sufficiently small interval around $a$, then the result of applying the arithmetic operations is almost constant.

Not-so-obvious property of continuity

When is continuity compatible with the transition to limit? More specifically, we consider the situation where there are infinitely many functions $f_n\colon D\to\mathbb R$ defined on a common domain $D$. Assume that for any $a\in D$ the values (numbers!) $f_n(a)\in\mathbb R$ form a converging sequence whose limit is denoted by $f_*(a)$ (it depends on $a$!). What can one say about the function $f_*\colon a\mapsto f_*(a)$?

Example.
Assume that $f_n(x)=x^n$ and $D=[0,1]$. All of them are continuous (why?). If $a<1$, then $\lim_{n\to\infty} a^n=0$. If $a=1$, then for any $n\ a^n=1$. Thus the limit $\lim_{n\to\infty}f_n(a)$ exists for all $a$, but as a function of $a\in[0,1]$ it is a discontinuous function. Thus without additional precautions a sequence of continuous functions can converge to a discontinuous one.
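This divergence between pointwise convergence and continuity is easy to watch numerically. A small illustration (not from the lecture, just a sketch) evaluating $f_n(x)=x^n$ for a large $n$:

```python
# f_n(x) = x**n: each f_n is continuous on [0, 1], but the pointwise
# limit f_*(x) = 0 for x < 1 and f_*(1) = 1 jumps at x = 1.
def f(n, x):
    return x ** n

def limit(x):  # the pointwise limit f_*
    return 1.0 if x >= 1 else 0.0

for x in [0.5, 0.9, 0.99, 1.0]:
    print(x, f(200, x), limit(x))
```

Even at $x=0.99$, close to the jump, $f_{200}(x)$ is already small, while $f_{200}(1)=1$: the closer $x$ is to $1$, the larger $n$ must be taken, which is precisely the failure of uniformity discussed below.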

Distance between the functions.
The distance between real numbers $a,b\in\mathbb R$ is the nonnegative number $|a-b|$ which is zero if and only if $a=b$. Motivated by that, we introduce the distance $\|f-g\|$ between two functions $f,g\colon D\to \mathbb R$ as the expression $\sup_{a\in D}|f(a)-g(a)|$.

Exercise. Prove that for any three functions $f,g,h$ defined on a common domain $D$, the “triangle inequality” $\|f-g\|\leqslant \|f-h\|+\|h-g\|$ holds.

Definition. A sequence of functions $f_n\colon D\to\mathbb R$ is said to converge uniformly to a function $f_*\colon D\to\mathbb R$, if $\lim_{n\to\infty}\|f_n-f_*\|=0$.

Theorem. If a sequence of continuous functions converges uniformly, then the limit is also a continuous function.

Indeed, denote by $g$ the limit function, let $a\in D$ be any point, and let $\varepsilon>0$ be any “resolution”. We need to construct a small interval around $a$ on which $g$ is $\varepsilon$-indistinguishable from the value $g(a)$. We split the resolution allowance into two halves. The first half we use to find $N$ such that $\|f_n-g\| < \frac\varepsilon2$ for all $n\ge N$. The second half we spend on the continuity: since $f_N$ is continuous, there exists a segment $a+\delta\bold I$ on which $f_N$ is $\frac\varepsilon2$-indistinguishable from $f_N(a)$. Collecting all inequalities, we see that for any point $x\in a+\delta\bold I$ we have three inequalities: $|f_N(a)-g(a)|<\frac\varepsilon2, \ |f_N(x)-g(x)|<\frac\varepsilon2,\ |f_N(x)-f_N(a)|<\frac\varepsilon2$. By the triangle inequality, $|g(x)-g(a)|< \frac{3\varepsilon}2$. Oops! We were heading for $\varepsilon$! Had we thought ahead of the computation, we should have divided our allowance into three parts of $\frac\varepsilon3$ each (two for the distance $\|f_N-g\|$, one for the continuity of $f_N$). 😉 In any case, the outcome is the same.
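The distance $\|f_n-f_*\|$ makes the dichotomy quantitative. A quick numerical sketch for $f_n(x)=x^n$ (the grid-based maximum is only an approximation of the true supremum):

```python
# Approximate sup-distance ||f_n - f_*|| for f_n(x) = x**n over a
# finite grid on [0, right].  On [0, 0.9] the distance is 0.9**n -> 0
# (uniform convergence); as right approaches 1, the distance approaches
# 1 for every fixed n, so there is no uniform convergence on [0, 1].
def sup_dist(n, right, points=10001):
    xs = [right * k / (points - 1) for k in range(points)]
    f_star = lambda x: 1.0 if x >= 1 else 0.0
    return max(abs(x ** n - f_star(x)) for x in xs)

print(sup_dist(50, 0.9))    # tiny: 0.9**50
print(sup_dist(50, 0.999))  # close to 1
```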

Curves

The notion of continuity, the distance between functions etc. can be generalized from functions of one variable to other classes of functions.

For instance, functions of the form $\gamma\colon [0,1]\to\mathbb R^2$ can be called (parametrized) curves. Here the argument $t\in[0,1]$ can be naturally associated with time, so $\gamma(t)$ is the position of the moving point at the moment $t$. We can draw the image $\gamma([0,1])$, but this drawing does not reflect the timing; to indicate it, we can additionally mark the images, say, $\gamma (\frac k{10}),\ k=0,1,\dots,10$.

To define continuity for curves, denote by $\bold I^2$ the unit square $\{|x|<1,\ |y|<1\}$. A curve $\gamma$ is continuous at a point $a\in [0,1]$ if $\forall\varepsilon >0 \ \exists \delta>0$ such that $\gamma (a+\delta\bold I^1)\subseteq \gamma(a)+\varepsilon \bold I^2$. (Do you understand this formula? 😉 )

The distance between two points $a=(a_1,a_2),\ b=(b_1,b_2)\in\mathbb R^2$ is usually defined as $\sqrt{(a_1-b_1)^2+(a_2-b_2)^2}$, but it is not very much different from the expression $|a-b|=\max_{i=1,2}|a_i-b_i|$ (this definition immediately generalizes to spaces $\mathbb R^n$ of any finite dimension $n=3,4,\dots$). The distance between two curves has a very similar form: $\|f-g\|=\sup_{t\in [0,1]}|f(t)-g(t)|$.

Remark. If the functions $f,g$ are continuous, we can replace the supremum by maximum (which is always achieved).

Koch snowflake revisited

Now we can return to one of the examples we discussed on Lecture 1, the Koch snowflake. In contrast with that time, we now have an appropriate language to deal with it.

The process of constructing the curve actually produces a sequence of closed curves. The image of the first curve is an equilateral triangle, the second one gives the Star of David, the third one has no canonical name.

In all cases the new curve $\gamma_{n+1}$ is obtained by taking the previous curve $\gamma_n$ and modifying it on a subset of its domain: instead of traversing a line segment with constant speed, one takes the middle third of this segment and forces $\gamma_{n+1}$ to make a detour. This requires increasing the speed, but we don’t care as long as the trajectory remains continuous. The distance between $\gamma_n$ and $\gamma_{n+1}$ is $\frac{\sqrt3}2$ times the length of the modified middle third, hence proportional to the segment length $\frac1{3^n}$.

This observation guarantees that $\|\gamma_n-\gamma_{n+1}\|< C(1/3)^n$; since the geometric series $\sum_n C(1/3)^n$ converges, the sequence of maps $\gamma_n\colon [0,1]\to\mathbb R^2$ converges uniformly. The result is a continuous curve $\gamma_*\colon[0,1]\to\mathbb R^2$ which has “infinite length” (in fact, it has no length at all).
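For the standard construction starting from a triangle with unit sides (an assumption; the text does not fix the scale), the growth of the lengths and the geometric decay of the successive distances can be tabulated in a few lines:

```python
# Each Koch step replaces every segment by 4 segments of 1/3 the
# length, so length_n = 3 * (4/3)**n grows without bound, while the
# bump height (sqrt(3)/2) * (1/3)**(n+1) bounds ||gamma_n - gamma_{n+1}||
# and decays geometrically, which yields uniform convergence.
import math

def koch_length(n):
    return 3 * (4 / 3) ** n

def step_distance_bound(n):
    return (math.sqrt(3) / 2) * (1 / 3) ** (n + 1)

for n in range(5):
    print(n, koch_length(n), step_distance_bound(n))
```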

Sunday, November 15, 2015

One-time change of schedule

Filed under: lecture,schedule — Sergei Yakovenko @ 3:31

Because of the travel arrangements, the next lecture on “Analysis for high school teachers” is moved from Tue Nov 17, 9:15-11:15 (Sci teaching lab) to Thu Nov 19, 9:15-11:15 (Seminar room 2).
For the same reasons the next course by Dmitry Novikov is moved from Thu Nov 19, 9:15-11:15 (Seminar room 2) to Tue Nov 17, 9:15-11:15 (Sci teaching lab).

In other words, the two classes will simply swap their space-time slots for one week only.

Wednesday, November 11, 2015

Lecture 3, Nov 10, 2015

Filed under: Rothschild course "Analysis for high school teachers" — Sergei Yakovenko @ 5:18

Limits

First, what’s the problem?
Assume we want to calculate the derivative of the function $f(x)=x^2$, say, at the point $x=2$. This derivative is a number, defined using the divided difference $\displaystyle\frac{(2+h)^2-4}{h}=4+h$ when $h$ is “very small”. What does “very small” mean? We cannot let $h$ be exactly zero, since division by zero is forbidden. On the other hand, if $h\ne 0$, then the above expression is never exactly equal to $4$, so in any case the “derivative” cannot be $4$, as we want it to be. To resolve this controversy, Leibniz introduced mysterious “differentials” which disappear when added to usual numbers, but whose ratios have precise numerical meaning.
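The tension is easy to watch numerically; a minimal sketch of the divided difference above:

```python
# ((2+h)**2 - 4)/h simplifies to 4 + h: never equal to 4 for h != 0,
# yet as close to 4 as we wish once h is small enough.
def divided_difference(h):
    return ((2 + h) ** 2 - 4) / h

for h in [0.1, 0.01, 0.001, 1e-6]:
    print(h, divided_difference(h))
```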

The approach by Leibniz can be worked out into a rigorous mathematical theory, called nonstandard analysis, but historically a different approach, based on the notion of limit (of sequence, function, …), prevailed.

Limit of a sequence

Consider an (infinite) sequence $\{a_n:n\in\mathbb N\}=\{a_1,a_2,a_3,\dots,a_n,\dots\}$ of real numbers. We say that it stabilizes (מתיצבת) at the value $A\in\mathbb R$, if only finitely many terms in this sequence can be different from $A$, and the remaining infinite “tail” consists of the repeated value $A$. Since among the finitely many indices $n\in\mathbb N$ of such terms one can always choose the maximal one (denoted by $N$), we say that the sequence $\{a_n\}$ stabilizes, if

$\exists N\in\mathbb N\ \forall n>N\ a_n=A.$

Obviously, stabilizing sequences are not interesting, but their obvious properties can be immediately listed:

1. Changing any finite number of terms in a stabilizing sequence keeps it stabilizing and vice versa;
2. If a sequence $\{a_n\}$ stabilizes at $A$, and another sequence $\{b_n\}$ stabilizes at $B$, then the sum-sequence $\{a_n+b_n\}$ stabilizes at $A+B$, the product-sequence $\{a_nb_n\}$ at $AB$.
3. The fraction-sequence $\{a_n/b_n\}$ may be undefined for some $n$, but if $B\ne 0$, then only finitely many terms $b_n$ can be zero; just change them to nonzero numbers, and then the fraction-sequence will be defined for all $n$ and stabilize at $A/B$.
4. Exercise: formulate properties of stabilizing sequences, involving inequalities.

Blurred vision

As we have introduced the real numbers, testing their equality requires checking infinitely many “digits”, which is unfeasible. Instead, we can specify a given precision $\varepsilon >0$ (it can be chosen rational). Then one can replace genuine equality by $\varepsilon$-equality, saying that two numbers $a,b\in\mathbb R$ are $\varepsilon$-equal if $|a-b|<\varepsilon$. This is a bad notion that can be used only temporarily, since it is not transitive (for the same fixed value of $\varepsilon$). Yet as a temporary notion, it is very useful.

We say that an infinite sequence $\{a_n\}\ \varepsilon$-stabilizes at a value $A$ for a given precision $\varepsilon$, if only finitely many terms in the sequence are not $\varepsilon$-equal to $A$. Formally, this means that

$\exists N\in\mathbb N\ \forall n>N\ |a_n-A|<\varepsilon.$

The choice of the precision $\varepsilon>0$ is left open so far. In practice it may be set at the level which is determined by the imperfection of our measuring instruments, but since we strive for a mathematical definition, we should not set any artificial threshold.

Definition. A sequence of real numbers $\{a_n\}$ is said to converge to the limit $A\in\mathbb R$, if for any given precision $\varepsilon>0$ this sequence $\varepsilon$-stabilizes at $A$. The logical formula for the corresponding statement is obtained by adding one more quantifier to the left:

$\forall\varepsilon>0\ \exists N\in\mathbb N\ \forall n>N\ |a_n-A|<\varepsilon.$

If we want to claim that the sequence is converging without specifying what the limit is, one more quantifier is required:

$\exists A\in\mathbb R\ \forall\varepsilon>0\ \exists N\in\mathbb N\ \forall n>N\ |a_n-A|<\varepsilon.$

Of course, this formula is inaccessible to anybody not specially prepared for the job, which is why so many students have shattered their heads over it.
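The formula becomes more tangible if one computes the threshold $N$ for a concrete sequence; a small sketch for $a_n=\frac1n$ and $A=0$:

```python
# For a_n = 1/n and a given precision eps, find the least N such that
# |a_n - 0| < eps for all n >= N (the inner part of the formula).
def threshold(eps):
    n = 1
    while 1 / n >= eps:
        n += 1
    return n

for eps in [0.1, 0.01, 0.001]:
    print(eps, threshold(eps))
```

Every tightening of $\varepsilon$ pushes $N$ further out; the definition only demands that some finite $N$ always exists.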

Obvious examples

1. $a_n=\frac 1n$, or, more generally, $a_n=\frac1{n^p},\ p>0$. The limit is zero.
2. $a_n=c,\ c\in\mathbb R$. Joke.
3. $a_n =n^p,\ p>0$. Diverges.
4. These rules (plus the obvious rules concerning the arithmetic operations) allow one to decide the convergence of any sequence $a_n$ whose general term is a rational function of $n$.
5. Exceptional cases are very rare: e.g., when $a_n=\displaystyle\left(1+\frac 1n\right)^n$.
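The exceptional example in item 5 can at least be observed numerically (this is only an illustration, not a proof of convergence):

```python
# a_n = (1 + 1/n)**n: increasing and bounded, converging to the number
# e = 2.718..., which is unreachable by the arithmetic rules above.
import math

def a(n):
    return (1 + 1 / n) ** n

for n in [10, 100, 10000]:
    print(n, a(n))
print(math.e)
```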

Limits of functions

Let $f\colon D\to\mathbb R$ be a function defined on a subset $D$, and $a\in\mathbb R\smallsetminus D$ a point outside the domain of $f$. We want to “extend” the function to this point, if this makes sense.

For a given precision $\varepsilon>0$ we say that $f$ is $\varepsilon$-constant on a set $S\subseteq D$, if there exists a constant $A\in\mathbb R$ such that $\forall x\in S\ |f(x)-A|<\varepsilon$.

Definition. The function $f\colon D\to\mathbb R$ is said to have a limit equal to $A$ at a point $a\notin D$, if

1. all intersections between $D$ and small intervals $I_\delta=\{|x-a|<\delta\}$ are non-empty and
2. $\forall\varepsilon>0\ \exists\delta>0$ such that the function restricted on $D\cap \{|x-a|<\delta\}$ is $\varepsilon$-constant.

In other words, the function is $\varepsilon$-indistinguishable from a constant on a sufficiently small open interval centered around $a$.

Remark. One can encounter situations where the function is defined at some point $a\in D$, but if we delete this point from $D$, then the function has a limit $A$ at it. If this limit coincides with the original value $f(a)$, then the function is well-behaved (we say that it is continuous). If the limit exists but is different from $f(a)$, then we understand that the function was intentionally twisted, and if we change its value at just one point, it becomes continuous. If $f$ restricted on $D\smallsetminus \{a\}$ has no limit at $a$, we say that $f$ is discontinuous at $a$.

Clearly, such extension by taking limit is possible (if at all) only for points at “zero distance” from the domain of the function.

For more detail read the lecture notes from the past years.

Series

As was mentioned, the problem of calculating limits of explicitly given (i.e., elementary) functions is usually not very difficult. The real fun begins when there is no explicit formula for the terms of the sequence (or the function). This may happen if the sequence is produced by some recurrent (inductive) rule.

The simplest case occurs when the rule is simply summation (should we say “correction”?) of a known nature:

$a_{n+1}=a_n+\text{something explicitly depending on }n$.

If we denote the added value by $b_n$, then the sequence will take the form
$a_1, a_1+b_1, a_1+b_1+b_2,a_1+b_1+b_2+b_3,a_1+b_1+b_2+b_3+b_4,\dots$. If we can perform such summations explicitly and write down the answer as a function of $n$, it would be great.

Example. Consider the case where $b_n=\frac1{n(n+1)}=\frac1n-\frac1{n+1}$. Then we get a “telescopic” sum which can be immediately computed. But this is rather an exception…
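The telescoping can be checked in exact rational arithmetic; a short sketch:

```python
# b_k = 1/(k(k+1)) = 1/k - 1/(k+1), so the partial sums telescope to
# a_n = 1 - 1/(n+1), which converges to 1.
from fractions import Fraction

def partial_sum(n):
    return sum(Fraction(1, k * (k + 1)) for k in range(1, n + 1))

print(partial_sum(10))    # 10/11
print(partial_sum(1000))  # 1000/1001
```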

Another example is the geometric progression, where $b_n=cq^n$ for some constants $c,q$.

In general we cannot write down the sum as a function of $n$, which makes the task challenging.

Definition. Let $b_n$ be a real sequence. We say that the infinite series $\sum_{k=1}^\infty b_k$ converges, if the sequence of its finite sums $a_n=\sum_{k=1}^n b_k$ has a (finite) limit.

Examples.

1. The geometric series $\sum_{k=1}^\infty q^k$ converges if $|q|<1$ and diverges if $|q|\ge 1$.
2. The harmonic series $\sum_{k=1}^\infty \frac1k$ diverges.
3. The inverse square series $\sum_{k=1}^\infty \frac1{k^2}$ converges.

The last two statements follow from the comparison of the series with patently diverging or patently converging series (fill in the details!).
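The contrast between the last two series shows up already in modest partial sums; a numerical sketch (suggestive, not a proof):

```python
# Partial sums: the harmonic series creeps up like log(n) and is
# unbounded, while sum 1/k**2 stays below 2 (compare term-by-term
# with the telescoping series of 1/(k(k-1))).
def harmonic(n):
    return sum(1 / k for k in range(1, n + 1))

def inverse_squares(n):
    return sum(1 / k ** 2 for k in range(1, n + 1))

print(harmonic(10**6))         # about 14.39
print(inverse_squares(10**6))  # about 1.6449
```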

Later on we will concentrate specifically on series of the form $\sum_{k=0}^\infty c_k q^k$ with a given sequence of “Taylor coefficients” $c_0,c_1,c_2,\dots$, which contain a parameter $q$. Considered as functions of $q$, these series exhibit fascinating properties which make them ubiquitous in math and its applications.

Real numbers

There are certain situations when the rational numbers are apparently not sufficient: for instance, the function $f(x)=x^2-2$ is negative at $x=0$, positive at $x=2$ but does not take the intermediate value zero: $\forall x\in\mathbb Q\ f(x)\ne 0$. Another situation concerns the possibility to define the notions of supremum and infimum for infinite sets: the set $A=\{x\in\mathbb Q: x^2<2\}$ is bounded from two sides, but among its upper bounds $B=\{b\in\mathbb Q:\ \forall a\in A\ a\leqslant b\}$ there is no minimal one.

The idea is to adjoin to $\mathbb Q$ solutions of infinitely many inequalities.

To any rational number $a\in\mathbb Q$ one can associate two subsets $L,R\subset\mathbb Q$ as follows: $L=\{l\in \mathbb Q: l\le a\}$ and $R=\{r\in\mathbb Q: a\le r\}$. Then the number $a$ is the unique solution to the infinite system of inequalities of the form $l\le x\le r$ for different choices of $l\in L,\ r\in R$. This system has the following two features:

1. it is self-consistent (non-contradictory): any lower bound $l$ is no greater than any upper bound $r$, i.e., $L\le R$, and
2. it is maximal: together the two sets give $\mathbb Q=L\cup R$, and none of the sets can be enlarged without violating the first condition.

Definition.
A (Dedekind) cut is any pair of subsets $L,R\subseteq\mathbb Q$ satisfying the two conditions above.

If a rational number $a\in\mathbb Q$ satisfies all the inequalities $l\le a,\ a\le r$ for all $l\in L,\ r\in R$, then we call it a root (or a solution) of the cut. Every rational number is the solution to some cut $\alpha=(L,R)$ as above, and this happens if and only if $L\cap R=\{a\}$. Yet not all cuts have rational solutions (give an example!).

We can associate cuts without rational solutions with “missing” numbers which we want to adjoin to $\mathbb Q$. For this purpose we have to show how cuts can be ordered (in a way compatible with the order on $\mathbb Q$) and how arithmetic operations can be performed on cuts.
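A cut without rational solution still pins down its “missing” number as tightly as we wish. A sketch (assuming, as an illustration, that the cut is given by a membership test for its lower set $L$):

```python
# The cut for the missing number sqrt(2): L = {q : q <= 0 or q*q < 2}.
# Bisection over rationals squeezes the root between a lower bound in L
# and an upper bound in R, to any requested precision.
from fractions import Fraction

def in_lower(q):
    return q <= 0 or q * q < 2

def root(eps):
    lo, hi = Fraction(0), Fraction(2)  # lo is in L, hi is in R
    while hi - lo > eps:
        mid = (lo + hi) / 2
        if in_lower(mid):
            lo = mid
        else:
            hi = mid
    return lo

print(float(root(Fraction(1, 10**6))))  # 1.41421...
```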

Order on cuts

Let $\alpha=(L,R),\ \beta=(L',R')$ be two different cuts. We declare that $\alpha\triangleleft\beta$, if $L\cap R'\ne\varnothing$, i.e., if there is a rational number $a\in\mathbb Q$ that is at the same time an upper bound for the cut $\alpha$ and a lower bound for the cut $\beta$. If both cuts had rational solutions, this number would be squeezed between these solutions. In a similar way we define the opposite order: $\alpha\triangleright\beta$ if and only if $L'\cap R\ne\varnothing$.

To see that this definition is indeed a complete order, we need to check that for any two cuts $\alpha,\beta$ one and only one of the three possibilities holds: $\alpha\triangleleft\beta,\ \alpha\triangleright\beta$ or $\alpha=\beta$ (meaning that $L=L',R=R'$). This is a routine check: if the first two possibilities are excluded, then $L\cap R'=L'\cap R=\varnothing$, and therefore $(L\cup L', R\cup R')$ is a self-consistent cut. But because of the maximality condition, this means that $L\cup L'=L=L'$ and $R\cup R'=R=R'$, that is, $\alpha=\beta$.

Arithmetic operations on cuts

If $\alpha=(L,R),\ \beta=(L',R')$ are two cuts which have rational solutions $a,b$, then these solutions satisfy the inequalities $L\le a\le R,\ L'\le b\le R'$ (check that you understand the meaning of these inequalities between sets and numbers ;-)!) Adding these inequalities together means that $c=a+b$ satisfies the infinite system of inequalities $L+L'\le c\le R+R'$, where $L+L'$ stands for the so-called Minkowski sum $L+L'=\{l+l':\ l\in L,\ l'\in L'\}$ (the same for $R+R'$). This allows us to define summation on cuts.

Definition.
The sum of two cuts $\alpha=(L,R),\beta=(L',R')$ is the cut $\gamma=(L+L',R+R')$ with the Minkowski sum in the right hand side.

To define the difference, we first define the cut $-\alpha$ as follows: $-\alpha=(-R,-L)$, where (of course!) $-L=\{-l: l\in L\},\ -R=\{-r: r\in R\}$. Note that the upper and lower bounds exchanged their roles, since multiplication by $-1$ reverses the direction (sense) of the inequalities. Then we can safely define $\alpha-\beta$ as $\alpha + (-\beta)$. Again, one has to check that this definition is well-behaved and that all arithmetic properties are preserved.

To define multiplication, one has to exercise additional care and start with multiplication between positive cuts $\alpha,\beta\triangleright 0$ (do it yourselves!) and then extend it for negative cuts and the zero cut. After introducing this definition, one has to make a lot of trivial checks:

1. that for cuts having rational solutions, we get precisely what we expected, that is, the new operation agrees with the old one on the rational numbers,
2. that they have the same algebraic properties (associativity, distributivity, commutativity etc) as we had for the rational numbers,
3. that they agree with the order that we introduced earlier exactly as this was the case with the rational numbers,
4. … … …. …. …

Of course, nobody ever wrote down formal proofs of these endless properties! (Life is short, and one should not waste it for nothing.) Yet every mathematician can certainly provide a formal proof for any of them, and none of the countless students who passed through this ordeal ever voiced any concern about the validity of these endless nanotheorems. Neither shall we.

Achievement of the stated goals

Once we have constructed the extension of the rational numbers by all cuts, denoted $\mathbb R$ and called the set of real numbers, one has to verify that all the problems we started with were actually resolved. There are a number of theorems about the real numbers that look dull and self-evident unless one knows that a heavy price had to be paid for them. Namely, we can guarantee that:

1. Any nonempty subset $A\subset\mathbb R$ which admits at least one upper bound, admits the minimal upper bound called $\sup A=\sup_{a\in A}a$ (and, of course, the analogous statement holds for $\inf A$).
2. If $\varnothing\ne I_k=[a_k,b_k]\subseteq\mathbb R$ is a family of nested nonempty closed intervals, $I_1\supseteq I_2\supseteq I_3\supseteq\cdots$, then the intersection $I_\infty=\bigcap_{k=1}^\infty I_k$ is also nonempty.
3. Any function $f:[a,b]\to\mathbb R$ continuous on the closed segment $[a,b]$, takes any intermediate value between $f(a)$ and $f(b)$.
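Property 3 is exactly what failed over $\mathbb Q$ for $f(x)=x^2-2$; over $\mathbb R$ a bisection argument (which repeatedly uses property 2) locates the intermediate value. A minimal numerical sketch:

```python
# Bisection: keep halving [a, b] while preserving f(a) < 0 < f(b);
# the nested intervals shrink onto a point where f vanishes.
def bisect(f, a, b, eps=1e-12):
    assert f(a) < 0 < f(b)
    while b - a > eps:
        m = (a + b) / 2
        if f(m) < 0:
            a = m
        else:
            b = m
    return (a + b) / 2

print(bisect(lambda x: x * x - 2, 0.0, 2.0))  # about 1.41421356
```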

For more detailed exposition, read the lecture notes here.

Sunday, November 1, 2015

Tutorial 1, Thu Oct 29, 2015

Hello everyone,

In today's tutorial we dealt with the question “how does one sum infinitely many terms?”. We saw that our intuition has to work hard when jumping from finite quantities to infinite ones, and in particular from the discrete to the continuous, and we recalled that the mathematical discourse around the notion of “infinity” is in an ongoing process of development. In particular, the notion of “convergence of series” is more subtle than it seems based on high school studies (where the discussion is mostly limited to geometric series), or even on an undergraduate degree.

We began with a reminder of the usual sense of convergence of infinite series, based on the convergence of the sequence of partial sums of the series, and continued with a short discussion of series which do not converge in the usual sense, yet simple algebraic manipulations “nevertheless” manage to assign a numerical value to them. We discussed three examples of infinite series of integers which “converge” to a fractional number, a negative number, or both, and touched (very gently) on some of the mathematical tools developed over the years to deal with divergent series.

For instance, for the series $1-1+1-1+1...$ we looked at the limiting cases of the sum formula for a converging infinite geometric series, and at Cesàro's summation method, two methods which assign the value $\frac{1}{2}$ to this series.

For the series $1+2+4+8+...$ we mentioned a change of viewpoint about the terms of the series, so that it can be regarded as a series of 2-adic numbers. From this viewpoint (and very much unlike the usual sense of convergence of series, where the following argument is wrong), the general term of the series tends to zero; hence the series converges 2-adically, and in fact converges there to the value $-1$. Moreover, once we have found a sense in which the series converges, we gain license to apply to it the algebraic manipulations which lead to the same value.

For the series $1+2+3+4+...$ we saw that the algebraic manipulations lead to a result before which our intuition falls silent: $-\frac{1}{12}$. An example of these manipulations can be seen in the video embedded in the original post.

In addition, I briefly noted that there are further summation methods which assign this value to the series, such as Abel summation, Ramanujan summation, and the Riemann zeta function.

To summarize: the jump from a finite number of things to an infinite one is not self-evident, and part of the challenge is to perform the jump in a way that preserves the results agreed upon by the community. Different mathematicians chose different paths, and part of the beauty of mathematics lies in the different ways of looking at the same thing, and thereby understanding it better.
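The Cesàro method mentioned above is easy to sketch numerically (an illustration only, not part of the tutorial):

```python
# Cesaro summation of 1 - 1 + 1 - 1 + ...: the partial sums oscillate
# 1, 0, 1, 0, ..., but their running averages converge to 1/2.
def cesaro_mean(n):
    partial, total = 0, 0
    for k in range(n):
        partial += (-1) ** k  # k-th partial sum
        total += partial
    return total / n

for n in [10, 1000, 10**5]:
    print(n, cesaro_mean(n))
```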
