Sergei Yakovenko's blog: on Math and Teaching

Friday, November 20, 2015

Lecture 4, Nov 19, 2015

Continuity of functions

Let f\colon D\to\mathbb R,\ D\subseteq\mathbb R be a function of one variable, and a\in D a point in its domain. The function is said to be continuous at a, if for any precision \varepsilon>0 the function is \varepsilon-constant (i.e., stays within \varepsilon of its value f(a)) after restriction to a sufficiently short interval around a.

Formally, if we denote by \bold I=\bold I^1 the open unit interval (-1,1), then continuity means that \forall \varepsilon>0\ \exists\delta>0\ f(a+\delta\bold I)\subseteq f(a)+\varepsilon\bold I (check that you understand the meaning of the notation u+v\bold I for a subset of \mathbb R).
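As a quick illustration (not from the lecture), here is a minimal Python sketch of what the definition asks for. The helper name check_continuity and the list of trial values of \delta are my own choices, and a check on a finite grid of sample points is of course only a heuristic, not a proof.

```python
import numpy as np

def check_continuity(f, a, eps, deltas=(1.0, 0.1, 0.01, 0.001), samples=10_001):
    """Look for a delta with f(a + delta*I) contained in f(a) + eps*I,
    where I = (-1, 1); the interval a + delta*I is sampled on a finite grid."""
    for delta in deltas:
        xs = np.linspace(a - delta, a + delta, samples)
        if np.all(np.abs(f(xs) - f(a)) < eps):
            return delta        # this delta works on the sampled grid
    return None                 # none of the tried deltas worked

# f(x) = x^2 at a = 1: for eps = 0.1 the first delta that passes is 0.01
print(check_continuity(lambda x: x**2, a=1.0, eps=0.1))
```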

A function f\colon D\to\mathbb R is said to be continuous on a subset D'\subset D, if it is continuous at all points a\in D' of this subset. Usually we consider the cases where D'=D, that is, functions continuous on their domain of definition.

Remarks.
1. If a is an isolated point of the domain D, then any function is automatically continuous at a for a simple reason: for all sufficiently small \delta>0 the intersection (a+\delta\bold I)\cap D consists of a single point a, so the image is a single point f(a).

2. If a\notin D but \inf_{x\in D} |x-a|=0 and there exists a limit A=\lim_{x\to a}f(x), then one can extend f to a by letting f(a)=A and obtain a function defined on D\cup\{a\} which is continuous at a.

Obvious properties of the continuity

The sum, difference and product of two functions continuous at a point a are again continuous at a. The reciprocal \frac1{f} of a continuous function is continuous at a if f(a)\ne 0.

This is an obvious consequence of the rules of operations on “approximate numbers” (קירובים). When dealing with the sum/difference one has to work with absolute approximations, when dealing with the product/ratio with relative approximations, but ultimately it all boils down to the same argument: if two functions are almost constant on a sufficiently small interval around a, then the result of applying the arithmetic operations to them is also almost constant.

Not-so-obvious property of continuity

When is continuity compatible with passage to the limit? More specifically, we consider the situation where there is an infinite sequence of functions f_n\colon D\to\mathbb R defined on the common domain D. Assume that for any a\in D the values (numbers!) f_n(a)\in\mathbb R form a converging sequence whose limit is denoted by f_*(a) (it depends on a!). What can one say about the function f_*\colon a\mapsto f_*(a)?

Example.
Assume that f_n(x)=x^n and D=[0,1]. All of them are continuous (why?). If a<1, then \lim_{n\to\infty} a^n=0. If a=1, then a^n=1 for every n. Thus the limit \lim_{n\to\infty}f_n(a) exists for all a, but as a function of a\in[0,1] it is discontinuous (it jumps at a=1). Thus without additional precautions a sequence of continuous functions can converge to a discontinuous one.
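A short computation (my own, not part of the lecture) shows this pointwise behaviour numerically:

```python
# Pointwise behaviour of f_n(x) = x^n on [0, 1]:
# for a < 1 the values a^n tend to 0, while at a = 1 they are constantly 1.
for a in (0.5, 0.9, 0.99, 1.0):
    print(a, [round(a**n, 6) for n in (10, 100, 1000)])
# 0.5  -> [0.000977, 0.0, 0.0]
# 0.9  -> [0.348678, 2.7e-05, 0.0]
# 0.99 -> [0.904382, 0.366032, 4.3e-05]
# 1.0  -> [1.0, 1.0, 1.0]
# The limit f_* equals 0 on [0, 1) and 1 at the point 1, hence is discontinuous.
```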

Distance between the functions.
The distance between real numbers a,b\in\mathbb R is the nonnegative number |a-b| which is zero if and only if a=b. Motivated by that, we introduce the distance \|f-g\| between two functions f,g\colon D\to \mathbb R as the expression \sup_{a\in D}|f(a)-g(a)|.

Exercise. Prove that for any three functions f,g,h defined on a common domain D, the “triangle inequality” \|f-g\|\leqslant \|f-h\|+\|h-g\| holds.

Definition. A sequence of functions f_n\colon D\to\mathbb R is said to be uniformly converging to a function f_*\colon D\to\mathbb R, if \lim_{n\to\infty}\|f_n-f_*\|=0.
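To connect this with the previous example, here is a small Python sketch (my own; the supremum is approximated by a maximum over a grid): for f_n(x)=x^n on [0,1] the distance \|f_n-f_*\| does not tend to zero, while on the smaller domain [0,\frac12] it does.

```python
import numpy as np

xs = np.linspace(0.0, 1.0, 10_001)
f_star = np.where(xs < 1.0, 0.0, 1.0)   # the pointwise limit of x^n on [0, 1]

for n in (10, 100, 1000):
    dist_full = np.max(np.abs(xs**n - f_star))    # grid version of sup over [0, 1]
    dist_half = np.max(np.abs(xs[xs <= 0.5]**n))  # sup over [0, 1/2]; the limit there is 0
    print(n, round(dist_full, 3), dist_half)
# dist_full does not tend to 0 (the true supremum equals 1 for every n),
# so the convergence on [0, 1] is not uniform;
# dist_half = (1/2)^n -> 0, so on [0, 1/2] the convergence is uniform.
```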

Theorem. If a sequence of continuous functions converges uniformly, then the limit is also a continuous function.

Indeed, denote by g the limit function, let a\in D be any point, and let \varepsilon>0 be any “resolution”. We need to construct a small interval around a such that g on this interval is \varepsilon-indistinguishable from the value g(a). We split the resolution allowance into two halves. The first half we use to find N such that \|f_n-g\| < \frac\varepsilon2 for all n\ge N. The second half we spend on the continuity: since f_N is continuous, there exists an interval a+\delta\bold I on which f_N is \frac\varepsilon2-indistinguishable from f_N(a). Collecting all inequalities, we see that for any point x\in a+\delta\bold I we have three inequalities: |f_N(a)-g(a)|<\frac\varepsilon2, \ |f_N(x)-g(x)|<\frac\varepsilon2,\ |f_N(x)-f_N(a)|<\frac\varepsilon2. By the triangle inequality, |g(x)-g(a)|< \frac{3\varepsilon}2. Oops! we were heading for \varepsilon! Had we thought ahead of the computation, we should have divided the allowance into unequal parts: \frac\varepsilon3 for the distance \|f_N-g\| (it is used twice, contributing \frac{2\varepsilon}3) and \frac\varepsilon3 for the continuity of f_N. 😉 In any case the outcome is the same: |g(x)-g(a)| can be made smaller than any prescribed resolution.
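For readers who want to see the bookkeeping in action, here is a Python sketch (entirely my own; the sequence f_n is taken to be the partial sums of the exponential series, so that g(x)=e^x on [0,1]). It spends \frac\varepsilon3 on the uniform distance, \frac\varepsilon3 on the continuity of f_N, and then verifies that g moves by less than \varepsilon on the resulting interval:

```python
import math

eps, a = 0.3, 0.5
g = math.exp                           # the uniform limit on [0, 1]

def f(n, x):                           # n-th partial sum of the series for e^x
    return sum(x**k / math.factorial(k) for k in range(n + 1))

# 1) spend eps/3 on the uniform distance; on [0, 1] the tail e^x - f_n(x) is largest at x = 1
N = 1
while g(1.0) - f(N, 1.0) >= eps / 3:
    N += 1

# 2) spend eps/3 on the continuity of f_N at a (a crude search, checking a grid of points)
delta = 1.0
while any(abs(f(N, a + t) - f(N, a)) >= eps / 3
          for t in (delta * k / 100 for k in range(-100, 101))
          if 0.0 <= a + t <= 1.0):
    delta /= 2

# 3) then |g(x) - g(a)| < eps/3 + eps/3 + eps/3 = eps on the interval (a - delta, a + delta)
xs = [a + delta * k / 100 for k in range(-100, 101) if 0.0 <= a + delta * k / 100 <= 1.0]
print(N, delta, max(abs(g(x) - g(a)) for x in xs) < eps)   # prints: 3 0.03125 True
```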

Curves

The notion of continuity, the distance between functions etc. can be generalized from functions of one variable to other classes of functions.

For instance, functions of the form \gamma\colon [0,1]\to\mathbb R^2 can be called (parametrized) curves. Here the argument t\in[0,1] can be naturally associated with time, so \gamma(t) is the position of the moving point at the moment t. We can draw the image \gamma([0,1]), but this drawing does not reflect the timing; to indicate it, we can additionally mark the images, say, \gamma(\frac k{10}),\ k=0,1,\dots,10.

To define continuity for curves, denote by \bold I^2 the unit square \{|x|<1,\ |y|<1\}. A curve \gamma is continuous at a point a\in [0,1] if \forall\varepsilon >0 \ \exists \delta>0 such that \gamma (a+\delta\bold I^1)\subseteq \gamma(a)+\varepsilon \bold I^2. (Do you understand this formula? 😉 )

The distance between two points a=(a_1,a_2),\ b=(b_1,b_2)\in\mathbb R^2 is usually defined as \sqrt{(a_1-b_1)^2+(a_2-b_2)^2}, but this distance is not very different from the expression |a-b|=\max_{i=1,2}|a_i-b_i| (this definition can be immediately generalized to spaces \mathbb R^n of any finite dimension n=3,4,\dots). The distance between two curves has a very similar form: \|f-g\|=\sup_{t\in [0,1]}|f(t)-g(t)|.

Remark. If the functions f,g are continuous, we can replace the supremum by maximum (which is always achieved).
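Here is a hedged sketch (the function name curve_distance, the example curves and the sampling grid are my own) of how this distance between two parametrized curves could be approximated numerically, using the max-metric on \mathbb R^2:

```python
import numpy as np

def curve_distance(f, g, samples=10_001):
    """Approximate sup over t in [0,1] of |f(t) - g(t)| in the max-metric on R^2.
    f and g map an array of parameter values t to an array of shape (len(t), 2)."""
    ts = np.linspace(0.0, 1.0, samples)
    pointwise = np.max(np.abs(f(ts) - g(ts)), axis=1)  # max over the two coordinates
    return pointwise.max()                             # sup over t (on the grid only)

# Two circles of radii 1 and 1.1, traversed with the same timing:
circle1 = lambda ts: np.column_stack([np.cos(2 * np.pi * ts), np.sin(2 * np.pi * ts)])
circle2 = lambda ts: 1.1 * circle1(ts)
print(curve_distance(circle1, circle2))   # ≈ 0.1
```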

Koch snowflake revisited

Now we can return to one of the examples we discussed in Lecture 1, the Koch snowflake. Unlike then, we now have the appropriate language to deal with it.

The process of constructing the curve actually produces a sequence of closed curves. The image of the first curve is an equilateral triangle, the second one gives the Star of David, the third one has no canonical name.

In all cases the new curve \gamma_{n+1} is obtained by taking the previous curve \gamma_n and modifying it on a subset of its domain: instead of traversing a line segment with constant speed, one takes the middle third of this segment and forces \gamma_{n+1} to make a detour. This requires increasing the speed, but we don’t care as long as the trajectory remains continuous. The distance between \gamma_n and \gamma_{n+1} is \frac{\sqrt3}2 times the size of the replaced piece, hence of order \frac1{3^n}.

This observation guarantees that \|\gamma_n-\gamma_{n+1}\|< C(1/3)^n. Since these distances form a convergent geometric series, the sequence of maps \gamma_n\colon [0,1]\to\mathbb R^2 is uniformly Cauchy and hence converges uniformly. The result is a continuous curve \gamma_*\colon[0,1]\to\mathbb R^2 which has “infinite length” (in fact, it has no length at all).
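To make this concrete, here is a sketch (my own construction, not code from the lecture; for simplicity it starts from a single segment rather than the closed triangle, the estimate for each side of the snowflake is the same). It builds the successive Koch refinements as polylines, parametrizes each at constant speed, and checks that \|\gamma_n-\gamma_{n+1}\| decays like (1/3)^n:

```python
import numpy as np

def koch_refine(points):
    """One Koch step: replace every segment of the polyline by the 4-segment detour."""
    new_pts = [points[0]]
    for a, b in zip(points[:-1], points[1:]):
        v = b - a
        perp = np.array([-v[1], v[0]]) / 3          # 90-degree rotation of v/3
        p1 = a + v / 3                              # end of the first third
        p2 = a + v / 2 + perp * (np.sqrt(3) / 2)    # apex of the equilateral "tooth"
        p3 = a + 2 * v / 3                          # start of the last third
        new_pts += [p1, p2, p3, b]
    return np.array(new_pts)

def evaluate(points, ts):
    """Constant-speed parametrization of the polyline over t in [0, 1]."""
    n_seg = len(points) - 1
    s = ts * n_seg
    idx = np.minimum(s.astype(int), n_seg - 1)
    frac = (s - idx)[:, None]
    return points[idx] * (1 - frac) + points[idx + 1] * frac

curves = [np.array([[0.0, 0.0], [1.0, 0.0]])]
for _ in range(6):
    curves.append(koch_refine(curves[-1]))

ts = np.linspace(0.0, 1.0, 20_001)
for n in range(6):
    dist = np.max(np.abs(evaluate(curves[n], ts) - evaluate(curves[n + 1], ts)))
    print(n, dist, dist * 3**n)   # dist is of order (1/3)^n, so dist * 3^n stays bounded
```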
