# Sergei Yakovenko's blog: on Math and Teaching

## Tuesday, November 15, 2016

### Lecture 2 (Nov. 14, 2016).

Filed under: Calculus on manifolds course — Sergei Yakovenko @ 5:07

## Tangent vectors, vector fields, integration and derivations

Continued discussion of calculus in domains of $\mathbb R^n$.

• Tangent vector: vector attached to a point, formally a pair $(a,v):\ a\in U\subseteq\mathbb R^n, \ v\in\mathbb R^n$. Tangent space $T_a U=\ \{a\}\times\mathbb R^n$.
• Differential of a smooth map $F: U\to V$ at a point $a\in U$: the linear map from $T_a U$ to $T_b V,\ b=F(a)$.
• Vector field: a smooth map $v(\cdot): a\mapsto v(a)\in T_a U$.  Vector fields as a module $\mathscr X(U)$ over $C^\infty(U)$.
• Special features of $\mathbb R^1\simeq\mathbb R_{\text{field}}$. Special role of functions as maps $f:\ U\to \mathbb R_{\text{field}}$ and curves as maps $\gamma: \mathbb R_{\text{field}}\to U$.
• Integral curves and derivations.
• Algebra of smooth functions $C^\infty(U)$. Contravariant functor $F \mapsto F^*$ which associates with each smooth map $F:U\to V$ a homomorphism of algebras $F^*:C^\infty(V)\to C^\infty(U)$. Composition of maps vs. composition of morphisms.
• Derivation: an $\mathbb R$-linear map $L:C^\infty(U)\to C^\infty(U)$ which satisfies the Leibniz rule $L(fg)=f\cdot Lg+g\cdot Lf$.
• Vector fields as derivations, $v\simeq L_v$. Action of diffeomorphisms on vector fields (push-forward $F_*$).
• Flow map of a vector field: a smooth map $F: \mathbb R\times U\to U$ (caveat: it may be undefined for some combinations $(t,a)$ unless certain precautions are taken) such that each curve
$\gamma_a=F|_{\mathbb R\times \{a\}}$ is an integral curve of $v$ through the point $a$. The “deterministic law” $F^t\circ F^s=F^{t+s}\ \forall t,s\in\mathbb R$.
•  One-parametric (commutative) group of self-homomorphisms $A^t=(F^t)^*: C^\infty(U)\to C^\infty(U)$. Consistency: $L=\left.\frac{\mathrm d}{\mathrm dt}\right|_{t=0}A^t=\lim_{t\to 0}\frac{A^t-\mathrm{id}}t$ is a derivation (satisfies the Leibniz rule). If $A^t=(F^t)^*$ is associated with the flow map of a vector field $v$, then $L=L_v$.
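The group law $F^t\circ F^s=F^{t+s}$ is easy to test in the simplest situation. A minimal numerical sketch (not from the lecture), assuming the linear vector field $v(x)=ax$ on $\mathbb R$, whose flow map is $F^t(x)=e^{at}x$:

```python
import math

# Flow map of the linear vector field v(x) = a*x on the real line:
# F^t(x) = exp(a*t) * x  solves dx/dt = a*x with initial value x.
a = 0.7

def flow(t, x):
    return math.exp(a * t) * x

# The "deterministic law" F^t ∘ F^s = F^(t+s):
t, s, x0 = 0.3, 1.1, 2.0
assert abs(flow(t, flow(s, x0)) - flow(t + s, x0)) < 1e-12
```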

**Update.** The corrected and amended notes for the first two lectures can be found here. This file replaces the previous version.

# Analgebraic Geometry

It so happened that at the beginning of 2016 I gave a talk at the conference “Geometric aspects of modern dynamics” in Porto, delivered a minicourse at Journées Louis Antoine in Rennes and wrote an expository paper for the European Mathematical Society Newsletter, all devoted to the same subject. The subject, provisionally dubbed “analgebraic geometry”, deals with algebraic-like properties (especially from the point of view of intersection theory) of real and complex analytic varieties defined by ordinary and Pfaffian differential equations with polynomial right-hand sides. Thus

analgebraic = un-algebraic + analytic + algebraic (background) + weak algebraicity-like properties.

It turns out that this analgebraic geometry has very intimate connections with classical problems such as Hilbert's 16th problem, properties of periods of algebraic varieties, analytic number theory and arithmetic geometry.

For more details see the presentation prepared for the minicourse (or the shorter version of the talk) and the draft of the paper.

Any remarks and comments will be highly appreciated.

# Elementary transcendental functions as solutions to simple differential equations

The way logarithmic, exponential and trigonometric functions are usually introduced is not very satisfactory and appears artificial. For instance, the mere definition of the non-integer power $x^a$, $a\notin\mathbb Z$, is problematic. For $a=1/n,\ n\in\mathbb N$, one can define the value as the root $\sqrt[n]x$, but the choice of branch/sign and the possibility of defining it for negative $x$ are ambiguous. For instance, the functions $x^{\frac12}$ and $x^{\frac 24}$ may turn out to be different, depending on whether the latter is defined as $\sqrt[4]{x^2}$ (which makes sense for negative $x$) or as $(\sqrt[4]x)^2$, which makes sense only for positive $x$. But even if we agree that the domain of $x^a$ should be restricted to positive arguments only, there remains a big question why for two close values $a=\frac12$ and $a=\frac{499}{1000}$ the values, say, $\sqrt 2$ and $\sqrt[1000]{2^{499}}$ should also be close…

The right way to introduce these functions is by looking at the differential equations which they satisfy.

A differential equation (of the first order) is a relation, usually rational, involving the unknown function $y(x)$, its derivative $y'(x)$ and some known rational functions of the independent variable $x$. If the relation involves higher derivatives, we speak of higher order differential equations. One can also consider systems of differential equations, involving several relations between several unknown functions and their derivatives.

Example. Any relation of the form $P(x, y)=0$ implicitly defines $y$ as a function of $x$ and can be considered as a trivial equation of order zero.

Example. The equation $y'=f(x)$ with a known function $f$ is a very simple differential equation. If $f$ is integrable (say, continuous), then its solution is given by the integral with variable upper limit, $\displaystyle y(x)=\int_p^x f(t)\,\mathrm dt$ for any meaningful choice of the lower limit $p$. Any two solutions differ by a constant.
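As a sanity check (a sketch, not from the text), one can approximate the integral with variable upper limit numerically and verify that differentiating it recovers $f$; here $f(x)=\cos x$ and $p=0$ are arbitrary choices, and a crude midpoint rule stands in for the integral:

```python
import math

def y(x, n=20000):
    # Midpoint-rule approximation of the solution y(x) = ∫_0^x cos(t) dt.
    h = x / n
    return sum(h * math.cos((k + 0.5) * h) for k in range(n))

# Differentiating the integral (numerically) recovers f(x) = cos(x):
x0, h = 1.2, 1e-3
deriv = (y(x0 + h) - y(x0 - h)) / (2 * h)
assert abs(deriv - math.cos(x0)) < 1e-4
```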

Example. The equation $y'=a(x)y$ with a known function $a(x)$. Even in the case where $a(x)=a$ is a constant, there is no, say, polynomial solution to this equation (why?), except for the trivial one $y(x)\equiv0$. This equation is linear: together with any two functions $y_1(x),y_2(x)$ and any constant $\lambda$, the functions $\lambda y_1(x)$ and $y_1(x)\pm y_2(x)$ are also solutions.

Example. The equation $y'=y^2$ has a family of solutions $\displaystyle y(x)=-\frac1{x-c}$ for any choice of the constant $c\in\mathbb R$ (check it!). However, any such solution “explodes” at the point $x=c$, while the equation itself has no special “misbehavior” at this point (in fact, the equation does not depend on $x$ at all).
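The suggested check can also be done numerically; a small sketch (the constant $c$ and the sample point are arbitrary choices):

```python
# Verify that y(x) = -1/(x - c) satisfies y' = y^2, using a central difference.
c = 1.0

def y(x):
    return -1.0 / (x - c)

x0, h = 0.5, 1e-5
deriv = (y(x0 + h) - y(x0 - h)) / (2 * h)
assert abs(deriv - y(x0) ** 2) < 1e-6
```

Evaluating `y` near `x = c` shows the advertised "explosion" of the solution.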

## Logarithm

The transcendental function $y(x)=\ln x$ satisfies the differential equation $y'=x^{-1}$: this is the only case of the equation $y'=x^n,\ n\in\mathbb Z$, which has no rational solution. In fact, all properties of the logarithm follow from the fact that it satisfies the above equation and the constant of integration is chosen so that $y(1)=0$. In other words, we show that the function defined as the integral $\displaystyle \ell(x)=\int_1^x \frac1t\,\mathrm dt$ possesses all the properties we want. We show that:

1. $\ell(x)$ is defined for all $x>0$, is monotone growing from $-\infty$ to $+\infty$ as $x$ varies from $0$ to $+\infty$.
2. $\ell(x)$ is infinitely differentiable, concave.
3. $\ell$ transforms the operation of multiplication (of positive numbers) into the addition: $\ell(\lambda x)=\ell(\lambda)+\ell(x)$ for any $x,\lambda>0$.
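Property 3 can be sanity-checked numerically; a rough sketch (not from the text), with a midpoint-rule quadrature standing in for the integral $\int_1^x t^{-1}\,\mathrm dt$:

```python
def ell(x, n=100000):
    # Midpoint-rule approximation of ℓ(x) = ∫_1^x dt/t  (x > 0).
    h = (x - 1.0) / n
    return sum(h / (1.0 + (k + 0.5) * h) for k in range(n))

# ℓ turns multiplication into addition: ℓ(λx) = ℓ(λ) + ℓ(x).
lam, x = 2.0, 3.0
assert abs(ell(lam * x) - (ell(lam) + ell(x))) < 1e-6
```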

## Exponent

The above listed properties of the logarithm ensure that there is an inverse function, denoted provisionally by $E(x)$, such that $\ell(E(x))=x$. This function is defined for all real $x\in\mathbb R$, takes positive values and transforms addition into multiplication: $E(\lambda+x)=E(\lambda)\cdot E(x)$. Denoting the value $E(1)$ by $e$, we conclude that $E(n)=e^n$ for all $n\in\mathbb Z$, and $E(x)=e^x$ for all rational values $x=\frac pq$. Thus the function $E(x)$, defined as the inverse to $\ell$, interpolates the exponent for all real arguments. A simple calculation shows that $E(x)$ satisfies the differential equation $y'=y$ with the initial condition $y(0)=1$.

## Computation

Consider the integral operator $\Phi$ which sends any (continuous) function $f:\mathbb R\to\mathbb R$ to the function $g=\Phi(f)$ defined by the formula $\displaystyle g(x)=f(0)+\int_0^x f(t)\,\mathrm dt$. Applying this operator to the function $E(x)$ and using the differential equation, we see that $E$ is a “fixed point” of the transformation $\Phi$: $\Phi(E)=E$. This suggests the following approach to computing the function $E$: choose a function $f_0$ and build the sequence of functions $f_n=\Phi(f_{n-1})$, $n=1,2,3,\dots$. If there exists a limit $f_*=\lim f_n$, then $f_*=\lim f_{n+1}=\lim \Phi(f_n)=\Phi(f_*)$, i.e., this limit is a fixed point of $\Phi$.

Note that the action of $\Phi$ can be very easily calculated on the monomials: $\displaystyle \Phi\biggl(\frac{x^k}{k!}\biggr)=\frac{x^{k+1}}{(k+1)!}$ for $k\ge1$ (check it!), while $\Phi(1)=1+x$. Therefore if we start with $f_0(x)=1$, we obtain the functions $f_n=1+x+\frac12 x^2+\cdots+\frac1{n!}x^n$. This sequence converges to the sum of the infinite series $\displaystyle\sum_{n=0}^\infty\frac1{n!}x^n$, which represents the solution $E(x)$ on the entire real line (check this!). This series can be used for a fast approximate calculation of the number $e=E(1)=\sum_{n=0}^\infty \frac1{n!}$.
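The iteration is easy to carry out on polynomial coefficient lists; a small sketch (not from the text) producing the partial sums and the approximation of $e$:

```python
import math

def phi(c):
    # Picard operator Phi(f)(x) = f(0) + ∫_0^x f(t) dt on coefficient
    # lists c = [c0, c1, ...]: the integral shifts the coefficients and
    # divides by the new degree, and f(0) = c0 becomes the constant term.
    return [c[0]] + [ck / (k + 1) for k, ck in enumerate(c)]

f = [1.0]                      # f_0 = 1
for _ in range(20):
    f = phi(f)                 # f_n = 1 + x + x^2/2! + ... + x^n/n!
e_approx = sum(f)              # evaluate the polynomial f_20 at x = 1
assert abs(e_approx - math.e) < 1e-12
```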

## Differential equations in the complex domain

The function $E(\mathrm ix)=e^{\mathrm ix}$ satisfies the differential equation $y'=\mathrm iy$. The corresponding “motion on the complex plane”, $x\mapsto e^{\mathrm ix}$, is a rotation along the unit circle with unit (absolute) speed, hence the real and imaginary parts of $e^{\mathrm ix}$ are the cosine and sine respectively. In fact, the “right” definition of them is exactly that,

$\displaystyle \cos x=\textrm{Re}\,e^{\mathrm ix},\quad \sin x=\textrm{Im}\,e^{\mathrm ix} \iff e^{\mathrm ix}=\cos x+\mathrm i\sin x,\qquad x\in\mathbb R$.

Thus, the Euler formula “cis” is in fact the definition of sine and cosine. Of course, it can be “proved” by substituting the imaginary value into the Taylor series for the exponent, collecting the real and imaginary parts and comparing them with the Taylor series for the sine and cosine.

In fact, both sine and cosine are in turn solutions of real differential equations: differentiating the equation $y'=\mathrm iy$, one concludes that $y''=\mathrm i^2y=-y$. This can be used to calculate the Taylor coefficients for sine and cosine.
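Both facts are easy to confirm with complex arithmetic; a minimal numerical sketch:

```python
import cmath, math

x = 0.8
z = cmath.exp(1j * x)          # e^{ix}, the "motion on the complex plane"

# |e^{ix}| = 1: the motion stays on the unit circle.
assert abs(abs(z) - 1.0) < 1e-12

# Euler's formula: cos x = Re e^{ix}, sin x = Im e^{ix}.
assert abs(z.real - math.cos(x)) < 1e-12
assert abs(z.imag - math.sin(x)) < 1e-12
```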

For more details see the lecture notes.

Not completely covered in the class: solution of linear equations with constant coefficients and resonances.

## Saturday, September 6, 2014

### Polynomials, matrices and differential equations

Filed under: lecture — Sergei Yakovenko @ 10:33

The transcendental elementary functions $\sin x,\cos x, \mathrm e^x,\ln x$ are traditionally introduced in high school/(pre)calculus courses in a way that obscures their inter-relations and properties. The text below is an extended version of a one-hour lecture delivered at Ulpana de-Shalit for undergraduate students. It purports to introduce these functions as solutions to linear differential equations, in the same way that radicals are introduced as solutions to algebraic equations.

Polynomials, matrices and differential equations, 11 pages (pdf).

## Monday, October 22, 2007

### “Auxiliary Lesson” #1 (Oct. 25, 2007)

Analytic ODEs in real and complex domain: similarities and differences.

1. Background on holomorphic functions. Weierstrass compactness principle.
2. (Ordinary) Differential Equations and their solutions.
3. Contracting mapping principle (recall).
4. Picard integral operator and its contractivity.
5. Existence/uniqueness theorem.
6. Example: Matrix exponent and its computation.
7. Holomorphic vector fields and their trajectories. Equivalence of vector fields.
8. Flow box theorem and rectification theorem for nonsingular vector fields.
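Item 6 of the plan (the matrix exponent) can be illustrated by truncating the series $\exp A=\sum_{k\ge0}A^k/k!$; a sketch (not from the lecture notes) for $2\times2$ matrices, checked on the rotation generator:

```python
import math

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mexp(A, terms=30):
    # Truncated series exp(A) ≈ Σ_{k<terms} A^k / k!.
    S = [[1.0, 0.0], [0.0, 1.0]]     # running sum, starts at A^0/0! = I
    T = [[1.0, 0.0], [0.0, 1.0]]     # current term A^k/k!
    for k in range(1, terms):
        T = matmul(T, A)
        T = [[T[i][j] / k for j in range(2)] for i in range(2)]
        S = [[S[i][j] + T[i][j] for j in range(2)] for i in range(2)]
    return S

# exp of the rotation generator [[0, -t], [t, 0]] is the rotation by angle t.
t = 0.5
E = mexp([[0.0, -t], [t, 0.0]])
assert abs(E[0][0] - math.cos(t)) < 1e-12
assert abs(E[1][0] - math.sin(t)) < 1e-12
```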

Attached is Section 1. It will be available on these pages for a limited time and is password-protected from printing 😦 … I must obey the requirements of the Publisher.

###### Disclaimer. In full compliance with the strike rules (were it still underway), this meeting is defined as a research/orientation seminar on a novel teaching technology. 🙂
