# Sergei Yakovenko's blog: on Math and Teaching

## Families of Fuchsian equations

A Fuchsian equation on $\mathbb C P^1$ with only quasiunipotent singularities admits an upper bound for the number of complex roots of its solutions, which depends on the equation, in particular on its “magnitude” (slope), but also on the relative position of its singularities.

We are interested in finding conditions ensuring that this bound does not “explode”. The easiest way to formulate this is to consider parametric families of Fuchsian equations.

We will assume that the parametric family has the form

$L_\lambda u=0,\qquad L_\lambda=\sum_{k=0}^n p_k(t,\lambda)\partial^k,\quad p_k\in\mathbb C[t,\lambda]\qquad (*)$

with the coefficients $p_k$ polynomial in $t$ and rationally depending on the parameters $\lambda\in\mathbb P^m$ (one can consider them as homogeneous polynomials of the same degree on $\mathbb C^{m+1}$). For some values of $\lambda$ the operator $L_\lambda$ may degenerate (the leading coefficient vanishes identically, not excluding the degeneracy $L_\lambda\equiv0$). Such values, however, should constitute a proper algebraic subvariety $\Lambda\subset\mathbb P^m$.

Note that, because of the semicontinuity, it is sufficient to establish the global uniform bound for the number of isolated roots only for $\lambda\notin\Lambda$: complex roots cannot disappear in the blue sky…

We will impose the following qualitative conditions on the family (*).

1. Isomonodromy: when parameters change, the monodromy group remains “the same”.
2. Tameness (regularity): solutions $u_\lambda(t)$ of the equations grow at most polynomially when $\lambda\to\Lambda$.
3. Quasiunipotence: all singular points always have quasiunipotent monodromy.

The last condition is the “regularity” with respect to the parameters rather than with respect to the independent variable $t$. All conditions need to be accurately formulated, but one can give a simple example producing such families.

Consider a rational matrix-valued 1-form $\Omega$ on $\mathbb P^1\times\mathbb P^m$ with the polar locus $\varSigma\subset \mathbb P^1\times\mathbb P^m$ which is an algebraic divisor (singular hypersurface). Assume that the linear system $\mathrm dX=\Omega X$ is locally solvable and regular on $\varSigma$. Then for any fixed $\lambda$ the first row components of the (multivalued) matrix function $X(t,\lambda)$ satisfy a linear Fuchsian equation $L_\lambda u=0$ rationally depending on $\lambda$. This way we get a family of equations automatically satisfying the first two conditions above. It turns out that it suffices to verify the third condition only for a generic equation of the family.

(Kashiwara theorem follows).

## Boundedness of the slope

In an arbitrary family (*) the slope $\angle L_\lambda$ is a semialgebraic function of the parameter $\lambda\notin\Lambda$, possibly undefined on the locus $\Lambda$ itself, and may well be unbounded.

However, in the isomonodromic regular family this is impossible.

(Grigoriev theorem follows)

Corollary: conformal pseudoinvariance of the slope.

## Oscillatory behavior of Fuchsian equations

### Semilocal theory

Consider a holomorphic linear equation in the punctured unit disk $0<|t|\le 1$, having a unique Fuchsian singularity at the origin $t=0$. Such an equation can always be reduced to the form $Lu=0,\ L=\epsilon^n+a_1(t)\epsilon^{n-1}+\cdots+a_n(t)$, with holomorphic bounded coefficients $a_1,\dots,a_n\in\mathscr O(D)$, $D=\{|t|\leqslant 1\}$, $|a_k(t)|\leqslant A$.

The previous results imply that one can produce an explicit upper bound for the variation of argument of any nontrivial solution $u$ of the equation $Lu=0$ along the boundary of the unit disk $\partial D$: $\left.\mathrm{Var\,Arg\,}u(t)\right|_{t=1}^{t=\mathrm e^{2\pi \mathrm i}}\leqslant V_L=C\cdot n(A+1)$ for some universal constant $C$.

If the solution itself is holomorphic (e.g., in the case of apparent singularities), such bound would imply (by virtue of the argument principle) a bound for the number of zeros of $u$ in $D$. Unfortunately, solutions are usually ramified and the argument principle does not work. Denote by $\mathbf M$ the monodromy operator along the boundary.
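The argument principle mentioned above is easy to illustrate numerically: for a function holomorphic in $D$, the variation of its argument along $\partial D$, divided by $2\pi$, counts the interior zeros. A minimal sketch (the function $u$ and its zeros below are invented for the illustration, not taken from any particular equation):

```python
import numpy as np

# Sample points on the boundary circle |t| = 1.
theta = np.linspace(0.0, 2.0 * np.pi, 4001)
t = np.exp(1j * theta)

# A holomorphic "solution" with two zeros inside D and one outside (made up).
u = (t - 0.3) * (t + 0.4j) * (t - 2.0)

# Variation of the argument along the boundary, via unwrapped angles.
arg = np.unwrap(np.angle(u))
var_arg = arg[-1] - arg[0]

# Argument principle: Var Arg u / (2 pi) = number of zeros inside the disk.
assert round(var_arg / (2.0 * np.pi)) == 2
```

For ramified solutions this bookkeeping fails, which is exactly why the monodromy operator enters the picture.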

##### Definition

A Fuchsian point is called quasiunipotent if all eigenvalues $\mu_1,\dots,\mu_n$ of the matrix $\mathbf M$ have modulus one, $|\mu_k|=1$.

##### Theorem 1

The number of isolated roots of any solution of the equation $Lu=0$ with real coefficients ($a_k(\mathbb R)\subseteq\mathbb R,\ k=1,\dots,n$) and a single quasiunipotent singularity at the origin does not exceed $(2n+1)(2V_L+1)$ in the Riemann domain $\Pi=\{0<|t|\leqslant 1,\ |\mathrm{Arg\,}t|\le 2\pi\}$, where $V_L=Cn(A+1)$ is the parameter bounding the magnitude of the coefficients of $L$.

The proof is based on a version of the Rolle theorem for the “difference operators” $\mathbf P_\mu=\mu^{-1}\mathbf M-\mu\mathbf M^{-1}$ for any unimodular $\mu$, $|\mu|=1$ (so that $\mu^{-1}=\bar\mu$):

$\#\{t\in\Pi:\ u(t)=0\}\leqslant \#\{t\in\Pi:\ \bigl(\mathbf P_\mu u\bigr)(t)=0\}+2V_L.$

A version of the Cayley-Hamilton theorem asserts that the (commutative) composition $\mathbf P=\prod_{\mu}\mathbf P_\mu$ over all eigenvalues of the monodromy operator (counted with their multiplicities) vanishes on all solutions of the real Fuchsian equation.

### Global theory

A linear ordinary differential equation with rational coefficients from $\Bbbk=\mathbb C(t)$ can always be transformed to the form

$Lu=0,\qquad L=p_0(t)\partial^n+p_1(t)\partial^{n-1}+\cdots+p_n(t),\qquad p_0,\dots,p_n\in\mathbb C[t].\qquad (*)$

It may depend on additional parameters $\lambda=(\lambda_1,\dots,\lambda_r)\in\mathbb C^r$: if this dependence is rational, then we may assume that the coefficients of the operator are polynomials from $\mathbb C[t,\lambda]$. The new feature is the appearance of singular perturbations: for some values of the parameters $\lambda=\lambda_*$ the leading coefficient $p_0(~\cdot~,\lambda_*)$ may vanish identically in $t$, meaning that the order of the corresponding equation drops to a smaller value. This phenomenon is known to cause numerous troubles of analytic nature.

Changing the independent variable to $\tau=1/t$ allows one to investigate the nature of the singularity at the infinite point $t=\infty\in\mathbb C P^1$. The equation is called Fuchsian if it is Fuchsian at each of its singular points on the Riemann sphere $\mathbb C P^1=\mathbb C\cup\{\infty\}$.

Assume that infinity is non-singular (this can always be achieved by a Möbius transformation of the independent variable $t$). Then a Fuchsian equation with the singular locus $Z=\{z_1,\dots,z_m\}\subset\mathbb C$ can always be transformed to the form $Mu=0$, where $M$ is the operator

$M= E^n+q_1(t)E^{n-1}+\cdots+q_{n-1}(t)E+q_n(t),$

$E=E_Z=(t-z_1)\cdots(t-z_m)\partial$

(nonsingularity at infinity implies certain bounds on the degrees of the polynomials $p_k\in\mathbb C[t]$). However, the coefficients of this form depend rationally not only on the coefficients of the original equation (*), but also on the location of the points $\{z_1, \dots,z_m\}$.

#### Definition.

The slope of the operator $L$ from (*) is defined as the maximum

$\displaystyle\angle L=\max_{k=1,\dots,n}\frac{\|p_k\|}{\|p_0\|}$

where the norm of a polynomial $p(t)=\sum_0^r c_j t^j\in\mathbb C[t]$ is the sum $\|p\|=\sum_j |c_j|$.
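With this norm the slope is straightforward to compute from the coefficient lists; a small sketch (the operator below is a made-up example, coefficients listed from the constant term up):

```python
from fractions import Fraction

def poly_norm(coeffs):
    """l1-norm ||p|| = sum_j |c_j| of a polynomial given by its coefficients."""
    return sum(abs(c) for c in coeffs)

def slope(p):
    """Slope max_{k>=1} ||p_k||/||p_0|| of L = p_0 d^n + p_1 d^(n-1) + ... + p_n."""
    return max(Fraction(poly_norm(pk), poly_norm(p[0])) for pk in p[1:])

# Hypothetical operator L = (t^2 - 1) d^2 + 3t d + (t - 5):
L = [[-1, 0, 1],   # p_0 = t^2 - 1,  ||p_0|| = 2
     [0, 3],       # p_1 = 3t,       ||p_1|| = 3
     [-5, 1]]      # p_2 = t - 5,    ||p_2|| = 6
assert slope(L) == 3   # max(3/2, 6/2)
```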

Simple inequalities:

1. Any polynomial $p(t)$ of known degree $d=\deg p$ and norm $M=\|p\|$ admits an explicit upper bound for $|p(t)|$ on any disk $\{|t|\leqslant R\}$: $|p(t)| \leqslant MR^d$ for $R>1$.
2. A polynomial of unit norm $\|p\|=1$ admits a lower bound for $|p(t)|$ for points distant from its zero locus $Z=\{t:\ p(t)=0\}$. More precisely,

$\displaystyle |p(t)|\geqslant 2^{-O(d)}\left(\frac rR\right)^d,\qquad r=\mathrm{dist}(t,Z),\quad R=|t|>1.$
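The first inequality is easy to sanity-check numerically (the second involves an unspecified constant $2^{-O(d)}$ and is not amenable to such a direct test). A sketch with a random polynomial, coefficients listed lowest degree first:

```python
import numpy as np

rng = np.random.default_rng(1)
c = rng.normal(size=8) + 1j * rng.normal(size=8)   # c_0, ..., c_7
d, M = len(c) - 1, np.sum(np.abs(c))               # degree d, norm ||p|| = M

for R in (1.5, 2.0, 5.0):                          # any R > 1
    t = R * np.exp(1j * np.linspace(0.0, 2.0 * np.pi, 500))
    # |p(t)| <= sum_j |c_j| R^j <= M R^d on the circle |t| = R
    assert np.max(np.abs(np.polyval(c[::-1], t))) <= M * R**d
```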

We expect that for an equation having only quasiunipotent Fuchsian singular points, the number of isolated roots of solutions can be explicitly bounded in terms of $n=\mathrm{ord }L,\ d=\max\deg p_k$ and $B=\angle L$. Indeed, it looks like we can cut out circular neighborhoods of all singularities and apply Theorem 1.

The trouble occurs when singularities are allowed to collide or almost collide. Then any slit separating them will necessarily pass through the area where the leading coefficient $p_0$ is dangerously small.

## Tuesday, November 25, 2014

### Schedule change

Filed under: Analytic ODE course,schedule — Sergei Yakovenko @ 11:53

The next lecture is moved from its usual Friday time slot to Wednesday (tomorrow), November 26, 10:00. This is a one-time move.

### Lecture 7 (Nov. 24)

Filed under: Analytic ODE course,lecture — Sergei Yakovenko @ 11:50

## Geometric form of non-oscillation theorems

Solutions of linear systems $\dot x(t)=A(t)x(t), \ x\in\mathbb R^n,\ t\in[0,\ell]$ can oscillate wildly relative to hyperplanes $(p,x)=0, \ p\in\mathbb R^{n*}\smallsetminus 0$. However, there exists a class of systems for which one can produce bounds on this oscillation.

Let $\Gamma:t\mapsto x(t)$ be a smooth parametrized curve. Its osculating frame is the tuple of vector functions $v_1(t)=\dot x(t)$ (velocity), $v_2(t)=\dot v_1(t)$ (acceleration), and so on up to $v_n(t)=\dot v_{n-1}(t)$. Generically these vectors are linearly independent for all $t$ except isolated points. The differential equations defining the curve up to a rigid motion have a “companion form”,

$\dot v_k=v_{k+1},\quad k=1,\dots,n-1,\qquad \dot v_n=\sum_{i=1}^n\alpha_i(t)v_i,\quad \alpha_i(t)\in\mathbb R.$

Note that this is a vector ODE with scalar coefficients, i.e., a tuple of identical scalar ODEs. Besides, it may exhibit singularities: if the osculating frame degenerates (which may well happen at isolated points of the curve), the coefficients of this equation exhibit a pole at the corresponding moments of time $t\in[0,\ell]$.

However, the osculating frame is not a natural object: it depends on the parametrization. The invariant notion is the osculating flag, the flag of subspaces spanned (in $T_x\mathbb R^n\simeq\mathbb R^n$) by the vectors, $\mathbb R v_1\subset \mathbb Rv_1+\mathbb Rv_2\subset\cdots$. The flag can be naturally parametrized by the orthogonalization procedure applied to the osculating frame: by construction, this means that we consider the $n$-tuple of orthonormal vectors $e_1(t),\dots,e_n(t)$ with the property that

$\mathrm{Span\ }(v_1,\dots,v_k)= \mathrm{Span\ }(e_1,\dots,e_k),\qquad \forall k=1,\dots, n-1.$

This new frame satisfies the Frenet equations: their structure follows from the invariance of the flag and the orthogonality of the frame.

$\dot e_k(t)=-\varkappa_{k-1}(t)e_{k-1}(t)+\varkappa_{k}(t)e_{k+1}(t),\qquad \varkappa_0\equiv\varkappa_{n}\equiv0.$

The functions $\varkappa_1(t),\dots,\varkappa_{n-1}(t)$ are called the Frenet curvatures: they are nonnegative except for the last one (the hypertorsion), which has a sign and may change it at isolated hyperinflection points.

Definitions. (Absolute) integral curvatures of a smooth (say, real analytic) curve $\Gamma:[0,\ell]\to\mathbb R^n$, parametrized by the arclength $t$, are the quantities $K_j=\int_0^\ell|\varkappa_j(t)|\,\mathrm dt$, $j=1,\dots,n-1$, and $K_n=\pi\#\{t:\ \varkappa_{n-1}(t)=0\}$ (the last quantity, equal to the number of hyperinflection points up to the constant $\pi$, is called the integral hyperinflection).
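These definitions can be tested on a circular helix in $\mathbb R^3$, where both Frenet curvatures are constant. A symbolic sketch (the specific helix, with radius $3/5$ of a $(3,4,5)$ triangle, is chosen only so that everything comes out rational):

```python
import sympy as sp

s = sp.symbols('s', real=True)
a, b = 3, 4
w = sp.sqrt(a**2 + b**2)                      # = 5; arclength parametrization
x = sp.Matrix([a * sp.cos(s / w), a * sp.sin(s / w), b * s / w])

# Osculating frame v_k = x^{(k)}(s), orthonormalized by Gram-Schmidt:
v = [x.diff(s, k) for k in (1, 2, 3)]
e = []
for vk in v:
    u = vk
    for ej in e:
        u = u - vk.dot(ej) * ej
    e.append(sp.simplify(u / sp.sqrt(sp.simplify(u.dot(u)))))

# Frenet curvatures kappa_k = <e_k', e_{k+1}>: constants a/w^2 and b/w^2.
kappa = [sp.simplify(e[k].diff(s).dot(e[k + 1])) for k in (0, 1)]
assert kappa == [sp.Rational(3, 25), sp.Rational(4, 25)]

# Integral curvatures K_j = int_0^ell |kappa_j| ds over one full turn:
ell = 2 * sp.pi * w
K = [sp.integrate(sp.Abs(k), (s, 0, ell)) for k in kappa]
```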

Let $\Gamma:[0,\ell]\to\mathbb R^n\smallsetminus\{0\}$ be a smooth curve avoiding the origin in the space. Its absolute rotation around the origin $\Omega(\Gamma,0)$ is defined as the length of its spherical projection to the unit sphere, $x\mapsto \frac x{\|x\|}$. The absolute rotation $\Omega(\Gamma, a)$ around any other point $a\notin\Gamma$ is defined by translating this point to the origin.

If $L\subset\mathbb R^n$ is a $k$-dimensional affine subspace disjoint from $\Gamma$ and $P_L:\mathbb R^n\to L^\perp$ is the orthogonal projection onto the orthogonal complement $L^\perp$, the absolute rotation $\Omega(\Gamma, L)$ of $\Gamma$ around $L$ is the absolute rotation of the curve $P_L\circ\Gamma$ around the point $P_L(L)\in L^\perp\simeq \mathbb R^{n-k}$.

The absolute rotation of $\Gamma$ around an affine hyperplane $L$ is defined as $\pi\cdot \#(\Gamma\cap L)$.

Formally the 0-sphere $\mathbb S^0=\{\pm 1\}\subset\mathbb R^1$ is not connected, but it is convenient to make it into the metric space with two “antipodal” points at the distance $\pi$, similarly to higher dimensional unit spheres with antipodal points always distanced at $\pi$.

Denote by $\Omega_k(\Gamma)$ the supremum $\sup_{\dim L=k}\Omega(\Gamma,L)$, where the supremum is taken over all affine subspaces $L$ of dimension $k$ in $\mathbb R^n$.

Main Theorem.

$\Omega_k(\Gamma)\leqslant n + 4\bigl(K_1(\Gamma)+\cdots+K_{k+1}(\Gamma)\bigr) \qquad \forall k=0,\dots,n-1$.

The proof of this theorem is based on a combination of arguments from integral geometry and the Frobenius formula for a differential operator vanishing on given, say, real analytic functions $f_1(t),\dots,f_n(t)$. Denote by $W_k(t)$ the Wronski determinant of the first $k$ functions $f_1,\dots,f_k$, adding for convenience $W_0\equiv 1,\ W_1\equiv f_1$. These Wronskians are real analytic, and assuming that $W_n$ does not vanish identically, we can construct the linear $n$th order differential operator

$\displaystyle \frac{W_n}{W_{n-1}}\,\partial\,\frac{W_{n-1}}{W_{n}}\cdot\frac{W_{n-1}}{W_{n-2}}\,\partial\,\frac{W_{n-2}}{W_{n-1}}\,\cdots\, \frac{W_2}{W_1}\,\partial\,\frac{W_1}{W_2}\cdot\frac{W_1}{W_0}\,\partial\,\frac{W_0}{W_1}.$

One can instantly see that this operator is monic (composition of monic operators of order 1) and by induction prove that it vanishes on all functions $f_1,\dots, f_n$.
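The factorization can be checked symbolically on a made-up tuple of functions; a quick sketch:

```python
import sympy as sp

t = sp.symbols('t')
f = [sp.exp(t), sp.exp(2 * t), t]            # a test tuple f_1, f_2, f_3

def wronskian(fs):
    """Wronski determinant of the tuple fs (W_0 = 1 for the empty tuple)."""
    m = len(fs)
    if m == 0:
        return sp.Integer(1)
    return sp.Matrix(m, m, lambda i, j: sp.diff(fs[j], t, i)).det()

W = [wronskian(f[:k]) for k in range(len(f) + 1)]

def L(u):
    """The monic factors (W_k/W_{k-1}) d (W_{k-1}/W_k), composed right to left."""
    for k in range(1, len(f) + 1):
        u = (W[k] / W[k - 1]) * sp.diff(W[k - 1] / W[k] * u, t)
    return sp.simplify(u)

assert all(L(g) == 0 for g in f)             # L annihilates f_1, ..., f_n
```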

The straightforward application of the Rolle theorem guarantees that if all the Wronskians are nonvanishing on $[0,\ell]$, then the operator is disconjugate and no nontrivial linear combination $\sum c_i f_i(t)$ can have more than $n-1$ isolated roots.

If the Wronskians $W_k(t)$ are allowed to have isolated roots, $\nu_k$ of them counted with multiplicity, then the maximal number of zeros that a linear combination as above may exhibit is bounded by $(n-1)+4\sum_{k=1}^n \nu_k$.

References.

1. A. Khovanskii, S. Yakovenko, Generalized Rolle theorem in $\mathbb R^n$ and $\mathbb C$. Contains detailed description of the so called Voorhoeve index, the total variation of argument of an analytic function on the boundary of its domain and why this serves as a substitute for the Rolle theorem over the complex numbers. As a corollary, rather sharp bounds for the number of complex roots of quasipolynomials $\sum_k p_k(z)\mathrm e^{\lambda_k z}$, $\lambda_k\in\mathbb C,\ p_k\in\mathbb C[z]$ in complex domains are obtained.
2. D. Novikov, S. Yakovenko, Integral curvatures, oscillation and rotation of smooth curves around affine subspaces. Contains the proof of the Main Theorem cited above, with slightly worse weights attached to the integral curvatures.
3. D. Nadler, S. Yakovenko, Oscillation and boundary curvature of holomorphic curves in $\mathbb C^n$. A complex analytic version of the Main theorem with improved estimates.

## Thursday, November 20, 2014

### Lecture 6 (Nov. 21, 2014)

Filed under: Analytic ODE course — Sergei Yakovenko @ 8:20

## Zeros of solutions of linear equations

Nontrivial (i.e., not identically zero) solutions of linear ordinary differential equations obviously possess certain properties concerning their roots (points where these solutions vanish). The simplest, in a sense paradigmatic, property is the following.

Prototheorem. Let $u$ be a nontrivial solution of a sufficiently regular linear ordinary differential equation $Lu=0$ of order $n>0$. Then $u$ cannot have a root of multiplicity greater than $n-1$.

Here by regularity we mean the condition that the operator $L=\partial^n+a_1(t)\partial^{n-1}+\cdots+a_{n-1}(t)\partial+a_n(t)$ has coefficients smooth enough to guarantee that any solution $u(t)$ near any point $a$ in the domain of its definition is uniquely determined by the initial conditions $u(a),u'(a),\dots,u^{(n-1)}(a)$.

Indeed, if $u$ has a root of multiplicity $n$ at $a$, that is, $u(a)$ and the first $n-1$ derivatives of $u$ at $a$ all vanish, then $u^{(n)}(a)=0$ by virtue of the equation, and hence by uniqueness $u(t)$ must be identically zero.

In particular, solutions of a first order equation $u'+a_1(t)u=0$ are nonvanishing, solutions of any second order equation $u''+a_1(t)u'+a_2(t)u=0$ may have only simple roots, etc.
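For the second-order case this is visible in a one-line computation: for the model equation $u''+u=0$ the quantity $u^2+(u')^2$ is a nonzero constant along any nontrivial solution, so $u$ and $u'$ never vanish simultaneously and all roots are simple. A sketch in sympy:

```python
import sympy as sp

t, c1, c2 = sp.symbols('t c1 c2', real=True)
u = c1 * sp.cos(t) + c2 * sp.sin(t)        # general solution of u'' + u = 0

# The "energy" u^2 + (u')^2 is constant, hence u and u' have no common zero
# unless c1 = c2 = 0 (the trivial solution).
assert sp.simplify(u**2 + sp.diff(u, t)**2) == c1**2 + c2**2
```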

Theorem (de la Vallée Poussin, 1929). Assume that the coefficients of the LODE

$u^{(n)}+a_1(t)u^{(n-1)}+\cdots+a_{n-1}(t)u'+a_n(t)u=0,\qquad t\in[0,\ell],\qquad (\dag)$

are explicitly bounded,  $|a_k(t)|\leqslant A_k\in\mathbb R_+,\ \forall t\in[0,\ell],\ k=1,\dots,n$.

Assume that the bounds are small relative to the length of the interval, i.e.,

$\displaystyle \sum_{k=1}^n \frac{A_k}{k!}\ell^k<1.\qquad (*)$

Then any nontrivial solution of the equation has no more than $n-1$ isolated roots on $[0,\ell]$.
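A numerical illustration of the theorem; the example $u''+u=0$ is a made-up choice for which the condition (*) reads $\ell^2/2<1$:

```python
import math

def vdp_lhs(A, ell):
    """Left-hand side of (*): sum_k A_k ell^k / k!, with A = [A_1, ..., A_n]."""
    return sum(Ak * ell**k / math.factorial(k)
               for k, Ak in enumerate(A, start=1))

# For u'' + u = 0 we may take A_1 = 0, A_2 = 1, so (*) holds iff ell < sqrt(2):
ell = 1.4
assert vdp_lhs([0.0, 1.0], ell) < 1.0

# Consistently, sin(t) has at most n - 1 = 1 root on any window of length 1.4
# (its roots k*pi are more than 1.4 apart):
for k in range(100):
    t0 = 0.1 * k
    roots_in_window = [m * math.pi for m in range(40)
                       if t0 <= m * math.pi <= t0 + ell]
    assert len(roots_in_window) <= 1
```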

## Novikov’s counterexample

What about linear systems of the first order?

Consider the system $\dot x=A(t)x$ with $x=(x_1,\dots,x_n)\in \mathbb R^n$ and the norm $\|A(t)\|$ explicitly bounded on $[0,\ell]$. Consider all possible linear combinations $u=\sum_k c_k x_k(t),\ c\in\mathbb R^n$. Can one expect a uniform upper bound for the number of roots of all combinations?

Let $a(t)$ be a polynomial having many zeros on $[0,\ell]$. Consider the $2\times 2$-system of the form

$\displaystyle \dot x_1=a(t)x_1,\qquad \dot x_2=(\dot a+ a^2)x_1.$

The first equation defines a nonvanishing function $x_1(t)$; the second defines its derivative $x_2=\dot x_1=a(t)x_1$, which vanishes at all roots of $a(t)$.

By replacing $a(t)$ with $\varepsilon a(t)$ one can achieve an arbitrarily small sup-norm of the coefficients of this system on the segment $[0,\ell]$ (or even on any open complex neighborhood of this real segment). Thus, no matter how small the coefficients are, the second component will have the prescribed number of isolated roots.
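A numerical sketch of the counterexample; the polynomial $a$ below is a made-up choice with nine roots in $(0,1)$:

```python
import numpy as np

eps = 1e-6
roots = np.linspace(0.1, 0.9, 9)             # prescribed roots of a(t)

def a(t):
    return eps * np.prod(t - roots)

# x_1 = exp(int_0^t a) > 0 never vanishes, while x_2 = a(t) x_1(t) satisfies
# dx_2/dt = (da/dt + a^2) x_1 and vanishes exactly at the roots of a,
# however small eps is.  Since x_1 > 0, it suffices to count sign changes of a:
t = np.linspace(0.0, 1.0, 10000)             # grid avoiding the exact roots
vals = np.array([a(tk) for tk in t])
sign_changes = int(np.sum(vals[:-1] * vals[1:] < 0))
assert sign_changes == 9
```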

## Complexification

What about complex valued versions? There is no Rolle theorem for them.

I will describe three possible replacements: Kim’s theorem (1963), nearest in spirit, and two versions of the argument principle.

Theorem (W. Kim)
Assume that an analytic LODE

$u^{(n)}+a_1(z)u^{(n-1)}+\cdots+a_{n-1}(z)u'+a_n(z)u=0,\qquad z\in D\subseteq\mathbb C$

is defined in a convex compact subset $D$ of diameter $\ell$ and the condition (*) holds. Then this equation is disconjugate in $D$: any solution has at most $n-1$ isolated roots.

This result follows from an interpolation inequality of the following type: if $u(z)$ is holomorphic in $D$ and has $n$ isolated roots there, then $\|u\|_D\leqslant \frac{\ell^n}{n!}\|u^{(n)}\|_D$ (the maximum modulus norm is assumed).

Consider the equation $(\dag)$ on the real interval but with complex-valued coefficients (and solutions). Solutions will then be real parametrized curves $u:[0,\ell]\to\mathbb C$ which only exceptionally rarely have roots. Instead of counting roots, one can measure their rotation around the origin $0\in\mathbb C$, defined as $R(u)=|\mathrm{Arg}~u(\ell)-\mathrm{Arg}~u(0)|$ for any continuous choice of the argument.

Theorem. Assume that

$\displaystyle \sum_{k=1}^n \frac{A_k}{k!}\ell^k<\frac12.$

Then the rotation of any nontrivial solution $u$ is explicitly bounded: $R(u)<\pi (n+1)$.

If an analytic LODE with explicitly bounded coefficients is defined, say, on a triangle $D$, then application of this result to the sides of the triangle yields an explicit upper bound for the number of isolated roots of analytic solutions inside the triangle.

Reference

S. Yakovenko, On functions and curves defined by differential equations, §2.

## Tuesday, November 18, 2014

### Lecture 5 (Nov. 17)

Filed under: Analytic ODE course,lecture — Sergei Yakovenko @ 12:27

## Fuchsian equivalence and Fuchsian classification

Definitions
A (formal, resp. genuine) Fuchsian operator of order $n$ is a (formal, resp. converging) series of the form $L=\sum_{k=0}^\infty t^k p_k(\epsilon)$ with the coefficients $p_k\in\mathbb C[\epsilon]$ from the ring of polynomials in the variable $\epsilon$, with $n=\deg p_0\ge \deg p_k\ \forall k=1,2,\dots$.
The polynomial $p_0\in\mathbb C[\epsilon]$ is called the Euler part of $L$.

Two Fuchsian operators $L,M$ are $\mathscr F$-equivalent (formally or analytically), if there exist two Fuchsian operators $H,K$ such that $MH=KL$ and the Euler parts of $H,L$ are mutually prime in $\mathbb C[\epsilon]$.

Unlike the Weyl algebra $\mathscr M(\mathbb C,0)[\epsilon]$, the collection of Fuchsian operators is not a subalgebra, although it is “multiplicatively” (compositionally) closed.

The Fuchsian equivalence is indeed reflexive, transitive and symmetric. The first two properties are obvious; to prove the last one an additional effort is required. Indeed, for two Fuchsian operators $L,H$ of order $n$ with mutually prime Euler parts, one can construct two operators $U,V$ with holomorphic coefficients so that the identity $UL+VH=1$ holds, but the leading terms of $U,V$ may well degenerate, thus violating the Fuchsian condition. However, one can always find such a pair of operators of order greater by 1 which will still be Fuchsian. The rest is easy.

The following results can be proved by more or less direct computation in the algebra $\mathscr W=\mathbb C[[t]]\otimes\mathbb C[\epsilon]$:

• A Fuchsian operator with a nonresonant Euler part is $\mathscr F$-equivalent to its Euler part.
• Any Fuchsian operator is $\mathscr F$-equivalent to a polynomial operator from $\mathbb C[t]\otimes\mathbb C[\epsilon]$.
• Any Fuchsian operator is $\mathscr F$-equivalent to a polynomial operator of the form $L=(\epsilon -\lambda_1+q_1(t))\cdots(\epsilon -\lambda_n+q_n(t))$ with $q_1(t),\dots,q_n(t)$ being polynomials without free terms, $q_i(0)=0$, which is Liouville integrable.
• A Fuchsian operator has trivial (identical) monodromy if and only if it is $\mathscr F$-equivalent to an Euler operator with pairwise different integer roots. The corresponding equation has an apparent singularity (all solutions are analytic) if and only if all these roots are pairwise different nonnegative integers.

## Wednesday, November 12, 2014

### Lecture 4 (Nov. 14, 2014)

Filed under: Analytic ODE course,lecture — Sergei Yakovenko @ 5:47

## Algebraic theory of linear ordinary differential operators

• Differential field $\Bbbk=\mathscr M(\mathbb C^1,0)$ of meromorphic germs of functions of one variable $t\in(\mathbb C^1,0)$ + derivation $\partial =\frac{\mathrm d}{\mathrm dt}$ produce noncommutative polynomials $\Bbbk[\partial]$: a polynomial $L=\sum_{j=0}^n a_j\partial ^{n-j}$ acts on $\Bbbk$ in a natural way.
• The equation $Lu=0$ only exceptionally rarely has a solution in $\Bbbk$, but one can always construct a differential extension of $\Bbbk$ which will contain solutions of this equation.
• Analytically solutions of the equation form a tuple of functions $(u_1,\dots,u_n)$ analytic and multivalued in a punctured neighborhood of the origin. The multivaluedness is very special: the linear span remains the same after the analytic continuation, hence there exists a matrix $M\in\mathrm{GL}(n,\mathbb C)$ such that $\Delta (u_1,\dots,u_n)=(u_1,\dots, u_n)\cdot M$.
• Instead of $\partial$, any other derivation can  be used, in particular, the Euler derivation $\epsilon=t\partial$.
• Example. Equations with constant coefficients have the form $L=\sum c_j \partial^{n-j}$ with constant coefficients $c_j\in\mathbb C$. Such an operator can always be factorized into commuting factors, $L=c_0\,\prod_{\lambda_i\in\mathbb C} (\partial-\lambda_i)^{\nu_i}$ with $\sum\nu_i=n=\deg L$. A fundamental system of solutions consists of quasipolynomials $q_{ik}(t)=\mathrm e^{\lambda_i t}t^k$, $0\leqslant k < \nu_i$. In a similar way the Euler operator has the form $L=\sum c_j\epsilon^{n-j}$ and its solutions are functions $u_{ik}=t^{\lambda_i}\ln^k t$, $k=0,1,\dots,\nu_i-1$ (look at the model equation $\epsilon^\nu u=0$).
• Weyl equivalence of two operators. Two operators $L,M\in\Bbbk[\partial]$ of the same order are called Weyl equivalent, if there exists an operator $H\in\Bbbk[\partial]$ which maps any solution $u$ of the equation $Lu=0$ to a solution  $v=Hu$ of the equation $Mv=0$ isomorphically (i.e., no nontrivial solution is mapped to zero).
The above definition means that the composition $MH$ vanishes on all solutions of $Lu=0$, hence must be divisible by $L$: $MH=KL$ for some $K\in\Bbbk[\partial]$. Note that the operator represented by each side of this equality is a non-commutative analog of the least common multiple of the mutually prime operators $H,L$: it is divisible by both $L$ and $H$.
• Theorem. The Weyl equivalence is indeed an equivalence relation: it is reflexive, symmetric and transitive.
The only thing that needs to be proved is the symmetry. Since $H, L$ are mutually prime, there exist two operators $U,V\in\Bbbk[\partial]$ such that $UL+VH=1$, hence $LUL+LVH=L$. This identity means that $LVH=L-LUL$ is simultaneously divisible by $H$ (obviously) and by $L$. Hence $LVH$ is divisible by their least common multiple $KL=MH$: there exists an operator $W\in\Bbbk[\partial]$ such that $LVH=W\cdot MH=WMH$. But since the algebra $\Bbbk[\partial]$ has no zero divisors, the right factor $H$ can be cancelled, implying $LV=WM$, which means that $V$ maps solutions of $Mv=0$ into those of $Lu=0$.
• Different flavors of Weyl equivalence: regular (nonsingular), requiring $H, K$ to be nonsingular, or arbitrary.
• Theorem. Any nonsingular operator $L=\partial^n+\sum_{j=1}^n a_j \partial^{n-j}$ with holomorphic coefficients $a_j\in\mathscr O(\mathbb C,0)$ is regular Weyl equivalent to the operator $M=\partial^n$.
This result is analogous to the rectification theorem reducing any nonsingular system $\mathrm dX=\Omega X$ to $\mathrm dX=0$.
• Theorem. Any Fuchsian operator is Weyl equivalent to an Euler operator.
This is similar to the meromorphic classification of tame systems. The conjugacy $H$ may be non-Fuchsian.
• Missing part: a genuine analog of holomorphic classification of Fuchsian systems.
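The Example bullet above can be verified symbolically; a small sketch (the order $\nu=3$ and the test functions are arbitrary choices):

```python
import sympy as sp

t, lam = sp.symbols('t lambda', positive=True)

def apply_power(op, nu, u):
    """Apply (op - lam)^nu to u."""
    for _ in range(nu):
        u = op(u) - lam * u
    return sp.simplify(u)

d = lambda u: sp.diff(u, t)               # the derivation  d/dt
eps = lambda u: t * sp.diff(u, t)         # the Euler derivation  t d/dt

# (d - lam)^3 annihilates the quasipolynomial t^2 e^{lam t} ...
assert apply_power(d, 3, t**2 * sp.exp(lam * t)) == 0
# ... and (eps - lam)^3 annihilates t^lam ln^2 t:
assert apply_power(eps, 3, t**lam * sp.log(t)**2) == 0
```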

## Poincare-Dulac-Fuchs classification of Fuchsian operators

Instead of representing operators as non-commutative polynomials in $\partial$ or in $\epsilon$, one can represent them as non-commutative (formal) Taylor series of the form $L=\sum_{k\geqslant 0}t^k p_k(\epsilon)$ with the coefficients $p_k\in\mathbb C[\epsilon]$ from the commutative algebra of univariate polynomials, but not commuting with the “main variable” $t$.

Such an operator is Fuchsian of order $n$, if and only if $\deg p_k\leqslant n$ for all $k=1,2,\dots$, and $\deg p_0=n$. The polynomial $p_0$ is the “eulerization” of $L$, and the series can be considered as a noncommutative perturbation of the Euler operator $L_0=p_0\in\mathbb C[\epsilon]$.

Definition. The operator $L=p_0+tp_1+\cdots$ is non-resonant, if no two roots of $p_0$ differ by a nonzero integer, $\lambda_i-\lambda_j\notin\mathbb Z^*$.

Theorem. A non-resonant Fuchsian operator is Weyl equivalent to its Euler part with the conjugacy $H$ being a Fuchsian operator, $H=h_0+th_1+\cdots$, $\deg h_0\leqslant n-1,\ \gcd(p_0,h_0)=1$.
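The computations behind such statements reduce to the commutation rule $p(\epsilon)\,t=t\,p(\epsilon+1)$ in $\mathscr W$ (a consequence of $\epsilon(tu)=t(\epsilon+1)u$). A symbolic check for one sample polynomial $p(\epsilon)=\epsilon^2+3\epsilon+1$ (the polynomial is an arbitrary choice):

```python
import sympy as sp

t = sp.symbols('t', positive=True)
u = sp.Function('u')

def eps(v):                                # Euler derivation  ε = t d/dt
    return t * sp.diff(v, t)

def p(apply_e, v):
    """p(e) = e^2 + 3e + 1 applied to v, where apply_e realizes e."""
    e1 = apply_e(v)
    return apply_e(e1) + 3 * e1 + v

lhs = p(eps, t * u(t))                     # p(ε) ∘ t  acting on u
rhs = t * p(lambda v: eps(v) + v, u(t))    # t ∘ p(ε + 1) acting on u
assert sp.simplify(sp.expand(lhs - rhs)) == 0
```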

## In search of the general theory (to be continued)

References.

The classical paper by Ø. Ore (1932) in which the theory of non-commutative polynomials was established, and the draft of the paper by Shira Tanny and S.Y., based on Shira’s M.Sc. thesis (Weizmann Institute of Science, 2014).

## Local theory of Fuchsian systems (cont.)

• Resonant normal form.
Definition. A meromorphic Fuchsian singularity $\dot X=t^{-1}(A_0+tA_1+\cdots+t^k A_k+\cdots)X$, $A_0=\mathrm{diag}(\lambda_1,\dots,\lambda_n)+\mathrm N$, is in the (Poincare-Dulac) normal form, if for all $k=1,2,\dots$, the identities $t^\Lambda A_k t^{-\Lambda}=t^k A_k$ hold.
• Theorem. Any Fuchsian system is holomorphically gauge equivalent to a system in the normal form.
• Integrability of the normal form: let $I=\mathrm N+A_1+\cdots +A_k+\cdots$ (in fact, the sum is finite). Then the solution is given by the (non-commutative) product $X(t)=t^\Lambda t^I$. The monodromy is the (commutative) product, $M=\mathrm e^{2\pi \mathrm i \Lambda}\mathrm e^{2\pi\mathrm i I}$.

References: [IY], section 16.

## Linear high order homogeneous differential equations

• Differential operators as noncommutative polynomials in the variable $\partial=\frac {\mathrm d}{\mathrm dt}$ with coefficients in a differential field $\Bbbk=\mathscr M(\mathbb C^1,0)$ of meromorphic germs at the origin.
• Composition and factorization.
• Reduction of a linear equation $Lu=0$ to a system of linear first order equations and back. Singular and nonsingular equations.
• Euler derivation $\epsilon=t\partial$ and Fuchsian equations (“nonsingular with respect to $\epsilon$”).
• Division with remainder, greatest common divisor of two operators, divisibility and common solutions of two equations.
• Sauvage theorem. Tame equations are Fuchsian.

References: [IY], Section 19.

### Lecture 2 (Nov 7, 2014)

Filed under: Analytic ODE course,lecture — Sergei Yakovenko @ 6:04

## Local theory of Fuchsian singular points

• Monodromy and holonomy.
• Growth of multivalued solutions.
• Tame singularities.
• Principal example: the Euler system $\dot X=\frac At X$, $A\in\mathfrak{gl}(n,\mathbb C)$. Solution:
$X(t)=t^A=\exp (A \ln t)$, monodromy $\Delta X(t) =X(t)M$, $M=\mathrm e^{2\pi\mathrm i A}$.
• Fuchsian condition.
• Gauge classification of linear systems, $A(t)\Longleftrightarrow \dot H(t)H^{-1}(t)+H(t)A(t)H^{-1}(t)$.
• Meromorphic gauge classification of tame (regular) systems.
• Holomorphic gauge classification of Fuchsian singularities: $A(t)=\frac 1t(A_0+tA_1+t^2A_2+\cdots)$,
$A_0=\Lambda+\mathrm N$, $\Lambda=\mathrm{diag}(\lambda_1,\dots,\lambda_n)$, $\mathrm N^n=0$.
• Resonances (integer differences between eigenvalues of $A_0$).
• Holomorphic Eulerization of non-resonant Fuchsian singularities.
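The principal example above ($\dot X=\frac At X$ with solution $X=t^A$ and monodromy $M=\mathrm e^{2\pi\mathrm i A}$) can be checked numerically; a sketch with a made-up residue matrix $A$:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.5, 1.0],
              [0.0, -0.25]])               # sample (made-up) residue matrix

def X(log_t):
    """Branch of the fundamental solution X = t^A = exp(A log t)."""
    return expm(A * log_t)

# Going once around t = 0 replaces log t by log t + 2 pi i and multiplies
# the solution on the right by the monodromy M = exp(2 pi i A):
log_t = np.log(0.7) + 0.3j                 # arbitrary base point
M = expm(2j * np.pi * A)
assert np.allclose(X(log_t + 2j * np.pi), X(log_t) @ M)
```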

Reference: [IY], section 16.

## Monday, November 3, 2014

### Lecture 1 (Nov. 3, 2014)

Filed under: Analytic ODE course — Sergei Yakovenko @ 6:34

The first lecture was introductory, containing the motivation for the forthcoming subjects.

The world of (real or complex) algebraic sets is tame: any question on the topological complexity admits an algorithmic solution and an explicitly bounded answer. In particular, any algebraic set which consists of finitely many isolated points admits an explicit bound on the number of these points by the product of the degrees of the equations defining this set (Bézout theorem). The other way around, even the simplest nonalgebraic solutions of algebraic differential equations (sine/cosine) may define infinite sets (the integers). We will try to find out how the algebraic universe can be enlarged to include transcendental objects which still admit explicit bounds on their complexity.

It turns out that periods, integrals of rational forms over algebraic cycles, do possess such constructive finiteness, although this is far from easy to see. This finiteness is characteristic for solutions of rational Pfaffian systems with moderate singularities and special monodromy group.

## Part 1: General linear systems.

A linear system locally lives on a cylinder, the product of a (complex) linear space $\mathbb C^n$ and an open base $U\subset \mathbb C^k$. If $\Omega=\bigl(\omega_{ij}\bigr)$ is an $n\times n$-matrix of holomorphic 1-forms on the base $U$, then the linear system defined by this matrix 1-form is the matrix differential equation $\mathrm dX=\Omega\cdot X$, whose solution is a holomorphically invertible matrix function $X=X(t)$, $t\in U$. If the base is one-dimensional, then $\Omega=A(t)\,\mathrm dt$ with a holomorphic matrix function $A(t)$, and the linear system takes the familiar shape $\dot X(t)=A(t)X(t)$ [IY, sect. 15].

A necessary and sufficient condition for the local existence of solutions is the vanishing of the curvature, which amounts to the matrix identity $\mathrm d\Omega=\Omega\land\Omega$ (the right hand side is the matrix 2-form with the entries $\sum_{\ell=1}^n \omega_{i\ell}\land\omega_{\ell j}$, $i,j=1,\dots,n$). See [NY, sect. 1].
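For a two-dimensional base, writing $\Omega=M_1\,\mathrm dx+M_2\,\mathrm dy$, the identity reads $\partial_x M_2-\partial_y M_1=M_1M_2-M_2M_1$. A symbolic sketch with a made-up invertible $X$ and $\Omega=\mathrm dX\cdot X^{-1}$, which is flat by construction:

```python
import sympy as sp

x, y = sp.symbols('x y')
X = sp.Matrix([[sp.exp(x + y), x],
               [0,             1]])        # made-up invertible matrix function
Xinv = X.inv()

M1 = sp.simplify(X.diff(x) * Xinv)         # Omega = M1 dx + M2 dy = dX X^{-1}
M2 = sp.simplify(X.diff(y) * Xinv)

# d Omega = (d_x M2 - d_y M1) dx ^ dy,  Omega ^ Omega = (M1 M2 - M2 M1) dx ^ dy
lhs = M2.diff(x) - M1.diff(y)
rhs = M1 * M2 - M2 * M1
assert sp.simplify(lhs - rhs) == sp.zeros(2, 2)
```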

Solution of a linear system is defined modulo a right multiplicative constant matrix factor: $\mathrm d(XC)=\Omega XC$ for any $C\in\mathrm{GL}(n,\mathbb C)$, and any other solution has this form. Using this observation, any piecewise smooth curve $\gamma$ on the base can be covered by small neighborhoods $U_\alpha$ with local solutions $X_\alpha$ in these neighborhoods which agree on the pairwise intersections $U_\alpha\cap U_\beta$. If this is not the case for the initial choice of local solutions, it can always be achieved by suitably twisting them (replacing $X_\alpha$ by $X_\alpha C_\alpha$ so that $X_\alpha C_\alpha=X_\beta C_\beta$ on the intersections). This explains how solutions can be continued analytically along any simple curve, yet after continuation along a closed path $\gamma$ the solution may acquire a non-trivial monodromy factor.

