# Analgebraic Geometry

It so happened that at the beginning of 2016 I gave a talk at the conference “Geometric aspects of modern dynamics” in Porto, delivered a minicourse at Journées Louis Antoine in Rennes and wrote an expository paper for the European Mathematical Society Newsletter, all devoted to the same subject. The subject, provisionally dubbed “Analgebraic geometry”, deals with algebraic-like properties (especially from the point of view of intersection theory) of real and complex analytic varieties defined by ordinary and Pfaffian differential equations with polynomial right-hand sides. Thus

analgebraic = un-algebraic + analytic + algebraic (background) + weak algebraicity-like properties.

It turns out that this analgebraic geometry has very intimate connections with classical problems such as Hilbert's 16th problem, properties of periods of algebraic varieties, analytic number theory, and arithmetic geometry.

For more details see the presentation prepared for the minicourse (or the shorter version of the talk) and the draft of the paper.

Any remarks and comments will be highly appreciated.

## Riemann-Hilbert problem

The Riemann-Hilbert problem consists in “constructing a Fuchsian system with a prescribed monodromy”.

More precisely, let $M_1,M_2,\dots,M_d$ be invertible matrices whose product is the identity matrix, and let $a_0,a_1,\dots, a_d\in\mathbb C$ be distinct points such that the segments $[a_0,a_k]\subset\mathbb C,\ k=1,\dots,d$, are pairwise disjoint except for the common point $a_0$ itself.

The problem is to construct a linear system of equations

$\displaystyle \dot X=A(t)X,\quad A(t)=\sum_{k=1}^d \frac{A_k}{t-a_k},\quad \sum_{k=1}^d A_k=0$,

such that the monodromy operator along the path $\gamma_k$ (the segment $[a_0,a_k]$, followed by a small loop around $a_k$, followed by the segment $[a_k,a_0]$) is equal to $M_k$.

The modern strategy of solving this problem is surgery. One can easily construct a local solution, a differential system on a neighborhood $U_k$ of the segment $[a_0,a_k]$, which has the specified monodromy. The phase space of this system is the cylinder $U_k\times\mathbb C^n$, and without loss of generality one can assume that together the neighborhoods $U_k$ cover the whole Riemann sphere $\mathbb CP^1=\mathbb C\cup\{\infty\}$. Patching together these local solutions, one can construct a linear system with the specified monodromy, but it will be defined not on $\mathbb C P^1\times\mathbb C^n$, as required, but on a more general object, a holomorphic vector bundle over $\mathbb C P^1$.
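In the simplest (non-resonant) situation the local solution is an Euler system, and its construction can be sketched numerically. This is a minimal illustration, not the general construction; the matrix $M$ below is a hypothetical monodromy matrix chosen for convenience, and only numpy is assumed.

```python
import numpy as np

def matfun(B, f):
    """Apply a scalar function f to a diagonalizable matrix via eigendecomposition."""
    w, V = np.linalg.eig(B)
    return V @ np.diag(f(w)) @ np.linalg.inv(V)

# A hypothetical monodromy matrix M_k (any invertible diagonalizable matrix works).
M = np.array([[0, -1],
              [1,  0]], dtype=complex)

# The Euler system dX = (A_k/(t - a_k)) X has monodromy exp(2*pi*i*A_k) around
# t = a_k, so the residue matrix A_k = log(M_k)/(2*pi*i) realizes M_k locally.
A = matfun(M, np.log) / (2j * np.pi)

# Sanity check: the monodromy exp(2*pi*i*A_k) recovers M_k.
assert np.allclose(matfun(2j * np.pi * A, np.exp), M)
```

Here the eigenvalues of $A_k$ come out as $\pm 1/4$; any integer shift of them produces another local realization of the same monodromy matrix.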

The description of holomorphic vector bundles is of independent interest and is well known. It turns out (Birkhoff) that each holomorphic vector bundle of rank $n$ over $\mathbb C P^1$ is completely determined by a(n unordered) tuple of integers $d_1,\dots,d_n\in\mathbb Z$, and the bundle is trivial if and only if $d_1=\cdots=d_n=0$.

However, the strategy of solving the Riemann-Hilbert problem by construction of the bundle and determining its holomorphic type is complicated by two facts:

1. Determination of the holomorphic type of a bundle is a transcendental problem;
2. The local realization of the monodromy is by no means unique: in the non-resonant case one can realize any matrix $M_k$ by an Euler system with eigenvalues that can be arbitrarily shifted by integers; in the resonant case one should add to this freedom also non-Euler systems. This freedom can change the holomorphic type of the vector bundle in a very broad range.
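The integer-shift freedom in the non-resonant case is easy to verify directly: for a diagonal residue matrix the monodromy $\exp(2\pi i A)$ does not change when the eigenvalues are shifted by integers. The values below are purely illustrative.

```python
import numpy as np

# Diagonal residue matrix of an Euler system, and arbitrary integer shifts.
A = np.diag([0.25, -0.5])
shifts = np.diag([3, -7])

# Monodromy of the Euler system dX = (A/t) X around t = 0.
def mono(B):
    return np.diag(np.exp(2j * np.pi * np.diag(B)))

# Integer shifts of the eigenvalues do not change the monodromy ...
assert np.allclose(mono(A), mono(A + shifts))
# ... although they change the residue matrix itself (and, in the global
# surgery, the degrees d_1, ..., d_n of the resulting vector bundle).
assert not np.allclose(A, A + shifts)
```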

It turns out that the fundamental role in the solvability of the Riemann-Hilbert problem is played by the (ir)reducibility of the linear group generated by the matrices $M_1,\dots,M_d$.

Theorem (Bolibruch, Kostov). If the group is irreducible, i.e., there is no nontrivial subspace of $\mathbb C^n$ invariant under all the operators $M_k$, then one can choose the local realizations in such a way that the resulting bundle is trivial and thus yields a solution to the Riemann-Hilbert problem.

The proof is achieved as follows: one constructs a possibly nontrivial bundle realizing the given monodromy, and then this bundle is brutally trivialized by a transformation that is meromorphic at just one of the singularities. The result is a system with all but one of the singularities Fuchsian, and the problem reduces to bringing the last point (assumed to be at infinity) to Fuchsian form by transformations of the form $X\mapsto P(t)X$ with $P$ a matrix polynomial with constant nonzero determinant. The group of such transformations is considerably more subtle, but ultimately the freedom in the construction of the initial bundle can be used to guarantee that the last point is also “Fuchsianizable”.

The other way around, if the monodromy group is reducible, then there exists an obstruction of torsion type to trivializing the bundle. This obstruction was first discovered by A. Bolibruch, and its description can be found in the textbook by Yu. Ilyashenko and S.Y. (sections 16G and 18).

## Uniform bounds for parametric Fuchsian families

The previous lectures indicate how zeros of solutions can be counted for linear differential equations on the Riemann sphere. For an equation of the form

$u^{(n)}+a_1(t)u^{(n-1)}+\cdots+a_{n-1}(t)u'+a_n(t)u=0 ,\quad a_1,\dots, a_n\in\mathbb C(t)\qquad(*)$

one has to assume that:

1. The equation has only Fuchsian singularities at the poles of the coefficients $a_1,\dots,a_n$;
2. The monodromy of each singular point is quasiunipotent (i.e., all eigenvalues of the corresponding operator are on the unit circle);
3. The slope of the differential equation is known.

The slope is an awkwardly defined and poorly computable number that characterizes the relative strength of the non-principal coefficients of the equation. It is defined as follows:

1. For a given affine chart $t\in\mathbb C$ on $\mathbb P^1$, multiply the equation (*) by the common denominator of the fractions $a_k(t)$, reducing the corresponding operator to the form $b_0(t)\partial^n+b_1(t)\partial^{n-1}+\cdots+b_n(t)$ with $b_0,\dots,b_n\in \mathbb C[t]$;
2. Define the affine slope as $\max_{k=1,\dots,n}\frac{\|b_k\|}{\|b_0\|}$, where the norm of a polynomial $b(t)=\sum_j \beta_j t^j$ is the sum $\sum_j |\beta_j|$;
3. Define the conformal slope of an equation (*) as the supremum of the affine slopes of the corresponding operators over all affine charts on $\mathbb P^1$.
4. Claim. If the equation (*) is Fuchsian, then the conformal slope is finite.
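The affine slope of steps 1–2 is straightforward to compute once the polynomial coefficients $b_0,\dots,b_n$ are known. Below is a minimal sketch using a hypothetical second-order operator; polynomials are represented by their coefficient lists.

```python
def poly_norm(b):
    """Norm of a polynomial b(t) = sum_j beta_j t^j given by its coefficient list."""
    return sum(abs(c) for c in b)

def affine_slope(bs):
    """Affine slope of b_0*d^n + b_1*d^(n-1) + ... + b_n in a fixed affine chart:
    the maximum of ||b_k|| / ||b_0|| over k = 1, ..., n."""
    return max(poly_norm(b) / poly_norm(bs[0]) for b in bs[1:])

# Hypothetical operator (t^2 - 1) u'' + 3t u' - 2u = 0; coefficients are listed
# in increasing degree.
bs = [[-1, 0, 1],   # b_0 = t^2 - 1, norm 2
      [0, 3],       # b_1 = 3t,      norm 3
      [-2]]         # b_2 = -2,      norm 2
print(affine_slope(bs))  # -> 1.5
```

The conformal slope of step 3 is the supremum of this quantity over all affine charts, which is no longer a finite computation.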

The rationale behind the notion of the conformal slope of an equation is simple: it is assumed to be the sole parameter which allows one to place an upper bound on the variation of argument of solutions along “simple arcs” (say, circular arcs and line segments) that stay away from the singular locus $\varSigma$ of the equation (*).

The dual notion is the conformal diameter of the singular locus. This is another poorly computable but still controllable quantity, which allows one to subdivide the points of the singular locus into confluent groups that stay away from each other. The formal definition involves the sum of relative lengths of circular slits.

The claim (proved by arguments similar to the previous claim on boundedness of the conformal slope) is that any finite set of points on the Riemann sphere $\mathbb P^1$ has finite conformal diameter. Moreover, if $\varSigma\subseteq\mathbb P^m$ is an algebraic divisor of degree $d$ in the $m$-dimensional projective space, then the conformal diameter of any finite intersection
$\varSigma_\ell=\ell\cap\varSigma$ with a line $\ell\subseteq\mathbb P^m$ is explicitly bounded in terms of $m,d$.

Together these results allow one to prove the following general result.

Theorem (G. Binyamini, D. Novikov, S.Y.)

Consider a Pfaffian $n\times n$-system $\mathrm dX=\Omega X$ on the projective space $\mathbb P^m$ with the rational matrix 1-form of degree $d$. Assume that:

1. The system is integrable, $\mathrm d\Omega=\Omega\land\Omega$;
2. The system is regular, i.e., its solution matrix $X(t)$ grows at worst polynomially when $t$ tends to the polar locus
$\varSigma$ of the system;
3. The monodromy of the system along any small loop around $\varSigma$ is quasiunipotent.

Then the number of zeros of any solution is bounded in any triangle $T\subseteq\ell$ free from points of $\varSigma$.

If in addition the system is defined over $\mathbb Q$ and has bitlength complexity $c$, then this number is explicitly bounded by a double exponential of the form $2^{c^{P(n,m,d)}}$, where $P(n,m,d)$ is an explicit polynomial of degree $\leqslant 60$ in these variables.

Remark. By the Kashiwara theorem, the quasiunipotence condition need only be verified for small loops around the principal (smooth) strata of $\varSigma$.

Reference

G. Binyamini, D. Novikov, and S. Yakovenko, On the number of zeros of Abelian integrals: A constructive solution of the infinitesimal Hilbert sixteenth problem, Inventiones Mathematicae 181 (2010), no. 2, 227–289.

## Families of Fuchsian equations

A Fuchsian equation on $\mathbb C P^1$ with only quasiunipotent singularities admits an upper bound for the number of complex roots of its solutions, which depends on the equation, in particular on its “magnitude” (slope), but also on the relative position of its singularities.

We are interested in finding conditions ensuring that this bound does not “explode”. The easiest way to formulate this is to consider parametric families of Fuchsian equations.

We will assume that the parametric family has the form

$L_\lambda u=0,\qquad L_\lambda=\sum_{k=0}^n p_k(t,\lambda)\partial^k,\quad p_k\in\mathbb C[t,\lambda]\qquad (*)$

with the coefficients $p_k$ polynomial in $t$ and depending rationally on the parameters $\lambda\in\mathbb P^m$ (one can consider them as homogeneous polynomials of the same degree on $\mathbb C^{m+1}$). For some values of $\lambda$ the operator $L_\lambda$ may degenerate (the leading coefficient may vanish identically, not excluding the complete degeneracy $L_\lambda\equiv0$). Such values, however, constitute a proper algebraic subvariety $\Lambda\subset\mathbb P^m$.
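The degeneracy locus $\Lambda$ can be computed explicitly in simple cases: it consists of those parameter values where the leading coefficient vanishes identically in $t$, i.e., where all its $t$-coefficients vanish simultaneously. The family below is hypothetical, chosen only to illustrate the computation (sympy assumed).

```python
import sympy as sp

t, lam = sp.symbols('t lambda')

# Leading coefficient of a hypothetical one-parameter family L_lambda of
# second-order operators; the other coefficients do not affect the locus.
p2 = lam * (lam - 1) * t**2 + lam * t

# Lambda = parameter values where p2 vanishes identically in t:
# the common zeros of all (nonzero) t-coefficients.
coeffs = [c for c in sp.Poly(p2, t).all_coeffs() if c != 0]
Lambda = set(sp.solve(coeffs[0], lam))
for c in coeffs[1:]:
    Lambda &= set(sp.solve(c, lam))

print(Lambda)  # -> {0}: a proper algebraic subset of the parameter space
```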

Note that, because of the semicontinuity, it is sufficient to establish the global uniform bound for the number of isolated roots only for $\lambda\notin\Lambda$: complex roots cannot disappear in the blue sky…

We will impose the following qualitative conditions on the family (*).

1. Isomonodromy: when parameters change, the monodromy group remains “the same”.
2. Tameness (regularity): solutions $u_\lambda(t)$ of the equations grow at most polynomially when $\lambda\to\Lambda$.
3. Quasiunipotence: all singular points always have quasiunipotent monodromy.

The tameness condition is the “regularity” with respect to the parameters rather than with respect to the independent variable $t$. All conditions need to be accurately formulated, but one can give a simple example producing such families.

Consider a rational matrix-valued 1-form $\Omega$ on $\mathbb P^1\times\mathbb P^m$ with the polar locus $\varSigma\subset \mathbb P^1\times\mathbb P^m$, an algebraic divisor (singular hypersurface). Assume that the linear system $\mathrm dX=\Omega X$ is locally solvable (integrable) and regular on $\varSigma$. Then for any fixed $\lambda$ the first-row components of the (multivalued) matrix function $X(t,\lambda)$ satisfy a linear Fuchsian equation $L_\lambda u=0$ depending rationally on $\lambda$. In this way we get a family of equations automatically satisfying the first two conditions above. It turns out that it suffices to verify the third condition only for a generic equation of the family.

(Kashiwara theorem follows).

## Boundedness of the slope

In an arbitrary family (*) the slope $\angle L_\lambda$ is a semialgebraic function of the parameter $\lambda\notin\Lambda$, possibly undefined on the locus $\Lambda$ itself, and may well be unbounded.

However, in the isomonodromic regular family this is impossible.

(Grigoriev theorem follows)

Corollary: conformal pseudoinvariance of the slope.

## Wednesday, December 3, 2008

### IH16 and friends: the final dash

Finally the two texts concerned with the solution of the Infinitesimal Hilbert problem have been put into polished form (including the publisher’s LaTeX style files). The new revisions, already uploaded to arXiv, differ from the initial submissions only by corrected typos, a few rearrangements aimed at improving the readability of the texts, and a couple more references. There is absolutely no need to read the new revision if you have already read the first one.

Mostly for reasons of “internal convenience”, the complete references are reproduced here:

• G. Binyamini and S. Yakovenko, Polynomial Bounds for Oscillation of Solutions of Fuchsian Systems, posted as arXiv:0808.2950v2 [math.DS], 36 pp., submitted to Ann. Inst. Fourier (Dec. 2008), accepted (February 2009).
• G. Binyamini, D. Novikov and S. Yakovenko, On the Number of Zeros of Abelian Integrals: A Constructive Solution of the Infinitesimal Hilbert Sixteenth Problem, posted as arXiv:0808.2952v2 [math.DS], 57 pp., submitted to Inventiones Mathematicae (Nov. 2008), accepted (Oct. 2009).

# Piecemeal remarks on rational matrix functions of a complex variable

The global theory of rational linear systems on $\mathbb C P^1$ requires the study of (rational) gauge transformations which are holomorphic and holomorphically invertible except for a single point. If this point is at infinity, then the matrix of such a transformation is necessarily polynomial with constant nonzero determinant. Such matrix functions are provisionally referred to as monopoles: $H(t)\in\mathrm{GL}(n,\mathbb C[t]),\ \det H=\mathrm{const}\ne 0$.

Multiplication of a rational matrix function $H(t)$ from the left by the monopole matrix $\begin{pmatrix}1 & t\\ & 1\end{pmatrix}$ corresponds to adding the second row of $H$, multiplied by $t$, to the first row. Thus manipulations with the rows of $H$, which aim at Gauss-type elimination of certain monomials from the matrix elements, can be represented as gauge actions of the monopole group. The principal result that will be used throughout the next few lectures is the following Bolibruch Permutation Lemma.
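This row manipulation can be checked symbolically; the matrix $H$ below is an arbitrary illustrative rational matrix (sympy assumed).

```python
import sympy as sp

t = sp.symbols('t')

# A monopole factor: polynomial entries with constant nonzero determinant.
U = sp.Matrix([[1, t],
               [0, 1]])
assert U.det() == 1

# An illustrative rational matrix H(t).
H = sp.Matrix([[1/t, 2],
               [3, t**2]])

# Left multiplication by U adds t times the second row of H to the first row
# and leaves the second row untouched.
UH = U * H
assert UH.row(0) == sp.Matrix([[1/t + 3*t, 2 + t**3]])
assert UH.row(1) == H.row(1)
```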

Lemma. Let $H(t)$ be the germ of a matrix function, holomorphic and invertible at $t=\infty$. Then for any ordered tuple of integers $D=\{d_1,\dots,d_n\}$ the product $t^D\,H(t)$, $t^D=\text{diag}(t^{d_1},\dots,t^{d_n})$, is monopole equivalent to a product of the form $H'(t)\,t^{D'}$, where $H'(t)$ is also holomorphic and invertible at $t=\infty$, and $D'$ is a permutation of the tuple $D$.

The proof of this result is not difficult, yet is too technical to be delivered in the classroom.
