## Zeros of solutions of linear equations

Nontrivial (i.e., not identically zero) solutions of linear ordinary differential equations obviously possess certain properties concerning their roots (points where these solutions vanish). The simplest, in a sense paradigmatic, property is the following.

**Prototheorem**. *Let $y$ be a nontrivial solution of a sufficiently regular linear ordinary differential equation of order $n$. Then $y$ cannot have a root of multiplicity greater than or equal to $n$.*

Here by regularity we mean the condition that the operator has coefficients smooth enough to guarantee that any solution near any point $a$ in the domain of its definition is uniquely determined by the initial conditions $y(a), y'(a), \dots, y^{(n-1)}(a)$.

Indeed, if $y$ has a root of multiplicity $n$ at a point $a$, that is, $y(a) = y'(a) = \cdots = y^{(n-1)}(a) = 0$, then by virtue of the equation $y^{(n)}(a) = 0$ as well, and hence by the uniqueness $y$ must be identically zero.

In particular, solutions of a first order equation are nonvanishing, solutions of any second order equation may have only simple roots, etc.
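For $n = 2$ this can be checked on the textbook equation $y'' + y = 0$: the solution $y = \sin t$ vanishes at $t = k\pi$, where its derivative $\cos t$ equals $\pm 1$. A minimal numerical sketch (an illustration added here, not part of the original argument):

```python
import numpy as np

# Illustration (added, not part of the original argument): for y'' + y = 0
# the solution y = sin(t) vanishes at t = k*pi, but its derivative cos(t)
# equals +-1 there, so every root is simple, as the prototheorem (n = 2)
# predicts.
t_roots = np.pi * np.arange(1, 5)        # first few roots of sin(t)
y = np.sin(t_roots)                      # the solution at its roots
dy = np.cos(t_roots)                     # its derivative at the same points
assert np.allclose(y, 0.0, atol=1e-12)   # these are indeed roots...
assert np.all(np.abs(dy) > 0.9)          # ...and all of them are simple
```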

## What about quantitative versions?

**Theorem** (de la Vallée Poussin, 1929). *Assume that the coefficients of the LODE*

$$y^{(n)} + a_1(t)\,y^{(n-1)} + \cdots + a_{n-1}(t)\,y' + a_n(t)\,y = 0, \qquad t \in I,$$

*are explicitly bounded, $|a_k(t)| \le A_k$ for all $t \in I$.*

*Assume that the bounds are small relative to the length $\ell = |I|$ of the interval, i.e.,*

$$\sum_{k=1}^{n} \frac{A_k\,\ell^k}{k!} < 1. \qquad (*)$$

*Then any nontrivial solution of the equation has no more than $n-1$ isolated roots on $I$.*
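A quick sanity check for $y'' + y = 0$ (so $n = 2$, $A_1 = 0$, $A_2 = 1$): condition (*) becomes $\ell^2/2 < 1$, i.e., $\ell < \sqrt{2}$, and indeed the roots of $\sin t$ are spaced $\pi > \sqrt 2$ apart, so at most one of them fits into such an interval. A hedged numerical sketch:

```python
import numpy as np

# Sanity check (illustration) for y'' + y = 0, i.e. n = 2, A_1 = 0, A_2 = 1.
# Condition (*) becomes l**2/2 < 1, i.e. l < sqrt(2); the theorem then allows
# at most n - 1 = 1 root.  The roots of sin(t) are pi apart, so no window of
# length l = 1.4 < sqrt(2) can ever contain two of them.
l = 1.4                                   # l**2/2 = 0.98 < 1, so (*) holds
roots = np.pi * np.arange(0, 5)           # roots of sin(t) on [0, 4*pi]
starts = np.linspace(0, 10, 2001)         # sliding windows [s, s + l]
max_inside = max(int(np.sum((roots >= s) & (roots <= s + l))) for s in starts)
assert max_inside == 1                    # never more than n - 1 = 1 root
```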

## Novikov’s counterexample

What about linear systems of the first order?

Consider the system $\dot x = A(t)\,x$, $x \in \mathbb{R}^n$, with the norm $\|A(t)\|$ explicitly bounded on $[0,1]$. Consider all possible linear combinations $y(t) = c_1 x_1(t) + \cdots + c_n x_n(t)$ of the components of a solution. Can one expect a uniform upper bound for the number of roots of all such combinations?

Let $p(t)$ be a polynomial having many zeros on $[0,1]$. Consider the $2\times 2$-system of the form

$$\dot x_1 = 0, \qquad \dot x_2 = \dot p(t)\, x_1.$$

The first equation defines a nonvanishing function $x_1 \equiv 1$, the second equation prescribes the derivative of $x_2$, so that $x_2 = p(t)$ is a solution vanishing at all roots of $p$.

By replacing $p$ by $\varepsilon p$ one can achieve an arbitrarily small sup-norm of the coefficients of this system on the segment $[0,1]$ (or even on any open complex neighborhood of this real segment). Thus, no matter how small the coefficients are, the second component will have the specified number of isolated roots.
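The construction is easy to confirm numerically. The sketch below assumes the system takes the concrete form $\dot x_1 = 0$, $\dot x_2 = \varepsilon\,\dot p(t)\,x_1$ (one explicit reading of the construction above); the solution is then $x_1 \equiv 1$, $x_2 = \varepsilon p(t)$:

```python
import numpy as np

# Illustration of the counterexample, assuming the concrete system
# x1' = 0, x2' = eps * p'(t) * x1 (a hypothetical reading): the solution is
# x1 = 1, x2 = eps * p(t), so x2 inherits all the roots of p while the only
# nonzero coefficient eps * p'(t) is arbitrarily small in sup-norm on [0, 1].
roots = np.linspace(0.05, 0.95, 10)       # 10 prescribed roots inside [0, 1]
p = np.poly1d(np.poly(roots))             # monic polynomial with these roots
eps = 1e-6
coeff = eps * np.polyder(p)               # the coefficient eps * p'(t)
t = np.linspace(0, 1, 9999)               # grid avoiding the roots exactly
sup_norm = float(np.max(np.abs(coeff(t))))
x2 = eps * p(t)                           # second component of the solution
sign_changes = int(np.sum(x2[:-1] * x2[1:] < 0))
assert sup_norm < 1e-3                    # coefficients are tiny...
assert sign_changes == 10                 # ...yet x2 still has 10 roots
```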

## Complexification

What about complex-valued versions? There is no Rolle theorem for them.

I will describe three possible replacements: Kim's theorem (1963), nearest in spirit, and two versions of the argument principle.

**Theorem** (W. Kim)

*Assume that an analytic LODE*

$$y^{(n)} + a_1(t)\,y^{(n-1)} + \cdots + a_n(t)\,y = 0, \qquad |a_k(t)| \le A_k,$$

*is defined in a convex compact subset $K \subset \mathbb{C}$ of diameter $\ell$ and the condition (*) holds. Then this equation is disconjugate in $K$: any nontrivial solution has at most $n-1$ isolated roots.*

This result follows from an interpolation inequality of the following type: if $f$ is a function holomorphic in a neighborhood of $K$ and has $n$ isolated roots there, then $\|f\|_K \le \frac{\ell^n}{n!}\,\|f^{(n)}\|_K$ (the maximum modulus norm on $K$ is assumed).
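The inequality can be probed numerically on a real segment $K = [0, \ell]$ (an illustration only; the holomorphic case has the same form). For a monic polynomial with $n$ roots in $K$ one has $f^{(n)} \equiv n!$, so the bound is easy to test:

```python
import numpy as np
from math import factorial

# Numerical probe (illustration on a real segment K = [0, l]; the holomorphic
# case is analogous in form): if f has n roots in K, then
#     max|f| <= (l**n / n!) * max|f^(n)|   on K.
n, l = 4, 1.5
rng = np.random.default_rng(0)
t = np.linspace(0, l, 2001)
for _ in range(100):
    roots = rng.uniform(0, l, size=n)      # n roots inside K
    f = np.poly1d(np.poly(roots))          # monic, so f^(n) is the constant n!
    lhs = np.max(np.abs(f(t)))
    rhs = (l**n / factorial(n)) * np.max(np.abs(np.polyder(f, n)(t)))
    assert lhs <= rhs + 1e-9               # the interpolation inequality holds
```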

Consider the equation on the real interval $[0,1]$ but with complex-valued coefficients (and solutions). Solutions will then be real parameterized curves $t \mapsto y(t) \in \mathbb{C}$ which only exceptionally rarely have roots. Instead of counting roots, one can measure their rotation around the origin $0 \in \mathbb{C}$, which is defined as the increment $\operatorname{Var\,Arg} y = \arg y(1) - \arg y(0)$ for any continuous choice of the argument $t \mapsto \arg y(t)$.

**Theorem**. Assume that the (complex-valued) coefficients of the LODE are explicitly bounded, $|a_k(t)| \le A_k$ on $[0,1]$.

Then the rotation of any nontrivial solution is explicitly bounded: $\bigl|\operatorname{Var\,Arg} y\bigr| \le C(n, A_1, \dots, A_n)$, with an explicit constant depending only on the order $n$ and the bounds $A_k$.
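The simplest case shows why the bound must grow with the $A_k$: $y(t) = e^{i\omega t}$ solves the first-order equation $y' = i\omega y$, whose single coefficient has modulus $\omega$, and its rotation on $[0,1]$ is exactly $\omega$. A numerical check (illustration only):

```python
import numpy as np

# Illustration: y(t) = exp(i*w*t) solves y' = i*w*y, a first-order LODE whose
# single coefficient has modulus w, and its rotation around the origin on
# [0, 1] is exactly w radians; so any bound must grow with the A_k.
w = 7.0
t = np.linspace(0, 1, 5001)
y = np.exp(1j * w * t)
ang = np.unwrap(np.angle(y))              # continuous choice of the argument
rotation = ang[-1] - ang[0]               # Var Arg y over [0, 1]
assert abs(rotation - w) < 1e-6
```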

If an analytic LODE with explicitly bounded coefficients is defined, say, on a triangle $K \subset \mathbb{C}$, then application of this result to the three sides of the triangle yields an explicit upper bound on the total rotation of any solution along $\partial K$, and hence, by the argument principle, an explicit upper bound for the number of isolated roots of analytic solutions inside the triangle.
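The counting step can be illustrated numerically: by the argument principle, the number of zeros of an analytic function inside the triangle equals the total increment of its argument along the (positively oriented) boundary, divided by $2\pi$. The function and triangle below are chosen purely for illustration:

```python
import numpy as np

# Illustration of the counting step: by the argument principle, the number of
# zeros of an analytic f inside the triangle equals the total increment of
# arg f along its (positively oriented) boundary, divided by 2*pi.  Here
# f(z) = z**3 + 1 has its three roots -1, exp(+-i*pi/3) inside the triangle.
verts = np.array([4 + 0j, 0 + 4j, -4 - 4j, 4 + 0j])      # closed contour
edges = [np.linspace(a, b, 4000, endpoint=False) for a, b in zip(verts, verts[1:])]
z = np.concatenate(edges + [verts[:1]])                  # traverse boundary once
f = z**3 + 1
ang = np.unwrap(np.angle(f))                             # continuous argument
n_zeros = round((ang[-1] - ang[0]) / (2 * np.pi))
assert n_zeros == 3                                      # all three roots found
```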

**Reference**

S. Yakovenko, *On functions and curves defined by ordinary differential equations*, in: The Arnoldfest (Toronto, 1997), Fields Inst. Commun. 24, Amer. Math. Soc., 1999, §2.