## Functions of two variables and their differentiation. Application to the study of planar curves.

The idea of linear approximation (and differentiability) can easily be adapted to functions of more than one variable. Apart from the usual (scalar) functions we will consider maps (parametrized curves) and functions of two variables.

A curve can be considered in kinematic (mechanical) terms as the description of a travel in the plane with coordinates $(x, y)$, parametrized by the time $t$: a map $t \mapsto (x(t), y(t))$. The notions of limit, continuity and differentiability are defined as the corresponding properties of the coordinate functions $x(t)$ and $y(t)$.

**Examples.**

- The graph of any continuous (differentiable) function $f$ of one variable is a continuous (resp., differentiable) curve $t \mapsto (t, f(t))$.
- The circle $\{x^2 + y^2 = 1\}$ is the image of the curve $t \mapsto (\cos t, \sin t)$. Note that this image is “covered” infinitely many times: the preimage of any point on the circle is an entire arithmetic progression with the difference $2\pi$.
- The cusp (חוֹד) $\{y^2 = x^3\}$ is the image of the differentiable curve $t \mapsto (t^2, t^3)$. Yet the cusp itself is not “smooth-looking”: it has a sharp point at the origin.

The “linear approximation” at a moment $t_*$ takes the unit “positive” vector $e$, attached to the point $t_*$ on the $t$-axis, to the vector $v$, tangent to the plane at the point $(x(t_*), y(t_*))$, with the coordinates $(\dot x(t_*), \dot y(t_*))$. The vector $v$ is called the *velocity vector*, or the *tangent vector* to the curve.
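This can be checked numerically. The sketch below (not from the notes; the function names are illustrative) compares the analytic velocity of the circle curve $t \mapsto (\cos t, \sin t)$ with a symmetric difference quotient of the coordinate functions:

```python
import math

# Illustrative sketch: the velocity vector of the circle curve
# t -> (cos t, sin t) has coordinates (-sin t, cos t); compare it
# with a symmetric finite difference of the coordinate functions.

def curve(t):
    return (math.cos(t), math.sin(t))

def velocity(t):
    # derivatives of the coordinate functions x(t), y(t)
    return (-math.sin(t), math.cos(t))

def numeric_velocity(t, h=1e-6):
    (x0, y0), (x1, y1) = curve(t - h), curve(t + h)
    return ((x1 - x0) / (2 * h), (y1 - y0) / (2 * h))

print(velocity(0.7))
print(numeric_velocity(0.7))   # agrees up to numerical error
```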

### When is a differentiable parametrized curve smooth, and what is smoothness?

**Definition**. A planar curve is smooth at a point $(x_*, y_*)$, if inside some sufficiently small square centered at this point the curve can be represented as the graph of a function $y = f(x)$ or $x = g(y)$, differentiable at the point $x_*$ (resp., $y_*$).

**Proposition**. A differentiable (parametrized) curve with nonzero velocity vector is smooth at the corresponding point.

Proof. Assume that $\dot x(t_*) \ne 0$; then the function $x(t)$ has nonzero derivative and hence is locally invertible, so that $t$ can be expressed as a differentiable function of $x$, $t = \tau(x)$. Then the curve is the graph of the function $f(x) = y(\tau(x))$. (If instead $\dot y(t_*) \ne 0$, swap the roles of $x$ and $y$.)

**Problem**. Give an example of a differentiable (parametrized) curve with zero velocity at some point, which is nonetheless smooth.

## Differentiability of functions of two variables

While the definitions of the limit and continuity for a function of two variables are rather straightforward (open intervals in the one-variable definition should be replaced by open squares, with the rest staying the same), the notion of differentiability cannot so easily be reduced to the one-dimensional case.

We want the function $f$ of two variables to be approximable by a linear map near a point $(x_*, y_*)$. This means that there exist two constants $a, b$ (coordinates of the approximating map) such that the difference is fast going to zero,

$$f(x_* + h,\ y_* + k) - f(x_*, y_*) - (ah + bk) = o(|h| + |k|) \qquad \text{as } (h, k) \to (0, 0).$$

Denote by $\mathrm dx$ (resp., $\mathrm dy$) the linear function which takes the value $h$ (resp., $k$) on the vector with the coordinates $(h, k)$. Then the linear map $\mathrm df$, approximating the function $f$, can be written as $\mathrm df = a\,\mathrm dx + b\,\mathrm dy$. To compute the coefficients $a, b$, consider the functions $x \mapsto f(x, y_*)$ and $y \mapsto f(x_*, y)$ of one argument each. Then $a$ and $b$ are the partial derivatives of $f$ with respect to the variables $x, y$ at the point $(x_*, y_*)$.
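A numeric sketch of this definition (an illustration, not from the notes): for $f(x, y) = xy$ near the point $(1, 2)$ the coefficients are $a = 2$, $b = 1$, and the approximation error shrinks faster than $|h| + |k|$:

```python
# Illustrative check: the error of the linear approximation of
# f(x, y) = x*y at (1, 2), with coefficients a = 2, b = 1,
# decays faster than |h| + |k|.

def f(x, y):
    return x * y

a, b = 2.0, 1.0          # partial derivatives of x*y at (1, 2)

def error_ratio(s):
    h = k = s
    err = f(1 + h, 2 + k) - f(1, 2) - (a * h + b * k)
    return abs(err) / (abs(h) + abs(k))

for s in (1e-1, 1e-2, 1e-3):
    print(error_ratio(s))   # the ratios shrink together with s
```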

**Definition**. For a function $f$ of two variables the partial derivative with respect to the variable $x$ at a point $(x_*, y_*)$ is the limit (if it exists)

$$\frac{\partial f}{\partial x}(x_*, y_*) = \lim_{h \to 0} \frac{f(x_* + h,\ y_*) - f(x_*, y_*)}{h}.$$
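Since the partial derivative is a one-variable limit, it can be approximated by freezing one variable and taking a difference quotient in the other. A minimal sketch (the function and helper names are made up):

```python
# Illustrative finite-difference computation of the partial derivatives
# of f(x, y) = x**2 + y**2: freeze one variable, differentiate in the other.

def f(x, y):
    return x**2 + y**2

def partial_x(f, x, y, h=1e-6):
    return (f(x + h, y) - f(x - h, y)) / (2 * h)

def partial_y(f, x, y, h=1e-6):
    return (f(x, y + h) - f(x, y - h)) / (2 * h)

print(partial_x(f, 1.0, 2.0))   # close to 2
print(partial_y(f, 1.0, 2.0))   # close to 4
```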

**Example**. Consider the function $f(x, y) = x^2 + y^2$. Its differential is equal to $\mathrm df = 2x\,\mathrm dx + 2y\,\mathrm dy$, and at a point $(x_*, y_*)$ on the unit circle it vanishes on the vector with coordinates $(-y_*, x_*)$, tangent to the level curve (circle) passing through the point.
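This can be verified directly, assuming the example function is $f(x, y) = x^2 + y^2$ (a sketch, not from the notes):

```python
import math

# Illustrative check: the differential df = 2x dx + 2y dy of
# f(x, y) = x**2 + y**2, evaluated at a point of the unit circle
# on the tangent vector (-y, x), gives zero.

def df(x, y, h, k):
    # value of the differential 2x dx + 2y dy on the vector (h, k)
    return 2 * x * h + 2 * y * k

t = 1.1
x, y = math.cos(t), math.sin(t)   # a point on the unit circle
print(df(x, y, -y, x))            # zero up to rounding
```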

Functions of two variables are nicely represented by their level curves (קווי גובה). Typically these look like smooth curves, but occasionally one can observe “singularities”.

**Examples**. Draw the level curves of the functions $f(x, y) = x^2 + y^2$ and $g(x, y) = x^2 - y^2$. In the second case you may help yourself by changing the variables from $(x, y)$ to $u = x + y$, $v = x - y$, so that $g = uv$.
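A quick sanity check of the second example (an illustration, assuming the intended substitution is $u = x + y$, $v = x - y$): the difference of squares factors, so the level curves $\{x^2 - y^2 = c\}$ become the hyperbolas $\{uv = c\}$ in the new coordinates (and the pair of lines $u = 0$, $v = 0$ for $c = 0$).

```python
# Illustrative check: with u = x + y, v = x - y the function
# x**2 - y**2 factors as the product u*v.

def g(x, y):
    return x**2 - y**2

def g_in_uv(x, y):
    u, v = x + y, x - y
    return u * v

for x, y in [(1.0, 0.5), (-2.0, 3.0), (0.25, 0.25)]:
    print(g(x, y), g_in_uv(x, y))   # the two values coincide
```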

**Definition**. A point inside the domain of a differentiable function $f$ is called *critical*, if the linear approximation (the differential $\mathrm df$) is identically zero, i.e., if both partial derivatives vanish at this point. Otherwise the point is called *regular*.

If we replace a nonlinear function $f$ by its affine approximation $f(x_*, y_*) + a(x - x_*) + b(y - y_*)$ at a regular point, then the level curves of this approximation form a family of parallel lines. It turns out that near regular points these lines give a linear approximation for the level curves of the initial nonlinear function. In other words, the regularity condition (a form of nondegeneracy) guarantees that the linear approximation also works nicely when studying the level curves.

**Theorem**. If a point $(x_*, y_*)$ is *regular* for a differentiable function $f$, then there exists a small square around this point such that the piece of the level curve of $f$ passing through this point *is a smooth curve tangent to the level line of the affine approximation*.

This claim is often called the Implicit Function Theorem (משפט הפונקציות הסתומות), however, the traditional formulation of the Implicit Function Theorem looks quite different!

**Theorem**. If at a point $(x_*, y_*)$ the partial derivative $\partial f/\partial y$ of a differentiable function $f$ is nonzero, then the equation $f(x, y) = c$, $c = f(x_*, y_*)$, can be locally resolved with respect to $y$: there exists a differentiable function $y = h(x)$, defined on a sufficiently small interval around $x_*$, such that $h(x_*) = y_*$ and $f(x, h(x)) \equiv c$. The derivative of this function is given by the ratio,

$$h'(x) = -\left.\frac{\partial f/\partial x}{\partial f/\partial y}\right|_{y = h(x)}.$$
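The ratio of partial derivatives can be tested on the circle (a numeric illustration, not from the notes): for $f(x, y) = x^2 + y^2 - 1$ near a point with $y_* > 0$ the level curve $\{f = 0\}$ is the graph of $h(x) = \sqrt{1 - x^2}$, and $h'(x) = -f_x/f_y = -x/y$.

```python
import math

# Illustrative check of the implicit derivative: for
# f(x, y) = x**2 + y**2 - 1 the level curve f = 0 near y > 0
# is the graph of h(x) = sqrt(1 - x**2), and the theorem
# predicts h'(x) = -f_x / f_y = -x / y.

def h(x):
    return math.sqrt(1 - x**2)

x = 0.6
y = h(x)                          # the point (0.6, 0.8) on the circle
ratio = -(2 * x) / (2 * y)        # -f_x / f_y at that point
eps = 1e-6
numeric = (h(x + eps) - h(x - eps)) / (2 * eps)
print(ratio, numeric)             # both close to -0.75
```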

**Exercise**. Prove that the two formulations of the Implicit Function Theorem are in fact equivalent.
