Page:EB1911 - Volume 14.djvu/578

The differential coefficient $\frac{\partial^nf}{\partial x^p \partial y^q \partial z^r},$

in which p+q+r = n, is formed by differentiating p times with respect to x, q times with respect to y, r times with respect to z, the differentiations being performed in any order. Abbreviated notations are sometimes used in such forms as

$f_{x^py^qz^r}$ or $f_{x,y,z}^{(p,q,r)}.$

Differentials of higher orders are introduced by the defining equation

$d^nf=\left(dx\frac{\partial}{\partial x}+dy\frac{\partial}{\partial y}\right)^nf,$

in which the expression $\left(dx\frac{\partial}{\partial x}+dy\frac{\partial}{\partial y}\right)^n$ is developed by the binomial theorem in the same way as if $dx\frac{\partial}{\partial x}$ and $dy\frac{\partial}{\partial y}$ were numbers, and $\left(\frac{\partial}{\partial x}\right)^r\left(\frac{\partial}{\partial y}\right)^{n-r}f$ is replaced by $\partial^nf/\partial x^r\,\partial y^{n-r}$. When there are more than two variables the multinomial theorem must be used instead of the binomial theorem.
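For instance (a worked case, not in the original, obtained directly from the rule just stated), taking $n=2$ gives

```latex
d^2f=\frac{\partial^2 f}{\partial x^2}(dx)^2
  +2\frac{\partial^2 f}{\partial x\,\partial y}\,dx\,dy
  +\frac{\partial^2 f}{\partial y^2}(dy)^2.
```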

The problem of forming the second and higher differential coefficients of implicit functions can be solved at once by means of partial differential coefficients. For example, if ƒ(x, y) = 0 is the equation defining y as a function of x, we have

$\frac{d^2y}{dx^2}=-\left(\frac{\partial f}{\partial y}\right)^{-3} \left \{ \left(\frac{\partial f}{\partial y}\right)^2 \frac{\partial^2f}{\partial x^2}-2\frac{\partial f}{\partial x}\cdot \frac{\partial f}{\partial y} \cdot \frac{\partial^2f}{\partial x \partial y}+\left(\frac{\partial f}{\partial x}\right)^2\frac{\partial^2f}{\partial y^2} \right \}.$
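The formula for the second differential coefficient of an implicit function may be checked on a concrete case (this numerical sketch is not in the original): for the circle $f(x,y)=x^2+y^2-1=0$, direct differentiation of $y=\sqrt{1-x^2}$ gives $d^2y/dx^2=-1/y^3$.

```python
# Numerical check of d2y/dx2 = -(f_y^2 f_xx - 2 f_x f_y f_xy + f_x^2 f_yy)/f_y^3
# on the test case f(x, y) = x^2 + y^2 - 1 (the unit circle).
import math

def f_x(x, y): return 2.0 * x      # df/dx
def f_y(x, y): return 2.0 * y      # df/dy
f_xx, f_xy, f_yy = 2.0, 0.0, 2.0   # the (constant) second derivatives

x = 0.6
y = math.sqrt(1.0 - x * x)         # branch y > 0 of f = 0

d2y = -(f_y(x, y)**2 * f_xx - 2 * f_x(x, y) * f_y(x, y) * f_xy
        + f_x(x, y)**2 * f_yy) / f_y(x, y)**3

# Direct differentiation of y = (1 - x^2)^(1/2) gives y'' = -1/y^3.
assert abs(d2y - (-1.0 / y**3)) < 1e-12
```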

The differential expression Xdx + Ydy, in which both X and Y are functions of the two variables x and y, is a total differential if there exists a function ƒ of x and y which is such that

$\partial f/\partial x=\text{X}, \quad \partial f/\partial y=\text{Y}.$

When this is the case we have the relation

$\partial \text{X}/\partial y=\partial \text{Y}/\partial x. \qquad \text{(ii.)}$
Conversely, when this equation is satisfied there exists a function ƒ which is such that

$df=\text{X}dx+\text{Y}dy.\,$

The expression Xdx + Ydy in which X and Y are connected by the relation (ii.) is often described as a “perfect differential.” The theory of the perfect differential can be extended to functions of n variables, and in this case there are ½n(n−1) such relations as (ii.).
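The test for a perfect differential may be illustrated numerically (an example not in the original): with X = 2xy and Y = x² + 3y², relation (ii.) holds, and the function ƒ = x²y + y³ has dƒ = X dx + Y dy.

```python
# Testing dX/dy = dY/dx by central finite differences at sample points,
# then verifying a function f with df/dx = X and df/dy = Y.
def X(x, y): return 2.0 * x * y
def Y(x, y): return x * x + 3.0 * y * y

def d_dx(g, x, y, h=1e-6):
    return (g(x + h, y) - g(x - h, y)) / (2.0 * h)

def d_dy(g, x, y, h=1e-6):
    return (g(x, y + h) - g(x, y - h)) / (2.0 * h)

pts = [(0.3, 0.7), (1.2, -0.5), (-0.8, 0.9)]
exact = all(abs(d_dy(X, x, y) - d_dx(Y, x, y)) < 1e-6 for x, y in pts)
assert exact   # relation (ii.) holds, so such a function f exists

def f(x, y): return x * x * y + y ** 3   # f = x^2 y + y^3
x0, y0 = 0.3, 0.7
assert abs(d_dx(f, x0, y0) - X(x0, y0)) < 1e-6
assert abs(d_dy(f, x0, y0) - Y(x0, y0)) < 1e-6
```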

In the case of a function of two variables x, y an abbreviated notation is often adopted for differential coefficients. The function being denoted by z, we write

$p,q,r,s,t \,$ for $\frac{\partial z}{\partial x},\frac{\partial z}{\partial y},\frac{\partial^2 z}{\partial x^2},\frac{\partial^2 z}{\partial x \partial y},\frac{\partial^2 z}{\partial y^2}.$

Partial differential coefficients of the second order are important in geometry as expressing the curvature of surfaces. When a surface is given by an equation of the form z = ƒ(x, y), the lines of curvature are determined by the equation

$\left \{ (1+q^2)s-pqt \right \}(dy)^2+\left \{ (1+q^2)r-(1+p^2)t \right \}dxdy$ $-\left \{ (1+p^2)s-pqr \right \}(dx)^2=0,$

and the principal radii of curvature are the values of R which satisfy the equation

$\text{R}^2(rt-s^2)-\text{R} \left \{(1+q^2)r-2pqs+(1+p^2)t\right \} \sqrt{1+p^2+q^2}+(1+p^2+q^2)^2=0.$
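As a numerical illustration (not in the original): for the sphere $z=\sqrt{a^2-x^2-y^2}$ every point is an umbilic and both principal radii of curvature have magnitude a, which may be verified by forming p, q, r, s, t and solving the quadratic in R.

```python
# Principal radii of curvature of a sphere of radius a, from the quadratic
# R^2 (rt - s^2) - R {(1+q^2)r - 2pqs + (1+p^2)t} (1+p^2+q^2)^(1/2)
#   + (1+p^2+q^2)^2 = 0.
import math

a = 2.0
x, y = 0.3, 0.4
z = math.sqrt(a * a - x * x - y * y)

p = -x / z
q = -y / z
r = -(z * z + x * x) / z ** 3
s = -(x * y) / z ** 3
t = -(z * z + y * y) / z ** 3

w = math.sqrt(1.0 + p * p + q * q)
A = r * t - s * s
B = -((1 + q * q) * r - 2 * p * q * s + (1 + p * p) * t) * w
C = (1 + p * p + q * q) ** 2

disc = B * B - 4 * A * C            # vanishes here: the point is an umbilic
root = math.sqrt(max(disc, 0.0))
R1 = (-B + root) / (2 * A)
R2 = (-B - root) / (2 * A)
assert abs(abs(R1) - a) < 1e-6 and abs(abs(R2) - a) < 1e-6
```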

44. The problem of change of variables was first considered by Brook Taylor in his Methodus incrementorum. In the case considered by Taylor y is expressed as a function of z, and z as a function of x, and it is desired to express the differential coefficients of y with respect to x without eliminating z. The result can be obtained at once by the rules for differentiating a product and a function of a function. We have

$\frac{dy}{dx}=\frac{dy}{dz}\cdot\frac{dz}{dx}, \quad \frac{d^2y}{dx^2}=\frac{d^2y}{dz^2}\left(\frac{dz}{dx}\right)^2+\frac{dy}{dz}\cdot\frac{d^2z}{dx^2},$

and so on.

The introduction of partial differential coefficients enables us to deal with more general cases of change of variables than that considered above. If u, v are new variables, and x, y are connected with them by equations of the type

$x=f_1(u, v), \quad y=f_2(u, v), \qquad \text{(i.)}$
while y is either an explicit or an implicit function of x, we have the problem of expressing the differential coefficients of various orders of y with respect to x in terms of the differential coefficients of v with respect to u. We have

$\frac{dy}{dx}=\left(\frac{\partial f_2}{\partial u}+\frac{\partial f_2}{\partial v}\frac{dv}{du}\right)\Bigg/ \left( \frac{\partial f_1}{\partial u}+\frac{\partial f_1}{\partial v}\frac{dv}{du}\right)$

by the rule of the total differential. In the same way, by means of differentials of higher orders, we may express $d^2y/dx^2$, and so on.
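The quotient formula for dy/dx may be tried on a concrete case (a sketch, not in the original): take f₁ = u cos v, f₂ = u sin v with v = u, an Archimedean spiral, and compare with the slope of the curve obtained numerically.

```python
# dy/dx = (df2/du + df2/dv dv/du) / (df1/du + df1/dv dv/du)
# for x = u cos v, y = u sin v with v = u.
import math

def f1(u, v): return u * math.cos(v)
def f2(u, v): return u * math.sin(v)

u = 0.8
v = u                        # v as a function of u
dv_du = 1.0

df1_du, df1_dv = math.cos(v), -u * math.sin(v)
df2_du, df2_dv = math.sin(v), u * math.cos(v)

dydx = (df2_du + df2_dv * dv_du) / (df1_du + df1_dv * dv_du)

# compare with a finite-difference slope along the curve
h = 1e-6
x1, y1 = f1(u - h, u - h), f2(u - h, u - h)
x2, y2 = f1(u + h, u + h), f2(u + h, u + h)
assert abs(dydx - (y2 - y1) / (x2 - x1)) < 1e-6
```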

Equations such as (i.) may be interpreted as effecting a transformation by which a point (u, v) is made to correspond to a point (x, y). The whole theory of transformations, and of functions, or differential expressions, which remain invariant under groups of transformations, has been studied exhaustively by Sophus Lie (see, in particular, his Theorie der Transformationsgruppen, Leipzig, 1888–1893).

A more general problem of change of variables is presented when it is desired to express the partial differential coefficients of a function V with respect to x, y, … in terms of those with respect to u, v, …, where u, v, … are connected with x, y, … by any functional relations. When there are two variables x, y, and u, v are given functions of x, y, we have

$\frac{\partial V}{\partial x}=\frac{\partial V}{\partial u}\frac{\partial u}{\partial x}+\frac{\partial V}{\partial v}\frac{\partial v}{\partial x},$ $\frac{\partial V}{\partial y}=\frac{\partial V}{\partial u}\frac{\partial u}{\partial y}+\frac{\partial V}{\partial v}\frac{\partial v}{\partial y},$

and the differential coefficients of higher orders are to be formed by repeated applications of the rule for differentiating a product and the rules of the type

$\frac{\partial}{\partial x}=\frac{\partial u}{\partial x}\frac{\partial}{\partial u}+\frac{\partial v}{\partial x}\frac{\partial}{\partial v}.$
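The rule may be verified on a simple example (not in the original): with u = x + y, v = x − y and V = uv, so that V = x² − y², the formula gives ∂V/∂x = 2x.

```python
# dV/dx = (dV/du)(du/dx) + (dV/dv)(dv/dx) for u = x + y, v = x - y, V = u*v.
x, y = 1.3, 0.4
u, v = x + y, x - y

dV_du, dV_dv = v, u          # partials of V = u*v with respect to u, v
du_dx, dv_dx = 1.0, 1.0      # partials of u, v with respect to x

dV_dx = dV_du * du_dx + dV_dv * dv_dx
assert abs(dV_dx - 2.0 * x) < 1e-12   # V = x^2 - y^2, so dV/dx = 2x
```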

When x, y are given functions of u, v, we have, instead of the above, such equations as

$\frac{\partial V}{\partial u}=\frac{\partial V}{\partial x}\frac{\partial x}{\partial u}+\frac{\partial V}{\partial y}\frac{\partial y}{\partial u};$

and ∂V/∂x, ∂V/∂y can be found by solving these equations, provided the Jacobian ∂(x, y)/∂(u, v) is not zero. The generalization of this method for the case of more than two variables need not detain us.
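The inverse case may be sketched numerically (an example not in the original): with polar coordinates x = u cos v, y = u sin v, the Jacobian ∂(x, y)/∂(u, v) equals u, and ∂V/∂x, ∂V/∂y follow by solving the pair of linear equations.

```python
# Solving  dV/du = Vx dx/du + Vy dy/du,  dV/dv = Vx dx/dv + Vy dy/dv
# for Vx, Vy when x = u cos v, y = u sin v (Jacobian = u, nonzero here).
import math

u, v = 1.5, 0.6
def V(x, y): return x * x + 3.0 * y      # sample function; dV/dx = 2x, dV/dy = 3

x, y = u * math.cos(v), u * math.sin(v)

dx_du, dx_dv = math.cos(v), -u * math.sin(v)
dy_du, dy_dv = math.sin(v), u * math.cos(v)

# dV/du, dV/dv by finite differences through the substitution
h = 1e-6
def W(uu, vv): return V(uu * math.cos(vv), uu * math.sin(vv))
dV_du = (W(u + h, v) - W(u - h, v)) / (2 * h)
dV_dv = (W(u, v + h) - W(u, v - h)) / (2 * h)

J = dx_du * dy_dv - dx_dv * dy_du        # the Jacobian d(x, y)/d(u, v) = u
Vx = (dV_du * dy_dv - dV_dv * dy_du) / J
Vy = (dV_dv * dx_du - dV_du * dx_dv) / J
assert abs(J - u) < 1e-12
assert abs(Vx - 2.0 * x) < 1e-5 and abs(Vy - 3.0) < 1e-5
```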

In cases like that here considered it is sometimes more convenient not to regard the equations connecting x, y with u, v as effecting a point transformation, but to consider the loci u = const., v = const. as two “families” of curves. Then in any region of the plane of (x, y) in which the Jacobian ∂(x, y)/∂(u, v) does not vanish or become infinite, any point (x, y) is uniquely determined by the values of u and v which belong to the curves of the two families that pass through the point. Such variables as u, v are then described as “curvilinear coordinates” of the point. This method is applicable to any number of variables. When the loci u = const., v = const. intersect each other at right angles, the variables are “orthogonal” curvilinear coordinates. Three-dimensional systems of such coordinates have important applications in mathematical physics. Reference may be made to G. Lamé, Leçons sur les coordonnées curvilignes (Paris, 1859), and to G. Darboux, Leçons sur les coordonnées curvilignes et systèmes orthogonaux (Paris, 1898).

When such a coordinate as u is connected with x and y by a functional relation of the form ƒ(x, y, u) = 0 the curves u = const. are a family of curves, and this family may be such that no two curves of the family have a common point. When this is not the case the points in which a curve ƒ(x, y, u) = 0 is intersected by a curve ƒ(x, y, u + δu) = 0 tend to limiting positions as δu is diminished indefinitely. The locus of these limiting positions is the “envelope” of the family, and in general it touches all the curves of the family. It is easy to see that, if u, v are the parameters of two families of curves which have envelopes, the Jacobian ∂(x, y)/∂(u, v) vanishes at all points on these envelopes. It is easy to see also that at any point where the reciprocal Jacobian ∂(u, v)/∂(x, y) vanishes, a curve of the family u touches a curve of the family v.
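A concrete envelope may be worked out (an example not in the original): for the family of unit circles ƒ(x, y, u) = (x − u)² + y² − 1 = 0 centred on the x-axis, the limiting intersections satisfy ƒ = 0 and ∂ƒ/∂u = 0 together; ∂ƒ/∂u = −2(x − u) = 0 gives x = u, whence y² = 1, so the envelope is the pair of lines y = ±1, each touching every circle of the family.

```python
# Envelope of the family f(x, y, u) = (x - u)^2 + y^2 - 1 = 0:
# eliminating u between f = 0 and df/du = 0 gives the lines y = 1, y = -1.
def f(x, y, u): return (x - u) ** 2 + y * y - 1.0

envelope_points = []
for i in range(5):
    u = 0.5 * i
    x = u                    # from df/du = -2(x - u) = 0
    for y in (1.0, -1.0):    # from f = 0 with x = u
        assert abs(f(x, y, u)) < 1e-12   # the point lies on the circle
        envelope_points.append((x, y))

# every envelope point has |y| = 1, i.e. lies on one of the two lines
assert all(abs(y) == 1.0 for _, y in envelope_points)
```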

If three variables x, y, z are connected by a functional relation ƒ(x, y, z) = 0, one of them, z say, may be regarded as an implicit function of the other two, and the partial differential coefficients of z with respect to x and y can be formed by the rule of the total differential. We have

$\frac{\partial z}{\partial x}=-\frac{\partial f}{\partial x}\left/\frac{\partial f}{\partial z}\right., \quad \frac{\partial z}{\partial y}=-\frac{\partial f}{\partial y}\left/\frac{\partial f}{\partial z}\right.,$

and there is no difficulty in proceeding to express the higher differential coefficients. There arises the problem of expressing the partial differential coefficients of x with respect to y and z in terms of those of z with respect to x and y. The problem is known as that of “changing the dependent variable.” It is solved by applying the rule of the total differential. Similar considerations are applicable to all cases in which n variables are connected by fewer than n equations.
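The rule for the first partial differential coefficients of an implicit function may be checked numerically (this sketch is not in the original): for the sphere ƒ(x, y, z) = x² + y² + z² − 4 = 0 it gives ∂z/∂x = −x/z.

```python
# Check of dz/dx = -(df/dx)/(df/dz) for f = x^2 + y^2 + z^2 - 4 = 0.
import math

def z_of(x, y): return math.sqrt(4.0 - x * x - y * y)   # branch z > 0

x, y = 0.5, 0.7
z = z_of(x, y)

df_dx, df_dz = 2.0 * x, 2.0 * z
dz_dx_rule = -df_dx / df_dz                  # = -x/z

# compare with a central finite difference of the explicit branch
h = 1e-6
dz_dx_fd = (z_of(x + h, y) - z_of(x - h, y)) / (2 * h)
assert abs(dz_dx_rule - dz_dx_fd) < 1e-8
```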

45. Taylor’s theorem can be extended to functions of several variables. In the case of two variables the general formula, with a remainder after n terms, can be written most simply in the form

$f(a+h, b+k)=f(a, b)+df(a, b)+\frac{1}{2!}d^2f(a, b)+\ldots +\frac{1}{(n-1)!}d^{n-1}f(a,b)+\frac{1}{n!}d^nf(a+\theta h,b+\theta k),$

in which

$d^rf(a,b)=\left \lbrack \left(h \frac{\partial}{\partial x}+k\frac{\partial}{\partial y}\right)^rf(x,y)\right \rbrack_{x=a, y=b},$