Page:EB1911 - Volume 14.djvu/577

Rh as the “infinite part” of ƒ(x). The resolution of a function which becomes infinite into an infinite part and a finite part can often be effected by taking the infinite part to be infinite of the same order as one of the functions in the scale written above, or in some more comprehensive scale. This resolution is the inverse of the process of evaluating an indeterminate form of the type ∞ − ∞.

For example $\lim_{x=0}\{(e^x-1)^{-1}-x^{-1}\}$ is finite and equal to $-\tfrac{1}{2}$, and the function $(e^x-1)^{-1}-x^{-1}$ can be expanded in a power series in x.
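A modern numerical sketch of this example (the step sizes are illustrative; the limiting value −½ follows from the expansion e^x − 1 = x + x²/2 + …):

```python
import math

def g(x):
    # the function (e^x - 1)^(-1) - x^(-1); it has a removable
    # singularity at x = 0, with limiting value -1/2
    # (its power series begins -1/2 + x/12 - ...)
    return 1.0 / math.expm1(x) - 1.0 / x

# as x is diminished the values settle near -1/2
samples = [g(10.0**-k) for k in range(1, 6)]
```

`math.expm1` computes e^x − 1 accurately for small x, which keeps the subtraction of two large reciprocals from losing all its significant figures.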

39. The nature of a function of two or more variables, and the meaning to be attached to continuity and limits in respect of such functions, have been explained under Function. The theorems of differential calculus which relate to such functions are in general the same whether the number of variables is two or any greater number, and it will generally be convenient to state the theorems for two variables.

40. Let u or ƒ(x, y) denote a function of two variables x and y. If we regard y as constant, u or ƒ becomes a function of one variable x, and we may seek to differentiate it with respect to x. If the function of x is differentiable, the differential coefficient which is formed in this way is called the “partial differential coefficient” of u or ƒ with respect to x, and is denoted by $\partial u/\partial x$ or $\partial f/\partial x$. The symbol “∂” was appropriated for partial differentiation by C. G. J. Jacobi (1841). It had before been written indifferently with “d” as a symbol of differentiation. Euler had written (dƒ/dx) for the partial differential coefficient of ƒ with respect to x. Sometimes it is desirable to put in evidence the variable which is treated as constant, and then the partial differential coefficient is written “$\left( \frac{df}{dx} \right)_y$” or “$\left( \frac{\partial f}{\partial x} \right)_y$”. This course is often adopted by writers on Thermodynamics. Sometimes the symbols d or ∂ are dropped, and the partial differential coefficient is denoted by ux or ƒx. As a definition of the partial differential coefficient we have the formula

$\frac{\partial f}{\partial x}=\lim_{h=0}\frac{f(x+h, y)-f(x,y)}{h}$

In the same way we may form the partial differential coefficient with respect to y by treating x as a constant.
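The defining formula can be imitated numerically by taking h small but finite. A minimal sketch (the function ƒ and the point chosen are illustrative, not from the text):

```python
def f(x, y):
    # an illustrative function: f_x = 2xy, f_y = x^2 + 3y^2 by hand
    return x**2 * y + y**3

def partial_x(f, x, y, h=1e-6):
    # the defining difference quotient, y held constant
    return (f(x + h, y) - f(x, y)) / h

def partial_y(f, x, y, h=1e-6):
    # the same rule with x held constant
    return (f(x, y + h) - f(x, y)) / h
```

At (2, 3) the quotients approach the hand-computed values 12 and 31 as h is diminished.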

The introduction of partial differential coefficients enables us to solve at once for a surface a problem analogous to the problem of tangents for a curve; and it also enables us to take the first step in the solution of the problem of maxima and minima for a function of several variables. If the equation of a surface is expressed in the form z = ƒ(x, y), the direction cosines of the normal to the surface at any point are in the ratios ∂ƒ/∂x : ∂ƒ/∂y : −1. If ƒ is a maximum or a minimum at (x, y), then ∂ƒ/∂x and ∂ƒ/∂y vanish at that point.
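Both statements can be checked on a concrete surface. A sketch using the paraboloid z = x² + y² (an illustrative choice):

```python
def f(x, y):
    # a paraboloid with its minimum at the origin
    return x*x + y*y

def grad(f, x, y, h=1e-6):
    # central-difference estimates of the two partial coefficients
    fx = (f(x + h, y) - f(x - h, y)) / (2*h)
    fy = (f(x, y + h) - f(x, y - h)) / (2*h)
    return fx, fy

# at (1, 2) the normal has direction ratios f_x : f_y : -1 = 2 : 4 : -1;
# at the minimum (0, 0) both partial coefficients vanish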

In applications of the differential calculus to mathematical physics we are in general concerned with functions of three variables x, y, z, which represent the coordinates of a point; and then considerable importance attaches to partial differential coefficients which are formed by a particular rule. Let F(x, y, z) be the function, P a point (x, y, z), P′ a neighbouring point (x + Δx, y + Δy, z + Δz), and let Δs be the length of PP′. The value of F(x, y, z) at P may be denoted shortly by F(P). A limit of the same nature as a partial differential coefficient is expressed by the formula

$\lim_{\Delta s=0}\frac{\text{F}(\text{P}^\prime)-\text{F}(\text{P})}{\Delta s},$

in which Δs is diminished indefinitely by bringing P′ up to P, and P′ is supposed to approach P along a straight line, for example, the tangent to a curve or the normal to a surface. The limit in question is denoted by ∂F/∂h, in which it is understood that h indicates a direction, that of PP′. If l, m, n are the direction cosines of the limiting direction of the line PP′, supposed drawn from P to P′, then

$\frac{\partial \text{F}}{\partial h}=l\frac{\partial \text{F}}{\partial x}+m\frac{\partial \text{F}}{\partial y}+n\frac{\partial \text{F}}{\partial z}.$

The operation of forming ∂F/∂h is called “differentiation with respect to an axis” or “vector differentiation.”
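The rule can be verified numerically: the limit taken along the direction (l, m, n) agrees with the combination of the three partial coefficients. A sketch with an illustrative F, not one from the text:

```python
def F(x, y, z):
    # an illustrative function: F_x = y, F_y = x, F_z = 2z by hand
    return x*y + z*z

def dF_dh(p, direction, ds=1e-6):
    # (F(P') - F(P)) / Δs, with P' a distance ds from P along `direction`
    l, m, n = direction
    x, y, z = p
    return (F(x + l*ds, y + m*ds, z + n*ds) - F(x, y, z)) / ds

def dF_dh_formula(p, direction):
    # l ∂F/∂x + m ∂F/∂y + n ∂F/∂z, the stated rule
    l, m, n = direction
    x, y, z = p
    return l*y + m*x + n*2*z
```
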

41. The most important theorem in regard to partial differential coefficients is the theorem of the total differential. We may write down the equation

$f(a+h,b+k)-f(a,b)=f(a+h,b+k)-f(a,b+k)\, +f(a,b+k)-f(a,b).\,$ If ƒx is a continuous function of x when x lies between a and a + h and y = b + k, and if further ƒy is a continuous function of y when y lies between b and b + k, there exist values of θ and η which lie between 0 and 1 and have the properties expressed by the equations

$\begin{matrix}f(a+h,b+k)-f(a,b+k) & = & hf_x(a+\theta h,b+k)\\ f(a,b+k)-f(a,b) & = & kf_y(a,b+\eta k)\end{matrix}$ Further, ƒx(a + θh, b + k) and ƒy(a, b + ηk) tend to the limits ƒx(a, b) and ƒy(a, b) when h and k tend to zero, provided the differential coefficients ƒx, ƒy are continuous at the point (a, b). Hence in this case the above equation can be written

$f(a+h, b+k)-f(a, b)=hf_x(a, b)+kf_y(a, b)+\text{R}, \,$ where

$\lim_{h=0,\,k=0}\frac{\text{R}}{h}=0 \mbox{ and } \lim_{h=0,\,k=0}\frac{\text{R}}{k}=0.$

In accordance with the notation of differentials this equation gives

$df=\frac{\partial f}{\partial x}dx+ \frac{\partial f}{\partial y}dy.$

Just as in the case of functions of one variable, dx and dy are arbitrary finite differences, and dƒ is not the difference of two values of ƒ, but is so much of this difference as need be retained for the purpose of forming differential coefficients.
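The theorem can be exhibited numerically: for small finite h and k the remainder R is small compared with both. A sketch with an illustrative ƒ and its hand-computed partial coefficients:

```python
def f(x, y):
    # an illustrative function: f_x = 3x^2 + y^2, f_y = 2xy by hand
    return x**3 + x*y**2

a, b = 1.0, 2.0
h, k = 1e-3, 1e-3

df = (3*a*a + b*b)*h + (2*a*b)*k      # the total differential h f_x + k f_y
R = f(a + h, b + k) - f(a, b) - df    # the remainder of the theorem
```

Here R is of the second order in h and k, so R/h and R/k both tend to zero as the increments are diminished.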

The theorem of the total differential is immediately applicable to the differentiation of implicit functions. When y is a function of x which is given by an equation of the form ƒ(x, y) = 0, and it is either impossible or inconvenient to solve this equation so as to express y as an explicit function of x, the differential coefficient dy/dx can be formed without solving the equation. We have at once

$\frac{dy}{dx}=-\frac{\partial f}{\partial x}\left / \frac{\partial f}{\partial y}\right.$
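The rule may be checked against a curve that can also be solved explicitly; the unit circle is a convenient illustrative choice:

```python
import math

# implicit curve f(x, y) = x^2 + y^2 - 1 = 0
def dydx_implicit(x, y):
    # dy/dx = -(∂f/∂x)/(∂f/∂y) = -(2x)/(2y)
    return -(2.0*x) / (2.0*y)

# the same slope from the explicit upper branch y = sqrt(1 - x^2)
def dydx_explicit(x):
    return -x / math.sqrt(1.0 - x*x)
```

At the point (0.6, 0.8) both give the slope −3/4, without the implicit form ever having been solved for y.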

This rule was known, in all essentials, to Fermat and de Sluse before the invention of the algorithm of the differential calculus.

An important theorem, first proved by Euler, is immediately deducible from the theorem of the total differential. If ƒ(x, y) is a homogeneous function of degree n then

$x\frac{\partial f}{\partial x}+y\frac{\partial f}{\partial y}=nf(x,y).$

The theorem is applicable to functions of any number of variables and is generally known as Euler’s theorem of homogeneous functions.
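Euler’s theorem can be verified on a particular homogeneous function; the cubic chosen here is illustrative:

```python
def f(x, y):
    # homogeneous of degree 3: f(tx, ty) = t^3 f(x, y)
    return x**3 + x**2 * y

def fx(x, y):
    return 3*x**2 + 2*x*y   # ∂f/∂x, worked by hand

def fy(x, y):
    return x**2             # ∂f/∂y, worked by hand
```

For any (x, y), x·ƒx + y·ƒy reproduces 3ƒ(x, y), the degree n = 3 appearing as the factor.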

42. Many problems in which partial differential coefficients occur are simplified by the introduction of certain determinants called “Jacobians” or “functional determinants.” They were introduced into Analysis by C. G. J. Jacobi (J. f. Math., Crelle, Bd. 22, 1841, p. 319). The Jacobian of u1, u2, …, un with respect to x1, x2, …, xn is the determinant

$\begin{vmatrix}\frac{\partial u_1}{\partial x_1} & \frac{\partial u_1}{\partial x_2} & \cdots & \frac{\partial u_1}{\partial x_n} \\ \frac{\partial u_2}{\partial x_1} & \frac{\partial u_2}{\partial x_2} & \cdots & \frac{\partial u_2}{\partial x_n} \\ \vdots & & & \\ \frac{\partial u_n}{\partial x_1} & \frac{\partial u_n}{\partial x_2} & \cdots & \frac{\partial u_n}{\partial x_n} \end{vmatrix}$

in which the constituents of the rth row are the n partial differential coefficients of ur with respect to the n variables x. This determinant is expressed shortly by

$\frac{\partial (u_1,u_2,\ldots,u_n)}{\partial (x_1,x_2,\ldots,x_n)}.$

Jacobians possess many properties analogous to those of ordinary differential coefficients, for example, the following:—

$\frac{\partial (u_1,u_2,\ldots,u_n)}{\partial (x_1,x_2,\ldots,x_n)} \times \frac{\partial (x_1,x_2,\ldots,x_n)}{\partial (u_1,u_2,\ldots,u_n)}=1,$

$\frac{\partial (u_1,u_2,\ldots,u_n)}{\partial (y_1,y_2,\ldots,y_n)} \times \frac{\partial (y_1,y_2,\ldots,y_n)}{\partial (x_1,x_2,\ldots,x_n)}=\frac{\partial (u_1,u_2,\ldots,u_n)}{\partial (x_1,x_2,\ldots,x_n)}.$
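The first of these properties can be checked on the familiar change between rectangular and polar coordinates, x = r cos t, y = r sin t (an illustrative choice):

```python
import math

def J_forward(r, t):
    # determinant of ∂(x, y)/∂(r, t) for x = r cos t, y = r sin t
    dxdr, dxdt = math.cos(t), -r * math.sin(t)
    dydr, dydt = math.sin(t),  r * math.cos(t)
    return dxdr*dydt - dxdt*dydr   # simplifies to r

def J_inverse(r, t):
    # determinant of ∂(r, t)/∂(x, y) for the inverse map; it equals 1/r
    return 1.0 / r
```

The product of the two determinants is 1, as the rule analogous to dy/dx · dx/dy = 1 requires.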

If n functions (u1, u2, …, un) of n variables (x1, x2, …, xn) are not independent, but are connected by a relation ƒ(u1, u2, …, un) = 0, then

$\frac{\partial (u_1,u_2,\ldots,u_n)}{\partial (x_1,x_2,\ldots,x_n)}=0;$

and, conversely, when this condition is satisfied identically the functions u1, u2, …, un are not independent.
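A pair of connected functions makes the vanishing of the Jacobian concrete; the pair below (connected by u2 − u1² = 0) is an illustrative choice:

```python
def u1(x, y):
    return x + y

def u2(x, y):
    return (x + y)**2   # connected to u1 by the relation u2 - u1^2 = 0

def jacobian_det(x, y, h=1e-6):
    # numerical Jacobian determinant of (u1, u2) with respect to (x, y)
    def d(g, wrt):
        if wrt == 'x':
            return (g(x + h, y) - g(x - h, y)) / (2*h)
        return (g(x, y + h) - g(x, y - h)) / (2*h)
    return d(u1, 'x')*d(u2, 'y') - d(u1, 'y')*d(u2, 'x')
```

The determinant is 2(x + y) − 2(x + y) = 0 identically, whatever point is chosen.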

43. Partial differential coefficients of the second and higher orders can be formed in the same way as those of the first order. For example, when there are two variables x, y, the first partial derivatives ∂ƒ/∂x and ∂ƒ/∂y are functions of x and y, which we may seek to differentiate partially with respect to x or y. The most important theorem in relation to partial differential coefficients of orders higher than the first is the theorem that the values of such coefficients do not depend upon the order in which the differentiations are performed. For example, we have the equation

$\frac{\partial}{\partial x}\left(\frac{\partial f}{\partial y}\right)=\frac{\partial}{\partial y}\left(\frac{\partial f}{\partial x}\right).\qquad \mathrm{(i.)}$

This theorem is not true without limitation. The conditions for its validity have been investigated very completely by H. A. Schwarz (see his Ges. math. Abhandlungen, Bd. 2, Berlin, 1890, p. 275). It is a sufficient, though not a necessary, condition that all the differential coefficients concerned should be continuous functions of x, y. In consequence of the relation (i.) the differential coefficients expressed in the two members of this relation are written

$\frac{\partial^2f}{\partial x \partial y}$ or $\frac{\partial^2f}{\partial y \partial x}.$
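For a smooth function the two orders of differentiation agree, as a numerical sketch shows (the polynomial below is illustrative; all its derivatives are continuous, so the sufficient condition of the theorem holds):

```python
def f(x, y):
    # a smooth polynomial; by hand, ∂²f/∂x∂y = ∂²f/∂y∂x = 6x²y
    return x**3 * y**2

def fx(x, y, h=1e-5):
    # first partial with respect to x (central difference)
    return (f(x + h, y) - f(x - h, y)) / (2*h)

def fy(x, y, h=1e-5):
    return (f(x, y + h) - f(x, y - h)) / (2*h)

def fxy(x, y, h=1e-4):
    # differentiate f_x with respect to y
    return (fx(x, y + h) - fx(x, y - h)) / (2*h)

def fyx(x, y, h=1e-4):
    # differentiate f_y with respect to x: the opposite order
    return (fy(x + h, y) - fy(x - h, y)) / (2*h)
```

At (1, 2) both orders give (to the accuracy of the differences) the hand-computed value 12.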