Page:EB1911 - Volume 08.djvu/247

wherein p0, p1 are power series in x, y, should satisfy the equation, it is necessary, as we find by equating like terms, that

p1 = δ0p0, p2 = δ0p1 + δ1p0, . . .,

and in general ps+1 = δ0ps + s1δ1ps−1 + s2δ2ps−2 + ... + δsp0 (s1, s2, ... being the binomial coefficients), where

δr = ard/dx + brd/dy.
Now compare with the given equation another equation A(x, y, t)dF/dx + B(x, y, t)dF/dy = dF/dt, wherein each coefficient in the expansion of either A or B is real and positive, and not less than the absolute value of the corresponding coefficient in the expansion of a or b. In the second equation let us substitute a series

F = P0 + tP1 + t²P2/2! + ...,
wherein the coefficients in P0 are real and positive, and each not less than the absolute value of the corresponding coefficient in p0; then putting Δr = Ard/dx + Brd/dy we obtain necessary equations of the same form as before, namely,

P1 = Δ0P0, P2 = Δ0P1 + Δ1P0, . . .,
and in general Ps+1 = Δ0Ps + s1Δ1Ps−1 + ... + ΔsP0. These give for every coefficient in Ps+1 an integral aggregate with real positive coefficients of the coefficients in Ps, Ps−1, ..., P0 and the coefficients in A and B; and they are the same aggregates as would be given by the previously obtained equations for the corresponding coefficients in ps+1 in terms of the coefficients in ps, ps−1, ..., p0 and the coefficients in a and b. Hence as the coefficients in P0 and also in A, B are real and positive, it follows that the values obtained in succession for the coefficients in P1, P2, ... are real and positive; and further, taking account of the fact that the absolute value of a sum of terms is not greater than the sum of the absolute values of the terms, it follows, for each value of s, that every coefficient in ps+1 is, in absolute value, not greater than the corresponding coefficient in Ps+1. Thus if the series for F be convergent, the series for ƒ will also be; and we are thus reduced to (1) specifying functions A, B with real positive coefficients, each in absolute value not less than the corresponding coefficient in a, b; (2) proving that the equation AdF/dx + BdF/dy = dF/dt possesses an integral P0 + tP1 + t²P2/2! + ... in which the coefficients in P0 are real and positive, and each not less than the absolute value of the corresponding coefficient in p0. If a, b be developable for x, y both in absolute value less than r and for t less in absolute value than R, and for such values a, b be both less in absolute value than the real positive constant M, it is not difficult to verify that we may take A = B = M[1 − (x + y)/r]⁻¹(1 − t/R)⁻¹, and obtain

$\mathrm{F} = r - (r - x - y)\left[1 - \frac{4\mathrm{MR}}{r}\left(1 - \frac{x + y}{r}\right)^{-2}\log\left(1 - \frac{t}{\mathrm{R}}\right)^{-1}\right]^{\frac{1}{2}},$

and that this solves the problem when x, y, t are sufficiently small for the two cases p0 = x, p0 = y.
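A symbolic check of this majorant may reassure the reader. The following sketch (a modern illustration using the Python library sympy, not part of the original argument; the exponent of (1 − (x + y)/r) inside F is taken as −2, which is what the equation requires) verifies that F satisfies AdF/dx + BdF/dy = dF/dt.

```python
# Sketch: verify that the majorant F satisfies A dF/dx + B dF/dy = dF/dt,
# with A = B = M[1 - (x + y)/r]^(-1) (1 - t/R)^(-1) and the exponent of
# (1 - (x + y)/r) inside F taken as -2.
import sympy as sp

x, y, t, r, R, M = sp.symbols('x y t r R M', positive=True)

u = 1 - (x + y)/r
F = r - (r - x - y)*sp.sqrt(1 - (4*M*R/r)*u**(-2)*sp.log(1/(1 - t/R)))
A = M/(u*(1 - t/R))

residual = A*sp.diff(F, x) + A*sp.diff(F, y) - sp.diff(F, t)
# evaluate at a sample point well inside the region of convergence
point = {x: sp.Rational(1, 10), y: sp.Rational(1, 10), t: sp.Rational(1, 10),
         r: 1, R: 1, M: sp.Rational(1, 10)}
val = float(residual.subs(point).evalf())
print(val)
```

The residual vanishes to rounding error, which is the numerical counterpart of the algebraic identity used in the text.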
One obvious application of the general theorem is to the proof of the existence of an integral of an ordinary linear differential equation of the nth order, given by the n equations dy/dx = y1, dy1/dx = y2, ..., dyn−1/dx = p − p1yn−1 − ... − pny; but in fact any simultaneous system of ordinary equations is reducible to a system of the form
dy1/dx = φ1(x, y1, ... yn), ..., dyn/dx = φn(x, y1, ... yn).
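The reduction just described may be sketched in modern computational terms. The illustration below (the writer's example, not the text's) rewrites y″ = −y as the first-order pair dy/dx = y1, dy1/dx = −y and integrates it by a standard one-step (classical Runge–Kutta) method, recovering the known integral y = cos x.

```python
# Sketch: y'' = -y as the first-order system dy/dx = y1, dy1/dx = -y,
# integrated by the classical fourth-order Runge-Kutta method.
import math

def f(x, state):
    y, y1 = state
    return (y1, -y)                      # dy/dx = y1,  dy1/dx = -y

def rk4(f, x, state, h, n):
    for _ in range(n):
        k1 = f(x, state)
        k2 = f(x + h/2, tuple(s + h/2*k for s, k in zip(state, k1)))
        k3 = f(x + h/2, tuple(s + h/2*k for s, k in zip(state, k2)))
        k4 = f(x + h, tuple(s + h*k for s, k in zip(state, k3)))
        state = tuple(s + h/6*(a + 2*b + 2*c + d)
                      for s, a, b, c, d in zip(state, k1, k2, k3, k4))
        x += h
    return state

y_end, _ = rk4(f, 0.0, (1.0, 0.0), 0.01, 100)   # integrate from x = 0 to x = 1
err = abs(y_end - math.cos(1.0))                # exact integral is y = cos x
```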
Suppose we have k homogeneous linear partial equations of the first order in n independent variables, the general equation being aσ1dƒ/dx1 + ... + aσndƒ/dxn = 0, where σ = 1, ... k, and that we desire to know whether the equations have common solutions, and if so, how many. It is to be understood that the equations are linearly independent, which implies that k ≤ n and that not every determinant of k rows and columns is identically zero in the matrix in which the i-th element of the σ-th row is aσi (i = 1, ... n, σ = 1, ... k). Denoting the left side of the σ-th equation by Pσƒ, it is clear that every common solution of the two equations Pσƒ = 0, Pρƒ = 0, is also a solution of the equation Pρ(Pσƒ) − Pσ(Pρƒ) = 0. We immediately find, however, that this is also a linear equation, namely, ΣHidƒ/dxi = 0, where Hi = Pρaσi − Pσaρi, and if it be not already contained among the given equations, or be linearly deducible from them, it may be added to them, as not introducing any additional limitation of the possibility of their having common solutions. Proceeding thus with every pair of the original equations, and then with every pair of the possibly augmented system so obtained, and so on continually, we shall arrive at a system of equations, linearly independent of each other and therefore not more than n in number, such that the combination, in the way described, of every pair of them leads to an equation which is linearly deducible from them. If the number of this so-called complete system is n, the equations give dƒ/dx1 = 0, ... dƒ/dxn = 0, leading to the nugatory result ƒ = a constant.
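The key fact used here, that Pρ(Pσƒ) − Pσ(Pρƒ) contains no second derivatives of ƒ, can be checked symbolically. The coefficients in the sketch below are illustrative choices, not taken from the text.

```python
# Sketch: for P1 f = x1 df/dx1 + x2 df/dx2 and P2 f = x2 df/dx1 + df/dx2
# (illustrative coefficients), the combination P1(P2 f) - P2(P1 f) is free of
# second derivatives of f, hence again a homogeneous linear first-order equation.
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
f = sp.Function('f')(x1, x2)

def P(a1, a2, g):
    return a1*sp.diff(g, x1) + a2*sp.diff(g, x2)

comm = sp.expand(P(x1, x2, P(x2, 1, f)) - P(x2, 1, P(x1, x2, f)))
second_order = [d for d in comm.atoms(sp.Derivative)
                if sum(c for _, c in d.variable_count) > 1]
print(comm)                      # only first derivatives of f survive
```

For these particular coefficients the combination reduces to −dƒ/dx2, so the augmented system would acquire the equation dƒ/dx2 = 0.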
Suppose, then, the number of this system to be r &lt; n; suppose, further, that from the matrix of the coefficients a determinant of r rows and columns not vanishing identically is that formed by the coefficients of the differential coefficients of ƒ in regard to x1 ... xr; also that the coefficients are all developable about the values x1 = x⁰1, ... xn = x⁰n, and that for these values the determinant just spoken of is not zero. Then the main theorem is that the complete system of r equations, and therefore the originally given set of k equations, have in common n − r solutions, say ωr+1, ... ωn, which reduce respectively to xr+1, ... xn when in them for x1, ... xr are respectively put x⁰1, ... x⁰r; so that also the equations have in common a solution reducing when x1 = x⁰1, ... xr = x⁰r to an arbitrary function ψ(xr+1, ... xn) which is developable about x⁰r+1, ... x⁰n, namely, this common solution is ψ(ωr+1, ... ωn). It is seen at once that this result is a generalization of the theorem for r = 1, and its proof is conveniently given by induction from that case. It can be verified without difficulty (1) that if from the r equations of the complete system we form r independent linear aggregates, with coefficients not necessarily constants, the new system is also a complete system; (2) that if in place of the independent variables x1, ... xn we introduce any other variables which are independent functions of the former, the new equations also form a complete system. It is convenient, then, from the complete system of r equations to form r new equations by solving separately for dƒ/dx1, ..., dƒ/dxr; suppose the general equation of the new system to be Qσƒ = dƒ/dxσ + cσ,r+1dƒ/dxr+1 + ... + cσndƒ/dxn = 0 (σ = 1, ... r).
Then it is easily seen that the equation QρQσƒ − QσQρƒ = 0 contains only the differential coefficients of ƒ in regard to xr+1 ... xn; as it is at most a linear function of Q1ƒ, ... Qrƒ, it must be identically zero. So reduced, the system is called a Jacobian system. Of this system Q1ƒ = 0 has n − 1 principal solutions reducing respectively to x2, ... xn when

x1 = x⁰1,
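A concrete Jacobian system may make the definition plainer. The following sketch (an illustrative example with n = 3, r = 2, chosen by the present writer and not taken from the text) verifies that the commutator vanishes and exhibits the single (n − r = 1) common solution reducing to z at x = 0, y = 0.

```python
# Sketch of an explicit Jacobian system (n = 3, r = 2):
#   Q1 f = df/dx - y df/dz = 0,   Q2 f = df/dy - x df/dz = 0.
import sympy as sp

x, y, z = sp.symbols('x y z')
f = sp.Function('f')(x, y, z)

Q1 = lambda g: sp.diff(g, x) - y*sp.diff(g, z)
Q2 = lambda g: sp.diff(g, y) - x*sp.diff(g, z)

comm = sp.expand(Q1(Q2(f)) - Q2(Q1(f)))   # vanishes: a Jacobian system
w = z + x*y                               # the single common solution;
checks = (sp.simplify(Q1(w)), sp.simplify(Q2(w)))   # w reduces to z at x = y = 0
```

Every common solution of the pair is then a function of w = z + xy alone, in agreement with the main theorem.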
and its form shows that of these the first r − 1 are exactly x2 ... xr. Let these n − 1 functions together with x1 be introduced as n new independent variables in all the r equations. Since the first equation is satisfied by n − 1 of the new independent variables, it will contain no differential coefficients in regard to them, and will reduce therefore simply to dƒ/dx1 = 0, expressing that any common solution of the r equations is a function only of the n − 1 remaining variables. Thereby the investigation of the common solutions is reduced to the same problem for r − 1 equations in n − 1 variables. Proceeding thus, we reach at length one equation in n − r + 1 variables, from which, by retracing the analysis, the proposition stated is seen to follow.

The analogy with the case of one equation is, however, still closer. With the coefficients cσj of the equations Qσƒ = 0 in transposed array (σ = 1, ... r, j = r + 1, ... n) we can put down the (n − r) equations, dxj = c1jdx1 + ... + crjdxr, equivalent to the r(n − r) equations dxj/dxσ = cσj. In order that, consistently with them, we may be able to regard xr+1, ... xn as functions of x1, ... xr, these being regarded as independent variables, it is clearly necessary that when we differentiate cσj in regard to xρ on this hypothesis the result should be the same as when we differentiate cρj in regard to xσ on this hypothesis. The differential coefficient of a function ƒ of x1, ... xn on this hypothesis, in regard to xρ, is, however, dƒ/dxρ + cρ,r+1dƒ/dxr+1 + ... + cρndƒ/dxn, namely, is Qρƒ. Thus the consistence of the n − r total equations requires the conditions Qρcσj − Qσcρj = 0, which are, however, verified in virtue of Qρ(Qσƒ) − Qσ(Qρƒ) = 0. And it can in fact be easily verified that if ωr+1, ... ωn be the principal solutions of the Jacobian system, Qσƒ = 0, reducing respectively to xr+1, ... xn when x1 = x⁰1, ... xr = x⁰r, and the equations ωr+1 = x⁰r+1, ... ωn = x⁰n be solved for xr+1, ... xn to give xj = ψj(x1, ... xr, x⁰r+1, ... x⁰n), these values solve the total equations and reduce respectively to x⁰r+1, ... x⁰n when x1 = x⁰1, ... xr = x⁰r. And the total equations have no other solutions with these initial values. Conversely, the existence of these solutions of the total equations can be deduced a priori and the theory of the Jacobian system based upon them. The theory of such total equations, in general, finds its natural place under the heading Pfaffian Expressions, below.
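The passage to total equations can likewise be illustrated in a small case (n = 3, r = 2; the example is the present writer's, not the text's): for dz = ydx + xdy the consistency condition holds, and the solution with initial value z0 at (x0, y0) is z = z0 + xy − x0y0.

```python
# Sketch: the total equation dz = y dx + x dy (c1 = y, c2 = x), with
# Q1 = d/dx + y d/dz and Q2 = d/dy + x d/dz; the consistency condition and
# the solution through (x0, y0, z0) are checked symbolically.
import sympy as sp

x, y, z, x0, y0, z0 = sp.symbols('x y z x0 y0 z0')
c1, c2 = y, x

Q1 = lambda g: sp.diff(g, x) + c1*sp.diff(g, z)
Q2 = lambda g: sp.diff(g, y) + c2*sp.diff(g, z)
consistency = sp.simplify(Q1(c2) - Q2(c1))     # must vanish

zsol = z0 + x*y - x0*y0                        # candidate solution
resid_x = sp.simplify(sp.diff(zsol, x) - c1)   # dz/dx - c1
resid_y = sp.simplify(sp.diff(zsol, y) - c2)   # dz/dy - c2
initial = sp.simplify(zsol.subs({x: x0, y: y0}) - z0)   # reduces to z0
```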

A practical method of reducing the solution of the r equations of a Jacobian system to that of a single equation in n − r + 1 variables may be explained in connexion with a geometrical interpretation which will perhaps be clearer in a particular case, say n = 3, r = 2. There is then only one total equation, say dz = adx + bdy; if we do not take account of the condition of integrability, which is in this case da/dy + bda/dz = db/dx + adb/dz, this equation may be regarded as defining through an arbitrary point (x0, y0, z0) of three-dimensioned space (about which a, b are developable) a plane, namely, z − z0 = a0(x − x0) + b0(y − y0), and therefore, through this arbitrary point, ∞¹ directions, namely, all those in the plane. If now there be a surface z = ψ(x, y), satisfying dz = adx + bdy and passing through (x0, y0, z0), this plane will touch the surface, and the operations of passing along the surface from (x0, y0, z0) to (x0 + dx0, y0, z0 + dz0), and then to (x0 + dx0, y0 + dy0, z0 + d1z0), ought to lead to the same final value of z as do the operations of passing along the surface from (x0, y0, z0) to (x0, y0 + dy0, z0 + δz0), and then to (x0 + dx0, y0 + dy0, z0 + δ1z0), namely, δ1z0 ought to be equal to d1z0. But we find

$d_1z_0 = a_0dx_0 + b_0dy_0 + dx_0dy_0\left(\frac{db}{dx_0} + a_0\frac{db}{dz_0}\right),\quad \delta_1z_0 = a_0dx_0 + b_0dy_0 + dx_0dy_0\left(\frac{da}{dy_0} + b_0\frac{da}{dz_0}\right),$

and so at once reach the condition of integrability. If now we put
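The two-path computation just described can be carried out symbolically. The sketch below (illustrative, with the increments kept to first order in dx0, dy0) forms the difference of the two routes round the rectangle and recovers the condition of integrability.

```python
# Sketch: the two routes around the rectangle, with increments expanded to
# first order; their difference is dx dy times the integrability defect.
import sympy as sp

x, y, z, dx, dy = sp.symbols('x y z dx dy')
a = sp.Function('a')(x, y, z)
b = sp.Function('b')(x, y, z)

# route 1: step dx (z grows by a dx), then dy, using b at the moved point
d1 = a*dx + (b + sp.diff(b, x)*dx + sp.diff(b, z)*a*dx)*dy
# route 2: step dy (z grows by b dy), then dx, using a at the moved point
d2 = b*dy + (a + sp.diff(a, y)*dy + sp.diff(a, z)*b*dy)*dx

gap = sp.expand(sp.expand(d1 - d2)/(dx*dy))   # db/dx + a db/dz - da/dy - b da/dz
```

The gap vanishes exactly when da/dy + bda/dz = db/dx + adb/dz, which is the condition stated in the text.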