Page:EB1911 - Volume 08.djvu/252

 transformations or rationality group of another differential equation (see below); in particular, when the rationality group of an ordinary linear differential equation is integrable, the equation can be solved by quadratures.

Following the practical and provisional division of theories of differential equations, to which we alluded at starting, into transformation theories and function theories, we pass now to give some account of the latter. These are both a necessary logical complement of the former, and the only remaining resource when the expedients of the former have been exhausted. While in the former investigations we have dealt only with values of the independent variables about which the functions are developable, the leading idea now becomes, as was long ago remarked by G. Green, the consideration of the neighbourhood of the values of the variables for which this developable character ceases. Beginning, as before, with existence theorems applicable for ordinary values of the variables, we are to consider the cases of failure of such theorems.

When in a given set of differential equations the number of equations is greater than the number of dependent variables, the equations cannot be expected to have common solutions unless certain conditions of compatibility, obtainable by equating different forms of the same differential coefficients deducible from the equations, are satisfied. We have had examples in systems of linear equations, and in the case of a set of equations $p_1 = \varphi_1, \ldots, p_r = \varphi_r$. For the case when the number of equations is the same as that of dependent variables, the following is a general theorem which should be referred to: Let there be $r$ equations in $r$ dependent variables $z_1, \ldots, z_r$ and $n$ independent variables $x_1, \ldots, x_n$; let the differential coefficient of $z_\sigma$ of highest order which enters be of order $h_\sigma$, and suppose $d^{h_\sigma} z_\sigma / dx_1^{h_\sigma}$ to enter, so that the equations can be written $d^{h_\sigma} z_\sigma / dx_1^{h_\sigma} = \Phi_\sigma$, where in the general differential coefficient of $z_\rho$ which enters in $\Phi_\sigma$, say $d^{k_1 + \cdots + k_n} z_\rho / dx_1^{k_1} \cdots dx_n^{k_n}$, we have $k_1 < h_\rho$ and $k_1 + \cdots + k_n \le h_\rho$. Let $a_1, \ldots, a_n$, $b_1, \ldots, b_r$, and $b_\rho^{k_1 \cdots k_n}$ be a set of values of $x_1, \ldots, x_n$, $z_1, \ldots, z_r$ and of the differential coefficients entering in $\Phi_\sigma$ about which all the functions $\Phi_1, \ldots, \Phi_r$ are developable. Corresponding to each dependent variable $z_\sigma$, we take now a set of $h_\sigma$ functions of $x_2, \ldots, x_n$, say $\varphi_\sigma, \varphi_\sigma^{(1)}, \ldots, \varphi_\sigma^{(h_\sigma - 1)}$, arbitrary save that they must be developable about $a_2, a_3, \ldots, a_n$, and such that for these values of $x_2, \ldots, x_n$ the function $\varphi_\rho$ reduces to $b_\rho$, and the differential coefficient $d^{k_2 + \cdots + k_n} \varphi_\rho^{(k_1)} / dx_2^{k_2} \cdots dx_n^{k_n}$ reduces to $b_\rho^{k_1 \cdots k_n}$.
Then the theorem is that there exists one, and only one, set of functions $z_1, \ldots, z_r$ of $x_1, \ldots, x_n$, developable about $a_1, \ldots, a_n$, satisfying the given differential equations, and such that for $x_1 = a_1$ we have $z_\sigma = \varphi_\sigma$, $dz_\sigma/dx_1 = \varphi_\sigma^{(1)}, \ldots, d^{h_\sigma - 1} z_\sigma / dx_1^{h_\sigma - 1} = \varphi_\sigma^{(h_\sigma - 1)}$. And, moreover, if the arbitrary functions $\varphi_\sigma, \varphi_\sigma^{(1)}, \ldots$ contain a certain number of arbitrary variables $t_1, \ldots, t_m$, and be developable about the values $t_1^0, \ldots, t_m^0$ of these variables, the solutions $z_1, \ldots, z_r$ will contain $t_1, \ldots, t_m$, and be developable about $t_1^0, \ldots, t_m^0$.
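The normal form assumed in this theorem (in substance the existence theorem of Cauchy and Sophie Kowalevski for systems) may be exhibited thus, in the symbols of the statement above; the display is an illustrative sketch only:

```latex
% Each equation is solved for the highest x_1-derivative of its own z_sigma:
\frac{\partial^{h_\sigma} z_\sigma}{\partial x_1^{h_\sigma}}
  = \Phi_\sigma\!\left(x_1,\ldots,x_n;\;
      \frac{\partial^{k_1+\cdots+k_n} z_\rho}{\partial x_1^{k_1}\cdots\,\partial x_n^{k_n}}\right),
\qquad k_1 < h_\rho, \quad k_1+\cdots+k_n \le h_\rho;
% with the arbitrary initial data prescribed upon x_1 = a_1:
\left.\frac{\partial^{\,j} z_\sigma}{\partial x_1^{\,j}}\right|_{x_1 = a_1}
  = \varphi_\sigma^{(j)}(x_2,\ldots,x_n),
\qquad j = 0, 1, \ldots, h_\sigma - 1.
```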

The proof of this theorem may be given by showing that if ordinary power series in $x_1 - a_1, \ldots, x_n - a_n$, $t_1 - t_1^0, \ldots, t_m - t_m^0$ be substituted in the equations, wherein in $z_\sigma$ the coefficients of $(x_1 - a_1)^0, (x_1 - a_1), \ldots, (x_1 - a_1)^{h_\sigma - 1}$ are the arbitrary functions $\varphi_\sigma, \varphi_\sigma^{(1)}, \ldots, \varphi_\sigma^{(h_\sigma - 1)}$, divided respectively by 1, 1!, 2!, &c., then the differential equations determine uniquely all the other coefficients, and that the resulting series are convergent. We rely, in fact, upon the theory of monogenic analytical functions (see ), a function being determined entirely by its development in the neighbourhood of one set of values of the independent variables, from which all its other values arise by continuation; it being of course understood that the coefficients in the differential equations are to be continued at the same time. But it is to be remarked that there is no ground for believing, if this method of continuation be utilized, that the function is single-valued; we may quite well return to the same values of the independent variables with a different value of the function, belonging, as we say, to a different branch of the function; and there is even no reason for assuming that the number of branches is finite, or that different branches have the same singular points and regions of existence. Moreover, and this is the most difficult consideration of all, all these circumstances may be dependent upon the values supposed given to the arbitrary constants of the integral; in other words, the singular points may be either fixed, being determined by the differential equations themselves, or they may be movable with the variation of the arbitrary constants of integration.
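The distinction between fixed and movable singular points may be illustrated by two equations of the first order; the pair chosen is an illustrative sketch, not taken from the text:

```latex
% Fixed singular point: x\,dy/dx = m\,y has the integral y = C x^m, whose only
% possible singular point x = 0 is prescribed by the equation itself, whatever C may be.
x\,\frac{dy}{dx} = m\,y \quad\Longrightarrow\quad y = C\,x^{m}.
% Movable singular point: dy/dx = y^2 has the integral y = 1/(c - x), whose pole
% x = c varies with the arbitrary constant of integration.
\frac{dy}{dx} = y^{2} \quad\Longrightarrow\quad y = \frac{1}{c - x}.
```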
Such difficulties arise even in establishing the reversion of an elliptic integral, in solving the equation $(dx/ds)^2 = (x - a_1)(x - a_2)(x - a_3)(x - a_4)$; about an ordinary value the right side is developable; if we put $x - a_1 = t_1^2$, the right side becomes developable about $t_1 = 0$; if we put $x = 1/t$, the right side of the changed equation is developable about $t = 0$; it is quite easy to show that the integral reducing to a definite value $x_0$ for a value $s_0$ is obtainable by a series in integral powers; this, however, must be supplemented by showing that for no value of $s$ does the value of $x$ become entirely undetermined.
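The effect of the substitution $x - a_1 = t_1^2$ may be verified directly; the following computation, in the notation of the equation above, is supplied by way of illustration:

```latex
% With x = a_1 + t_1^2 we have dx/ds = 2 t_1\,dt_1/ds, while the vanishing
% factor x - a_1 = t_1^2 appears on the right, so that
\left(2 t_1 \frac{dt_1}{ds}\right)^{2}
  = t_1^{2}\,(t_1^{2} + a_1 - a_2)(t_1^{2} + a_1 - a_3)(t_1^{2} + a_1 - a_4),
% whence, dividing by 4 t_1^2,
\left(\frac{dt_1}{ds}\right)^{2}
  = \tfrac{1}{4}\,(t_1^{2} + a_1 - a_2)(t_1^{2} + a_1 - a_3)(t_1^{2} + a_1 - a_4),
% whose right side is developable, and not zero when the a's are distinct, about t_1 = 0.
```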

These remarks will show the place of the theory now to be sketched of a particular class of ordinary linear homogeneous differential equations whose importance arises from the completeness and generality with which they can be discussed. We have seen that if in the equations $dy/dx = y_1$, $dy_1/dx = y_2, \ldots, dy_{n-2}/dx = y_{n-1}$, $dy_{n-1}/dx = a_n y + a_{n-1} y_1 + \cdots + a_1 y_{n-1}$, where $a_1, a_2, \ldots, a_n$ are now to be taken to be rational functions of $x$, the value $x = x^0$ be one for which no one of these rational functions is infinite, and $y^0, y_1^0, \ldots, y_{n-1}^0$ be quite arbitrary finite values, then the equations are satisfied by $y = y^0 u + y_1^0 u_1 + \cdots + y_{n-1}^0 u_{n-1}$, where $u, u_1, \ldots, u_{n-1}$ are functions of $x$, independent of $y^0, \ldots, y_{n-1}^0$, developable about $x = x^0$; this value of $y$ is such that for $x = x^0$ the functions $y, y_1, \ldots, y_{n-1}$ reduce respectively to $y^0, y_1^0, \ldots, y_{n-1}^0$; it can be proved that the region of existence of these series extends within a circle centre $x^0$ and radius equal to the distance from $x^0$ of the nearest point at which one of $a_1, \ldots, a_n$ becomes infinite. Now consider a region enclosing $x^0$ and only one of the places, say $\Sigma$, at which one of $a_1, \ldots, a_n$ becomes infinite. When $x$ is made to describe a closed curve in this region, including this point $\Sigma$ in its interior, it may well happen that the continuations of the functions $u, u_1, \ldots, u_{n-1}$ give, when we have returned to the point $x$, values $v, v_1, \ldots, v_{n-1}$, so that the integral under consideration becomes changed to $y^0 v + y_1^0 v_1 + \cdots + y_{n-1}^0 v_{n-1}$.
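The simplest instance of the circumstance described arises already for $n = 1$; the following equation is chosen merely for illustration:

```latex
% For dy/dx = r\,y/x, whose rational coefficient r/x is infinite only at x = 0,
% the integral is y = C x^r = C e^{r \log x}.  A positive circuit of x about 0
% increases \log x by 2\pi i, so that the continuation returns multiplied by a constant:
y = C\,x^{r} \;\longmapsto\; C\,e^{\,r(\log x + 2\pi i)} = e^{2\pi i r}\,y;
% the multiplier corresponding to \mu below is here e^{2\pi i r},
% so that r = (1/2\pi i)\log\mu, as for the exponent r_1 of the text.
```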
At $x^0$ let this branch and the corresponding values of $y_1, \ldots, y_{n-1}$ be $\eta^0, \eta_1^0, \ldots, \eta_{n-1}^0$; then, as there is only one series satisfying the equation and reducing to $(\eta^0, \eta_1^0, \ldots, \eta_{n-1}^0)$ for $x = x^0$, and the coefficients in the differential equation are single-valued functions, we must have $\eta^0 u + \eta_1^0 u_1 + \cdots + \eta_{n-1}^0 u_{n-1} = y^0 v + y_1^0 v_1 + \cdots + y_{n-1}^0 v_{n-1}$; as this holds for arbitrary values of $y^0, \ldots, y_{n-1}^0$, upon which $u, \ldots, u_{n-1}$ and $v, \ldots, v_{n-1}$ do not depend, it follows that each of $v, \ldots, v_{n-1}$ is a linear function of $u, \ldots, u_{n-1}$ with constant coefficients, say $v_i = A_{i1} u + \cdots + A_{in} u_{n-1}$. Then $y^0 v + \cdots + y_{n-1}^0 v_{n-1} = \left(\sum_i A_{i1} y_i^0\right) u + \cdots + \left(\sum_i A_{in} y_i^0\right) u_{n-1}$; this is equal to $\mu(y^0 u + \cdots + y_{n-1}^0 u_{n-1})$ if $\sum_i A_{ir} y_i^0 = \mu y_{r-1}^0$; eliminating $y^0, \ldots, y_{n-1}^0$ from these linear equations, we have a determinantal equation of order $n$ for $\mu$; let $\mu_1$ be one of its roots; determining the ratios of $y^0, y_1^0, \ldots, y_{n-1}^0$ to satisfy the linear equations, we have thus proved that there exists an integral, H, of the equation, which when continued round the point $\Sigma$ and back to the starting-point, becomes changed to $H_1 = \mu_1 H$. Let now $\xi$ be the value of $x$ at $\Sigma$ and $r_1$ one of the values of $(1/2\pi i) \log \mu_1$; consider the function $(x - \xi)^{-r_1} H$; when $x$ makes a circuit round $x = \xi$, this becomes changed to $\exp(-2\pi i r_1)(x - \xi)^{-r_1} \mu_1 H$, that is, it is unchanged; thus we may put $H = (x - \xi)^{r_1} \varphi_1$, $\varphi_1$ being a function single-valued for paths in the region considered described about $\Sigma$, and therefore, by Laurent's Theorem (see ), capable of expression in the annular region about this point by a series of positive and negative integral powers of $x - \xi$, which in general may contain an infinite number of negative powers; there is, however, no reason to suppose $r_1$ to be an integer, or even real. Thus, if all the roots of the determinantal equation in $\mu$ are different, we obtain $n$ integrals of the forms $(x - \xi)^{r_1} \varphi_1, \ldots, (x - \xi)^{r_n} \varphi_n$.
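For an equation of the second order the roots of the determinantal equation can sometimes be found from the coefficients directly; the homogeneous equation of Euler, taken here only as an illustrative sketch, furnishes the simplest case:

```latex
% Euler's equation  x^2 y'' + p_0\,x\,y' + q_0\,y = 0  (p_0, q_0 constants),
% with its only finite singular point at \xi = 0.  Substituting y = x^r gives
r(r - 1) + p_0\,r + q_0 = 0,
% and each root r of this quadratic furnishes an integral H = x^r, which a
% circuit about x = 0 changes into \mu H with \mu = e^{2\pi i r} — a root of
% the determinantal equation of the text, here of order n = 2.
```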
In general we obtain as many integrals of this form as there are really different roots; and the problem arises to discover, in case a root be $k$ times repeated, $k - 1$ equations of as simple a form as possible to replace the $k - 1$ equations of the form $y^0 v + \cdots + y_{n-1}^0 v_{n-1} = \mu(y^0 u + \cdots + y_{n-1}^0 u_{n-1})$ which would have existed had the roots been different. The most natural method of obtaining a suggestion lies probably in remarking that if $r_2 = r_1 + h$, there is an integral $[(x - \xi)^{r_1 + h} \varphi_2 - (x - \xi)^{r_1} \varphi_1]/h$, where the coefficients in $\varphi_2$ are