
 ordinary algebra. This study was inaugurated by George Peacock, who was one of the earliest mathematicians to recognize the symbolic character of the fundamental principles of algebra. About the same time, D. F. Gregory published a paper “on the real nature of symbolical algebra.” In Germany the work of Martin Ohm (System der Mathematik, 1822) marks a step forward. Notable service was also rendered by Augustus de Morgan, who applied logical analysis to the laws of mathematics.

The geometrical interpretation of imaginary quantities had an important influence on the development of symbolic algebras. The attempts to elucidate this question by H. Kühn (1750–1751) and Jean Robert Argand (1806) were completed by Carl Friedrich Gauss, and there followed the formulation of various systems of vector analysis by Sir William Rowan Hamilton, Hermann Grassmann and others. These algebras were essentially geometrical, and it remained, more or less, for the American mathematician Benjamin Peirce to devise systems of pure symbolic algebra; in this work he was ably seconded by his son Charles S. Peirce. In England, multiple algebra was developed by James Joseph Sylvester, who, in company with Arthur Cayley, expanded the theory of matrices, the germs of which are to be found in the writings of Hamilton (see above, under (B)).

The preceding summary shows the specialized nature which algebra has assumed since the 17th century. To attempt a history of the development of the various topics in this article is inappropriate, and we refer the reader to the separate articles.

The theory of algebraic forms is to a large extent connected with the linear transformation of algebraical polynomials which involve two or more variables. The theories of determinants and of symmetric functions and of the algebra of differential operations have an important bearing upon this comparatively new branch of mathematics. They are the chief instruments of research, and have themselves much benefited by being so employed. When a homogeneous polynomial is transformed by general linear substitutions as hereafter explained, and is then expressed in the original form with new coefficients affecting the new variables, certain functions of the new coefficients and variables are numerical multiples of the same functions of the original coefficients and variables. The investigation of the properties of these functions, as well for a single form as for a simultaneous set of forms, and as well for one as for many series of variables, is included in the theory of invariants. As far back as 1773 Joseph Louis Lagrange, and later Carl Friedrich Gauss, had met with simple cases of such functions; George Boole, in 1841 (Camb. Math. Journ. iii. pp. 1-20), made important steps, but it was not till 1845 that Arthur Cayley (Coll. Math. Papers, i. pp. 80-94, 95-112) showed by his calculus of hyperdeterminants that an infinite series of such functions might be obtained systematically. The subject was carried on over a long series of years by himself, J. J. Sylvester, G. Salmon, L. O. Hesse, S. H. Aronhold, C. Hermite, Francesco Brioschi, R. F. A. Clebsch, P. Gordan, &c. The year 1868 saw a considerable enlargement of the field of operations. This arose from the study by Felix Klein and Sophus Lie of a new theory of groups of substitutions; it was shown that there exists an invariant theory connected with every group of linear substitutions.
The invariant theory then existing was classified by them as appertaining to “finite continuous groups.” Other “Galois” groups were defined whose substitution coefficients have fixed numerical values, and are particularly associated with the theory of equations. Arithmetical groups, connected with the theory of quadratic forms and other branches of the theory of numbers, which are termed “discontinuous,” and infinite groups connected with differential forms and equations, came into existence, and also particular linear and higher transformations connected with analysis and geometry. The effect of this was to co-ordinate many branches of mathematics and greatly to increase the number of workers. The subject of transformation in general has been treated by Sophus Lie in the classical work Theorie der Transformationsgruppen. The present article is merely concerned with algebraical linear transformation. Two methods of treatment have been carried on in parallel lines, the unsymbolic and the symbolic; both of these originated with Cayley, but he with Sylvester and the English school have in the main confined themselves to the former, whilst Aronhold, Clebsch, Gordan, and the continental schools have principally restricted themselves to the latter. The two methods have been conducted so as to be in constant touch, though the nature of the results obtained by the one differs much from those which flow naturally from the other. Each has been singularly successful in discovering new lines of advance and in encouraging the other to renewed efforts. P. Gordan first proved that for any system of forms there exists a finite number of covariants, in terms of which all others are expressible as rational and integral functions. This enabled David Hilbert to produce a very simple unsymbolic proof of the same theorem. So the theory of the forms appertaining to a binary form of unrestricted order was first worked out by Cayley and P. A. MacMahon by unsymbolic methods, and later G. E. Stroh, from a knowledge of the results, was able to verify and extend the results by the symbolic method. The partition method of treating symmetrical algebra is one which has been singularly successful in indicating new paths of advance in the theory of invariants; the important theorem of expressibility is, directly we exclude unity from the partitions, a theorem concerning the expressibility of covariants, and involves the theory of the reducible forms and of the syzygies. The theory brought forward has not yet found a place in any systematic treatise in any language, so that it has been judged proper to give a fairly complete account of it.

Let there be given $$n^2$$ quantities

$$ \begin{matrix} a_{11} & a_{12} & a_{13} & \ldots & a_{1n} \\ a_{21} & a_{22} & a_{23} & \ldots & a_{2n} \\ a_{31} & a_{32} & a_{33} & \ldots & a_{3n} \\ . & . & . & \ldots &. \\ a_{n1} & a_{n2} & a_{n3} & \ldots & a_{nn} \end{matrix} $$

and form from them a product of $$n$$ quantities

$$ \begin{matrix} a_{1 \alpha} & a_{2 \beta} & a_{3 \gamma} & \ldots & a_{n \nu}, \end{matrix} $$

where the first suffixes are the natural numbers $$1, 2, 3, \ldots n$$ taken in order, and $$\alpha, \beta, \gamma, \ldots \nu$$ is some permutation of these $$n$$ numbers. This permutation by a transposition of two numbers, say $$\alpha, \beta,$$ becomes $$\beta, \alpha, \gamma, \ldots \nu ,$$ and by successively transposing pairs of letters the permutation can be reduced to the form $$1, 2, 3, \ldots n.$$ Let $$k$$ such transpositions be necessary (the number $$k$$ is not unique, but its parity is determinate, so that the sign $$(-)^k$$ below is definite); then the expression

$$ \Sigma (-)^k a_{1 \alpha} a_{2 \beta} a_{3 \gamma} \ldots a_{n \nu}, $$

the summation being for all permutations of the $$n$$ numbers, is called the determinant of the $$n^2$$ quantities. The quantities $$a_{1 \alpha} a_{2 \beta} \ldots$$ are called the elements of the determinant; the term $$(-)^k a_{1 \alpha} a_{2 \beta} a_{3 \gamma} \ldots a_{n \nu}$$ is called a member of the determinant, and there are evidently $$n!$$ members corresponding to the $$n!$$ permutations of the $$n$$ numbers $$1, 2, 3, \ldots n.$$ The determinant is usually written

$$ \Delta = \begin{vmatrix} a_{11} & a_{12} & a_{13} & \ldots & a_{1n} \\ a_{21} & a_{22} & a_{23} & \ldots & a_{2n} \\ a_{31} & a_{32} & a_{33} & \ldots & a_{3n} \\ . & . & . & \ldots & . \\ a_{n1} & a_{n2} & a_{n3} & \ldots & a_{nn} \end{vmatrix} $$

the square array being termed the matrix of the determinant. A matrix has in many parts of mathematics a signification apart from its evaluation as a determinant. A theory of matrices has been constructed by Cayley in connexion particularly with the theory of linear transformation. The matrix consists of $$n$$ rows and $$n$$ columns. Each row as well as each column supplies one and only one element to each member of the determinant. Consideration of the definition of the determinant shows that the value is unaltered when the suffixes in each element are transposed.

Theorem.—If the determinant is transformed so as to read by columns as it formerly did by rows its value is unchanged. The leading member of the determinant is $$a_{11} a_{22} a_{33} \ldots a_{nn},$$ and corresponds to the principal diagonal of the matrix.
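For instance, when $$n = 3$$ the $$3! = 6$$ permutations of the column suffixes yield the six members

$$ \Delta = \begin{vmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{vmatrix} = a_{11} a_{22} a_{33} - a_{11} a_{23} a_{32} - a_{12} a_{21} a_{33} + a_{12} a_{23} a_{31} + a_{13} a_{21} a_{32} - a_{13} a_{22} a_{31}, $$

the leading member $$a_{11} a_{22} a_{33}$$ taking the positive sign, and the member $$a_{12} a_{23} a_{31}$$, for example, likewise positive because its permutation $$2, 3, 1$$ is reduced to $$1, 2, 3$$ by $$k = 2$$ transpositions.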

We write frequently

$$ \Delta = \Sigma \pm a_{11} a_{22} a_{33} \ldots a_{nn} = (a_{11} a_{22} a_{33} \ldots a_{nn}). $$
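The definition and the theorem above may be sketched in modern notation as follows; this is merely an illustration, the function names being supplied for the purpose and forming no part of the text.

```python
from itertools import permutations

def det_by_permutations(a):
    """Evaluate a determinant directly from the definition: sum over all
    permutations of the column suffixes, each member taking one element
    from every row and every column, with sign (-1)^k."""
    n = len(a)
    total = 0
    for perm in permutations(range(n)):
        # The parity of k equals the parity of the number of inversions
        # in the permutation, so (-1)^k is well defined.
        k = sum(1 for i in range(n) for j in range(i + 1, n)
                if perm[i] > perm[j])
        member = (-1) ** k
        for row in range(n):
            member *= a[row][perm[row]]
        total += member
    return total

def transpose(a):
    """Read the square array by columns as it formerly read by rows."""
    return [list(col) for col in zip(*a)]
```

In accordance with the theorem, `det_by_permutations(a)` and `det_by_permutations(transpose(a))` agree for any square array `a`.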

If the first two columns of the determinant be transposed the