Page:EB1911 - Volume 01.djvu/662

 add to the (n+1)th row; by $$b_{21}, b_{22} \dots b_{2n}$$, and add to the (n+2)th row; by $$b_{31}, b_{32} \dots b_{3n}$$ and add to the (n+3)rd row, &c. C then becomes

$\begin{vmatrix} a_{11}b_{11}+a_{12}b_{12}+\dots+a_{1n}b_{1n}, & a_{21}b_{11}+a_{22}b_{12}+\dots+a_{2n}b_{1n}, & \dots & a_{n1}b_{11}+a_{n2}b_{12}+\dots+a_{nn}b_{1n} \\

a_{11}b_{21}+a_{12}b_{22}+\dots+a_{1n}b_{2n}, & a_{21}b_{21}+a_{22}b_{22}+\dots+a_{2n}b_{2n}, & \dots & a_{n1}b_{21}+a_{n2}b_{22}+\dots+a_{nn}b_{2n} \\

a_{11}b_{31}+a_{12}b_{32}+\dots+a_{1n}b_{3n}, & a_{21}b_{31}+a_{22}b_{32}+\dots+a_{2n}b_{3n}, & \dots & a_{n1}b_{31}+a_{n2}b_{32}+\dots+a_{nn}b_{3n} \\

\vdots & \vdots & & \vdots \\

a_{11}b_{n1}+a_{12}b_{n2}+\dots+a_{1n}b_{nn}, & a_{21}b_{n1}+a_{22}b_{n2}+\dots+a_{2n}b_{nn}, & \dots & a_{n1}b_{n1}+a_{n2}b_{n2}+\dots+a_{nn}b_{nn} \\

\end{vmatrix}$

and all the elements of D become zero. Now by the expansion theorem the determinant becomes

$(-1)^{1+2+3+ \ldots + 2n}\mbox{B.C} = (-1)^{n(2n+1)+n}\mbox{C} = \mbox{C},$

the exponent $$n(2n+1)+n = 2n(n+1)$$ being even.

We thus obtain for the product a determinant of order $$n$$. We may say that, in the resulting determinant, the element in the ith row and kth column is obtained by multiplying the elements in the kth row of the first determinant severally by the elements in the ith row of the second, and has the expression

$a_{k1}b_{i1} + a_{k2}b_{i2} + a_{k3}b_{i3} + \dots + a_{kn}b_{in}$,

and we obtain other expressions by transforming either or both determinants so as to read by columns as they formerly did by rows.
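The row-by-row rule is easy to check numerically. The following sketch (the two third-order arrays are illustrative, not from the text) forms the element in the ith row and kth column as $$a_{k1}b_{i1} + \dots + a_{kn}b_{in}$$ and verifies that the resulting determinant equals the product of the two original determinants:

```python
def det(m):
    """Determinant by Laplace expansion along the first row (fine for small orders)."""
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j]
               * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

# two arbitrary third-order determinants
a = [[2, 1, 0],
     [1, 3, 1],
     [0, 1, 4]]
b = [[1, 2, 1],
     [0, 1, 3],
     [2, 0, 1]]
n = len(a)

# element in row i, column k: rows of the first multiplied severally by rows of the second
c = [[sum(a[k][j] * b[i][j] for j in range(n)) for k in range(n)]
     for i in range(n)]

assert det(c) == det(a) * det(b)  # the multiplication theorem
print(det(a), det(b), det(c))     # 18 11 198
```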

Remark.—In particular the square of a determinant is a determinant of the same order $$(b_{11}b_{22}b_{33}\dots b_{nn})$$ such that $$b_{ik} = b_{ki}$$; it is for this reason termed symmetrical.

The Adjoint or Reciprocal Determinant arises from $$\Delta = (a_{11}a_{22}a_{33}\dots a_{nn})$$ by substituting for each element $$a_{ik}$$ the corresponding minor $$\mbox{A}_{ik}$$, so as to form $$\mbox{D} = (\mbox{A}_{11}\mbox{A}_{22}\mbox{A}_{33} \dots \mbox{A}_{nn})$$. If we form the product $$\Delta . \mbox{D}$$ by the theorem for the multiplication of determinants we find that the element in the ith row and kth column of the product is

$a_{k1}\mbox{A}_{i1} + a_{k2}\mbox{A}_{i2} + \dots + a_{kn}\mbox{A}_{in}$,

the value of which is zero when $$k$$ is different from $$i$$, whilst it has the value $$\Delta$$ when $$k = i$$. Hence the product determinant has the principal diagonal elements each equal to $$\Delta$$ and the remaining elements zero. Its value is therefore $$\Delta^n$$ and we have the identity

$\mbox{D}. \Delta = \Delta^n$ or $\mbox{D} = \Delta^{n-1}$.
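These identities can be checked numerically. A minimal sketch, using an illustrative third-order determinant and taking the minors $$\mbox{A}_{ik}$$ with their usual alternating signs (the convention the product formula requires):

```python
def det(m):
    """Determinant by Laplace expansion along the first row (fine for small orders)."""
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j]
               * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

def minor(m, i, k):
    """Signed first minor A_ik: strike out row i and column k."""
    sub = [row[:k] + row[k + 1:] for r, row in enumerate(m) if r != i]
    return (-1) ** (i + k) * det(sub)

a = [[2, 1, 0], [1, 3, 1], [0, 1, 4]]  # arbitrary illustrative determinant
n = len(a)
delta = det(a)
adj = [[minor(a, i, k) for k in range(n)] for i in range(n)]  # the array of D

# element of the product Delta.D in row i, column k: Delta on the diagonal, zero elsewhere
for i in range(n):
    for k in range(n):
        s = sum(a[k][j] * adj[i][j] for j in range(n))
        assert s == (delta if i == k else 0)

assert det(adj) == delta ** (n - 1)  # D = Delta^(n-1)
```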

It can now be proved that the first minor of the adjoint determinant, say $$\mbox{B}_{rs}$$, is equal to $$\Delta^{n-2}a_{rs}$$.

From the equations

$a_{r1}x_{1} + a_{r2}x_{2} + \dots + a_{rn}x_{n} = \xi_{r} \quad (r = 1, 2, \dots n),$

we derive, on multiplying by the minors and adding, the second system

$\mbox{A}_{1r}\xi_{1} + \mbox{A}_{2r}\xi_{2} + \dots + \mbox{A}_{nr}\xi_{n} = \Delta x_{r},$

and, treating this in the same manner with $$\mbox{D}$$ in place of $$\Delta$$ and the minors $$\mbox{B}_{rs}$$ in place of the $$\mbox{A}_{rs}$$, the third system

$\mbox{B}_{r1}\Delta x_{1} + \mbox{B}_{r2}\Delta x_{2} + \dots + \mbox{B}_{rn}\Delta x_{n} = \mbox{D}\xi_{r} = \Delta^{n-1}\xi_{r};$

division by $$\Delta$$ and comparison of the first and third systems yields

$\mbox{B}_{rs} = \Delta^{n-2}a_{rs}$.

In general it can be proved that any minor of order $$p$$ of the adjoint is equal to the complementary of the corresponding minor of the original multiplied by the (p – 1)th power of the original determinant.
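The relation $$\mbox{B}_{rs} = \Delta^{n-2}a_{rs}$$, which is the case $$p = n - 1$$ of this general statement, admits a direct numerical check; the determinant below is illustrative, and the minors are again taken with their alternating signs:

```python
def det(m):
    """Determinant by Laplace expansion along the first row (fine for small orders)."""
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j]
               * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

def minor(m, i, k):
    """Signed first minor: strike out row i and column k."""
    sub = [row[:k] + row[k + 1:] for r, row in enumerate(m) if r != i]
    return (-1) ** (i + k) * det(sub)

a = [[2, 1, 0], [1, 3, 1], [0, 1, 4]]
n = len(a)
delta = det(a)
adj = [[minor(a, i, k) for k in range(n)] for i in range(n)]

# first minor of the adjoint equals Delta^(n-2) times the corresponding element
for r in range(n):
    for s in range(n):
        assert minor(adj, r, s) == delta ** (n - 2) * a[r][s]
```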

Theorem.—The adjoint determinant is the (n – 1)th power of the original determinant. The adjoint determinant will be seen subsequently to present itself in the theory of linear equations and in the theory of linear transformation.

Determinants of Special Forms.—It was observed above that the square of a determinant when expressed as a determinant of the same order is such that its elements have the property expressed by $$a_{ik} = a_{ki}$$. Such determinants are called symmetrical. It is easy to see that the adjoint determinant is also symmetrical, viz. such that $$\mbox{A}_{ik} = \mbox{A}_{ki}$$, for the determinant got by suppressing the ith row and kth column differs only by an interchange of rows and columns from that got by suppressing the kth row and ith column. If any symmetrical determinant vanish and be bordered as shown below

$\begin{vmatrix} a_{11} & a_{12} & a_{13} & \lambda_{1} \\ a_{12} & a_{22} & a_{23} & \lambda_{2} \\ a_{13} & a_{23} & a_{33} & \lambda_{3} \\ \lambda_{1} & \lambda_{2} & \lambda_{3} & . \end{vmatrix}$

it is a perfect square when considered as a function of $$\lambda_{1}, \lambda_{2}, \lambda_{3}$$. For since $$\mbox{A}_{11}\mbox{A}_{22} - \mbox{A}_{12}^2 = \Delta a_{33}$$, with similar relations, and $$\Delta = 0$$, we have a number of relations similar to $$\mbox{A}_{11}\mbox{A}_{22} = \mbox{A}_{12}^2$$, and either $$\mbox{A}_{rs} = + \sqrt{\mbox{A}_{rr}\mbox{A}_{ss}}$$ or $$- \sqrt{\mbox{A}_{rr}\mbox{A}_{ss}}$$ for all different values of $$r$$ and $$s$$. Now the determinant has the value

$- \{ \lambda_{1}^2 \mbox{A}_{11} + \lambda_{2}^2 \mbox{A}_{22} + \lambda_{3}^2 \mbox{A}_{33} + 2 \lambda_{2} \lambda_{3} \mbox{A}_{23} + 2 \lambda_{3} \lambda_{1} \mbox{A}_{31} + 2 \lambda_{1} \lambda_{2} \mbox{A}_{12} \}$

$= - \Sigma \lambda_{r}^2 \mbox{A}_{rr} - 2 \Sigma \lambda_{r} \lambda_{s} \mbox{A}_{rs}$ in general, and hence by substitution

$- \{ \lambda_{1} \sqrt{\mbox{A}_{11}} \pm \lambda_{2} \sqrt{\mbox{A}_{22}} \pm \dots \pm \lambda_{n} \sqrt{\mbox{A}_{nn}} \}^2.$
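The perfect-square property can be exercised on a concrete example (a vanishing symmetrical determinant chosen for illustration): for every choice of the λ's the bordered determinant comes out as the negative of a perfect square.

```python
from math import isqrt
from itertools import product

def det(m):
    """Determinant by Laplace expansion along the first row (fine for small orders)."""
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j]
               * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

# a vanishing symmetrical determinant (its rows are linearly dependent)
a = [[1, 1, 0],
     [1, 2, 1],
     [0, 1, 1]]
assert det(a) == 0

# border it with every small integer choice of lambda_1, lambda_2, lambda_3
for l1, l2, l3 in product(range(-3, 4), repeat=3):
    bordered = [[1, 1, 0, l1],
                [1, 2, 1, l2],
                [0, 1, 1, l3],
                [l1, l2, l3, 0]]
    d = det(bordered)
    assert d <= 0 and isqrt(-d) ** 2 == -d  # minus a perfect square
```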

A skew symmetric determinant has $$a_{rr} = 0$$ and $$a_{rs} = -a_{sr}$$ for all values of $$r$$ and $$s$$. Such a determinant when of uneven degree vanishes, for if we multiply each row by $$-1$$ we multiply the determinant by $$(-1)^n = -1$$, and the effect of this is otherwise merely to transpose the determinant so that it reads by rows as it formerly did by columns, an operation which we know leaves the determinant unaltered. Hence $$\Delta = -\Delta$$ or $$\Delta = 0$$. When a skew symmetric determinant is of even degree it is a perfect square. This theorem is due to Cayley, and reference may be made to Salmon’s Higher Algebra, 4th ed. Art. 39. In the case of the determinant of order 4 the square root is

$a_{12}a_{34} - a_{13}a_{24} + a_{14}a_{23}$.
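Both statements are easy to verify numerically; the entries below are arbitrary illustrative values:

```python
def det(m):
    """Determinant by Laplace expansion along the first row (fine for small orders)."""
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j]
               * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

# skew symmetric of uneven degree: vanishes
m3 = [[0, 2, -3],
      [-2, 0, 5],
      [3, -5, 0]]
assert det(m3) == 0

# skew symmetric of degree 4: a perfect square
a12, a13, a14, a23, a24, a34 = 1, 2, 3, 4, 5, 6
m4 = [[0,    a12,  a13,  a14],
      [-a12, 0,    a23,  a24],
      [-a13, -a23, 0,    a34],
      [-a14, -a24, -a34, 0]]
pf = a12 * a34 - a13 * a24 + a14 * a23  # the square root given in the text
assert det(m4) == pf ** 2
print(pf, det(m4))  # 8 64
```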

A skew determinant is one which is skew symmetric in all respects, except that the elements of the leading diagonal are not all zero. Such a determinant is of importance in the theory of orthogonal substitution. In the theory of surfaces we transform from one set of three rectangular axes to another by the substitutions

$\mbox{X} = ax \ + by \ + cz,$

$\mbox{Y} = a'x + b'y + c'z,$

$\mbox{Z} = a''x + b''y + c''z,$

where $$\mbox{X}^2 + \mbox{Y}^2 + \mbox{Z}^2 = x^2 + y^2 + z^2$$. This relation implies six equations between the coefficients, so that only three of them are independent. Further we find

$x = a\mbox{X} + a'\mbox{Y} + a''\mbox{Z},$

$y = b\mbox{X} + b'\mbox{Y} + b''\mbox{Z},$

$z = c\mbox{X} + c'\mbox{Y} + c''\mbox{Z},$

and the problem is to express the nine coefficients in terms of three independent quantities.

In general in space of $$n$$ dimensions we have $$n$$ substitutions similar to

$X_{1} = a_{11}x_{1} + a_{12}x_{2} + \dots + a_{1n}x_{n}$,

and we have to express the $$n^2$$ coefficients in terms of $$\tfrac{1}{2}n(n-1)$$ independent quantities; which must be possible, because $$\Sigma \text{X}^2 = \Sigma x^2$$. Following Cayley, we connect the two sets of variables with a third set $$\xi_{1}, \xi_{2}, \dots \xi_{n}$$ by the $$2n$$ equations

$x_{r} = b_{r1}\xi_{1} + b_{r2}\xi_{2} + \dots + b_{rn}\xi_{n},$

$\text{X}_{r} = b_{1r}\xi_{1} + b_{2r}\xi_{2} + \dots + b_{nr}\xi_{n},$

where $$b_{rr} = 1$$ and $$b_{rs} = -b_{sr}$$ for all values of $$r$$ and $$s$$. There are then $$\tfrac{1}{2}n(n-1)$$ quantities $$b_{rs}$$. Let the determinant of the b’s be $$\Delta_{b}$$ and $$\text{B}_{rs}$$ the minor corresponding to $$b_{rs}$$. We can eliminate the quantities $$\xi_{1}, \xi_{2}, \dots \xi_{n}$$ and obtain $$n$$ relations

$\Delta_{b} \text{X}_{1} = (2\text{B}_{11} - \Delta_{b})x_{1} + 2\text{B}_{21}x_{2} + 2\text{B}_{31}x_{3} + \dots,$

$\Delta_{b} \text{X}_{2} = 2\text{B}_{12}x_{1} + (2\text{B}_{22} - \Delta_{b})x_{2} + 2\text{B}_{32}x_{3} + \dots,$

$\dots$

and from these another equivalent set

$\Delta_{b} x_{1} = (2\text{B}_{11} - \Delta_{b})\text{X}_{1} + 2\text{B}_{12}\text{X}_{2} + 2\text{B}_{13}\text{X}_{3} + \dots,$

$\Delta_{b} x_{2} = 2\text{B}_{21}\text{X}_{1} + (2\text{B}_{22} - \Delta_{b})\text{X}_{2} + 2\text{B}_{23}\text{X}_{3} + \dots,$

$\dots$

and now writing

$\frac{2\text{B}_{ii} - \Delta_{b}}{\Delta_{b}} = a_{ii}, \qquad \frac{2\text{B}_{ik}}{\Delta_{b}} = a_{ik},$

we have a transformation which is orthogonal, because $$\Sigma X^2 = \Sigma x^2$$ and the elements $$a_{ii}$$, $$a_{ik}$$ are functions of the $$\tfrac{1}{2}n(n-1)$$ independent quantities $$b$$. We may therefore form an orthogonal transformation in association with every skew determinant which has its leading diagonal elements unity, for the $$\tfrac{1}{2}n(n-1)$$ quantities $$b$$ are clearly arbitrary.
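The construction can be checked in exact rational arithmetic. The sketch below uses an arbitrary skew determinant of order 3 with unit diagonal, interprets the $$\text{B}_{rs}$$ as the minors taken with their usual alternating signs (an assumption about the convention, consistent with the second-order example below), and confirms that the coefficients $$a_{ik}$$ form an orthogonal array:

```python
from fractions import Fraction

def det(m):
    """Determinant by Laplace expansion along the first row (fine for small orders)."""
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j]
               * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

def minor(m, i, k):
    """Signed first minor: strike out row i and column k."""
    sub = [row[:k] + row[k + 1:] for r, row in enumerate(m) if r != i]
    return (-1) ** (i + k) * det(sub)

# an arbitrary skew determinant with unit diagonal, b_rs = -b_sr
b = [[1, 2, -3],
     [-2, 1, 5],
     [3, -5, 1]]
n = len(b)
delta_b = det(b)
B = [[minor(b, i, k) for k in range(n)] for i in range(n)]

# a_ii = (2B_ii - Delta_b)/Delta_b,  a_ik = 2B_ik/Delta_b
a = [[Fraction(2 * B[i][k] - (delta_b if i == k else 0), delta_b)
      for k in range(n)] for i in range(n)]

# orthogonality: the columns are unit and mutually perpendicular
for k in range(n):
    for l in range(n):
        s = sum(a[i][k] * a[i][l] for i in range(n))
        assert s == (1 if k == l else 0)
```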

For the second order we may take

$\Delta_{b} = \begin{vmatrix} 1, & \lambda \\ -\lambda, & 1 \end{vmatrix} = 1 + \lambda^2$,

and the adjoint determinant is the same; hence

$(1 + \lambda^2)x_{1} = (1-\lambda^2)\text{X}_{1} + 2 \lambda \text{X}_{2},$

$(1 + \lambda^2)x_{2} = -2 \lambda \text{X}_{1} + (1 - \lambda^2)\text{X}_{2}.$

Similarly, for the order 3, we take

$\Delta_{b} = \begin{vmatrix} 1 & \nu & -\mu \\ -\nu & 1 & \lambda \\ \mu & -\lambda & 1 \end{vmatrix} = 1 + \lambda^2 + \mu^2 + \nu^2,$

and the adjoint is

$\begin{vmatrix} 1 + \lambda^2 & \nu + \lambda \mu & -\mu + \lambda \nu \\ -\nu + \lambda \mu & 1 + \mu^2 & \lambda + \mu \nu \\ \mu + \lambda \nu & -\lambda + \mu \nu & 1 + \nu^2 \end{vmatrix}$,

leading to the orthogonal substitution

$(1+\lambda^2+\mu^2+\nu^2)x_{1} = (1+\lambda^2-\mu^2-\nu^2)\text{X}_{1} + 2(\nu+\lambda\mu)\text{X}_{2} + 2(-\mu+\lambda\nu)\text{X}_{3},$

$(1+\lambda^2+\mu^2+\nu^2)x_{2} = 2(-\nu+\lambda\mu)\text{X}_{1} + (1-\lambda^2+\mu^2-\nu^2)\text{X}_{2} + 2(\lambda+\mu\nu)\text{X}_{3},$

$(1+\lambda^2+\mu^2+\nu^2)x_{3} = 2(\mu+\lambda\nu)\text{X}_{1} + 2(-\lambda+\mu\nu)\text{X}_{2} + (1-\lambda^2-\mu^2+\nu^2)\text{X}_{3}.$
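For the order-3 case the nine coefficients, read off from the adjoint above and divided by $$1 + \lambda^2 + \mu^2 + \nu^2$$, give an exactly orthogonal array for any values of the parameters; the values λ = 1, μ = 2, ν = 3 below are illustrative:

```python
from fractions import Fraction

# parameters of the skew determinant (arbitrary illustrative values)
lam, mu, nu = 1, 2, 3
d = 1 + lam**2 + mu**2 + nu**2

# numerators of the coefficients a_ik: 2B_ik, less d on the diagonal
m = [[1 + lam**2 - mu**2 - nu**2, 2 * (nu + lam * mu),        2 * (-mu + lam * nu)],
     [2 * (-nu + lam * mu),       1 - lam**2 + mu**2 - nu**2, 2 * (lam + mu * nu)],
     [2 * (mu + lam * nu),        2 * (-lam + mu * nu),       1 - lam**2 - mu**2 + nu**2]]
a = [[Fraction(x, d) for x in row] for row in m]

# orthogonality: columns are unit and mutually perpendicular
for k in range(3):
    for l in range(3):
        assert sum(a[i][k] * a[i][l] for i in range(3)) == (1 if k == l else 0)
```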

Functional determinants were first investigated by Jacobi in a work De Determinantibus Functionalibus. Suppose $$n$$ dependent variables $$y_{1}, y_{2}, \dots y_{n}$$, each of which is a function of $$n$$ independent variables $$x_{1}, x_{2}, \dots x_{n}$$, so that $$y_{s} = f_{s}(x_{1}, x_{2}, \dots x_{n})$$. From the differential coefficients of the y’s with regard to the x’s we form the functional determinant
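For a concrete instance (not from the text): taking $$n = 2$$ with $$y_{1} = x_{1}\cos x_{2}$$, $$y_{2} = x_{1}\sin x_{2}$$, the passage from polar to rectangular coordinates, the functional determinant is $$x_{1}$$, which the following sketch confirms by central differences:

```python
import math

def jacobian_det_2(f1, f2, x1, x2, h=1e-6):
    """Functional determinant of two functions of two variables, by central differences."""
    d = lambda f, i: ((f(x1 + h, x2) - f(x1 - h, x2)) / (2 * h) if i == 0
                      else (f(x1, x2 + h) - f(x1, x2 - h)) / (2 * h))
    return d(f1, 0) * d(f2, 1) - d(f1, 1) * d(f2, 0)

f1 = lambda x1, x2: x1 * math.cos(x2)
f2 = lambda x1, x2: x1 * math.sin(x2)

j = jacobian_det_2(f1, f2, 2.0, 0.7)
assert abs(j - 2.0) < 1e-6  # the functional determinant equals x1
```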