Translation: ''Demonstratio nova theorematis omnem functionem algebraicam rationalem integram unius variabilis in factores reales primi vel secundi gradus resolvi posse'' (New proof of the theorem that every rational integral algebraic function of one variable can be resolved into real factors of the first or second degree)

1.
Any given algebraic equation can be reduced to the form

$$x^m + Ax^{m-1} + Bx^{m-2} + \text{etc.} + M = 0$$

such that $m$ is a positive integer. If we denote the first part of this equation by $X,$ and assume that the equation $X = 0$  is satisfied by multiple unequal values of $x,$  say by setting $x=\alpha,$  $x =\beta,$  $x = \gamma,$  etc., then the function $X$  will be divisible by the product of the factors $x-\alpha,$  $x-\beta,$  $x-\gamma,$  etc. Conversely, if the product of several simple factors $x-\alpha,$  $x-\beta,$  $x-\gamma,$  etc. divides the function $X,$  the equation $X = 0$  will be satisfied by setting $x$  equal to any of the quantities $\alpha, \beta, \gamma,$  etc. Finally, if $X$  is equal to the product of $m$  such simple factors (whether they are all different or some of them are identical), then no other simple factors besides these can divide the function $X.$  Therefore, an equation of degree $m$  cannot have more roots than $m,$  but it is evident that an equation of degree $m$  can have fewer roots, even if $X$  is resolvable into $m$  simple factors: if some of these factors are identical, then the number of different ways the equation can be satisfied will necessarily be less than $m.$  Nevertheless, for the sake of elegance, geometers prefer to say that the equation has $m$  roots even in this case, but that some of them are equal to each other: a liberty they could certainly take.
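The relationship between simple factors and roots described above can be illustrated numerically; the following sketch (my own, not part of the text) expands the product of the factors $x-1,$ $x-2,$ $x-2$ and confirms that the resulting cubic vanishes exactly at its two distinct roots, so that this degree-3 equation can be satisfied in only two different ways.

```python
def poly_from_roots(roots):
    """Expand the product of simple factors (x - r) into the descending
    coefficient list [1, A, B, ..., M] of x^m + A x^(m-1) + ... + M."""
    coeffs = [1]
    for r in roots:
        shifted = coeffs + [0]                  # x * P(x)
        scaled = [0] + [r * c for c in coeffs]  # r * P(x)
        coeffs = [s - t for s, t in zip(shifted, scaled)]
    return coeffs

def evaluate(coeffs, x):
    """Horner evaluation of a descending-coefficient polynomial."""
    value = 0
    for c in coeffs:
        value = value * x + c
    return value

X = poly_from_roots([1, 2, 2])      # (x-1)(x-2)^2, with a repeated factor
assert X == [1, -5, 8, -4]          # x^3 - 5x^2 + 8x - 4
assert evaluate(X, 1) == 0 and evaluate(X, 2) == 0
assert evaluate(X, 3) != 0          # no root beyond the simple factors
```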

2.
The things explained so far are sufficiently demonstrated in algebraic works and nowhere violate geometric rigor. However, analysts seem to have adopted too hastily, and without solid proof, the theorem on which almost the entire theory of equations is built: that a function such as $X$ can always be resolved into $m$ simple factors, or, what comes to entirely the same thing, that an equation of degree $m$ does indeed have $m$ roots. Since already in quadratic equations cases are encountered that contradict this theorem, algebraists were forced to introduce a certain imaginary quantity whose square is $-1,$ and they then recognized that if quantities of the form $a+b\surd{-1}$ are treated like real ones, the theorem holds not only for quadratic but also for cubic and biquadratic equations. From this, however, it is by no means permissible to infer that by admitting quantities of the form $a+b\surd{-1}$ any equation of the fifth or higher degree can be satisfied, or, as it is commonly expressed (although I would prefer a less slippery phrase), that the roots of any such equation can be reduced to the form $a+b\surd{-1}.$ This theorem does not differ in substance from the one stated in the title of this paper, and the aim of this dissertation is to provide a new rigorous demonstration of it.

Moreover, since analysts discovered that there are infinitely many equations that have no roots at all unless quantities of the form $a+b\surd{-1}$ are admitted, these fictitious quantities, considered as a special kind of quantity and called imaginary to distinguish them from real ones, have been introduced into the entire analysis; on what grounds? I do not dispute this point here. I will complete my demonstration without any aid from imaginary quantities, although I would be allowed the same freedom that all recent analysts have used.

3.
Although what is presented in most elementary books as the proof of our theorem is so trivial and deviates so much from geometric rigor that it is scarcely worth mentioning, I will touch on it briefly so that nothing seems to be lacking. To demonstrate that the equation

$$x^m + Ax^{m-1} + Bx^{m-2} + \text{etc.} + M = 0$$

or $X = 0$ indeed has $m$  roots, they attempt to prove that $X$  can be resolved into $m$  simple factors. To achieve this, they assume $m$ simple factors $x-\alpha,$  $x-\beta,$  $x-\gamma,$  etc., where $\alpha,$  $\beta,$  $\gamma,$  etc. are still unknown, and set their product equal to the function $X.$  Then, by comparing coefficients, they deduce $m$  equations, from which they claim the unknowns $\alpha,$  $\beta,$  $\gamma,$  etc. can be determined, since the number of these equations is also $m.$  In particular, $m-1$  unknowns can be eliminated, leaving an equation that contains only the unknown $\alpha.$  Leaving aside other criticisms that could be made of this argumentation, let us simply ask: how can we be certain that this final equation has any root at all? Why could it not happen that neither the final equation nor the proposed one is satisfied by any value of the unknown in the entire range of real and imaginary quantities? Experts will, moreover, easily see that, if the calculation is properly carried out, the final equation must necessarily be entirely identical to the proposed one; namely, after eliminating the unknowns $\beta,$ $\gamma,$  etc., the equation

$$\alpha^m + A\alpha^{m-1} + B\alpha^{m-2} + \text{etc.} + M = 0$$

should emerge. There is no need to elaborate further on this reasoning.
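The circularity just described is transparent in the smallest case $m = 2$ (a worked illustration of the point, not in the original). Setting

$$(x-\alpha)(x-\beta) = x^2 + Ax + B$$

gives, by comparison of coefficients, $\alpha + \beta = -A$ and $\alpha\beta = B.$ Eliminating $\beta$ by substituting $\beta = -A-\alpha$ into the second relation yields $\alpha(-A-\alpha) = B,$ that is,

$$\alpha^2 + A\alpha + B = 0,$$

which is precisely the proposed equation with $\alpha$ written in place of $x;$ nothing has been gained toward showing that a root exists.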

Some authors, who seem to have perceived the weakness of this method, take it as an axiom that any equation indeed has roots, whether possible or impossible. However, they do not seem to have clearly explained what is meant by possible and impossible quantities. If possible quantities are meant to denote the same as real and impossible as imaginary, this axiom cannot be admitted without proper demonstration, and instead requires proof. Nevertheless, the terms do not seem to be intended in that sense, but rather the meaning of the axiom appears to be: ‘Although we are not yet certain that there necessarily exist $m$ real or imaginary quantities satisfying a given equation of degree $m,$  we will assume it for a while; for if by chance it should happen that so many real and imaginary quantities cannot be found, then at least we have an escape, and we can say that the remaining ones are impossible.’ If someone prefers to use this phrase rather than simply saying that the equation in this case will not have so many roots, I have no objection; but if, at that point, they treat these impossible roots as if they were something true, and, for example, say that the sum of all the roots of the equation $x^m + A x^{m-1} + \text{etc.} = 0$  is $= -A,$  even if some of them are impossible (which expression explicitly means even if some are missing), then I cannot approve of it. For impossible roots, accepted in this sense, are still roots, and then that axiom cannot be admitted without some kind of demonstration, as it is not unreasonable to ask whether equations can exist that do not even have impossible roots. 
I always understand the term imaginary quantity here to refer to a quantity in the form $a+b\surd{-1}$ as long as $b$  is not equal to $0.$  In this sense, this expression has always been accepted by all geometers of the first order, and I consider those who wanted to call the quantity $a+b\surd{-1}$  imaginary only in the case where $a = 0,$  and impossible only when $a \neq 0,$  not worth listening to, as this distinction is neither necessary nor of any utility. If imaginary quantities are to be retained in analysis altogether (which seems more advisable than abolishing them, provided they are solidly established), then they must necessarily be regarded as equally possible as real quantities; hence, I would prefer to encompass real and imaginary quantities under the common designation of possible quantities: conversely, I would call a quantity impossible if it should satisfy conditions that cannot be satisfied even by admitting imaginary ones, yet in a way that this phrase means the same as saying that such a quantity does not exist in the entire range of magnitudes. From this standpoint, I would not concede the formation of a peculiar class of quantities. If someone says that an equilateral right-angled triangle is impossible, no one will deny it. But if he wants to consider such an impossible triangle as a new kind of triangles and apply properties of other triangles to it, would anyone take it seriously? This would be playing with words or rather abusing them. 
Although even eminent mathematicians have often applied truths that manifestly presuppose the possibility of certain quantities to cases where the possibility was still doubtful; and while I do not deny that such licenses usually pertain only to the form and semblance of reasoning, which the keen edge of true geometry can soon penetrate: yet it seems more advisable and more worthy of the sublimity of a science celebrated as the most perfect example of clarity and certainty, either to entirely prohibit such liberties, or at least to use them sparingly and only where those less practiced might find it difficult to perceive the matter without their aid, and where it could still be handled as rigorously, if perhaps less briefly. However, I do not deny that what I have said here against the abuse of impossibilities can be applied in some respects against imaginaries as well: yet I reserve the vindication of these and the fuller exposition of this whole matter for another occasion.

4.
Before I review the demonstrations of our theorem by other geometers and point out what seems to me to be objectionable in each, I make the following observation: it is sufficient to show only that for any equation of any degree

$$x^m + Ax^{m-1} + Bx^{m-2} + \text{etc.} + M = 0$$

or $X = 0$ (where the coefficients $A, B$  etc. are assumed to be real), it can be satisfied in at least one way by a value of $x$  of the form $a+b\surd{-1}.$  For it is evident that $X$  will then be divisible by the real quadratic factor $xx - 2ax + aa + bb$  if $b$  is not $=0,$  and by the real simple factor $x-a$  if $b = 0.$  In both cases the factor is real and of lower degree than $X;$  and since the quotient, by the same reasoning, must again have a real factor of the first or second degree, it is clear that by continuing this operation the function $X$  will at last be resolved into real simple or second-degree factors, or, if one prefers to use two imaginary simple factors in place of each real second-degree factor, into $m$  simple factors.
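The divisibility claim can be checked with a short computation. The sketch below (my own illustration; the cubic $x^3 - 5x^2 + 11x - 15$ and its root $1 + 2\surd{-1}$ are chosen merely as an example) divides a real polynomial by the real quadratic factor $xx - 2ax + aa + bb$ belonging to a root $a+b\surd{-1}$ and verifies that the remainder vanishes and the quotient is real and of lower degree.

```python
def divide_by_quadratic(coeffs, p, q):
    """Divide a monic polynomial (descending coefficients) by x^2 + p x + q;
    return (quotient coefficients, [linear remainder, constant remainder])."""
    quot = []
    rem = list(coeffs)
    for i in range(len(coeffs) - 2):
        c = rem[i]
        quot.append(c)
        rem[i + 1] -= p * c
        rem[i + 2] -= q * c
    return quot, rem[-2:]

# root a + b*sqrt(-1) = 1 + 2i gives the real factor x^2 - 2x + 5
a, b = 1, 2
p, q = -2 * a, a * a + b * b
quotient, remainder = divide_by_quadratic([1, -5, 11, -15], p, q)
assert remainder == [0, 0]          # the quadratic factor divides exactly
assert quotient == [1, -3]          # real quotient x - 3, of lower degree
```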

5.
The first proof of the theorem is owed to the illustrious geometer d'Alembert and can be found in ''Recherches sur le calcul intégral, Histoire de l’Acad. de Berlin, Année'' 1746, p. 182 ff. The same proof is repeated in the ''Traité du calcul intégral,'' Paris 1754, p. 47 ff. The main points of his method are as follows.

First, he shows that if any function $X$ of the variable $x$  becomes $= 0$  either for $x = 0$  or for $x = \infty,$  and if it can acquire an infinitely small positive real value by assigning a real value to $x,$  then this function can also obtain an infinitely small negative real value, either from a real value of $x$  or from an imaginary value of the form $p+q\surd{-1}.$  Namely, denoting by $\Omega$  the infinitely small value of $X$  and by $\omega$  the corresponding value of $x,$  he asserts that $\omega$  can be expressed by a rapidly convergent series $a\Omega^{\alpha} + b \Omega^{\beta} + c \Omega^{\gamma}$  etc., where the exponents $\alpha,$  $\beta,$  $\gamma$  etc. are continually increasing rational quantities, positive at least from some point onward, so that the terms in which they occur are infinitely small. Now, if among all these exponents there is none that is a fraction with an even denominator, all the terms of the series are real for both positive and negative values of $\Omega;$ but if fractions with even denominators occur among these exponents, then, for negative values of $\Omega,$  the corresponding terms take the form $p+q\surd{-1}.$  Owing to the convergence of the series, it suffices in the former case to retain only the first (i.e., the largest) term; in the latter case, it is unnecessary to go beyond the term that first introduces an imaginary part.

Through similar reasoning, it can be shown that if $X$ can obtain a negative infinitely small value from a real value of $x,$  then that function can acquire a positive real infinitely small value from a real value of $x$  or from an imaginary value in the form $p+q\surd{-1}.$

Hence he further concludes that a real finite value of $X$ can also be found, in the former case negative and in the latter case positive, which can be produced from an imaginary value of $x$  of the form $p+q\surd{-1}.$

From this, it follows that if $X$ is a function of $x$  which obtains a real value $V$  from a real value $v$  of $x,$  and which also obtains, from a real value of $x,$  a real value differing infinitely little from $V,$  whether greater or smaller, then it can also receive a real value differing infinitely little from $V$  on the other side, smaller or greater than $V$  respectively, by assigning to $x$  a value of the form $p+q\surd{-1}.$  This is easily derived from the above if $X$  is conceived to be replaced by $V + Y,$  and $x$  by $v+y.$

Finally, d'Alembert asserts that $X$  can traverse any interval between two real values $R,$  $S$  (i.e., become equal to $R,$  to $S,$  and to all intermediate real values) if values of the form $p+q\surd{-1}$  are assigned to $x;$  that is, the function $X$  can increase or decrease by any finite real quantity (according as $S>R$  or $S<R$ ). For if $X$  could attain the value $R$  but not some value $U$  lying between $R$  and $S,$  there would necessarily be a last value $T$  (a maximum when $S>R;$  a minimum when $S<R$ ) which $X$  could attain, obtained from some value $p+q\surd{-1}$  of $x,$  in such a way that no value of $x$  of similar form could be assigned that would bring the function $X$  nearer to $U$  by even the smallest excess. Now, if in the equation between $X$ and $x,$  $p+q\surd{-1}$  is substituted everywhere for $x,$  and the real part and the part involving the factor $\surd{-1}$  are then separately equated, two equations result (in which $p,$  $q,$  and $X$  occur mixed with constants); from these, by elimination, two others can be derived, one containing only $p,$  $X,$  and constants, the other only $q,$  $X,$  and constants. Therefore, since $X$ has already traversed all values from $R$  to $T$  through real values of $p, q,$  it follows from the above that $X$  can approach still nearer to $U$  if values of the form $\alpha + \gamma \surd{-1},$  $\beta + \delta\surd{-1}$  are assigned to $p,$  $q$  respectively. But then $x = \alpha - \delta + (\gamma + \beta)\surd{-1},$ i.e., it is still of the form $p+q\surd{-1},$  contrary to the hypothesis.

Now, if $X$ is supposed to represent a function such as $x^m + Ax^{m-1} + Bx^{m-2} + \text{etc.} + M,$  it is clear that there is no problem, and such real values can be assigned to $x$  so that $X$  traverses any interval between two real values. Therefore, $x$ can also obtain some value in the form $p+q\surd{-1},$  from which $X$  becomes $=0.$  Q.E.D.

6.
The objections that can be raised against d'Alembert's demonstration mostly come down to the following.

1. D'Alembert raises no doubt about the existence of values of $x$  to which the given values of $X$  correspond, but assumes it and only investigates the form of these values.

Although this objection is in itself very serious, it pertains here only to the form of expression, which can easily be corrected to completely invalidate it.

2. The assertion that $\omega$ can always be expressed by such a series as he posits is certainly false if $X$  is allowed to be any transcendental function (as d'Alembert hints at in several places). This is evident, for example, if $X = e^{\frac{1}{x}}$ or $X = \frac{1}{\log x}.$  However, if we restrict the demonstration to the case where $X$  is an algebraic function of $x$  (which is sufficient for the present matter), the proposition is certainly true. Nevertheless, d'Alembert provided no evidence to support his assumption; the illustrious de Foncenex, who assumes that $X$  is an algebraic function of $x,$  recommends the use of Newton's parallelogram for finding the series.

3. He uses infinitely small quantities more freely than can be justified with geometric rigor, or at least than a careful analyst of our age (where they rightly face skepticism) would grant. Nor did he explain sufficiently clearly the leap from the value of $\Omega$ being infinitely small to its being finite. His conclusion that $\Omega$ can obtain a finite value seems to be derived not so much from the possibility of an infinitely small value of $\Omega$  as from the fact that, for $\Omega$  a very small quantity, owing to the great convergence of the series, the more terms of the series are taken, the more closely the true value of $\omega$  is approached, i.e., the more accurately the equation expressing the relation between $\omega$  and $\Omega,$  or $x$  and $X,$  is satisfied. Furthermore, the entire argument seems too vague for any rigorous conclusion to be drawn from it: it should be noted that there are series which, however small a value is assigned to the quantity in whose powers they progress, always diverge, so that, continued far enough, they reach terms greater than any given quantity. This happens when the coefficients of the series constitute a hypergeometric progression. It should therefore necessarily have been demonstrated that such a hypergeometric series cannot arise in the present case.

However, it seems to me that d'Alembert was not right to resort to infinite series here, and that they are not suitable for establishing this fundamental theorem of the theory of equations.

4. From the assumption that $X$ can attain the value $S$  but not the value $U,$  it does not necessarily follow that there must be a value $T$  between $S$  and $U$  which $X$  can reach but not exceed. Another case remains: there could be a limit between $S$ and $U$  which $X$  can approach as closely as desired but never actually reach. From the arguments provided by d'Alembert, it only follows that $X$ can always surpass any value it has reached by a finite quantity; for example, when it has become $= S,$  it can still increase by some finite quantity $\Omega,$  after which a new increment $\Omega'$  may occur, then another increase $\Omega'',$  etc., so that no increment need be considered the last, but a new one can always be added. Although the multitude of possible increments is not limited by any bound, it could still happen that, if the increments $\Omega,$ $\Omega',$  $\Omega'',$  etc. continually decrease, the sum $S + \Omega + \Omega' + \Omega'' +$  etc. nevertheless never reaches a certain limit, no matter how many terms are taken.
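This objection is exactly the phenomenon of a convergent series of increments. A minimal numerical sketch (my own, with $S = 0$ and increments $1/2^{i}$ assumed purely for illustration): every step is a genuine finite increase, infinitely many further increments remain available, and yet no partial sum ever reaches the limit $S + 1.$

```python
from fractions import Fraction

S = Fraction(0)
total = S
# continually decreasing finite increments 1/2, 1/4, 1/8, ...
increments = [Fraction(1, 2 ** i) for i in range(1, 40)]
for w in increments:
    total += w              # each step is a genuine finite increase
    assert total < S + 1    # yet every partial sum stays below the limit
```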

While this case cannot occur when $X$ represents a complete (integral) algebraic function of $x,$  this impossibility must be demonstrated, and as long as the demonstration is omitted, the method must be considered flawed. When $X$ is a transcendental function, or even a fractional algebraic function, the case can indeed occur, for example whenever a certain value of $X$  corresponds only to an infinitely large value of $x.$  Then d'Alembert's method seems beset with many difficulties, and in some cases perhaps impossible to reduce to unquestionable principles.

For these reasons, I cannot consider d'Alembert's demonstration satisfactory. Nevertheless, the true essence of the demonstration seems to me not to be compromised by all these objections, and I believe that on the same foundation (though with a vastly different rationale and greater circumspection) not only a rigorous demonstration of our theorem could be built, but also everything that can be desired concerning the theory of transcendental equations. I will discuss this matter more extensively on another occasion; see meanwhile below, article 24.

7.
After d'Alembert, Euler published his investigations on the same subject in ''Recherches sur les racines imaginaires des équations, Hist. de l’Acad. de Berlin A.'' 1749, p. 223 sqq. He presented two methods; the essence of the first is summarized in the following.

First, Euler aims to demonstrate that if $m$  is any power of $2,$  then the function $x^{2m} + Bx^{2m-2} + C x^{2m-3} + \text{etc.} + M = X$  (where the coefficient of the second term is $= 0$ ) can always be resolved into two real factors, in which $x$  rises to no more than $m$  dimensions. To achieve this, he considers two factors:

$$x^m - u x^{m-1} + \alpha x^{m-2} + \beta x^{m-3} + \text{etc., and } x^m + u x^{m-1}+\lambda x^{m-2} + \mu x^{m-3} + \text{etc.}$$

where the coefficients $u,$ $\alpha,$  $\beta,$  etc., $\lambda,$  $\mu,$  etc. are still unknown. Their product is set equal to the function $X.$ The comparison of coefficients yields $2m-1$  equations, and it remains only to show that real values can be assigned to the unknowns $u,$  $\alpha,$  $\beta,$  etc., $\lambda,$  $\mu,$  etc. (whose number is also $2m -1$ ) satisfying these equations. Euler asserts that if $u$ is at first regarded as known, so that the number of unknowns is one less than the number of equations, then, by suitably combining the equations by algebraic methods, all of $\alpha,$  $\beta,$  etc., $\lambda,$  $\mu,$  etc. can be determined rationally, without any extraction of roots, in terms of $u$  and the known coefficients $B, C,$  etc. Furthermore, all of $\alpha,$  $\beta,$  etc., $\lambda,$  $\mu,$  etc. can be eliminated, resulting in an equation $U = 0,$  where $U$  is an integral function of $u$  and the known coefficients. It suffices here to know one property of this equation, namely, that the last term in $U$ (the one not involving the unknown $u$ ) must be negative; from this it follows that the equation has at least one real root, so that $u,$ and consequently $\alpha,$  $\beta,$  etc., $\lambda,$  $\mu,$  etc., can be determined in at least one real way. This property can be confirmed through the following reflections: When $x^m - u x^{m-1}+\alpha x^{m-2} +$ etc. is assumed to be a factor of the function $X,$  $u$  will necessarily be the sum of $m$  roots of the equation $X = 0,$  and thus has as many values as there are ways to choose $m$  out of $2m$  roots, which by the combinatorial calculus is $\frac{2m(2m-1)(2m-2)\cdots(m+1)}{1\cdot 2\cdot 3\cdots m}.$ This number is always oddly even, i.e., twice an odd number (the not difficult demonstration of which is omitted here): if it is set $= 2k,$  then $k$  is odd, and the equation $U = 0$  is of degree $2k.$
Now, since the second term is missing in the equation $X = 0,$ the sum of all $2m$  roots is $0.$  It is clear that if the sum of any $m$  roots is $+p,$  the sum of the remaining roots is $-p;$  i.e., if $+p$  is among the values of $u,$  then $-p$  is also among them. Hence, Euler concludes that $U$  is the product of $k$  double factors of the form $uu - pp,$  $uu - qq,$  $uu - rr,$  etc., where $+p,$  $-p,$  $+q,$  $-q,$  etc. represent all $2k$  roots of the equation $U = 0.$  Therefore, since the number of these factors is odd, the last term in $U$  will be the square of the product $pqr$  etc., taken with a negative sign. Moreover, the product $pqr$  etc. can always be rationally determined from the coefficients $B, C,$ etc., and is consequently a real quantity; hence its square, taken with a negative sign, is certainly a negative quantity. Q.E.D.
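Euler's counting claims can be checked numerically in the smallest case $m = 2.$ The quartic used below is my own illustrative choice: its roots $1, 2, -3, 0$ sum to zero, so it has the form $x^4 + Bxx + Cx + D$ (in fact $x^4 - 7xx + 6x,$ so $C = 6$). The six values of $u$ are the pairwise sums of roots; they occur in pairs $+p,$ $-p,$ and the last term of $U$ comes out negative, equal to $-CC.$

```python
from itertools import combinations
from math import prod

roots = [1, 2, -3, 0]               # sum is 0: the second term of X vanishes
C = 6                               # coefficient of x in x^4 - 7x^2 + 6x

u_values = [sum(pair) for pair in combinations(roots, 2)]
assert len(u_values) == 6           # binom(4, 2) = 6 = 2k with k = 3 odd
for u in u_values:
    assert -u in u_values           # the values occur in pairs +p, -p

# constant term of U = product of (u - u_i) evaluated at u = 0
last_term = prod(-u for u in u_values)
assert last_term == -C * C          # negative, as Euler's argument requires
```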

Since these two real factors of $X$ are of degree $m,$  and $m$  is a power of the number $2,$  each factor can again be resolved into two real factors of dimension $\tfrac{1}{2}m,$  by the same reasoning. However, as through repeated halving of the number $m,$ one necessarily eventually reaches two, it is evident that by the continuation of this operation, the function $X$  will ultimately be resolved into real second-degree factors.

If, on the other hand, a function is presented in which the second term is not absent, say $x^{2m} + A x^{2m-1} + B x^{2m-2} + \text{etc.} + M,$ where $2m$  again denotes a power of two, it will, by the substitution $x = y - \frac{A}{2m},$  be transformed into a similar function lacking the second term. Hence it is easily concluded that such a function, too, is resolvable into real second-degree factors.
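The depression of the second term is a mechanical substitution; the following sketch (my own, using the illustrative quartic $x^4 + 8x^3$ with $A = 8,$ $2m = 4$) applies $x = y - \frac{A}{2m}$ by binomial expansion and confirms that the second coefficient vanishes.

```python
from fractions import Fraction
from math import comb

def shift(coeffs, h):
    """Return descending coefficients of P(y + h) given those of P(x)."""
    n = len(coeffs) - 1
    out = [Fraction(0)] * (n + 1)
    for i, c in enumerate(coeffs):      # c is the coefficient of x^(n-i)
        d = n - i
        for k in range(d + 1):          # expand c * (y + h)^d
            out[n - k] += c * comb(d, k) * h ** (d - k)
    return out

X = [1, 8, 0, 0, 0]                     # x^4 + 8x^3
depressed = shift(X, Fraction(-8, 4))   # substitute x = y - A/(2m) = y - 2
assert depressed[1] == 0                # the second term has vanished
assert depressed == [1, 0, -24, 64, -48]
```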

Finally, if a function of degree $n$ is given, where $n$  is not a power of two: let the nearest greater power of two be $= 2m,$  and multiply the given function by $2m - n$  arbitrary real simple factors. From the resolvability of the product into real second-degree factors it readily follows that the given function must also be resolvable into real factors of the second or first degree.

8.
Against this demonstration, one can object:

1. The rule by which Euler concludes that from the $2m-1$  equations the $2m-2$  unknowns $\alpha,$  $\beta$  etc., $\lambda,$  $\mu$  etc. can all be rationally determined is not general, but often admits exceptions. For example, if in the method of Art. 3 one attempts to express the remaining unknowns and coefficients rationally by treating some unknowns as known, one easily finds that this is impossible, and that the unknown quantities can only be determined through an equation of degree $m-1.$ Although it can be seen a priori that this must necessarily happen there, one could rightly doubt whether, even in the present case, for certain values of $m,$  the unknowns $\alpha,$  $\beta$  etc., $\lambda,$  $\mu$  etc. might likewise be determinable only through an equation of higher degree. For the case where the equation $X = 0$  is of the fourth degree, Euler does exhibit rational values of the coefficients in terms of $u$  and the given coefficients; the same can indeed be done in all higher-degree equations, but it certainly requires a more extensive explanation. However, it seems worthwhile to investigate more deeply and more generally those formulas that express $\alpha,$ $\beta$  etc. rationally through $u, B, C$  etc.; I will undertake a more detailed discussion of this and other matters related to the theory of elimination (an argument by no means exhausted) on another occasion.

2. However, even if it is demonstrated that for an equation $X = 0$ of any degree, formulas can always be found that express $\alpha,$  $\beta,$  etc., $\lambda,$  $\mu,$  etc. rationally through $u,$  $B,$  $C,$  etc., it is certain that for certain specific values of the coefficients $B,$  $C,$  etc., those formulas can become indeterminate, so that not only is it impossible to define those unknowns rationally from $u,$  $B,$  $C,$  etc., but in some cases no real values of $\alpha,$  $\beta,$  etc., $\lambda,$  $\mu,$  etc. correspond to a real value of $u.$  For confirmation of this, I refer the reader, for brevity, to Euler's dissertation itself, where on p. 236 the equation of the fourth degree is treated more fully. Everyone will immediately see that the formulas for the coefficients $\alpha,$ $\beta$  become indeterminate if $C = 0$  and the value $0$  is taken for $u;$  their values then cannot be assigned without extracting roots, and indeed not as real values at all if the quantity $BB-4D$  is negative. Although in this case it is easy to see that $u$ can still have other real values to which real values of $\alpha,$  $\beta$  correspond, one might fear that resolving this difficulty (which Euler did not touch at all) may require much more effort in higher-degree equations. Certainly this matter should by no means be passed over in silence in an exact demonstration.
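The indeterminacy can be exhibited explicitly in the biquadratic case (a worked illustration, not part of the original). Multiplying the two assumed factors of art. 7 for $2m = 4$ gives

$$(x^2 - ux + \alpha)(x^2 + ux + \lambda) = x^4 + (\alpha + \lambda - uu)\,x^2 + u(\alpha - \lambda)\,x + \alpha\lambda,$$

so comparison with $x^4 + Bxx + Cx + D$ yields $\alpha + \lambda = B + uu,$ $\alpha - \lambda = \frac{C}{u},$ $\alpha\lambda = D.$ The second formula is indeterminate for $u = 0;$ and when $C = 0,$ so that $u = 0$ is an admissible value, $\alpha$ and $\lambda$ can be found only from $\alpha + \lambda = B,$ $\alpha\lambda = D,$ i.e. as $\tfrac{1}{2}\left(B \pm \surd(BB - 4D)\right),$ which are not real when $BB - 4D$ is negative.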

3. The illustrious Euler tacitly assumes that the equation $X=0$  has $2m$  roots, and establishes that their sum is $= 0$  because the second term in $X$  is absent. My opinion on this assumption (which all authors use in this argument) was already declared in Art. 3 above. The proposition that the sum of all roots of an equation equals the coefficient of the second term with its sign reversed does not seem applicable to equations other than those which have roots; and since it is to be proved by this very demonstration that the equation $X=0$ really has roots, it does not seem permissible to assume their existence. Doubtless those who have not yet penetrated the fallacy of this paralogism will reply that what is to be demonstrated here is not that the equation $X=0$ can be satisfied (for that is what having roots means), but only that it can be satisfied by values of $x$  of the form $a + b \surd{-1};$  the former being taken as an axiom. But since forms of quantities other than the real and the imaginary $a + b \surd{-1}$ cannot be conceived, it is not clear enough how what is to be demonstrated differs from what is assumed as an axiom; indeed, even if it were possible to conceive of other forms of quantities, say $F,$  $F',$  $F'',$  etc., it could not be admitted without demonstration that the equation can be satisfied by some value of $x$  either real, or of the form $a + b \surd{-1},$  or of the form $F,$  or $F',$  etc. Therefore that axiom can have no other meaning than this: any equation can be satisfied either by a real value of the unknown, or by an imaginary value of the form $a + b \surd{-1},$  or perhaps by a value of some hitherto unknown form, or by a value not contained in any form whatsoever. But how such quantities, of which one cannot even form an idea (true shadows of shadows), can be added or multiplied, is not understood with the clarity demanded in mathematics.

Now, I do not intend by these objections to render the conclusions Euler drew from his assumption at all suspect; rather, I am confident that they can be confirmed by a method neither difficult nor very different from Euler's, in such a way that not even the slightest doubt should remain for anyone. I only criticize the form, which, although it can be of great utility in discovering new truths, seems not at all commendable in a public demonstration.

4. As for the assertion that the product $pqr$ etc. can be rationally determined from the coefficients in $X,$  the illustrious Euler has supplied no demonstration at all. All that he explains on this matter, in the case of equations of the fourth degree, is as follows (where $\mathfrak{a},$ $\mathfrak{b},$  $\mathfrak{c},$  $\mathfrak{d}$  are the roots of the proposed equation $x^4 + B xx + C x + D = 0$ ):

‘It will no doubt be objected that I have here supposed the quantity $pqr$ to be a real quantity, and its square $ppqqrr$  to be positive; which was still doubtful, since, the roots $\mathfrak{a},$  $\mathfrak{b},$  $\mathfrak{c},$  $\mathfrak{d}$  being imaginary, it might well happen that the square of the quantity $pqr,$  which is composed of them, should be negative. To this I reply that this case can never occur; for however imaginary the roots $\mathfrak{a},$ $\mathfrak{b},$  $\mathfrak{c},$  $\mathfrak{d}$  may be, it is nevertheless known that one must have $\mathfrak{a} + \mathfrak{b} + \mathfrak{c} + \mathfrak{d} = 0;$  $\mathfrak{a}\mathfrak{b} + \mathfrak{a}\mathfrak{c} + \mathfrak{a}\mathfrak{d} + \mathfrak{b}\mathfrak{c} + \mathfrak{b}\mathfrak{d} + \mathfrak{c}\mathfrak{d} = B;$  $\mathfrak{a}\mathfrak{b}\mathfrak{c}+\mathfrak{a}\mathfrak{b}\mathfrak{d}+\mathfrak{a}\mathfrak{c}\mathfrak{d}+\mathfrak{b}\mathfrak{c}\mathfrak{d} = -C$ ; $\mathfrak{a}\mathfrak{b}\mathfrak{c}\mathfrak{d}=D,$  these quantities $B,$  $C,$  $D$  being real. But since $p = \mathfrak{a} + \mathfrak{b},$ $q =\mathfrak{a} + \mathfrak{c},$  $r = \mathfrak{a} + \mathfrak{d},$  their product $pqr = (\mathfrak{a} + \mathfrak{b})(\mathfrak{a} + \mathfrak{c})(\mathfrak{a} + \mathfrak{d})$  is determinable, as is known, by the quantities $B,$  $C,$  $D,$  and will consequently be real, just as we have in fact seen that $pqr = -C,$  and $ppqqrr = CC.$  It will easily be recognized in the same way that in higher equations this same circumstance must hold, and that no objection can be made to me from this side.’ (Translated from Euler's French.)

However, Euler nowhere added that the product $pqr$  etc. can be rationally determined from $B,$  $C,$  etc., although he seems always to have understood this tacitly, since without it the demonstration has no force. It is indeed true for equations of the fourth degree that expanding the product $(\mathfrak{a} + \mathfrak{b})(\mathfrak{a} + \mathfrak{c})(\mathfrak{a} + \mathfrak{d})$ yields $\mathfrak{a}\mathfrak{a}(\mathfrak{a} + \mathfrak{b} + \mathfrak{c} + \mathfrak{d}) + \mathfrak{a}\mathfrak{b}\mathfrak{c}+\mathfrak{a}\mathfrak{b}\mathfrak{d}+\mathfrak{a}\mathfrak{c}\mathfrak{d}+\mathfrak{b}\mathfrak{c}\mathfrak{d}= - C.$  However, it is not clear enough how, in all higher-degree equations, the product can be rationally determined from the coefficients. The distinguished de Foncenex, who first observed this (Miscell. phil. math. soc. Taurin. Vol. I, p. 117), rightly contends that without a rigorous demonstration of this proposition the method loses all its force; he admits that it seemed quite difficult to him, and describes the fruitless attempts he made in that direction.
However, this gap can easily be filled by the following method (of which I can give only a summary here): Although it is not immediately clear in equations of the fourth degree that the product $(\mathfrak{a} + \mathfrak{b})(\mathfrak{a} + \mathfrak{c})(\mathfrak{a} + \mathfrak{d})$ can be rationally determined by the coefficients $B,$  $C,$  $D,$  it can easily be seen that the same product is also $= (\mathfrak{b} + \mathfrak{a})(\mathfrak{b} + \mathfrak{c})(\mathfrak{b} + \mathfrak{d}),$  as well as $= (\mathfrak{c} + \mathfrak{a})(\mathfrak{c} + \mathfrak{b})(\mathfrak{c} + \mathfrak{d}),$  and finally also $= (\mathfrak{d} + \mathfrak{a})(\mathfrak{d} + \mathfrak{b})(\mathfrak{d} + \mathfrak{c}).$  Therefore, the product $pqr$  will be one quarter of the sum $(\mathfrak{a} + \mathfrak{b})(\mathfrak{a} + \mathfrak{c})(\mathfrak{a} + \mathfrak{d})+(\mathfrak{b} + \mathfrak{a})(\mathfrak{b} + \mathfrak{c})(\mathfrak{b} + \mathfrak{d})+(\mathfrak{c} + \mathfrak{a})(\mathfrak{c} + \mathfrak{b})(\mathfrak{c} + \mathfrak{d})+(\mathfrak{d} + \mathfrak{a})(\mathfrak{d} + \mathfrak{b})(\mathfrak{d} + \mathfrak{c}),$  which, when expanded, can be foreseen a priori to be a rational integral function of the roots $\mathfrak{a},$  $\mathfrak{b},$  $\mathfrak{c},$  $\mathfrak{d}$  in which they all enter in the same way. Such functions can always be expressed rationally by the coefficients of the equation whose roots are $\mathfrak{a},$ $\mathfrak{b},$  $\mathfrak{c},$  $\mathfrak{d}.$  — The same is also evident if the product $pqr$  is brought into this form:

$$\tfrac{1}{2}\left( \mathfrak{a} + \mathfrak{b} - \mathfrak{c} - \mathfrak{d} \right) \times  \tfrac{1}{2}\left( \mathfrak{a} + \mathfrak{c} - \mathfrak{b} - \mathfrak{d}  \right)  \times  \tfrac{1}{2}\left( \mathfrak{a} + \mathfrak{d} - \mathfrak{b} - \mathfrak{c}  \right)$$

It can easily be foreseen that the expanded product of this expression involves all of $\mathfrak{a},$ $\mathfrak{b},$  $\mathfrak{c},$  $\mathfrak{d}$  in the same way. Experts will at the same time gather how this can be applied to higher-degree equations. I reserve for another occasion the complete exposition of the demonstration, which brevity does not permit me to include here, along with a more extensive discussion of functions of several variables.
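The symmetric-function argument above can be checked numerically. The following sketch (my own illustration, not part of the original text) builds a depressed quartic from three arbitrary roots, so that $\mathfrak{a}+\mathfrak{b}+\mathfrak{c}+\mathfrak{d}=0,$ and verifies both that $(\mathfrak{a}+\mathfrak{b})(\mathfrak{a}+\mathfrak{c})(\mathfrak{a}+\mathfrak{d}) = -C$ and that the same product equals one quarter of the symmetric sum over all four roots.

```python
# Numerical check (an illustration, not part of Gauss's text):
# for a quartic z^4 + B z^2 + C z + D whose root sum is zero,
# the product (a+b)(a+c)(a+d) equals -C, hence is real and
# rationally determined by the coefficients.
import random

random.seed(1)
a, b, c = (random.uniform(-2, 2) for _ in range(3))
d = -(a + b + c)                      # enforce a + b + c + d = 0

# the elementary symmetric functions give the (real) coefficients
C = -(a*b*c + a*b*d + a*c*d + b*c*d)  # coefficient of z

pqr = (a + b) * (a + c) * (a + d)
assert abs(pqr - (-C)) < 1e-9

# the same product written as Gauss's quarter-sum over all four roots
quarter_sum = sum(
    (x + y1) * (x + y2) * (x + y3)
    for x, (y1, y2, y3) in [(a, (b, c, d)), (b, (a, c, d)),
                            (c, (a, b, d)), (d, (a, b, c))]
) / 4
assert abs(quarter_sum - pqr) < 1e-9
```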

Now, I observe that, in addition to these four objections, there are still some other points in Euler's demonstration that could be criticized; these I pass over in silence, lest I seem an overly severe critic, especially since the foregoing seems sufficient to show that the demonstration, in the form in which Euler proposed it, cannot be considered complete.

After this demonstration, Euler presents another way to reduce the theorem, for equations whose degree is not a power of two, to the resolution of such equations. However, since this method teaches nothing for equations whose degree is a power of two and, moreover, is equally liable to all the aforementioned objections (except the fourth) as the first general demonstration, there is no need to dwell on it here.

9.
In the same paper, on page 263, the illustrious Euler endeavored to confirm our theorem further by another method, the essence of which is as follows: Given an equation $x^n + Ax^{n-1} + B x^{n-2} \text{ etc.} = 0,$  an analytic expression representing its roots explicitly has so far not been found for exponents $n>4;$  however, it seems certain (as Euler asserts) that such an expression can contain nothing but arithmetic operations and root extractions, increasingly complicated as $n$  grows. If this is conceded, he excellently demonstrates that, however the radical signs may be complicated among one another, the formulas can always be represented in the form $M+N\surd{-1},$  where $M,$ $N$  are real quantities.

Against this reasoning, one can object that, after so many great efforts by geometers, there remains little hope of ever reaching a general solution for algebraic equations. It becomes more and more likely that such a resolution is entirely impossible and contradictory. This should not seem too paradoxical, especially since what is commonly called the solution of an equation is properly nothing other than its reduction to pure equations. For the solution of pure equations is not taught but assumed, and if you express the root of the equation $x^m = H$ as $\sqrt[m]{H},$  you have not solved it, nor have you done more than if you were to invent some symbol to denote the root of the equation $x^n + Ax^{n-1}+ \text{ etc.} = 0$  and equate the root to it. It is true that pure equations, due to the ease of finding their roots by approximation and the elegant connection that all roots have with each other, excel above all others and are therefore not to be blamed for analysts denoting these roots by a specific symbol. However, it does not follow from this that the root of any equation can be expressed by these symbols. Or, in other words, it is assumed without sufficient reason that the solution of any equation can be reduced to the solution of pure equations. Perhaps it would not be so difficult to rigorously demonstrate the impossibility already for the fifth degree, about which I will present more extensive discussions elsewhere. Here, it suffices to note that the general solvability of equations, in the sense accepted here, is still highly doubtful, and therefore, the demonstration, whose entire validity depends on that assumption, currently carries no weight.

10.
Later, the distinguished de Foncenex, having noticed in Euler's first demonstration a deficiency (see objection 4 in article 8) that he could not rectify, attempted another approach, which he presented in the aforementioned commentary on page 120. It is as follows:

Suppose we have the equation $Z = 0,$ where $Z$ represents a function of degree $m$  in an unknown $z.$  If $m$  is an odd number, then it is clear that this equation has a real root. However, if $m$ is even, the distinguished de Foncenex attempts to prove in the following way that the equation has at least one root of the form $p + q\surd{-1}.$  Let $m = 2^n i,$  where $i$  is an odd number, and suppose that $zz + uz + M$  is a divisor of the function $Z.$  Then each value of $u$  will be the sum of two roots of the equation $Z = 0$  (with the sign changed). Therefore, $u$ will have $\frac{m (m-1)}{1 \cdot 2} = m'$  values, and if $u$  is assumed to be determined by the equation $U = 0$  (where $U$  is a function involving $u$  and the known coefficients in $Z$ ), this equation will be of degree $m'.$  It can easily be seen that $m'$  will be of the form $2^{n-1} i',$  where $i'$  is an odd number. Now, unless $m'$ is odd, assume again that $uu + u'u + M'$  is a divisor of $U.$  By similar reasoning, $u'$  will be determined by an equation $U' = 0,$  where $U'$  is a function of degree $\frac{m' (m'-1)}{1 \cdot 2}$  in $u'.$  Setting $\frac{m' (m'-1)}{1 \cdot 2} = m'',$  $m''$  will be of the form $2^{n-2}i'',$  where $i''$  is an odd number. Now, unless $m''$ is odd, assume again that $u'u'+u''u'+M''$  is a divisor of $U',$  and then $u''$  will be determined by an equation $U'' = 0$  of degree $m''',$  where $m'''$  is of the form $2^{n-3}i''',$  $i'''$  being an odd number. It is evident that in the series of equations $U=0,$ $U'=0,$  $U''=0,$  etc., an equation of odd degree must at last occur, which will therefore have a real root.
For brevity, let us assume $n = 3,$ so that the equation $U''=0$  is of odd degree and has a real root $u''.$  It can easily be understood that the same reasoning holds for any other value of $n.$  Then the coefficient $M''$  (which, like the coefficients in $U',$  can easily be seen to be an integral function of the coefficients in $Z$ ) is asserted by de Foncenex to be rationally determinable from $u''$  and the coefficients of $Z,$  and is therefore real. It follows that the roots of the equation $u'u'+u''u'+M''=0$ will be of the form $p + q\surd{-1}.$  They will also satisfy the equation $U' = 0,$  i.e., this equation will have roots of the form $p + q\surd{-1}.$  By similar reasoning it follows that $M'$  will be of the same form, and hence so will the roots of $uu+u'u+M'=0,$  which also satisfy $U=0;$  finally, even $M$  will be of this form, and consequently the root of the equation $zz+uz+M=0,$  which also satisfies the given equation $Z=0.$  Hence, any equation will have at least one root of the form $p+q\surd{-1}.$
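The degree bookkeeping in this chain can be sketched in a few lines (the helper function and sample degree are my own illustration): if $m = 2^n i$ with $i$ odd, then $\frac{m(m-1)}{2} = 2^{n-1} i'$ with $i'$ odd, so each step strips exactly one factor of $2$ from the degree until an odd degree appears.

```python
# Sketch of de Foncenex's degree chain (helper names are mine):
# m = 2^n * i with i odd gives m(m-1)/2 = 2^(n-1) * i' with i' odd,
# since m - 1 is odd; so the exponent of 2 drops by one at each step.
def odd_part(k):
    """Return (n, i) with k = 2**n * i and i odd."""
    n = 0
    while k % 2 == 0:
        k //= 2
        n += 1
    return n, k

m = 48                                   # 48 = 2^4 * 3, an arbitrary example
chain = [m]
while chain[-1] % 2 == 0:
    d = chain[-1]
    chain.append(d * (d - 1) // 2)       # degree of the next equation U, U', ...

# the exponent of 2 decreases by one per step until an odd degree occurs
exponents = [odd_part(d)[0] for d in chain]
assert exponents == list(range(exponents[0], -1, -1))
```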

11.
Objections 1, 2, 3, which I made against Euler's first demonstration (art. 8), have the same force against this method. There is, however, a difference: the second objection, to which Euler's demonstration was liable only in certain special cases, here applies to all cases. Specifically, it can be demonstrated a priori that even if a formula is given expressing the coefficient $M'$ rationally in terms of $u'$  and the coefficients in $Z,$  it must necessarily become indeterminate for certain values of $u';$  likewise, a formula expressing the coefficient $M''$  in terms of $u''$  must become indeterminate for certain values of $u'',$  and so on. This will be most clearly understood from the example of a quartic equation. Let us assume, therefore, that $m = 4,$ and let the roots of the equation $Z = 0$  be $\alpha,$  $\beta,$  $\gamma,$  $\delta.$  Then it is clear that the equation $U = 0$  will be of the sixth degree, and its roots will be $-(\alpha+\beta),$  $-(\alpha+\gamma),$  $-(\alpha+\delta),$  $-(\beta+\gamma),$  $-(\beta+\delta),$  $-(\gamma+\delta).$  The equation $U'=0$  will be of the fifteenth degree, and the values of $u'$  will be

$$\begin{array}{c} \begin{array}{cccccc} 2\alpha+\beta+\gamma, & 2\alpha + \beta + \delta,& 2\alpha + \gamma + \delta, & 2\beta + \alpha + \gamma,& 2\beta + \alpha + \delta, & 2\beta + \gamma + \delta,\\ 2\gamma+ \alpha + \beta, & 2\gamma + \alpha + \delta,& 2 \gamma + \beta + \delta,& 2 \delta + \alpha + \beta, &2\delta + \alpha + \gamma, &2\delta + \beta + \gamma, \end{array} \\ \begin{array}{ccc} \alpha + \beta + \gamma + \delta, & \alpha + \beta + \gamma + \delta, & \alpha + \beta + \gamma + \delta \end{array} \end{array}$$

Now, since its degree is odd, this equation must have a real root; and it will indeed have the real root $\alpha+\beta+\gamma+\delta$ (which equals the second coefficient of $Z$  taken with the opposite sign, and is therefore not only real but also rational, if the coefficients in $Z$  are rational). But it can easily be seen that if a formula is given that rationally expresses the value of $M'$ in terms of the corresponding value of $u',$  it must necessarily become indeterminate for $u' = \alpha + \beta + \gamma + \delta.$  For this value is a root of the equation $U' = 0$  (indeed a triple one), and the three values of $M'$  corresponding to it will be $(\alpha+\beta)(\gamma+\delta),$  $(\alpha+\gamma)(\beta+\delta),$  and $(\alpha+\delta)(\beta+\gamma),$  all of which can be irrational. Clearly, a rational formula could in this case produce neither an irrational value of $M'$ nor three distinct values. From this example, it is evident that the method of de Foncenex is by no means satisfactory; if it is to be made complete in every respect, a much deeper investigation into the theory of elimination is required.

12.
Finally, Lagrange dealt with our theorem in the work ''Sur la forme des racines imaginaires des équations, Nouv. Mém. de l’Acad. de Berlin 1772, p. 222 sqq''. This great geometer devoted his efforts above all to repairing the deficiencies in Euler's first demonstration, particularly those aspects constituting the second and fourth objections above (art. 8). He delved so deeply into these matters that nothing more remains to be desired, except that in the preliminary discussion of the theory of elimination (on which this entire investigation rests) certain doubts may perhaps seem to remain. However, he did not touch the third objection at all, and the entire inquiry is built on the supposition that an equation of degree $m$ does have $m$  roots.

Therefore, with careful consideration of what has been presented so far, I hope that experts will find a new demonstration of this most important theorem, derived from entirely different principles, to be not unwelcome. I now proceed to present it.

13.
''Let $m$ denote any positive integer. Then the function $\sin \varphi \cdot x^m - \sin m \varphi \cdot r^{m-1} x + \sin(m-1)\varphi \cdot r^m$ will be divisible by $xx - 2 \cos \varphi \cdot rx + rr$ ''.

Proof. For $m = 1,$ the function becomes $= 0,$  and hence is divisible by any factor. For $m=2,$ the quotient becomes $\sin \varphi,$  and for any larger value of $m$ it will be $\sin \varphi \cdot x^{m-2} + \sin 2\varphi \cdot rx^{m-3} + \sin 3\varphi \cdot rrx^{m-4} + \text{etc.}+\sin(m-1)\varphi \cdot r^{m-2}.$ It can easily be confirmed that multiplying this function by $xx-2\cos\varphi \cdot rx + rr$ produces the given function.
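The multiplication at the end of this proof is easily checked numerically; the following sketch (with illustrative values of $m,$ $\varphi,$ $r$ of my own choosing) multiplies the stated quotient by $xx - 2\cos\varphi \cdot rx + rr$ and compares the result, coefficient by coefficient, with the function of the theorem.

```python
# Check of the lemma of art. 13 (illustrative values of m, phi, r):
# (quotient) * (xx - 2 cos(phi) r x + rr) must reproduce
# sin(phi) x^m - sin(m phi) r^(m-1) x + sin((m-1)phi) r^m.
import math

def poly_mul(p, q):
    """Multiply coefficient lists (highest degree first)."""
    out = [0.0] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            out[i + j] += pi * qj
    return out

m, phi, r = 5, 0.7, 1.3
s = math.sin

# quotient: sin(phi) x^{m-2} + sin(2 phi) r x^{m-3} + ... + sin((m-1)phi) r^{m-2}
quotient = [s((k + 1) * phi) * r**k for k in range(m - 1)]
divisor = [1.0, -2 * math.cos(phi) * r, r * r]

product = poly_mul(quotient, divisor)

# dividend: sin(phi) x^m + 0 + ... - sin(m phi) r^{m-1} x + sin((m-1)phi) r^m
dividend = [s(phi)] + [0.0] * (m - 2) + [-s(m * phi) * r**(m - 1),
                                         s((m - 1) * phi) * r**m]
assert len(product) == len(dividend)
assert all(abs(p - d) < 1e-9 for p, d in zip(product, dividend))
```

The interior coefficients cancel because of the identity $\sin(n+1)\varphi + \sin(n-1)\varphi = 2\cos\varphi\sin n\varphi.$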

14.
If the quantity $r$ and the angle $\varphi$  are determined in such a way that we have the equations

$$\begin{aligned} r^m \cos m \varphi + A r^{m-1} \cos(m-1) \varphi + B r^{m-2}\cos(m-2) \varphi + \text{etc.} &\\ + Krr \cos 2\varphi + Lr \cos \varphi + M &= 0 \quad.\quad.\quad[1] \end{aligned}$$

$$\begin{aligned} r^m \sin m \varphi + A r^{m-1} \sin(m-1) \varphi + B r^{m-2}\sin(m-2) \varphi + \text{etc.}& \\ + Krr \sin 2\varphi + Lr \sin \varphi & = 0 \quad.\quad.\quad[2] \end{aligned}$$

then the function $x^m+Ax^{m-1} + Bx^{m-2}+\text{etc.}+Kxx + Lx + M = X$ will be divisible by the double factor $xx-2\cos\varphi.rx+rr,$  provided $r\sin\varphi$  is not $=0;$  if $r\sin\varphi = 0,$  then the same function will be divisible by the simple factor $x - r \cos\varphi.$ 

Proof. I. From the preceding article, all of the following quantities will be divisible by $xx - 2\cos \varphi. rx + rr{:}$

$$ \begin{array}{lll} \sin \varphi. rx^m &- \sin m \varphi. r^m x &+ \sin (m-1) \varphi. r^{m+1} \\ A\sin \varphi. rx^{m-1} &- A\sin (m-1) \varphi. r^{m-1} x &+ A\sin (m-2) \varphi. r^{m} \\ B\sin \varphi. rx^{m-2} &- B \sin (m-2) \varphi. r^{m-2} x &+ B\sin (m-3) \varphi. r^{m-1} \\ &\text{etc.} & \text{etc.} \\ K\sin \varphi. rx^2 &- K\sin 2 \varphi. rr x &+K\sin \varphi. r^{3} \\ L\sin \varphi. rx &- L \sin \varphi. r x & \\ M\sin \varphi. r & &+ M\sin(-\varphi).r \end{array}$$

Therefore, the sum of these quantities will also be divisible by $xx - 2\cos\varphi.rx + rr.$ The terms of the first group constitute the sum $\sin\varphi.rX;$  the sum of the second group is $0$  due to [2]; and it is easily seen that the sum of the third group also vanishes, if [1] is multiplied by $\sin \varphi$  and [2] by $\cos \varphi,$  and the products are subtracted. Hence, it follows that the function $\sin \varphi. r X$ is divisible by $xx - 2 \cos \varphi .rx +rr,$  and therefore, unless $r \sin \varphi = 0,$  so is the function $X.$  Q.E.P.

II. If $r \sin \varphi = 0,$ then either $r = 0$  or $\sin \varphi = 0.$  In the former case, $M=0$  by [1], and therefore $X$  is divisible by $x,$  i.e., by $x - r\cos\varphi;$  in the latter case, $\cos \varphi = \pm 1,$  $\cos 2 \varphi = +1,$  $\cos 3 \varphi = \pm 1,$  and generally $\cos n\varphi = (\cos \varphi)^n.$  Therefore, by [1], $X=0$  when $x=r \cos \varphi,$  and hence the function $X$  is divisible by $x-r\cos\varphi.$  Q.E.S.
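The criterion of art. 14 can be illustrated on a concrete cubic (the polynomial and the pair $r,$ $\varphi$ below are my own example, chosen so that [1] and [2] hold exactly): once $r$ and $\varphi$ satisfy both equations, division of $X$ by $xx - 2\cos\varphi \cdot rx + rr$ leaves no remainder.

```python
# Illustration of art. 14 (the cubic and its factor are chosen for the example):
# with r, phi satisfying equations [1] and [2], X is divisible by
# xx - 2 cos(phi) r x + rr.
import math

# X = x^3 + A x^2 + B x + M = (xx - 2x + 5)(x + 1)
A, B, M = -1.0, 3.0, 5.0
r, phi = math.sqrt(5.0), math.atan2(2.0, 1.0)   # r cos(phi) = 1, r sin(phi) = 2

eq1 = (r**3 * math.cos(3 * phi) + A * r**2 * math.cos(2 * phi)
       + B * r * math.cos(phi) + M)             # equation [1]
eq2 = (r**3 * math.sin(3 * phi) + A * r**2 * math.sin(2 * phi)
       + B * r * math.sin(phi))                 # equation [2]
assert abs(eq1) < 1e-9 and abs(eq2) < 1e-9

# divide X by xx - p x + q with p = 2 cos(phi) r, q = rr; remainder must vanish
p, q = 2 * math.cos(phi) * r, r * r
coeffs = [1.0, A, B, M]
quot = []
rem = list(coeffs)
for i in range(len(coeffs) - 2):                # synthetic division
    lead = rem[i]
    quot.append(lead)
    rem[i + 1] += lead * p
    rem[i + 2] -= lead * q
assert all(abs(t) < 1e-9 for t in rem[-2:])     # remainder coefficients ~ 0
```

Here the quotient comes out as $x + 1,$ as expected from the chosen factorization.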

15.
The preceding theorem is usually demonstrated with the aid of imaginary quantities (see Euler, Introductio in Analysin Infinitorum, Vol. I, p. 110); I deemed it worthwhile to show how it can be derived just as easily without their aid. It is now evident that, for the proof of our theorem, nothing more is required than to show: Given any function $X$ of the form $x^m + Ax^{m-1}+Bx^{m-2}+\text{etc.} +Lx + M,$  $r$  and $\varphi$  can be determined in such a way that equations [1] and [2] hold. From this it will follow that $X$ has a real factor of the first or second degree; division by it then yields a real quotient of lower degree, which, for the same reason, also has a factor of the first or second degree. By continuing this operation, $X$ is at last resolved into real simple or double factors. The aim of the following discussion is therefore to prove that theorem.

16.
Imagine an infinite fixed plane (the plane of the table, Fig. 1), and on this, an infinite fixed straight line $GC$ passing through the fixed point $C.$  Assume any length as the unit so that all lines can be expressed by numbers. At any point $P$ on the plane, with a distance $r$  from the center $C$  and an angle $GCP = \varphi,$  erect a perpendicular equal to the value of the expression

$$r^m \sin m \varphi + A r^{m-1} \sin(m-1)\varphi + \text{etc.} + L r\sin \varphi,$$

which, for brevity, I will always denote by $T$ in the following. I always consider the distance $r$ as positive, and for points on the other side of the axis, the angle $\varphi$  is to be taken either as greater than two right angles or as negative (which here comes to the same thing). The ends of these perpendiculars (which are to be taken above the plane for a positive value of $T,$ below it for a negative value, and on the plane itself where $T$  vanishes) will lie on a continuous curved surface, infinite in every direction, which for brevity I will call the first surface in what follows. In exactly the same way a second surface is constructed, whose height above any point of the plane is

$$r^m\cos m\varphi + Ar^{m-1}\cos(m-1)\varphi + \text{etc.}+Lr\cos\varphi+M,$$

which I will denote by $U$ for brevity. This surface will likewise be continuous and infinite in every direction, and I will distinguish it from the former by calling it the second surface. Then it is evident that the whole matter turns on proving that at least one point exists that lies simultaneously in the plane, on the first surface, and on the second surface.

17.
It can easily be seen that the first surface lies partly above and partly below the plane; for the distance $r$ from the center can be taken so large that the remaining terms in $T$  become negligible compared with the first term $r^m \sin m \varphi,$  and this term can be made either positive or negative by a suitable choice of the angle $\varphi.$  Therefore, the fixed plane necessarily intersects the first surface; I will call this intersection the first line, which is thus determined by the equation $T = 0.$  For the same reason, the plane intersects the second surface in a curve determined by the equation $U = 0,$  which I will call the second line. Strictly speaking, each of these curves may consist of several branches, which can be entirely separate from one another, though each is a continuous line. Indeed, the first line is always composite (what is called a complex curve), and the axis $GC$ itself must be regarded as part of it; for whatever value is assigned to $r,$  we always have $T = 0$  when $\varphi$  is either $= 0$  or $= 180^o.$  It is better, however, to regard the complex of all branches passing through all the points where $T = 0$  as one curve (according to the usage generally accepted in higher geometry), and similarly for all branches passing through the points where $U=0.$  It is now evident that the problem is reduced to proving that at least one point exists in the plane where some branch of the first line intersects some branch of the second line. For this, it will be necessary to examine the nature of these lines closely.

18.
First of all, I observe that both curves are algebraic and, if referred to orthogonal coordinates, of order $m.$ Taking the abscissas $x$  from $C$  in the direction of $G,$  and the ordinates $y$  perpendicular to them on the side of $P,$  we have $x = r \cos \varphi,$  $y = r \sin \varphi,$  and thus, generally, for any $n,$

$$\begin{aligned} r^n \sin n \varphi &= nx^{n-1}y - \tfrac{n. n-1. n-2}{1. 2 . 3} x^{n-3} y^3 + \tfrac{n. n-1. n-2. n-3. n-4}{1. 2. 3. 4. 5} x^{n-5}y^5 - \text{etc.}, \\ r^n \cos n \varphi &= x^n - \tfrac{n. n-1}{1. 2} x^{n-2} yy + \tfrac{n. n-1. n-2. n-3 }{ 1. 2 . 3 . 4} x^{n-4}y^4 - \text{etc.} \end{aligned}$$

Therefore, both $T$ and $U$  will consist of several terms of the form $a x^{\alpha} y^{\beta},$  where $\alpha,$  $\beta$  denote non-negative integers whose sum is at most $= m.$  Moreover, it is easily foreseen that every term of $T$  involves the factor $y,$  and therefore the first line is composed of the straight line whose equation is $y = 0$ and a curve of order $m-1.$  However, there is no need to dwell on this distinction here.
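The two expansions above are the odd- and even-index parts of the binomial development; the following sketch (with arbitrary values of $n,$ $r,$ $\varphi$ of my choosing) confirms them numerically.

```python
# Check of the expansions of art. 18: r^n sin(n phi) and r^n cos(n phi)
# as polynomials in x = r cos(phi), y = r sin(phi), with binomial
# coefficients of odd and even index respectively.
import math

n, r, phi = 6, 1.2, 0.9
x, y = r * math.cos(phi), r * math.sin(phi)

# odd-index binomial terms with alternating signs
sin_poly = sum((-1)**j * math.comb(n, 2*j + 1) * x**(n - 2*j - 1) * y**(2*j + 1)
               for j in range(n // 2 + 1) if 2*j + 1 <= n)
# even-index binomial terms with alternating signs
cos_poly = sum((-1)**j * math.comb(n, 2*j) * x**(n - 2*j) * y**(2*j)
               for j in range(n // 2 + 1))

assert abs(sin_poly - r**n * math.sin(n * phi)) < 1e-9
assert abs(cos_poly - r**n * math.cos(n * phi)) < 1e-9
```

Both identities are the real and imaginary parts of $(x + iy)^n = r^n(\cos n\varphi + i \sin n\varphi),$ though Gauss's text deliberately avoids that formulation.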

A matter of greater moment is the investigation whether the first and second lines have infinite branches, and how many. At an infinite distance from the point $C,$ the first line, whose equation is $\sin m \varphi + \frac{A}{r} \sin (m-1)\varphi + \frac{B}{rr} \sin (m-2) \varphi \text{ etc.} = 0,$  merges with the line whose equation is $\sin m \varphi = 0.$  The latter is a complex of $m$  straight lines intersecting at the point $C,$  of which the first is the axis $GCG',$  while the others are inclined to it at angles of $\frac{1}{m} 180,$  $\frac{2}{m} 180,$  $\frac{3}{m} 180$  etc. degrees. Therefore, the first line has $2m$ infinite branches, which divide the circumference of a circle described with an infinitely large radius into $2m$  equal parts: the first branch cuts the circumference at its intersection with the axis, the second at a distance of $\frac{1}{m} 180^o$ from it, the third at a distance of $\frac{2}{m} 180^o,$  and so on.

Similarly, at an infinite distance from the center, the second line merges with the line expressed by the equation $\cos m \varphi = 0.$ This is a complex of $m$  straight lines intersecting at equal angles at the point $C,$  of which the first is inclined to the axis at an angle of $\frac{1}{m}90^o,$  the second at an angle of $\frac{3}{m}90^o,$  the third at an angle of $\frac{5}{m}90^o,$  and so on. Therefore, the second line will also have $2m$ infinite branches, each occupying the middle position between the two nearest branches of the first line; they intersect the circumference of a circle described with an infinitely large radius at points that are $\frac{1}{m}90^o,$ $\frac{3}{m}90^o,$  $\frac{5}{m}90^o$  etc. away from the axis.

However, it is evident that the axis itself always constitutes two infinite branches of the first line, namely the first and the $(m+1)^{st}.$ This arrangement of the branches is clearly shown in Fig. 2 for the case $m = 4,$  where the branches of the second line are drawn dotted to distinguish them from the branches of the first line. The same applies to Fig. 4. Since these conclusions are of the utmost importance, and infinitely large quantities may offend some readers, I will demonstrate them without recourse to the infinite in the following article.

19.
With all the conditions as stated above, a circle can be described from the center $C,$ on whose circumference there are $2m$  points where $T=0$  and an equal number of points where $U=0,$  arranged such that each latter point lies between two former points.

Denote by $S$ the sum of all the coefficients $A,$ $B,$  etc., up to $K,$  $L,$  $M,$  all taken positively, and let $R$  be taken such that $R > S\surd{2}$  and $R > 1$. Then I say that on a circle described with radius $R$ the conditions stated in the theorem necessarily hold. Specifically, for brevity, designate by (1) the point on its circumference that is $\frac{1}{m}45$ degrees away from its intersection with the left side of the axis, i.e., for which $\varphi = \frac{1}{m} 45^o;$  similarly by (3) the point for which $\varphi = \frac{3}{m} 45^o;$  by (5) the point where $\varphi = \frac{5}{m} 45^o,$  and so on up to $(8m-1),$  which is $\frac{8m-1}{m} 45$  degrees away from that intersection when one always proceeds in the same direction (or $\frac{1}{m}45^o$  from the other side), so that in all $4m$  points lie on the circumference at equal intervals. Then between $(8m-1)$ and (1) there will lie one point for which $T=0;$  similarly, one such point each between (3) and (5); between (7) and (9); between (11) and (13), and so on, in all $2m$  points. Likewise, one point for which $U=0$ will lie between (1) and (3); between (5) and (7); between (9) and (11), etc., the total count also being $=2m.$  Finally, apart from these $4m$  points, there are no other points on the entire circumference for which either $T$  or $U$  is $= 0.$

Proof. I. At the point (1), we have $m\varphi = 45^o,$ and thus

$$T = R^{m-1}(R\surd \tfrac{1}{2} + A \sin (m-1)\varphi + \frac{B}{R} \sin (m-2) \varphi + \text{etc.}+\frac{L}{R^{m-2}}\sin \varphi )$$

However, the sum $A \sin (m-1) \varphi + \frac{B}{R} \sin (m-2) \varphi + \text{etc.}$ cannot, taken absolutely, be greater than $S,$  and $S$  is necessarily less than $R\surd{\tfrac{1}{2}}.$  It follows that at this point the value of $T$  is certainly positive. Indeed, $T$ will have a positive value whenever $m\varphi$  lies between $45^o$  and $135^o,$  i.e., from point (1) to (3) the value of $T$  is always positive. By the same reasoning, $T$ will have a positive value from point (9) to (11), and generally from any point $(8k+1)$  to $(8k+3),$  where $k$  denotes any integer. Similarly, $T$ will have a negative value everywhere between (5) and (7), between (13) and (15), etc., and generally between $(8k + 5)$  and $(8k+7),$  so it can never be $= 0$  in these intervals. But since the value is positive at (3) and negative at (5), it must be $= 0$ somewhere between (3) and (5); likewise somewhere between (7) and (9); between (11) and (13), etc., up to the interval between $(8m-1)$  and (1) inclusive, so that in all $T = 0$  at $2m$  points. Q.E.P.

II. That beyond these $2m$ points there are no others where $T = 0$  can be seen as follows. Since there are none between (1) and (3), between (5) and (7), etc., there could be more than $2m$  such points only if $T$  vanished at least twice in some one of the intervals between (3) and (5), or between (7) and (9), etc.; but then $T$ would have a maximum or minimum within that interval, and thus $\frac{dT}{d\varphi} = 0$  there. Now $\frac{dT}{d\varphi} = mR^{m-1}(R\cos m \varphi + \frac{m-1}{m} A \cos(m-1)\varphi + \text{etc.}),$  and between (3) and (5), $\cos m\varphi$  is always negative and, taken absolutely, $>\surd{\tfrac{1}{2}}.$  Hence, it is easily seen that in this entire interval $\frac{dT}{d\varphi}$  is a negative quantity; similarly, it is everywhere positive between (7) and (9), negative between (11) and (13), etc., so that $\frac{dT}{d\varphi} = 0$  cannot occur in any of these intervals. Therefore, etc. Q.E.S.

III. In a wholly similar manner, it is demonstrated that $U$ has a negative value everywhere between (3) and (5), between (11) and (13), etc., and generally between $(8k+3)$  and $(8k+5);$  and a positive value between (7) and (9), between (15) and (17), etc., and generally between $(8k+7)$  and $(8k+9).$  Hence it immediately follows that $U=0$  must occur somewhere between (1) and (3), between (5) and (7), etc., i.e., in $2m$  points. However, in none of these intervals can $\frac{dU}{d\varphi}=0$ occur (which is proved in a similar way as above): therefore, there are no more than those $2m$  points on the circumference of the circle where $U=0.$  Q.E.T. et Q.

Moreover, that part of this theorem according to which there cannot be more than $2m$ points where $T=0,$  nor more than $2m$  where $U = 0,$  can also be demonstrated from the fact that the equations $T = 0,$  $U=0$  represent curves of order $m,$  which, by higher geometry, cannot be cut by a circle (a curve of the second order) in more than $2m$  points.
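The theorem of art. 19 lends itself to a direct numerical check. In the sketch below (the sample polynomial is my own arbitrary choice), $T$ and $U$ are sampled around a circle of radius $R > \max(1, S\surd 2),$ and each is found to change sign exactly $2m$ times.

```python
# Numerical illustration of art. 19 (sample polynomial chosen arbitrarily):
# on a circle of radius R > max(1, S*sqrt(2)), each of T and U changes
# sign exactly 2m times around the circumference.
import math

coeffs = [2.0, -1.0, 0.5, 3.0]          # A, B, ..., M for m = 4
m = len(coeffs)
S = sum(abs(c) for c in coeffs)
R = max(1.0, S * math.sqrt(2)) + 1.0

def T(phi):
    return (R**m * math.sin(m * phi)
            + sum(c * R**(m - 1 - k) * math.sin((m - 1 - k) * phi)
                  for k, c in enumerate(coeffs)))

def U(phi):
    return (R**m * math.cos(m * phi)
            + sum(c * R**(m - 1 - k) * math.cos((m - 1 - k) * phi)
                  for k, c in enumerate(coeffs)))

N = 20000
angles = [2 * math.pi * i / N for i in range(N)]
signs_T = [T(a) > 0 for a in angles]
signs_U = [U(a) > 0 for a in angles]
changes_T = sum(signs_T[i] != signs_T[(i + 1) % N] for i in range(N))
changes_U = sum(signs_U[i] != signs_U[(i + 1) % N] for i in range(N))
assert changes_T == 2 * m and changes_U == 2 * m
```

The $2m$ zeros of $T$ and the $2m$ zeros of $U$ found this way alternate around the circle, exactly as the interleaving part of the theorem asserts.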

20.
If another circle with a radius greater than $R$ is described from the same center, it will be divided in the same way: between points (3) and (5) there will be one point where $T=0,$  likewise between (7) and (9), etc. It is easily observed that the less the radius of this circle differs from $R,$  the closer the corresponding points between (3) and (5) on the two circumferences lie to each other. The same holds if a circle is described with a radius somewhat smaller than $R$ but still greater than both $S\surd{2}$  and $1.$  From this it is easily understood that the circumference of the circle of radius $R$ is actually cut, at the point between (3) and (5) where $T=0,$  by some branch of the first line; and the same holds for the other points where $T=0.$  Similarly, it is evident that this circumference is cut at all $2m$  points where $U=0$  by some branch of the second line. These conclusions can also be expressed as follows: When a circle of suitable size is described about the center $C,$ $2m$  branches of the first line and $2m$  branches of the second line enter it, in such a way that any two nearest branches of the first line are separated by a branch of the second line. See Fig. 2, where the circle is now of finite size, and where the numbers assigned to the several branches are not to be confused with the numbers by which, for brevity, I designated particular limit points in the previous article and in this one.

21.
Now, from this relative arrangement of the branches entering the circle, it can be deduced in several ways that an intersection of a branch of the first line with a branch of the second line must occur within the circle; I am almost at a loss which to prefer. The following seems the clearest: Designate (Fig. 2) as $0$ the point on the circumference of the circle where it is cut by the branch coming from the left side of the axis (which is itself one of the $2m$ branches of the first line); as $1$  the nearest point where a branch of the second line enters; as $2$  the next point, where the second branch of the first line enters, and so on up to $4m-1;$  so that at every point marked with an even number a branch of the first line enters the circle, and at every point marked with an odd number a branch of the second line. It is well known from higher geometry that an algebraic curve (or each part of an algebraic curve, if it happens to consist of several) either returns into itself or runs to infinity on both sides; hence, if a branch of an algebraic curve enters a bounded region, it must necessarily leave that region again somewhere. From this it is easily concluded that every point marked with an even number (or, for brevity, every even point) must be connected within the circle, by a branch of the first line, with another even point, and similarly every odd point with another odd point by a branch of the second line. Although this connection of pairs of points can, according to the nature of the function $X,$ be very different, so that it cannot in general be determined, it can easily be shown that in every case an intersection of the first line with the second arises.

22.
The necessity of this is most conveniently demonstrated by reductio ad absurdum. Suppose, namely, that the connections of the even points among themselves and of the odd points among themselves can be so arranged that no intersection of a branch of the first line with a branch of the second line arises. Since the axis is part of the first line, point $0$ must clearly be connected with point $2m.$  Therefore, point $1$  cannot be connected with any point beyond the axis, i.e., with any point expressed by a number greater than $2m;$  otherwise the connecting line would necessarily cut the axis. If, then, $1$ is connected with point $n,$  we have $n < 2m.$  By similar reasoning, if $2$  is connected with $n',$  then $n' < n,$  since otherwise the branch $2...n'$  would necessarily cut the branch $1...n.$  For the same reason, point $3$  must be connected with some point $n''$  lying between $4$  and $n';$  and it is clear that if $4,$  $5,$  etc. are connected with $n''',$  $n'''',$  etc., then $n'''$  lies between $5$  and $n'',$  $n''''$  between $6$  and $n''',$  etc. Hence, we must finally arrive at some point $h$  connected with point $h+2;$  then the branch entering the circle at point $h+1$  must necessarily cut the branch connecting points $h$  and $h+2.$  But since one of these two branches belongs to the first line and the other to the second, it is now clear that the assumption is contradictory, and therefore an intersection of the first line with the second must necessarily occur somewhere.

If this is combined with the preceding discussion, it follows from all that has been explained that the theorem, namely that every integral rational algebraic function of one indeterminate can be resolved into factors of the first or second degree with real coefficients, has been rigorously demonstrated.

23.
Moreover, it can easily be deduced from the same principles that not merely one but at least $m$ intersections of the first line with the second line exist, although it is also possible for the first line to be cut by several branches of the second line at the same point, in which case the function $X$  has several equal factors. However, since it suffices here to have demonstrated the necessity of one intersection, I do not, for brevity's sake, dwell further on this matter. For the same reason I do not pursue here in more detail other properties of these lines, such as that the intersections always occur at right angles, or that, if several branches of each curve meet in the same point, the first line has as many branches there as the second, placed alternately and cutting one another at equal angles, etc.
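In modern notation (which Gauss of course does not use), the right-angle property follows at once from what are now called the Cauchy–Riemann equations. A short sketch, on the assumption that $U$ and $T$ denote the real and imaginary parts of $X(x+iy)$ as in the constructions above:

```latex
% Write X(x+iy) = U + iT with U, T real. Since X is a polynomial,
% the Cauchy-Riemann equations hold:
%   dU/dx = dT/dy,   dU/dy = -dT/dx.
% Hence the gradients of U and T are everywhere orthogonal:
\nabla U \cdot \nabla T
  = \frac{\partial U}{\partial x}\frac{\partial T}{\partial x}
  + \frac{\partial U}{\partial y}\frac{\partial T}{\partial y}
  = \frac{\partial T}{\partial y}\frac{\partial T}{\partial x}
  - \frac{\partial T}{\partial x}\frac{\partial T}{\partial y}
  = 0.
% Wherever X'(x+iy) is nonzero, both gradients are nonzero, so the
% level curves T = 0 (first line) and U = 0 (second line) there
% intersect at right angles.
```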

Finally, I observe that it is by no means impossible to present the preceding demonstration, which I have here built on geometric principles, in purely analytic form. But I believed that the presentation explained here would be less abstract, and would set the essence of the proof before the eyes more clearly than could be expected of an analytic demonstration.

As a bonus, I will indicate yet another method of proving our theorem, which at first glance will seem very different not only from the preceding demonstration but also from all the others discussed above, and yet is fundamentally the same as d'Alembert's. I leave it to those familiar with the subject to compare it with the previous one and to explore the parallelism between the two; it is appended solely for their benefit.

24.
Above the plane of Figure 4, relative to the axis $CG$ and the fixed point $C,$  I suppose the first and second surfaces to be described in the same way as above. Take any point situated on a branch of the first line, where $T = 0$ (for example, a point $M$  lying on the axis); unless $U = 0$  at this point already, proceed from it along the first line in the direction in which the absolute value of $U$  decreases. If at the point $M$  the absolute value of $U$ happens to decrease in both directions, it does not matter which way you proceed; what to do if $U$  increases in both directions, I shall explain presently. Clearly, as long as you keep advancing along the first line, you must reach either a point where $U = 0$ or one where the absolute value of $U$  attains a minimum, say the point $N.$  In the former case the sought point is found; in the latter it can be demonstrated that at this point several branches of the first line cut one another (indeed, an equal number of branches), and that their half-branches are so arranged that, deviating onto any one of them (to either side), the value of $U$  continues to decrease. (For the sake of brevity, I must suppress the demonstration of this theorem, which is not so much difficult as lengthy.) Along such a branch you may then advance again until $U$ becomes $= 0$  (as happens in Fig. 4 at $P$ ) or again reaches a minimum; deviating again in this manner, you must finally arrive at a point where $U = 0.$
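The descent Gauss describes can be imitated numerically. The sketch below is my construction, not Gauss's: for a sample polynomial $X = z^3 - 2z + 2$ with real coefficients, taking $T = \operatorname{Im} X$ and $U = \operatorname{Re} X,$ the first line $T = 0$ contains the axis, and the direction $\overline{X'(z)}$ is tangent to that line while $U$ changes along it at the rate $|X'(z)|.$ Stepping against or along this direction, according to the sign of $U,$ therefore walks along the first line with $|U|$ steadily diminishing. Starting from a point of the axis from which no minimum intervenes, the walk runs straight into a zero of $X$:

```python
def descend(f, fp, z, step=1e-3, tol=1e-9, max_iter=20000):
    """Walk along the first line T = Im f = 0, always in the direction
    that diminishes |U| = |Re f|.  The vector conj(f'(z)) is tangent to
    the curve Im f = 0 and points the way in which Re f increases, so
    we step with or against it according to the sign of Re f."""
    for _ in range(max_iter):
        u = f(z).real
        if abs(u) < tol:
            return z          # U = 0: a root of f has been reached
        d = fp(z).conjugate()
        if abs(d) < 1e-12:
            break             # minimum of |U|: branches of the first
                              # line cross here (Gauss's point N)
        # Newton-sized step near the root, fixed-size step far from it
        h = min(step, abs(u) / abs(d))
        z -= (1 if u > 0 else -1) * h * d / abs(d)
    return z

f  = lambda z: z**3 - 2*z + 2
fp = lambda z: 3*z**2 - 2

# Start on the axis at z = -3, where U = f(-3) = -19 < 0; along this
# stretch |U| decreases monotonically, and the walk ends at the real
# root of f near z = -1.7693.
root = descend(f, fp, -3 + 0j)
print(root, abs(f(root)))
```

Had the walk been started at $z = 0$ instead, it would halt near $z \approx 0.816,$ where $X' = 0$ and branches of the first line cross: exactly the situation in which Gauss prescribes deviating onto another branch, a step this sketch does not automate.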

Against this demonstration a doubt might be raised: might it not happen that, however far you advance, although the value of $U$ always decreases, its decrements grow continually slower, so that this value approaches a limit which it nevertheless never reaches? This objection would correspond to the fourth one in Article 6. But it would not be difficult to assign a bound such that, once it is passed, the value of $U$ must necessarily not merely change more rapidly but can no longer go on decreasing at all, so that the value $0$  must necessarily have been reached before this bound is attained. However, I reserve for another occasion a fuller treatment of this and of other points that I could only touch upon in this demonstration.

We discovered the principles on which this demonstration is based in October 1797.