Page:Encyclopædia Britannica, Ninth Edition, v. 19.djvu/806

782 PROBABILITY

between μ and ν; that is, that the mean of all the errors lies between μr⁻¹ and νr⁻¹, if r is the number of the partial errors in (37). The most likely value of x (that is, for which the frequency is greatest) is of course x = m, and the chance that x does not differ from m by more than δ is the integral of y, as given by (48), taken between the limits m − δ and m + δ. In this put

  (x − m){2(h − i)}^(−½) = t;  dx = dt √{2(h − i)}.

The limits m ∓ δ for x become ∓δ{2(h − i)}^(−½) for t; hence, putting

  τ = δ{2(h − i)}^(−½),

we find

  ϖ = (2/√π) ∫₀^τ e^(−t²) dt . . . . (49)

is the probability that the sum x of the errors in (37) lies between the limits m ∓ τ√{2(h − i)}; ϖ is also the probability that the mean of all the errors, xr⁻¹, lies between the limits mr⁻¹ ∓ τr⁻¹√{2(h − i)}.

51. The important result (48), which is the key to the whole theory of errors, contains several particular cases which Laplace gives in his fourth chapter. We may first make one or two remarks on it.

(1) h − i is always positive; for in (41) h₁ > a₁², h₂ > a₂², &c., because the mean of the squares of n numbers is always greater than the square of the mean.

(2) To find the mean value M(x) of the sum x, and the mean value of its square M(x²), we have

  M(x) = ∫xy dx ÷ ∫y dx;  M(x²) = ∫x²y dx ÷ ∫y dx;

the limits being ±∞. Hence

  M(x) = m;  M(x²) = m² + h − i.

The first is obvious from the fact that to every value m + z for x there corresponds another m − z. Both results also easily follow from common algebra: the case is that of a sum

  x = e₁ + e₂ + e₃ + . . . + eᵣ,

where each quantity e₁, e₂, e₃ . . . . goes through an independent series of values; and it is easily proved that

  M(x) = M(e₁) + M(e₂) + . . . + M(eᵣ);  M(x²) − {M(x)}² = Σ[M(e²) − {M(e)}²].

52. One particular case of the general problem in art. 48 is when the errors e₁, e₂, e₃ . . . . in (37) all follow exactly the same law; as, for instance, if e₁, e₂, e₃ . . . . are the errors committed in observing the same magnitude, under exactly the same circumstances, a great number of times; and we are asked to find the chance that the sum of the errors, or that their arithmetical mean, shall fall between given limits.
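The two mean values in art. 51 can be verified exactly for small discrete laws of facility. The Python sketch below (the three error laws are invented for illustration) enumerates every combination of partial errors and checks that M(x) = m and M(x²) = m² + h − i, where m, h, i are formed as in (41):

```python
from fractions import Fraction
from itertools import product

# Hypothetical discrete laws of facility for three partial errors:
# each maps an error value to its probability (values invented).
laws = [
    {-1: Fraction(1, 2), 1: Fraction(1, 2)},
    {-2: Fraction(1, 4), 0: Fraction(1, 2), 2: Fraction(1, 4)},
    {0: Fraction(2, 3), 3: Fraction(1, 3)},
]

# m = sum of the means a_k; h = sum of the mean squares h_k; i = sum of a_k².
a = [sum(v * p for v, p in law.items()) for law in laws]
h = sum(sum(v * v * p for v, p in law.items()) for law in laws)
m = sum(a)
i = sum(ak * ak for ak in a)

# Enumerate every combination of partial errors to get M(x) and M(x²) exactly.
Mx = Fraction(0)
Mx2 = Fraction(0)
for combo in product(*(law.items() for law in laws)):
    x = sum(v for v, _ in combo)
    prob = Fraction(1)
    for _, p in combo:
        prob *= p
    Mx += x * prob
    Mx2 += x * x * prob

print(Mx == m)               # M(x) = m
print(Mx2 == m * m + h - i)  # M(x²) = m² + (h − i)
```

The second identity is the familiar fact that the variance of a sum of independent quantities is the sum of their variances.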
Here the law of facility for each error is of course the same, though we may not know what it is. We have then from (41)

  m = ra₁,  h = rh₁,  i = ra₁²;

so that in eq. (52)

  ϖ = (2/√π) ∫₀^τ e^(−t²) dt

is the probability that the mean of all the errors shall lie between

  a₁ ∓ τ√{2r⁻¹(h₁ − a₁²)} . . . . (53).

a₁ here is the mean of all the possible values of the error in this particular observation, which are of course infinite in number; and (53) shows us, what is evident beforehand, that the more the number r of observations is increased the narrower do the limits for the mean error become for a given probability ϖ; so that if, suppose, we take τ = 3, and r = ∞, we have very nearly ϖ = 1, and it becomes practically certain that the mean of the actual observations will differ from a₁ by an infinitesimal deviation.

53. What we have found hitherto would be of very little practical use, because the constants involved suppose the amounts of the errors known, and therefore the true value known of the quantity which is observed or measured. It is, however, precisely this true value which we usually do not know and are trying to find. Let us now suppose a large number r of measurements, which we will call a₁, a₂, a₃ . . . aᵣ, made of a magnitude whose true but unknown value is A. The (unknown) errors of the observations will be

  e₁ = a₁ − A, e₂ = a₂ − A, . . . eᵣ = aᵣ − A;

hence M(e₁) = M(a₁) − A; or the mean of the errors is the error committed in taking the mean of the observations as the value of A. Hence (53) ϖ is the probability that the error committed in taking the mean of the observations as the truth shall lie between a₁ ∓ τ√{2r⁻¹(h₁ − a₁²)}. Here a₁ is the true mean of the errors of an infinite number of observations, h₁ the mean of their squares. As we have no means of determining a₁ (except that it is nearly equal to the mean of the errors we are dealing with, which would give us no result), we have to limit the generality of the question by assuming that the law of error of the observation gives positive and negative errors with equal facility; if so a₁ = 0, and we have the probability ϖ that the error lies between ∓τ√{2r⁻¹h₁}.
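In modern notation the probability (49) is the error function, ϖ = erf(τ). The short Python check below (h₁ = 1 and the sample sizes r are arbitrary choices) shows that ϖ is already nearly 1 at τ = 3, and that the half-width of the limits in (53), taking a₁ = 0, falls off as 1/√r:

```python
import math

# The probability in (49): ϖ = (2/√π)∫₀^τ e^(−t²) dt = erf(τ).
def varpi(tau):
    return math.erf(tau)

# For τ = 3 the probability is already nearly 1, as art. 52 remarks.
print(round(varpi(3.0), 5))   # 0.99998

# Half-width of the limits ∓τ√(2·h₁/r) for the mean error (a₁ = 0),
# with an illustrative mean square h₁ = 1: it halves when r quadruples.
h1 = 1.0
tau = 3.0
for r in (25, 100, 400):
    half_width = tau * math.sqrt(2.0 * h1 / r)
    print(r, round(half_width, 3))
```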
Here h₁, which is the mean of the squares of all possible values of the error of the observation, will be at least very nearly the mean square of the actual values of the errors, if r is large;

  h₁ = M(a₁²) − (Ma₁)² + (Ma₁ − A)².

Rejecting the last term, as the square of a very small quantity, h₁ = M(a₁²) − (Ma₁)², and we have the probability ϖ (in (53)) that the error in taking the mean of the observations as the truth lies between

  ∓τ√[2r⁻¹{M(a₁²) − (Ma₁)²}] . . . . (54),

a value depending on the mean square, and mean first power, of the observed values. These limits may be put in a different form, rather easier for calculation. If f₁, f₂, f₃ . . . fᵣ be the apparent errors, that is, not the real ones, but what they would be on the hypothesis that the mean is the true value, then, putting M for r⁻¹(a₁ + a₂ + . . . + aᵣ),

  f₁ = a₁ − M, f₂ = a₂ − M, . . . fᵣ = aᵣ − M;
  M(f₁²) = M(a₁²) − 2M·Ma₁ + M² = M(a₁²) − (Ma₁)²;

so that h₁ = M(f₁²), and (54) may be written

  ∓τr⁻¹√{2(f₁² + f₂² + . . . + fᵣ²)} . . . . (55).

54. In the last article we have made no assumption as to the law of frequency of the error in the observation we are considering, except that it gives positive and negative values with equal facility. If, however, we adopt the hypothesis (see art. 47) that every error in practice arises from the joint operation of a number of independent causes, the partial error due to each of which is of very small importance, then the process in art. 48 will apply, and we may conclude that the errors of every series of observations of the same magnitude made in the same circumstances follow the law of frequency in formula (48); and if we suppose, as is universally done, that positive and negative values are equally probable, the law will be

  y = (c√π)⁻¹ e^(−x²/c²) . . . . (56),

and the probability (49) that the error lies between ∓δ will be

  ϖ = (2/√π) ∫₀^(δ/c) e^(−t²) dt,

where c is a constant, which is sometimes called the modulus of the system. Every error in practice, then, is of the form (56), and is similar to every other.
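Formula (55) is straightforward to apply to an actual set of readings. In the Python sketch below the eight measurements are invented for illustration; it forms the mean M, the apparent errors f_k, and the limits ∓(τ/r)√{2Σf²}, together with the corresponding probability ϖ = erf(τ):

```python
import math

# Illustrative measurements a_1 ... a_r of one magnitude (values invented).
obs = [10.02, 9.98, 10.05, 9.97, 10.01, 10.03, 9.99, 9.95]
r = len(obs)

# Apparent errors f_k = a_k − M, M being the mean of the observations.
M = sum(obs) / r
f = [a - M for a in obs]

# Limits (55): ∓(τ/r)·√(2·Σf²), with probability ϖ = erf(τ) that the
# error of the mean lies inside them.  τ = 1 here for illustration.
tau = 1.0
limit = (tau / r) * math.sqrt(2.0 * sum(fk * fk for fk in f))
print(round(M, 4), round(limit, 4), round(math.erf(tau), 4))
```

So with these figures it is odds of roughly 84 to 16 that the mean of the readings is within about ±0.016 of the truth.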
If c be small, the error has small amplitudes, and the series of observations are accurate. If, as supposed in art. 53, a set of observations have been made, we can determine the modulus c, with an accuracy increasing with the number in the set. For (art. 51) ½c² = true mean square of all possible values of the error. This we have called h₁ in last article, and have shown it nearly equal to M(a₁²) − (Ma₁)² or M(f₁²); so that

  ½c² = mean square of obs. − (mean of obs.)² = mean square of apparent errors.

55. Thus, if a set of observations have been made, and c thus determined from them, it is easy to see that

  Mean error = cπ^(−½) = ·5642c
  Mean square of error = ½c²        . . . . (57).
  Probable error = ·4769c

The mean error means that of all the positive or all the negative errors. The probable error is the value which half the errors exceed and half fall short of, so that it is an even chance that the error of any particular observation lies between the limits ∓·4769c. Its value is found from the table in art. 9, taking ϖ = ½.

56. We have often to consider the law of error of the sum of
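The constants in (57) follow from the law (56) and can be confirmed numerically. The Python sketch below (c = 1 is an arbitrary modulus, and the grid sizes are arbitrary) integrates the law for the mean error and mean square, and finds the probable error by bisection on erf(p/c) = ½:

```python
import math

c = 1.0  # modulus; all three constants scale linearly or quadratically with c

# Law (56): y(x) = (c√π)^(−1) · e^(−x²/c²).
def y(x):
    return math.exp(-(x / c) ** 2) / (c * math.sqrt(math.pi))

# Mean error and mean square of error by midpoint-rule integration.
n, lo, hi = 200000, -8.0, 8.0
dx = (hi - lo) / n
xs = [lo + (k + 0.5) * dx for k in range(n)]
mean_abs = sum(abs(x) * y(x) for x in xs) * dx
mean_sq = sum(x * x * y(x) for x in xs) * dx

# Probable error: the p with erf(p/c) = 1/2, found by bisection.
lo_p, hi_p = 0.0, c
for _ in range(60):
    mid = (lo_p + hi_p) / 2
    if math.erf(mid / c) < 0.5:
        lo_p = mid
    else:
        hi_p = mid
probable = (lo_p + hi_p) / 2

print(round(mean_abs, 4))  # ≈ ·5642·c
print(round(mean_sq, 4))   # ≈ ½·c²
print(round(probable, 4))  # ≈ ·4769·c
```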