Eight Lectures on Theoretical Physics/III

The problem with which we shall be occupied in the present lecture is that of a closer investigation of the atomic theory of matter. It is, however, not my intention to introduce this theory without further ado, and to set it up as something apart from and disconnected with other physical theories; I intend above all to bring out the peculiar significance of the atomic theory for the present general system of theoretical physics, for only in this way will it be possible to regard the whole system as one possessing an essential inner unity, and thereby to realize the principal object of these lectures.

Consequently, it is self-evident that we must rely upon that mode of treatment which we recognized in last week's lecture as fundamental: the division of all physical processes into reversible and irreversible processes. Furthermore, we shall find that this division can be accomplished only through the atomic theory of matter; in other words, that irreversibility leads of necessity to atomistics.

I have already referred at the close of the first lecture to the fact that in pure thermodynamics, which knows nothing of an atomic structure and which regards all substances as absolutely continuous, the difference between reversible and irreversible processes can only be defined in one way, which a priori carries a provisional character and does not withstand penetrating analysis. This appears immediately evident when one reflects that the purely thermodynamic definition of irreversibility which proceeds from the impossibility of the realization of certain changes in nature, as, e. g., the transformation of heat into work without compensation, has at the outset assumed a definite limit to man's mental capacity, while, however, such a limit is not indicated in reality. On the contrary: mankind is making every endeavor to press beyond the present boundaries of its capacity, and we hope that later on many things will be attained which, perhaps, many regard at present as impossible of accomplishment. Can it not happen then that a process, which up to the present has been regarded as irreversible, may be proved, through a new discovery or invention, to be reversible? In this case the whole structure of the second law would undeniably collapse, for the irreversibility of a single process conditions that of all the others.

It is evident then that the only means to assure to the second law real meaning consists in this, that the idea of irreversibility be made independent of any relationship to man and especially of all technical relations.

Now the idea of irreversibility harks back to the idea of entropy; for a process is irreversible when it is connected with an increase of entropy. The problem is hereby referred back to a proper improvement of the definition of entropy. In accordance with the original definition of Clausius, the entropy is measured by means of a certain reversible process, and the weakness of this definition rests upon the fact that many such reversible processes, strictly speaking all, are not capable of being carried out in practice. With some reason it may be objected that we have here to do, not with an actual process and an actual physicist, but only with ideal processes, so-called thought experiments, and with an ideal physicist who operates with all the experimental methods with absolute accuracy. But at this point the difficulty is encountered: how far do ideal measurements of this sort suffice? It may be understood, by passing to the limit, that a gas is compressed by a pressure which is equal to the pressure of the gas, and is heated by a heat reservoir which possesses the same temperature as the gas; but that, for example, a saturated vapor shall be transformed through isothermal compression in a reversible manner to a liquid without at any time a part of the vapor being supersaturated, as is supposed in certain thermodynamic considerations, must certainly appear doubtful. Still more striking, however, is the liberty as regards thought experiments which in physical chemistry is granted the theorist.
With his semi-permeable membranes, which in reality are realizable only under certain special conditions and then only with a certain approximation, he separates in a reversible manner not only all possible varieties of molecules, whether they are in stable or unstable conditions, but he also separates the oppositely charged ions from one another and from the undissociated molecules, and he is disturbed neither by the enormous electrostatic forces which resist such a separation, nor by the circumstance that in reality, from the beginning of the separation, the molecules become in part dissociated while the ions in part recombine. But such ideal processes are necessary throughout, in order to make possible the comparison of the entropy of the undissociated molecules with the entropy of the dissociated molecules; for the law of thermodynamic equilibrium does not in general permit of derivation in any other way, in case one wishes to retain pure thermodynamics as a basis. It must be considered remarkable that all these ingenious thought processes have so well found confirmation of their results in experience, as was shown by the examples considered by us in the last lecture.

If now, on the other hand, one reflects that in all these results every reference to the possibility of actually carrying out each ideal process has disappeared—there are certainly left relations between directly measurable quantities only, such as temperature, heat effect, concentration, etc.—the presumption forces itself upon one that perhaps the introduction as above of such ideal processes is at bottom a round-about method, and that the peculiar import of the principle of increase of entropy with all its consequences can be evolved from the original idea of irreversibility or, just as well, from the impossibility of perpetual motion of the second kind, just as the principle of conservation of energy has been evolved from the law of impossibility of perpetual motion of the first kind.

This step, the emancipation of the idea of entropy from man's experimental art and the consequent elevation of the second law to a real principle, was the scientific life work of Ludwig Boltzmann. Briefly stated, it consisted in referring back the idea of entropy to the idea of probability. Thereby is also explained the significance of the auxiliary term used by me above: the “preference” of nature for a definite state. Nature prefers the more probable states to the less probable, because in nature processes take place in the direction of greater probability. Heat goes from a body at higher temperature to a body at lower temperature because the state of equal temperature distribution is more probable than a state of unequal temperature distribution.

Through this conception the second law of thermodynamics is removed at one stroke from its isolated position, the mystery concerning the preference of nature vanishes, and the entropy principle reduces to a well understood law of the calculus of probability.

The enormous fruitfulness of so “objective” a definition of entropy for all domains of physics I shall seek to demonstrate in the following lectures. But today we have principally to do with the proof of its admissibility; for on closer consideration we shall immediately perceive that the new conception of entropy at once introduces a great number of questions, new requirements and difficult problems. The first requirement is the introduction of the atomic hypothesis into the system of physics. For, if one wishes to speak of the probability of a physical state, i. e., if he wishes to introduce the probability for a given state as a definite quantity into the calculation, this can only be brought about, as in cases of all probability calculations, by referring the state back to a variety of possibilities; i. e., by considering a finite number of a priori equally likely configurations (complexions) through each of which the state considered may be realized. The greater the number of complexions, the greater is the probability of the state. Thus, e. g., the probability of throwing a total of four with two ordinary six-sided dice is found through counting the complexions by which the throw with a total of four may be realized. Of these there are three complexions:
1 + 3, 2 + 2, 3 + 1.

On the other hand, the throw of two is only realized through a single complexion. Therefore, the probability of throwing a total of four is three times as great as the probability of throwing a total of two.
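The counting of complexions can be verified mechanically; a minimal sketch in Python (the function name `ways` is merely illustrative):

```python
from itertools import product

# The 36 equally likely complexions (ordered pairs of faces) of two dice.
complexions = list(product(range(1, 7), repeat=2))

def ways(total):
    """Number of complexions realizing a given total."""
    return sum(1 for a, b in complexions if a + b == total)

print(ways(4))  # three complexions: 1 + 3, 2 + 2, 3 + 1
print(ways(2))  # a single complexion: 1 + 1
```

Counting all 36 ordered pairs treats 1 + 3 and 3 + 1 as distinct complexions, exactly as the argument above requires.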

Now, in connection with the physical state under consideration, in order to be able to differentiate completely from one another the complexions realizing it, and to associate it with a definite reckonable number, there is obviously no other means than to regard it as made up of numerous discrete homogeneous elements—for in perfectly continuous systems there exist no reckonable elements—and hereby the atomistic view is made a fundamental requirement. We have, therefore, to regard all bodies in nature, in so far as they possess an entropy, as constituted of atoms, and we therefore arrive in physics at the same conception of matter as that which obtained in chemistry for so long previously.

But we can immediately go a step further. The conclusions reached hold not only for the thermodynamics of material bodies, but also possess complete validity for the processes of heat radiation, which are likewise subject to the second law of thermodynamics. That radiant heat also possesses an entropy follows from the fact that a body which emits radiation into a surrounding diathermanous medium experiences a loss of heat and, therefore, a decrease of entropy. Since the total entropy of a physical system can only increase, one part of the entropy of the whole system, consisting of the body and the diathermanous medium, must be contained in the radiated heat. If the entropy of the radiant heat is to be referred back to the notion of probability, we are forced, in a similar way as above, to the conclusion that for radiant heat also the atomic conception possesses a definite meaning. But, since radiant heat is not directly connected with matter, it follows that this atomistic conception relates, not to matter, but only to energy, and hence, that in heat radiation certain energy elements play an essential rôle. Even though this conclusion appears singular, and even though in many circles today vigorous objection is urged against it, in the long run physical research will not be able to withhold its sanction from it, the less so since it is confirmed by experience in quite a satisfactory manner. We shall return to this point in the lectures on heat radiation. I desire here only to mention that the novelty involved in the introduction of atomistic conceptions into the theory of heat radiation is by no means so revolutionary as might, perhaps, appear at first glance. For there is, in my opinion at least, nothing which makes necessary the consideration of the heat processes in a complete vacuum as atomic, and it suffices to seek the atomistic features at the source of radiation, i. e., in those processes which have their play in the centres of emission and absorption of radiation. Then the Maxwellian electrodynamic differential equations can retain completely their validity for the vacuum, and, besides, the discrete elements of heat radiation are relegated exclusively to a domain which is still very mysterious and where there is still present plenty of room for all sorts of hypotheses.

Returning to more general considerations, the most important question comes up as to whether, with the introduction of atomistic conceptions and with the reference of entropy to probability, the content of the principle of increase of entropy is exhaustively comprehended, or whether still further physical hypotheses are required in order to secure the full import of that principle. If this important question had been settled at the time of the introduction of the atomic theory into thermodynamics, then the atomistic views would surely have been spared a large number of conceivable misunderstandings and justifiable attacks. For it turns out, in fact—and our further considerations will confirm this conclusion—that atomistics alone is by no means sufficient, but requires an essential supplement, in order to guarantee the validity of the second law.

We must first reflect that, in accordance with the central idea laid down in the first lecture, the second law must possess validity as an objective physical law, independently of the individuality of the physicist. There is nothing to hinder us from imagining a physicist—we shall designate him a “microscopic” observer—whose senses are so sharpened that he is able to recognize each individual atom and to follow it in its motion. For this observer each atom moves exactly in accordance with the elementary laws which general dynamics lays down for it, and these laws allow, so far as we know, of the reverse performance of every process. Accordingly, here again the question is neither one of probability nor of entropy and its increase. Let us imagine, on the other hand, another observer, designated a “macroscopic” observer, who regards an ensemble of atoms as a homogeneous gas, say, and consequently applies the laws of thermodynamics to the mechanical and thermal processes within it. Then, for such an observer, in accordance with the second law, the process in general is an irreversible process. Would not now the first observer be justified in saying: “The reference of the entropy to probability has its origin in the fact that irreversible processes ought to be explained through reversible processes. At any rate, this procedure appears to me in the highest degree dubious. In any case, I declare each change of state which takes place in the ensemble of atoms designated a gas to be reversible, in opposition to the macroscopic observer.” There is not the slightest thing, so far as I know, that one can urge against the validity of these statements. But do we not thereby place ourselves in the painful position of the judge who, in a trial, declared each of two contending parties separately to be in the right, and then, when a third contended that only one of the parties could emerge from the trial victorious, was obliged to declare him correct as well?
Fortunately we find ourselves in a more favorable position. We can certainly mediate between the two parties without its being necessary for one or the other to give up his principal point of view. For closer consideration shows that the whole controversy rests upon a misunderstanding—a new proof of how necessary it is before one begins a controversy to come to an understanding with his opponent concerning the subject of the quarrel. Certainly, a given change of state cannot be both reversible and irreversible. But the one observer connects a wholly different idea with the phrase “change of state” than the other. What is then, in general, a change of state? The state of a physical system cannot well be otherwise defined than as the aggregate of all those physical quantities, through whose instantaneous values the time changes of the quantities, with given boundary conditions, are uniquely determined. If we inquire now, in accordance with the import of this definition, of the two observers as to what they understand by the state of the collection of atoms or the gas considered, they will give quite different answers. The microscopic observer will mention those quantities which determine the position and the velocities of all the individual atoms. There are present in the simplest case, namely, that in which the atoms may be considered as material points, six times as many quantities as atoms, namely, for each atom the three coordinates and the three velocity components, and in the case of combined molecules, still more quantities. For him the state and the progress of a process is then first determined when all these various quantities are individually given. We shall designate the state defined in this way the “micro-state.” The macroscopic observer, on the other hand, requires fewer data. 
He will say that the state of the homogeneous gas considered by him is determined by the density, the visible velocity and the temperature at each point of the gas, and he will expect that, when these quantities are given, their time variations, and therefore the progress of the process, will be completely determined in accordance with the two laws of thermodynamics, and therefore accompanied by an increase in entropy. In this connection he can call upon all the experience at his disposal, which will fully confirm his expectation. If we call this state the “macro-state,” it is clear that the two laws: “the micro-changes of state are reversible” and “the macro-changes of state are irreversible,” lie in wholly different domains and, at any rate, are not contradictory.

But now how can we succeed in bringing the two observers to an understanding? This is a question whose answer is obviously of fundamental significance for the atomic theory. First of all, it is easy to see that the macro-observer reckons only with mean values; for what he calls density, visible velocity and temperature of the gas are, for the micro-observer, certain mean values, statistical data, which are derived from the space distribution and from the velocities of the atoms in an appropriate manner. But the micro-observer cannot operate with these mean values alone, for, if these are given at one instant of time, the progress of the process is not determined throughout; on the contrary: he can easily find with given mean values an enormously large number of individual values for the positions and the velocities of the atoms, all of which correspond with the same mean values and which, in spite of this, lead to quite different processes with regard to the mean values. It follows from this of necessity that the micro-observer must either give up the attempt to understand the unique progress, in accordance with experience, of the macroscopic changes of state—and this would be the end of the atomic theory—or that he, through the introduction of a special physical hypothesis, restrict in a suitable manner the manifold of micro-states considered by him. There is certainly nothing to prevent him from assuming that not all conceivable micro-states are realizable in nature, and that certain of them are in fact thinkable, but never actually realized. In the formularization of such a hypothesis, there is of course no point of departure to be found from the principles of dynamics alone; for pure dynamics leaves this case undetermined. But on just this account any dynamical hypothesis, which involves nothing further than a closer specification of the micro-states realized in nature, is certainly permissible. 
Which hypothesis is to be given the preference can only be decided through comparison of the results to which the different possible hypotheses lead in the course of experience.
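The relation between the two descriptions admits of a simple numerical illustration; in the following Python sketch the number of atoms, the units, and the Gaussian distribution of velocities are all illustrative assumptions, not taken from the lecture:

```python
import random

random.seed(1)
N = 10000  # atoms, treated as material points

def random_micro_state():
    """One micro-state: three velocity components for each of the N atoms."""
    return [[random.gauss(0.0, 1.0) for _ in range(3)] for _ in range(N)]

def macro_state(vs):
    """The macro-observer's data: mean values formed from the micro-state."""
    mean_v = [sum(v[i] for v in vs) / N for i in range(3)]               # visible velocity
    mean_e = sum(vx*vx + vy*vy + vz*vz for vx, vy, vz in vs) / (2 * N)   # mean kinetic energy, a stand-in for temperature
    return mean_v, mean_e

# Two entirely different micro-states yield, to within fluctuations of
# order 1/sqrt(N), one and the same macro-state:
m1 = macro_state(random_micro_state())
m2 = macro_state(random_micro_state())
print(m1[1], m2[1])
```

The macro-observer sees only the mean values; the enormous manifold of micro-states sharing them is invisible to him, which is precisely the point of the argument above.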

In order to limit the investigation in this way, we must obviously fix our attention upon all imaginable configurations and velocities of the individual atoms which are compatible with determinate values of the density, the velocity and the temperature of the gas; in other words: we must consider all the micro-states which belong to a determinate macro-state, and must investigate the various kinds of processes which follow, in accordance with the fixed laws of dynamics, from the different micro-states. Now, precise calculation has in every case led to the important result that an enormously large number of these different micro-processes relate to one and the same macro-process, and that only proportionately few of them, distinguished by quite special exceptional conditions concerning the positions and the velocities of neighboring atoms, furnish exceptions. Furthermore, it has also shown that the resulting macro-process is precisely the one which the macroscopic observer recognizes, so that it is compatible with the second law of thermodynamics.
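The overwhelming preponderance of ordinary micro-states over exceptional ones can be shown by the simplest possible counting; this Python sketch assumes a hypothetical vessel of two halves and a mere 100 atoms:

```python
from math import comb

# N atoms, each free to occupy either half of a vessel. A macro-state is
# fixed by the occupation numbers (n, N - n); each assignment of the
# individual atoms is a distinct micro-state, C(N, n) of them in all.
N = 100
total = 2 ** N  # all conceivable micro-states

# Fraction of all micro-states whose division lies within ten atoms
# of perfect equality:
near_uniform = sum(comb(N, n) for n in range(40, 61)) / total
print(near_uniform)
```

Even for so small a number of atoms the fraction is already very close to unity, and it approaches unity the more rapidly the larger N becomes.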

Here, manifestly, the bridge of understanding is supplied. The micro-observer needs only to assimilate in his theory the physical hypothesis that all those special cases in which special exceptional conditions exist among the neighboring configurations of interacting atoms do not occur in nature, or, in other words, that the micro-states are in elementary disorder. Then the uniqueness of the macroscopic process is assured and with it, also, the fulfillment of the principle of increase of entropy in all directions.

Therefore, it is not the atomic distribution, but rather the hypothesis of elementary disorder, which forms the real kernel of the principle of increase of entropy and, therefore, the preliminary condition for the existence of entropy. Without elementary disorder there is neither entropy nor irreversible process. Hence a single atom can never possess an entropy; for we cannot speak of disorder in connection with it. But with a fairly large number of atoms, say $$100$$ or $$1,000$$, the matter is quite different. Here one can certainly speak of disorder, provided the values of the coordinates and the velocity components are distributed among the atoms in accordance with the laws of chance. Then it is possible to calculate the probability of a given state. But how is it with regard to the increase of entropy? May we assert that the motion of $$100$$ atoms is irreversible? Certainly not; but this is only because the state of $$100$$ atoms cannot be defined in a thermodynamic sense, since the process does not proceed in a unique manner from the standpoint of a macro-observer, and this requirement forms, as we have seen above, the foundation and preliminary condition for the definition of a thermodynamic state.

If one therefore asks: How many atoms are at least necessary in order that a process may be considered irreversible?, the answer is: so many atoms that one may form from them definite mean values which define the state in a macroscopic sense. One must reflect that to secure the validity of the principle of increase of entropy there must be added to the condition of elementary disorder still another, namely, that the number of the elements under consideration be sufficiently large to render possible the formation of definite mean values. The second law has a meaning for these mean values only; but for them, it is quite exact, just as exact as the law of the calculus of probability, that the mean value, so far as it may be defined, of a sufficiently large number of throws with a six-sided die, is $$3\tfrac{1}{2}$$.
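The die of the text furnishes the simplest illustration of such a mean value; a short Python sketch (the seed and the numbers of throws are arbitrary choices):

```python
import random

random.seed(0)

def mean_of_throws(n):
    """Mean value of n throws of a fair six-sided die."""
    return sum(random.randint(1, 6) for _ in range(n)) / n

# A handful of throws fluctuates widely; a sufficiently large number
# yields a definite mean value, approaching the exact 3 1/2.
for n in (10, 1000, 100000):
    print(n, mean_of_throws(n))
```

For the mean so defined the law is quite exact, although no single throw, and no small group of throws, obeys it.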

These considerations are, at the same time, capable of throwing light upon questions such as the following: Does the principle of increase of entropy possess a meaning for the so-called Brownian molecular movement of a suspended particle? Does the kinetic energy of this motion represent useful work or not? The entropy principle is just as little valid for a single suspended particle as for an atom, and therefore is not valid for a few of them, but only when there is so large a number that definite mean values can be formed. That one is able to see the particles and not the atoms makes no material difference; because the progress of a process does not depend upon the power of an observing instrument. The question with regard to useful work plays no rôle in this connection; strictly speaking, this possesses, in general, no objective physical meaning. For it does not admit of an answer without reference to the scheme of the physicist or technician who proposes to make use of the work in question. The second law, therefore, has fundamentally nothing to do with the idea of useful work (cf. first lecture).

But, if the entropy principle is to hold, a further assumption is necessary, concerning the various disordered elements,—an assumption which tacitly is commonly made and which we have not previously definitely expressed. It is, however, not less important than those referred to above. The elements must actually be of the same kind, or they must at least form a number of groups of like kind, e. g., constitute a mixture in which each kind of element occurs in large numbers. For only through the similarity of the elements does it come about that order and law can result in the larger from the smaller. If the molecules of a gas be all different from one another, the properties of a gas can never show so simple a law-abiding behavior as that which is indicated by thermodynamics. In fact, the calculation of the probability of a state presupposes that all complexions which correspond to the state are a priori equally likely. Without this condition one is just as little able to calculate the probability of a given state as, for instance, the probability of a given throw with dice whose sides are unequal in size. In summing up we may therefore say: the second law of thermodynamics in its objective physical conception, freed from anthropomorphism, relates to certain mean values which are formed from a large number of disordered elements of the same kind.
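The dependence of the counting rule upon equally likely complexions may likewise be illustrated; in this Python sketch the particular loading of the die is a purely hypothetical choice:

```python
from itertools import product

faces = range(1, 7)

def p_total(total, weights):
    """P(two independent dice sum to `total`) for given face weights."""
    z = sum(weights.values()) ** 2
    return sum(weights[a] * weights[b]
               for a, b in product(faces, repeat=2) if a + b == total) / z

fair = {f: 1 for f in faces}
loaded = {f: (3 if f == 6 else 1) for f in faces}  # a die with unequal sides

print(p_total(4, fair))    # 3/36: counting complexions suffices
print(p_total(4, loaded))  # counting alone no longer gives the probability
```

For the fair die the probability is simply the number of complexions divided by 36; for the loaded die the complexions are no longer a priori equally likely, and mere counting fails, just as the text asserts.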

The validity of the principle of increase of entropy and of the irreversible progress of thermodynamic processes in nature is completely assured in this formularization. After the introduction of the hypothesis of elementary disorder, the microscopic observer can no longer confidently assert that each process considered by him in a collection of atoms is reversible; for the motion occurring in the reverse order will not always obey the requirements of that hypothesis. In fact, the motions of single atoms are always reversible, and thus far one may say, as before, that the irreversible processes appear reduced to a reversible process, but the phenomenon as a whole is nevertheless irreversible, because upon reversal the disorder of the numerous individual elementary processes would be eliminated. Irreversibility is inherent, not in the individual elementary processes themselves, but solely in their irregular constitution. It is this only which guarantees the unique change of the macroscopic mean values.

Thus, for example, the reverse progress of a frictional process is impossible, in that it would presuppose elementary arrangement of interacting neighboring molecules. For the collisions between any two molecules must thereby possess a certain distinguishing character, in that the velocities of two colliding molecules depend in a definite way upon the place at which they meet. In this way only can it happen that in collisions like directed velocities ensue and, therefore, visible motion.

Previously we have only referred to the principle of elementary disorder in its application to the atomic theory of matter. But it may also be assumed as valid, as I wish to indicate at this point, on quite the same grounds as those holding in the case of matter, for the theory of radiant heat. Let us consider, e. g., two bodies at different temperatures between which exchange of heat occurs through radiation. We can in this case also imagine a microscopic observer, as opposed to the ordinary macroscopic observer, who possesses insight into all the particulars of electromagnetic processes which are connected with emission and absorption, and the propagation of heat rays. The microscopic observer would declare the whole process reversible because all electrodynamic processes can also take place in the reverse direction, and the contradiction may here be referred back to a difference in definition of the state of a heat ray. Thus, while the macroscopic observer completely defines a monochromatic ray through direction, state of polarization, color, and intensity, the microscopic observer, in order to possess a complete knowledge of an electromagnetic state, necessarily requires the specification of all the numerous irregular variations of amplitude and phase to which the most homogeneous heat ray is actually subject. That such irregular variations actually exist follows immediately from the well known fact that two rays of the same color never interfere, except when they originate in the same source of light. But until these fluctuations are given in all particulars, the micro-observer can say nothing with regard to the progress of the process. He is also unable to specify whether the exchange of heat radiation between the two bodies leads to a decrease or to an increase of their difference in temperature. The principle of elementary disorder first furnishes the adequate criterion of the tendency of the radiation process, i. 
e., the warming of the colder body at the expense of the warmer, just as the same principle conditions the irreversibility of exchange of heat through conduction. However, in the two cases compared, there is indicated an essential difference in the kind of the disorder. While in heat conduction the disordered elements may be represented as associated with the various molecules, in heat radiation there are the numerous vibration periods, connected with a heat ray, among which the energy of radiation is irregularly distributed. In other words: the disorder among the molecules is a material one, while in heat radiation it is one of energy distribution. This is the most important difference between the two kinds of disorder; a common feature exists as regards the great number of uncoordinated elements required. Just as the entropy of a body is defined as a function of the macroscopic state, only when the body contains so many atoms that from them definite mean values may be formed, so the entropy principle only possesses a meaning with regard to a heat ray when the ray comprehends so many periodic vibrations, i. e., persists for so long a time, that a definite mean value for the intensity of the ray may be obtained from the successive irregular fluctuating amplitudes.

Now, after the principle of elementary disorder has been introduced and accepted by us as valid throughout nature, the fundamental question arises as to the calculation of the probability of a given state, and the actual derivation of the entropy therefrom. From the entropy all the laws of thermodynamic states of equilibrium, for material substances, and also for energy radiation, may be uniquely derived. With regard to the connection between entropy and probability, this is inferred very simply from the law that the probability of two independent configurations is represented by the product of the individual probabilities:
$$W = W_{1} \cdot W_{2},$$

while the entropy $$S$$ is represented by the sum of the individual entropies:
$$S = S_{1} + S_{2}.$$

Accordingly, the entropy is proportional to the logarithm of the probability:
$$S = k \log W. \qquad (12)$$

$$k$$ is a universal constant. In particular, it is the same for atomic as for radiation configurations, for there is nothing to prevent us from assuming that the configuration designated by $$1$$ is atomic, while that designated by $$2$$ is a radiation configuration. If $$k$$ has been calculated, say with the aid of radiation measurements, then $$k$$ must have the same value for atomic processes. Later we shall follow this procedure, in order to utilize the laws of heat radiation in the kinetic theory of gases. Now, there remains, as the last and most difficult part of the problem, the calculation of the probability $$W$$ of a given physical configuration in a given macroscopic state. We shall treat today, by way of preparation for the quite general problem to follow, the simple problem: to specify the probability of a given state for a single moving material point, subject to given conservative forces. Since the state depends upon $$6$$ variables: the $$3$$ generalized coordinates $$\varphi_{1}$$, $$\varphi_{2}$$, $$\varphi_{3}$$, and the three corresponding velocity components $$\dot{\varphi}_{1}$$, $$\dot{\varphi}_{2}$$, $$\dot{\varphi}_{3}$$, and since all possible values of these $$6$$ variables constitute a continuous manifold, the probability sought is that these $$6$$ quantities shall lie respectively within certain infinitely small intervals; or, if one thinks of these $$6$$ quantities as the rectilinear orthogonal coordinates of a point in an ideal six-dimensional space, that this ideal “state point” shall fall within a given, infinitely small “state domain.” Since the domain is infinitely small, the probability will be proportional to the magnitude of the domain, and therefore proportional to
 * $$\begin{align}&{\color{White}.(00)}\qquad&&

\int d\varphi_{1} \cdot d\varphi_{2} \cdot d\varphi_{3} \cdot d\dot{\varphi}_{1} \cdot d\dot{\varphi}_{2} \cdot d\dot{\varphi}_{3}. \end{align}$$

But this expression cannot serve as an absolute measure of the probability, because in general it changes in magnitude with the time, if each state point moves in accordance with the laws of motion of material points, while the probability of a state which follows of necessity from another must be the same for the one as for the other. Now, as is well known, another integral, quite similarly formed, may be specified in place of the one above, which possesses the special property of not changing in value with the time. It is only necessary to employ, in addition to the generalized coordinates $$\varphi_{1}$$, $$\varphi_{2}$$, $$\varphi_{3}$$, the three so-called momenta $$\psi_{1}$$, $$\psi_{2}$$, $$\psi_{3}$$, in place of the three velocities $$\dot{\varphi}_{1}$$, $$\dot{\varphi}_{2}$$, $$\dot{\varphi}_{3}$$, as the determining coordinates of the state. These are defined in the following way:


 * $$\begin{align}&{\color{White}.(00)}\qquad&&

\psi_{1} = \left(\frac{\partial H}{\partial \dot{\varphi}_{1}}\right)_{\varphi},\quad \psi_{2} = \left(\frac{\partial H}{\partial \dot{\varphi}_{2}}\right)_{\varphi},\quad \psi_{3} = \left(\frac{\partial H}{\partial \dot{\varphi}_{3}}\right)_{\varphi}, \end{align}$$ wherein $$H$$ denotes the kinetic potential (Helmholtz). Then, in Hamiltonian form, the equations of motion are:


 * $$\begin{align}&{\color{White}.(00)}\qquad&&

\dot{\psi}_{1} = \frac{d\psi_{1}}{dt} = -\left(\frac{\partial E}{\partial \varphi_{1}}\right)_{\psi},\ \cdots,\quad \dot{\varphi}_{1} = \frac{d\varphi_{1}}{dt} = \left(\frac{\partial E}{\partial \psi_{1}}\right)_{\varphi},\ \cdots, \end{align}$$ ($$E$$ is the energy), and from these equations follows the “condition of incompressibility”:
 * $$\begin{align}&{\color{White}.(00)}\qquad&&

\frac{\partial \dot{\varphi}_{1}}{\partial \varphi_{1}} + \frac{\partial \dot{\psi}_{1}}{\partial \psi_{1}} + \cdots = 0. \end{align}$$ Referring to the six-dimensional space represented by the coordinates $$\varphi_{1}$$, $$\varphi_{2}$$, $$\varphi_{3}$$, $$\psi_{1}$$, $$\psi_{2}$$, $$\psi_{3}$$, this equation states that the magnitude of an arbitrarily chosen state domain, viz.:


 * $$\begin{align}&{\color{White}.(00)}\qquad&&

\int d\varphi_{1} \cdot d\varphi_{2} \cdot d\varphi_{3} \cdot d\psi_{1} \cdot d\psi_{2} \cdot d\psi_{3} \end{align}$$ does not change with the time, when each point of the domain changes its position in accordance with the laws of motion of material points. Accordingly, the magnitude of this domain may be taken as a direct measure of the probability that the state point falls within it.
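This invariance of the phase-space domain (Liouville's theorem) can be illustrated numerically. The sketch below — a minimal modern illustration, not part of the original lecture, with all parameter values chosen arbitrarily — takes a system with a single coordinate $$\varphi$$ and momentum $$\psi$$, the plane pendulum, integrates three neighboring state points under Hamilton's equations, and checks that the area of the small triangle they span in the $$(\varphi, \psi)$$ plane remains essentially unchanged.

```python
import math

# Plane pendulum with energy E = psi**2/(2*M*L*L) - M*G*L*cos(phi).
# Parameter values are arbitrary, chosen only for this demonstration.
M, L, G = 1.0, 1.0, 9.81

def deriv(phi, psi):
    """Hamilton's equations: d(phi)/dt = dE/dpsi, d(psi)/dt = -dE/dphi."""
    return psi / (M * L * L), -M * G * L * math.sin(phi)

def rk4_step(phi, psi, h):
    """One classical fourth-order Runge-Kutta step of size h."""
    k1 = deriv(phi, psi)
    k2 = deriv(phi + 0.5 * h * k1[0], psi + 0.5 * h * k1[1])
    k3 = deriv(phi + 0.5 * h * k2[0], psi + 0.5 * h * k2[1])
    k4 = deriv(phi + h * k3[0], psi + h * k3[1])
    phi += h * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6
    psi += h * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6
    return phi, psi

def triangle_area(p, q, r):
    """Area of the triangle spanned by three state points in the (phi, psi) plane."""
    return 0.5 * abs((q[0] - p[0]) * (r[1] - p[1]) - (r[0] - p[0]) * (q[1] - p[1]))

# Three neighboring state points spanning a small domain in the (phi, psi) plane.
pts = [(0.5, 0.0), (0.5 + 1e-3, 0.0), (0.5, 1e-3)]
a0 = triangle_area(*pts)

h = 0.001
for _ in range(5000):  # integrate each point forward to t = 5
    pts = [rk4_step(phi, psi, h) for phi, psi in pts]

a1 = triangle_area(*pts)
print(a0, a1)  # the two areas agree closely, as Liouville's theorem requires
```

The triangle is sheared and rotated by the motion, but its area is preserved (up to the integration error and the finite size of the triangle); the same experiment performed in the $$(\varphi, \dot{\varphi})$$ plane would, for general coordinates, show no such invariance.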

From the last expression, which can easily be generalized to the case of an arbitrary number of variables, we shall later calculate the probability of a thermodynamic state, for radiant energy as well as for material substances.
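The role of the logarithm in equation (12) can be checked with a short numerical sketch — not part of the original lecture, with arbitrarily chosen illustration values: the product rule for the probabilities of two independent configurations becomes, under $$S = k \log W$$, exactly the sum rule for their entropies.

```python
import math

# Boltzmann's relation S = k log W turns the product rule for the
# probabilities of independent configurations, W = W1 * W2, into the
# sum rule S = S1 + S2. The weights below are arbitrary illustration
# values (Planck's W is a thermodynamic weight, so it may exceed 1).
k = 1.380649e-23  # Boltzmann's constant in J/K (modern value)

def entropy(w):
    return k * math.log(w)

w1, w2 = 3.0e10, 7.0e12   # probabilities (weights) of the two configurations
w = w1 * w2               # weight of the combined configuration

s1, s2, s = entropy(w1), entropy(w2), entropy(w)
print(abs(s - (s1 + s2)))  # vanishes up to floating-point rounding
```

Any relation other than the logarithmic one would fail this check, which is precisely the argument by which the form of equation (12) is fixed.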