Erwin Schrödinger is perhaps the most complex figure in twentieth-century discussions of quantum mechanical uncertainty, ontological chance, and the statistical interpretation of quantum mechanics.
In his early career, Schrödinger was a great exponent of fundamental chance in the universe. He followed his teacher Franz S. Exner, who was himself a colleague of the great Ludwig Boltzmann at the University of Vienna. Boltzmann used intrinsic randomness in molecular collisions (molecular chaos) to derive the increasing entropy of the Second Law of Thermodynamics. The macroscopic irreversibility of entropy increase depends on Boltzmann's molecular chaos, which in turn depends on randomness in the microscopic collisions.
Before the twentieth century, most physicists, mathematicians, and philosophers believed that the chance described by the calculus of probabilities was actually completely determined. The "bell curve" or "normal distribution" of random outcomes was itself so consistent that they argued for underlying deterministic laws governing individual events. They thought that we simply lack the knowledge necessary to make exact predictions for these individual events. Pierre-Simon Laplace was the first to see in his "calculus of probabilities" a universal law that determined the motions of everything from the largest astronomical objects to the smallest particles. In a Laplacian world, there is only one possible future.
On the other hand, in his inaugural lecture at Zurich in 1922, Schrödinger argued that the evidence did not justify our assumption that physical laws are deterministic and strictly causal. His inaugural lecture was modeled on that of Franz Serafin Exner in Vienna in 1908.
"Exner's assertion amounts to this: It is quite possible that Nature's laws are of thoroughly statistical character. The demand for an absolute law in the background of the statistical law — a demand which at the present day almost everybody considers imperative — goes beyond the reach of experience. Such a dual foundation for the orderly course of events in Nature is in itself improbable. The burden of proof falls on those who champion absolute causality, and not on those who question it. For a doubtful attitude in this respect is to-day by far the more natural."
Several years later, Schrödinger presented a paper on "Indeterminism in Physics" to the June 1931 Congress of the Society for Philosophical Instruction in Berlin.
"Fifty years ago it was simply a matter of taste or philosophic prejudice whether the preference was given to determinism or indeterminism. The former was favored by ancient custom, or possibly by an a priori belief. In favor of the latter it could be urged that this ancient habit demonstrably rested on the actual laws which we observe functioning in our surroundings. As soon, however, as the great majority or possibly all of these laws are seen to be of a statistical nature, they cease to provide a rational argument for the retention of determinism.
"If nature is more complicated than a game of chess, a belief to which one tends to incline, then a physical system cannot be determined by a finite number of observations. But in practice a finite number of observations is all that we can make. All that is left to determinism is to believe that an infinite accumulation of observations would in principle enable it completely to determine the system. Such was the standpoint and view of classical physics, which latter certainly had a right to see what it could make of it. But the opposite standpoint has an equal justification: we are not compelled to assume that an infinite number of observations, which cannot in any case be carried out in practice, would suffice to give us a complete determination."
Despite these strong arguments against determinism, just after he completed the wave mechanical formulation of quantum mechanics in June 1926 (the year Exner died), Schrödinger began to side with the determinists, especially Max Planck and Albert Einstein (who in 1916 had discovered that ontological chance is involved in the emission of radiation).
Schrödinger's wave equation is a continuous function that evolves smoothly in time, in sharp contrast to the discrete, discontinuous, and indeterministic "quantum jumps" of the Born-Heisenberg matrix mechanics. His wave equation seemed to Schrödinger to restore the continuous and deterministic nature of classical mechanics and dynamics. It also suggested that we may visualize particles as wave packets moving in spacetime, which was very important to Schrödinger. By contrast, Bohr and Heisenberg and their Copenhagen Interpretation of quantum mechanics insisted that visualization of quantum events is simply not possible. Einstein agreed with Schrödinger that visualization (Anschaulichkeit) should be the goal of describing reality.
Max Born, Werner Heisenberg's mentor and the senior partner in the team that created matrix mechanics, shocked Schrödinger with his interpretation of the wave function as a "probability amplitude."
The motions of particles are indeterministic and probabilistic, even if the equation of motion for the probability is deterministic.
It is true, said Born, that the wave function itself evolves deterministically, but its significance is that it predicts only the probability of finding an atomic particle somewhere. When and where particles would appear - to an observer or to an observing system like a photographic plate - was completely and irreducibly random, he said. Born credited Einstein for the idea that the relationship between waves and particles
is that the waves give us the probability of finding a particle, but this "statistical interpretation" of the wave function came to be known as "Born's Rule."
Einstein had seen clearly for many years that quantum transitions involve chance, that quantum jumps are random, but he did not want to believe it. Although the Schrödinger equation of motion is itself continuous and deterministic, it is impossible to restore continuous deterministic behavior to material particles and return physics to strict causality. Even more than Einstein, Schrödinger hated this idea and never accepted it, despite the great success of quantum mechanics, which today uses Schrödinger's wave functions to calculate Heisenberg's matrix elements for atomic transition probabilities and all atomic properties.
Discouraged, Schrödinger wrote to his friend Wilhelm Wien in August 1926:
"That [discontinuous quantum jumps] offer the greatest conceptual difficulty for the achievement of a classical theory is gradually becoming even more evident to me... [yet] today I no longer like to assume with Born that an individual process of this kind is 'absolutely random,' i.e., completely undetermined. I no longer believe today that this conception (which I championed so enthusiastically four years ago) accomplishes much. From an offprint of Born's work in the Zeitschr. f. Physik I know more or less how he thinks of things: the waves must be strictly causally determined through field laws, the wavefunctions on the other hand have only the meaning of probabilities for the actual motions of light- or material-particles."
Why did Schrödinger not simply welcome Born's absolute chance? It provides strong evidence that Boltzmann's assumption of chance in atomic collisions (molecular disorder) was completely justified. Boltzmann's idea that entropy is statistically irreversible depends on microscopic irreversibility. Exner thought chance is absolute, but did not live to see how fundamental it was to physics. And the early Epicurean idea that atoms sometimes "swerve" could be replaced by the insight that atoms are always swerving randomly - when they interact with other atoms and especially with radiation, as Einstein (reluctantly) found in 1916.
Could it be that senior scientists like Max Planck and Einstein were so delighted with Schrödinger's work that it turned his head? Planck, universally revered as the elder statesman of physics, invited Schrödinger to Berlin to take Planck's chair as the most important lecturer in physics at a German university. And Schrödinger shared Einstein's goal to develop a unified (continuous and deterministic) field theory. Schrödinger won the Nobel prize in 1933. But how different our thinking about absolute chance might have been had perhaps the greatest theoretician of quantum mechanics accepted random quantum jumps in 1926!
In his vigorous debates with Niels Bohr and Werner Heisenberg, Schrödinger attacked the probabilistic Copenhagen interpretation of his wave function with a famous thought experiment (which was actually based on another Einstein suggestion) called Schrödinger's Cat.
Schrödinger was very pleased to read the Einstein-Podolsky-Rosen paper in 1935. He immediately wrote to Einstein in support of an attack on Bohr, Born, and Heisenberg and their "dogmatic" quantum mechanics.
"I was very happy that in the paper just published in P.R. you have evidently caught dogmatic q.m. by the coat-tails... My interpretation is that we do not have a q.m. that is consistent with relativity theory, i.e., with a finite transmission speed of all influences. We have only the analogy of the old absolute mechanics... The separation process is not at all encompassed by the orthodox scheme."
Einstein had said in 1927 at the Solvay conference that nonlocality (faster-than-light signaling between particles in a space-like separation) seemed to violate relativity in the case of a single-particle wave function with non-zero probabilities of finding the particle at more than one place. What instantaneous "action-at-a-distance" prevents particles from appearing at more than one place, Einstein oddly asked. [The answer: one particle becoming two particles never happens in nature; that would violate the most fundamental conservation laws.]
In his 1935 EPR paper, Einstein cleverly introduced two particles instead of one, and a two-particle wave function that describes both particles. The particles are identical, indistinguishable, and with indeterminate positions, although EPR wanted to describe them as widely separated, one "here" and measurable "now" and the other distant and to be measured "later."
Here we must explain the asymmetry that Einstein, and Schrödinger, have mistakenly introduced into a perfectly symmetric situation, making entanglement such a mystery.
Schrödinger challenged Einstein's idea that two systems that had previously interacted can be treated as separated systems, and that a two-particle wave function ψ12 can be factored into a product of separated wave functions for each system, ψ1 and ψ2.
Einstein called this his "separability principle" (Trennungsprinzip). The particles cannot separate until another quantum interaction separates them. Schrödinger published a famous paper defining his idea of "entanglement" in August of 1935. It began:
When two systems, of which we know the states by their respective representatives, enter into temporary physical interaction due to known forces between them, and when after a time of mutual influence the systems separate again, then they can no longer be described in the same way as before, viz. by endowing each of them with a representative of its own. I would not call that one but rather the characteristic trait of quantum mechanics, the one that enforces its entire departure from classical lines of thought. By the interaction the two representatives (or ψ-functions) have become entangled.
To disentangle them we must gather further information by experiment, although we knew as much as anybody could possibly know about all that happened. Of either system, taken separately, all previous knowledge may be entirely lost, leaving us but one privilege: to restrict the experiments to one only of the two systems. After reestablishing one representative by observation, the other one can be inferred simultaneously. In what follows the whole of this procedure will be called the disentanglement.
Attention has recently [viz., EPR] been called to the obvious but very disconcerting fact that even though we restrict the disentangling measurements to one system, the representative obtained for the other system is by no means independent of the particular choice of observations which we select for that purpose and which by the way are entirely arbitrary. It is rather discomforting that the theory should allow a system to be steered or piloted into one or the other type of state at the experimenter's mercy in spite of his having no access to it. This paper does not aim at a solution of the paradox, it rather adds to it, if possible.
They can also be disentangled, or decohered, by interaction with the environment (other particles). An experiment by a human observer is not necessary.
In the following year, Schrödinger looked more carefully at Einstein's assumption that the entangled system could be separated enough to be regarded as two systems with independent wave functions:
Years ago I pointed out that when two systems separate far enough to make it possible to experiment on one of them without interfering with the other, they are bound to pass, during the process of separation, through stages which were beyond the range of quantum mechanics as it stood then. For it seems hard to imagine a complete separation, whilst the systems are still so close to each other, that, from the classical point of view, their interaction could still be described as an unretarded actio in distans. And ordinary quantum mechanics, on account of its thoroughly unrelativistic character, really only deals with the actio in distans case. The whole system (comprising in our case both systems) has to be small enough to be able to neglect the time that light takes to travel across the system, compared with such periods of the system as are essentially involved in the changes that take place...
It seems worth noticing that the paradox could be avoided by a very simple assumption, namely if the situation after separating were described by the expansion [ψ(x, y) = Σk ak gk(x) fk(y), as assumed in EPR], but with the additional statement that the knowledge of the phase relations between the complex constants ak has been entirely lost in consequence of the process of separation.
When some interaction, like a measurement, causes a separation, the two-particle wave function Ψ12 collapses; the system decoheres into the product Ψ1Ψ2, losing the phase relations so there is no more interference, and we have a mixed state rather than a pure state.
This would mean that not only the parts, but the whole system, would be in the situation of a mixture, not of a pure state. It would not preclude the possibility of determining the state of the first system by suitable measurements in the second one or vice versa. But it would utterly eliminate the experimenter's influence on the state of that system which he does not touch.
Schrödinger says that the entangled system may become disentangled (Einstein's separation) and yet some perfect correlations between later measurements might remain. Note that the entangled system could simply decohere as a result of interactions with the environment, as proposed by decoherence theorists. The perfectly correlated results of Bell-inequality experiments might nevertheless be preserved, depending on the interaction.
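Schrödinger's "very simple assumption" — that the phase relations among the expansion coefficients are entirely lost on separation — can be sketched numerically. The toy code below is an illustrative sketch (not taken from Schrödinger's papers): it builds the pure-state density matrix of a two-spin singlet, erases the off-diagonal interference terms, and checks that the purity Tr(ρ²) drops from 1 (pure state) to 1/2 (mixture).

```python
# Illustrative sketch: losing the phase relations turns the pure
# singlet state into a mixed state.

def purity(rho):
    """Tr(rho^2): equals 1 for a pure state, less than 1 for a mixture."""
    n = len(rho)
    return sum(rho[i][k] * rho[k][i] for i in range(n) for k in range(n))

# Singlet |psi> = (|+ -> - |- +>)/sqrt(2) in the basis {++, +-, -+, --}
amp = 1 / 2 ** 0.5
psi = [0.0, amp, -amp, 0.0]

# Pure-state density matrix rho = |psi><psi| (real amplitudes here)
rho_pure = [[psi[i] * psi[j] for j in range(4)] for i in range(4)]

# "Knowledge of the phase relations ... entirely lost": zero out the
# off-diagonal interference terms, leaving a classical mixture
rho_mixed = [[rho_pure[i][j] if i == j else 0.0 for j in range(4)]
             for i in range(4)]

print(round(purity(rho_pure), 12))   # 1.0  (pure state)
print(round(purity(rho_mixed), 12))  # 0.5  (mixture: no interference)
```

The mixture still predicts the perfect anti-correlations (the diagonal weights sit entirely on | + - > and | - + >), which is exactly why Schrödinger thought this assumption could defuse the paradox.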
And of course they will be separated by a measurement of either particle, for example, by Alice or Bob in the case of Bell's Theorem.
Following David Bohm's version of EPR, John Bell considered two spin-1/2 particles in an entangled state with total spin zero. We can rewrite Schrödinger's separation equation above as
| ψ > = (1/√2) | + - > - (1/√2) | - + >
This is a superposition of two states, either of which conserves total spin zero. The minus sign ensures the state is anti-symmetric, changing sign under interchange of identical electrons.
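The anti-symmetry is easy to check mechanically. In this small illustrative snippet (not part of Bell's paper), the singlet amplitudes ψ(s1, s2) are stored in a table, and interchanging the two electrons is verified to flip the sign:

```python
# Singlet amplitudes psi(s1, s2) in the {+1, -1} spin basis (sketch)
amp = 1 / 2 ** 0.5
psi = {(+1, -1): amp, (-1, +1): -amp, (+1, +1): 0.0, (-1, -1): 0.0}

# Anti-symmetry: swapping the two identical electrons flips the sign
assert all(psi[(s2, s1)] == -psi[(s1, s2)] for (s1, s2) in psi)

# Only configurations with total spin zero have non-zero amplitude
assert all(s1 + s2 == 0 for (s1, s2), a in psi.items() if a != 0.0)
```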
Schrödinger does not mention conservation principles, the deep reason for the perfect correlations between various observables, i.e., conservation of mass, energy, momentum, angular momentum, and in Bell's case spin.
Let's assume that Alice makes a measurement of a spin component of one particle, say the x-component. First, her measurement projects the entangled system into either the | + - > or | - + > state. Alice randomly measures + (spin up) or - (spin down). A succession of such measurements produces a bit string with the "true randomness" needed for use as a quantum key in quantum cryptography.
Whenever Alice measures spin up, Bob measures spin down, but that is if and only if he measures the same x-component. If Bob measures at any other angle, the perfect anti-correlation that distributes quantum key pairs to Alice and Bob would be lost.
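A toy simulation makes the key-distribution idea concrete. This is an illustrative sketch only: the classical pseudorandom generator stands in for the genuine quantum randomness of Alice's measurement, and the function name is ours, not from any QKD protocol specification.

```python
import random

def measure_singlet_pair(rng):
    """Both observers measure along the SAME axis: Alice's outcome is
    random (Born rule, 50/50), and total spin zero then forces Bob's
    outcome to be the opposite."""
    alice = rng.choice([+1, -1])  # projects onto |+ -> or |- +>
    return alice, -alice

rng = random.Random(0)  # classical stand-in for quantum randomness
pairs = [measure_singlet_pair(rng) for _ in range(1000)]

# Perfect anti-correlation on every pair ...
assert all(a == -b for a, b in pairs)

# ... so Alice's bits (and their complement, held by Bob) can serve
# as a shared secret key
alice_key = [(a + 1) // 2 for a, _ in pairs]
```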
Bell's inequality was a study of how these perfect correlations fall off as a function of the angle between measurements by Alice and Bob. Bell predicted local hidden variables would produce a linear function of this angle, whereas, he said, quantum mechanics should produce a dependence on the cosine of this angle. As the angle changes, admixtures of other states will be found, for example | + + > in which Bob also measures spin up, or one where Bob detects no particle.
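The two predictions Bell compared can be written down directly. In this sketch (plain Python; the formulas are the standard singlet result, E(θ) = -cos θ, versus the straight-line local hidden-variable model Bell used for comparison):

```python
import math

def quantum_correlation(theta):
    """Singlet expectation value E(a, b) = -cos(theta) for spin
    measurements at relative angle theta."""
    return -math.cos(theta)

def linear_hidden_variable(theta):
    """Bell's straight-line local hidden-variable prediction:
    -1 at theta = 0, rising linearly to +1 at theta = pi."""
    return -1 + 2 * theta / math.pi

# Both agree at theta = 0: perfect anti-correlation
assert quantum_correlation(0) == linear_hidden_variable(0) == -1.0

# Between the endpoints, quantum mechanics is more strongly correlated
theta = math.pi / 4  # 45 degrees
print(round(quantum_correlation(theta), 3))     # -0.707
print(round(linear_hidden_variable(theta), 3))  # -0.5
```

The gap between the cosine and the straight line at intermediate angles is what Bell-inequality experiments measure, and experiment follows the cosine.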
Bell wrote that "Since we can predict in advance the result of measuring any chosen component of σ2, by previously measuring the same component of σ1, it follows that the result of any such measurement must actually be predetermined."
But note that these values were not determined (they did not even exist according to the Copenhagen Interpretation) before Alice's measurement. According to Werner Heisenberg and Pascual Jordan, the spin components are created by the measurements in the x-direction. According to Paul Dirac, Alice's random x-component value depended on "Nature's choice."
In 1952, Schrödinger wrote two influential articles in the British Journal for the Philosophy of Science denying quantum jumping. These papers greatly influenced generations of quantum collapse deniers, including John Bell, John Wheeler, Wojciech Zurek, and H. Dieter Zeh.
On Determinism and Free Will
In Schrödinger's mystical epilogue to his essay What Is Life? (1944), he "proves God and immortality at a stroke" but leaves us in the dark about free will.
As a reward for the serious trouble I have taken to expound the purely scientific aspects of our problem sine ira et studio, I beg leave to add my own, necessarily subjective, view of the philosophical implications.
According to the evidence put forward in the preceding pages the space-time events in the body of a living being which correspond to the activity of its mind, to its self-conscious or any other actions, are (considering also their complex structure and the accepted statistical explanation of physico-chemistry) if not strictly deterministic at any rate statistico-deterministic.
To the physicist I wish to emphasize that in my opinion, and contrary to the opinion upheld in some quarters, quantum indeterminacy plays no biologically relevant role in them, except perhaps by enhancing their purely accidental character in such events as meiosis, natural and X-ray-induced mutation and so on — and this is in any case obvious and well recognized.
For the sake of argument, let me regard this as a fact, as I believe every unbiased biologist would, if there were not the well-known, unpleasant feeling about 'declaring oneself to be a pure mechanism'. For it is deemed to contradict Free Will as warranted by direct introspection.
But immediate experiences in themselves, however various and disparate they be, are logically incapable of contradicting each other. So let us see whether we cannot draw the correct, non-contradictory conclusion from the following two premises:
(i) My body functions as a pure mechanism according to the Laws of Nature.
(ii) Yet I know, by incontrovertible direct experience, that I am directing its motions, of which I foresee the effects, that may be fateful and all-important, in which case I feel and take full responsibility for them.
The only possible inference from these two facts is, I think, that I — I in the widest meaning of the word, that is to say, every conscious mind that has ever said or felt 'I' — am the person, if any, who controls the 'motion of the atoms' according to the Laws of Nature.
Within a cultural milieu (Kulturkreis) where certain conceptions (which once had or still have a wider meaning amongst other peoples) have been limited and specialized, it is daring to give to this conclusion the simple wording that it requires. In Christian terminology to say: 'Hence I am God Almighty' sounds both blasphemous and lunatic. But please disregard these connotations for the moment and consider whether the above inference is not the closest a biologist can get to proving God and immortality at one stroke.
In itself, the insight is not new. The earliest records to my knowledge date back some 2,500 years or more. From the early great Upanishads the recognition ATHMAN = BRAHMAN (the personal self equals the omnipresent, all-comprehending eternal self) was in Indian thought considered, far from being blasphemous, to represent the quintessence of deepest insight into the happenings of the world. The striving of all the scholars of Vedanta was, after having learnt to pronounce with their lips, really to assimilate in their minds this grandest of all thoughts.
Again, the mystics of many centuries, independently, yet in perfect harmony with each other (somewhat like the particles in an ideal gas) have described, each of them, the unique experience of his or her life in terms that can be condensed in the phrase: DEUS FACTUS SUM (I have become God).
To Western ideology the thought has remained a stranger, in spite of Schopenhauer and others who stood for it and in spite of those true lovers who, as they look into each other's eyes, become aware that their thought and their joy are numerically one — not merely similar or identical; but they, as a rule, are emotionally too busy to indulge in clear thinking, in which respect they very much resemble the mystic.
Allow me a few further comments. Consciousness is never experienced in the plural, only in the singular. Even in the pathological cases of split consciousness or double personality the two persons alternate, they are never manifest simultaneously. In a dream we do perform several characters at the same time, but not indiscriminately: we are one of them; in him we act and speak directly, while we often eagerly await the answer or response of another person, unaware of the fact that it is we who control his movements and his speech just as much as our own.
How does the idea of plurality (so emphatically opposed by the Upanishad writers) arise at all? Consciousness finds itself intimately connected with, and dependent on, the physical state of a limited region of matter, the body. (Consider the changes of mind during the development of the body, as puberty, ageing, dotage, etc., or consider the effects of fever, intoxication, narcosis, lesion of the brain and so on.) Now, there is a great plurality of similar bodies. Hence the pluralization of consciousnesses or minds seems a very suggestive hypothesis. Probably all simple, ingenuous people, as well as the great majority of Western philosophers, have accepted it.
It leads almost immediately to the invention of souls, as many as there are bodies, and to the question whether they are mortal as the body is or whether they are immortal and capable of existing by themselves. The former alternative is distasteful, while the latter frankly forgets, ignores or disowns the facts upon which the plurality hypothesis rests. Much sillier questions have been asked: Do animals also have souls? It has even been questioned whether women, or only men, have souls.
Such consequences, even if only tentative, must make us suspicious of the plurality hypothesis, which is common to all official Western creeds. Are we not inclining to much greater nonsense, if in discarding their gross superstitions we retain their naive idea of plurality of souls, but 'remedy' it by declaring the souls to be perishable, to be annihilated with the respective bodies?
The only possible alternative is simply to keep to the immediate experience that consciousness is a singular of which the plural is unknown; that there is only one thing and that what seems to be a plurality is merely a series of different aspects of this one thing, produced by a deception (the Indian MAJA); the same illusion is produced in a gallery of mirrors, and in the same way Gaurisankar and Mt Everest turned out to be the same peak seen from different valleys.
There are, of course, elaborate ghost-stories fixed in our minds to hamper our acceptance of such simple recognition. E.g. it has been said that there is a tree there outside my window but I do not really see the tree. By some cunning device of which only the initial, relatively simple steps are explored, the real tree throws an image of itself into my consciousness, and that is what I perceive. If you stand by my side and look at the same tree, the latter manages to throw an image into your soul as well. I see my tree and you see yours (remarkably like mine), and what the tree in itself is we do not know. For this extravagance Kant is responsible. In the order of ideas which regards consciousness as a singulare tantum it is conveniently replaced by the statement that there is obviously only one tree and all the image business is a ghost-story.
Yet each of us has the indisputable impression that the sum total of his own experience and memory forms a unit, quite distinct from that of any other person. He refers to it as 'I'. What is this 'I'?
If you analyse it closely you will, I think, find that it is just a little bit more than a collection of single data (experiences and memories), namely the canvas upon which they are collected. And you will, on close introspection, find that what you really mean by 'I' is that ground-stuff upon which they are collected. You may come to a distant country, lose sight of all your friends, may all but forget them; you acquire new friends, you share life with them as intensely as you ever did with your old ones. Less and less important will become the fact that, while living your new life, you still recollect the old one. 'The youth that was I', you may come to speak of him in the third person, indeed the protagonist of the novel you are reading is probably nearer to your heart, certainly more intensely alive and better known to you. Yet there has been no intermediate break, no death. And even if a skilled hypnotist succeeded in blotting out entirely all your earlier reminiscences, you would not find that he had killed you. In no case is there a loss of personal existence to deplore.
Nor will there ever be.
NOTE TO THE EPILOGUE
The point of view taken here levels with what Aldous Huxley has recently — and very appropriately — called The Perennial Philosophy. His beautiful book (London, Chatto and Windus, 1946) is singularly fit to explain not only the state of affairs, but also why it is so difficult to grasp and so liable to meet with opposition.
Order, Disorder, and Entropy
Chapter 6 of What Is Life?
'Nec corpus mentem ad cogitandum, nec mens corpus ad motum, neque ad quietem, nec ad aliquid (si quid est) aliud determinare potest.' SPINOZA, Ethics, Pt III, Prop. 2
A REMARKABLE GENERAL CONCLUSION FROM THE MODEL
Let me refer to the phrase on p. 62, in which I tried to explain that the molecular picture of the gene made it at least conceivable that the miniature code should be in one-to-one correspondence with a highly complicated and specified plan of development and should somehow contain the means of putting it into operation. Very well then, but how does it do this? How are we going to turn 'conceivability' into true understanding?
Delbrück's molecular model, in its complete generality, seems to contain no hint as to how the hereditary substance works. Indeed, I do not expect that any detailed information on this question is likely to come from physics in the near future. The advance is proceeding and will, I am sure, continue to do so, from biochemistry under the guidance of physiology and genetics.
No detailed information about the functioning of the genetical mechanism can emerge from a description of its structure so general as has been given above. That is obvious. But, strangely enough, there is just one general conclusion to be obtained from it, and that, I confess, was my only motive for writing this book.
From Delbrück's general picture of the hereditary substance it emerges that living matter, while not eluding the 'laws of physics' as established up to date, is likely to involve 'other laws of physics' hitherto unknown, which, however, once they have been revealed, will form just as integral a part of this science as the former.
Information, not 'other laws of physics', is the key feature distinguishing life from physics and chemistry.
ORDER BASED ON ORDER
This is a rather subtle line of thought, open to misconception in more than one respect. All the remaining pages are concerned with making it clear. A preliminary insight, rough but not altogether erroneous, may be found in the following considerations:
It has been explained in chapter I that the laws of physics, as we know them, are statistical laws. They have a lot to do with the natural tendency of things to go over into disorder.
But, to reconcile the high durability of the hereditary substance with its minute size, we had to evade the tendency to disorder by 'inventing the molecule', in fact, an unusually large molecule which has to be a masterpiece of highly differentiated order, safeguarded by the conjuring rod of quantum theory. The laws of chance are not invalidated by this 'invention', but their outcome is modified. The physicist is familiar with the fact that the classical laws of physics are modified by quantum theory, especially at low temperature. There are many instances of this. Life seems to be one of them, a particularly striking one. Life seems to be orderly and lawful behaviour of matter, not based exclusively on its tendency to go over from order to disorder, but based partly on existing order that is kept up.
To the physicist — but only to him — I could hope to make my view clearer by saying: The living organism seems to be a macroscopic system which in part of its behaviour approaches to that purely mechanical (as contrasted with thermodynamical) conduct to which all systems tend, as the temperature approaches the absolute zero and the molecular disorder is removed.
The non-physicist finds it hard to believe that really the ordinary laws of physics, which he regards as the prototype of inviolable precision, should be based on the statistical tendency of matter to go over into disorder. I have given examples in chapter I. The general principle involved is the famous Second Law of Thermodynamics (entropy principle) and its equally famous statistical foundation. On pp. 69-74 I will try to sketch the bearing of the entropy principle on the large-scale behaviour of a living organism — forgetting at the moment all that is known about chromosomes, inheritance, and so on.
LIVING MATTER EVADES THE DECAY TO EQUILIBRIUM
What is the characteristic feature of life? When is a piece of matter said to be alive? When it goes on 'doing something', moving, exchanging material with its environment, and so forth, and that for a much longer period than we would expect an inanimate piece of matter to 'keep going' under similar circumstances. When a system that is not alive is isolated or placed in a uniform environment, all motion usually comes to a standstill very soon as a result of various kinds of friction; differences of electric or chemical potential are equalized, substances which tend to form a chemical compound do so, temperature becomes uniform by heat conduction. After that the whole system fades away into a dead, inert lump of matter. A permanent state is reached, in which no observable events occur. The physicist calls this the state of thermodynamical equilibrium, or of 'maximum entropy'.
Practically, a state of this kind is usually reached very rapidly. Theoretically, it is very often not yet an absolute equilibrium, not yet the true maximum of entropy. But then the final approach to equilibrium is very slow. It could take anything between hours, years, centuries, ... To give an example – one in which the approach is still fairly rapid: if a glass filled with pure water and a second one filled with sugared water are placed together in a hermetically closed case at constant temperature, it appears at first that nothing happens, and the impression of complete equilibrium is created. But after a day or so it is noticed that the pure water, owing to its higher vapour pressure, slowly evaporates and condenses on the solution. The latter overflows. Only after the pure water has totally evaporated has the sugar reached its aim of being equally distributed among all the liquid water available.
These ultimate slow approaches to equilibrium could never be mistaken for life, and we may disregard them here. I have referred to them in order to clear myself of a charge of inaccuracy.
IT FEEDS ON 'NEGATIVE ENTROPY'
It is by avoiding the rapid decay into the inert state of 'equilibrium' that an organism appears so enigmatic; so much so, that from the earliest times of human thought some special non-physical or supernatural force (vis viva, entelechy) was claimed to be operative in the organism, and in some quarters is still claimed.
How does the living organism avoid decay? The obvious answer is: By eating, drinking, breathing and (in the case of plants) assimilating. The technical term is metabolism. The Greek word (μεταβάλλειν) means change or exchange. Exchange of what? Originally the underlying idea is, no doubt, exchange of material. (E.g. the German for metabolism is Stoffwechsel.) That the exchange of material should be the essential thing is absurd. Any atom of nitrogen, oxygen, sulphur, etc., is as good as any other of its kind; what could be gained by exchanging them? For a while in the past our curiosity was silenced by being told that we feed upon energy. In some very advanced country (I don't remember whether it was Germany or the U.S.A. or both) you could find menu cards in restaurants indicating, in addition to the price, the energy content of every dish. Needless to say, taken literally, this is just as absurd. For an adult organism the energy content is as stationary as the material content. Since, surely, any calorie is worth as much as any other calorie, one cannot see how a mere exchange could help.
It is for this reason (my italics) that we decided to coin the word Ergo for negative entropy (or roughly free energy), and Ergodic for processes that can reduce the entropy locally (transferring away positive entropy to allow the local reduction to become a stable information structure).
What then is that precious something contained in our food which keeps us from death? That is easily answered. Every process, event, happening – call it what you will; in a word, everything that is going on in Nature means an increase of the entropy of the part of the world where it is going on. Thus a living organism continually increases its entropy – or, as you may say, produces positive entropy – and thus tends to approach the dangerous state of maximum entropy, which is death. It can only keep aloof from it, i.e. alive, by continually drawing from its environment negative entropy – which is something very positive as we shall immediately see. What an organism feeds upon is negative entropy. Or, to put it less paradoxically, the essential thing in metabolism is that the organism succeeds in freeing itself from all the entropy it cannot help producing while alive.
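Schrödinger's entropy balance can be sketched as a toy bookkeeping model. Everything here is an illustrative assumption (the rates and units are arbitrary, not physiological data); the point is only the structure of the argument: an organism stays at a stationary entropy level exactly when what it exports matches what it produces.

```python
# Toy entropy bookkeeping, per the passage above: living continually produces
# positive entropy; staying alive ("feeding on negative entropy") means
# exporting enough of it to remain stationary. All rates are arbitrary units.

def internal_entropy(steps, production_rate, export_rate, s0=10.0):
    """Internal entropy after `steps` time steps of produce-then-export."""
    s = s0
    for _ in range(steps):
        s += production_rate          # entropy produced by being alive
        s -= export_rate              # entropy shed to the environment
    return s

stationary = internal_entropy(1000, production_rate=0.5, export_rate=0.5)
isolated = internal_entropy(1000, production_rate=0.5, export_rate=0.0)
print(stationary)  # stays at 10.0: export matches production
print(isolated)    # climbs to 510.0: the approach to maximum entropy (death)
```

The second run is the organism that fails to free itself from the entropy it cannot help producing: its internal entropy grows without bound toward the "dangerous state of maximum entropy".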
WHAT IS ENTROPY?
What is entropy? Let me first emphasize that it is not a hazy concept or idea, but a measurable physical quantity just like the length of a rod, the temperature at any point of a body, the heat of fusion of a given crystal or the specific heat of any given substance. At the absolute zero point of temperature (roughly −273°C) the entropy of any substance is zero. When you bring the substance into any other state by slow, reversible little steps (even if thereby the substance changes its physical or chemical nature or splits up into two or more parts of different physical or chemical nature) the entropy increases by an amount which is computed by dividing every little portion of heat you had to supply in that procedure by the absolute temperature at which it was supplied – and by summing up all these small contributions. To give an example, when you melt a solid, its entropy increases by the amount of the heat of fusion divided by the temperature at the melting-point. You see from this, that the unit in which entropy is measured is cal./°C (just as the calorie is the unit of heat or the centimetre the unit of length).
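Schrödinger's melting example can be checked numerically. A minimal sketch, using standard handbook values for ice (the figures are assumptions for illustration, not taken from the text):

```python
# Entropy change on melting, per the rule above: the heat supplied divided by
# the absolute temperature at which it is supplied.

HEAT_OF_FUSION = 79.7      # cal per gram of ice (approximate handbook value)
MELTING_POINT = 273.15     # absolute temperature of melting, in kelvin

delta_s = HEAT_OF_FUSION / MELTING_POINT   # in cal./°C per gram
print(f"{delta_s:.3f} cal/K per gram")     # about 0.292
```

The result carries exactly the unit Schrödinger names, cal./°C, since a calorie of heat is divided by an absolute temperature.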
THE STATISTICAL MEANING OF ENTROPY
I have mentioned this technical definition simply in order to remove entropy from the atmosphere of hazy mystery that frequently veils it. Much more important for us here is the bearing on the statistical concept of order and disorder, a connection that was revealed by the investigations of Boltzmann and Gibbs in statistical physics. This too is an exact quantitative connection, and is expressed by
entropy = k log D,
where k is the so-called Boltzmann constant (3.2983 × 10⁻²⁴ cal./°C), and D a quantitative measure of the atomistic disorder of the body in question. To give an exact explanation of this quantity D in brief non-technical terms is well-nigh impossible. The disorder it indicates is partly that of heat motion, partly that which consists in different kinds of atoms or molecules being mixed at random, instead of being neatly separated, e.g. the sugar and water molecules in the example quoted above. Boltzmann's equation is well illustrated by that example. The gradual 'spreading out' of the sugar over all the water available increases the disorder D, and hence (since the logarithm of D increases with D) the entropy. It is also pretty clear that any supply of heat increases the turmoil of heat motion, that is to say, increases D and thus increases the entropy; it is particularly clear that this should be so when you melt a crystal, since you thereby destroy the neat and permanent arrangement of the atoms or molecules and turn the crystal lattice into a continually changing random distribution.
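Boltzmann's relation can be illustrated with a crude count of arrangements: take D to be the number of ways of placing the sugar molecules among the available sites, so that spreading into more water raises D and hence the entropy. The binomial counting and the molecule numbers below are assumptions for illustration only, not Schrödinger's own definition of D:

```python
import math

K_CAL_PER_K = 3.2983e-24   # Boltzmann constant in cal./°C, as quoted above

def log_D(n_molecules, n_sites):
    """Log of the number of ways to place n_molecules among n_sites
    (log of the binomial coefficient, via lgamma for very large numbers)."""
    return (math.lgamma(n_sites + 1)
            - math.lgamma(n_molecules + 1)
            - math.lgamma(n_sites - n_molecules + 1))

n = 1e22  # number of sugar molecules (illustrative)
s_confined = K_CAL_PER_K * log_D(n, 2 * n)    # sugar confined to little water
s_spread = K_CAL_PER_K * log_D(n, 10 * n)     # sugar spread through all water
print(s_spread > s_confined)  # True: spreading out raises D, hence the entropy
```

Since the logarithm of D increases with D, the entropy k log D of the spread-out state exceeds that of the confined one, which is the 'spreading out' argument in the paragraph above in numerical form.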
An isolated system or a system in a uniform environment (which for the present consideration we do best to include as a part of the system we contemplate) increases its entropy and more or less rapidly approaches the inert state of maximum entropy. We now recognize this fundamental law of physics to be just the natural tendency of things to approach the chaotic state (the same tendency that the books of a library or the piles of papers and manuscripts on a writing desk display) unless we obviate it. (The analogue of irregular heat motion, in this case, is our handling those objects now and again without troubling to put them back in their proper places.)
ORGANIZATION MAINTAINED BY EXTRACTING 'ORDER' FROM THE ENVIRONMENT
How would we express in terms of the statistical theory the marvellous faculty of a living organism, by which it delays the decay into thermodynamical equilibrium (death)? We said before: 'It feeds upon negative entropy', attracting, as it were, a stream of negative entropy upon itself, to compensate the entropy increase it produces by living and thus to maintain itself on a stationary and fairly low entropy level.
If D is a measure of disorder, its reciprocal, 1/D, can be regarded as a direct measure of order. Since the logarithm of 1/D is just minus the logarithm of D, we can write Boltzmann's equation thus:
– (entropy) = k log (1/D).
Hence the awkward expression 'negative entropy' can be replaced by a better one: entropy, taken with the negative sign, is itself a measure of order. Thus the device by which an organism maintains itself stationary at a fairly high level of orderliness ( = fairly low level of entropy) really consists in continually sucking orderliness from its environment. This conclusion is less paradoxical than it appears at first sight. Rather could it be blamed for triviality. Indeed, in the case of higher animals we know the kind of orderliness they feed upon well enough, viz. the extremely well-ordered state of matter in more or less complicated organic compounds, which serve them as foodstuffs. After utilizing it they return it in a very much degraded form — not entirely degraded, however, for plants can still make use of it. (These, of course, have their most powerful supply of 'negative entropy' in the sunlight.)
Information is still a better word for negative entropy, as shown by Claude Shannon a few years after Schrödinger. And cosmic information creation is required to produce the sunlight.
NOTE TO CHAPTER 6
Some important papers by Schrödinger:
What Is A Law Of Nature? (1922 Inaugural Lecture at University of Zurich)
Indeterminism In Physics (1931 Lecture to Society for Philosophical Instruction, Berlin)
Discussion of Probability Relations between Separated Systems (Entanglement Paper), Mathematical Proceedings of the Cambridge Philosophical Society 1935, 31, issue 4, pp. 555-563
The Present Situation in Quantum Mechanics ("Cat" Paper), translation of Die Naturwissenschaften 1935, 23, Issue 48: original German Part I, Part II, Part III
Indeterminism and Free Will (Nature, Vol 138, No 3479, July 4, 1936, pp. 13-14)
Probability Relations between Separated Systems, Mathematical Proceedings of the Cambridge Philosophical Society 1936, 32, issue 2, pp. 446-452
Excerpts from What Is Life?: Chapter 3, Mutations, Chapter 4, Quantum-Mechanical Evidence, Chapter 7, Is Life Based on the Laws of Physics? (PDFs)
Are There Quantum Jumps?, Part I (The British Journal for the Philosophy of Science 3.10 (1952): 109-123)
Are There Quantum Jumps?, Part II (The British Journal for the Philosophy of Science 3.11 (1952): 233-242)