John Stewart Bell
In 1964 John Bell analyzed David Bohm's 1952 suggestion that "hidden variables" added to the 1935 "thought experiments" of Einstein, Podolsky, and Rosen (EPR) could turn them into real experiments. Bell put limits on local "hidden variables" in the form of what he called an "inequality," the violation of which would confirm standard quantum mechanics.

Some thinkers, mostly philosophers of science rather than working quantum physicists, believe that the work of Bohm and Bell restored the determinism in physics that Einstein had wanted, and that Bohm and/or Bell discovered the "local elements of reality" that Einstein hoped for in EPR. But Bell himself came to the conclusion that local "hidden variables" will never be found that give the same results as quantum mechanics. This has come to be known as Bell's Theorem. All theories that reproduce the predictions of quantum mechanics will be "nonlocal," Bell concluded. Nonlocality is an element of physical reality, and it has produced some remarkable new applications of quantum physics, including quantum cryptography and quantum computing.

Bohm proposed an improvement on the original EPR experiment (which measured the continuous variables position and momentum). Bohm's reformulation of quantum mechanics postulates (undetectable) deterministic positions and trajectories for atomic particles, where the instantaneous collapse happens in a new "quantum potential" field that can move faster than light speed. But it is still a "nonlocal" theory. So Bohm (and Bell) believed that nonlocal "hidden variables" might exist, and that new information can come into existence at remote "space-like separations" at speeds faster than light, if not instantaneously. This is the idea of entanglement.

The original EPR paper was based on a question of Einstein's about two electrons fired in opposite directions from a central source with equal velocities.
Einstein imagined them starting from a distance at t0 and approaching one another with high velocities, then for a short time interval from t1 to t1 + Δt in contact with one another, where experimental measurements could be made on the momenta, after which they separate. Now at a later time t2 it would be possible to make a measurement of electron 1's position, and one would therefore know the position of electron 2 without measuring it explicitly. Einstein used the conservation of linear momentum to "know" the symmetric position of the other electron. This knowledge implies information about the remote electron that is available instantly. Einstein called this "spooky action-at-a-distance." It might better be called "knowledge-at-a-distance."

Bohm and his colleague Yakir Aharonov in 1957 proposed a new EPR-like thought experiment using two electrons that are prepared in an initial state of known total spin zero. Instead of measuring the continuous variables position and momentum as in EPR, Bohm measures the discrete property of electron spin. If one electron spin is 1/2 in the up direction and the other is spin down, or -1/2, the total spin is zero. The underlying physical law of importance is still a conservation law, in this case the conservation of spin angular momentum.
ψ12 = (1/√2) [ ψ+(1) ψ-(2) - ψ-(1) ψ+(2) ]     (2)

We can simplify the notation
| ψ12 > = (1/√2) | + - >  -  (1/√2) | - + >     (2a)

Note that this combination preserves the total electron spin as zero and offers no preferred spatial direction. Note also that under exchange of the two indistinguishable fermions, the antisymmetric wave function changes its sign, hence the minus sign in the equations above. The spherical symmetry is broken when one observer (freely) chooses a direction in which to measure a spin component of either particle. Erwin Schrödinger described this moment as "disentangling" the particles.

In his 1964 paper "On the Einstein-Podolsky-Rosen Paradox," Bell made the case for nonlocality.
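The two properties just noted, zero total spin and antisymmetry under exchange, can be checked numerically. The following is a minimal sketch (not from the original text) using NumPy, representing | + > and | - > as basis vectors and the singlet state (2a) in the product basis {|++>, |+->, |-+>, |-->}:

```python
import numpy as np

# Single-spin basis states |+> and |->
up = np.array([1.0, 0.0])
down = np.array([0.0, 1.0])

# Singlet state (2a): (1/sqrt2)(|+ -> - |- +>)
singlet = (np.kron(up, down) - np.kron(down, up)) / np.sqrt(2)

# Total z-spin operator Sz = sz(1) + sz(2), in units of hbar/2
sz = np.diag([1.0, -1.0])
I2 = np.eye(2)
Sz_total = np.kron(sz, I2) + np.kron(I2, sz)

# The singlet is an eigenstate of total Sz with eigenvalue 0 (total spin zero)
print(Sz_total @ singlet)          # all components zero

# Exchanging the two indistinguishable fermions flips the sign (antisymmetry)
swap = np.array([[1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]], dtype=float)
print(np.allclose(swap @ singlet, -singlet))   # True
```

The same check works along any axis, since the singlet is rotationally invariant.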
The paradox of Einstein, Podolsky and Rosen was advanced as an argument that quantum mechanics could not be a complete theory but should be supplemented by additional variables. These additional variables were to restore to the theory causality and locality. In this note that idea will be formulated mathematically and shown to be incompatible with the statistical predictions of quantum mechanics. It is the requirement of locality, or more precisely that the result of a measurement on one system be unaffected by operations on a distant system with which it has interacted in the past, that creates the essential difficulty. There have been attempts to show that even without such a separability or locality requirement no 'hidden variable' interpretation of quantum mechanics is possible. These attempts have been examined [by Bell] elsewhere and found wanting. Moreover, a hidden variable interpretation of elementary quantum theory has been explicitly constructed [by Bohm]. That particular interpretation has indeed a gross non-local structure. This is characteristic, according to the result to be proved here, of any such theory which reproduces exactly the quantum mechanical predictions. With the example advocated by Bohm and Aharonov, the EPR argument is the following. Consider a pair of spin one-half particles formed somehow in the singlet spin state and moving freely in opposite directions. Measurements can be made, say by Stern-Gerlach magnets, on selected components of the spins σ1 and σ2. If measurement of the component σ1 • a, where a is some unit vector, yields the value + 1 then, according to quantum mechanics, measurement of σ2 • a must yield the value — 1 and vice versa. [Here Bell is conserving total spin.] Now we make the hypothesis, and it seems one at least worth considering, that if the two measurements are made at places remote from one another the orientation of one magnet does not influence the result obtained with the other. 
Since we can predict in advance the result of measuring any chosen component of σ2, by previously measuring the same component of σ1, it follows that the result of any such measurement must actually be predetermined. Since the initial quantum mechanical wave function does not determine the result of an individual measurement, this predetermination implies the possibility of a more complete specification of the state.

Bell describes explicitly how the "measurement of the component σ1 • a, where a is some unit vector, yields the value + 1 then, according to quantum mechanics, measurement of σ2 • a must yield the value — 1 and vice versa." He also says "since we can predict in advance the result of measuring any chosen component of σ2, by previously measuring the same component of σ1, it follows that the result of any such measurement must actually be predetermined." But Schrödinger, who knew more about two-particle wave functions than anyone, explains that while the two particles are entangled (with total spin 0), any measurement disentangles them, while conserving the total spin of zero in the measurement direction. If Alice measures the electron spin of particle 1 in the x-direction as +ℏ/2, then Bob will measure a perfectly anti-correlated -ℏ/2 for particle 2. Note that since it was quantum random whether the two-particle state would be projected into | + - > or into | - + >, successive measurements by Alice and Bob will generate two perfectly anti-correlated strings of + and - (or 0 and 1). This is exactly what is needed for the keys used in quantum cryptography. Each individual string is a sequence of independent and identically distributed random variables. And the strings have been generated at separated locations without ever passing over a communications channel that could be eavesdropped, the ideal for quantum key distribution (QKD).
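The statistics of those two strings can be mimicked with a simple classical simulation. This sketch only illustrates the bookkeeping (each outcome random, each pair perfectly anti-correlated); it models no physical mechanism, and the seed and sample size are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1000

# Alice's results: quantum random 0/1 (which branch, |+ -> or |- +>, collapses)
alice = rng.integers(0, 2, size=n)

# Bob's results are perfectly anti-correlated with Alice's
bob = 1 - alice

# Every pair is opposite: one 0 and one 1, conserving total "spin"
print(np.all(alice + bob == 1))        # True

# Bob flips his bits to hold a key identical to Alice's
key_bob = 1 - bob
print(np.array_equal(alice, key_bob))  # True: a shared key at separated locations
```

Each string alone passes as uniform random noise; only the comparison of the two reveals the correlation, which is why an eavesdropper gains nothing from either stream separately.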
A decade later, Bell titled his 1976 review of the first experimental tests of his predicted inequalities "Einstein-Podolsky-Rosen Experiments." He described his talk as about the "foundations of quantum mechanics," and it was the early days of a movement by a few scientists and many philosophers of science to challenge the "orthodox" quantum mechanics. They particularly attacked the Copenhagen Interpretation, with its notorious speculations about the role of the "conscious observer" and its attacks on physical reality, especially the claim that objects have no properties until they are measured.

From the earliest presentations of the ideas of the supposed "founders" of quantum mechanics in the late 1920s, Einstein had deep misgivings about the work going on in Copenhagen, although he never doubted the calculating power of their new mathematical methods, and he came to accept the statistical (indeterministic) nature of quantum physics, which he himself had reluctantly discovered in his 1916 study of the atomic emission of light quanta. He described their work as "incomplete" because it is based on the statistical results of many experiments and so can only make probabilistic predictions about individual experiments. Nevertheless, Einstein hoped to visualize what is going on in an underlying "objective reality."

Bell was deeply sympathetic to Einstein's hopes for a return to the "local reality" of classical physics. He identified the EPR paper's title, "Can Quantum-Mechanical Description of Physical Reality Be Considered Complete?" as a search for new variables to provide the completeness. Bell thought David Bohm's "hidden variables" were one way to achieve this, though Einstein had called Bohm's approach "too cheap," probably because Bohm included "quantum potentials" traveling faster than light speed, an obvious violation of Einstein's special theory of relativity. In his 1976 review, Bell wrote...
I have been invited to speak on "foundations of quantum mechanics"... The area in question is that of Einstein, Podolsky, and Rosen. Suppose for example, that protons of a few MeV energy are incident on a hydrogen target. Occasionally one will scatter, causing a target proton to recoil. Suppose (Fig. 1) that we have counter telescopes T1 and T2 which register when suitable protons are going towards distant counters C1 and C2. With ideal arrangements, registering of both T1 and T2 will then imply registering of both C1 and C2 after appropriate time delays. Suppose next that C1 and C2 are preceded by filters that pass only particles of given polarization, say those with spin projection +1 along the z axis. Then one or both of C1 and C2 may fail to register. Indeed for protons of suitable energy one and only one of these counters will register on almost every suitable occasion — i.e., those occasions certified as suitable by telescopes T1 and T2. This is because proton-proton scattering at large angle and low energy, say a few MeV, goes mainly in S wave. But the antisymmetry of the final wave function then requires the antisymmetric singlet spin state. In this state, when one spin is found "up" the other is found "down". This follows formally from the quantum expectation value...

Since Bell's original work, many other physicists have defined other "Bell inequalities" and developed increasingly sophisticated experiments to test them. Most recent tests have used oppositely polarized photons coming from a central source. Here, it is the total photon spin of zero that is conserved.
A variant of EPR’s argument was given by Bohm and Aharonov (1957), formulated in terms of discrete states. He considered a pair of spatially separated spin-1/2 particles produced somehow in a singlet state, for example, by dissociation of the spin-0 system... Suppose that one measures the spin of particle 1 along the x axis. The outcome is not predetermined by the description [wave function] Ψ12. But from it, one can predict that if particle 1 is found to have its spin parallel to the x axis, then particle 2 will be found to have its spin antiparallel to the x axis if the x component of its spin is also measured. Thus, an experimenter can arrange the apparatus in such a way that he can predict the value of the x component of spin of particle 2 presumably without interacting with it (if there is no action-at-a-distance). Likewise, he can arrange the apparatus so that he can predict any other component of the spin of particle 2. The conclusion of the argument is that all components of spin of each particle are definite, which of course is not so in the quantum-mechanical description. Hence, a hidden-variables theory seems to be required.

Clauser and Shimony are wrong to conclude that measuring one spin component would render spin components in all directions definite. If all three x, y, z components of spin had definite values of ℏ/2, the resultant vector (the diagonal of a cube with side ℏ/2) would have magnitude √3 ℏ/2. This is impossible. A spin component is always quantized at ±ℏ/2. The unmeasured components are in a linear combination of +ℏ/2 and -ℏ/2 (with average value zero!).

Although Bell's Theorem is one of the foundational documents in the "Foundations of Quantum Mechanics," it is cited much more often than the confirming experiments are explained, because they are quite complicated. The most famous explanations are given in terms of analogies, with flashing lights, dice throws, or card games. See David Mermin.
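The claim that the unmeasured components average to zero can be verified directly from the singlet state. A minimal NumPy sketch (the Pauli-matrix representation is standard; the construction here is illustrative, in units of ℏ/2):

```python
import numpy as np

# Pauli matrices for the spin components x, y, z
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.diag([1.0 + 0j, -1.0])
I2 = np.eye(2)

up = np.array([1, 0], dtype=complex)
down = np.array([0, 1], dtype=complex)
singlet = (np.kron(up, down) - np.kron(down, up)) / np.sqrt(2)

# Expectation value of each spin component of particle 1 in the singlet state
for s in (sx, sy, sz):
    expectation = singlet.conj() @ np.kron(s, I2) @ singlet
    print(round(abs(expectation), 12))   # 0.0 for x, y, and z alike

# A measurement of any single component can only return +1 or -1 (i.e. +-hbar/2),
# never the sqrt(3) magnitude that three simultaneous definite components would imply
print(np.round(np.linalg.eigvalsh(sx), 12))
```

The loop shows every single-particle component averaging to zero, while the eigenvalues confirm that any one measurement yields only the quantized values ±ℏ/2.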
What is needed is an explanation describing exactly what happens to the quantum particles and their statistics. The most important experiments were likely those done by John Clauser, Michael Horne, Abner Shimony, and Richard Holt (known collectively as CHSH) and later by Alain Aspect, who did even more sophisticated tests.
Experimental Results

With the exception of some of Holt's early results that were found to be erroneous, no evidence has so far been found of any failure of standard quantum mechanics. And as experimental accuracy has improved by orders of magnitude, quantum physics has correspondingly been confirmed to one part in 10¹⁸, and the speed of any information transfer between particles has a lower limit of 10⁶ times the speed of light. There has been no evidence for local "hidden variables."

Bell Theorem tests usually add what Bell called "filters," polarization analyzers whose polarization angles can be set, sometimes at high speeds between the so-called "first" and "second" measurements.
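The scale of the violation these tests measure can be illustrated with the CHSH combination most of them use. A brief sketch, assuming the standard singlet correlation E(a, b) = -cos(a - b) and the usual angle choices that maximize the violation:

```python
import numpy as np

def E(a, b):
    # Quantum correlation of singlet spins measured along directions a and b
    return -np.cos(a - b)

# Standard CHSH analyzer settings (radians): Alice uses a or a2, Bob uses b or b2
a, a2 = 0.0, np.pi / 2
b, b2 = np.pi / 4, 3 * np.pi / 4

# Local hidden variable theories bound this combination by 2;
# quantum mechanics reaches 2*sqrt(2), the Tsirelson bound
S = abs(E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2))
print(S)   # 2.828..., i.e. 2*sqrt(2) > 2
```

Experiments measure each E(a, b) from coincidence counts; values of S above 2 rule out local hidden variables, and the observed values cluster at the quantum prediction 2√2.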
On David Bohm's "Impossible" Pilot Wave

John Bell reflected on Bohm's Pilot Wave in 1987...
Why is the pilot wave picture ignored in textbooks? Should it not be taught, not as the only way, but as an antidote to the prevailing complacency? To show that vagueness, subjectivity, and indeterminism are not forced on us by experimental facts, but by deliberate theoretical choice?
Bohm’s 1952 papers on quantum mechanics were for me a revelation. The elimination of indeterminism was very striking. But more important, it seemed to me, was the elimination of any need for a vague division of the world into “system” on the one hand, and “apparatus” or “observer” on the other. I have always felt since that people who have not grasped the ideas of those papers ... and unfortunately they remain the majority ... are handicapped in any discussion of the meaning of quantum mechanics. A preliminary account of these notions was entitled “Quantum field theory without observers, or observables, or measurements, or systems, or apparatus, or wavefunction collapse, or anything like that”. This could suggest to some that the issue in question is a philosophical one. But I insist that my concern is strictly professional. I think that conventional formulations of quantum theory, and of quantum field theory in particular, are unprofessionally vague and ambiguous. Professional theoretical physicists ought to be able to do better. Bohm has shown us a way.
Superdeterminism

During a mid-1980s BBC Radio 3 interview organized by P. C. W. Davies and J. R. Brown, Bell proposed the idea of a "superdeterminism" that could explain the correlation of results in two-particle experiments without the need for faster-than-light signaling. The two experiments need only have been pre-determined by causes reaching both experiments from an earlier time.
[Interviewer] I was going to ask whether it is still possible to maintain, in the light of experimental experience, the idea of a deterministic universe?

[Bell] You know, one of the ways of understanding this business is to say that the world is super-deterministic. That not only is inanimate nature deterministic, but we, the experimenters who imagine we can choose to do one experiment rather than another, are also determined. If so, the difficulty which this experimental result creates disappears.

[Interviewer] Free will is an illusion - that gets us out of the crisis, does it?

[Bell] That's correct. In the analysis it is assumed that free will is genuine, and as a result of that one finds that the intervention of the experimenter at one point has to have consequences at a remote point, in a way that influences restricted by the finite velocity of light would not permit. If the experimenter is not free to make this intervention, if that also is determined in advance, the difficulty disappears.

Bell's superdeterminism would deny the important "free choice" of the experimenter, originally suggested by Niels Bohr and Werner Heisenberg and later explored by John Conway and Simon Kochen. Conway and Kochen claim that the experimenters' free choice requires that atoms must have free will, something they call their Free Will Theorem. Following John Bell's idea, Nicolas Gisin and Antoine Suarez argue that something might be coming from "outside space and time" to correlate results in their own experimental tests of Bell's Theorem. Roger Penrose and Stuart Hameroff have proposed causes coming "backward in time" to achieve the perfect EPR correlations, as has the philosopher Huw Price.
A Preferred Frame?

A little later in the same BBC interview, Bell suggested that a preferred frame of reference might help to explain nonlocality and entanglement.
[Davies] Bell's inequality is, as I understand it, rooted in two assumptions: the first is what we might call objective reality - the reality of the external world, independent of our observations; the second is locality, or separability, or no faster-than-light signalling. Now, Aspect's experiment appears to indicate that one of these two has to go. Which of the two would you like to hang on to?

[Bell] Well, you see, I don't really know. For me it's not something where I have a solution to sell! For me it's a dilemma. I think it's a deep dilemma, and the resolution of it will not be trivial; it will require a substantial change in the way we look at things. But I would say that the cheapest resolution is something like going back to relativity as it was before Einstein, when people like Lorentz and Poincaré thought that there was an aether - a preferred frame of reference - but that our measuring instruments were distorted by motion in such a way that we could not detect motion through the aether. Now, in that way you can imagine that there is a preferred frame of reference, and in this preferred frame of reference things do go faster than light. But then in other frames of reference when they seem to go not only faster than light but backwards in time, that is an optical illusion.

The standard explanation of entangled particles usually begins with an observer A, often called Alice, and a distant observer B, known as Bob. Between them is a source of two entangled particles. The two-particle wave function describing the indistinguishable particles cannot be separated into a product of two single-particle wave functions. The problem of faster-than-light signaling arises when Alice measures particle A and we then puzzle over how Bob's (later) measurements of particle B can be perfectly correlated, when there is not enough time for any "influence" to travel from A to B.
Now as John Bell knew very well, there are frames of reference moving with respect to the laboratory frame of the two observers in which the time order of the events can be reversed. In some moving frames Alice measures first, but in others Bob measures first. Back in the 1960s, C. W. Rietdijk and Hilary Putnam argued that physical determinism could be proved to be true by considering the experiments and observers A and B in a "spacelike" separation and moving at high speed with respect to one another. Roger Penrose developed a similar argument in his book The Emperor's New Mind. It is called the Andromeda Paradox.

If there is a preferred frame of reference, surely it is the one in which the origin of the two entangled particles is at rest. Assuming that Alice and Bob are also at rest in this frame and equidistant from the origin, we arrive at the simple picture in which any measurement that causes the two-particle wave function to collapse makes both particles appear simultaneously at determinate places (just what is needed to conserve energy, momentum, angular momentum, and spin). Because the term "preferred frame" conflicts with special relativity, where all inertial frames are equivalent, we might instead call this frame a "special frame."
How Mysterious Is Entanglement?

Some commentators say that nonlocality and entanglement are a "second revolution" in quantum mechanics, "the greatest mystery in physics," or "science's strangest phenomenon," and that quantum physics has been "reborn." They usually quote Erwin Schrödinger as saying
"I consider [entanglement] not as one, but as the characteristic trait of quantum mechanics, the one that enforces its entire departure from classical lines of thought."

Schrödinger knew that his two-particle wave function Ψ12 could not have the same simple interpretation as that of a single particle, which can be visualized in ordinary 3-dimensional configuration space. And he is right that entanglement exhibits a richer form of the apparent "action-at-a-distance" and nonlocality that Einstein had already identified in the collapse of the single-particle wave function. But the main difference is that two particles acquire new properties instead of one, and they appear to do so instantaneously (at faster-than-light speeds), just as in a single-particle measurement, where the probability of finding that particle anywhere else instantaneously becomes zero. Nonlocality and entanglement are thus just another manifestation of Richard Feynman's "only" mystery.

In both the single-particle and two-particle cases, paradoxes appear only when we attempt to describe individual particles following specific paths to measurement by observer A (and/or observer B). We cannot know the specific paths at every instant without measurements. But Einstein has told us that at every instant the particles are conserving momentum, despite our lack of knowledge between individual experiments.

We can ask what happens if Bob is not at the same distance from the origin as Alice, but farther away. When Alice detects her particle (with, say, spin up), at that instant the other particle also becomes determinate (with spin down) at the same distance on the other side of the origin. It then continues, in that determinate state, to Bob's measuring apparatus.
If measurement of the component σ1 • a, where a is some unit vector, yields the value + 1 then, according to quantum mechanics, measurement of σ2 • a must yield the value — 1 and vice versa... Since we can predict in advance the result of measuring any chosen component of σ2, by previously measuring the same component of σ1, it follows that the result of any such measurement must actually be predetermined.

Since the collapse of the two-particle wave function is indeterminate, nothing is pre-determined, although σ2 is indeed determined to have the opposite sign (to conserve spin angular momentum) once σ1 is measured. Here Bell is describing the "following" measurement to be in the same direction as the "previous" measurement. In Bell's description, Bob is measuring "the same component" as Alice, meaning that he measures at the same angle as Alice. If Bob should measure in a different spin direction from Alice (a different spin component), his measurements will lose their perfect correlation, slowly at first for a small angle. As the angle between their measurements increases, the correlation falls off as the square of a cosine (cos²((a - b)/2) for spin-1/2 particles). Oddly, Bell's inequality for local hidden variables predicts a linear falloff with angle.

Supporters of the Copenhagen Interpretation claim that the properties of particles (like angular or linear momentum) do not exist until they are measured. It was Pascual Jordan who claimed that the measurement creates the value of a property. This is true when the preparation of the state is in an unknown linear combination (superposition) of quantum states. In our case, the entangled particles have been prepared in a superposition of states, but both of them have total spin zero.
ψ12 = (1/√2) [ ψ+(1) ψ-(2) - ψ-(1) ψ+(2) ]

So whichever of these two states is created by the preparation, it will put the two particles in opposite spin states, randomly + - or - +, but still supporting Bell's view that they will be perfectly (anti-)correlated when measured at exactly the same angle (measuring the same spin component). Wolfgang Pauli called it a "measurement of the first kind" when a system is prepared in a state and, if measured again, will certainly be found in the same state. (This is the basis for the quantum Zeno effect.) Since our two electrons have been prepared with one spin up and the other down, what could possibly cause them to change, for example, to both spins in the same direction, or, as Copenhagen claims, simply to have both spins no longer definite until the next measurement? As long as nothing interferes with either entangled particle as they travel to the distant detectors, they will be found to be still perfectly correlated, if (and only if) they are measured at the same angle. Otherwise, the correlations fall off as the square of the cosine of half the angle difference.
We can illustrate the straight-line predictions of Bell's inequalities for local hidden variables, the cosine curves predicted by quantum mechanics and the conservation of angular momentum, and the odd "kinks" at angles 0°, 90°, 180°, and 270°, with what is called a "Popescu-Rohrlich box." The "PR Box" diagram shows Bell's local hidden variables prediction as the four straight lines of the inner square. The circular region of quantum mechanical correlations lies outside Bell's straight lines, "violating" his inequalities. Quantum mechanics and Bell's inequalities meet at the corners, where Bell's predictions show a distinctly non-physical right angle that Bell called a "kink." All experimental results have been found to lie along the curved quantum predictions, whose maximum is called the "Tsirelson bound."
In 1976, Bell gave us this diagram of the "kinks" in his local hidden variables inequality. He says,
Unlike the quantum correlation, which is stationary in θ at θ = 0, the hidden variable correlation must have a kink there.

In his famous 1981 article on "Bertlmann's Socks," Bell explains that the predictions of his "ad hoc" model are linear in the angle difference |a - b|, and he notes that his inequality only agrees with the quantum predictions at the corners of the square of linear predictions above, and not at intermediate angles.
To account then for the Einstein-Podolsky-Rosen-Bohm correlations we have only to assume that the two particles emitted by the source have oppositely directed magnetic axes. Then if the magnetic axis of one particle is more nearly along (than against) one Stern-Gerlach field, the magnetic axis of the other particle will be more nearly against (than along) a parallel Stern-Gerlach field. So when one particle is deflected up, the other is deflected down, and vice versa. There is nothing whatever problematic or mind-boggling about these correlations, with parallel Stern-Gerlach analyzers, from the Einsteinian point of view.

So far so good. But now go a little further than before, and consider non-parallel Stern-Gerlach magnets. Let the first be rotated away from some standard position, about the particle line of flight, by an angle a. Let the second be rotated likewise by an angle b. Then if the magnetic axis of either particle separately is randomly oriented, but if the axes of the particles of a given pair are always oppositely oriented, a short calculation gives for the probabilities of the various possible results, in the ad hoc model,

P(up, down) = P(down, up) = 1/2 - |a - b|/2π

where 'up' and 'down' are defined with respect to the magnetic fields of the two magnets. However, a quantum mechanical calculation gives

P(up, down) = P(down, up) = 1/2 - (1/2) sin²((a - b)/2)   [= (1/2) cos²((a - b)/2)]

Thus the ad hoc model does what is required of it (i.e., reproduces quantum mechanical results) only at (a - b) = 0, (a - b) = π/2 and (a - b) = π, but not at intermediate angles.

The dependence on the square of the cosine is the so-called "law of Malus" for crossed polarizers, as pointed out by Abner Shimony in his Stanford Encyclopedia article on Bell's Theorem. Paul Dirac taught his "principle of superposition" with crossed polarizers in his 1930 textbook The Principles of Quantum Mechanics.
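Bell's comparison of the two formulas can be tabulated directly. This sketch simply evaluates the quoted ad hoc and quantum probabilities at a few angle differences:

```python
import numpy as np

def p_adhoc(delta):
    # Bell's ad hoc local hidden variable model: linear in the angle difference
    return 0.5 - abs(delta) / (2 * np.pi)

def p_qm(delta):
    # Quantum prediction: 1/2 - (1/2) sin^2(delta/2) = (1/2) cos^2(delta/2)
    return 0.5 * np.cos(delta / 2) ** 2

# Compare P(up, down) at several angle differences (radians)
for delta in (0.0, np.pi / 4, np.pi / 2, 3 * np.pi / 4, np.pi):
    print(round(p_adhoc(delta), 4), round(p_qm(delta), 4))
```

Running this shows the two columns agreeing only at 0, π/2, and π, exactly as Bell says: at intermediate angles the local model falls linearly while quantum mechanics follows the cosine-squared curve, and the linear graph meets the curve only at those "kink" corners.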
Can Perfect Correlations Be Explained by Conservation Laws?

We find that David Bohm, Eugene Wigner, and even John Bell used the conservation of angular momentum (or particle spin) to tell us that if one spin-1/2 electron is measured up, the other must be down, just as Albert Einstein used the conservation of linear momentum in his development of the EPR Paradox. David Bohm and Yakir Aharonov wrote in 1957,
We consider a molecule of total spin zero consisting of two atoms, each of spin one-half. The wave function of the system is therefore...

Eugene Wigner wrote in 1962,
If a measurement of the momentum of one of the particles is carried out — the possibility of this is never questioned — and gives the result p, the state vector of the other particle suddenly becomes a (slightly damped) plane wave with the momentum -p. This statement is synonymous with the statement that a measurement of the momentum of the second particle would give the result -p, as follows from the conservation law for linear momentum. The same conclusion can be arrived at also by a formal calculation of the possible results of a joint measurement of the momenta of the two particles.

One can go even further: instead of measuring the linear momentum of one particle, one can measure its angular momentum about a fixed axis. If this measurement yields the value mℏ, the state vector of the other particle suddenly becomes a cylindrical wave for which the same component of the angular momentum is -mℏ. This statement is again synonymous with the statement that a measurement of the said component of the angular momentum of the second particle certainly would give the value -mℏ. This can be inferred again from the conservation law of the angular momentum (which is zero for the two particles together) or by means of a formal analysis.

John Bell wrote in 1964,
With the example advocated by Bohm and Aharonov, the EPR argument is the following. Consider a pair of spin one-half particles formed somehow in the singlet spin state and moving freely in opposite directions. Measurements can be made, say by Stern-Gerlach magnets, on selected components of the spins σ1 and σ2. If measurement of the component σ1 • a, where a is some unit vector, yields the value + 1 then, according to quantum mechanics, measurement of σ2 • a must yield the value — 1 and vice versa. Now we make the hypothesis, and it seems one at least worth considering, that if the two measurements are made at places remote from one another the orientation of one magnet does not influence the result obtained with the other.

Just like Bohm and Wigner, Bell is implicitly using the conservation of total spin. He continues, "Since we can predict in advance the result of measuring any chosen component of σ2, by previously measuring the same component of σ1, it follows that the result of any such measurement must actually be predetermined. Since the initial quantum mechanical wave function does not determine the result of an individual measurement, this predetermination implies the possibility of a more complete specification of the state."

Albert Einstein made the same argument in 1933, shortly before EPR, though with conservation of linear momentum, asking Leon Rosenfeld,
Suppose two particles are set in motion towards each other with the same, very large, momentum, and they interact with each other for a very short time when they pass at known positions. Consider now an observer who gets hold of one of the particles, far away from the region of interaction, and measures its momentum: then, from the conditions of the experiment, he will obviously be able to deduce the momentum of the other particle. If, however, he chooses to measure the position of the first particle, he will be able to tell where the other particle is.

Supporters of the Copenhagen Interpretation claim that the properties of the particles (like angular or linear momentum) do not exist until they are measured. It was Pascual Jordan who claimed that the measurement creates the value of a property. This is true when the state is prepared in an unknown linear combination (superposition) of quantum states. And in our case, quantum mechanics describes the entangled particles as prepared in a superposition of two-particle states, but note that both of those states have total spin zero.
ψ12 = (1/√2) [ψ+(1) ψ-(2) − ψ-(1) ψ+(2)]     (1)

Now this initial entangled state is spherically symmetric and rotationally invariant. It has no preferred spin direction that could "pre-determine" the directions that will be found by Alice and Bob, as Bell described. The preferred direction is created by Alice's measurement, or by Bob's should he measure first in the "special frame" in which Alice and Bob are "at rest" and equidistant from the location of the initial entanglement. Let's assume that Alice measures first and gets spin +1/2. The prepared state has been projected (randomly) into ψ+(1) ψ-(2). But most important, Alice's measurement establishes the angle of her spin measurement - the angle of her Stern-Gerlach magnet in the x,y plane. Werner Heisenberg says it is her free choice to measure the x-component. As the Copenhagen Interpretation describes this, Alice brings this x-component property into existence. (This was Pascual Jordan's contribution to the interpretation.) There was no x- or y-component in the rotationally invariant prepared entanglement. Paul Dirac pointed out that the actual value of the property depends on what he calls "Nature's choice." The initial prepared state (1) might equally have collapsed into ψ-(1) ψ+(2). This is the source of the quantum randomness that is critically important for quantum encryption. Whichever of the two states is projected by Alice's measurement, it breaks the original symmetry and puts the two particles in opposite spin states, randomly + − or − +, supporting the views of Bohm, Wigner, and Bell that the particles will be perfectly (anti-)correlated when measured. In our example, since Alice measured the x-component of spin as +1/2, Bob will necessarily (because of conservation of angular momentum) measure the x-component as −1/2. As we saw above, Wolfgang Pauli called it a "measurement of the first kind" when a system is prepared in a state so that, when measured, it will certainly be found in the same state.
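As a toy illustration (a sketch only, not a simulation of the full quantum formalism), the random projection of state (1) and the resulting perfect anti-correlation can be modeled with a coin flip standing in for "Nature's choice":

```python
import random

def measure_singlet():
    """Toy model of Alice's measurement projecting the singlet state.

    'Nature's choice' between the two equally weighted terms of
    psi_12 = (1/sqrt 2)[psi+(1) psi-(2) - psi-(1) psi+(2)]
    is modeled as a fair coin flip; perfect anti-correlation then
    follows from conservation of total spin (zero).
    """
    alice = random.choice([+1, -1])  # random projection, probability 1/2 each
    bob = -alice                     # the other electron has the opposite spin
    return alice, bob

outcomes = [measure_singlet() for _ in range(10_000)]
assert all(a + b == 0 for a, b in outcomes)  # total spin conserved every time
frac_up = sum(1 for a, _ in outcomes if a == +1) / len(outcomes)
print(f"Alice found spin up in {frac_up:.1%} of trials")  # close to 50%
```

The coin flip makes Alice's individual results irreducibly random while the conservation law guarantees Bob's result, which is the distinction the text draws between randomness of outcomes and certainty of correlations.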
As long as nothing interferes with either entangled particle as they travel to the distant detectors (though perhaps decoherence?), they will be found to be perfectly correlated if (and only if) they are measured at the same angle (in our case, the x-component). Otherwise, the correlations should fall off as the cosine of the angle difference (the probability of finding opposite spins goes as the square of the cosine of half the angle difference). It is strange that Bell accepted an inequality that predicts correlations falling off with angle as a non-physical straight-line function with "kinks." In any case, conservation laws tell us that when either particle is measured, we know instantly those properties of the other particle, including its location equidistant from, but on the opposite side of, the entangling interaction, and all other conserved properties such as spin. But this is not "action-at-a-distance." It's just "knowledge-at-a-distance." A more recent (2005) study showing that the correlations in Bell tests are the result of conservation of angular momentum is "Correlation functions, Bell's inequalities and the fundamental conservation laws," by C. S. Unnikrishnan of the Tata Institute in India. He also discusses the odd "kinks" in Bell's linear predictions of correlations compared to the conservation-law curve.
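The contrast between the conservation-law (quantum) correlation and the straight-line local model with "kinks" can be tabulated in a few lines. This sketch assumes the standard singlet correlation E(θ) = −cos θ and a simple linear hidden-variable correlation; it is an illustration, not a derivation:

```python
import math

def E_quantum(theta):
    """Singlet correlation at angle difference theta: E = -cos(theta).
    Equivalently, the probability of opposite outcomes is cos^2(theta/2)."""
    return -math.cos(theta)

def E_linear(theta):
    """Straight-line correlation of a simple local hidden-variable model,
    with 'kinks' at theta = 0 and theta = pi (valid for theta in [0, pi])."""
    return -1.0 + 2.0 * theta / math.pi

# The two agree only at 0, pi/2, and pi; between those angles the
# quantum correlation is stronger in magnitude than the linear one.
for deg in (0, 45, 90, 135, 180):
    t = math.radians(deg)
    print(f"{deg:3d} deg: quantum {E_quantum(t):+.3f}, linear {E_linear(t):+.3f}")
```

At 45 degrees, for example, the quantum correlation is about −0.707 while the linear model gives −0.5; it is this gap between the curve and the straight line that experiments test.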
No "Hidden Variables," but Perhaps "Hidden Constants?"

We find no need for "hidden variables," whether local or non-local. But we might say that the conservation laws give us "hidden constants." Conservation of a particular property is often described as a "constant of the motion." These constants might be viewed as "local," in that they travel along with particles at all times, or as "global," in that they are a property of the two-particle probability amplitude wave function Ψ12 as it spreads out in space. This agrees with Bohm, and especially with Bell, who says that the spin of particle 2 is "predetermined" to be found up if particle 1 is measured to be down. But recall that the Copenhagen Interpretation says we cannot know a spin property until it is measured. So some claim that the spins are in an unknown combination of spin down and spin up until the measurements. It is this that suggests the possibility that both spins might be found in the same direction, violating conservation laws. Although electron spins in this situation are never found to be the same when measured in the same direction, the Copenhagen view gave rise to the idea of a hidden variable as some sort of signal that could travel to particle 2 after the measurement of particle 1, causing it to change its spin to be opposite that of particle 1. What sort of signal might this be? And what mechanism exists in a bare electron that receives the signal and then causes the electron to change its spin without an external force of some kind? Clearly, Wigner's explicit conservation of angular momentum, and the implicit claims of Bohm and Bell that the electron spins were prepared (entangled) in opposite states, give us the simplest and clearest explanations of the entanglement mystery. The intuitive idea that the particles were prepared with spins opposite can be interpreted as the "common cause" of the correlations.
Despite accepting that a particular value of some "observables" can only be "known" through a measurement (knowledge is an epistemological problem), Einstein asked whether the particle actually (really, ontologically) has a path and position, and even other properties, before we measure it. His answer was yes. So Einstein would likely agree with Wigner, Bohm, and Bell in assuming that the two particles have opposite spins from the time of their entangling interaction. Two "hidden constants" of the motion, one spin up, one spin down, completely explain the fact of perfect correlations of opposing spins. That "Nature's" initial choice of up-down versus down-up is quantum random explains why the bit strings created by Alice and Bob can be used in quantum encryption. Quantum keys are distributed over a secure communications channel that cannot be "tapped" by an eavesdropper without destroying the perfect correlation of the pair of bit strings.
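A minimal sketch of how the anti-correlated, quantum-random bits could serve as a shared key. This is a toy model only, not a real protocol such as BB84 or E91, which also require random basis choices and explicit eavesdropping tests:

```python
import random

def entangled_bit():
    """'Nature's choice' of up-down vs. down-up, modeled as a fair coin;
    the two bits are perfectly anti-correlated."""
    a = random.randint(0, 1)
    return a, 1 - a

pairs = [entangled_bit() for _ in range(256)]
alice_key = [a for a, _ in pairs]
bob_key = [1 - b for _, b in pairs]  # Bob inverts his bits to match Alice's
assert alice_key == bob_key          # a shared, quantum-random secret key

# An eavesdropper who disturbs the pairs breaks the perfect
# anti-correlation, which Alice and Bob can detect by publicly
# comparing a random sample of their bits.
```

The key's security rests on the two features the text identifies: the bits are genuinely random ("Nature's choice"), and any tampering destroys the perfect correlation.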
Principle Theories and Constructive Theories

In his 1933 essay, "On the Method of Theoretical Physics," Albert Einstein argued that the greatest physical theories would be built on "principles," not on constructions derived from physical experience. His theory of special relativity was based on the principle of relativity, that the laws of physics are the same in all inertial frames, together with the constant velocity of light in all frames. Our explanation of entanglement as the result of "hidden constants" of the motion is based on conservation principles, which, as Emmy Noether showed, rest on still deeper principles of symmetry. This principle theory explaining entanglement is also supported by the empirical evidence that entangled electron spins are always found in opposite directions, conserving the angular momentum. Einstein would have approved.
In 1987, Bell contributed an article entitled "Are There Quantum Jumps?" to a centenary volume for Erwin Schrödinger. Schrödinger strenuously denied quantum jumps or collapses of the wave function. Bell's title was inspired by two articles with the same title written by Schrödinger in 1952 (Part I, Part II). Just a year before Bell's death in 1990, physicists assembled for a conference on 62 Years of Uncertainty (referring to Werner Heisenberg's 1927 principle of indeterminacy). John Bell's contribution to this conference was an article called "Against Measurement." In it he attacked Max Born's statistical interpretation of quantum mechanics (which Born acknowledged was based on an original suggestion of Albert Einstein). And Bell praised the new ideas of GianCarlo Ghirardi and his colleagues, Alberto Rimini and Tullio Weber:
In the beginning, Schrödinger tried to interpret his wavefunction as giving somehow the density of the stuff of which the world is made. He tried to think of an electron as represented by a wavepacket — a wave-function appreciably different from zero only over a small region in space. The extension of that region he thought of as the actual size of the electron — his electron was a bit fuzzy. At first he thought that small wavepackets, evolving according to the Schrödinger equation, would remain small. But that was wrong. Wavepackets diffuse, and with the passage of time become indefinitely extended, according to the Schrödinger equation. But however far the wavefunction has extended, the reaction of a detector to an electron remains spotty. So Schrödinger's 'realistic' interpretation of his wavefunction did not survive.

Then came the Born interpretation. The wavefunction gives not the density of stuff, but gives rather (on squaring its modulus) the density of probability. Probability of what exactly? Not of the electron being there, but of the electron being found there, if its position is 'measured.' Why this aversion to 'being' and insistence on 'finding'? The founding fathers were unable to form a clear picture of things on the remote atomic scale. They became very aware of the intervening apparatus, and of the need for a 'classical' base from which to intervene on the quantum system. And so the shifty split.

The kinematics of the world, in this orthodox picture, is given by a wavefunction (maybe more than one?) for the quantum part, and classical variables — variables which have values — for the classical part: (Ψ(t, q, ...), X(t),...). The Xs are somehow macroscopic. This is not spelled out very explicitly. The dynamics is not very precisely formulated either. It includes a Schrödinger equation for the quantum part, and some sort of classical mechanics for the classical part, and 'collapse' recipes for their interaction.
It seems to me that the only hope of precision with the dual (Ψ, X) kinematics is to omit completely the shifty split, and let both Ψ and X refer to the world as a whole. Then the Xs must not be confined to some vague macroscopic scale, but must extend to all scales. In the picture of de Broglie and Bohm, every particle is attributed a position x(t). Then instrument pointers — assemblies of particles — have positions, and experiments have results. The dynamics is given by the world Schrödinger equation plus precise 'guiding' equations prescribing how the x(t)s move under the influence of Ψ. Particles are not attributed angular momenta, energies, etc., but only positions as functions of time. Peculiar 'measurement' results for angular momenta, energies, and so on, emerge as pointer positions in appropriate experimental setups. Considerations of KG [Kurt Gottfried] and vK [N. G. van Kampen] type, on the absence (FAPP) [For All Practical Purposes] of macroscopic interference, take their place here, and an important one, in showing how usually we do not have (FAPP) to pay attention to the whole world, but only to some subsystem and can simplify the wave-function... FAPP.

The Born-type kinematics (Ψ, X) has a duality that the original 'density of stuff' picture of Schrödinger did not. The position of the particle there was just a feature of the wavepacket, not something in addition. The Landau—Lifshitz approach can be seen as maintaining this simple non-dual kinematics, but with the wavefunction compact on a macroscopic rather than microscopic scale. We know, they seem to say, that macroscopic pointers have definite positions. And we think there is nothing but the wavefunction. So the wavefunction must be narrow as regards macroscopic variables. The Schrödinger equation does not preserve such narrowness (as Schrödinger himself dramatised with his cat). So there must be some kind of 'collapse' going on in addition, to enforce macroscopic narrowness.
In the same way, if we had modified Schrödinger's evolution somehow we might have prevented the spreading of his wavepacket electrons. But actually the idea that an electron in a ground-state hydrogen atom is as big as the atom (which is then perfectly spherical) is perfectly tolerable — and maybe even attractive. The idea that a macroscopic pointer can point simultaneously in different directions, or that a cat can have several of its nine lives at the same time, is harder to swallow. And if we have no extra variables X to express macroscopic definiteness, the wavefunction itself must be narrow in macroscopic directions in the configuration space. This the Landau—Lifshitz collapse brings about. It does so in a rather vague way, at rather vaguely specified times.

In the Ghirardi—Rimini—Weber scheme (see the contributions of Ghirardi, Rimini, Weber, Pearle, Gisin and Diosi presented at 62 Years of Uncertainty, Erice, Italy, 5-14 August 1989) this vagueness is replaced by mathematical precision. The Schrödinger wavefunction, even for a single particle, is supposed to be unstable, with a prescribed mean life per particle, against spontaneous collapse of a prescribed form. The lifetime and collapsed extension are such that departures from the Schrödinger equation show up very rarely and very weakly in few-particle systems. But in macroscopic systems, as a consequence of the prescribed equations, pointers very rapidly point, and cats are very quickly killed or spared.

The orthodox approaches, whether the authors think they have made derivations or assumptions, are just fine FAPP — when used with the good taste and discretion picked up from exposure to good examples. At least two roads are open from there towards a precise theory, it seems to me. Both eliminate the shifty split. The de Broglie—Bohm-type theories retain, exactly, the linear wave equation, and so necessarily add complementary variables to express the non-waviness of the world on the macroscopic scale.
The GRW-type theories have nothing in the kinematics but the wavefunction. It gives the density (in a multidimensional configuration space!) of stuff. To account for the narrowness of that stuff in macroscopic dimensions, the linear Schrödinger equation has to be modified, in this GRW picture, by a mathematically prescribed spontaneous collapse mechanism. The big question, in my opinion, is which, if either, of these two precise pictures can be redeveloped in a Lorentz invariant way.

Shortly before his death, Bell gave a talk at CERN in Geneva, organized by Antoine Suarez, director of the Center for Quantum Philosophy. There are links on the CERN website to the video of this talk, and to a transcription. Two epigraphs accompany it:

All historical experience confirms that men might not achieve the possible if they had not, time and time again, reached out for the impossible. (Max Weber)

...we do not know where we are stupid until we stick our necks out. (R. P. Feynman)

In this talk, Bell summarizes the situation as follows:
It just is a fact that quantum mechanical predictions and experiments, in so far as they have been done, do not agree with [my] inequality. And that's just a brutal fact of nature... that's just the fact of the situation; the Einstein program fails, that's too bad for Einstein, but should we worry about that? I cannot say that action at a distance is required in physics. But I can say that you cannot get away with no action at a distance. You cannot separate off what happens in one place and what happens in another. Somehow they have to be described and explained jointly.

Bell gives three reasons for not worrying.
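The "brutal fact" Bell refers to is usually quantified with the CHSH form of his inequality: local hidden-variable theories require |S| ≤ 2, while the singlet correlation E(θ) = −cos θ reaches 2√2 at the textbook angle settings. A short check, assuming those standard angles:

```python
import math

def E(theta):
    """Quantum-mechanical singlet correlation at angle difference theta."""
    return -math.cos(theta)

# Standard CHSH settings: Alice at 0 and 90 degrees, Bob at 45 and 135.
a, a2, b, b2 = (math.radians(d) for d in (0, 90, 45, 135))
S = abs(E(a - b) - E(a - b2) + E(a2 - b) + E(a2 - b2))
print(f"S = {S:.4f}")  # 2*sqrt(2), about 2.8284, exceeding the classical bound of 2
assert S > 2           # any local hidden-variable theory requires S <= 2
```

The violation (2√2 rather than 2) is exactly what the experiments Bell mentions have confirmed.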
So as a solution of this situation, I think we cannot just say 'Oh oh, nature is not like that.' I think you must find a picture in which perfect correlations are natural, without implying determinism, because that leads you back to nonlocality. And also in this independence as far as our individual experiences goes, our independence of the rest of the world is also natural. So the connections have to be very subtle, and I have told you all that I know about them. Thank you.

The work of GianCarlo Ghirardi that Bell endorsed is a scheme that makes the wave function collapse by adding small (of order 10⁻²⁴) nonlinear and stochastic terms to the linear Schrödinger equation. GRW cannot predict when and where their collapse occurs (it is simply random), but contact with macroscopic objects such as a measuring apparatus (with on the order of 10²⁴ atoms) makes the probability of collapse of order unity. Information physics removes Bell's "shifty split" without "hidden variables" and without ad hoc non-linear additions like those of Ghirardi-Rimini-Weber to the linear Schrödinger equation. The "moment" at which the boundary between quantum and classical worlds occurs is the moment that irreversible observable information enters the universe. So we can now look at John Bell's drawing of possible locations for his "shifty split" and identify the correct moment - when irreversible information enters the universe.
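The scaling at work here, rare and weak departures for few particles but near-certain collapse for a macroscopic apparatus, can be seen from the commonly quoted GRW per-particle collapse rate (an assumed textbook value, not a figure from Bell's article):

```python
# Back-of-the-envelope GRW collapse rates. The per-particle rate below is
# the value commonly quoted for GRW (an assumption): each particle suffers
# a spontaneous localization about once per 1e16 seconds.
lam = 1e-16  # collapses per second, per particle

for n in (1, 1e6, 1e23):             # one particle, a dust grain, an apparatus
    mean_time = 1.0 / (lam * n)      # mean time before some particle collapses
    print(f"N = {n:.0e}: ~{mean_time:.1e} s between collapses")

# One particle goes ~1e16 s (hundreds of millions of years) between
# collapses, so few-particle physics is unaffected; an apparatus of
# ~1e23 particles collapses within ~1e-7 s, so "pointers very rapidly point."
```

Because any one particle's collapse localizes the entangled whole, the effective rate grows linearly with the particle number, which is why the quantum/classical boundary appears at macroscopic scales.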
In the information physics solution to the problem of measurement, the timing and location of Bell's "shifty split" (the "cut" or "Schnitt" of Heisenberg and von Neumann) are identified with the interaction between quantum system and classical apparatus that leaves the apparatus in an irreversible stable state providing information to the observer. As Bell may have seen, it is therefore not a "measurement" by a conscious observer that is needed to "collapse" wave functions. It is the irreversible interaction of the quantum system with another system, whether quantum or approximately classical. The interaction must be one that changes the information about the system. And that means a local entropy decrease and overall entropy increase to make the information stable enough to be observed by an experimenter and therefore be a measurement.
References

"Against Measurement" (PDF)
"Beables for Quantum Field Theory" (PDF)
"On the Einstein-Podolsky-Rosen Paradox" (PDF)
"On the Impossible Pilot Wave" (PDF)
"Are There Quantum Jumps?" (PDF, Excerpt)
BBC Interview (PDF, Excerpt)
Epistemological Letters
"Quantum Generalizations of Bell's Inequality," by B. S. Tsirel'son, 1980
"Correlation functions, Bell's inequalities and the fundamental conservation laws," by C. S. Unnikrishnan, Tata Institute, 2005
"Why the Tsirelson Bound?," by Jeffrey Bub, 2012
"Why the Tsirelson Bound? Bub's Question and Fuchs' Desideratum," by Stuckey et al., 2019