John Stewart Bell
In 1964 John Bell showed how the 1935 "thought experiments" of Einstein, Podolsky, and Rosen (EPR) could be made into real experiments. He put limits on local "hidden variables" that might restore a deterministic physics, in the form of what he called an "inequality," the violation of which would confirm standard quantum mechanics. Some thinkers, mostly philosophers of science rather than working quantum physicists, believe that Bell's work restored the determinism in physics that Einstein had wanted and recovered the "local elements of reality" that Einstein hoped for. But Bell himself came to the conclusion that local "hidden variables" will never be found that give the same results as quantum mechanics. This has come to be known as Bell's Theorem. All theories that reproduce the predictions of quantum mechanics will be "nonlocal," Bell concluded. Nonlocality is an element of physical reality, and it has produced some remarkable new applications of quantum physics, including quantum cryptography and quantum computing.

Bell based his thoughts about real experiments on the 1952 ideas of David Bohm. Bohm proposed an improvement on the original EPR experiment (which measured position and momentum). Bohm's reformulation of quantum mechanics postulates (undetectable) deterministic positions and trajectories for atomic particles, where the instantaneous collapse happens in a new "quantum potential" field that can move faster than light speed. But it is still a "nonlocal" theory. So Bohm (and Bell) believed that nonlocal "hidden variables" might exist, and that some form of information could come into existence at remote "space-like separations" at speeds faster than light, if not instantaneously.

The original EPR paper was based on a question of Einstein's about two electrons fired in opposite directions from a central source with equal velocities.
Einstein imagined them starting from a distance at t0 and approaching one another with high velocities, then for a short time interval from t1 to t1 + Δt in contact with one another, where experimental measurements could be made on the momenta, after which they separate. Now at a later time t2 it would be possible to measure electron 1's position and therefore know the position of electron 2 without measuring it explicitly. Einstein used the conservation of linear momentum to "know" the symmetric position of the other electron. This knowledge implies information about the remote electron that is available instantly. Einstein called this "spooky action-at-a-distance." It might better be called "knowledge-at-a-distance."

Bohm's 1952 thought experiment used two electrons that are prepared in an initial state of known total spin. If one electron spin is 1/2 in the up direction and the other is spin down or -1/2, the total spin is zero. The underlying physical law of importance is still a conservation law, in this case the conservation of spin angular momentum.
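These conservation-law arguments can be put in a line of arithmetic. A toy sketch of Einstein's momentum version (the numerical value is illustrative, not from any experiment):

```python
# Toy sketch of Einstein's "knowledge-at-a-distance" argument.
# Two electrons leave a source at rest, so their momenta must sum to zero.
p1 = 3.7e-24   # measured momentum of electron 1 (kg*m/s), illustrative value

# Conservation of linear momentum: p1 + p2 = 0 for a source at rest,
# so p2 is "known" instantly, without any measurement on electron 2.
p2 = -p1

print(p2)             # -3.7e-24
print(p1 + p2 == 0)   # True: nothing was transmitted, only inferred
```

Nothing travels between the particles here; the remote value is deduced from a conservation law plus one local measurement, which is why "knowledge-at-a-distance" may be the better phrase.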
The paradox of Einstein, Podolsky and Rosen was advanced as an argument that quantum mechanics could not be a complete theory but should be supplemented by additional variables. These additional variables were to restore to the theory causality and locality. In this note that idea will be formulated mathematically and shown to be incompatible with the statistical predictions of quantum mechanics. It is the requirement of locality, or more precisely that the result of a measurement on one system be unaffected by operations on a distant system with which it has interacted in the past, that creates the essential difficulty. There have been attempts to show that even without such a separability or locality requirement no 'hidden variable' interpretation of quantum mechanics is possible. These attempts have been examined [by Bell] elsewhere and found wanting. Moreover, a hidden variable interpretation of elementary quantum theory has been explicitly constructed [by Bohm]. That particular interpretation has indeed a gross non-local structure. This is characteristic, according to the result to be proved here, of any such theory which reproduces exactly the quantum mechanical predictions. With the example advocated by Bohm and Aharonov, the EPR argument is the following. Consider a pair of spin one-half particles formed somehow in the singlet spin state and moving freely in opposite directions. Measurements can be made, say by Stern-Gerlach magnets, on selected components of the spins σ1 and σ2. If measurement of the component σ1 • a, where a is some unit vector, yields the value +1 then, according to quantum mechanics, measurement of σ2 • a must yield the value −1 and vice versa. Now we make the hypothesis, and it seems one at least worth considering, that if the two measurements are made at places remote from one another the orientation of one magnet does not influence the result obtained with the other.
Since we can predict in advance the result of measuring any chosen component of σ2, by previously measuring the same component of σ1, it follows that the result of any such measurement must actually be predetermined. Since the initial quantum mechanical wave function does not determine the result of an individual measurement, this predetermination implies the possibility of a more complete specification of the state.

Bell titled his 1976 review of the first tests of his theorem about his predicted inequalities, "Einstein-Podolsky-Rosen Experiments." He described his talk as about the "foundations of quantum mechanics," and it came in the early days of a movement by a few scientists and many philosophers of science to challenge the "orthodox" quantum mechanics. They particularly attacked the Copenhagen Interpretation, with its notorious speculations about the role of the "conscious observer" and its attacks on physical reality. From the earliest presentations in the late 1920's of the ideas of the supposed "founders" of quantum mechanics, Einstein had deep misgivings about the work going on in Copenhagen, although he never doubted the calculating power of their new mathematical methods, and he came to accept the statistical (indeterministic) nature of quantum physics, which he himself had reluctantly discovered. He described their work as "incomplete" because it is based on the statistical results of many experiments and so makes only probabilistic predictions about individual experiments. Nevertheless, Einstein hoped to visualize what is going on in an underlying "objective reality." Bell was deeply sympathetic to Einstein's hopes for a return to the "local reality" of classical physics. He identified the EPR paper's title, "Can Quantum-Mechanical Description of Physical Reality Be Considered Complete?" as a search for new variables to provide the completeness.
Bell thought David Bohm's "hidden variables" were one way to achieve this, though Einstein had called Bohm's approach "too cheap," probably because Bohm's quantum potential involved influences traveling faster than light speed, an apparent violation of Einstein's special theory of relativity.
I have been invited to speak on "foundations of quantum mechanics"... The area in question is that of Einstein, Podolsky, and Rosen. Suppose for example, that protons of a few MeV energy are incident on a hydrogen target. Occasionally one will scatter, causing a target proton to recoil. Suppose (Fig. 1) that we have counter telescopes T1 and T2 which register when suitable protons are going towards distant counters C1 and C2. With ideal arrangements, registering of both T1 and T2 will then imply registering of both C1 and C2 after appropriate time decays [delays?]. Suppose next that C1 and C2 are preceded by filters that pass only particles of given polarization, say those with spin projection +1 along the z axis. Then one or both of C1 and C2 may fail to register. Indeed for protons of suitable energy one and only one of these counters will register on almost every suitable occasion — i.e., those occasions certified as suitable by telescopes T1 and T2. This is because proton-proton scattering at large angle and low energy, say a few MeV, goes mainly in S wave. But the antisymmetry of the final wave function then requires the antisymmetric singlet spin state. In this state, when one spin is found "up" the other is found "down". This follows formally from the quantum expectation value for the singlet state, ⟨(σ1 • a)(σ2 • b)⟩ = −a • b, which equals −1 when b = a.

Since Bell's original work, many other physicists have defined other "Bell inequalities" and developed increasingly sophisticated experiments to test them. Most recent tests have used oppositely polarized photons coming from a central source. It is the total photon spin of zero that is conserved.
A variant of EPR's argument was given by Bohm (1951), formulated in terms of discrete states. He considered a pair of spatially separated spin-1/2 particles produced somehow in a singlet state, for example, by dissociation of the spin-0 system... Suppose that one measures the spin of particle 1 along the x axis. The outcome is not predetermined by the description [wave function] Ψ. But from it, one can predict that if particle 1 is found to have its spin parallel to the x axis, then particle 2 will be found to have its spin antiparallel to the x axis if the x component of its spin is also measured. Thus, the experimenter can arrange his apparatus in such a way that he can predict the value of the x component of spin of particle 2 presumably without interacting with it (if there is no action-at-a-distance). Likewise, he can arrange the apparatus so that he can predict any other component of the spin of particle 2. The conclusion of the argument is that all components of spin of each particle are definite, which of course is not so in the quantum-mechanical description. Hence, a hidden-variables theory seems to be required.

If all three x, y, z components of spin had definite values of ℏ/2, the resultant vector (the diagonal of a cube with side ℏ/2) would be √3 ℏ/2. This is impossible. Spin is always quantized at ℏ/2. Unmeasured components are in a linear combination of +ℏ/2 and −ℏ/2 (average value zero!). The concept of "local" hidden variables is then the simultaneous existence of definite values of σy and σz (both equal to ±ℏ/2) at the same time σx has the measured value ℏ/2!

Although Bell's Theorem is one of the foundational documents in the "Foundations of Quantum Mechanics," it is cited much more often than the confirming experiments are explained, because they are quite complicated. The most famous explanations are given in terms of analogies, with flashing lights, dice throws, or card games. What is needed is an explanation describing what happens to the quantum particles and their statistics. The most important experiments were likely those done by John Clauser, Michael Horne, Abner Shimony, and Richard Holt (known collectively as CHSH) and later by Alain Aspect, who did more sophisticated tests.
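The singlet-state predictions in Bohm's version, including the perfect anticorrelation for equal settings, can be checked directly from the two-particle state. A minimal sketch using explicit Pauli matrices (NumPy assumed available; names are illustrative):

```python
import numpy as np

# Quantum prediction for the Bohm (spin) version of EPR:
# for the singlet state, <(sigma1 . a)(sigma2 . b)> = -a . b.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def sigma_dot(n):
    """Spin operator projected along the unit vector n."""
    return n[0] * sx + n[1] * sy + n[2] * sz

# Singlet state (|ud> - |du>) / sqrt(2) in the basis |uu>, |ud>, |du>, |dd>
singlet = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)

def E(a, b):
    """Correlation of spin measurements along a (particle 1) and b (particle 2)."""
    op = np.kron(sigma_dot(a), sigma_dot(b))
    return np.real(singlet.conj() @ op @ singlet)

a = np.array([0.0, 0.0, 1.0])                 # z axis
b = np.array([np.sin(0.5), 0.0, np.cos(0.5)]) # tilted by 0.5 rad

print(E(a, a))          # -1.0: same component, perfect anticorrelation
print(E(a, b), -a @ b)  # both equal -cos(0.5)
```

The first line reproduces Bell's starting point: measuring "the same component" on both sides always yields opposite values.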
Experimental Results

With the exception of some of Holt's early results that were found to be erroneous, no evidence has so far been found of any failure of standard quantum mechanics. And as experimental accuracy has improved by orders of magnitude, quantum physics has correspondingly been confirmed to one part in 10¹⁸, and the speed of the probability information transfer between particles has a lower limit of 10⁶ times the speed of light. There has been no evidence for local "hidden variables."

Bell Theorem tests always add what Bell called "filters," polarization analyzers whose polarization angles can be set, sometimes at high speeds, between the so-called "first" and "second" measurements. Notice that this represents an interaction that alters the underlying physics, projecting the particles into unpredictable states.
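The gap between the quantum predictions and any local hidden-variable model is what these experiments measure, and it can be seen numerically in the CHSH form of Bell's inequality. A sketch, with an illustrative local model chosen for simplicity (not any specific model Bell analyzed):

```python
import numpy as np

# CHSH statistic: S = E(a,b) - E(a,b') + E(a',b) + E(a',b').
# Local hidden-variable models obey |S| <= 2; the singlet correlation
# E(alpha, beta) = -cos(alpha - beta) reaches 2*sqrt(2).
def E_qm(alpha, beta):
    return -np.cos(alpha - beta)

a, ap = 0.0, np.pi / 2            # Alice's two analyzer settings
b, bp = np.pi / 4, 3 * np.pi / 4  # Bob's two analyzer settings

S_qm = E_qm(a, b) - E_qm(a, bp) + E_qm(ap, b) + E_qm(ap, bp)
print(abs(S_qm))  # 2*sqrt(2) ~ 2.83: violates the classical bound of 2

# An illustrative local model: each pair carries a shared axis lam,
# and each side's outcome depends only on its own setting and lam.
lam = np.linspace(0, 2 * np.pi, 100_000, endpoint=False)

def E_lhv(alpha, beta):
    A = np.sign(np.cos(alpha - lam))   # Alice's outcome, +/-1
    B = -np.sign(np.cos(beta - lam))   # Bob's outcome, +/-1
    return np.mean(A * B)

S_lhv = E_lhv(a, b) - E_lhv(a, bp) + E_lhv(ap, b) + E_lhv(ap, bp)
print(abs(S_lhv))  # ~2: saturates, but cannot exceed, the classical bound
```

Whatever local recipe replaces `E_lhv`, its CHSH value stays at or below 2; the experiments cited above consistently find the quantum value.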
On David Bohm's "Impossible" Pilot Wave
Why is the pilot wave picture ignored in textbooks? Should it not be taught, not as the only way, but as an antidote to the prevailing complacency? To show that vagueness, subjectivity, and indeterminism are not forced on us by experimental facts, but by deliberate theoretical choice?
Bohm’s 1952 papers on quantum mechanics were for me a revelation. The elimination of indeterminism was very striking. But more important, it seemed to me, was the elimination of any need for a vague division of the world into “system” on the one hand, and “apparatus” or “observer” on the other. I have always felt since that people who have not grasped the ideas of those papers ... and unfortunately they remain the majority ... are handicapped in any discussion of the meaning of quantum mechanics. A preliminary account of these notions was entitled “Quantum field theory without observers, or observables, or measurements, or systems, or apparatus, or wavefunction collapse, or anything like that”. This could suggest to some that the issue in question is a philosophical one. But I insist that my concern is strictly professional. I think that conventional formulations of quantum theory, and of quantum field theory in particular, are unprofessionally vague and ambiguous. Professional theoretical physicists ought to be able to do better. Bohm has shown us a way.
Superdeterminism

During a mid-1980's interview on BBC Radio 3 organized by P. C. W. Davies and J. R. Brown, Bell proposed the idea of a "superdeterminism" that could explain the correlation of results in two-particle experiments without the need for faster-than-light signaling. The two experiments need only have been pre-determined by causes reaching both experiments from an earlier time.
[Davies] I was going to ask whether it is still possible to maintain, in the light of experimental experience, the idea of a deterministic universe?

[Bell] You know, one of the ways of understanding this business is to say that the world is super-deterministic. That not only is inanimate nature deterministic, but we, the experimenters who imagine we can choose to do one experiment rather than another, are also determined. If so, the difficulty which this experimental result creates disappears.

[Davies] Free will is an illusion - that gets us out of the crisis, does it?

[Bell] That's correct. In the analysis it is assumed that free will is genuine, and as a result of that one finds that the intervention of the experimenter at one point has to have consequences at a remote point, in a way that influences restricted by the finite velocity of light would not permit. If the experimenter is not free to make this intervention, if that also is determined in advance, the difficulty disappears.

Bell's superdeterminism would deny the important "free choice" of the experimenter, originally suggested by Niels Bohr and Werner Heisenberg and later explored by John Conway and Simon Kochen. Conway and Kochen claim that the experimenters' free choice requires that atoms must have free will, something they call their Free Will Theorem. Following John Bell's idea, Nicolas Gisin and Antoine Suarez argue that something might be coming from "outside space and time" to correlate results in their own experimental tests of Bell's Theorem. Roger Penrose and Stuart Hameroff have proposed causes coming "backward in time" to achieve the perfect EPR correlations, as has philosopher Huw Price.
A Preferred Frame?

A little later in the same BBC interview, Bell suggested that a preferred frame of reference might help to explain nonlocality and entanglement.
[Davies] Bell's inequality is, as I understand it, rooted in two assumptions: the first is what we might call objective reality - the reality of the external world, independent of our observations; the second is locality, or non-separability, or no faster-than-light signalling. Now, Aspect's experiment appears to indicate that one of these two has to go. Which of the two would you like to hang on to?

[Bell] Well, you see, I don't really know. For me it's not something where I have a solution to sell! For me it's a dilemma. I think it's a deep dilemma, and the resolution of it will not be trivial; it will require a substantial change in the way we look at things. But I would say that the cheapest resolution is something like going back to relativity as it was before Einstein, when people like Lorentz and Poincare thought that there was an aether - a preferred frame of reference - but that our measuring instruments were distorted by motion in such a way that we could not detect motion through the aether. Now, in that way you can imagine that there is a preferred frame of reference, and in this preferred frame of reference things do go faster than light. But then in other frames of reference when they seem to go not only faster than light but backwards in time, that is an optical illusion.

The standard explanation of entangled particles usually begins with an observer A, often called Alice, and a distant observer B, known as Bob. Between them is a source of two entangled particles. The two-particle wave function describing the indistinguishable particles cannot be separated into a product of two single-particle wave functions. The problem of faster-than-light signaling arises when Alice measures particle A and then puzzles over how Bob's (later) measurements of particle B can be perfectly correlated, when there is not enough time for any "influence" to travel from A to B.
Now as John Bell knew very well, there are frames of reference moving with respect to the laboratory frame of the two observers in which the time order of the events can be reversed. In some moving frames Alice measures first, but in others Bob measures first. Back in the 1960's, C. W. Rietdijk and Hilary Putnam argued that physical determinism could be proved to be true by considering the experiments and observers A and B in a "spacelike" separation and moving at high speed with respect to one another. Roger Penrose developed a similar argument in his book The Emperor's New Mind. It is called the Andromeda Paradox.

If there is a preferred frame of reference, surely it is the one in which the origin of the two entangled particles is at rest. Assuming that Alice and Bob are also at rest in this frame and equidistant from the origin, we arrive at the simple picture in which any measurement that causes the two-particle wave function to collapse makes both particles appear simultaneously at determinate places (just what is needed to conserve energy, momentum, angular momentum, and spin). Because a "preferred frame" has no place in special relativity, where all inertial frames are equivalent, we might call this frame a "special frame."
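The frame-dependence of the time order for spacelike-separated events can be checked with a one-line Lorentz transformation. A minimal sketch in units where c = 1 (the event coordinates are illustrative):

```python
import numpy as np

# Lorentz transformation of the time coordinate for a boost with
# velocity v along x (units with c = 1): t' = gamma * (t - v * x).
def boost_t(t, x, v):
    gamma = 1.0 / np.sqrt(1.0 - v**2)
    return gamma * (t - v * x)

# Alice measures at (t=0, x=-1); Bob measures slightly later at (t=1e-9, x=+1).
# |dx| > |dt|, so the events are spacelike separated.
tA, xA = 0.0, -1.0
tB, xB = 1e-9, +1.0

for v in (-0.5, 0.0, 0.5):
    alice_first = boost_t(tA, xA, v) < boost_t(tB, xB, v)
    print(v, alice_first)  # order flips between the v = -0.5 and v = +0.5 frames
```

In the v = +0.5 frame Bob's measurement comes first, which is why asking "who collapsed the wave function first" has no frame-independent answer without a special frame.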
How Mysterious Is Entanglement?

Some commentators say that nonlocality and entanglement are a "second revolution" in quantum mechanics, "the greatest mystery in physics," or "science's strangest phenomenon," and that quantum physics has been "reborn." They usually quote Erwin Schrödinger as saying
"I consider [entanglement] not as one, but as the characteristic trait of quantum mechanics, the one that enforces its entire departure from classical lines of thought."Schrödinger knew that his two-particle wave function Ψ12 could not have the same simple interpretation as the single particle, which can be visualized in ordinary 3-dimensional configuration space. And he is right that entanglement apparently exhibits a richer form of the "action-at-a-distance" and nonlocality that Einstein had already identified in the collapse of the single particle wave function. But the main difference is that two particles acquire new properties instead of one, and they appear to do it instantaneously (at faster than light speeds), just as in the case of a single-particle measurement the probability of finding that particular single particle anywhere else is now zero. Nonlocality and entanglement are thus just another manifestation of Richard Feynman's "only" mystery. In both single-particle and two-particle cases paradoxes appear only when we attempt to describe individual particles following specific paths to measurement by observer A (and/or observer B). Wee cannot know the specific paths at every instant without measurements. But Einstein has told us that at every instant the particles are conserving linear momentum and electron spin, despite our lack of knowledge during individual experiments. We can ask what happens if Bob is not at the same distance from the origin as Alice, but farther away. When Alice detects the particle (with say spin up), at that instant the other particle also becomes determinate (with spin down) at the same distance on the other side of the origin. It now continues, in that determinate state, to Bob's measuring apparatus.
If measurement of the component σ1 • a, where a is some unit vector, yields the value +1 then, according to quantum mechanics, measurement of σ2 • a must yield the value −1 and vice versa... Since we can predict in advance the result of measuring any chosen component of σ2, by previously measuring the same component of σ1, it follows that the result of any such measurement must actually be predetermined.

Since the collapse of the two-particle wave function is indeterminate, nothing is pre-determined, although σ2 is indeed determined to have the opposite sign (to conserve spin angular momentum) once σ1 is measured. Here Bell is describing the "following" measurement as being in the same direction as the "previous" measurement. In Bell's terms, Bob is measuring "the same component" as Alice. If Bob should measure even a fraction of a second after Alice, but measure in a different spin direction (a different component), his measurements will be found to be 50/50, up and down, or + and −.

To recap our picture of entanglement measurements:
1) The position measurements are always symmetric and equidistant from the central source, as Einstein clearly explained, in order to conserve linear momentum.
2) Spin momentum is also conserved, without any hidden variables or communications between the particles. Here the relationship is antisymmetric, as required under interchange of the indistinguishable electrons.
3) We can imagine the two spin vectors precessing as the electrons travel through coordinate space, always preserving opposite directions to conserve momentum. Without measurements, no observer can ever know these values. But measurements will always produce one of the six possible outcomes predicted above, confirming the quantum statistics.
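The 50/50 statistics for a "different component," and the perfect anticorrelation for the same component, both follow from the standard Born rule for a spin-1/2 particle in a definite state. A minimal sketch (the function name is illustrative):

```python
import numpy as np

# Once Alice finds "up" along z, particle 2 is determinate "down" along z.
# If Bob then measures along an axis tilted by theta from z, the Born rule
# gives P(Bob finds "down" along his axis) = cos^2(theta / 2).
def p_anticorrelated(theta):
    return np.cos(theta / 2.0) ** 2

print(p_anticorrelated(0.0))        # 1.0 - same component: perfect anticorrelation
print(p_anticorrelated(np.pi / 2))  # 0.5 - orthogonal component: 50/50, as above
```

At theta = 0 Bob always finds the opposite value to Alice; at theta = 90 degrees his results are an even split, exactly the "different component" case described in the text.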
In 1987, Bell contributed an article to a centenary volume for Erwin Schrödinger entitled
Are There Quantum Jumps? Schrödinger denied such jumps or any collapses of the wave function. Bell's title was inspired by two articles with the same title by Schrödinger in 1952 (Part I, Part II). Just a year before Bell's death in 1990, physicists assembled for a conference on 62 Years of Uncertainty (referring to Werner Heisenberg's 1927 principle of indeterminacy). John Bell's contribution to the conference was an article called "Against Measurement." In it he attacked Max Born's statistical interpretation of quantum mechanics. And he praised the new ideas of GianCarlo Ghirardi and his colleagues, Alberto Rimini and Tullio Weber:
In the beginning, Schrödinger tried to interpret his wavefunction as giving somehow the density of the stuff of which the world is made. He tried to think of an electron as represented by a wavepacket — a wave-function appreciably different from zero only over a small region in space. The extension of that region he thought of as the actual size of the electron — his electron was a bit fuzzy. At first he thought that small wavepackets, evolving according to the Schrödinger equation, would remain small. But that was wrong. Wavepackets diffuse, and with the passage of time become indefinitely extended, according to the Schrödinger equation. But however far the wavefunction has extended, the reaction of a detector to an electron remains spotty. So Schrödinger's 'realistic' interpretation of his wavefunction did not survive. Then came the Born interpretation. The wavefunction gives not the density of stuff, but gives rather (on squaring its modulus) the density of probability. Probability of what exactly? Not of the electron being there, but of the electron being found there, if its position is 'measured.' Why this aversion to 'being' and insistence on 'finding'? The founding fathers were unable to form a clear picture of things on the remote atomic scale. They became very aware of the intervening apparatus, and of the need for a 'classical' base from which to intervene on the quantum system. And so the shifty split. The kinematics of the world, in this orthodox picture, is given by a wavefunction (maybe more than one?) for the quantum part, and classical variables — variables which have values — for the classical part: (Ψ(t, q, ...), X(t),...). The Xs are somehow macroscopic. This is not spelled out very explicitly. The dynamics is not very precisely formulated either. It includes a Schrödinger equation for the quantum part, and some sort of classical mechanics for the classical part, and 'collapse' recipes for their interaction.
It seems to me that the only hope of precision with the dual (Ψ, x) kinematics is to omit completely the shifty split, and let both Ψ and x refer to the world as a whole. Then the xs must not be confined to some vague macroscopic scale, but must extend to all scales. In the picture of de Broglie and Bohm, every particle is attributed a position x(t). Then instrument pointers — assemblies of particles — have positions, and experiments have results. The dynamics is given by the world Schrödinger equation plus precise 'guiding' equations prescribing how the x(t)s move under the influence of Ψ. Particles are not attributed angular momenta, energies, etc., but only positions as functions of time. Peculiar 'measurement' results for angular momenta, energies, and so on, emerge as pointer positions in appropriate experimental setups. Considerations of KG [Kurt Gottfried] and vK [N. G. van Kampen] type, on the absence (FAPP) [For All Practical Purposes] of macroscopic interference, take their place here, and an important one, in showing how usually we do not have (FAPP) to pay attention to the whole world, but only to some subsystem and can simplify the wave-function... FAPP. The Born-type kinematics (Ψ, X) has a duality that the original 'density of stuff' picture of Schrödinger did not. The position of the particle there was just a feature of the wavepacket, not something in addition. The Landau—Lifshitz approach can be seen as maintaining this simple non-dual kinematics, but with the wavefunction compact on a macroscopic rather than microscopic scale. We know, they seem to say, that macroscopic pointers have definite positions. And we think there is nothing but the wavefunction. So the wavefunction must be narrow as regards macroscopic variables. The Schrödinger equation does not preserve such narrowness (as Schrödinger himself dramatised with his cat). So there must be some kind of 'collapse' going on in addition, to enforce macroscopic narrowness.
In the same way, if we had modified Schrödinger's evolution somehow we might have prevented the spreading of his wavepacket electrons. But actually the idea that an electron in a ground-state hydrogen atom is as big as the atom (which is then perfectly spherical) is perfectly tolerable — and maybe even attractive. The idea that a macroscopic pointer can point simultaneously in different directions, or that a cat can have several of its nine lives at the same time, is harder to swallow. And if we have no extra variables X to express macroscopic definiteness, the wavefunction itself must be narrow in macroscopic directions in the configuration space. This the Landau—Lifshitz collapse brings about. It does so in a rather vague way, at rather vaguely specified times. In the Ghirardi—Rimini—Weber scheme (see the contributions of Ghirardi, Rimini, Weber, Pearle, Gisin and Diosi presented at 62 Years of Uncertainty, Erice, Italy, 5-14 August 1989) this vagueness is replaced by mathematical precision. The Schrödinger wavefunction even for a single particle, is supposed to be unstable, with a prescribed mean life per particle, against spontaneous collapse of a prescribed form. The lifetime and collapsed extension are such that departures from the Schrödinger equation show up very rarely and very weakly in few-particle systems. But in macroscopic systems, as a consequence of the prescribed equations, pointers very rapidly point, and cats are very quickly killed or spared. The orthodox approaches, whether the authors think they have made derivations or assumptions, are just fine FAPP — when used with the good taste and discretion picked up from exposure to good examples. At least two roads are open from there towards a precise theory, it seems to me. Both eliminate the shifty split. The de Broglie—Bohm-type theories retain, exactly, the linear wave equation, and so necessarily add complementary variables to express the non-waviness of the world on the macroscopic scale.
The GRW-type theories have nothing in the kinematics but the wavefunction. It gives the density (in a multidimensional configuration space!) of stuff. To account for the narrowness of that stuff in macroscopic dimensions, the linear Schrödinger equation has to be modified, in this GRW picture by a mathematically prescribed spontaneous collapse mechanism. The big question, in my opinion, is which, if either, of these two precise pictures can be redeveloped in a Lorentz invariant way.

...All historical experience confirms that men might not achieve the possible if they had not, time and time again, reached out for the impossible. (Max Weber)

...we do not know where we are stupid until we stick our necks out. (R. P. Feynman)

Bell gave a talk at CERN in Geneva organized by Antoine Suarez, director of the Center for Quantum Philosophy. There are links on the CERN website to the video of this talk, and to a transcription. In this talk, Bell summarizes the situation as follows:
It just is a fact that quantum mechanical predictions and experiments, in so far as they have been done, do not agree with [my] inequality. And that's just a brutal fact of nature...that's just the fact of the situation; the Einstein program fails, that's too bad for Einstein, but should we worry about that? I cannot say that action at a distance is required in physics. But I can say that you cannot get away with no action at a distance. You cannot separate off what happens in one place and what happens in another. Somehow they have to be described and explained jointly.

Bell gives three reasons for not worrying.
So as a solution of this situation, I think we cannot just say 'Oh oh, nature is not like that.' I think you must find a picture in which perfect correlations are natural, without implying determinism, because that leads you back to nonlocality. And also in this independence as far as our individual experiences goes, our independence of the rest of the world is also natural. So the connections have to be very subtle, and I have told you all that I know about them. Thank you.

The work of GianCarlo Ghirardi that Bell endorsed is a scheme that makes the wave function collapse by adding small (order of 10⁻²⁴) nonlinear and stochastic terms to the linear Schrödinger equation. GRW cannot predict when and where their collapse occurs (it is simply random), but contact with macroscopic objects such as a measuring apparatus (with the order of 10²⁴ atoms) makes the probability of collapse of order unity.

Information physics removes Bell's "shifty split" without "hidden variables" or ad hoc non-linear additions like those of Ghirardi-Rimini-Weber to the linear Schrödinger equation. The "moment" at which the boundary between quantum and classical worlds occurs is the moment that irreversible observable information enters the universe. So we can now look at John Bell's diagram of possible locations for his "shifty split" and identify the correct moment - when irreversible information enters the universe.
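The way GRW's tiny stochastic terms become effective only for macroscopic objects is simple arithmetic. A sketch using the commonly quoted GRW per-particle localization rate of about 10⁻¹⁶ per second (treat the figures as illustrative of the scaling, not as precise values):

```python
# GRW spontaneous localization: each particle collapses at a tiny rate,
# but the rates add, so an apparatus of ~1e24 particles collapses almost at once.
GRW_RATE = 1e-16          # per-particle collapse rate in 1/s (commonly quoted GRW value)
SECONDS_PER_YEAR = 3.156e7

rate_one = GRW_RATE * 1           # a single electron
rate_apparatus = GRW_RATE * 1e24  # a macroscopic pointer of ~1e24 atoms

print(1 / rate_one / SECONDS_PER_YEAR)  # mean wait ~3e8 years for one particle
print(1 / rate_apparatus)               # mean wait ~1e-8 s for the apparatus
```

This is why, in Bell's summary, "pointers very rapidly point, and cats are very quickly killed or spared," while single-particle interference experiments are unaffected.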
In the information physics solution to the problem of measurement, the timing and location of Bell's "shifty split" (the "cut" or "Schnitt" of Heisenberg and von Neumann) are identified with the interaction between quantum system and classical apparatus that leaves the apparatus in an irreversible stable state providing information to the observer. As Bell may have seen, it is therefore not a "measurement" by a conscious observer that is needed to "collapse" wave functions. It is the irreversible interaction of the quantum system with another system, whether quantum or approximately classical. The interaction must be one that changes the information about the system. And that means a local entropy decrease and overall entropy increase to make the information stable enough to be observed by an experimenter and therefore be a measurement.
References

Against Measurement (PDF)
Beables for Quantum Field Theory (PDF)
On the Einstein-Podolsky-Rosen Paradox (PDF)
On the Impossible Pilot Wave (PDF)
Are There Quantum Jumps? (PDF, Excerpt)
BBC Interview (PDF, Excerpt)