
GianCarlo Ghirardi
GianCarlo Ghirardi is the intellectual leader of a group of physicists who want to modify the linear Schrödinger equation, adding nonlinear stochastic terms to cause the "collapse" of the wave function during a quantum measurement.
Ghirardi and his colleagues were motivated by the mysteries of nonlocality and entanglement that appear in Einstein-Podolsky-Rosen experiments, and especially by John Bell's enthusiastic support for their work.
Bell's theorem and the Bell inequalities pointed the way to the many physical experiments that have confirmed the predictions of standard quantum mechanics, but Bell and others were never satisfied that standard quantum mechanics could "explain" what is "really" going on in the microscopic world (despite the extraordinary accuracy of the theory).
The most famous proposal for a solution to these mysteries was Albert Einstein's original criticism that quantum mechanics was "incomplete" and that additional information was needed to restore his intuition of "local reality." David Bohm pursued the search for "hidden variables" that could restore locality and determinism to physics, but the many experimental tests that followed Bell's suggestions have generally ruled out any such local hidden variables.
Ghirardi's work is focused more on an explanation for the sudden "collapse" of the wave function. Bell had criticized the work of John von Neumann, who could not locate the boundary between the quantum system and the classical measuring apparatus. Werner Heisenberg called this a "cut" or "Schnitt" between quantum and classical worlds. Von Neumann said the cut could be located anywhere from the atomic system up to the mind of the conscious observer, injecting an element of subjectivity into the understanding. John Bell called this movable boundary a "shifty split," and illustrated it with a sketch (below).
Ghirardi (with his principal colleagues, Alberto Rimini and Tullio Weber) looks to explain the collapse that accompanies measurement as a consequence of additional random terms which they add to the Schrödinger equation.
The GRW scheme represents a proposal aimed to overcome the difficulties of quantum mechanics discussed by John Bell in his article "Against Measurement." The GRW model is based on the acceptance of the fact that the Schrödinger dynamics, governing the evolution of the wavefunction, has to be modified by the inclusion of stochastic and nonlinear effects. Obviously these modifications must leave practically unaltered all standard quantum predictions about microsystems.
To be more specific, the GRW theory admits that the wavefunction, besides evolving through the standard Hamiltonian dynamics, is subjected, at random times, to spontaneous processes corresponding to localisations in space of the microconstituents of any physical system. The mean frequency of the localisations is extremely small, and the localisation width is large on an atomic scale. As a consequence no prediction of standard quantum formalism for microsystems is changed in any appreciable way.
The merit of the model is in the fact that the localisation mechanism is such that its frequency increases as the number of constituents of a composite system increases. In the case of a macroscopic object (containing an Avogadro number of constituents) linear superpositions of states describing pointers 'pointing simultaneously in different directions' are dynamically suppressed in extremely short times. As stated by John Bell, in GRW 'Schrödinger's cat is not both dead and alive for more than a split second'.
(Speakable and Unspeakable in Quantum Mechanics, Second Edition, p.229)
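The scaling at the heart of this GRW argument can be made concrete with a small numerical sketch. This is an editorial illustration, not part of the quoted text; the parameter values are GRW's originally proposed localization rate of roughly 10⁻¹⁶ per second per constituent and a localization width of roughly 10⁻⁷ m, and should be treated as order-of-magnitude figures.

```python
# Sketch of the GRW scaling argument: the spontaneous-localization rate
# per particle is tiny, but for a macroscopic body the rates add, so
# superpositions of distinct pointer positions decay almost instantly.
# Parameter value below is GRW's original proposal (illustrative only).

LAMBDA_PER_PARTICLE = 1e-16   # mean localization rate per constituent (1/s)
AVOGADRO = 6.022e23           # constituents in a macroscopic pointer

def mean_time_between_hits(n_particles):
    """Mean time before some constituent suffers a spontaneous localization."""
    return 1.0 / (LAMBDA_PER_PARTICLE * n_particles)

# A single particle: one hit every ~1e16 s (hundreds of millions of years),
# so the standard quantum predictions for microsystems are untouched.
t_micro = mean_time_between_hits(1)

# A pointer with ~Avogadro's number of constituents: a hit within ~1e-8 s,
# so the cat is not both dead and alive "for more than a split second".
t_macro = mean_time_between_hits(AVOGADRO)

print(f"single particle:     {t_micro:.1e} s between localizations")
print(f"macroscopic pointer: {t_macro:.1e} s between localizations")
```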
John Bell attacked Max Born's statistical interpretation of quantum mechanics and praised GRW in his 1989 article "Against Measurement":
In the beginning, Schrödinger tried to interpret his wavefunction as giving somehow the density of the stuff of which the world is made. He tried to think of an electron as represented by a wavepacket — a wavefunction appreciably different from zero only over a small region in space. The extension of that region he thought of as the actual size of the electron — his electron was a bit fuzzy. At first he thought that small wavepackets, evolving according to the Schrödinger equation, would remain small. But that was wrong. Wavepackets diffuse, and with the passage of time become indefinitely extended, according to the Schrödinger equation. But however far the wavefunction has extended, the reaction of a detector to an electron remains spotty. So Schrödinger's 'realistic' interpretation of his wavefunction did not survive.
Then came the Born interpretation. The wavefunction gives not the density of stuff, but gives rather (on squaring its modulus) the density of probability. Probability of what exactly? Not of the electron being there, but of the electron being found there, if its position is 'measured'.
Why this aversion to 'being' and insistence on 'finding'? The founding fathers were unable to form a clear picture of things on the remote atomic scale. They became very aware of the intervening apparatus, and of the need for a 'classical' base from which to intervene on the quantum system. And so the shifty split.
The kinematics of the world, in this orthodox picture, is given by a wavefunction (maybe more than one?) for the quantum part, and classical variables — variables which have values — for the classical part: (Ψ(t, q, ...), X(t), ...). The Xs are somehow macroscopic. This is not spelled out very explicitly. The dynamics is not very precisely formulated either. It includes a Schrödinger equation for the quantum part, and some sort of classical mechanics for the classical part, and 'collapse' recipes for their interaction.
It seems to me that the only hope of precision with the dual (Ψ, x) kinematics is to omit completely the shifty split, and let both Ψ and x refer to the world as a whole. Then the xs must not be confined to some vague macroscopic scale, but must extend to all scales. In the picture of de Broglie and Bohm, every particle is attributed a position x(t). Then instrument pointers — assemblies of particles — have positions, and experiments have results. The dynamics is given by the world Schrödinger equation plus precise 'guiding' equations prescribing how the x(t)s move under the influence of Ψ. Particles are not attributed angular momenta, energies, etc., but only positions as functions of time. Peculiar 'measurement' results for angular momenta, energies, and so on, emerge as pointer positions in appropriate experimental setups. Considerations of KG [Kurt Gottfried] and vK [Norman van Kampen] type, on the absence (FAPP) [For All Practical Purposes] of macroscopic interference, take their place here, and an important one, in showing how usually we do not have (FAPP) to pay attention to the whole world, but only to some subsystem and can simplify the wavefunction... FAPP.
The Born-type kinematics (Ψ, X) has a duality that the original 'density of stuff' picture of Schrödinger did not. The position of the particle there was just a feature of the wavepacket, not something in addition. The Landau—Lifshitz approach can be seen as maintaining this simple nondual kinematics, but with the wavefunction compact on a macroscopic rather than microscopic scale. We know, they seem to say, that macroscopic pointers have definite positions. And we think there is nothing but the wavefunction. So the wavefunction must be narrow as regards macroscopic variables. The Schrödinger equation does not preserve such narrowness (as Schrödinger himself dramatised with his cat). So there must be some kind of 'collapse' going on in addition, to enforce macroscopic narrowness. In the same way, if we had modified Schrödinger's evolution somehow we might have prevented the spreading of his wavepacket electrons. But actually the idea that an electron in a ground-state hydrogen atom is as big as the atom (which is then perfectly spherical) is perfectly tolerable — and maybe even attractive. The idea that a macroscopic pointer can point simultaneously in different directions, or that a cat can have several of its nine lives at the same time, is harder to swallow. And if we have no extra variables X to express macroscopic definiteness, the wavefunction itself must be narrow in macroscopic directions in the configuration space. This the Landau—Lifshitz collapse brings about. It does so in a rather vague way, at rather vaguely specified times.
In the Ghirardi—Rimini—Weber scheme (see the contributions of Ghirardi, Rimini, Weber, Pearle, Gisin and Diosi presented at 62 Years of Uncertainty, Erice, 5-14 August 1989) this vagueness is replaced by mathematical precision. The Schrödinger wavefunction, even for a single particle, is supposed to be unstable, with a prescribed mean life per particle, against spontaneous collapse of a prescribed form. The lifetime and collapsed extension are such that departures from the Schrödinger equation show up very rarely and very weakly in few-particle systems. But in macroscopic systems, as a consequence of the prescribed equations, pointers very rapidly point, and cats are very quickly killed or spared.
The orthodox approaches, whether the authors think they have made derivations or assumptions, are just fine FAPP — when used with the good taste and discretion picked up from exposure to good examples. At least two roads are open from there towards a precise theory, it seems to me. Both eliminate the shifty split. The de Broglie—Bohm-type theories retain, exactly, the linear wave equation, and so necessarily add complementary variables to express the non-waviness of the world on the macroscopic scale. The GRW-type theories have nothing in their kinematics but the wavefunction. It gives the density (in a multidimensional configuration space!) of stuff. To account for the narrowness of that stuff in macroscopic dimensions, the linear Schrödinger equation has to be modified, in this GRW picture by a mathematically prescribed spontaneous collapse mechanism.
The big question, in my opinion, is which, if either, of these two precise pictures can be redeveloped in a Lorentz invariant way.
...All historical experience confirms that men might not achieve the possible if they had not, time and time again, reached out for the impossible. (Max Weber)
...we do not know where we are stupid until we stick our necks out. (R. P. Feynman)
(Speakable and Unspeakable in Quantum Mechanics, Second Edition, pp. 227-230)
Information physics locates Bell's "shifty split" without making the ad hoc GRW additions to the linear Schrödinger equation. The "moment" at which the boundary between the quantum and classical worlds appears is the moment that irreversible observable information enters the universe.
Although GRW make the wave function collapse, and their mathematics is "precise," they still cannot predict exactly when the collapse occurs. It is simply random, with the probability very high in the presence of macroscopic objects. In the information physics solution to the problem of measurement, the timing and location of the Heisenberg "cut" are identified with the interaction between the quantum system and the classical apparatus that leaves the apparatus in an irreversible stable state, providing information to the observer.
GianCarlo Ghirardi on Measurement
In his elegantly written and nicely illustrated 2005 book, Sneaking a Look at God's Cards, Ghirardi starts his discussion of the measurement problem by noting that the principle of superposition of states means that some observables lack a precise expectation value, so we can speak only of the probability of outcomes. The most characteristic example, he says, is the diagonally polarized photon in a linear combination of vertical and horizontal polarization states discussed in the case of Dirac's Three Polarizers.
| d > = (1/√2) | v > + (1/√2) | h >
Ghirardi asks whether a macroscopic system can be in such a superposition (think of Schrödinger's live and dead cats), which he describes as
| ? > = (1/√2) | here > + (1/√2) | there > (15.2)
He says (p. 348):
The conclusion is obvious but unsettling: if we admit that the theory (or rather, simply, the superposition principle) has universal validity, and thus governs as well the behavior of macrosystems, it is inescapable to accept that, in principle, macrosystems too can be found in superpositions of states corresponding to precise and different macroscopic properties, with the consequence that these systems cannot legitimately be thought to possess any of these properties. In our specific case, a macroscopic object "is capable of not having the property of being in some place."
With this conclusion, Ghirardi begins his discussion of measurement.
15.2. THE QUANTUM THEORY OF MEASUREMENT
To illustrate how inevitably we are confronted — once the limitless validity of the principle of superposition is assumed — with the practical possibility of occurrence of states like 15.2, let us now take a look at the thorniest and most controversial problem of the theory, the process of measurement. The reader needs to be warned that the problem cannot be avoided, and that it inevitably involves macroscopic systems. In fact, owing to the huge difference of scale between human beings and the microscopic systems we want to study, any attempt to obtain information about them requires a process of amplification that strictly correlates the microscopic properties to situations that are macroscopically perceivable and hence distinguishable to our perception. The problem is traditionally approached with reference to what is technically known as von Neumann's ideal measurement process, named after the scientist who first formulated it in precise terms.
For this purpose we can refer to experiments of the kind discussed in chapters 3 and 4, in which photons with definite states of polarization were sent into a birefringent crystal, and we enrich the analysis by including the dynamics of the detecting apparatus. In the discussion of those chapters we simply said that "the detector detects (does not detect) the arrival of a photon," insofar as this was the only relevant information for the analysis we were then making. Now, however, we must inquire into the precise sense of this assertion and investigate the physical processes to which reference is made in the cases of concrete experiments carried out in laboratories. A reasonable and sufficiently realistic description of what happens (as pointed out in chapter 6) would be the following: the purpose of a measurement is to infer something about the system being measured from the outcome of an appropriate experiment. It follows that the interaction between microsystem and measuring apparatus should bring about a macroscopic change of the apparatus in such a way that, by observing the state of the apparatus after the process, we would be able to obtain the desired information. Instead of what we did in the foregoing chapters, i.e., consider two different detectors placed, respectively, on the ordinary and extraordinary beams, this time we will imagine the apparatus as a box (Figure 15.1) with two regions on it, equally sensitive to incoming photons, and we will suppose that the box includes an instrument with a macroscopic pointer that can be in three positions, designated as -1, 0, and +1.
FIGURE 15.1. Schematic representation of a measuring apparatus with the pointer in position 0, which indicates that it is ready to register the arrival of a photon either in the upper shaded region U or in the lower shaded region L. In response, the pointer of the apparatus will move, once the interaction takes place, to position -1 or +1, respectively.
The zero position of the pointer is the state of the apparatus before it has interacted with a photon and can be designated as "the state of the apparatus ready to carry out a measurement." We will also have to suppose that the interaction of the apparatus with the photon is such as to permit us to recognize from the final position of the pointer, if the photon struck the apparatus in the spot L (for lower), corresponding to the ordinary ray (+1), or the spot U (for upper), corresponding to the extraordinary ray (-1).
To take up the problem it will be convenient to analyze first of all the case in which a vertically polarized photon is sent into a birefringent crystal placed in front of the apparatus. As we know, the photon will be propagated along the ordinary ray and will end up striking the region L of the apparatus. Alternatively, we will consider a horizontally polarized photon that we know will with certainty follow the extraordinary ray and will end up striking region U of the apparatus (Fig. 15.2a,b).
FIGURE 15.2. Schematic representation of the effect obtained by activating an ideal measuring apparatus by states having the precise polarizations that the apparatus is set up to reveal.
We designate as | A_{0} >, | A_{+1} >, and | A_{-1} > the states of the apparatus in which the pointer points, respectively, to the three indicated positions. For simplicity we will suppose, with von Neumann, that the photon passes through the apparatus without changing its state.
The quantum evolution of the two states can be represented as follows:
| V, A_{0} > => | V, A_{+1} >, (15.5a)
| H, A_{0} > => | H, A_{-1} >, (15.5b)
in which the final states describe, respectively, a vertically polarized photon correlated with the state in which the pointer of the apparatus indicates +1, and a horizontally polarized photon correlated with the state in which the apparatus indicates -1.
We are now prepared to show how the problem of the quantum theory of measurement arises. First, since the two processes described above involve physical systems that are completely normal for the theory in question (the apparatus is composed of electrons, protons, neutrons, etc.), we must inevitably assume that the whole evolution symbolized by the arrows in the formulas above is nothing other than the Schrödinger evolution for the system "photon + apparatus." But the fundamental characteristic of quantum evolution, once again, consists in its linear character: the evolution of a state which is the linear superposition of two initial states is the same as the superposition of what has evolved out of the same initial states. We now prepare an initial state that is a linear combination of the two initial states considered above: for this purpose it is enough to send a photon with a polarization plane of 45° into a birefringent crystal. Then, for the initial state we have the following:
| Initial > = (1/√2) [ | V > + | H > ] | A_{0} >
= (1/√2) [ | V, A_{0} > + | H, A_{0} > ] (15.6)
Now this, thanks to the linear character of the dynamics, will evolve into the same linear combination of the two final states of the equations (15.5a,b):
| Final > = (1/√2) [ | V, A_{+1} > + | H, A_{-1} > ] (15.7)
The final state (Figure 15.3) is a superposition of two states that are macroscopically different, since in the first of the two, the macroscopic pointer of the apparatus points to +1, and in the second it points to -1. The argument shows in a simple way how to realize a state of the type considered in the preceding section.
FIGURE 15.3. In the case when the apparatus is set off by a polarization state that is the superposition of the states that it is programmed to register, and under the assumption that the measuring process is governed by the linear laws of the theory, one must conclude that the final state does not correspond to an apparatus with the pointer in a definite position. The quantum mechanical ambiguity associated with the superposition becomes, as it were, transmitted to the macroscopic system.
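The linearity argument Ghirardi describes can be checked numerically. The sketch below is an editorial illustration, not from the book: the six-dimensional basis ordering and the permutation matrix standing in for the von Neumann measurement unitary are our own choices. It verifies that any linear evolution satisfying (15.5a,b) must carry the initial state (15.6) into the entangled superposition (15.7).

```python
import numpy as np

# Basis order (our choice): |V,A0>, |V,A+1>, |V,A-1>, |H,A0>, |H,A+1>, |H,A-1>.
# A permutation matrix is a convenient stand-in for the measurement unitary:
# it sends |V,A0> -> |V,A+1> and |H,A0> -> |H,A-1>, per eqs. (15.5a,b),
# while leaving the remaining basis states alone.
U = np.eye(6)
U[[0, 1]] = U[[1, 0]]   # swap |V,A0> <-> |V,A+1>   (15.5a)
U[[3, 5]] = U[[5, 3]]   # swap |H,A0> <-> |H,A-1>   (15.5b)

def ket(i):
    """Basis vector i of the 6-dimensional photon + apparatus space."""
    e = np.zeros(6)
    e[i] = 1.0
    return e

# Initial state (15.6): (1/sqrt 2)(|V> + |H>) tensor |A0>
initial = (ket(0) + ket(3)) / np.sqrt(2)

# Because the dynamics is linear, the final state is forced to be the
# superposition (15.7) -- an entangled state of photon and pointer,
# not a state with the pointer in a definite position.
final = U @ initial
expected = (ket(1) + ket(5)) / np.sqrt(2)   # (1/sqrt 2)(|V,A+1> + |H,A-1>)
assert np.allclose(final, expected)
```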
A few observations:
1. As discussed in Section 15.1, it is extremely problematic to find some physically intelligible meaning for state (15.7).
2. This state is an entangled state of the photon and the apparatus and as such it is not legitimate to attribute definite polarization properties to the photon or definite positions to the pointer of the apparatus.
3. The treatment is oversimplified. The states of the apparatus that appear in the preceding formulas concern a macroscopic system, and, as we have already pointed out, the specification for the position of the pointer is not enough to characterize them completely. To be rigorous we would have to do what we did at first, namely, add points that would refer to all the other degrees of freedom of a system as complex as this. Nevertheless, as observed in the preceding section, this does not have any consequences for what concerns the fact that any assertion about the position of the pointer is illegitimate, and it is even not permitted us to think that it has a precise position. The relevant physical implications of the enormous complexity of the apparatus will emerge when we pose the problem of how to "verify in the laboratory" that in fact, at the end, there is a superposition of macroscopically distinguishable states. But this does not change in the least the objective "given" that such a superposition must be considered present whenever it is assumed that we have at our disposal an apparatus that reliably allows the identification of vertical or horizontal polarization of a photon, and that the interaction between the measured system and the measuring apparatus is a process that obeys the general laws of the theory.
4. As already mentioned, the difficulties that appear do not derive from our idealizations or simplifications introduced into the interpretation. This has been emphasized repeatedly in the literature by John Stewart Bell and Bernard d'Espagnat among others. To my knowledge, the most mathematically rigorous demonstration of the fact that the very possibility of using measuring apparatuses with a high degree of reliability implies the indefiniteness of some of their macroscopic properties, appears in a paper which I recently wrote in collaboration with my doctoral student A. Bassi.
15.3. QUANTUM EVOLUTION AND THE REDUCTION OF THE WAVE PACKET
I would now like to direct the reader's attention to the way the orthodox interpretation gets around the difficulties we have outlined in the foregoing sections. Why would that interpretation not have to face the embarrassing occurrence of superpositions of states that are distinguishable at the macroscopic level? In order to understand this point, we need to recall that the theory in its general formulation includes a postulate that becomes operative every time a process of measurement is carried out, and that is the postulate of the reduction (or collapse) of the wave packet. Before we carried out our analysis, every reader, when faced with the situation summarized by the equation (15.7), would have replied correctly about the ultimate situation of the experiment, asserting that in an entirely casual and unpredictable way there would occur either the situation with the photon polarized vertically and the apparatus registering +1, or the situation in which the photon is polarized horizontally, with the apparatus showing -1. And this is in fact what we would experience if we actually looked at the apparatus. No physicist would be embarrassed about this: going into the laboratory, and looking at the pointer on the gauge, he (or she) would find it definitely in one position or the other.
It is consequently very tempting to say that the problem we are discussing is a pseudoproblem: the theory has already given us a solution that does not cause any embarrassing situation, and our own experience confirms the correctness of such an assessment. And so what can be wrong with this simple, clear argument? The problem emerges from the analysis itself: the application of a principle (linearity), whose unlimited validity is assumed by the theory, to the description of the measuring process, leads inevitably to the conclusion that the final state of system + apparatus is the embarrassing state that appears on the right side of equation (15.7), while the postulate of the reduction of the wave packet says that the final state is something else — namely, one of the nonproblematic terms of the sum (for each of which it can be said that the pointer of the apparatus is in a definite position). The conclusion is clear: the postulate of the reduction of the wave packet logically contradicts the hypothesis that the evolution of the system we are investigating is in fact governed by quantum mechanics. In other words, the theory is not in a position to explain how it could ever come about, in an interaction between a system and a measuring apparatus, that the peculiar process leading to a definite outcome could emerge — that is, how an apparatus could ever behave in the manner it is supposed to do!
It is important to emphasize the radical differences in the description of the measuring process when we use the evolution equation of the theory or when we resort to the postulate of the reduction of the wave packet. As we have repeatedly observed and as is clearly illustrated by the equations (15.6) and (15.7), the quantum evolution is perfectly deterministic and linear: the initial state determines the final state without any ambiguity, and the sums of the initial states evolve into the corresponding sums of the final states. By contrast, the reduction of the wave packet is a fundamentally stochastic and nonlinear physical process: in general, the outcomes of measurement are unforeseeable, and since the relative probabilities depend upon the square of the wave function, phenomena of interference present themselves, which in turn imply, as we know well, that the probabilities for the various possible outcomes in the case of a superposition are not the sum of the probabilities associated with the terms of the superposition itself. Furthermore, while the quantum dynamics is perfectly reversible, as in the classical case, the process of the reduction of the wave packet is fundamentally irreversible.
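The quadratic, nonlinear character of the Born probabilities that this paragraph points to can be illustrated in the two-dimensional polarization space. This is an editorial sketch, not from the book: it shows that the probability assigned to a superposition is not the sum of the probabilities assigned to its terms, because the cross (interference) terms survive the squaring.

```python
import numpy as np

V = np.array([1.0, 0.0])     # vertical polarization
H = np.array([0.0, 1.0])     # horizontal polarization
D = (V + H) / np.sqrt(2)     # 45-degree superposition

def prob(outcome, state):
    """Born rule: squared modulus of the overlap <outcome|state>."""
    return abs(np.dot(outcome, state)) ** 2

# A 45-degree analyzer passes the superposition |D> with certainty...
p_superposition = prob(D, D)                     # = 1.0

# ...but a statistical mixture of |V> and |H> (half each) would pass
# only half the time: the interference term makes the difference.
p_mixture = 0.5 * prob(D, V) + 0.5 * prob(D, H)  # = 0.5

print(p_superposition, p_mixture)
```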
We can ask ourselves then: does this singular fact, this internal inconsistency of the theory represent in itself an insurmountable difficulty? Of course not: it simply leads us to conclude that we have to allow for the fact that there are systems in nature that do not obey the laws of the theory. Like any physical theory, even quantum mechanics will have a limited field of application. And this represents in a certain sense (although many ambiguities remain) the orthodox position: two principles of evolution must be adopted: one governing all the processes that involve interactions between microsystems which is described by Schrödinger's equation; the other which is to be used to describe measurement processes and is accounted for by the postulate of the reduction of the wave packet. The problem that remains, and which is of no small account, is that of succeeding in identifying in a nonambiguous way the boundary line between these two levels of the real, which require two essentially different and irreconcilable physical descriptions.
We have now reached the point where we can face the so-called problem of the macro-objectification of properties: how, when, and under what conditions do definite macroscopic properties emerge (in accordance with our daily experience) for systems that, when all is said and done, we have no good reasons for thinking are fundamentally different from the microsystems of which they are composed? To appreciate fully the relevance of this question, and to prepare ourselves to analyze the multiple and interesting proposals advanced as a way of getting around such difficulties, it will be useful to begin by deepening our analysis.
15.6. THE AMBIGUOUS BOUNDARY
What, in essence, is the crucial problem we are facing? As we repeatedly emphasized in the early chapters of the book, the theory has been formulated in such a way as not to speak in general of properties possessed by systems but only of the probabilities of obtaining certain outcomes if measurements are carried out that intend to identify the values of the properties we are interested in. But if the theory has a universal validity an endless process begins: in order to ascertain the properties of a system we will have to make it interact with an apparatus, and this will react differently according to the potentialities of the system (except in the case where the system is in a state yielding a certain measurement result), and then in their turn the potential outcomes of the measurements will not be realized, and will become real only if a measurement is carried out to ascertain them, and so on, until what end is reached? This reasoning can in principle be extended until it involves the entire universe, and it implies that even the universe would have only potentialities. But then the question is, "Who measures the universe, and brings it out of the limbo of the potentialities?"
It is time to mention another fact, until now only treated marginally, which makes the problem even more serious. It is absolutely true (unless we completely change our perspective and adopt some rather fascinating but science-fiction positions to be discussed later) that measurements have outcomes and therefore that at a certain stage a reduction actually takes place, i.e., that a passage from the potentialities to the actualities occurs. Indeed, for every process of measurement that interests us, at a certain level there must certainly intervene some observer whose perceptions are definite. I would like to call the attention of the reader to the fact that if we think, for example, that the last link in the von Neumann chain is a conscious observer, then the two states | Z_{+1} > and | Z_{-1} > that appear in our equations are simply abbreviations for states that could be expressed more appropriately as follows:
| the conscious observer sees a macroscopic gauge that points to +1 >
and
| the conscious observer sees a macroscopic gauge that points to -1 >,
or alternatively
| the conscious observer reads the word VERTICAL on the computer screen >
and
| the conscious observer reads the word HORIZONTAL on the computer screen >.
We do not have any sensate experience of a situation where there are the potentialities of having read VERTICAL and of having read HORIZONTAL instead of a very precise actualization of one of these alternatives. We know very well that every time we watch one of the computers of the ensemble we read something definite and we do not end up in a state of mental confusion in which we have "potentially" read both expressions, leaving it ambiguous what we have actually seen.
This simple observation tells us that at a certain point between the level of microscopic events that are doubtlessly governed by the principle of superposition (microsystems show the effects of interference) and the level of the perception on the part of a conscious observer, the linearity has to be violated. Where do we locate this boundary? This is a crucial question for the theory, the problem that, as Bell says in the quotation given at the beginning of this chapter, makes it impossible for anyone to know exactly what the theory is saying about any specific situation.
Some qualifications are in order. When we speak of a precise boundary, nobody is pretending that there exists some perfectly defined criterion of demarcation that allows us to say that up to here quantum mechanics is valid; from this point on, the reduction of the wave packet takes over. By a "precise boundary" is meant that the theory ought to contain at least some parameter that defines a scale that in turn would permit us to evaluate when it is legitimate to use the linear equations, and when it is only an approximation to use them, and when it is just plain erroneous to use them. An example should clarify this idea. Let us consider Newtonian mechanics and the theory of relativity. In this case it is easy to identify a precise parameter: the universal constant that is so characteristic of the theory of relativity, namely, the speed of light. This serves extraordinarily well for defining the (limited) area of validity of classical mechanics. If a body has a velocity much less than the velocity of light, Newton's theory is appropriate, but as the velocity increases it becomes increasingly less precise and fails completely when it comes to describing bodies that travel at velocities nearing that of light. We then can ask ourselves: what parameter plays a role in quantum mechanics analogous to the speed of light in classical mechanics? The simple and clear answer is that nobody has as yet been able to find it. The foregoing arguments might seem to point toward the number of particles as a possible candidate, but that does not work: there are macroscopic systems that require a quantum treatment to be correctly described. Superconductors could be mentioned, which show typical "tunnel effects" involving a macroscopic number of constituents. 
But it must be emphasized that to account for the internal structural properties of even a simple macroscopic crystal, or the behavior of electronic chips, or the functioning of transistors, etc., a quantum treatment and the principle of superposition are indispensable.
I would like to conclude this section by underlining the essential role that the ambiguity about the "boundary" has played in the debate over the conceptual basis of the theory. I will then sum up the matter through reference to a stimulating picture drawn by John Bell for one of his last lectures: an image that has the advantage of going right to the core of the problem.
As regards the first point, it should be enough to recall the debate between Bohr and Einstein that we analyzed in detail in chapter 7. The reader may recall the escape strategy Bohr used to defend his position against Einstein's observations that a precise measurement of the state of a macroscopic object (see Figures 7.4 and 7.6) would lead to a violation of the uncertainty principle. Bohr's point consisted in asserting that it is only decisive that, in contrast to the proper measuring instruments, these bodies [i.e., moving diaphragms or pointers of a balance], together with the particles, would in such a case constitute the system to which the quantum mechanical formalism has to be applied. But what, in Bohr's view, would make the diaphragm or the pointer different from other systems used to determine the states of these objects? That is the real mystery. We have indicated that if Einstein had insisted on his requirement of attributing definite properties up to an appropriate point (of what we can call the von Neumann chain) he would have constrained Bohr to accept that the entire universe requires a quantum treatment. While discussing this debate previously, I referred in passing to the fact that von Neumann himself, and Wigner, were led to relocate the boundary between quantum and classical, between reversible and irreversible, at the act of perception on the part of a conscious observer. But it must be noted that even this last solution is not without ambiguity. In fact, the problem is simply transferred to that of defining precisely what is meant by a conscious observer, a concept that the present state of our knowledge does not permit us to define unambiguously.
I would now like to comment on the picture drawn by J. S. Bell and presented by him at two conferences: the first at the University of California at Los Angeles in March 1988, on the occasion of the seventieth birthday of the Nobel Prize winner Julian Schwinger; the second at Trieste in November 1989, on the occasion of the twenty-fifth anniversary of the founding of the International Center for Theoretical Physics by the International Atomic Energy Agency at Vienna. Bell analyzed the process of diffraction, through a slit, of a beam containing a certain number N of electrons, and the formation of the image on a photographic film placed behind the slit. He observed that it makes no sense to treat the electrons as if they were punctiform bodies that follow precise trajectories; they must be described by means of a diffracting wave function (Figure 15.4).
FIGURE 15.4. Where are we to place the boundary between the vague microscopic world and the precise world of our sensory experience? Here is an image John Bell proposed to illustrate this deep question.
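Bell's scenario can be mimicked numerically. The sketch below (all parameters are illustrative, in arbitrary units, and not drawn from the text) samples N "electron dots" from a single-slit Fraunhofer diffraction intensity by rejection sampling, showing how a smooth wave function nevertheless yields a discrete pattern of dots on the film.

```python
import math
import random

def slit_intensity(x, slit_width=1.0, wavelength=0.1, screen_distance=10.0):
    """Fraunhofer single-slit intensity |sinc(beta)|^2 at screen position x.
    Normalized so the central maximum (x = 0) has intensity 1."""
    beta = math.pi * slit_width * x / (wavelength * screen_distance)
    if beta == 0.0:
        return 1.0
    return (math.sin(beta) / beta) ** 2

def sample_dots(n, x_max=3.0, seed=0):
    """Draw n positions by rejection sampling from the intensity profile.
    Each accepted sample plays the role of one black dot on the film."""
    rng = random.Random(seed)
    dots = []
    while len(dots) < n:
        x = rng.uniform(-x_max, x_max)
        if rng.random() < slit_intensity(x):  # accept with probability I(x)
            dots.append(x)
    return dots

dots = sample_dots(500)
# With these parameters the first diffraction zero falls at x = 1.0.
central = sum(1 for x in dots if abs(x) < 1.0)
print(f"{central} of {len(dots)} dots fall inside the central maximum")
```

The individual dots are random, yet their statistics reproduce the diffraction pattern; what the sketch cannot do, of course, is say why each run produces one definite set of dots rather than a superposition of all of them, which is exactly the problem Bell's picture raises.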
Since the number of electrons is very large but still finite, what we see on the film will be a series of N black dots that correspond to the positions in which they are, so to speak, "revealed" by that object, which can be thought of as a measuring device. In making this assertion, we have made a logical jump from the language of wave functions, and of the potentialities of a microsystem, to the language of the reality of the dots. But we know very well (recall Figure 3.10) that because the wave function of each electron is appreciably different from zero at all the points of the diffraction figure, in reality, if we were to treat the photographic film as a quantum system (and what would be wrong in doing so?), we would have a linear superposition of states. In one of them, for example, each of the N electrons is in a precise position among the infinitely many that are possible, and the film is in the state in which the activated grains of silver bromide are really those that correspond to the positions indicated. But there exists an infinity of other possible outcomes of this process, each one of which would correspond to a different distribution of the N electrons in the central region of the spot and to a different collection of activated grains of the emulsion. If we were truly interested in describing the process, we would have to analyze the photographic process in detail, and treat the entire electron + film system as a genuinely quantum system. This would mean that we could no longer speak of spots in precise positions, but only of the potentiality that the film would be exposed at one set of N points rather than at another. We have thus displaced the boundary from the microcosm of the electron beam to the macrocosm of the film, as shown in the second stage of the figure. But we cannot stop here. When we watch the film, our own perceptive apparatus enters the picture.
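Schematically, if the film is treated as a quantum system, the linear evolution leaves electrons and film in an entangled superposition rather than in any one definite dot pattern. A formal sketch (the notation here is mine, with each index k labeling one possible distribution of the N impact points and |grains_k⟩ the corresponding configuration of activated grains):

```latex
\begin{equation*}
  |\Psi\rangle \;=\; \sum_{k} c_k \,
  \bigl|\, x_1^{(k)}, x_2^{(k)}, \ldots, x_N^{(k)} \bigr\rangle_{\text{electrons}}
  \otimes
  \bigl|\, \mathrm{grains}_k \bigr\rangle_{\text{film}}
\end{equation*}
```

The squared amplitude |c_k|² gives the probability of the k-th dot pattern, but no single term is singled out by the linear dynamics; that selection is precisely the "logical jump" from potentialities to actual dots.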
But is there any reason to think that our eye's retina is not also a physical system, and as such is subject to the linear laws of the theory? If we want to describe which signals actually reach the brain, we would once more have to relocate the boundary between the vague quantum world and the world of definite events, and we are then led to place it between the optic nerve and the brain. But even the brain is a physical system constituted of protons, neutrons, and electrons, and traversed by electrochemical reactions and the rest—processes that we have no reason to think are not governed by our formalism. And thus it makes perfect sense to relocate the boundary once again between the brain and the mind, as in the last sketch of the figure.