Roger Penrose thinks that new physical phenomena, as yet unobserved, may be responsible for consciousness and free will. In particular, he has developed a theory of quantum gravity, which he first called "correct quantum gravity" (CQG) and later "objective reduction" (OR), that allows a superposition of quantum states to collapse into a single state without the randomness or indeterminacy of standard quantum mechanics. Penrose thinks the mysteries of consciousness and free will can be explained by quantum mysteries.
In his 1989 book The Emperor's New Mind, Penrose considers the role of the unconscious mind in generating alternative possibilities for original thoughts.
What, then, is my view as to the role of the unconscious in inspirational thought? I admit that the issues are not so clear as I would like them to be. This is an area where the unconscious seems indeed to be playing a vital role, and I must concur with the view that unconscious processes are important. I must agree, also, that it cannot be that the unconscious mind is simply throwing up ideas at random. There must be a powerfully impressive selection process that allows the conscious mind to be disturbed only by ideas that 'have a chance'. I would suggest that these criteria for selection — largely 'aesthetic' ones, of some sort — have been already strongly influenced by conscious desiderata (like the feeling of ugliness that would accompany mathematical thoughts that are inconsistent with already established general principles).
In relation to this, the question of what constitutes genuine originality should be raised. It seems to me that there are two factors involved, namely a 'putting-up' and a 'shooting-down' process. I imagine that the putting-up could be largely unconscious and the shooting-down largely conscious. Without an effective putting-up process, one would have no new ideas at all. But, just by itself, this procedure would have little value. One needs an effective procedure for forming judgements, so that only those ideas with a reasonable chance of success will survive. In dreams, for example, unusual ideas may easily come to mind, but only very rarely do they survive the critical judgements of the wakeful consciousness. (For my own part, I have never had a successful scientific idea in a dreaming state, while others, such as the chemist Kekule in his discovery of the structure of benzene, may have been more fortunate.) In my opinion, it is the conscious shooting-down (judgement) process that is central to the issue of originality, rather than the unconscious putting-up process; but I am aware that many others might hold to a contrary view.
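Penrose's "putting-up"/"shooting-down" picture is, in computational terms, a generate-and-test procedure. The sketch below is purely illustrative (the selection criterion and all names are invented for the example), but it shows why the second, critical stage carries the weight: almost everything proposed gets discarded.

```python
import random

# A caricature of Penrose's two-stage picture of originality:
# an unconscious "putting-up" stage proposes candidates blindly, and a
# conscious "shooting-down" stage rejects those that fail the critic's test.

def put_up(n_candidates, rng):
    """Stage 1: blindly propose candidate 'ideas' (here, random integers)."""
    return [rng.randint(0, 10_000) for _ in range(n_candidates)]

def shoot_down(candidates, is_promising):
    """Stage 2: critical judgement filters out almost everything."""
    return [c for c in candidates if is_promising(c)]

rng = random.Random(42)
candidates = put_up(1000, rng)
# Toy 'aesthetic' criterion: keep only perfect squares.
survivors = shoot_down(candidates, lambda c: int(c ** 0.5) ** 2 == c)
print(f"{len(candidates)} proposed, {len(survivors)} survive judgement")
```

The asymmetry matches Penrose's point: the put_up stage contributes nothing of value by itself, and the shoot_down stage is where the selection criteria (his "aesthetic" desiderata) actually reside.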
In his 1989 book The Emperor's New Mind, Penrose suggests a two-stage process, but he is skeptical of the value of randomness in the first step. His thinking follows that of Jacques Hadamard and Henri Poincaré, whom he discusses in the preceding pages.
Penrose is very concerned about determinism (in which the future is completely determined) and a form of "strong" determinism in which every event has been pre-determined since the beginning of the universe.
He calls the deterministic evolution of the Schrödinger equation of motion U, and the random collapse (or reduction) of the wave function R. CQG is his theory of "correct quantum gravity."
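The U/R distinction can be made concrete with a minimal sketch (textbook quantum mechanics, not Penrose's CQG; the function names are invented for the example). A qubit is a pair of complex amplitudes over the alternatives |0> and |1>: U evolves it deterministically, while R collapses it at random with Born-rule probabilities.

```python
import math
import random

# U: deterministic, unitary evolution of the amplitudes (a0, a1).
# R: random reduction with Born-rule probabilities |amplitude|^2.

def evolve_U(state, theta):
    """U: apply a simple rotation (a unitary) to the amplitudes."""
    a0, a1 = state
    c, s = math.cos(theta), math.sin(theta)
    return (c * a0 - s * a1, s * a0 + c * a1)

def reduce_R(state, rng):
    """R: collapse to outcome 0 or 1, probability |a_i|^2."""
    a0, a1 = state
    p0 = abs(a0) ** 2 / (abs(a0) ** 2 + abs(a1) ** 2)
    return 0 if rng.random() < p0 else 1

rng = random.Random(0)
state = (1 + 0j, 0j)                   # start definitely in |0>
state = evolve_U(state, math.pi / 4)   # U: an equal superposition
counts = [0, 0]
for _ in range(10_000):
    counts[reduce_R(state, rng)] += 1  # R: repeated random reductions
print(counts)                          # roughly even split between 0 and 1
```

Penrose's point is that U alone never yields a definite outcome; only the random R step does, and CQG is meant to replace both with something deeper.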
Determinism and strong determinism
So far I have said little about the question of 'free will' which is normally taken to be the fundamental issue of the active part of the mind-body problem. Instead, I have concentrated on my suggestion that there is an essential non-algorithmic aspect to the role of conscious action. Normally, the issue of free will is discussed in relation to determinism in physics. Recall that in most of our SUPERB theories there is a clear-cut determinism, in the sense that if the state of the system is known at any one time, then it is completely fixed at all later (or indeed earlier) times by the equations of the theory. In this way there seems to be no room for 'free will' since the future behaviour of a system seems to be totally determined by the physical laws.
Even the U part of quantum mechanics has this completely deterministic character. However, the R 'quantum-jump' is not deterministic, and it introduces a completely random element into the time-evolution. Early on, various people leapt at the possibility that here might be a role for free will, the action of consciousness perhaps having some direct effect on the way that an individual quantum system might jump. But if R is really random, then it is not a great deal of help either, if we wish to do something positive with our free wills.
My own point of view, although it is not very well formulated in this respect, would be that some new procedure (CQG) takes over at the quantum-classical borderline which interpolates between U and R (each of which are now regarded as approximations), and that this new procedure would contain an essentially non-algorithmic element. This would imply that the future would not be computable from the present, even though it might be determined by it. I have tried to be clear in distinguishing the issue of computability from that of determinism, in my discussions in Chapter 5. It seems to me to be quite plausible that CQG might be a deterministic but non-computable theory.
Sometimes people take the view that even with classical (or U-quantum) determinism there is no effective determinism, because the initial conditions cannot ever be well-enough known that the future could actually be computed. Sometimes very small changes in the initial conditions can lead to very large differences in the final outcome. This is what happens, for example, in the phenomenon known as 'chaos' in a (classical) deterministic system — an example being the uncertainty of weather prediction. However, it is very hard to believe that this kind of classical uncertainty can be what allows us our (illusion of?) free will. The future behaviour would still be determined, right from the big bang, even though we would be unable to calculate it.
The same objection might be raised against my suggestion that a lack of computability might be intrinsic to the dynamical laws — now assumed to be non-algorithmic in character — rather than to our lack of information concerning initial conditions. Even though not computable, the future would, on this view, still be completely fixed by the past — all the way back to the big bang. In fact, I am not being so dogmatic as to insist that CQG ought to be deterministic but non-computable. My guess would be that the sought-for theory would have a more subtle description than that. I am only asking that it should contain non-algorithmic elements of some essential kind.
To close this section, I should like to remark on an even more extreme view that one might hold towards the issue of determinism. This is what I have referred to as strong determinism. According to strong determinism, it is not just a matter of the future being determined by the past; the entire history of the universe is fixed, according to some precise mathematical scheme, for all time. Such a viewpoint might have some appeal if one is inclined to identify the Platonic world with the physical world in some way, since Plato's world is fixed once and for all, with no 'alternative possibilities' for the universe! (I sometimes wonder whether Einstein might have had such a scheme in mind when he wrote 'What I'm really interested in is whether God could have made the world in a different way; that is, whether the necessity of logical simplicity leaves any freedom at all!' (letter to Ernst Strauss; see Kuznetsov 1977, p. 285).
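The classical 'chaos' Penrose dismisses as a source of free will is easy to exhibit with the logistic map, a standard textbook example (not Penrose's own): two trajectories from nearly identical initial conditions, each fully determined, soon differ wildly.

```python
def logistic(x, r=4.0):
    """One step of the logistic map x -> r*x*(1-x); chaotic at r = 4."""
    return r * x * (1.0 - x)

x, y = 0.400000, 0.400001      # initial conditions differing by only 1e-6
max_gap = 0.0
for _ in range(50):
    x, y = logistic(x), logistic(y)
    max_gap = max(max_gap, abs(x - y))
print(max_gap)   # the microscopic initial difference is amplified enormously

# Determinism itself is untouched: identical initial conditions
# give identical trajectories, exactly as Penrose notes.
u = v = 0.4
for _ in range(50):
    u, v = logistic(u), logistic(v)
assert u == v
```

This is precisely Penrose's distinction: the future here is fully determined yet practically uncomputable, and that practical uncomputability is not the same thing as freedom.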
In his 1994 book Shadows of the Mind, Penrose speculated further that free will might result from a dualistic mind influencing the random R process. This was the "interactionist" view of neuroscientist John Eccles and philosopher Karl Popper.
§1.11. The issue of 'responsibility' raises deep philosophical questions concerning the ultimate causes of our behaviour. It might well be argued that each of our actions is ultimately determined by our inheritance and by our environment — or else by those numerous chance factors that continually affect our lives. Are not all of these influences 'beyond our control', and therefore things for which we cannot ultimately be held responsible? Is the matter of 'responsibility' merely one of convenience of terminology, or is there actually something else — a 'self' lying beyond all such influences — which exerts a control over our actions? The legal issue of 'responsibility' seems to imply that there is indeed, within each one of us, some kind of an independent 'self' with its own responsibilities — and, by implication, rights — whose actions are not attributable to inheritance, environment, or chance. If it is other than a mere convenience of language that we speak as though there were such an independent 'self', then there must be an ingredient missing from our present-day physical understandings. The discovery of such an ingredient would surely profoundly alter our scientific outlook.
This book will not supply an answer to these deep issues, but I believe that it may open the door to them by a crack — albeit only by a crack. It will not tell us that there need necessarily be a 'self' whose actions are not attributable to external cause, but it will tell us to broaden our view as to the very nature of what a 'cause' might be. A 'cause' could be something that cannot be computed in practice or in principle. I shall argue that when a 'cause' is the effect of our conscious actions, then it must be something very subtle, certainly beyond computation, beyond chaos, and also beyond any purely random influences.
Whether such a concept of 'cause' could lead us any closer to an understanding of the profound issue (or the 'illusion'?) of our free wills is a matter for the future. (p.36)
6.8 Is it consciousness that reduces the state vector? Among those who take |Ψ> seriously as a description of the physical world, there are some who would argue — as an alternative to trusting U at all scales, and thus believing in a many-worlds type of viewpoint — that something of the nature of R actually takes place as soon as the consciousness of an observer becomes involved. The distinguished physicist Eugene Wigner once sketched a theory of this nature (Wigner 1961). The general idea would be that unconscious matter — or perhaps just inanimate matter — would evolve according to U, but as soon as a conscious entity (or 'life') becomes physically entangled with the state, then something new comes in, and a physical process that results in R takes over actually to reduce the state.
There need be no suggestion, with such a viewpoint, that somehow the conscious entity might be able to 'influence' the particular choice that Nature makes at this point. Such a suggestion would lead us into distinctly murky waters and, as far as I am aware, there would be a severe conflict with observed facts with any too simplistic suggestion that a conscious act of will could influence the result of a quantum-mechanical experiment. Thus, we are not requiring, here, that 'conscious free will' should necessarily be taking an active role with regard to R (but cf. §7.1, for some alternative viewpoints).
No doubt some readers might expect that, since I am searching for a link between the quantum measurement problem and the problem of consciousness, I might find myself attracted by ideas of this general nature. I should make myself clear that this is not the case. It is probable, after all, that consciousness is a rather rare phenomenon throughout the universe. There appears to be a good deal of it occurring in many places on the surface of the earth, but as far as evidence has presented itself to us to this date, there is no highly developed consciousness — if, indeed, any at all — right out into depths of the universe many light centuries away from us. It would be a very strange picture of a 'real' physical universe in which physical objects evolve in totally different ways depending upon whether or not they are within sight or sound or touch of one of its conscious inhabitants.
7.1 Large-scale quantum action in brain function?
Brain action, according to the conventional viewpoint, is to be understood in terms of essentially classical physics — or so it would seem. Nerve signals are normally taken to be 'on or off' phenomena, just as are the currents in the electronic circuits of a computer, which either take place or do not take place — with none of the mysterious superpositions of alternatives that are characteristic of quantum actions. Whilst it would be admitted that, at underlying levels, quantum effects must have their roles to play, biologists seem to be generally of the opinion that there is no necessity to be forced out of a classical framework when discussing the large-scale implications of those primitive quantum ingredients. The chemical forces that control the interactions of atoms and molecules are indeed quantum mechanical in origin, and it is largely chemical action that governs the behaviour of the neurotransmitter substances that transfer signals from one neuron to another — across tiny gaps that are called synaptic clefts. Likewise, the action potentials that physically control nerve-signal transmission itself have an admittedly quantum-mechanical origin. Yet it seems to be generally assumed that it is quite adequate to model the behaviour of neurons themselves, and their relationships with one another, in a completely classical way. It is widely believed, accordingly, that it should be entirely appropriate to model the physical functioning of the brain as a whole, as a classical system, where the more subtle and mysterious features of quantum physics do not significantly enter the description.
This would have the implication that any possible significant activity that might take place in a brain is indeed to be taken as either 'occurring' or 'not occurring'. The strange superpositions of quantum theory, that would allow simultaneous 'occurring' and 'not occurring' — with complex-number weighting factors — would, accordingly, be considered to play no significant role. Whilst it might be accepted that at some submicroscopic level of activity such quantum superpositions do 'really' take place, it would be felt that the interference effects that are characteristic of such quantum phenomena would have no role at the relevant larger scales. Thus, it would be considered
adequate to treat any such superpositions as though they were statistical mixtures, and the classical modelling of brain activity would be perfectly satisfactory FAPP [for all practical purposes].
There are certain dissenting opinions from this, however. In particular, the renowned neurophysiologist John Eccles has argued for the importance of quantum effects in synaptic action (see, in particular, Beck and Eccles (1992), Eccles (1994)). He points to the presynaptic vesicular grid — a paracrystalline hexagonal lattice in the brain's pyramidal cells — as being an appropriate quantum site. Also, some people (even including myself; cf. ENM [The Emperor's New Mind ], pp. 400-401, and Penrose 1987) have tried to extrapolate from the fact that light-sensitive cells in the retina (which is technically part of the brain) can respond to a small number of photons (Hecht et al. 1941) — sensitive even to a single photon (Baylor et al. 1979), in appropriate circumstances—and to speculate that there might be neurons in the brain, proper, that are also essentially quantum detection devices.
With the possibility that quantum effects might indeed trigger much larger activities within the brain, some people have expressed the hope that, in such circumstances, quantum indeterminacy might be what provides an opening for the mind to influence the physical brain. Here, a dualistic viewpoint would be likely to be adopted, either explicitly or implicitly. Perhaps the 'free will' of an 'external mind' might be able to influence the quantum choices that actually result from such non-deterministic processes. On this view, it is presumably through the action of quantum theory's R-process that the dualist's 'mind-stuff' would have its influence on the behaviour of the brain.
The status of such suggestions is unclear to me, especially since, in standard quantum theory, quantum indeterminacy does not occur at quantum-level scales, since it is the deterministic U-evolution that always holds at this level. It is only in the magnification process from the quantum to the classical levels that the indeterminacy of R is deemed to occur. On the standard FAPP viewpoint, this indeterminacy is something that 'takes place' only when sufficient amounts of the environment become entangled with the quantum event. In fact, as we have seen in §6.6, on the standard view it is not even clear what 'taking place' actually means. It would be hard, on conventional quantum-physical grounds, to maintain that the theory does actually allow an indeterminacy to occur just at the level where a single quantum particle, such as a photon, atom, or small molecule, is critically involved. When (for example) a photon's wavefunction encounters a photon-sensitive cell, it sets in train a sequence of events that remains deterministic (action of U) so long as the system can be considered to stay 'at the quantum level'. Eventually, significant amounts of the environment become disturbed and, on the conventional view, one considers that R has occurred FAPP. One would have to contend that the 'mind-stuff' somehow influences the system only at this indeterminate stage.
According to the viewpoint on state reduction that I have myself been promoting in this book (cf. §6.12), to find the level at which the R-process actually becomes operative, we must look to the quite large scales that become relevant when considerable amounts of material (microns to millimetres in diameter — or perhaps a good deal more, if no significant mass movement is involved) become entangled in the quantum state. (I shall henceforth denote this fairly specific but putative procedure by OR, which stands for objective reduction.) In any case, if we try to adhere to the above dualist viewpoint, where we are looking for somewhere where an external 'mind' might have an influence on physical behaviour — presumably by replacing the pure randomness of quantum theory by something more subtle — then we must indeed find how the 'mind's' influence could enter at a much larger scale than single quantum particles. We must look to wherever the cross-over point occurs between the quantum and classical levels. As we have seen in the previous chapter, there is no general agreement about what, whether, or where such a cross-over point might be.
In my own opinion, it is not very helpful, from the scientific point of view, to think of a dualistic 'mind' that is (logically) external to the body, somehow influencing the choices that seem to arise in the action of R. If the 'will' could somehow influence Nature's choice of alternative that occurs with R, then why is an experimenter not able, by the action of 'will power', to influence the result of a quantum experiment? If this were possible, then violations of the quantum probabilities would surely be rife! For myself, I cannot believe that such a picture can be close to the truth. To have an external 'mind-stuff' that is not itself subject to physical laws is taking us outside anything that could be reasonably called a scientific explanation, and is resorting to the viewpoint D (cf. §1.3).
The question of the absolute nature of morality is relevant to the legal issues of §1.11. There is relevance, also, to the question of 'free will', as was raised at the end of §1.11: might there be something that is beyond our inheritance, beyond environmental factors, and beyond chance influences — a separate 'self' that has a profound role in controlling our actions? I believe that we are very far from an answer to this question. As far as the arguments of this book go, all that I could claim with any confidence would be that whatever is indeed involved must lie in principle beyond the capabilities of those devices that we presently call 'computers'.
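The FAPP claim discussed above, that once the environment is entangled a superposition may be treated "as though it were a statistical mixture," can be made concrete with density matrices (standard textbook machinery, not specific to Penrose). The two descriptions agree on the diagonal entries, which give outcome probabilities, and differ only in the off-diagonal coherences that decoherence destroys.

```python
from math import sqrt

def dm(amplitudes):
    """Density matrix |psi><psi| for a pure state, as a list of lists."""
    return [[a * b.conjugate() for b in amplitudes] for a in amplitudes]

a = 1 / sqrt(2)
superposition = dm([complex(a), complex(a)])   # (|0> + |1>)/sqrt(2)
mixture = [[0.5, 0.0], [0.0, 0.5]]             # 50/50 classical coin

# Diagonals (measurement probabilities) are identical...
print(superposition[0][0].real, mixture[0][0])
# ...but only the superposition carries off-diagonal coherence:
print(superposition[0][1], mixture[0][1])
```

Treating the superposition as a mixture FAPP amounts to discarding those off-diagonal terms once they are irrecoverably entangled with the environment; Penrose's OR proposal is that at some scale this becomes an objective physical process rather than a bookkeeping convenience.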
The Andromeda Paradox
In his 1989 book The Emperor's New Mind, Penrose developed a form of the Rietdijk-Putnam argument that claims to prove the universe is pre-determined for special relativistic reasons. This is a more sophisticated version of "block universe" arguments for determinism, like those of Hermann Minkowski and J. J. C. Smart.
Penrose's argument is called the Andromeda Paradox. He shows that two people walking past each other in the street could have very different present moments. For the person walking towards the Andromeda Galaxy, events in that galaxy might be hours or even days in advance of the same events as judged by the person walking in the other direction. If this is right, it has dramatic consequences for our understanding of time. Penrose highlighted them by discussing a potential invasion of Earth by aliens living in the Andromeda Galaxy. On Earth, one person might live in a universe where the Andromedeans have not yet decided to invade, whilst someone passing them in the street could live in a universe where the alien spaceships are already under way.
Penrose describes the situation:
"Two people pass each other on the street; and according to one of the two people, an Andromedean space fleet has already set off on its journey, while to the other, the decision as to whether or not the journey will actually take place has not yet been made. How can there still be some uncertainty as to the outcome of that decision? If to either person the decision has already been made, then surely there cannot be any uncertainty. The launching of the space fleet is an inevitability."
The observers cannot see what is happening in Andromeda. It is light-years away. The paradox is that they have different ideas of what is happening "now" in Andromeda.
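The size of the effect follows from the relativity of simultaneity in special relativity: two observers with relative speed v disagree about which distant events count as "now" by Δt = v·d/c² at distance d. A back-of-envelope check (round, illustrative figures) confirms that walking speeds at the distance of Andromeda shift "now" by several days.

```python
# Relativity of simultaneity at the distance of Andromeda.
# Delta_t = v * d / c**2 for events at distance d, as judged by
# observers with relative speed v. Figures are round illustrative values.

c = 299_792_458.0            # speed of light, m/s
light_year = 9.4607e15       # metres
d = 2.5e6 * light_year       # distance to Andromeda, roughly 2.5 million ly
v = 2.0                      # two walkers at 1 m/s in opposite directions

delta_t_seconds = v * d / c ** 2
print(f"'now' on Andromeda differs by {delta_t_seconds / 86_400:.1f} days")
```

This is why Penrose speaks of the fleet's launch being "already decided" for one walker but not the other: their simultaneity hyperplanes, extended across intergalactic distances, slice the Andromedean history days apart.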
The Arrow of Time
Also in his 1989 book The Emperor's New Mind, Penrose speculated on the connection between information, entropy, and the arrow of time.
Recall that the primordial fireball was a thermal state — a hot gas in expanding thermal equilibrium. Recall, also, that the term 'thermal equilibrium' refers to a state of maximum entropy. (This was how we referred to the maximum entropy state of a gas in a box.) However, the second law demands that in its initial state, the entropy of our universe was at some sort of minimum, not a maximum!
What has gone wrong? One 'standard' answer would run roughly as follows:
Penrose's "standard" answer follows the work of David Layzer in his 1975 Scientific American article "The Arrow of Time." Here is the Layzer picture:
True, the fireball was effectively in thermal equilibrium at the beginning, but the universe at that time was very tiny. The fireball represented the state of maximum entropy that could be permitted for a universe of that tiny size, but the entropy so permitted would have been minute by comparison with that which is allowed for a universe of the size that we find it to be today. As the universe expanded, the permitted maximum entropy increased with the universe's size, but the actual entropy in the universe lagged well behind this permitted maximum. The second law arises because the actual entropy is always striving to catch up with this permitted maximum.
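Layzer's picture can be caricatured with a toy model (all rates and units arbitrary; no real cosmology is being solved here): let the permitted maximum entropy grow with the expansion while the actual entropy relaxes toward it at a finite rate. The actual entropy then always increases, chasing a maximum it cannot catch, and that widening gap is the headroom that powers the second law.

```python
# Toy model of Layzer's expanding-universe picture:
#   s_max grows as the universe expands; s_actual chases it at a finite rate.

s_max, s_actual = 1.0, 1.0       # start in equilibrium: entropy at its maximum
gaps = []
for _ in range(100):
    s_max += 0.5                            # expansion raises the permitted maximum
    s_actual += 0.1 * (s_max - s_actual)    # actual entropy relaxes toward it
    gaps.append(s_max - s_actual)

print(gaps[0], gaps[-1])   # the gap grows: entropy is always 'catching up'
```

In this sketch the actual entropy rises at every step (the second law), even though the system began at its then-permitted maximum, which is exactly the reconciliation the "standard" answer offers.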
A Summary of Quantum Mechanics
At the end of his chapter on "Quantum Magic and Quantum Mystery" Penrose summarized the situation:
Let us briefly review what standard quantum theory has actually told us about how we should describe the world, especially in relation to these puzzling issues — and then ask: where do we go from here?

Recall, first of all, that the descriptions of quantum theory appear to apply sensibly (usefully?) only at the so-called quantum level — of molecules, atoms, or subatomic particles, but also at larger dimensions, so long as energy differences between alternative possibilities remain very small. At the quantum level, we must treat such 'alternatives' as things that can coexist, in a kind of complex-number-weighted superposition. The complex numbers that are used as weightings are called probability amplitudes. Each different totality of complex-weighted alternatives defines a different quantum state, and any quantum system must be described by such a quantum state. Often, as is most clearly the case with the example of spin, there is nothing to say which are to be 'actual' alternatives composing a quantum state and which are to be just 'combinations' of alternatives. In any case, so long as the system remains at the quantum level, the quantum state evolves in a completely deterministic way. This deterministic evolution is the process U, governed by the important Schrödinger equation.

When the effects of different quantum alternatives become magnified to the classical level, so that differences between alternatives are large enough that we might directly perceive them, then such complex-weighted superpositions seem no longer to persist. Instead, the squares of the moduli of the complex amplitudes must be formed (i.e. their squared distances from the origin in the complex plane taken), and these real numbers now play a new role as actual probabilities for the alternatives in question. Only one of the alternatives survives into the actuality of physical experience, according to the process R (called reduction of the state vector or collapse of the wavefunction; completely different from U). It is here, and only here, that the non-determinism of quantum theory makes its entry.

The quantum state may be strongly argued as providing an objective picture. But it can be a complicated and even somewhat paradoxical one. When several particles are involved, quantum states can (and normally 'do') get very complicated. Individual particles then do not have 'states' on their own, but exist only in complicated 'entanglements' with other particles, referred to as correlations. When a particle in one region is 'observed' in the sense that it triggers some effect that becomes magnified to the classical level, then R must be invoked — but this apparently simultaneously affects all the other particles with which that particular particle is correlated. Experiments of the Einstein-Podolsky-Rosen (EPR) type (such as that of Aspect, in which pairs of photons are emitted in opposite directions by a quantum source, and then separately have their polarizations measured many metres apart) give clear observational substance to this puzzling, but essential fact of quantum physics: it is non-local (so that the photons in the Aspect experiment cannot be treated as separate independent entities)! If R is considered to act in an objective way (and that would seem to be implied by the objectivity of the quantum state), then the spirit of special relativity is accordingly violated. No objectively real space-time description of the (reducing) state-vector seems to exist which is consistent with the requirements of relativity! However the observational effects of quantum theory do not violate relativity.

Quantum theory is silent about when and why R should actually (or appear to?) take place. Moreover, it does not, in itself, properly explain why the classical-level world 'looks' classical. 'Most' quantum states do not at all resemble classical ones!

Where does all this leave us? I believe that one must strongly consider the possibility that quantum mechanics is simply wrong when applied to macroscopic bodies — or, rather, that the laws U and R supply excellent approximations, only, to some more complete, but as yet undiscovered, theory. It is the combination of these two laws together that has provided all the wonderful agreement with observation that present theory enjoys, not U alone. If the linearity of U were to extend into the macroscopic world, we should have to accept the physical reality of complex linear combinations of different positions (or of different spins, etc.) of cricket balls and the like. Common sense alone tells us that this is not the way that the world actually behaves! Cricket balls are indeed well approximated by the descriptions of classical physics. They have reasonably well-defined locations, and are not seen to be in two places at once, as the linear laws of quantum mechanics would allow them to be. If the procedures U and R are to be replaced by a more comprehensive law, then, unlike Schrödinger's equation, this new law would have to be non-linear in character (because R itself acts non-linearly). Some people object to this, quite rightly pointing out that much of the profound mathematical elegance of standard quantum theory results from its linearity. However, I feel that it would be surprising if quantum theory were not to undergo some fundamental change in the future — to something for which this linearity would be only an approximation. There are certainly precedents for this kind of change. Newton's elegant and powerful theory of universal gravitation owed much to the fact that the forces of the theory add up in a linear way. Yet, with Einstein's general relativity, this linearity was seen to be only an (albeit excellent) approximation — and the elegance of Einstein's theory exceeds even that of Newton's!
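The non-locality Penrose stresses in the Aspect experiment can be quantified with the CHSH inequality (standard Bell-test material, not from Penrose's text). For an idealized polarization-entangled photon pair, quantum mechanics predicts a correlation E(a, b) = -cos 2(a - b) between analyser angles a and b, and a suitable combination of four settings exceeds the bound of 2 that any local account treating the photons as separate, independently determined entities must satisfy.

```python
from math import cos, radians, sqrt

# CHSH test with the idealized quantum correlation for an
# entangled (singlet-like) polarization state of two photons.

def E(a_deg, b_deg):
    """Quantum prediction for the correlation at analyser angles a, b."""
    return -cos(2 * radians(a_deg - b_deg))

a, a2, b, b2 = 0.0, 45.0, 22.5, 67.5   # standard CHSH angle choices
S = abs(E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2))
print(S, "vs the local bound 2; quantum maximum is", 2 * sqrt(2))
```

The computed value is 2√2 ≈ 2.83, which is what Aspect-type experiments observe, so the photons genuinely "cannot be treated as separate independent entities" in Penrose's phrase.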
Note that any theory that "explained" when and where and in which direction a random event like photon emission or nuclear decay happens would change it from an indeterministic theory to a deterministic one.
Penrose, Roger, 1987, "Newton, Quantum Theory, and Reality," in 300 Years of Gravitation (Cambridge: Cambridge University Press).
Penrose, Roger, 1989, The Emperor's New Mind (New York: Penguin Books).
Penrose, Roger, 1994, Shadows of the Mind (New York: Vintage).
Penrose, Roger, 1997, The Large, the Small, and the Human Mind (Cambridge: Cambridge University Press).