Quantum Physics and the Problem of Mental Causation

Bob Doyle
Information Philosopher
Department of Astronomy
Harvard University


The problem of mental causation depends heavily on the idea of “causal closure” of the world under "laws of nature." If everything that is caused has a physical cause (whether deterministic or indeterministic), what room is there for mental causes? Must mental events be eliminated - reduced to the physical at best, or dismissed as epiphenomenal at worst?

The central question in the classic mind-body problem is how an immaterial mind can move a material body if the “causal chains” are limited to interactions between physical things.

We propose a model or theory of an immaterial mind as the pure information in the biological information-processing system that is the brain and central nervous system. We show how this model can support a non-reductive physicalism and an emergent dualism.

Information is physical, but immaterial. It is neither matter nor energy, although it needs matter for its (temporary) embodiment and energy for its communication - for example to other minds or for storage in the external environment.

Indeterminism in quantum physics breaks the strict “causal chains” that have been used to “reduce” biological phenomena to physics and chemistry and mental events to neural events. But statistical causes remain and they are more than "adequate" to support the idea of self-determination.

Our informational theory of mind is a powerful alternative to the computational theories popular in cognitive science. Biological information processors are profoundly different from digital computers.

We argue against neurobiological reductionism and physical “bottom-up causation.” At the same time, we defend a supervenient statistical “downward causation” that allows free thoughts (mental events that are not pre-determined by past events and the laws of nature) to cause willed actions. Actions are ultimately statistical but “adequately determined” by our motives, reasons, intentions, desires, and feelings, in short, by our character. Our actions are thus determined for practical purposes, but "self-determined," with at least some of the causes originating inside our minds.

We defend an emergent dualism of mind and matter, subject and object, idealism and materialism. Monists might like the idea that information is a neutral quantity that can ground a triple-aspect monism of matter, life, and mind. Information itself is an emergent that did not exist in the early universe. We will show that information structures emerge in three ways and in a temporal sequence, corresponding respectively to matter, life, and mind.

First is the emergence of "order out of chaos." This has given rise to complexity and chaos theories that try to explain life as a "complex adaptive system." Ilya Prigogine won a Nobel prize for far-from-equilibrium "dissipative" processes that produce information structures, like Bénard convection cells. He called it "order out of chaos." These "complex" systems have no internal information processing. They are "dumb" structures. They do, however, exert a gross "downward causation" over their physical parts.

Second is the emergence of "order out of order." Erwin Schrödinger showed that all life feeds on a stream of negative entropy from the sun. He called this "order out of order." Biological processes rearrange the information in the negative entropy to create and maintain themselves. They are "information-processing systems." Their downward causation is extremely fine, meaning they can exert causal control over component atoms and molecules individually.

Third is the emergence of "pure information out of order." Abstract information is the "stuff of thought." It is the lingua franca, the currency, the coin of the philosophical realm. Mental processes create and store abstract information in the brain hardware. At the neuron level, atoms and molecules are exquisitely controlled by neurobiology to enable nerve firings and to record (and play back) information.

The core of our informational theory of mind is an experience recorder and reproducer. The ERR stores information in our neural networks about all the perceptual elements (sight, sound, touch, taste, smell) of an experience, along with our emotions during the experience. They are stored in whatever neurons fire together. Later, any new perceptual element that fires the same (or nearby) neurons can activate the neural network to replay the original experience, complete with its emotional content. The unconscious mind is a "blooming, buzzing confusion" of these reproduced experiences, on some of which we focus our attention. We identify four evolutionary stages in the development of an experience recorder and reproducer that exhibits consciousness.
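The retrieval-by-partial-cue behavior described above can be sketched as a toy content-addressable store. The class, cue names, and emotions below are invented purely for illustration; this is a sketch of the idea, not Doyle's formal model.

```python
class ExperienceRecorder:
    """Toy content-addressable store: any single cue replays the whole
    stored experience, emotion included. Illustrative names only."""

    def __init__(self):
        self.experiences = []  # list of (frozenset of cues, emotion)

    def record(self, cues, emotion):
        # Store all perceptual elements of an experience with its emotion.
        self.experiences.append((frozenset(cues), emotion))

    def reproduce(self, cue):
        # Any matching cue re-activates every experience containing it.
        return [(set(cues), emotion)
                for cues, emotion in self.experiences
                if cue in cues]

err = ExperienceRecorder()
err.record({"smell:bread", "sight:kitchen", "sound:radio"}, "comfort")
err.record({"smell:smoke", "sight:fire"}, "fear")

replays = err.reproduce("smell:bread")
print(replays[0][1])  # comfort -- the full experience returns with its emotion
```

A lone smell here replays the entire first experience, including its emotional content, which is the ERR behavior the text describes.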

Information Philosophy and Physics

The fundamental question of information philosophy is cosmological and ultimately metaphysical. What is the nature of processes that create information structures in the universe?

Given the second law of thermodynamics, which says that any system will over time approach a thermodynamic equilibrium of maximum disorder or entropy, in which all information is lost, and given the best current model for the origin of the universe, which says everything began in a state of equilibrium some 13.75 billion years ago, how can it be that living beings are creating and communicating new information every day?

Why are we not still in that original state of equilibrium? Cosmologists at Harvard have answered that question, but the answer is not yet widely known or appreciated.

The question may be cosmological and even metaphysical, but the answer is eminently practical and physical. It is found in the interaction between quantum mechanics and thermodynamics and involves the general relativistic expansion of the universe.

When information is stored in any structure, two physical processes must occur.

The first process is the collapse of a quantum-mechanical wave function into one of the possible states in a superposition of states. This happens in any measurement process. It also happens in any irreversible interaction between quantum systems. Such quantum events involve irreducible indeterminacy and ontological chance, but less often noted is the fact that quantum physics is directly responsible for the extraordinary temporal stability and adequate determinism of most information structures. (Our first process is what John von Neumann proposed as his irreversible Process 1. Wolfgang Pauli called it a measurement of the second kind.)

The second process is a local decrease in the entropy (which appears to violate the second law of thermodynamics) corresponding to the increase in information. Entropy greater than the information increase must be transferred away from the new information, ultimately to the night sky and the cosmic background, to satisfy the second law. (Again following von Neumann, we can call this information stabilization Process 1b.)

Given this new stable information, to the extent that the resulting quantum system can be approximately isolated, the system will deterministically evolve according to von Neumann's Process 2, the unitary time evolution described by the Schrödinger equation.
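In standard textbook notation (not notation taken from this article), the two von Neumann processes can be written compactly:

```latex
% Process 1: irreversible "collapse" of a superposition into one outcome n,
% with Born-rule probability P_n
\ket{\psi} = \sum_n c_n \ket{\phi_n}
\;\longrightarrow\; \ket{\phi_n}
\qquad \text{with probability } P_n = |c_n|^2

% Process 2: deterministic, unitary time evolution of an isolated system
% (the Schrödinger equation)
i\hbar \frac{\partial}{\partial t}\ket{\psi(t)} = \hat{H}\,\ket{\psi(t)},
\qquad \ket{\psi(t)} = e^{-i\hat{H}t/\hbar}\,\ket{\psi(0)}
```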

These three processes are parts of the information solution to the "problem of measurement," to which must be added the role of the "conscious observer."

The discovery and elucidation of the first two as steps in the cosmic creation process casts light on some classical problems in philosophy and physics, since it is the same process that creates new biological species and explains the freedom and creativity of the human mind.

The cosmic creation process generates the conditions without which there could be nothing of value in the universe, nothing to be known, and no one to do the knowing.

Given the answer to our fundamental question in information philosophy, we can now go on to see that information does not appear all at once; it emerges, and in three distinct phases corresponding roughly to the emergence of matter, life, and mind.

The Three Phases or Kinds of Information Emergence

  1. the order out of chaos when the matter in the universe forms information structures.

    This was not possible before the first atoms formed about 400,000 years after the Big Bang. So information structures like the stars and galaxies did not exist before about 400 million years after the Big Bang.

    At that time, convection and turbulent cells probably first formed in far-from-equilibrium clouds of dust and gas. They are still forming today. But this is a purely physical/material kind of order. It is information, but does not process information.

  2. the order out of order when the material information structures form self-replicating biological information structures. These are information processing systems.

    In his famous essay, "What Is Life?," Erwin Schrödinger noted that life "feeds on negative entropy" (or information). He called this "order out of order."

    This kind of processing of information first emerged about 4 billion years ago on the earth. It continues today on multiple emergent biological levels, e.g., single-cells, multi-cellular systems, organs, etc., each level creating new information not reducible to lower levels and exerting downward causation on the lower levels.

  3. the pure information out of order when organisms with minds generate, internalize, and then externalize non-biological information, communicating it to other minds and storing it in the environment. Communication can be by hereditary transmission or by an advanced organism capable of learning and then teaching others, directly or indirectly by publishing knowledge.

    This kind of information can be highly abstract mind-stuff, pure Platonic ideas, the stock in trade of philosophers. It is neither matter nor energy, a kind of pure spirit, the ghost in the machine. It is an immaterial candidate for the dualist substance of René Descartes's mind.

Quantum Physics

Quantum mechanical events have generally been thought to be unhelpful by philosophers of mind. Adding indeterminism to mental events apparently would only make our actions random and our desires the product of pure chance, they say. If our willed actions are not determined by anything, we are neither morally responsible nor truly free. Whether mental events are reducible to physical events, or whether mental events can be physical events without such a reduction, the interposition of indeterministic quantum processes apparently adds no explanatory power. And of course if mental events are epiphenomenal, they are not causally related to bodily actions.

Our challenge is to admit some quantum indeterminism into a “statistical” causality (an indeterminism which renders our mental causes merely “statistical”), yet nevertheless allow us to describe mental causes as “adequately determined.” That is to say, mental causes are essentially - and for all practical purposes - “determined,” because the statistics in most cases are near to certainty.

Even more importantly, our thoughts - and subsequent actions – are in no way completely “pre-determined,” neither from causal chains that go back before our birth such as our genetic inheritance, nor from the immediate events in the “fixed past,” which together with assumed deterministic “laws of nature,” are thought by most compatibilist philosophers to completely explain our actions. Finally, we can identify the "self-determining" causes as creative mental processes that generate and evaluate alternative possibilities for action in the light of our personal goals, then select an action based on our past experiences.

Indeterminism in quantum physics shows up as "possibilities" with calculable "probabilities." Generally, only one of these possibilities becomes an actuality as the result of an experimental measurement. We can characterize the experimental result in terms of information. The experiment must create some new information which enters the world and becomes "observable" by the experimenter. At a minimum, it may be a single "bit" of information, the "Yes/No" answer to a "question put to Nature" by the experimenter.

The creation of new information reduces the thermodynamical "entropy" locally. Information is a form of "negative entropy." And reducing entropy locally is only possible if an equal or greater amount of positive entropy is transferred away from the experiment to satisfy the second law of thermodynamics - that the total entropy always increases. Our emphasis on information creation provides a new perspective on the "problem of measurement" and suggests a simple "information interpretation" of quantum mechanics.

Possibilities, Probabilities, and Actuality:
An Information "Interpretation" of Quantum Mechanics

The Information Interpretation is simply the standard orthodox Copenhagen Interpretation plus information and minus the conscious observer. To be more specific, we add the creation of a stable information structure, and we show that without it an observer would have no "observable."

Information philosophy interprets the wave function ψ as a "possibilities" function. With this change of terminology, the mysterious process of a wave function "collapsing" becomes more intuitive talk of many possibilities, with mathematically calculable probabilities, becoming a single actuality.

Information physics is standard quantum physics. It accepts the principle of superposition, the axiom of measurement, and the projection postulate of standard quantum mechanics. But a conscious observer is not required for the wave-function "collapse". All that is required is an interaction that creates stable information that can be observed macroscopically. This requires thermodynamics.

The transformation theory of Dirac and Jordan lets us represent ψ in a set of basis functions for which the combination of quantum system and measurement apparatus has eigenvalues. We represent ψ as a superposition of those "possible" eigenfunctions. Quantum mechanics lets us calculate the probabilities of each of those "possibilities."

Interaction with the measurement apparatus (or indeed interaction with any other system) may project out one of those possibilities as an actuality. But for this event to be an "observable" (a John Bell "beable"), information must be created and positive entropy must be transferred away from the new information structure, in accordance with our two-stage information creation process.

All interpretations of quantum mechanics predict the same experimental results, and information physics is no exception; the experimental data from quantum experiments is the most accurate in the history of science.

Where interpretations differ is in the picture (the visualization) they provide of what is "really" going on in the microscopic world - the so-called "quantum reality." The "orthodox" Copenhagen interpretation of Niels Bohr and Werner Heisenberg discouraged such attempts to understand the nature of the "quantum world," because they said that all our experience is derived from the "classical world" and should be described in ordinary language.

The information interpretation encourages visualization. Schrödinger called it Anschaulichkeit. He and Einstein were right that we should hope to picture quantum reality. But that demands that we accept the reality of the quantum and "quantum jumps," something most modern interpretations do not. (See our two-slit experiment and EPR experiment visualizations.)

Bohr was of course right that classical physics plays an important role. His Correspondence Principle allowed him to recover some important physical constants by assuming that the discontinuous quantum jumps for low quantum numbers (low orbits in his old quantum theory model) converged in the limit of large quantum numbers to the continuous radiation emission and absorption of classical electromagnetic theory.

In addition, we know that in macroscopic bodies with enormous numbers of quantum particles, quantum effects are averaged over, so that the uncertainty in position and momentum of a large body still obeys Heisenberg's indeterminacy principle, but the uncertainty is for all practical purposes unmeasurable and the body can be treated classically. We can say that the quantum description of matter also converges to a classical description in the limit of large numbers of quantum particles.

Both Bohr and Heisenberg suggested that just as relativistic effects can be ignored when the velocity is small compared to the velocity of light (v / c → 0), so quantum effects might be ignorable when the quantum of action h → 0. But this is wrong, because h never goes to zero. In the information interpretation, it is always a quantum world. The conditions needed for ignoring quantum indeterminacy are when the mass of the macroscopic "classical" object is large.

The creation of irreversible new information marks the transition between the quantum world and the "adequately deterministic" classical world, because the information structure must be large enough (and stable enough) to be seen. The typical measurement apparatus is macroscopic, so the quantum of action h becomes small compared to the mass m and h / m approaches zero.

Noting that the momentum p is the product of mass and velocity, mv, Heisenberg's indeterminacy principle Δp Δx > h can be rewritten as Δv Δx > h / m. When h / m is small enough, the errors in the position and velocity of macroscopic objects become smaller than can be measured. The classical world emerges when quantum effects can be averaged over large numbers of particles. (Landau and Lifshitz argued that their random phases cancel one another.)
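A rough numerical comparison (the masses are chosen only for illustration) shows how h / m makes the indeterminacy negligible for macroscopic bodies but dominant for quantum particles:

```python
# Lower bound on the product of velocity and position uncertainties,
# Δv Δx > h / m, for two very different masses.
h = 6.626e-34           # Planck's constant, J*s
m_dust = 1e-9           # a 1-microgram dust grain, kg (illustrative)
m_electron = 9.109e-31  # electron rest mass, kg

print(h / m_dust)       # ~6.6e-25 m^2/s: far below anything measurable
print(h / m_electron)   # ~7.3e-4 m^2/s: dominant at atomic scales
```

The twenty-order-of-magnitude gap between the two bounds is why the dust grain can be treated classically while the electron cannot.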

The information interpretation thus explains why quantum superpositions like Schrödinger's Cat are not seen in the macroscopic world. The stable new information structure in a macroscopic object has reduced the quantum possibilities (and their potential interference effects) to a classical actuality.

The central element in quantum physics is the "wave function" ψ, with its mysterious wave-particle dual nature (sometimes a wave, sometimes a particle, etc.). We believe that teaching and understanding quantum mechanics would be much simpler if we called ψ the "possibilities function." It only looks like a wave in simple cases of configuration space. But it always tells us the possibilities - the possible values of any observable, for example.

Given the "possibilities function" ψ, quantum mechanics allows us to calculate the "probabilities" for each of the "possibilities." The calculation depends on the free choice of the experimenter as to which "observables" to look for. If the measurement apparatus can register n discrete values, ψ can be expanded in terms of a set of basis functions appropriate for the chosen observable, say φn. The expansion is

ψ = Σn cn φn

When the absolute squares of the coefficients cn are appropriately normalized to add up to 1, the probability Pn of observing value n is

Pn = |cn|² = |⟨ψ|φn⟩|²

These probabilities are confirmed statistically by repeated identical experiments that collect large numbers of results. Quantum mechanics is the most accurate physical theory in science, with measurements accurate to thirteen decimal places.
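A minimal sketch of this calculation, with invented amplitudes: the expansion coefficients cn are normalized so their absolute squares sum to 1, and the probabilities Pn are read off directly.

```python
import math

# Unnormalized complex expansion coefficients c_n (illustrative values)
c = [1, 1j, -1 + 1j]

# Normalize so that the |c_n|^2 sum to 1, then apply the Born rule
norm = math.sqrt(sum(abs(cn) ** 2 for cn in c))
probs = [abs(cn / norm) ** 2 for cn in c]

print(probs)       # approximately [0.25, 0.25, 0.5]
print(sum(probs))  # 1.0 (up to floating-point rounding)
```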

In each individual experiment, generally just one of the possibilities becomes an actuality (some experiments leave the quantum system in a new superposition of multiple possibilities).

In our information interpretation, a possibility is realized or actualized at the moment when information is created about the new state of the system. This new information requires that positive entropy be carried away from the local increase in negative entropy.

Note that an "observer" would not be able to make a "measurement" unless there is new information to be "observed." Information can be (and usually is) created and recorded before any observer looks at the results.

An information approach can help philosophers to think more clearly about quantum physics. Instead of getting trapped in talk about mysterious "collapses of the wave function," "reductions of the wave packet," or the "projection postulate" (all important issues), the information interpretation proposes we simply say that one of the "possibilities" has become "actual." It is intuitively obvious that when one possibility becomes actual, all the others are annihilated, consigned to "nothingness," as Jean-Paul Sartre put it.

We can also say that quantum theory lets us put quantitative values on the "probabilities" for each of the "possibilities." But this means that quantum theory is statistical, meaning indeterministic and random. It is not a question of our being ignorant about what is going on (an epistemological problem). What's happening is ontological chance.

We can also say that the "possibilities function" ψ moves through space (at the speed of light, or even faster?), exploring all the possibilities for where the particle might be found. This too may be seen as a special kind of information. In the famous "two-slit experiment," the "possibilities function" travels everywhere, meaning that it passes through both slits, interfering with itself and thus changing the possibilities where the particle might be found. Metaphorically, it "knows" when both slits are open, even if our intuitive classical view imagines the particle to go through only one. This changes the probabilities associated with each of the possibilities.


When the possibilities function becomes actual, the probability of the one actuality becomes unity (certainty) and the other possibilities disappear instantly, over distances that for Einstein implied violation of his special theory of relativity.

The information interpretation helps us to locate the Heisenberg and von Neumann "cut" in the path from physical event to knowledge in the human mind. John von Neumann described the steps along that path of "psycho-physical parallelism" -

"we could find [the path] of the light quanta [from the measuring instrument], and the path of the remaining light quanta into the eye of the observer, their refraction in the eye lens, and the formation of an image on the retina, and then we would say: this image is registered by the retina of the observer.

"And were our physiological knowledge more precise than it is today, we could go still further, tracing the chemical reactions which produce the impression of this image on the retina, in the optic nerve tract and in the brain, and then in the end say: these chemical changes of his brain cells are perceived by the observer.

"But in any case, no matter how far we calculate -- to the mercury vessel, to the scale of the thermometer, to the retina, or into the brain, at some time we must say: and this is perceived by the observer. That is, we must always divide the world into two parts, the one being the observed system, the other the observer."

Von Neumann said that it was arbitrary where we place the moment of perception. John Bell pictured the arbitrary location as a "shifty split." But our information interpretation can place the "cut" at the time and place where information is indelibly recorded. In Bell's drawing it is the irreversible spot on the photographic plate. Without that information, von Neumann's subjective observer could not "measure" the objective observed system.

This new information marks the irreversible transition between the quantum world and the "adequately" classical world, because the information structure must be large enough to be seen. The measurement apparatus is macroscopic, so the quantum of action h becomes small compared to the mass m and h / m approaches zero.

The information interpretation thus explains why quantum superpositions like Schrödinger's Cat are not seen in the macroscopic world. Decoherence theorists say that the lack of macroscopic superpositions is the "measurement problem." But the new information structure in a macroscopic object has reduced the possibilities to an actuality.

Finally, without possibilities, no new information can be created. Claude Shannon's information theory requires the existence of multiple messages. If there is only one possible message, it brings no new information, there are no "surprises."
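Shannon's requirement can be made concrete with the standard entropy formula, H = Σ p·log2(1/p): a source with only one possible message carries zero bits, and information grows with the number of alternatives.

```python
import math

def entropy_bits(probs):
    """Shannon entropy H = sum of p * log2(1/p), in bits."""
    return sum(p * math.log2(1 / p) for p in probs if p > 0)

print(entropy_bits([1.0]))       # 0.0 -- one possible message, no surprise
print(entropy_bits([0.5, 0.5]))  # 1.0 -- a single yes/no bit
print(entropy_bits([0.25] * 4))  # 2.0 -- four equally likely messages
```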

Information and a Non-Reductive Physicalism

The leading defender of a non-reductive physicalism was Donald Davidson. Its leading critic is Jaegwon Kim. In his 1970 essay "Mental Events," Davidson described his "Anomalous Monism":

“Mental events such as perceivings, rememberings, decisions, and actions resist capture in the nomological net of physical theory. How can this fact be reconciled with the causal role of mental events in the physical world? Reconciling freedom with causal determinism is a special case of the problem if we suppose that causal determinism entails capture in, and freedom requires escape from, the nomological net. But the broader issue can remain alive even for someone who believes a correct analysis of free action reveals no conflict with determinism. Autonomy (freedom, self-rule) may or may not clash with determinism; anomaly (failure to fall under a law) is, it would seem, another matter.”

In order to allow mental events to cause physical events, yet not be reducible to them, Davidson developed the following set of arguments.

1. "at least some mental events interact causally with physical events"
2. "where there is causality, there must be a law: events related as cause and effect fall under strict deterministic laws."
3. "there are no strict deterministic laws on the basis of which mental events can be predicted and explained." (mental events are "anomalous.")

Davidson viewed his work as extending that of Immanuel Kant on reconciling (eliminating the anomalous contradiction between) freedom and necessity. Davidson gave the term supervenience a specific philosophical meaning within analytic philosophy. He saw supervenience as the last hope for a non-reductive physicalism, which does not reduce the mental to the physical, the psychological to the neurophysiological.

Davidson set two requirements for supervenience:

1. a domain can be supervenient on another without being reducible to it (non-reduction)
2. if a domain supervenes, it must be dependent on and be determined by the subvenient domain (dependence)

But in Jaegwon Kim's 1989 presidential address to the American Philosophical Association, he said:

“The fact is that under Davidson's anomalous monism, mentality does no causal work. Remember: in anomalous monism, events are causes only as they instantiate physical laws, and this means that an event's mental properties make no causal difference.”

Kim claims that:

“The most fundamental tenet of physicalism concerns the ontology of the world. It claims that the content of the world is wholly exhausted by matter. Material things are all the things that there are; there is nothing inside the spacetime world that isn't material, and of course there is nothing outside it either. The spacetime world is the whole world, and material things, bits of matter and complex structures made up of bits of matter, are its only inhabitants.”

Kim says that Davidson's goal of "non-reductive physicalism" is simply not possible. The physical world is "causally closed," says Kim:

“what options are there if we set aside the physicalist picture? … This means that one would be embracing an ontology that posits entities other than material substances — that is, immaterial minds, or souls, outside physical space, with immaterial, nonphysical properties.”

We accept part of Kim's criticism. An informational theory of mind posits the existence of something immaterial, yet physical. It is both a “non-reductive physicalism” and an “immaterial physicalism.” But it is not "outside space and time" in some Kantian sense, although it may contain something of what believers in souls thought they believed in.

The germ of the idea for an informational theory of mind might be seen in a series of papers in the 1950s and 1960s by Hilary Putnam and Jerry Fodor, whose theory of functionalism pointed to characteristics of mind that are “multiply realizable” in different physical substrates. They were inspired by the then new digital computers, whose software could be moved between different computers and perform the same functions. Mind is the software in the brain hardware, they argued.

Our informational theory of mind shows how thoughts that are embodied in one mind can be converted from their material instantiation and transformed into the pure energy of sound waves or electromagnetic waves by which they are communicated to other minds, there to be embodied again. During communication, the information in human knowledge is not even embodied in material, though it is still a part of the physical world.

In 1976 Kenneth Sayre proposed that information might be a neutral category in which concepts of mind and concepts of body might be defined. We agree that information might be the basis for a neutral, triple-aspect monism.

The informational analysis of non-reductive physicalism must show exactly how information does not move in the upward direction between hierarchical levels (fundamentally because noise in the lower level makes motions incoherent), but that information does move down as the higher-level information-processing system uses it to manipulate individual physical particles (maintaining a high signal-to-noise ratio in the upper level), as the British empiricists imagined.

Quantum Randomness Blocks "Bottom-Up" Causation,
Information-Processing Structures Enable Downward Causation

We shall now see that quantum and thermal noise breaks any upwardly causal deterministic chains between the physics of the atomic and molecular level and the biophysics of the organic world. It also breaks any upward deterministic chains between the neurobiological brain and the mind, replacing them with a statistical causality that provides us with what William James called “some looseness in the joints.”

We present two processes that exhibit randomness in the component atoms and molecules, thus blocking any organized upward influences. The first is present in every biological cell; it separates the living from the simply material. The second is critically important in the operation of neurons; it operates at the mind/brain boundary.

Ribosomes Build Hemoglobin from Randomly Moving Amino Acids (Life from Matter)

The twenty amino acids move about randomly in a cell, the consequence of thermal and quantum noise. Attached to them are tiny bits of transfer RNA, each with three letters of the genetic code that identify them. They bump randomly into the ribosome, which may be moving along a sequence of the genetic code written in messenger RNA sent from the cell nucleus, which has noticed that more of a specific protein or enzyme is needed. This random motion shows us that no organized or coherent information is present in the unattached tRNAs that could cause something from the bottom up to emerge at a higher level.

Notice the absurdity of the idea that the random motions of the transfer RNA molecules, each holding a single amino acid, are carrying pre-determined information about where they belong in the protein.

It is the information processing of the higher-level ribosome that is in control. As the ribosome moves along the string of mRNA, it reads the next three-letter codon and waits for a tRNA with the matching anti-codon to collide randomly. With over 60 codons for the 20 amino acids, it might be some time before the desired amino acid shows up. Note that it is the high speed of random motions that allows this process to proceed rapidly. Consider the case of hemoglobin.

When ribosomes assemble the four polypeptide chains (globins) of hemoglobin - two alpha and two beta chains, 574 amino acids in all - each globin traps an iron atom in a heme group at its center to form the hemoglobin protein. This is downward causal control of the amino acids, the heme groups, and the iron atoms by the ribosome. The ribosome is an example of Erwin Schrödinger's emergent "order out of order," life "feeding on the negative entropy" of digested food.

When 200 million of the 25 trillion red blood cells in the human body die each second, 100 million new hemoglobins must be assembled in each of 200 million new blood cells. With a few thousand bytes - on the order of 10^4 bits - of information in each hemoglobin, this is 10^4 x 10^8 x (2 x 10^8) = 2 x 10^20 bits of information per second, a million times more information processing than today's fastest computer CPU.
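
The order-of-magnitude arithmetic above can be checked in a few lines of Python; the per-hemoglobin bit count is the text's own rough estimate, not a measured value:

```python
# Rough check of the hemoglobin information-processing estimate.
bits_per_hemoglobin = 1e4      # ~10^4 bits per molecule (rough estimate)
hemoglobins_per_cell = 1e8     # ~100 million per red blood cell
cells_per_second = 2e8         # ~200 million new cells per second

bits_per_second = bits_per_hemoglobin * hemoglobins_per_cell * cells_per_second
print(f"{bits_per_second:.1e} bits/second")  # 2.0e+20
```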

The ribosome is an information-processing biological system that has emerged from the lower level of chemistry and physics to exert downward causation on the atomic and molecular components needed to manufacture hemoglobin.
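
The selection-by-random-collision logic can be sketched as a toy simulation. This is only an illustration: the mini codon table, the arrival process, and the random seed are invented for the sketch, not biological data (real cells use 61 sense codons):

```python
import random

# Hypothetical mini codon table: codon -> amino acid.
CODON_TABLE = {"GCU": "Ala", "UGU": "Cys", "GAU": "Asp", "GAA": "Glu"}
CODONS = list(CODON_TABLE)

def translate(mrna_codons, rng):
    """Build a peptide by waiting, at each codon, for a randomly
    arriving tRNA whose anticodon matches. The ribosome (higher level)
    does the selecting; the tRNAs (lower level) just move at random."""
    peptide, collisions = [], 0
    for codon in mrna_codons:
        while True:
            collisions += 1
            arriving = rng.choice(CODONS)  # random thermal collision
            if arriving == codon:          # lock-and-key match
                peptide.append(CODON_TABLE[codon])
                break
    return peptide, collisions

rng = random.Random(42)
peptide, n = translate(["GCU", "GAA", "UGU"], rng)
print(peptide, "after", n, "random collisions")
```

The point of the sketch is that the ordered product comes entirely from the mRNA template the ribosome reads, never from the random arrivals themselves.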

Ion Transporter Pumps in Neurons Organize Randomly Moving Individual Atoms (Mind from Life)

When a single neuron fires, the action potential rapidly changes the concentration of sodium (Na+) ions inside the cell and of potassium (K+) ions outside it. Within milliseconds, thousands of sodium-potassium ion transporters in the thin lipid bilayer of the cell membrane must move billions of those ions - three sodium ions out and two potassium ions in at a time - to restore the resting electrochemical potential.

All the individual ions, atoms, and molecules in the cell are moving rapidly in random directions. These indeterministic motions randomly bring some ions near the pump opening, where quantum cooperative forces can capture them in a lock-and-key structure. The idea that the physical/chemical base level contains enough information in the motions of its atoms and molecules to cause and completely explain the operations of the higher levels of life and mind is simply absurd.

The emergent biological machinery of the ion pump exerts downward causation on the ions, powered by ATP energy carriers (feeding on negative entropy).

The sodium-potassium pump in our neurons is as close to a Maxwell's Demon as anything we are ever likely to see.

When many motor neurons fire, sending excitatory post-synaptic potentials (EPSPs) down through the spinal cord, where they cause muscles to contract, that is as literal as downward causation gets between the mind and the body. When the emergent mind decides to move the body, mental causation is realized as downward causation.

Although Donald Campbell in 1974 coined the phrase "downward causation," the concept was actually described a few years earlier by Roger Sperry who claimed that it supports a form of “mentalism.” Sperry cited a wheel rolling downhill as an example of what he called "downward causal control." The atoms and molecules are caught up and overpowered by the higher properties of the whole. Sperry compared the rolling wheel to an ongoing brain process or a progressing train of thought in which the overall properties of the brain process, as a coherent organizational entity, determine the timing and spacing of the firing patterns within its neural infrastructure.

In 1977, Karl Popper announced that he had changed his mind on the importance of indeterminism. He developed a two-stage model of free will and said that it was an example of downward causation. Popper cited both Sperry and Campbell as the sources of the idea of downward causation.

In a lecture called Natural Selection and the Emergence of Mind, Popper said he had changed his mind (a rare admission by a philosopher) about two things. First, he no longer thought that Darwinian evolution and natural selection were a "tautology" that made them an unfalsifiable theory. Second, he had come to accept the random variation and selection of ideas as a model of free will.

“The selection of a kind of behavior out of a randomly offered repertoire may be an act of choice, even an act of free will. I am an indeterminist; and in discussing indeterminism I have often regretfully pointed out that quantum indeterminacy does not seem to help us; for the amplification of something like, say, radioactive disintegration processes would not lead to human action or even animal action, but only to random movements.

“I have changed my mind on this issue. A choice process may be a selection process, and the selection may be from some repertoire of random events, without being random in its turn. This seems to me to offer a promising solution to one of our most vexing problems, and one by downward causation.”

In 2007, Christof Koch chaired a conference on free will sponsored by the Templeton Foundation. The proceedings were published in the 2009 book Downward Causation and the Neurobiology of Free Will. One of the principal contributors, Nancey Murphy, and her colleague Warren Brown, had argued in their 2007 book, Did My Neurons Make Me Do It?, for the existence of downward causation on the basis of complexity and chaos theories. Murphy and Brown’s goal was to defend a non-reductive version of physicalism whereby humans are (at least sometimes) the authors of their own thoughts and actions.

“If humans are purely physical, and if it is the brain that does the work formerly assigned to the mind or soul, then how can it fail to be the case that all of our thoughts and actions are determined by the laws of neurobiology? If this is the case, then free will, moral responsibility, and, indeed, reason itself would appear to be in jeopardy.”

The problem, say Murphy and Brown, is not neurobiological determinism; it is neurobiological reductionism that they want to avoid. This is odd, because it is precisely the elimination of strict determinism that prevents the causal closure and reduction of the mind to the neural level below, and that allows the emergence of freedom for the mind to exert downward causation on all the levels below.

Murphy and her colleagues propose to get the kind of non-reductive physicalism that Donald Davidson wanted from the emergence of complex systems in far-from-equilibrium conditions and deterministic chaos that exhibit spontaneous self-organization. They are not the first complexity theorists to compare their complex physical systems to living systems, since both are self-organizing and require a steady flow of matter and energy for their continued existence.

But as we have shown, the emergence of "order out of chaos" is only the first phase of emergent information structures. This phase provides only a "gross" form of downward causation. The "fine" control over component atoms and molecules arises only in the second (Life) and third (Mind) phases - the emergence of information-processing systems.

From the moment the earliest “self-replicating” and “self-organizing” systems appeared at the origin of life, the universe has contained purposeful beings. We can say that a primitive version of the concept of a “self” emerges at this time.

Before this there was no pre-existing telos, no teleology, as many philosophers and theologians imagined. But there is teleonomy, as Jacques Monod called it. François Jacob, who shared the Nobel Prize with Monod, said that the purpose of every cell is to become two cells.

An Emergent Dualism, Even a Neutral Triple-Aspect Monism?

Several of the participants in our conference espouse some form of an emergent dualism (or perhaps a "dual-aspect" monism).

Our organizer, Antonella Corradini, says in her essay Emergent Dualism, "The fact that explainability of the mental by the physical does not hold for emergentism undermines any project - like Kim's - to give a physicalistic interpretation of downward causation." E. Jonathan Lowe (Corradini's co-editor of the proceedings of earlier conferences Analytic Philosophy and Psycho-physical Dualism, and who was to have participated in this conference) developed a proposal for an interactionist non-Cartesian substance dualism.

Harald Atmanspacher discusses the dual-aspect monism of Wolfgang Pauli and C. G. Jung. Godehard Brüntrup asks whether psycho-physical emergentism is committed to dualism and explores the causal efficacy of emergent mental properties. And Jeffrey Barrett explains why Eugene Wigner thought that quantum mechanics requires a mind-body dualism.

Although the concept of emergence has become very popular in the last few decades in connection with the development of chaos and complexity theories (which, as we saw, support only a "gross" form of downward causation), emergence is actually a very old idea, dating at least to the nineteenth century, with some hints of it in ancient and medieval philosophy.

The idea of emergence was implicit in the work of John Stuart Mill and explicit in "emergentists" like George Henry Lewes, Samuel Alexander, C. Lloyd Morgan, and C. D. Broad. Brian McLaughlin dubbed these thinkers the "British Emergentists." He developed an "idealized" version of British Emergentism, synthesizing what most of these thinkers had in common into a coherent and representative picture. He says:

British Emergentism maintains that everything is made of matter: There are, for example, no Cartesian souls, or entelechies, vital elan, or the like. And it holds that matter is grainy, rather than continuous; indeed, that it bottoms-out into elementary material particles, atoms or more fundamental particles...Moreover, on its view, nothing happens, no change occurs, without some motion of elementary particles. And all motion is to the beat of the laws of mechanics.

According to British Emergentism, there is a hierarchy of levels of organizational complexity of material particles that includes, in ascending order, the strictly physical, the chemical, the biological, and the psychological level. There are certain kinds of material substances specific to each level. And the kinds of each level are wholly composed of kinds of lower levels, ultimately of kinds of elementary material particles. Moreover, there are certain properties specific to the kinds of substances of a given level. These are the "special properties" of matter...

What is especially striking about British Emergentism, however, is its view about the causal structure of reality. I turn to that view in the following two paragraphs.

British Emergentism maintains that some special science kinds from each special science can be wholly composed of types of structures of material particles that endow the kinds in question with fundamental causal powers. Subtleties aside, the powers in question "emerge" from the types of structures in question...

Now, the exercise of the causal powers in question will involve the production of movements of various kinds. Indeed, Emergentism maintains that special kinds, in virtue of possessing certain types of minute internal structures, have the power to influence motion. And here is the striking point: They endow the kinds with the power to influence motion in ways unanticipated by laws governing less complex kinds and conditions concerning the arrangements of particles. Emergentism is committed to the nomological possibility of what has been called "downward causation".

Minute internal (information-processing) structures that control the motions and arrangements of their component particles are the signature aspect of British Emergentism, one that we have demonstrated with ribosomal control of the twenty kinds of amino acids in living systems and ion-pump control over ions, two and three at a time, in the brain's neural network.

European vitalists like Henri Bergson and Hans Driesch may not have used the term emergence, but they strongly supported the idea of teleological (purposeful), likely non-physical causes, without which they thought that life and mind could not have emerged from physical matter.

Bergson attacked the idea that the "reversibility" of physical systems can be applied to living things. Reversibility was popular in the late nineteenth century as a criticism of the second law of thermodynamics, especially the derivation of this law from statistical mechanics by Ludwig Boltzmann. The laws of Newtonian mechanics are time reversible. Atoms and molecules have no memory of their past. Living systems, however, gain something from their memory of the past.

Driesch was an anti-mechanist who developed a sophisticated form of vitalism that he called "neovitalism."

Driesch saw clear evidence of a kind of teleology in the ability of lower organisms to rebuild their lost limbs and other vital parts. He used Aristotle's term "entelechy" (loosely translated as "having the final cause in") to describe the organism's capacity to rebuild. Driesch said this disproved the theory of preformation from an original cell. Driesch studied the original cells of a sea urchin, after they had divided into two cells, then four, then eight. At each of these stages, Driesch separated out single cells and found that the separated cells went on to develop into complete organisms. This is regarded as the first example of biological cloning.

British emergentists, notably C. D. Broad, rejected Driesch's idea of entelechy as a non-material, non-spatial agent that is neither energy nor a material substance of a special kind, but we should note that entelechy well describes the information content of any cell that lets it develop into a complete organism. Driesch himself maintained that his entelechy theory was something very different from the substance dualism of older vitalisms. So what was Broad's criticism of Driesch? Neither thinker could produce a clear description of his vital element.

Broad was sophisticated in his discussion of emergence. He saw that the kind of emergence that leads to water and its unique chemical properties, when compared to the properties of its molecular components hydrogen and oxygen, has no element of purpose or teleology. The emergence of life (and mind) from physics and chemistry, however, clearly introduces a kind of design or purpose. Modern biologists call it teleonomy, to distinguish it from a metaphysical telos that pre-exists the organism. As Jacob put it, "The goal of every cell is to become two cells."

It seems likely that both Driesch and Broad were trying to grasp this teleonomy.

A number of later "holistic" thinkers gathered for the 1968 Alpbach Symposium organized by Arthur Koestler, which he published as the book Beyond Reductionism. They included Ludwig von Bertalanffy (who had in the 1930s anticipated some of the later work of Erwin Schrödinger and Ilya Prigogine), Paul Weiss, Jerome Bruner, Viktor Frankl, Friedrich Hayek, Jean Piaget, and C. H. Waddington, most of whom thought it likely that further emergent hierarchical levels "over and above" the molecular level would be needed to fully explain biology, and that these levels were unlikely to be deterministic.

We have presented biological and neurological evidence supporting their anti-reductionist ideas about mental causation in particular and the more general problem of downward causation, for example the downward control of the motions of a cell's atoms and molecules by the supervening biological macromolecules. The molecular biology of a cell is not reducible to the laws governing the motions of its component molecules. But are there emergent laws governing motions at the cellular level, the organ level, the organism level, and so on up to the mental level?

If so, these are perhaps simply the laws of the "special sciences" that scientists identify to help them explain their discipline to one another, e.g., Mendel's laws of inheritance in biology, the Weber and Fechner laws of perception in physiology (the just noticeable difference and the logarithmic response to a stimulus), or the law of supply and demand in economics.

What then can we conclude about Jaegwon Kim's attack on the non-reductive physicalism dream of Donald Davidson? Among philosophers of mind, Kim is the standard bearer for a monistic physicalism. Every writer on emergent dualism has to confront his philosophical arguments.

The locus classicus of twentieth-century discussions of philosophy of mind and mental causation is Donald Davidson's 1970 essay "Mental Events," which was revisited in his 1993 essay, "Thinking Causes," published together with 15 critical essays on Davidson's work in the 1993 book Mental Causation, edited by John Heil and Alfred Mele.

What can we then make of Jaegwon Kim’s assertion that "non-reductive physicalism" is simply not possible, that the physical world is "causally closed?" Recall Kim’s view:

“what options are there if we set aside the physicalist picture? Leaving physicalism behind is to abandon ontological physicalism, the view that bits of matter and their aggregates in space-time exhaust the contents of the world. This means that one would be embracing an ontology that posits entities other than material substances — that is, immaterial minds, or souls, outside physical space, with immaterial, nonphysical properties.”

We find the idea of causal closure is a mistake. Quantum physics and the second law of thermodynamics show us that the physical world is not causally closed. The expansion of the universe shows it to be physically and informationally open, otherwise the total amount of information in the universe would be conserved, a constant of nature, which some mathematically inclined philosophers and scientists appear to believe (Roger Penrose and Michael Lockwood, for example).

Jaegwon Kim may be wrong about causal closure, but he is correct that the ontology we embrace does picture the mind as immaterial. Although thoughts in the mind - Descartes’ “thinking substance” - are immaterial, that does not mean that they are “outside physical space.” Rather they are physically present in our brains in the same sense as mathematics is there, as all concepts and ideas are there. They are information, the software in the hardware.

As we saw above, Davidson made three causal claims for mental causation. They can now be adapted to the statistical causality allowed by quantum physics:

1. Mental events are statistically caused by physical events.
2. Causal relations are normally backed by statistical laws.
3. There are no strict deterministic laws for mental events acting on physical events. But statistical causality and adequate determinism are good enough.

Davidson's main goal was to deny the reducibility of mental events to physical events in the lower levels, to deny the claim that the motions of the atoms and molecules at the base level are causally determinative of everything that happens at higher levels. This loss of "bottom up" causal control we can grant, and believe we have provided scientific evidence for it. But at the same time, we can defend a statistical causality that is very often “adequately determined,” which is needed by the upper level to exert its downward causal control.

We can also accept the goal of Nancey Murphy and her colleagues, that there is no neurobiological reductionism. Moreover, despite Murphy’s acceptance of causal determinism, there is no strict neurobiological determinism either.

Finally, we can validate the optimistic view of most emergentists, that there is something “over and above” the material. The mind is not “nothing but” the brain, as eliminative materialists (for example, Daniel Dennett and the Churchlands) believe.

Shall we identify this as an "emergent dualism," or more ambitiously as a "triple-aspect" neutral monism with information the neutral entity at all three levels?

Immaterial / Mind / Subject
Material / Body / Object

Concept / Mind
Subject-Percept / Life
Object / Matter

Quantum physics is no doubt congenial to dualisms, the most notable example being the Bohr-Heisenberg idea of "complementarity," which is still a central theme in the standard "Copenhagen interpretation" of quantum mechanics. Bohr and Heisenberg gave numerous lectures between 1930 and 1960 appealing to philosophers to extend their notion beyond the "wave-particle duality" to philosophical ideas like subject and object, idealism and materialism, and of course mind and matter.

Recall that William James' "radical empiricism" was the original model for neutral monism adopted more or less by Bertrand Russell, Rudolf Carnap, and others. James in turn had been influenced by the positivist scientist Ernst Mach, also a major influence on the Carnap/Schlick/Wittgenstein Vienna Circle.

The "Unus Mundus" of Carl Gustav Jung is a form of neutral monism whose holistic symmetry is "broken" into three mutually compatible parts recognized as a mental domain, a material domain, and an interface between these domains. All three domains are fundamentally informational, thus we can argue for a triple-aspect monism.

It is James' notion of "pure experience" which provides the basis for our experience recorder and reproducer model and the informational theory of mind. ERR is a strong alternative to the computational theory of mind.

The mind is pure information, the software in the hardware, the ghost in the machine, the modern spirit, the end product of the successive emergence of three kinds of information structures in the universe.

Consciousness and the Experience Recorder/Reproducer (ERR)

Consciousness can be defined in information terms as the capacity of an entity (usually a living thing, but we can also include artificially conscious machines or computers) to react to the information (and particularly to changes in the information) in its environment. In the context of information philosophy, we can define this as information consciousness. Thus an animal in a deep sleep is not conscious, because it ignores changes in its environment. And robots may be conscious in our sense. Even the lowliest control system using negative feedback (a thermostat, for example) is in a minimal sense conscious of changes in its environment.
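
The thermostat example can be made concrete with a minimal sketch of negative feedback; the setpoint, dead band, and temperatures below are invented for illustration:

```python
# A thermostat as a minimal "information-conscious" system:
# it reacts to changes in its environment via negative feedback.
class Thermostat:
    def __init__(self, setpoint, band=1.0):
        self.setpoint = setpoint   # desired temperature
        self.band = band           # dead band to avoid rapid toggling
        self.heating = False

    def sense(self, temperature):
        """React to environmental information: turn heat on or off."""
        if temperature < self.setpoint - self.band:
            self.heating = True
        elif temperature > self.setpoint + self.band:
            self.heating = False
        return self.heating

t = Thermostat(setpoint=20.0)
print([t.sense(temp) for temp in (17.0, 19.5, 22.0)])  # [True, True, False]
```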

This definition of consciousness fits with our model of the mind as an experience recorder and reproducer (ERR). The ERR model stands as a major alternative to the popular cognitive science or “computational” model of the mind as a digital computer. No algorithms or stored programs are needed for the ERR model.

The physical metaphor is a non-linear random-access data recorder, where data is stored using content-addressable memory (the memory address is the data content itself). Simpler than a computer with stored algorithms, a better technological metaphor might be a video and sound recorder, enhanced with the ability to record smells, tastes, touches, and, critically, feelings. The biological model is neurons that wire together during an organism’s experiences, in multiple sensory and limbic systems, such that later firing of even a part of the wired neurons can stimulate firing of all or part of the original complex.

Neuroscientists are investigating how diverse signals from multiple pathways can be unified in the brain. We offer no specific insight into these “binding” problems. Nor can we shed much light on the question of philosophical “meaning” of any given information structure, beyond the obvious relevance (survival value) for the organism of remembering past experiences.

A conscious being is constantly recording information about its perceptions of the external world, and most importantly for ERR, it is simultaneously recording its feelings. Sensory data such as sights, sounds, smells, tastes, and tactile sensations are recorded in a sequence along with pleasure and pain states, fear and comfort levels, etc.

All these experiential and emotional data are recorded in association with one another. This means that when the experiences are reproduced (played back in a temporal sequence), the accompanying emotions are once again felt, in synchronization. The capability of reproducing experiences is critical to learning from past experiences, so as to make them guides for action in future experiences. The ERR model is the minimal mind model that provides for such learning by living organisms. The ERR model does not need computer-like decision algorithms to reproduce past experiences. All that is required is that past experiences “play back” whenever they are stimulated by present experiences that resemble the past experiences in one or more ways. When the organism recreates experiences by acting them out, they can become “habitual” and “subconscious” information structures.
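
The record-and-playback mechanism can be sketched as content-addressable recall. The percept encoding and the similarity measure below are invented for the illustration, not part of the ERR model itself:

```python
# Sketch of ERR-style recall: experiences are stored whole (percepts plus
# feelings) and played back when a current experience resembles them.
def similarity(a, b):
    """Fraction of shared percept features (a stand-in for neural overlap)."""
    return len(a & b) / len(a | b)

memory = []  # recorded experiences: (percept features, feeling)

def record(percepts, feeling):
    memory.append((frozenset(percepts), feeling))

def playback(current, threshold=0.3):
    """Reproduce every past experience that resembles the current one."""
    current = frozenset(current)
    return [(p, f) for p, f in memory if similarity(current, p) >= threshold]

record({"smoke", "heat", "crackle"}, "fear")
record({"grass", "sun", "birdsong"}, "comfort")

# A new experience with smoke cues up the fearful memory - and its feeling.
for percepts, feeling in playback({"smoke", "crackle"}):
    print(sorted(percepts), "->", feeling)
```

Note that no decision algorithm appears anywhere: recall is triggered purely by resemblance between the current experience and stored ones, and the recorded feeling comes back with the percepts.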

It is critical that the original emotions play back, along with any variations in current emotions. ERR might then become an explanatory basis for conditioning experiments, classical Pavlovian and operant, and in general a model for associative learning.

Bernard Baars’ Global Workspace Theory uses the metaphor of a “Theater of Consciousness,” in which there is an audience of purposeful agents calling for the attention of the executive on stage.

In the ERR model, vast numbers of past experiences clamor for the attention of the central executive at all times, whenever anything in current experience has some resemblance.

If we define “current experience” as all afferent perceptions and the current contents of consciousness itself, we get a dynamic self-referential system with plenty of opportunities for negative and positive feedback.

William James’ description of a “stream of consciousness,” together with the “blooming, buzzing confusion” of the unconscious and his notion of consciousness as "pure experience," appears to describe the ERR model very well.

In the “blackboard” model of Allen Newell and Herbert Simon, concepts written on the blackboard call up similar concepts by association from deep memory structures. The ERR model supports this view, and explains the mechanism by which concepts (past experiences) come to the blackboard.

In Daniel Dennett’s consciousness model, the mind is made up of innumerable functional homunculi, each with its own goals and purposes. Some of these homunculi are information structures formed genetically, which transmit “learning” or “knowledge” from generation to generation. Others are environmentally and socially conditioned, or consciously learned.

Four “Levels” of Consciousness

Instinctive Consciousness - in animals with little or no learning capability. Automatic reactions to environmental conditions are transmitted genetically. Information about past experiences (those of prior generations of the organism) is present only implicitly in the inherited reactions.

Learned Consciousness - for animals whose past experiences guide current choices. Conscious, but mostly habitual, reactions are developed through experience, including instruction by parents and peers.

Predictive Consciousness - The Sequencer in the ERR system can play back beyond the current situation, allowing the organism to use imagination and foresight to evaluate the future consequences of its choices.

Reflective (Normative) Consciousness - in which conscious deliberation about values influences the choice of behaviors.

Determinism Itself Is An Emergent Property

Determinism itself emerges as the universe evolves. Determinism did not exist in the early universe of pure radiation and sub-atomic particles when all interactions of particles and matter were quantum mechanical and indeterministic.

When small numbers of atoms and molecules interact, their motions and behaviors are indeterministic, governed by the rules of quantum mechanics.

However, when large numbers of microscopic particles get together in aggregates, the indeterminacy of the individual particles is averaged over and macroscopic, adequately deterministic laws "emerge." This allows macroscopic systems to exert downward causal control on their indeterministic atomic and molecular components. Macroscopic systems, in making the transition from quantum to classical, effectively suppress the indeterminism.
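
The averaging can be illustrated numerically: the relative fluctuation of the mean of N independent random components falls off roughly as 1/sqrt(N), so large aggregates behave almost deterministically. The numbers below are an invented statistical illustration, not a physical model:

```python
import random
import statistics

# Relative fluctuation of the mean of N random components shrinks as
# ~1/sqrt(N): individual randomness, aggregate (adequate) determinism.
rng = random.Random(0)

def relative_fluctuation(n_particles, trials=100):
    """Std/mean of the average of n_particles uniform random 'velocities'."""
    means = [statistics.fmean([rng.random() for _ in range(n_particles)])
             for _ in range(trials)]
    return statistics.stdev(means) / statistics.fmean(means)

for n in (10, 1000, 10000):
    print(f"N={n:>5}: relative fluctuation ~ {relative_fluctuation(n):.4f}")
```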

But notice that when indeterminism (randomness, chance) serves some useful purpose (for example, the free creations and inventions of the human mind or the generation of unique antibodies in the immune system), the "adequately determined" macroscopic information-processing system can recruit ontological randomness from its lower-level, physical-chemical substructures.

The first "information structures" that formed in the universe were the elementary sub-atomic particles, produced when the temperature and density of radiation fell below the values at which particles could be stable against immediate destruction by high-energy photons. The first particles were quarks, electrons, and the baryons (protons and neutrons) that became the first atomic nuclei, and these were the only material particles for about 380,000 years. By then the temperature had cooled to about 3000 kelvin, below which electrons could combine with nuclei to form the first atoms (the next "information structures").

It would be 400 million years before the temperature and density were low enough to allow the relatively weak but extremely long-range gravitational force to pull together masses large enough to become planets, stars, and galaxies following Newtonian laws of motion.

"Adequate" determinism and classical mechanics first emerged in the early universe, but the appearance of determinism emerges every day in bodies large enough to average over the indeterministic behavior of their individual particles. Heisenberg's indeterminacy principle Δp Δx ≥ ħ/2 tells us the minimum combined indeterminacy in momentum and position. Just as relativistic effects can be ignored when the velocity of an object is arbitrarily small compared to the velocity of light (when v / c → 0), some argue that quantum effects can be ignored when Planck's quantum of action h goes to zero.

But this is a mistake. For an object at rest, v / c = 0, but the quantum of action h never changes. All physical systems are fundamentally quantum; quantum mechanics is universal. Nevertheless, the indeterminacy in the position and velocity of a large macroscopic object can become arbitrarily small: replacing the momentum p with mv, the indeterminacy relation becomes Δv Δx ≥ ħ / 2m. It is when the mass m is large, and ħ / m approaches 0, that quantum effects can be ignored and the object treated as adequately determined for most practical purposes - as a "classical" object.
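
The scale of this suppression is easy to compute. The masses below are standard textbook values, chosen only to illustrate the contrast between microscopic and macroscopic objects:

```python
# Minimum velocity-position indeterminacy, dv*dx >= hbar/(2m):
# enormous for an electron, utterly negligible for everyday objects.
HBAR = 1.054571817e-34  # reduced Planck constant, J*s

def min_dv_dx(mass_kg):
    """Lower bound on the product of velocity and position indeterminacy."""
    return HBAR / (2 * mass_kg)

for name, m in [("electron", 9.109e-31), ("dust grain", 1e-9), ("1 kg ball", 1.0)]:
    print(f"{name:>10}: dv*dx >= {min_dv_dx(m):.1e} m^2/s")
```

For the 1 kg ball the bound is some thirty orders of magnitude below anything measurable, which is exactly the sense in which the ball is "adequately determined."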

The "laws of nature," such as Newton's laws of motion, are all statistical in nature. They "emerge" when large numbers of atoms or molecules get together. For large enough numbers, the probabilistic laws of nature approach practical certainty. But the fundamental indeterminism of the component atoms never completely disappears. Indeed, quantum events sometimes show up in the macroscopic world: when an alpha-particle decay fires a Geiger counter, when a cosmic ray mutates a gene, when a single photon is detected by an eye, or when a single molecule is smelled by a nose.

Of course, the philosophical "idea" of determinism had to wait for the emergence of human beings and the earliest philosophers (Leucippus, Democritus, and Epicurus). With the exception of Epicurus, very few philosophers before Charles Sanders Peirce and William James could see the positive benefits of some indeterminism and chance, providing creative new alternative possibilities that are not pre-determined.

Information, Entropy, and Evolution

By information we mean a quantity that can be understood mathematically and physically. It corresponds to the common-sense meaning of information, in the sense of communicating or informing. It also corresponds to the information stored in books and computers. But it also measures the information in any physical object, like a recipe, blueprint, or production process, and the information in biological systems, including the genetic code and the cell structures.

Ultimately, the information we mean is the departure of a physical system from pure chaos, from "thermodynamic equilibrium." In equilibrium, there is only motion of the microscopic constituent particles ("the motion we call heat"). The existence of macroscopic structures, such as the stars and planets, and their motions, is a departure from thermodynamic equilibrium.

Information is mathematically related to the measure of disorder known as entropy by Ludwig Boltzmann's famous formula S = k log W, where S is the entropy and W is the "thermodynamic probability" - the number of ways that the internal components (the matter and energy particles of the system) can be rearranged while still constituting the same macroscopic system.
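Boltzmann's formula can be illustrated with a toy system of my own devising (not from the text): one hundred two-state particles, where an ordered macrostate has only one arrangement while a mixed macrostate has enormously many.

```python
import math

k_B = 1.380649e-23  # Boltzmann's constant, J/K

def boltzmann_entropy(W):
    """S = k log W for a macrostate realizable in W microscopic ways."""
    return k_B * math.log(W)

# Toy system (an illustrative assumption): 100 two-state particles.
# The macrostate "50 up, 50 down" can be arranged in C(100, 50) ways;
# the macrostate "all up" can be arranged in exactly one way.
W_mixed = math.comb(100, 50)
W_ordered = 1

print(boltzmann_entropy(W_mixed))    # larger entropy: more disorder
print(boltzmann_entropy(W_ordered))  # zero entropy: perfect order
```

The perfectly ordered state has zero entropy; departures from it, in this sense, carry information.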

The second law of thermodynamics says that the entropy (or disorder) of a closed physical system increases until it reaches a maximum, the state of thermodynamic equilibrium. It requires that the entropy of the universe is now and has always been increasing.

This established fact of increasing entropy led many scientists and philosophers to assume that the universe we have is "running down" to a "heat death." On this view, the universe must have begun in a state of very high information, since the second law requires that any organization or order decay over time. The information that remains today, they conclude, has always been here. There is nothing new under the sun.

But the universe is not a closed system. It is in a dynamic state of expansion that is moving away from thermodynamic equilibrium faster than entropic processes can keep up. The maximum possible entropy is increasing much faster than the actual increase in entropy. The difference between the maximum possible entropy and the actual entropy is potential information, as shown by Harvard cosmologist David Layzer.

Creation of information structures means that in parts of the universe the local entropy is actually going down. Creation of a low entropy system is always accompanied by radiation of entropy away from the local structures to distant parts of the universe, into the night sky for example.

Creation of information structures means that today there is more information in the universe than at any earlier time. This fact of increasing information fits well with an undetermined universe that is still creating itself. In this universe, stars are still forming, biological systems are creating new species, and intelligent human beings are co-creators of the world we live in.

All this creation is the result of the two-step core creative process. Understanding this process is as close as we are likely to come to understanding the idea of an anthropomorphic creator of the universe, a still-present divine providence, the cosmic source of everything good and evil.

Darwinian evolution is another two-step creative process. Darwin proposed that chance variations (in modern terms, mutations in the gene pool) would be followed by natural selection, which identifies the variations with the greatest reproductive success.

A Note on Free Will

The two-step cosmic creation process also underlies the most plausible and practical model for free will. Because each free decision to act also creates information in the world, it too must be a two-stage process, first involving some quantum indeterminism to freely generate alternative possibilities, then an adequately determined decision and action statistically caused by reasons, motives, intentions, feelings and desires.

Darwin inspired William James, the first philosopher to propose a two-stage model of free will, who compared his “mental evolution” explicitly to Darwinian evolution.

The genius of James’s picture of free will is that indeterminism is the source of what James called "alternative possibilities" and "ambiguous futures." The chance generation of such alternative possibilities for action does not in any way determine the choice among them. For James, chance is not the direct cause of actions. He makes it clear that it is the agent's choice that “grants consent” to one of them.

James was the first thinker to enunciate clearly a two-stage decision process, with chance in a present time of random alternatives, leading to a choice which grants consent to one possibility and transforms an equivocal ambiguous future into an unalterable and simple past. There is a temporal sequence of undetermined alternative possibilities followed by an adequately determined choice where chance is no longer a factor.
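The temporal sequence James describes can be sketched in code (an illustrative model of my own, not anything from James or the text): chance generates the alternatives, then a fixed evaluation function, standing in for the agent's reasons and motives, makes the adequately determined choice.

```python
import random

def two_stage_decision(evaluate, n_alternatives=5, rng=None):
    """Illustrative sketch of a two-stage decision.
    Stage 1: chance freely generates alternative possibilities.
    Stage 2: an adequately determined choice "grants consent" to the
    alternative that best fits the agent's reasons and motives."""
    rng = rng or random.Random()
    # Stage 1: undetermined generation of alternative possibilities.
    alternatives = [rng.uniform(0, 1) for _ in range(n_alternatives)]
    # Stage 2: determined evaluation; chance is no longer a factor.
    return max(alternatives, key=evaluate)

# The agent's "character": a fixed evaluation function standing in for
# motives, reasons, and desires (here, an arbitrary preference for 0.5).
character = lambda x: -abs(x - 0.5)

choice = two_stage_decision(character, rng=random.Random(42))
print(choice)
```

Note that rerunning stage 2 on the same set of alternatives always yields the same choice: the randomness is confined to the generation of possibilities, exactly as the two-stage model requires.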

In our history of the free will problem, we have found several thinkers who have developed two-stage solutions to the classical problem of free will, including a few of our conference participants.

Among them are William James, Henri Poincaré, Jacques Hadamard, Arthur Holly Compton, Karl Popper, Daniel Dennett, Henry Margenau, Robert Kane, David Sedley and Anthony Long, Roger Penrose, David Layzer, Julia Annas, Alfred Mele, John Martin Fischer, Stephen Kosslyn, Storrs McCall and E. J. Lowe, John Searle, Uwe Meixner, Massimo Pauri, and Martin Heisenberg.

Robert Kane's "Torn Decisions"

Most two-stage models of free will locate indeterminism in the early deliberative stage, in order to generate alternative possibilities that are not pre-determined. The other place that indeterminism might enter is in the decision itself. This would make the decision random, were it not for Kane's defense of the "torn decision." When such a decision is a moral one, Kane makes it the basis for his "self-forming actions" (SFAs), which provide "ultimate responsibility" (UR).

For detailed information on the most plausible and practical solution to the ancient problem of free will, see my book, Free Will: The Scandal in Philosophy. The full text is available online (PDFs of chapters).

Information and Objective Value

Is the Good something that exists in the world? The Existentialists thought not. Most religions place its origin in a supernatural Being. Humanists hold it to be a human invention. Modern bioethicists situate value in all life. A variety of ancient religions looked to the sun as the source of all life and thus of the good. They anthropomorphized the sun or the "bright sky" as God. Dark and night were stigmatized as evil and "fallen."

Philosophers have ever longed to discover a cosmic good. The ideal source of the good is as remote as possible from the Earth in space and in time: for Kant, a transcendental God outside space and time; for Plato, a timeless Good to be found in Being itself; for his student Aristotle, a property of the first principles that set the world in motion.

Can we discover a cosmic good, or at least identify the source of anything resembling the Good? Yes, we can. Does it resemble the Good anthropomorphized as a God personally concerned about our individual goods? No, it does not. But it has one outstanding characteristic of such a God: it is Providence. We have discovered that which provides. It provides the light, it provides life, it provides intelligence.

We replace the difficult problem of “Does God exist?” with the more tractable problem “Does Goodness exist?” Humanists situate values in reason or human nature. Bioethicists seek to move the source of goodness to the biosphere. Life becomes the summum bonum. Information philosophers look out to the universe as a whole and find a cosmos that grew from a chaos.

Exactly how that is possible requires a profound understanding of the second law of thermodynamics in an expanding and open universe. A very small number of processes that we call ergodic can reduce the entropy locally to create macroscopic information structures like galaxies, stars, and planets and microscopic ones like atoms, molecules, organisms, and human intelligence.

A battle rages between cosmic ergodic processes and chaotic entropic processes that destroy structure and information. Anthropomorphizing these processes as good and evil gives us a dualist image that nicely solves the monotheistic "problem of evil." If God is the Good, God is not responsible for the Evil. Instead, we can clearly see an Ergod who is Divine Providence – the cosmic source without which we would not exist and so a proper object of reverence. And Entropy is the "devil incarnate."

Our moral guide to action is then very simple – preserve information structures against the entropy.

Celebrating the first modern philosopher, René Descartes, we call our model for value the Ergo. For those who want to anthropomorphize on the slender thread of discovering the natural Providence, call it Ergod. No God can be God without being Ergodic.

Ergodic processes are those that resist the terrible and universal Second Law of Thermodynamics, which commands the increase of chaos and entropy (disorder). Without violating that inviolable law overall, they reduce the entropy locally, bringing pockets of cosmos and negative entropy (order and information-rich structures). We call all this cosmic order the Ergo. It is the ultimate sine qua non.

Alexander, Samuel, Space, Time, and Deity
Baars, Bernard. In the Theater of Consciousness
Campbell, Donald. "Downward Causation," in Karl Popper, Schilpp, P.A., ed.
Chalmers, David. 1996 The Conscious Mind, Oxford University Press, New York.
Corradini, Antonella. "Emergent Dualism," in Psycho-Physical Dualism Today: An Interdisciplinary Approach, ed. Jonathan Lowe and Alessandro Antonietti
Corradini, Antonella and Timothy O’Connor, eds. Emergence in Science and Philosophy (Routledge Studies in the Philosophy of Science),
Davidson, Donald. "The 'Mental' and the 'Physical'", in Concepts, Theories, and the Mind-Body Problem, Minnesota Studies in the Philosophy of Science, vol.2, p. 414
Davidson, Donald. 1970, “Mental Events,” in Experience and Theory, University of Massachusetts Press (reprinted in Davidson (1980), pp. 207-227).
Davidson, Donald. (1980). Essays on Actions and Events, Oxford: Clarendon Press.
Deacon, Terrence. Incomplete Nature.
Dennett, Daniel. Brainstorms.
Descartes, René. (1642/1986). Meditations on First Philosophy, translated by John Cottingham, Cambridge: Cambridge University Press.
Descartes, René. Discourse on Method, 6:32
Doyle, Bob. Free Will: The Scandal in Philosophy
Feigl, Herbert. (1958). "The 'Mental' and the 'Physical'" in Minnesota Studies in the Philosophy of Science, vol. II, pp. 370-497.
Fodor, Jerry. (1974). "Special Sciences, or the Disunity of Science as a Working Hypothesis," Synthese 28; 97-115.
Heil, John; and Alfred Mele (eds.). (1993). Mental Causation. Oxford: Clarendon Press.
Heisenberg, Martin. "Is Free Will an Illusion?", Nature.
Hobbes, Thomas. 1654 Of Liberty and Necessity
Jammer, Max. Conceptual Development of Quantum Mechanics
Kim, Jaegwon. 1989 "The Myth of Nonreductive Materialism," in Proceedings of the American Philosophical Association, Princeton University Press, p. 150.
Kim, Jaegwon. (1998). Mind in a Physical World: An Essay on the Mind-Body Problem and Mental Causation. Cambridge, Mass.: MIT Press.
Kim, Jaegwon. (2005). Physicalism, or Something Near Enough, Princeton, Princeton University Press.
Layzer, David. Cosmogenesis.
Lewes, George Henry. Problems of Life and Mind
Lloyd Morgan, Conwy. Emergent Evolution
McLaughlin, Brian. "The Rise and Fall of British Emergentism," in Emergence or Reduction?: Essays on the Prospects of Nonreductive Physicalism, A. Beckermann, H. Flohr, and J. Kim, eds., de Gruyter, 1992.
Mill, John Stuart. A System of Logic
Murphy, Nancey and Warren Brown. Did My Neurons Make Me Do It?
Murphy, Nancey, George E.R. Ellis, and Timothy O’Connor. Downward Causation and the Neurobiology of Free Will
Nagel, Thomas. Mind and Cosmos
Peirce, Charles Sanders. 1892 "The Law of Mind," The Monist, vol. 2, pp. 533-559, Collected Papers of Charles Sanders Peirce, vol VI, p.86
Putnam, Hilary. (1967). "The Nature of Mental States," in Mind, Language, and Reality: Philosophical Papers, vol. II, Cambridge: Cambridge University Press (1975). (Functionalism)
Putnam, Hilary. (1975). "The Meaning of 'Meaning'", in Mind, Language and Reality: Philosophical Papers 2, 1975, Cambridge: Cambridge University Press, pp. 215-71.
Putnam, Hilary. (1988). "The Nature of Mental States," in Representation and Reality, Cambridge, Mass.: MIT Press (1988). (Abandons functionalism)
Prigogine, Ilya. Order Out Of Chaos
Robb, David. (2003). "Mental Causation," The Stanford Encyclopedia of Philosophy, Edward Zalta (ed.).
Sayre, Kenneth. Cybernetics and the Philosophy of Mind
Schrödinger, Erwin. "What Is Life?" in What Is Life? and Other Essays, 1945.
Searle, John.
Shannon, Claude.
Smart, J.J.C.
Sperry, Roger.
Stubenberg, Leopold. "Neutral Monism," The Stanford Encyclopedia of Philosophy, Edward Zalta (ed.).
Yoo, Julie. (2006). "Mental Causation," The Internet Encyclopedia of Philosophy, James Fieser and Bradley Dowden (eds.).