David Layzer's Theory of Free Will
The Arrow of Time
As early as 1971, in an unpublished manuscript on The Arrow of Time, Layzer wrote about the connection between indeterminacy and ethics.
We regard the future as being radically different from the past. We know the past through tangible records, including those contained in our own nervous systems, but we can only make more or less incomplete predictions about the future. Moreover, we believe that we cannot change the past but that we can influence the future, and we base our ethical and judicial systems on this premise. For such notions as praise and blame, reward and punishment, would be meaningless if the future were not only unknown but also in some degree indeterminate.
Cosmogenesis
Layzer may have first written on human freedom in his 1990 book Cosmogenesis. In his concluding chapter, Layzer discusses the problem of human freedom and especially creativity. Although he offers no resolution of the free will problem, he places great emphasis on an unpredictable creativity as the basis of both biological evolution and human activity in a universe with an open future.
Chapter 15: Chance, Necessity, and Freedom
To be fully human is to be able to make deliberate choices. Other animals sometimes have, or seem to have, conflicting desires, but we alone are able to reflect on the possible consequences of different actions and to choose among them in the light of broader goals and values. Because we have this capacity we can be held responsible for our actions; we can deserve praise and blame, reward and punishment. Values, ethical systems, and legal codes all presuppose freedom of the will. So too, as P. F. Strawson has pointed out, do "reactive attitudes" like guilt, resentment, and gratitude. If I am soaked by a summer shower I may be annoyed by my lack of foresight in not bringing an umbrella, but I don't resent the shower. I could have brought the umbrella; the shower just happened.

Freedom has both positive and negative aspects. The negative aspects — varieties of freedom from — are the most obvious. Under this heading come freedom from external and internal constraints. The internal constraints include ungovernable passions, addictions, and uncritical ideological commitments. The positive aspects of freedom are more subtle. Let's consider some examples.

1. A decision is free to the extent that it results from deliberation. Absence of coercion isn't enough. Someone who bases an important decision on the toss of a coin seems to be acting less freely than someone who tries to assess its consequences and to evaluate them in light of larger goals, values, and ethical precepts.

2. Goals, values, and ethical precepts may themselves be accepted uncritically or under duress, or we may feel free to modify them by reflection and deliberation. Many people don't desire this kind of freedom and many societies condemn and seek to suppress it. Freedom and stability are not easy to reconcile, and people who set a high value on stability tend to set a correspondingly low value on freedom. But whether or not we approve of it, the capacity to reassess and reconstruct our own value systems represents an important aspect of freedom.

3. Henri Bergson believed that freedom in its purest form manifests itself in creative acts, such as acts of artistic creation. Jonathan Glover has argued in a similar vein that human freedom is inextricably bound up with the "project of self-creation." The outcomes of creative acts are unpredictable, but not in the same way that random outcomes are unpredictable. A lover of Mozart will immediately recognize the authorship of a Mozart divertimento that he happens not to have heard before. The piece will "sound like Mozart." At the same time, it will seem new and fresh; it will be full of surprises. If it wasn't, it wouldn't be Mozart. In the same way, the outcomes of self-creation are new and unforeseeable, yet coherent with what has gone before.

Although philosophical accounts of human freedom differ, they differ surprisingly little. On the whole, they complement rather than conflict with one another.

What makes freedom a philosophical problem is the difficulty of reconciling a widely shared intuitive conviction that human beings are or can be free (in the ways discussed above or in similar ways) with an objective view of the world as a causally connected system of events. We feel ourselves to be free and responsible agents, but science tells us (or seems to tell us) that we are collections of molecules moving and interacting according to strict causal laws.

For Plato and Aristotle, there was no real difficulty. They believed that the soul initiates motion — that acts of will are the first links of the causal chains in which they figure. With few exceptions, modern neurobiologists have rejected the view of the relation between mind and body that this doctrine implies. They regard mental processes as belonging to the natural world, subject to the same physical laws that govern inanimate matter. The differences between animate and inanimate systems and between conscious and nonconscious nervous processes are not caused by the presence or absence of nonmaterial substances (the breath of life, mind, spirit, soul) but by the presence or absence of certain kinds of order. This conclusion is more than a profession of scientific faith. It becomes unavoidable once we accept the hypothesis of biological evolution, without which, as Theodosius Dobzhansky remarked, nothing in biology makes sense. The evolutionary hypothesis implies that human consciousness evolved from simpler kinds of consciousness, which in turn evolved from nonconscious forms of nervous activity. There is no point in this evolutionary sequence where mind or spirit or soul can plausibly be assumed to have inserted itself "from without." It seems even more implausible to suppose that it was there all along, although, as we saw earlier, some modern philosophers and scientists have held this view.

Karl Popper and other philosophers have tried to resolve the apparent conflict between free will and determinism by attacking the most sacred of natural science's sacred cows, the assumption that all natural processes obey physical laws.

In asserting that there may be phenomena that don't obey physical laws, these philosophers are obviously on safe ground. But the assumption of indeterminism doesn't really help. A freely taken decision or a creative act doesn't just come into being. It is the necessary — and hence law-abiding — outcome of a complex process. Free actions also have predictable — and hence lawful — consequences; otherwise, planning and foresight would be futile. Thus every free act belongs to a causal chain: it is the necessary outcome of a deliberative or creative process, and it has predictable consequences.

Some physicists and philosophers have suggested that quantal indeterminacy may provide leeway for free acts in an otherwise deterministic Universe. Freedom, however, doesn't reside in randomness; it resides in choice.

The standard argument against free will is that neither determinism (necessity) nor indeterminism (chance) can provide freedom.
Plato and Aristotle were right in linking Chance and Necessity as "forces" opposed to design and purpose in the Universe.

Thus freedom seems equally inconsistent with determinism and indeterminism.

Thomas Nagel has suggested that it isn't even possible to give a coherent account of our inner sense of freedom:

When we try to explain what we believe which seems to be undermined by a conception of actions as events in the world — determined or not — we end up with something that is either incomprehensible or clearly inadequate.
"The real problem," Nagel says, "stems from a clash between the view of action from inside and any view of it from outside." Yet the intuitive view of what it means to be free doesn't rest on introspection alone. We recognize other people's spontaneity and creativity even—or especially—when it is of such a high order that we can't imagine ourselves capable of it. We can apprehend the exquisitely ordered unpredictability of Mozart's music without beginning to be able to imagine what it would be like to compose such music. And even subjective impressions of freedom, unlike subjective impressions of pain or of self, aren't hard to describe.
Layzer here describes the generation of alternative possibilities in the first stage of a two-stage model
Consider the process of making a decision. Shall I do A or B? My head says A; my heart says B. I agonize. I try to imagine the consequences first of A, then of B. Suddenly, a new thought occurs to me: C. Yes, I'll do C. The essential aspect of such commonplace experiences is that their outcomes aren't determined in advance but are created by the process of deliberation itself, a process unfolding in time. All creative processes have this character.

Such processes, however, go on not only in people's subjective awareness but also in their brains. Conscious experience gives us a fragmentary and unrepresentative view of its underlying cerebral processes, but there is no reason to suppose that the view is deceptive. On the contrary, modern techniques of imaging brain activity suggest that there is a high degree of structural correspondence between consciousness and brain activity.

Layzer sees that the alternatives are not pre-determined in advance of the process that generates the possibilities
If, then, the outcome of a deliberative or creative process seems undetermined at the outset, if it seems to us that such processes create their outcomes, perhaps the reason is that the outcomes of the underlying cerebral processes are, in some objective sense, undetermined, are, in some objective sense, created by the processes themselves.

I will argue that the neural processes that give rise to subjective experiences of freedom are indeed creative processes, in the sense that they bring into the world kinds of order that didn't exist earlier and weren't prefigured in earlier physical states. These novel and unforeseen products of neural activity include not only works of art, but also the evolving patterns of synaptic connections that underlie the intentions, plans, and projects that guide our commonplace activities. Although consciousness gives us only superficial and incomplete glimpses of this ceaseless constructive activity, we are aware of it almost continuously during our waking hours. This awareness may be the source of — or even constitute — the subjective impression that we participate in molding the future.

Much of the argument that supports this view has already been given in earlier chapters. Let me now try to pull it together around the following three questions:

1. Do all law-abiding processes have predetermined outcomes?
2. What does it mean to say that a physical process creates its outcomes?
3. How is this kind of creativity related to creativity in contexts relevant to the problem of human freedom?
Layzer ignores quantum indeterminacy, which continues to generate undetermined outcomes beyond randomness in the initial conditions
[Answer to question 1]: Do all law-abiding processes have predetermined outcomes? Outcomes are determined by laws plus initial conditions. They are undetermined to the extent that the initial conditions are unspecified.

[Answer to question 2]: A theory of cosmic evolution requires initial conditions. The simplest initial condition is that the Universe began to expand from a purely random state — a state wholly devoid of order. From this postulate, we can easily deduce the Strong Cosmological Principle. The inference hinges on the fact that none of our present physical laws discriminates between different points in space or between different directions at a point. (A physicist would say, "The laws are invariant under spatial translations and rotations.") This implies that no physical process can introduce discriminatory information. So if information that would discriminate between positions or directions is absent at a single moment, it must be absent forever. In short, if the Strong Cosmological Principle is valid at any single moment, it must be valid for all time.

Layzer was the first to answer this question about the growth of order
If the Universe began to expand from a state of utter randomness, how did order come into being? Before reviewing our answer to this question, we have to recall how we dealt with the concept of order itself.

The two key ideas needed to formulate an adequate scientific definition of order were put forward by Ludwig Boltzmann.

Boltzmann had a third idea that influenced Layzer's strong cosmological principle: the infinite nature of space and time
The first idea is the distinction between microstates and macrostates. Macrostates are groups of microstates, defined by their statistical properties. For example, the microstates of a gas may be assigned to macrostates defined by density, temperature, and chemical composition. Proteins may be assigned to macrostates defined by biological fitness. Boltzmann's second key idea was to identify the randomness or entropy of a macrostate with the logarithm of the number of its microstates. Supplementing this definition of randomness, we defined the order or information of a macrostate as the difference between its potential randomness or entropy (the largest value of the randomness or entropy consistent with given constraints) and the actual value. Thus maximally random macrostates have zero order and maximally ordered macrostates have zero randomness. According to these definitions, a physical system far removed from thermodynamic equilibrium (the macrostate of maximum randomness) is highly ordered. So is a protein whose biological fitness can't be improved by changes in its sequence of amino acids: it belongs to a very small subset of the class of polypeptides of the same length.
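The following short sketch in Python (an illustration added here, not Layzer's own) makes these definitions concrete for a toy system of N distinguishable particles shared between the two halves of a box. A macrostate is labeled by the number of particles in the left half; its entropy is the logarithm of the number of microstates it contains, and its order is the shortfall of that entropy from the largest possible value.

    from math import comb, log

    N = 10  # a toy "gas" of 10 distinguishable particles in a two-halved box

    def entropy(k, n=N):
        """Boltzmann entropy: log of the number of microstates in the macrostate
        with k particles in the left half (there are C(n, k) such microstates)."""
        return log(comb(n, k))

    S_max = max(entropy(k) for k in range(N + 1))  # most random macrostate (k = N/2)

    def order(k):
        """Order (information) = potential entropy minus actual entropy."""
        return S_max - entropy(k)

    for k in (0, 2, 5):
        print(f"k={k}: entropy={entropy(k):.3f}, order={order(k):.3f}")
    # k=0 (all particles on one side) is maximally ordered: entropy 0, order = S_max.
    # k=5 (particles evenly spread) is maximally random: entropy = S_max, order 0.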

These definitions of randomness and order are important not just, or even primarily, because they lend precision to the corresponding intuitive notions in a wide range of scientific contexts. They are important primarily because they are adapted to theoretical accounts of the growth and decay of order. Boltzmann himself proved (under restrictive assumptions) that molecular interactions in a gas not already in its most highly random macrostate increase its randomness. In Chapter 8 we saw how the cosmic expansion generates chemical order (chemical abundances far removed from those that would prevail in thermodynamic equilibrium); in Chapter 9 we discussed the origin and growth of structural order in the astronomical Universe; and in Chapters 10 and 11 we saw how random genetic variation and differential reproduction generate the biological order encoded in genetic material.

Astronomical and biological order-generating processes are hierarchically linked in the manner discussed in Chapter 2. Each process requires initial conditions generated by earlier processes. For example, the first self-replicating molecules needed an environment that provided high-grade energy, molecular building blocks, and catalysts. High-grade energy was supplied, directly or indirectly, by sunlight, produced by the burning of hydrogen deep inside the Sun. To understand why hydrogen is so abundant, we have to go back to the early Universe, when the primordial chemical composition of the cosmic medium was laid down by an interplay between nuclear reactions and the cosmic expansion. Apart from hydrogen, the atoms that make up biomolecules (carbon, oxygen, and nitrogen are the most common) were synthesized in exploding stars far more massive than the Sun. So, too, were inorganic catalysts like zinc and magnesium. Finally, the emergence of an environment favorable to life as we know it resulted from planet-building processes, for which we still lack an adequate theory.

Although some of the specific order-generating processes we have discussed are speculative or controversial, the general principles underlying the emergence of order from chaos seem more secure. In particular, we can now understand why, in spite of the second law of thermodynamics, the Universe is not running down. The Second Law states that all natural processes tend to increase randomness. In an ordinary isolated system, the growth of randomness leads inevitably to a decline of order, because the sum of randomness and order is a fixed quantity.

In the expanding Universe, information can increase at the same time as entropy increases, still satisfying the Second Law
The Universe, however, is not an ordinary isolated system. Because space is expanding, the sum of randomness and order is not a fixed quantity; it tends to increase with time. Hence a gap may open up between the actual randomness of the cosmic medium and its maximum possible randomness. This gap represents a form of order. Chemical order (as evidenced by the prevalence of hydrogen) emerges when equilibrium-maintaining chemical reactions can no longer keep pace with the cosmic expansion. Structural order (in the form of astronomical systems) emerges when the uniform state of an expanding medium becomes unstable—that is, less than maximally random.
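In symbols (a schematic restatement added here, not Layzer's notation), with S(t) the actual randomness of the cosmic medium and S_max(t) its maximum possible randomness:

    \mathrm{order}(t) = S_{\max}(t) - S(t)

In an ordinary isolated system S_max is fixed, so the growth of S required by the Second Law forces order to decay. In the expanding Universe, S_max(t) itself increases, so the gap (the order) can widen whenever

    \frac{dS_{\max}}{dt} > \frac{dS}{dt} \geq 0 .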

By making randomness an objective property of the Universe, the Strong Cosmological Principle also objectifies the timebound varieties of order, which consist in the absence of randomness. The infinitely detailed world picture of Laplace's Intelligence is devoid of macroscopic order. It contains no objective counterpart to astronomical or biological order. Laplace's Intelligence is an idiot savant. It knows the position and velocity of every particle in the Universe; but because this vast fund of knowledge (or its quantal counterpart) is complete in itself, there is no room in it for information about stars, galaxies, plants, animals, or states of mind. In this book I have argued that the external world — the world that natural science describes — is fundamentally different from the universe of Laplace and Einstein, which is given once and for all in space and time (or in spacetime). It is a world of becoming as well as being, a world in which order emerged from primordial chaos and begot new forms of order. The processes that have created and continue to create order obey universal and unchanging physical laws. Yet because they generate information, their outcomes are not implicit in their initial conditions.

Recently, Layzer has written two more papers on free will: "Naturalizing Libertarian Free Will" and "Free Will as a Scientific Problem."

Naturalizing Libertarian Free Will
In an unpublished 2010 paper on free will entitled Naturalizing Libertarian Free Will (Word doc), Layzer describes how his strong cosmological principle adds a new and fundamental form of objective indeterminacy to the world. Indeterminacy is necessary, he says, to eliminate the presumption of determinism (which is incompatible with libertarian free will) and make room for indeterminism. Note that Layzer's indeterminacy enters physics not through the measurement postulate of quantum mechanics, which applies in the microscopic domain, but through a cosmological condition, his strong cosmological principle, drawn from the astronomical domain of an assumed infinite universe.
The proposition that physical laws and antecedent conditions determine the outcomes of all physical processes (other than quantum measurements) is widely regarded as the cornerstone of a naturalistic worldview. Defenders of libertarian free will who accept this proposition must therefore choose between two options:
(1) They can argue (against prevailing opinion among neurobiologists) that some brain processes involved in choice and decision-making are, in effect, quantum measurements – that is, that they involve interactions between a microscopic system initially in a definite quantum state and a macroscopic system that registers some definite value of a particular physical quantity.

(2) They can argue that our current physical laws – specifically, the laws of quantum mechanics – need to be revised. For example, the physicist Eugene Wigner has argued that Schrödinger’s equation must be modified to account for certain conscious processes (perceiving the outcome of a quantum measurement).

This paper explores a third possibility: that the presumption of determinism is false. (p.2)

Layzer's strong cosmological principle introduces a new kind of objective indeterminacy. It implies that macroscopic phenomena involve objective, non-epistemic chance.
It entails a picture of the physical universe in which chance prevails in the macroscopic domain (and hence in the world of experience). Because chance plays a key role in the production of genetic variation and in natural selection itself, evolutionary biologists have long advocated such a picture. Chance also plays a key role in other biological processes, including the immune response and visual perception. I argue that reflective choice and deliberation, like these processes and evolution itself, is a creative process mediated by indeterminate macroscopic processes, and that through our choices we help to shape the future. (p.2)

Layzer is not concerned

"with idealized (and often trivial) choices between two given alternatives but with what I’ve called reflective choice, in which the alternatives may not be given beforehand, or not completely given, and in which one works out and evaluates the possible consequences of each imagined alternative. Much of the work involved in such processes undoubtedly does not rise to the level of consciousness. But consciousness accompanies the parts of the process that seem to us most crucial.
Layzer's "alternatives that may not be given beforehand" are generated during the first stage of our two-stage model of free will. They need a continuous source of macroscopic indeterminacy, where Layzer's source is found in the initial conditions of the universe.
The probabilistic source of a new objective indeterminacy and chance
In Layzer's strong cosmological principle (SCP), the source of indeterminacy is the absence of microscopic information in the initial conditions of the universe. Conventional statistical mechanics describes a system as being in some definite but unknown microstate, one of many compatible with the given macrostate. Layzer's SCP says that a complete description provides only a probability distribution over microstates.

These probability distributions are conventionally viewed as incomplete descriptions of systems in definite though unknown, or even unknowable, microstates. Layzer's account interprets them as complete descriptions. Because microstates evolve deterministically, the conventional interpretation implies that macroscopic systems evolve deterministically. In Layzer's view, by contrast, a macroscopic system’s initial state need not uniquely determine its subsequent states. He describes the critical difference between the statistical entropies of Ludwig Boltzmann and J. Willard Gibbs.

Boltzmann’s theory applies to ideal gases; Gibbs’s statistical mechanics applies not only to samples of an ideal gas but to any closed system of N particles governed by the laws of classical mechanics. Its quantum counterpart, quantum statistical mechanics, preserves the overall structure of Gibbs’s theory and its main theorems.

Like Maxwell and Boltzmann, Gibbs identified thermodynamic equilibrium with statistical equilibrium. Boltzmann’s theory reproduces the classical thermodynamic theory of an ideal gas, with the statistical entropy of the probability distribution of a single molecule’s microstates in the role of thermodynamic entropy and a parameter that characterizes the maximum-statistical-entropy probability distribution in the role of absolute temperature. Gibbs’s theory reproduces the whole of classical thermodynamics, with the statistical entropy of the probability distribution of a closed macroscopic system’s microstates in the role of thermodynamic entropy and a parameter that characterizes the maximum-statistical-entropy probability distribution in the role of absolute temperature. So Boltzmann’s theory may appear at first sight to be a limiting case of Gibbs’s. But Boltzmann proved that the statistical entropy of the single-molecule probability distribution increases with time (unless it has already reached its largest admissible value), while Gibbs proved that the statistical entropy of the probability distribution of the microstates of the sample as a whole is constant in time.

The resolution of this apparent contradiction is unproblematic. It hinges on a mathematical property of statistical entropy. The statistical entropy of an N-particle probability distribution can be expressed as the sum of two contributions. The first contribution is N times the statistical entropy of the single-particle distribution. The second contribution is associated with statistical correlations between molecules of the gas sample. The constancy of the N-particle statistical entropy is consistent with the growth of the single-particle contribution. Taken together, Boltzmann’s H theorem and Gibbs’s proof that the statistical entropy of the N-particle probability distribution is constant in time imply that the second contribution – the contribution associated with intermolecular correlations – decreases at a rate that exactly compensates the growth of the first contribution. In terms of information: the decline of single-particle information in a closed gas sample is matched by the growth of correlation information.
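Schematically (in notation added here, not the paper's), write S_N for the statistical entropy of the N-particle distribution, S_1 for that of the single-particle distribution, and S_corr for the contribution of intermolecular correlations. Then

    S_N = N\,S_1 + S_{\mathrm{corr}}, \qquad \frac{dS_N}{dt} = 0 \;\Rightarrow\; \frac{dS_{\mathrm{corr}}}{dt} = -\,N\,\frac{dS_1}{dt} \leq 0 ,

so the growth of Boltzmann's single-particle entropy is exactly compensated by an increasingly negative correlation term, which corresponds to the growth of correlation information described above.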

Although Boltzmann correctly identified the thermodynamic entropy of a closed gas sample with the single-particle statistical entropy, his derivation of the H theorem – the statistical counterpart of the Second Law as applied to an ideal-gas sample – had a technical flaw. The derivation rests on an assumption (known as the Stosszahlansatz) that cannot in fact hold for a closed gas sample. A stronger form of this assumption states that correlation information – the amount by which the information of the single-molecule probability distribution, multiplied by N, falls short of the information of the N-molecule probability distribution – is permanently absent. As we’ve just seen, this assumption cannot be true, because the decay of single-molecule information creates correlation information at the same rate. So even if correlation information is initially absent, it cannot be permanently absent.

The persistence of correlation information in a closed system poses a threat to the very notion of thermodynamic equilibrium.

Free Will as a Scientific Problem
In his second recent paper on free will, Layzer says that "for reasons that have little to do with quantum indeterminism we have the capacity to shape the future through our choices, plans, and actions...quantum indeterminism is not the only form of indeterminism. A variety of macroscopic processes, I will argue, have indeterminate outcomes; chance is endemic in the macroscopic domain." (p.1)

Layzer criticizes quantum mechanics, reviewing the superposition principle (quantum wave functions are probability amplitudes with non-zero values for different states and in different positions at the same time), the axiom of measurement (expectation values predict the outcomes of many experiments), and the projection postulate (measurement collapses the wave function to a single state or location). He describes alternative interpretations of quantum mechanics that avoid the non-intuitive "collapses," by Eugene Wigner and by Hugh Everett (many-worlds).
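In standard textbook notation (added here for orientation; not taken from Layzer's paper), the three ingredients can be summarized as follows. Superposition: a state may be a weighted sum of eigenstates of an observable A,

    |\psi\rangle = \sum_i c_i\,|a_i\rangle .

Axiom of measurement: the expectation value predicts the average outcome over many repeated experiments,

    \langle A \rangle = \langle \psi | \hat{A} | \psi \rangle = \sum_i |c_i|^2\,a_i .

Projection postulate: a measurement that yields the value a_k, which it does with probability |c_k|^2, collapses the state,

    |\psi\rangle \longrightarrow |a_k\rangle .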

He also discusses statistical mechanics, with the intractable problem of microscopic reversibility but macroscopic irreversibility.

Layzer then introduces his Strong Cosmological Principle, the idea that local physical variables are random variables whose probabilities correspond to the frequencies of occurrence of given properties at similar locations distributed throughout the infinite universe.

The classical variables that figure in Einstein’s description of the structure and contents of spacetime are to be interpreted as random variables – mathematical objects characterized not by a definite value at each point of space-time, but by a set of possible values and corresponding probabilities. We can interpret these probabilities as relative frequencies, or proportions, in infinite samples whose members are randomly distributed throughout space.
He says that "this interpretation of Einstein’s description of spacetime and its contents resolves the prima facie conflict between the deterministic character of Einstein’s field equations and the fact that quantum measurements alter the macroscopic structure of spacetime unpredictably." (p.26)

Layzer's account of chance resembles in important ways an account given a century ago by Henri Poincaré, who in the 1880s discovered the phenomenon now called deterministic chaos in his studies of planetary orbits in the three-body problem. The outcomes of chaotic processes depend sensitively on their initial conditions. [James Clerk Maxwell similarly discovered chaotic behavior in hydrodynamic flows twenty years earlier.] Poincaré showed that if the initial values of the parameters that define an orbit are smoothly distributed over a small subrange of their possible values, the possible values of these parameters at a later time will be smoothly distributed over the entire range.
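A minimal numerical sketch of this spreading, added here and using the logistic map as a stand-in for Poincaré's orbital equations (the choice of map is an assumption of the illustration, not Layzer's): a smooth distribution of initial conditions confined to a tiny subrange spreads, after a modest number of deterministic steps, over essentially the whole admissible range.

    import random

    def logistic(x, r=4.0):
        """One step of the chaotic logistic map x -> r * x * (1 - x)."""
        return r * x * (1.0 - x)

    # Initial conditions smoothly distributed over a tiny subrange near 0.3.
    points = [0.3 + 1e-6 * random.random() for _ in range(10_000)]

    for _ in range(60):              # apply the same deterministic law to every point
        points = [logistic(x) for x in points]

    print(min(points), max(points))  # the points now span nearly the whole interval (0, 1)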

A historical account characterizes the initial orbital state by a joint probability distribution of positions and velocities, which evolves into a distribution that characterizes a multitude of observationally distinguishable orbits. To accommodate such situations, Layzer says he needs to

modify the rule that links probability distributions of (classical or quantum) microstates to classical macrostates. The standard rule equates the value of a macroscopic variable in a given macrostate to the result of averaging the corresponding microscopic variable over the probability distribution of microstates that represents the given macrostate. We modify it in three ways.

First, we characterize macrostates by experimentally distinguishable ranges (or aggregates) of microstates, as in the above examples. A probability distribution of microstates may then represent two or more experimentally distinguishable macrostates.

Second, we equate the result of averaging a microscopic variable over such a probability distribution to the result of averaging the measured value of the corresponding macroscopic variable over a “large number” of replicas of the measurement.

Finally, to incorporate into our rule the fact that neither physical laws nor initial and boundary conditions that comply with the strong cosmological principle serve to define a particular position, we interpret the set of replicas mentioned in the preceding paragraph as a “cosmological ensemble” – a set of replicas randomly and uniformly distributed throughout an infinite space. (Like Gibbs’s ensembles, a cosmological ensemble is made up of imaginary replicas. But each replica in a cosmological ensemble is in a definite macrostate. And cosmological ensembles have a physical interpretation: they allow us to express the assumption that physics cannot make unconditional predictions about where in the universe given measurement outcomes are realized.)

These rules enable us to calculate the probabilities of experimentally distinguishable measurement outcomes from measurements of mean values. (p.28)
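A toy illustration of the modified rule, added here with invented names rather than code from the paper: a probability distribution over microstates, macrostates defined as experimentally distinguishable ranges of a macroscopic variable, the probability of each macrostate obtained by summing the microstate probabilities in its range, and the mean value recovered as an average over a large ensemble of replicas, each of which is in a definite macrostate.

    import random

    # Hypothetical microstates: values x of a macroscopic variable and their probabilities.
    microstates = [(x / 100.0, 1.0 / 101.0) for x in range(101)]   # uniform toy distribution

    # Macrostates: experimentally distinguishable ranges (aggregates) of microstates.
    macrostates = {"low": (0.0, 0.33), "middle": (0.33, 0.66), "high": (0.66, 1.01)}

    # One distribution of microstates represents several distinguishable macrostates,
    # each with a probability equal to the summed probability of its microstates.
    probs = {name: sum(p for x, p in microstates if lo <= x < hi)
             for name, (lo, hi) in macrostates.items()}
    print(probs)

    # Averaging the microscopic variable over the distribution ...
    mean_x = sum(x * p for x, p in microstates)

    # ... matches the measured value averaged over a large "cosmological ensemble"
    # of replicas, each replica being in some definite microstate (and macrostate).
    ensemble = random.choices([x for x, _ in microstates],
                              weights=[p for _, p in microstates], k=100_000)
    print(mean_x, sum(ensemble) / len(ensemble))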

Layzer claims that his
historical account of initial conditions offers a new view of the role of chance in macroscopic processes. Physicists have conventionally held that the outcomes of macroscopic processes other than quantum measurements are predictable in principle. Some, though not all, evolutionary biologists have taken issue with this doctrine, which also seems to be at odds with judgments based on ordinary experience. But physics as conventionally interpreted assures us that to a contemporary version of the omniscient mind posited by Laplace in his essay on chance, nothing except quantum measurement outcomes would be unpredictable. The historical account of initial conditions sketched in this essay supports the contrary view suggested by evolutionary biology and experience: much of what we observe in the world around us is influenced by chance. (p.31)
Layzer is then prepared to address the problem of free will. The premise of physical determinism is false, he says.
Defenders of libertarian free will usually grant at the outset that events other than the outcomes of quantum measurements are determined by universal physical laws and antecedent conditions. They must then explain how it can be that we are able to shape the future through our choices and decisions. In this essay I have argued that the premise is false: Events in the macroscopic world are not determined by universal physical laws and antecedent conditions; a wide class of macroscopic processes have indeterminate outcomes. And if the processes involved in reflective choice belong to this class, there is no scientific reason why we should not accept the proposition that we shape the future through our choices and decisions... (p.36)
He asks "How does free will fit into a scientific picture of the world?" (p.38)
Do conscious acts of will cause our voluntary actions? From a thorough examination of the evidence bearing on this question the psychologist Daniel Wegner has concluded that the answer is no. “Conscious will arises from processes that are psychologically and anatomically distinct from the processes whereby mind creates action.” This conclusion accords well with the arguments and conclusions of the present essay.
To summarize his latest paper on free will, Layzer imagines that a large assembly of similar situations in different regions of the infinite universe can provide the source of the macroscopic indeterminism needed for free will, without depending on quantum indeterminism.

In each individual system, everything appears determined, but in the assembly of all systems, the strong cosmological principle ensures there will be a variety of objectively indeterminate outcomes.

Layzer says that because we cannot know which of the many possible systems we are in, our future is indeterminate; more specifically, our current state has not been pre-determined by the initial state of the universe.

Layzer does not explain specifically how the abstract indeterminacy in "cosmological ensembles" affects the human mind/brain, nor why it does not entail that our choices and actions are random (the randomness objection in the standard argument against free will).

As Arthur Stanley Eddington first did in 1927, Layzer finds that his objective physical indeterminacy withdraws the determinist objection to free will. However, where Eddington's indeterminacy came from the then-new quantum mechanics, Layzer's indeterminacy comes from his strong cosmological principle, the abstract notion that there are multiple similar situations for our decisions in other locations - a "cosmological ensemble" - in the infinite universe.

Layzer's "objective indeterminacy" appears to provide no more basis for a free will model than Epicurus's "swerve," William James's "chance," or Werner Heisenberg's quantum indeterminacy, without a more careful explanation of exactly how his indeterminacy figures in the two stages of the decision process.
