While he himself is a confirmed compatibilist, even a determinist, in "On Giving Libertarians What They Say They Want," Chapter 15 of his 1978 book Brainstorms, Daniel Dennett articulated the case for a two-stage model of free will better than any libertarian. His "Valerian" model of decision making, named after the poet Paul Valéry, combines indeterminism to generate alternative possibilities, with (in our view, adequate) determinism to choose among the possibilities.
"The model of decision making I am proposing, has the following feature: when we are faced with an important decision, a consideration-generator whose output is to some degree undetermined produces a series of considerations, some of which may of course be immediately rejected as irrelevant by the agent (consciously or unconsciously). Those considerations that are selected by the agent as having a more than negligible bearing on the decision then figure in a reasoning process, and if the agent is in the main reasonable, those considerations ultimately serve as predictors and explicators of the agent's final decision." (Brainstorms, p.295)
Dennett gives six excellent reasons why this is the kind of free will that libertarians say they want.
At times, Dennett seems pleased with his result.
"This result is not just what the libertarian is looking for, but it is a useful result nevertheless. It shows that we can indeed install indeterminism in the internal causal chains affecting human behavior at the macroscopic level while preserving the intelligibility of practical deliberation that the libertarian requires. We may have good reasons from other quarters for embracing determinism, but we need not fear that macroscopic indeterminism in human behavior would of necessity rob our lives of intelligibility by producing chaos." (p.292) "we need not fear that causal indeterminism would make our lives unintelligible." (p.298) "Even if one embraces the sort of view I have outlined, the deterministic view of the unbranching and inexorable history of the universe can inspire terror or despair, and perhaps the libertarian is right that there is no way to allay these feelings short of a brute denial of determinism. Perhaps such a denial, and only such a denial, would permit us to make sense of the notion that our actual lives are created by us over time out of possibilities that exist in virtue of our earlier decisions; that we trace a path through a branching maze that both defines who we are, and why, to some extent (if we are fortunate enough to maintain against all vicissitudes the integrity of our deliberational machinery) we are responsible for being who we are." (p.299)
At other times, he is skeptical.
His model, he says, "installs indeterminism in the right place for the libertarian, if there is a right place at all," and yet "it seems that all we have done is install indeterminism in a harmless place by installing it in an irrelevant place." (p.295)
Dennett seems to be soliciting interest in the model - from libertarian quarters? It is unfortunate that libertarians did not accept and improve Dennett's two-stage model. See What if Libertarians Had Accepted What Dan Dennett Gave Them in 1978? If they had, the history of the free will problem would have been markedly different for the last thirty years, perhaps successfully reconciling indeterminism with free will, as the best two-stage models now do, just as Hume reconciled freedom with determinism.
"There may not be compelling grounds from this quarter for favoring an indeterministic vision of the springs of our action, but if considerations from other quarters favor indeterminism, we can at least be fairly sanguine about the prospects of incorporating indeterminism into our picture of deliberation, even if we cannot yet see what point such an incorporation would have." (p.299)The point of incorporating indeterminism is of course is to break the causal chain of pre-determinism, and to provide a source for novel ideas that were not already implicit in past events, thus explaining not only free will but creativity. This requires irreducible and ontological quantum indeterminacy. But Dennett does not think that irreducible quantum randomness provides anything essential beyond the deterministic pseudo-random number generation of computer science.
"Isn't it the case that the new improved proposed model for human deliberation can do as well with a random-but-deterministic generation process as with a causally undetermined process? Suppose that to the extent that the considerations that occur to me are unpredictable, they are unpredictable simply because they are fortuitously determined by some arbitrary and irrelevant factors, such as the location of the planets or what I had for breakfast." (p.298)With his strong background in computer science and artificial intelligence, it is no surprise that Dennett continues to seek a "computational" model of the mind. But man is not a machine and the mind is not a computer. Dennett accepts the results of modern physics and does not deny the existence of quantum randomness. He calls himself a "naturalist" who wants to reconcile free will with natural science. But what is "natural" about a computer-generated pseudo-random number sequence? The algorithm that generates it is quintessentially artificial. In the course of evolution, quantum mechanical randomness (along with the incredible quantum stability of information structures, without which no structures at all would exist) is naturally available to generate alternative possibilities. Why would evolution need to create an algorithmic computational capability to generate those possibilities, when genuine and irreducible quantum randomness already provides them? And who, before human computer designers, would be the author or artificer of the algorithm? Gregory Chaitin tells us that the information in a random-number sequence is only as much as is in the algorithm that created the sequence. And note that the artificial algorithm author implicitly has the kind of knowledge attributed to Laplace's Demon. Since Dennett is a confirmed atheist, it seems odd that he has the "antipathy to chance" described by William James that is characteristic of religious believers. 
Quantum randomness is far more atheistic than pseudo-randomness, with the latter's implicit author or artificer. Despite his qualms, Dennett seems to have located randomness in exactly the right place, the random generation of alternative considerations for his adequately determined selection process. In this first stage of free will, genuine quantum randomness (though not pseudo-randomness) breaks the causal chain of Laplacian determinism, without making the decisions themselves random.
Dennett on Austin's Putt

In his 2003 book, Freedom Evolves, Daniel Dennett says that Austin's Putt clarifies the mistaken fear that determinism reduces possibilities. Considering that Dennett is an actualist, who believes there is only one possible future, this bears close examination. First, don't miss the irony that Dennett is using "possible worlds" thinking, which allows the one world we are in only one possible future, our actual world. Dennett says:
Now that we have a clearer understanding of possible worlds, we can expose three major confusions about possibility and causation that have bedeviled the quest for an account of free will. First is the fear that determinism reduces our possibilities. We can see why the claim seems to have merit by considering a famous example proposed many years ago by John Austin:

Consider the case where I miss a very short putt and kick myself because I could have holed it. It is not that I should have holed it if I had tried: I did try, and missed. It is not that I should have holed it if conditions had been different: that might of course be so, but I am talking about conditions as they precisely were, and asserting that I could have holed it. There is the rub. Nor does "I can hole it this time" mean that I shall hole it this time if I try or if anything else; for I may try and miss, and yet not be convinced that I could not have done it; indeed, further experiments may confirm my belief that I could have done it that time, although I did not. (Austin 1961, p. 166)

Austin didn't hole the putt. Could he have, if determinism is true? The possible-worlds interpretation exposes the misstep in Austin's thinking. First, suppose that determinism holds, and that Austin misses, and let H be the sentence "Austin holes the putt." We now need to choose the set X of relevant possible worlds that we need to canvass to see whether he could have made it. Suppose X is chosen to be the set of physically possible worlds that are identical to the actual world at some time t0 prior to the putt. Since determinism says that there is at any instant exactly one physically possible future, this set of worlds has just one member, the actual world, the world in which Austin misses. So, choosing set X in this way, we get the result that H does not hold for any world in X. So it was not possible, on this reading, for Austin to hole the putt.
Of course, this method of choosing X (call it the narrow method) is only one among many. Suppose we were to admit into X worlds that differ in a few imperceptibly microscopic ways from actuality at t0; we might well find that we've now included worlds in which Austin holes the putt, even when determinism obtains. This is, after all, what recent work on chaos has shown: Many phenomena of interest to us can change radically if one minutely alters the initial conditions. So the question is: When people contend that events are possible, are they really thinking in terms of the narrow method? Suppose that Austin is an utterly incompetent golfer, and his partner in today's foursome is inclined to deny that he could have made the putt. If we let X range too widely, we may include worlds in which Austin, thanks to years of expensive lessons, winds up a championship player who holes the putt easily. That is not what Austin is claiming, presumably. Austin seems to endorse the narrow method of choosing X when he insists that he is "talking about conditions as they precisely were." Yet in the next sentence he seems to rescind this endorsement, observing that "further experiments may confirm my belief that I could have done it that time, although I did not." What further experiments might indeed confirm Austin's belief that he could have done it? Experiments on the putting green? Would his belief be shored up by his setting up and sinking near-duplicates of that short putt ten times in a row? If this is the sort of experiment he has in mind, then he is not as interested as he claims he is in conditions as they precisely were. To see this, suppose instead that Austin's "further experiments" consisted in taking out a box of matches and lighting ten in a row. "See," he says, "I could have made that very putt." We would rightly object that his experiments had absolutely no bearing on his claim. 
Sinking ten short putts would have no more bearing on his claim, understood in the narrow sense as a claim about "conditions as they precisely were." We suggest that Austin would be content to consider "Austin holes the putt" possible if, in situations very similar to the actual occasion in question, he holes the putt. We think that this is what he meant, and that he would be right to think about his putt this way. This is the familiar, reasonable, useful way to conduct "further experiments" whenever we are interested in understanding the causation involved in a phenomenon of interest. We vary the initial conditions slightly (and often systematically) to see what changes and what stays the same. This is the way to gather useful information from the world to guide our further campaigns of avoidance and enhancement. Curiously, this very point was made, at least obliquely, by G. E. Moore in the work Austin was criticizing in the passage quoted. Moore's examples were simple: Cats can climb trees and dogs can't, and a steamship that is now traveling at 25 knots can, of course, also steam at 20 knots (but not, of course, in precisely the circumstances it is now in, with the engine set at Full Speed Ahead). The sense of "can" invoked in these uncontroversial claims, the sense called "can (general)" by Honoré (1964) in an important but neglected article, is one that requires us to look not at "conditions as they precisely were" but at minor variations on those conditions. So Austin equivocates when he discusses possibilities. In truth, the narrow method of choosing X does not have the significance that he and many others imagine. From this it follows that the truth or falsity of determinism should not affect our belief that certain unrealized events were nevertheless "possible," in an important everyday sense of the word. 
We can bolster this last claim by paying a visit to a narrow domain in which we know with certainty that determinism reigns: the realm of chess-playing computer programs.
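Dennett's chess-program example can be miniaturized. The toy move-chooser below is a hypothetical illustration, not any real engine: run on identical input it always plays the same move (the narrow set X contains only the actual world), while a microscopic perturbation of the position (a slightly wider X) lets it "do otherwise."

```python
# A toy deterministic "chess program": given the same position it always
# chooses the same move. The rule is fixed: pick the highest-scoring
# candidate move, breaking ties alphabetically.
def choose_move(position):
    # position: dict mapping candidate moves to a numeric evaluation
    return max(sorted(position), key=lambda m: position[m])

pos = {"Nf3": 0.30, "e4": 0.31, "d4": 0.29}
assert choose_move(pos) == choose_move(pos)      # narrow X: one world, one move

perturbed = dict(pos, e4=0.28)                   # a minutely altered world
print(choose_move(pos), choose_move(perturbed))  # e4 Nf3
```

The program is fully deterministic, yet the wider-X question "could it have played otherwise?" gets a sensible yes: in nearby variant conditions, it does.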
Evolution as an Algorithmic Process

Dennett maintains that biological evolution does not need quantum randomness, and says he was shocked by Jacques Monod's claim that random quantum processes are "essential" to evolution. Monod defines the importance of chance, or what he calls "absolute coincidence," as something like the intersection of causal chains that Aristotle calls an "accident." But, says Dennett, in his 1984 book Elbow Room,
when Monod comes to define the conditions under which such coincidences can occur, he apparently falls into the actualist trap. Accidents must happen if evolution is to take place, Monod says, and accidents can happen — "Unless of course we go back to Laplace's world, from which chance is excluded by definition and where Dr. Brown has been fated to die under Jones' hammer ever since the beginning of time." (Chance and Necessity, p. 115) If "Laplace's world" means just a deterministic world, then Monod is wrong. Natural selection does not need "absolute" coincidence. It does not need "essential" randomness or perfect independence; it needs practical independence — of the sort exhibited by Brown and Jones, and Jules and Jim, each on his own trajectory but "just happening" to intersect, like the cards being deterministically shuffled in a deck and just happening to fall into sequence. Would evolution occur in a deterministic world, a Laplacean world where mutation was caused by a nonrandom process? Yes, for what evolution requires is an unpatterned generator of raw material, not an uncaused generator of raw material. Quantum-level effects may indeed play a role in the generation of mutations, but such a role is not required by theory. It is not clear that "genuine" or "objective" randomness of either the quantum-mechanical sort or of the mathematical, informationally incompressible sort is ever required by a process, or detectable by a process. (Chaitin (1976) presents a Gödelian proof that there is no decision procedure for determining whether a series is mathematically random.) Even in mathematics, where the concept of objective randomness can find critical application within proofs, there are cases of what might be called practical indistinguishability.
Dennett's Challenge - Where Indeterminism Might Matter

Dennett has asked for cases where quantum indeterminism would make a substantive improvement over the pseudo-randomness that he thinks is enough for both biological evolution and free will. Dennett does not deny quantum indeterminacy. He just doubts that quantum randomness is necessary for free will. Information philosophy suggests that its great importance is that it breaks the causal chain of pre-determinism. See the page Where, and When, is Randomness Located? for more details on where indeterminism is located, and for the positions of Bob Doyle, Bob Kane, and Al Mele compared to Dennett's Valerian Model of free will. Quantum randomness was available to evolving species for billions of years before pseudo-randomness emerged with humans. But Dennett does not think, as does Jacques Monod, for example, that quantum indeterminacy is necessary for biological evolution. The evolved virtual creatures of artificial life programs demonstrate for Dennett that biological evolution is an algorithmic process.

Here are five cases where quantum chance is critically important and better than pseudo-randomness. They all share a basic insight from information physics. Whenever a stable new information structure is created, two things must happen. The first is a collapse of the quantum wave function that allows one or more particles to combine in the new structure. The second is the transfer away from the structure to the cosmic background of the entropy required by the second law of thermodynamics to balance the local increase in negative entropy (information).
Laplace's Demon

Indeterministic events are unpredictable. Consequently, if any such probabilistic events occur, as Dennett admits, Laplace's demon cannot predict the future. Information cosmology provides a second reason why such a demon is impossible. There was little or no information at the start of the universe. There is a great deal today, and more being created every day. There is not enough information in the past to determine the present, let alone completely determine the future. Creating future information requires quantum events, which are inherently indeterministic. The future is only probable, though it may be "adequately determined." Since there is not enough information available at any moment to comprehend all the information that will exist in the future, Laplace demons are impossible.
Intelligent Designers

Suppose that determinism is true, and that the chance driving spontaneous variation of the gene pool is merely epistemic (human ignorance), so that a deterministic algorithmic process is driving evolution. Gregory Chaitin has shown that the amount of information (and thus the true randomness) in a sequence of random numbers is no more than that in the algorithm that generates them. This makes the process more comprehensible for a supernatural intelligent designer. And it makes the idea of an intelligent designer, deterministically controlling evolution with complete foreknowledge, more plausible. This is unfortunate. An intelligent designer with a big enough computer could reverse engineer and alter the algorithm behind the pseudo-randomness driving evolution. This is just what genetic engineers do. But cosmic rays, which are inherently indeterministic quantum events, damage the DNA to produce genetic mutations, variations in the gene pool. No intelligent designer could control such evolution. So genetic engineers are intelligent designers, but they cannot control the whole of evolution.
Frankfurt Controllers

For almost fifty years, compatibilists have used Frankfurt-style Cases to show that alternative possibilities are not required for freedom of action and moral responsibility. Bob Kane showed in 1985 that, if a choice is undetermined, the Frankfurt controller cannot tell until the choice is made whether the agent will do A or do otherwise. Compatibilists were begging the question by assuming a deterministic connection between a "prior sign" of a decision and the decision itself. More fundamentally, information philosophy tells us that because chance (quantum randomness) helps generate the alternate possibilities, information about the choice does not come into the universe until the choice has been made. Either way, the controller would have to intervene before the choice, in which case it is the controller that is responsible for the decision. Frankfurt controllers do not exist.
Dennett's Eavesdropper

We can call this Dennett's Eavesdropper because, in a discussion of quantum cryptography, Dennett agrees there is a strong reason to prefer quantum randomness to pseudo-randomness for encrypting secure messages. He sees that if a pseudo-random number sequence were used, a clever eavesdropper might discover the algorithm behind it and thus be able to decode the message. Quantum cryptography and quantum computing use the non-local properties of entangled quantum particles. Non-locality shows up when the wave-function of a two-particle system collapses and new information comes into the universe. See the Einstein-Podolsky-Rosen experiment.
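Dennett's concession can be made concrete. The sketch below is an assumption-laden toy, not a real cryptosystem: it uses a linear congruential generator with a known prime modulus, and shows how an eavesdropper who intercepts three consecutive outputs can solve for the hidden multiplier and increment and then predict the rest of the keystream, something impossible in principle with genuinely quantum random keys.

```python
# An eavesdropper who sees a few outputs of a linear congruential
# generator (LCG) can recover its hidden parameters and predict all
# later outputs. The modulus is assumed known, as it typically is for
# standard generators; it is prime here so modular inverses exist.
M = 2**31 - 1

def lcg(x, a, c):
    return (a * x + c) % M

# The sender's secret generator:
a_secret, c_secret, seed = 1103515245, 12345, 42
stream, x = [], seed
for _ in range(4):
    x = lcg(x, a_secret, c_secret)
    stream.append(x)

# The eavesdropper intercepts x1, x2, x3 and solves
#   x3 - x2 = a * (x2 - x1)  (mod M)  for a, then c = x2 - a*x1 (mod M).
x1, x2, x3 = stream[:3]
a_found = ((x3 - x2) * pow(x2 - x1, -1, M)) % M   # modular inverse (Python 3.8+)
c_found = (x2 - a_found * x1) % M
print(lcg(x3, a_found, c_found) == stream[3])      # True: next output predicted
```

A sequence of quantum-random keys has no such hidden algorithm to recover, which is the asymmetry Dennett acknowledges.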
Creating New Memes

Richard Dawkins's unit of cultural information, the meme, has the same limits as purely physical information. Claude Shannon's mathematical theory of the communication of information says that information is not new without probabilistic surprises. Quantum physics is the ultimate source of that probability and of the possibilities that surprise us. If the result were not truly unpredictable, it would be implicitly present in the information we already have. A new meme, like Dennett's intuition pumps, skyhooks, and cranes, would have been predictable there in the past, and not his very original creation. See the Information Philosopher contributions to Dennett's Fall 2010 seminar on Free Will at Tufts University.
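Shannon's criterion can be stated in one line of mathematics: the information carried by an outcome of probability p is log2(1/p) bits. The snippet below is a minimal illustration of the point above: a perfectly predictable outcome (p = 1) carries zero bits, so nothing genuinely new can come from it.

```python
import math

# Shannon surprisal: the information in an outcome of probability p is
# log2(1/p) bits. Certainty (p = 1) yields zero bits -- no surprise, no
# new information; improbable outcomes carry the most.
def surprisal_bits(p):
    return math.log2(1 / p)

print(surprisal_bits(1.0))       # fully predictable: 0.0 bits, no news
print(surprisal_bits(0.5))       # a fair coin flip: 1.0 bit
print(surprisal_bits(1 / 1024))  # a rare, surprising event: 10.0 bits
```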
On Giving Libertarians What They Say They Want
Chapter 15 of Brainstorms, 1978.
Why is the free will problem so persistent? Partly, I suspect, because it is called the free will problem. Hilliard, the great card magician, used to fool even his professional colleagues with a trick he called the tuned deck. Twenty times in a row he'd confound the quidnuncs, as he put it, with the same trick, a bit of prestidigitation that resisted all the diagnostic hypotheses of his fellow magicians. The trick, as he eventually revealed, was a masterpiece of subtle misdirection; it consisted entirely of the name, "the tuned deck", plus a peculiar but obviously non-functional bit of ritual. It was, you see, many tricks, however many different but familiar tricks Hilliard had to perform in order to stay one jump ahead of the solvers. As soon as their experiments and subtle arguments had conclusively eliminated one way of doing the trick, that was the way he would do the trick on future trials. This would have been obvious to his sophisticated onlookers had they not been so intent on finding the solution to the trick. The so called free will problem is in fact many not very closely related problems tied together by a name and lots of attendant anxiety. Most people can be brought by reflection to care very much what the truth is on these matters, for each problem poses a threat: to our self-esteem, to our conviction that we are not living deluded lives, to our conviction that we may justifiably trust our grasp of such utterly familiar notions as possibility, opportunity and ability.*
There is no very good reason to suppose that an acceptable solution to one of the problems will be, or even point to, an acceptable solution to the others, and we may be misled by residual unallayed worries into rejecting or undervaluing partial solutions, in the misguided hope that we might allay all the doubts with one overarching doctrine or theory. But we don't have any good theories. Since the case for determinism is persuasive and since we all want to believe we have free will, compatibilism is the strategic favorite, but we must admit that no compatibilism free of problems while full of the traditional flavors of responsibility has yet been devised. The alternatives to compatibilism are anything but popular. Both the libertarian and the hard determinist believe that free will and determinism are incompatible. The hard determinist says: "So much the worse for free will." The libertarian says: "So much the worse for determinism," at least with regard to human action. Both alternatives have been roundly and routinely dismissed as at best obscure, at worst incoherent. But alas for the compatibilist, neither view will oblige us by fading away. Their persistence, like Hilliard's success, probably has many explanations. I hope to diagnose just one of them. In a recent paper, David Wiggins has urged us to look with more sympathy at the program of libertarianism. Wiggins first points out that a familiar argument often presumed to demolish libertarianism begs the question. The first premise of this argument is that every event is either causally determined or random. Then since the libertarian insists that human actions cannot be both free and determined, the libertarian must be supposing that any and all free actions are random.
But one would hardly hold oneself responsible for an action that merely happened at random, so libertarianism, far from securing a necessary condition for responsible action, has unwittingly secured a condition that would defeat responsibility altogether. Wiggins points out that the first premise, that every event is either causally determined or random, is not the innocent logical truth it appears to be. The innocent logical truth is that every event is either causally determined or not causally determined. There may be an established sense of the word "random" that is unproblematically synonymous with "not causally determined", but the word "random" in common parlance has further connotations of pointlessness or arbitrariness, and it is these very connotations that ground our acquiescence in the further premise that one would not hold oneself responsible for one's random actions. It may be the case that whatever is random in the sense of being causally undetermined, is random in the sense connoting utter meaninglessness, but that is just what the libertarian wishes to deny. This standard objection to libertarianism, then, assumes what it must prove; it fails to show that undetermined action would be random action and hence action for which we could not be held responsible. But is there in fact any reasonable hope that the libertarian can find some defensible ground between the absurdity of "blind chance" on the one hand and on the other what Wiggins calls the cosmic unfairness of the determinist's view of these matters? Wiggins thinks there is. He draws our attention to a speculation of Russell's: "It might be that without infringing the laws of physics, intelligence could make improbable things happen, as Maxwell's demon would have defeated the second law of thermodynamics by opening the trap door to fast-moving particles and closing it to slow-moving particles."
Wiggins sees many problems with the speculation, but he does, nevertheless, draw a glimmer of an idea from it.
For indeterminism maybe all we really need to imagine or conceive is a world in which (a) there is some macroscopic indeterminacy founded in microscopic indeterminacy, and (b) an appreciable number of the free actions or policies or deliberations of individual agents, although they are not even in principle hypothetico-deductively derivable from antecedent conditions, can be such as to persuade us to fit them into meaningful sequences. We need not trace free actions back to volitions construed as little pushes aimed from outside the physical world. What we must find instead are patterns which are coherent and intelligible in the low level terms of practical deliberation, even though they are not amenable to the kind of generalization or necessity which is the stuff of rigorous theory. (p. 52)

The "low level terms of practical deliberation" are, I take it, the familiar terms of intentional or reason-giving explanation. We typically render actions intelligible by citing their reasons, the beliefs and desires of the agent that render the actions at least marginally reasonable under the circumstances. Wiggins is suggesting then that if we could somehow make sense of human actions at the level of intentional explanation, then in spite of the fact that those actions might be physically undetermined, they would not be random. Wiggins invites us to take this possibility seriously, but he has little further to say in elaboration or defense of this. He has said enough, however, to suggest to me a number of ways in which we could give libertarians what they seem to want. Wiggins asks only that human actions be seen to be intelligible in the low-level terms of practical deliberation. Surely if human actions were predictable in the low-level terms of practical deliberation, they would be intelligible in those terms.
So I propose first to demonstrate that there is a way in which human behavior could be strictly undetermined from the physicist's point of view while at the same time accurately predictable from the intentional level. This demonstration, alas, will be very disappointing, for it relies on a cheap trick and what it establishes can be immediately seen to be quite extraneous to the libertarian's interests. But it is a necessary preamble to what I hope will be a more welcome contribution to the libertarian's cause. So let us get the disappointing preamble behind us. Here is how a bit of human behavior could be undetermined from the physicist's point of view, but quite clearly predictable by the intentionalist. Suppose we were to build an electronic gadget that I will call an answer box. The answer box is designed to record a person's answers to simple questions. It has two buttons, a Yes button, and a No button, and two foot pedals, a Yes pedal, and a No pedal, all clearly marked. It also has a little display screen divided in half, and on one side it says "use the buttons" and on the other side it says "use the pedals". We design this bit of apparatus so that only one half of this display screen is illuminated at any one time. Once a minute, a radium randomizer determines, in an entirely undetermined way of course, whether the display screen says "use the buttons" or "use the pedals". I now propose the following experiment. First, we draw up a list of ten very simple questions that have Yes or No answers, questions of the order of difficulty of "Do fish swim?" and "Is Texas bigger than Rhode Island?" We seat a subject at the answer box and announce that a handsome reward will be given to those who correctly follow all the experimental instructions, and a bonus will be given to those who answer all our questions correctly. Now, can the physicist in principle predict the subject's behavior? 
Let us suppose the subject is in fact a physically deterministic system, and let us suppose further that the physicist has perfect knowledge of the subject's initial state, all the relevant deterministic laws, and all the interactions within the closed situation of the experimental situation. Still, the unpredictable behavior of the answer box will infect the subject on a macroscopic scale with its own indeterminacy on at least ten occasions during the period the physicist must predict. So the best the physicist can do is issue a multiple disjunctive or multiple conditional prediction. Can the intentionalist do any better? Yes, of course. The intentionalist, having read the instructions given to the subject and having sized up the subject as a person of roughly normal intelligence and motivation, and having seen that all the odd numbered questions have Yes answers and the even numbered questions have No answers, confidently predicts that the subject will behave as follows: "The subject will give Yes answers to questions 1, 3, 5, 7, and 9, and the subject will answer the rest of the questions in the negative". There are no if's, or's or maybe's in those predictions. They are categorical and precise — precise enough for instance to appear in a binding contract or satisfy a court of law. This is, of course, the cheap trick I warned you about. There is no real difference in the predictive power of the two predictors. The intentionalist for instance is no more in a position to predict whether the subject will move finger or foot than the physicist is, and the physicist may well be able to give predictions that are tantamount to the intentionalist's. 
The physicist may for instance be able to make this prediction: "When question 6 is presented, if the illuminated sign on the box reads use the pedals, the subject's right foot will move at velocity k until it depresses the No pedal n inches, and if the illuminated sign says use the buttons, the subject's right index finger will trace a trajectory terminating on the No button." Such a prediction is if anything more detailed than the intentionalist's simple prediction of the negative answer to question 6, and it might in fact be more reliable and better grounded as well. But so what? What we are normally interested in, what we are normally interested in predicting, moreover, is not the skeletal motion of human beings but their actions, and the intentionalist can predict the actions of the subject (at least insofar as most of us would take any interest in them) without the elaborate rigmarole and calculations of the physicist. The possibility of indeterminacy in the environment of the kind introduced here, and hence the possibility of indeterminacy in the subject's reaction to that environment, is something with regard to which the intentionalistic predictive power is quite neutral. Still, we could not expect the libertarian to be interested in this variety of undetermined human behavior, behavior that is undetermined simply because the behavior of the answer box, something entirely external to the agent, is undetermined. Suppose then we move something like the answer box inside the agent. It is a commonplace of action theory that virtually all human actions can be accomplished or realized in a wide variety of ways. There are, for instance, indefinitely many ways of insulting your neighbor, or even of asserting that snow is white. And we are often not much interested, nor should we be, in exactly which particular physical motion accomplishes the act we intend. So let us suppose that our nervous system is so constructed
and designed that whenever in the implementation of an intention, our control system is faced with two or more options with regard to which we are non-partisan, a purely undetermined tie-breaking "choice" is made. There you are at the supermarket, wanting a can of Campbell's Tomato Soup, and faced with an array of several hundred identical cans of Campbell's Tomato Soup, all roughly equidistant from your hands. What to do? Before you even waste time and energy pondering this trivial problem, let us suppose, a perfectly random factor determines which can your hand reaches out for. This is of course simply a variation on the ancient theme of Buridan's ass, that unfortunate beast who, finding himself hungry, thirsty and equidistant between food and water, perished for lack of the divine nudge that in a human being accomplishes a truly free choice. This has never been a promising vision of the free choice of responsible agents, if only because it seems to secure freedom for such a small and trivial class of our choices. What does it avail me if I am free to choose this can of soup, but not free to choose between buying and stealing it? But however unpromising the idea is as a centerpiece for an account of free will, we must not underestimate its possible scope of application. Such trivial choice points seldom obtrude in our conscious deliberation, no doubt, but they are quite possibly ubiquitous nonetheless at an unconscious level. Whenever we choose to perform an action of a certain sort, there are no doubt slight variations in timing, style and skeletal implementation of those actions that are within our power but beneath our concern. For all we know, which variation occurs is undetermined. That is, the implementation of any one of our intentional actions may encounter undetermined choice points in many places in the causal chain. 
The resulting behavior would not be distinguishable to our everyday eyes, or from the point of view of our everyday interests, from behavior that was rigidly determined. What we are mainly interested in, as I said before, are actions, not motions, and what we are normally interested in predicting are actions. It is worth noting that not only can we typically predict actions from the intentional stance without paying heed to possibly undetermined variations of implementation of these actions, but we can even put together chains of intentional predictions that are relatively immune to such variation. In the summer of 1974 many people were confidently predicting that Nixon would resign. As the day and hour approached, the prediction grew more certain and more specific as to time and place; Nixon would resign not just in the near future, but in the next hour, and in the White House and in the presence of television cameramen and so forth. Still, it was not plausible to claim to know just how he would resign, whether he would resign with grace, or dignity, or with an attack on his critics, whether he would enunciate clearly or mumble or tremble. These details were not readily predictable, but most of the further dependent predictions we were interested in making did not hinge on these subtle variations. However Nixon resigned, we could predict that Goldwater would publicly approve of it, Cronkite would report that Goldwater had so approved of it, Sevareid would comment on it, Rodino would terminate the proceedings of the Judiciary Committee, and Gerald Ford would be sworn in as Nixon's successor. 
Of course some predictions we might have made at the time would have hinged crucially on particular details of the precise manner of Nixon's resignation, and if these details happened to be undetermined both by Nixon's intentions and by any other feature of the moment, then some human actions of perhaps great importance would be infected by the indeterminacy of Nixon's manner at the moment just as our exemplary subject's behavior was infected by the indeterminacy of the answer box. That would not, however, make these actions any the less intelligible to us as actions. This result is not just what the libertarian is looking for, but it is a useful result nevertheless. It shows that we can indeed install indeterminism in the internal causal chains affecting human behavior at the macroscopic level while preserving the intelligibility of practical deliberation that the libertarian requires. We may have good reasons from other quarters for embracing determinism, but we need not fear that macroscopic indeterminism in human behavior would of necessity rob our lives of intelligibility by producing chaos. Thus, philosophers such as Ayer and Hobart, who argue that free will requires determinism, must be wrong. There are some ways our world could be macroscopically indeterministic, without that fact remotely threatening the coherence of the intentionalistic conceptual scheme of action description presupposed by claims of moral responsibility. Still, it seems that all we have done is install indeterminism in a harmless place by installing it in an irrelevant place. The libertarian would not be relieved to learn that although his decision to murder his neighbor was quite determined, the style and trajectory of the death blow was not. Clearly, what the libertarian has in mind is indeterminism at some earlier point, prior to the ultimate decision or formation of intention, and unless we can provide that, we will not aid the libertarian's cause. 
But perhaps we can provide that as well. Let us return then, to Russell's speculation that intelligence might make improbable things happen. Is there any way that something like this could be accomplished? The idea of intelligence exploiting randomness is not unfamiliar. The poet, Paul Valéry, nicely captures the basic idea:
It takes two to invent anything. The one makes up combinations; the other one chooses, recognizes what he wishes and what is important to him in the mass of the things which the former has imparted to him. What we call genius is much less the work of the first one than the readiness of the second one to grasp the value of what has been laid before him and to choose it.*

Here we have the suggestion of an intelligent selection from what may be a partially arbitrary or chaotic or random production, and what we need is the outline of a model for such a process in human decision-making. An interesting feature of most important human decision-making is that it is made under time pressure. Even if there are, on occasion, algorithmic decision procedures giving guaranteed optimal solutions to our problems, and even if these decision procedures are in principle available to us, we may not have time or energy to utilize them. We are rushed, but moreover, we are all more or less lazy, even about terribly critical decisions that will affect our lives — our own lives, to say nothing of the lives of others. We invariably settle for a heuristic decision procedure; we satisfice (The term is Herbert Simon's. See his The Sciences of the Artificial (1969) for a review of the concept.); we poke around hoping for inspiration; we do our best to think about the problem in a more or less directed way until we must finally stop mulling, summarize our results as best we can, and act. A realistic model of such decision-making just might have the following feature: When someone is faced with an important decision, something in him generates a variety of more or less relevant considerations bearing on the decision. Some of these considerations, we may suppose, are determined to be generated, but others may be non-deterministically generated. 
For instance, Jones, who is finishing her dissertation on Aristotle and the practical syllogism, must decide within a week whether to accept the assistant professorship at the University of Chicago, or the assistant professorship at Swarthmore. She considers the difference in salaries, the probable quality of the students, the quality of her colleagues, the teaching load, the location of the schools, and so forth. Let us suppose that considerations A, B, C, D, E, and F occur to her and that those are the only considerations that occur to her, and that on the basis of those, she decides to accept the job at Swarthmore. She does this knowing of course that she could devote more time and energy to this deliberation, could cast about for other relevant considerations, could perhaps dismiss some of A-F as being relatively unimportant and so forth, but being no more meticulous, no more obsessive, than the rest of us about such matters, she settles for the considerations that have occurred to her and makes her decision. Let us suppose though, that after sealing her fate with a phone call, consideration G occurs to her, and she says to herself: "If only G had occurred to me before, I would certainly have chosen the University of Chicago instead, but G didn't occur to me". Now it just might be the case that exactly which considerations occur to one in such circumstances is to some degree strictly undetermined. If that were the case, then even the intentionalist, knowing everything knowable about Jones' settled beliefs and preferences and desires, might nevertheless be unable to predict her decision except perhaps conditionally. The intentionalist might be able to argue as follows: "If considerations A-F occur to Jones, then she will go to Swarthmore," and this would be a prediction that would be grounded on a rational argument based on considerations A-F according to which Swarthmore was the best place to go. 
The intentionalist might go on to add, however, that if consideration G also occurs to Jones (which is strictly unpredictable unless we interfere and draw Jones' attention to G), Jones will choose the University of Chicago instead. Notice that although we are supposing that the decision is in this way strictly unpredictable except conditionally by the intentionalist, whichever choice Jones makes is retrospectively intelligible. There will be a rationale for the decision in either case; in the former case a rational argument in favor of Swarthmore based on A-F, and in the latter case, a rational argument in favor of Chicago, based on A-G. (There may, of course, be yet another rational argument based on A-H, or I, or J, in favor of Swarthmore, or in favor of going on welfare, or in favor of suicide.) Even if in principle we couldn't predict which of many rationales could ultimately be correctly cited in justification or retrospective explanation of the choice made by Jones, we could be confident that there would be some sincere, authentic, and not unintelligible rationale to discover. The model of decision making I am proposing has the following feature: when we are faced with an important decision, a consideration-generator whose output is to some degree undetermined produces a series of considerations, some of which may of course be immediately rejected as irrelevant by the agent (consciously or unconsciously). Those considerations that are selected by the agent as having a more than negligible bearing on the decision then figure in a reasoning process, and if the agent is in the main reasonable, those considerations ultimately serve as predictors and explicators of the agent's final decision. What can be said in favor of such a model, bearing in mind that there are many possible substantive variations on the basic theme? First, I think it captures what Russell was looking for. 
The intelligent selection, rejection and weighting of the considerations that do occur to the subject is a matter of intelligence making the difference. Intelligence makes the difference here because an intelligent selection and assessment procedure determines which microscopic indeterminacies get amplified, as it were, into important macroscopic determiners of ultimate behavior. Second, I think it installs indeterminism in the right place for the libertarian, if there is a right place at all. The libertarian could not have wanted to place the indeterminism at the end of the agent's assessment and deliberation. It would be insane to hope that after all rational deliberation had terminated with an assessment of the best available course of action, indeterminism would then intervene to flip the coin before action. It is a familiar theme in discussions of free will that the important claim that one could have done otherwise under the circumstances is not plausibly construed as the claim that one could have done otherwise given exactly the set of convictions and desires that prevailed at the end of rational deliberation. So if there is to be a crucial undetermined nexus, it had better be prior to the final assessment of the considerations on the stage, which is right where we have located it. Third, I think that the model is recommended by considerations that have little or nothing to do with the free will problem. It may well turn out to be that from the point of view of biological engineering, it is just more efficient and in the end more rational that decision-making should occur in this way. 
Time rushes on, and people must act, and there may not be time for a person to canvass all his beliefs, conduct all the investigations and experiments that he would see were relevant, assess every preference in his stock before acting, and it may be that the best way to prevent the inertia of Hamlet from overtaking us is for our decision-making processes to be expedited by a process of partially random generation and test. Even in the rare circumstances where we know there is, say, a decision procedure for determining the optimal solution to a decision problem, it is often more reasonable to proceed swiftly and by heuristic methods, and this strategic principle may in fact be incorporated as a design principle at a fairly fundamental level of cognitive-conative organization. A fourth observation in favor of the model is that it permits moral education to make a difference, without making all of the difference. A familiar argument against the libertarian is that if our moral decisions were not in fact determined by our moral upbringing, or our moral education, there would be no point in providing such an education for the young. The libertarian who adopted our model could answer that a moral education, while not completely determining the generation of considerations and moral decision-making, can nevertheless have a prior selective effect on the sorts of considerations that will occur. A moral education, like mutual discussion and persuasion generally, could adjust the boundaries and probabilities of the generator without rendering it deterministic. Fifth - and I think this is perhaps the most important thing to be said in favor of this model — it provides some account of our important intuition that we are the authors of our moral decisions. The unreflective compatibilist is apt to view decision-making on the model of a simple balance or scale on which the pros and cons of action are piled. 
What gets put on the scale is determined by one's nature and one's nurture, and once all the weights are placed, gravity as it were determines which way the scale will tip, and hence determines which way we will act. On such a view, the agent does not seem in any sense to be the author of the decisions, but at best merely the locus at which the environmental and genetic factors bearing on him interact to produce a decision. It all looks terribly mechanical and inevitable, and seems to leave no room for creativity or genius. The model proposed, however, holds out the promise of a distinction between authorship and mere implication in a causal chain. Consider in this light the difference between completing a lengthy exercise in long division and constructing a proof in, say, Euclidian geometry. There is a sense in which I can be the author of a particular bit of long division, and can take credit if it turns out to be correct, and can take pride in it as well, but there is a stronger sense in which I can claim authorship of a proof in geometry, even if thousands of school children before me have produced the very same proof. There is a sense in which this is something original that I have created. To take pride in one's computational accuracy is one thing, and to take pride in one's inventiveness is another, and as Valery claimed, the essence of invention is the intelligent selection from among randomly generated candidates. I think that the sense in which we wish to claim authorship of our moral decisions, and hence claim responsibility for them requires that we view them as products of intelligent invention, and not merely the results of an assiduous application of formulae. I don't want to overstate this case; certainly many of the decisions we make are so obvious, so black and white, that no one would dream of claiming any special creativity in having made them and yet would still claim complete responsibility for the decisions thus rendered. 
But if we viewed all our decision-making on those lines, I think our sense of our dignity as moral agents would be considerably impoverished. Finally, the model I propose points to the multiplicity of decisions that encircle our moral decisions and suggests that in many cases our ultimate decision as to which way to act is less important phenomenologically as a contributor to our sense of free will than the prior decisions affecting our deliberation process itself: the decision, for instance, not to consider any further, to terminate deliberation; or the decision to ignore certain lines of inquiry. These prior and subsidiary decisions contribute, I think, to our sense of ourselves as responsible free agents, roughly in the following way: I am faced with an important decision to make, and after a certain amount of deliberation, I say to myself: "That's enough. I've considered this matter enough and now I'm going to act," in the full knowledge that I could have considered further, in the full knowledge that the eventualities may prove that I decided in error, but with the acceptance of responsibility in any case. I have recounted six recommendations for the suggestion that human decision-making involves a non-deterministic generate-and-test procedure. First, it captures whatever is compelling in Russell's hunch. Second, it installs indeterminism in the only plausible locus for libertarianism (something we have established by a process of elimination). Third, it makes sense from the point of view of strategies of biological engineering. Fourth, it provides a flexible justification of moral education. Fifth, it accounts at least in part for our sense of authorship of our decisions. Sixth, it acknowledges and explains the importance of decisions internal to the deliberation process. 
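Nothing in the argument turns on implementation, but the non-deterministic generate-and-test procedure just recounted can be given a toy sketch in code. Everything concrete below is hypothetical — the pool of considerations, their weights, and the scoring rule are invented for illustration, and the "undetermined" consideration-generator is merely simulated with a seeded pseudo-random sampler:

```python
import random

# Hypothetical pool of considerations: (label, option it favors, weight).
# None of these figures come from the text; they are stand-ins for A-G.
POOL = [
    ("A: salary",         "Chicago",    2),
    ("B: students",       "Swarthmore", 3),
    ("C: colleagues",     "Swarthmore", 2),
    ("D: teaching load",  "Swarthmore", 1),
    ("E: location",       "Chicago",    1),
    ("F: library",        "Swarthmore", 1),
    ("G: research funds", "Chicago",    6),  # decisive, if it ever occurs to her
]

def valerian_decision(pool, options, n_considered, seed=None):
    """Two-stage 'Valerian' sketch: stage 1 generates which considerations
    happen to occur (simulated here by a seeded PRNG, so random but
    deterministic); stage 2 weighs the ones that occurred and
    deterministically picks the option with the greatest total weight."""
    rng = random.Random(seed)
    occurred = rng.sample(pool, n_considered)   # stage 1: generation
    score = {opt: 0 for opt in options}
    for _label, favors, weight in occurred:     # stage 2: rational assessment
        score[favors] += weight
    return max(options, key=score.get), occurred
```

Which considerations happen to occur varies with the seed, so the decision can differ from run to run; yet whichever way it goes, the selected considerations supply a retrospective rationale. Jones's case, where a late-occurring G would have tipped the choice to Chicago, has exactly this shape.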
It is embarrassing to note, however, that the very feature of the model that inspired its promulgation is apparently either gratuitous or misdescribed or both, and that is the causal indeterminacy of the generator. We have been supposing, for the sake of the libertarian, that the process that generates considerations for our assessment generates them at least in part by a physically or causally undetermined or random process. But here we seem to be trading on yet another imprecision or ambiguity in the word "random". When a system designer or programmer relies on a "random" generation process, it is not a physically undetermined process that is required, but simply a patternless process. Computers are typically equipped with a random number generator, but the process that generates the sequence is a perfectly deterministic and determinate process. If it is a good random number generator (and designing one is extraordinarily difficult, it turns out) the sequence will be locally and globally patternless. There will be a complete absence of regularities on which to base predictions about unexamined portions of the sequence. Isn't it the case that the new improved proposed model for human deliberation can do as well with a random-but-deterministic generation process as with a causally undetermined process? Suppose that to the extent that the considerations that occur to me are unpredictable, they are unpredictable simply because they are fortuitously determined by some arbitrary and irrelevant factors, such as the location of the planets or what I had for breakfast. It appears that this alternative supposition diminishes not one whit the plausibility or utility of the model that I have proposed. Have we in fact given the libertarians what they really want without giving them indeterminism? Perhaps. 
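The point that a good pseudo-random number generator is "a perfectly deterministic and determinate process" can be made concrete with a minimal linear congruential generator — a standard textbook construction, not anything from the text (the multiplier and increment below are commonly published constants):

```python
def lcg(seed, n, a=1664525, c=1013904223, m=2**32):
    """Minimal linear congruential generator. The update rule
    x -> (a*x + c) mod m is fully deterministic, yet its output stream is
    locally patternless enough to serve where 'randomness' is wanted."""
    out, x = [], seed
    for _ in range(n):
        x = (a * x + c) % m
        out.append(x)
    return out

# Determinate through and through: the same seed always yields
# the same "random" sequence.
assert lcg(42, 5) == lcg(42, 5)
```

Whether the consideration-generator works like this — random but causally determined — or is instead genuinely undetermined would, as the passage argues, make no detectable difference to the model.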
We have given the libertarians the materials out of which to construct an account of personal authorship of moral decisions, and this is something that the compatibilistic views have never handled well. But something else has emerged as well. Just as the presence or absence of macroscopic indeterminism in the implementation style of intentional actions turned out to be something essentially undetectable from the vantage point of our Lebenswelt, a feature with no significant repercussions in the "manifest image", to use Sellars' term, so the rival descriptions of the consideration generator, as random-but-causally-deterministic versus random-and-causally-indeterministic, will have no clearly testable and contrary implications at the level of micro-neurophysiology, even if we succeed beyond our most optimistic fantasies in mapping deliberation processes onto neural activity. That fact does not refute libertarianism, or even discredit the motivation behind it, for what it shows once again is that we need not fear that causal indeterminism would make our lives unintelligible. There may not be compelling grounds from this quarter for favoring an indeterministic vision of the springs of our action, but if considerations from other quarters favor indeterminism, we can at least be fairly sanguine about the prospects of incorporating indeterminism into our picture of deliberation, even if we cannot yet see what point such an incorporation would have. Wiggins speaks of the cosmic unfairness of determinism, and I do not think the considerations raised here do much to allay our worries about that. Even if one embraces the sort of view I have outlined, the deterministic view of the unbranching and inexorable history of the universe can inspire terror or despair, and perhaps the libertarian is right that there is no way to allay these feelings short of a brute denial of determinism. 
Perhaps such a denial, and only such a denial, would permit us to make sense of the notion that our actual lives are created by us over time out of possibilities that exist in virtue of our earlier decisions; that we trace a path through a branching maze that both defines who we are, and why, to some extent (if we are fortunate enough to maintain against all vicissitudes the integrity of our deliberational machinery) we are responsible for being who we are. That prospect deserves an investigation of its own. All I hope to have shown here is that it is a prospect we can and should take seriously.
"Could Have Done Otherwise"
Chapter 6 of Elbow Room, 1984.
I. Do We Care Whether We Could Have Done Otherwise?

In the midst of all the discord and disagreement among philosophers about free will, there are a few calm islands of near unanimity. As van Inwagen notes:
Almost all philosophers agree that a necessary condition for holding an agent responsible for an act is believing that the agent could have refrained from performing that act. (van Inwagen 1975, p.189)

But if this is so, then whatever else I may have done in the preceding chapters, I have not yet touched the central issue of free will, for I have not yet declared a position on the "could have done otherwise" principle: the principle that holds that one has acted freely (and responsibly) only if one could have done otherwise. It is time, at last, to turn to this central, stable area in the logical geography of the free will problem. First I will show that this widely accepted principle is simply false. Then I will turn to some residual problems about the meaning of "can"—Austin's frog at the bottom of the beer mug (see chapter one, page 19). The "could have done otherwise" principle has been debated for generations, and the favorite strategy of compatibilists—who must show that free will and determinism are compatible after all—is to maintain that "could have done otherwise" does not mean what it seems at first to mean; the sense of the phrase denied by determinism is irrelevant to the sense required for freedom. It is so obvious that this is what the compatibilists have to say that many skeptics view the proffered compatibilist "analyses" of the meaning of "could have done otherwise" as little more than self-deceived special pleading. James (1921, p.149) called this theme "a quagmire of evasion" and Kant (Critique of Practical Reason, Abbot translation 1873, p.96) called it a "wretched subterfuge." Instead of rising to the defense of any of the earlier analyses — many of which are quite defensible so far as I can see — I will go on the offensive. I will argue that whatever "could have done otherwise" actually means, it is not what we are interested in when we care about whether some act was freely and responsibly performed. 
There is, as van Inwagen notes, something of a tradition of simply assuming that the intuitions favoring the "could have done otherwise" principle are secure. But philosophers who do assume this do so in spite of fairly obvious and familiar grounds for doubt. One of the few philosophers to challenge it is Frankfurt, who has invented a highly productive intuition pump that generates counterexamples in many flavors: cases of overdetermination, where an agent deliberately and knowingly chose to do something, but where — thanks typically to some hovering bogeyman — if he hadn't so chosen, the bogeyman would have seen to it that he did the thing anyway (Frankfurt 1969, but see also van Inwagen 1978 and 1983, and Fischer 1982). Here is the basic, stripped-down intuition pump (minus the bells and whistles on the variations, which will not concern us — but only because we will not be relying on them):
Jones hates Smith and decides, in full possession of his faculties, to murder him. Meanwhile Black, the nefarious neurosurgeon (remember him?), who also wants Smith dead, has implanted something in Jones' brain so that just in case Jones changes his mind (and chickens out), Black, by pushing his special button, can put Jones back on his murderous track. In the event Black doesn't have to intervene; Jones does the deed all on his own.

In such a case, Frankfurt claims, the person would be responsible for his deed, since he chose it with all due deliberation and wholeheartedness, in spite of the lurking presence of the overdeterminer whose hidden presence makes it the case that Jones couldn't have done otherwise. I accept Frankfurt's analysis of these cases (that is, I think they can be defended against the objections raised by van Inwagen, Fischer, and others), and think these thought experiments are useful in spite of their invocation of imaginary bogeymen, for they draw attention to the importance, for responsibility, of the actual causal chain of deliberation and choice running through the agent—whatever may be happening elsewhere. But Frankfurt's strategy seems to me to be insufficiently ambitious. Although he takes his counterexamples to show that the "could have done otherwise" principle—which he calls the principle of alternate possibilities—is irremediably false, his counterexamples are rather special and unlikely cases, and they invite the defender of the principle to try for a patch: modify the principle slightly to take care of Frankfurt's troublesome cases. Exotic circumstances do little or nothing to dispel the illusion that in the normal run of things, where such overdetermination is lacking, the regnant principle is indeed that if a person could not have refrained (could not have done otherwise), he would not be held responsible. But in fact, I will argue, it is seldom that we even seem to care whether or not a person could have done otherwise. 
And when we do, it is often because we wish to draw the opposite conclusion about responsibility from the one tradition endorses. "Here I stand," Luther said. "I can do no other." Luther claimed that he could do no other, that his conscience made it impossible for him to recant. He might, of course, have been wrong, or have been deliberately overstating the truth. But even if he was — perhaps especially if he was — his declaration is testimony to the fact that we simply do not exempt someone from blame or praise for an act because we think he could do no other. Whatever Luther was doing, he was not trying to duck responsibility. There are cases where the claim "I can do no other" is an avowal of frailty: suppose what I ought to do is get on the plane and fly to safety, but I stand rooted on the ground and confess I can do no other — because of my irrational and debilitating fear of flying. In such a case I can do no other, I claim, because my rational control faculty is impaired. But in other cases, like Luther's, when I say I cannot do otherwise I mean I cannot because I see so clearly what the situation is and because my rational control faculty is not impaired. It is too obvious what to do; reason dictates it; I would have to be mad to do otherwise, and since I happen not to be mad, I cannot do otherwise. (Notice, by the way, that we say it was "up to" Luther whether or not to recant, and we do not feel tempted to rescind that judgment when we learn that he claimed he could do no other. Notice, too, that we often say things like this: "If it were up to me, I know for certain what I would do.") I hope it is true — and think it very likely is true — that it would be impossible to induce me to torture an innocent person by offering me a thousand dollars. "Ah" — comes the objection — "but what if some evil space pirates were holding the whole world ransom, and promised not to destroy the world if only you would torture an innocent person? 
Would that be something you would find impossible to do?" Probably not, but so what? That is a vastly different case. If what one is interested in is whether under the specified circumstances I could have done otherwise, then the other case mentioned is utterly irrelevant. I claimed it would not be possible to induce me to torture someone for a thousand dollars. Those who hold dear the principle of "could have done otherwise" are always insisting that we should look at whether one could have done otherwise in exactly the same circumstances. I claim something stronger; I claim that I could not do otherwise even in any roughly similar case. I would never agree to torture an innocent person for a thousand dollars. It would make no difference, I claim, what tone of voice the briber used, or whether or not I was tired and hungry, or whether the proposed victim was well illuminated or partially concealed in shadow. I am, I hope, immune to all such offers. Now why would anyone's intuitions suggest that if I am right, then if and when I ever have occasion to refuse such an offer, my refusal would not count as a responsible act? Perhaps this is what some people think: they think that if I were right when I claimed I could not do otherwise in such cases, I would be some sort of zombie, "programmed" always to refuse thousand-dollar bribes. A genuinely free agent, they think, must be more volatile somehow. If I am to be able to listen to reason, if I am to be flexible in the right way, they think, I mustn't be too dogmatic. Even in the most preposterous cases, then, I must be able to see that "there are two sides to every question." I must be able to pause, and weigh up the pros and cons of this suggested bit of lucrative torture. But the only way I could be constituted so that I can always "see both sides" — no matter how preposterous one side is — is by being constituted so that in any particular case "I could have done otherwise." That would be fallacious reasoning. 
Seeing both sides of the question does not require that one not be overwhelmingly persuaded, in the end, by one side. The flexibility we want a responsible agent to have is the flexibility to recognize the one-in-a-zillion case in which, thanks to that thousand dollars, not otherwise obtainable, the world can be saved (or whatever). But the general capacity to respond flexibly in such cases does not at all require that one "could have done otherwise" in the particular case, but only that under some variations in the circumstances—the variations that matter—one would do otherwise. It might be useful to compare two cases that seem quite different at first, but belong on a continuum.

I. Suppose I know that if I ever see the full moon, I will probably run amok and murder the first person I see. So I make careful arrangements to have myself locked up in a windowless room on several nights each month. I am thus rendered unable to do the awful things I would do otherwise. Moreover, it is thanks to my own responsible efforts that I have become unable to do these things. A fanciful case, no doubt, but consider the next case, which is somewhat more realistic.

II. Suppose I know that if I ever see a voluptuous woman walking unescorted in a deserted place I will probably be overcome by lust and rape her. So I educate myself about the horrors of rape from the woman's point of view, and enliven my sense of the brutality of the crime so dramatically that if I happen to encounter such a woman in such straits, I am unable to do the awful thing I would have done otherwise. (What may convince me that I would otherwise have done this thing is that when the occasion arises I experience a considerable inner tumult; I discover myself shaking the bars of the cage I have built for myself.) Thanks to my earlier responsible efforts, I have become quite immune to this rather more common sort of possession; I have done what had to be done to render certain courses of action unthinkable to me.
Like Luther, I now can do no other.

Suppose — to get back all the way to realism — that our parents and teachers know that if we grow up without a moral education, we will become selfish, untrustworthy and possibly dangerous people. So they arrange to educate us, and thanks to their responsible efforts, our minds recoil from thoughts of larceny, treachery and violence. We find such alternatives unthinkable under most normal circumstances, and moreover have been taught to think ahead for ourselves and to contribute to our own moral development. Doesn't a considerable part of being a responsible person consist in making oneself unable to do the things one would be blamed for doing if one did them? Philosophers have often noted, uneasily, that the difficult moral problem cases, the decisions that "might go either way," are not the only, or even the most frequent, sorts of decisions for which we hold people responsible. They have seldom taken the hint to heart, however, and asked whether the "could have done otherwise" principle was simply wrong. I grant that we do indeed often ask ourselves whether an agent could have done otherwise — and in particular whether or not we ourselves could have done otherwise — in the wake of some regrettable act. But we never show any interest in trying to answer the question we have presumably just asked! Defenders of the principle suppose that there is a sense of "could have done otherwise" according to which, if determinism is true, no one ever could have done otherwise than he did. Suppose they are right that there is such a sense. Is it the sense we intend when we use the words "could he have done otherwise?" to inaugurate an inquiry into an agent's responsibility for an act he committed? It is not.
In pursuing such inquiries we manifestly ignore the sort of investigations that would have to be pursued if we really were interested in the answer to that question, the metaphysicians' question about whether or not the agent was completely determined by the state of the universe at that instant to perform that action. If our responsibility really did hinge, as this major philosophical tradition insists, on the question of whether we ever could do otherwise than we in fact do in exactly those circumstances, we would be faced with a most peculiar problem of ignorance: it would be unlikely in the extreme, given what now seems to be the case in physics, that anyone would ever know whether anyone has ever been responsible. For today's orthodoxy is that indeterminism reigns at the subatomic level of quantum mechanics, so in the absence of any general and accepted argument for universal determinism, it is possible for all we know that our decisions and actions are truly the magnified, macroscopic effects of quantum-level indeterminacies occurring in our brains. But it is also possible, for all we know, that even though indeterminism reigns in our brains at the subatomic quantum mechanical level, our macroscopic decisions and acts are all themselves determined; the quantum effects could just as well be self-canceling, not amplified (as if by organic Geiger counters in the neurons). And it is extremely unlikely, given the complexity of the brain at even the molecular level (a complexity for which the word "astronomical" is a vast understatement), that we could ever develop good evidence that any particular act was such a large-scale effect of a critical subatomic indeterminacy. 
So if someone's responsibility for an act did hinge on whether, at the moment of decision, that decision was (already) determined by a prior state of the world, then barring a triumphant return of universal determinism in microphysics (which would rule out all responsibility on this view), the odds are very heavy that we will never have any reason to believe of any particular act that it was or was not responsible. The critical difference would be utterly inscrutable from every macroscopic vantage point, and practically inscrutable from the most sophisticated microphysical vantage point imaginable. Some philosophers might take comfort in this conclusion, but I would guess that only a philosopher could take comfort in it. To say the very least it is hard to take seriously the idea that something that could matter so much could be so magnificently beyond our ken. (Or look at the point another way: those who claim to know that they have performed acts such that they could have done otherwise in exactly those circumstances must admit that they proclaim this presumably empirical fact without benefit of the slightest shred of evidence, and without the faintest hope of ever obtaining any such evidence.)* Given the sheer impossibility of conducting any meaningful investigation into the question of whether or not an agent could have done otherwise, what can people think they are doing when they ask that question in particular cases? They must take themselves to be asking some other question. They are right; they are asking a much better question. (If a few people have been asking the unanswerable metaphysical question, they were deluded into it by philosophy.) The question people are really interested in asking is a better question for two reasons: it is usually empirically answerable, and its answer matters. For not only is the traditional metaphysical question unanswerable; its answer, even if you knew it, would be useless. 
What good would it do to know, about a particular agent, that on some occasion (or on every occasion) he could have done otherwise than he did? Or that he could not have done otherwise than he did? Let us take the latter case first. Suppose you knew (because God told you, presumably) that when Jones pulled the trigger and murdered his wife at time t, he could not have done otherwise. That is, given Jones' microstate at t and the complete microstate of Jones' environment (including the gravitational effects of distant stars, and so on) at t, no other Jones-trajectory was possible than the trajectory he took. If Jones were ever put back into exactly that state again, in exactly that circumstance, he would pull the trigger again. And if he were put in that state a million times, he would pull the trigger a million times. Now if you learned this, would you have learned anything about Jones? Would you have learned anything about his character, for instance, or his likely behavior on merely similar occasions? No. Although people are physical objects which, like atoms or ball bearings or bridges, obey the laws of physics, they are not only more complicated than anything else we know in the universe, they are also designed to be so sensitive to the passing show that they never can be in the same microstate twice. One doesn't even have to descend to the atomic level to establish this. People learn, and remember, and get bored, and shift their attention, and change their interests so incessantly, that it is as good as infinitely unlikely that any person is ever in the same (gross) psychological or cognitive state on two occasions. And this would be true even if we engineered the surrounding environment to be "utterly the same" on different occasions—if only because the second time around the agent would no doubt think something that went unthought the first time, like "Oh my, this all seems so utterly familiar; now what did I do last time?" 
(see chapter two, page 33) There is some point in determining how a bridge is caused to react to some very accurately specified circumstances, since those may be circumstances it will actually encounter in its present state on a future occasion. But there would be no payoff in understanding to be gained by determining the micro-causation of the behavior of a human being in some particular circumstance, since he will certainly never confront that micro-circumstance again, and even if he did, he would certainly be in a significantly different reactive state at the time. Learning (from God, again) that a particular agent was not thus determined to act would be learning something equally idle, from the point of view of character assessment or planning for the future. As we saw in chapter five, the undetermined agent will be no more flexible, no more versatile, no more sensitive to nuances, no more reformable, than his deterministic cousin. So if anyone is interested at all in the question of whether or not one could have done otherwise in exactly the same circumstances (and internal state), this will have to be a particularly pure metaphysical curiosity—that is to say, a curiosity so pure as to be utterly lacking in any ulterior motive, since the answer could not conceivably make any noticeable difference to the way the world went.* Why, though, does it still seem as if there ought to be a vast difference, somehow visible from the ordinary human vantage point, between a world in which we could not do otherwise and a world in which we could? Why should determinism still seem so appalling? Perhaps we are misled by the God's-eye-view image, "sub specie aeternitatis," in which we spy our own life-trajectories in space and time laid out from birth to death in a single, fixed, rigid, unbranching, four-dimensional "space-time worm," pinned to the causal fabric and unable to move. 
(Causation, in Hume's fine metaphor, is "the cement of the universe" (Mackie 1974), so perhaps we see our entire lives as cast in concrete, trapped like a fossil in the unchanging slab of space-time.) What we would like, it seems, is for someone to show us that we can move about in that medium. But this is a confusion; if we feel this yearning it is because we have forgotten that time is one of the dimensions we have spatialized in our image. Scanning from left to right is scanning from past to future, and a vertical slice of our image captures a single moment in time. To have elbow room in that medium—to be able to wiggle and squirm in between the fixed points of birth and death for instance—would not be to have the power to choose in an undetermined way, but to have the power to choose two or more courses at one time. Is that what we want—to have our cake and eat it too? To have chosen both to marry and to remain unmarried, both to pull the trigger and to drop the gun? If that is the variety of free will we want, then whether or not it might be worth wanting, we can be quite confident that it must elude us — unless, perhaps, we adopt Everett's many-worlds interpretation of quantum mechanics, in which case it just might follow that we do lead a zillion lives (though our many alter egos, alas, could never get together and compare notes)! If we let go of that fantasy and ask what we really, soberly want, we find a more modest hope: while there are indeed times when we would give anything to be able to go back and undo something in the past, we recognize that the past is closed for us, and we would gladly settle for an "open future." But what would an open future be? A future in which our deliberation is effective: a future in which if I decide to do A then I will do A, and if I decide to do B then I will do B; a future in which — since only one future is possible — the only possible thing that can happen is the thing I decide in the end to do.
2. What We Care About

If it is unlikely, then, that it matters whether or not a person could have done otherwise (when we look microscopically closely at the causation involved), what is the other question that we are really interested in when we ask "but could he have done otherwise?" Once more I am going to use the tactic of first answering a simpler question about simpler entities. Consider a similar question that might arise about our deterministic robot, the Mark I Deterministic Deliberator. By hypothesis, it lives its entire life as a deterministic machine on a deterministic planet, so that whatever it does, it could not have done otherwise, if we mean that in the strict and metaphysical sense of those words that philosophers have concentrated on. Suppose then that one fine Martian day it makes a regrettable mistake; it concocts and executes a scheme that destroys something valuable — another robot, perhaps. I am not supposing, for the moment, that it can regret anything* but just that its designers, back on Earth, regret what it has done, and find themselves wondering a wonder that might naturally be expressed: could it have done otherwise? They know it is a deterministic system, of course, so they know better than to ask the metaphysical question. Their question concerns the design of the robot; for in the wake of this regrettable event they may wish to redesign it slightly, to make this sort of event less likely in the future.* What they want to know, of course, is what information the robot was relying on, what reasoning or planning it did, and whether it did "enough" of the right sort of reasoning or planning. Of course in one sense of "enough" they know the robot did not do enough of the right sort of thing; if it had, it would have done the right thing. But it may be that the robot's design in this case could not really be improved.
For it may be that it was making optimal use of optimally designed heuristic procedures — but this time, unluckily, the heuristic chances it took didn't pay off. Put the robot in a similar situation in the future, and thanks to no more than the fact that its pseudo-random number generator is in a different state, it will do something different; in fact it will usually do the right thing. It is tempting to add: it could have done the right thing on this occasion—meaning by this that it was well enough designed, at that time, to have done the right thing (its "character" is not impugned). Its failure depended on nothing but the fact that something undesigned (and unanticipatable) happened to intervene in the process in a way that made an unfortunate difference. A heuristic program is not guaranteed to yield the "right" or sought-after result. Some heuristic programs are better than others; when one fails, it may be possible to diagnose the failure as assignable to some characteristic weakness in its design. But even the best are not foolproof, and when they fail, as they sometimes must, there may be no reason at all for the failure: as Cole Porter would say, it was just one of those things. Such failures are not the only cases of failures that will "count" for the designers as cases where the system "could have done otherwise." If they discover that the robot's failure, on this occasion, was due to a "freak" bit of dust that somehow drifted into a place where it could disrupt the system, they may decide that this was such an unlikely event that there is no call to redesign the system to guard against its recurrence.* They will note that, in the micro-particular case, their robot could not have done otherwise; moreover, if (by remotest possibility) it ever found itself in exactly the same circumstance again, it would fail again. But the designers will realize that they have no rational interest in doing anything to improve the design of the robot.
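The situation of a deterministic deliberator whose choices nonetheless vary with the state of its pseudo-random number generator can be illustrated with a toy sketch. Everything here is my invention for illustration, not anything from the text: the deliberator, its option set, and the sampling width are all made up.

```python
import random

def deliberate(payoffs, rng):
    # A toy heuristic deliberator: it cannot afford to weigh every option,
    # so it samples a subset of "considerations" and takes the best it sees.
    considered = rng.sample(payoffs, k=7)
    return max(considered)

payoffs = list(range(10))   # ten options; option 9 is the right choice
rng = random.Random(1234)   # the robot's pseudo-random number generator

# The whole system is deterministic: re-seed it and it repeats itself exactly.
# But face it with a similar situation later, when its generator is in a
# different state, and it may entertain different considerations and choose
# differently: usually well, occasionally (through no flaw in its design) badly.
choices = [deliberate(payoffs, rng) for _ in range(20)]
print(choices)
```

Re-running the script reproduces the identical sequence of choices (in the micro-particular sense the robot "could not have done otherwise"), yet the choices cluster at or near 9, and any stray failure to pick 9 reflects unlucky sampling rather than a design defect worth repairing.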
It failed on the occasion, but its design is nevertheless above reproach. There is a difference between being optimally designed and being infallible. (See chapter seven.) Consider yet another sort of case. The robot has a ray gun that it fires with 99.9 percent accuracy. That is to say, sometimes, over long distances, it fails to hit the target it was aiming at. Whenever it misses, the engineers want to know something about the miss: was it due to some systematic error in the controls, some foible or flaw that will keep coming up, or was it just one of those things — one of those "acts of God" in which, in spite of an irreproachable execution of an optimally designed aiming routine, the thing just narrowly missed? There will always be such cases; the goal is to keep them to a minimum — consistent with cost-effectiveness of course. Beyond a certain point, it isn't worth caring about errors. Quine (1960, pp. 182 and 259) notes that engineers have a concept of more than passing philosophical interest: the concept of "don't-cares" — the cases that one is rational to ignore. When they are satisfied that a particular miss was a don't-care, they may shrug and say: "Well, it could have been a hit." What concerns the engineers when they encounter misperformance in their robot is whether or not the misperformance is a telling one: does it reveal something about a pattern of systematic weakness, likely to recur, or an inappropriate and inauspicious linking between sorts of circumstances and sorts of reactions? Is this sort of thing apt to happen again, or was it due to the coincidental convergence of fundamentally independent factors, highly unlikely to recur? To get evidence about this they ignore the micro-details, which will never be the same again in any case, and just average over them, analyzing the robot into a finite array of macroscopically defined states, organized in such a way that there are links between the various degrees of freedom of the system. 
The question they then ask is this: are the links the right links for the task?* This rationale for ignoring micro-determinism (wherever it may "in principle" exist) and squinting just enough to blur such fine distinctions into probabilistically related states and regions that can be treated as homogeneous is clear, secure, and unproblematic in science, particularly in engineering and biology, as we have seen. (See Wiener 1948 and Wimsatt 1980.) That does not mean, of course, that this is also just the right way to think of people, when we are wondering if they have acted responsibly. But there is a lot to be said for it. Why do we ask "could he have done otherwise?" We ask it because something has happened that we wish to interpret. An act has been performed, and we wish to understand how the act came about, why it came about, and what meaning we should attach to it. That is, we want to know what conclusions to draw from it about the future. Does it tell us anything about the agent's character, for instance? Does it suggest a criticism of the agent that might, if presented properly, lead the agent to improve his ways in some regard? Can we learn from this incident that this is or is not an agent who can be trusted to behave similarly on similar occasions in the future? If one held his character constant, but changed the circumstances in minor—or even major—ways, would he almost always do the same lamentable sort of thing? Was what we have just observed a "fluke," or was it a manifestation of a "robust" trend—a trend that persists, or is constant, over an interestingly wide variety of conditions?* When the agent in question is oneself, this rationale is even more plainly visible. Suppose I find I have done something dreadful. Who cares whether, in exactly the circumstances and state of mind I found myself, I could have done something else?
I didn't do something else, and it's too late to undo what I did.* But when I go to interpret what I did, what do I learn about myself? Ought I to practice the sort of maneuver I botched, in hopes of making it more reliable, less vulnerable to perturbation, or would that be wasted effort? Would it be a good thing, so far as I can tell, for me to try to adjust my habits of thought in such sorts of cases in the future? Knowing that I will always be somewhat at the mercy of the considerations that merely happen to occur to me as time rushes on, knowing that I cannot entirely control this process of deliberation, I may take steps to bias the likelihood of certain sorts of considerations routinely "coming to mind" in certain critical situations. For instance, I might try to cultivate the habit of counting to ten in my mind before saying anything at all about Ronald Reagan, having learned that the deliberation time thus gained pays off handsomely in cutting down regrettable outbreaks of intemperate commentary. Or I might decide that no matter how engrossed in conversation I am, I must learn to ask myself how many glasses of wine I have had every time I see someone hovering hospitably near my glass with a bottle. This time I made a fool of myself; if the situation had been quite different, I certainly would have done otherwise; if the situation had been virtually the same, I might have done otherwise and I might not. The main thing is to see to it that I will jolly well do otherwise in similar situations in the future. That, certainly, is the healthy attitude to take toward the regrettable parts of one's recent past. It is the self-applied version of the engineers' attitude toward the persisting weaknesses in the design of the robot. 
Of course if I would rather find excuses than improve myself, I may dwell on the fact that I don't have to "take" responsibility for my action, since I can always imagine a more fine-grained standpoint from which my predicament looms larger than I do. (If you make yourself really small, you can externalize virtually everything.) In chapter seven I will say more about the rationale for being generous with one's self-ascriptions of responsibility. But for now I will just draw attention to a familiar sort of case in which we hover in the vicinity of asking whether we really could have done otherwise, and then (wisely) back off. One often says, after doing something awful, "I'm terribly sorry; I simply never thought of the consequences; it simply didn't occur to me what harm I was doing!" This looks almost like the beginning of an excuse—"Can I help it what occurs to me and what doesn't?"—but healthy self-controllers shun this path. They take responsibility for what might be, very likely is, just an "accident," just one of those things. That way, they make themselves less likely to be "accident" victims in the future.

3. The Can of Worms

The chance of the quantum-theoretician is not the ethical freedom of the Augustinian, and Tyche is as relentless a mistress as Ananke. —Norbert Wiener (1948, p. 49)

These edifying reflections invite one final skeptical thrust: "You paint a rosy picture of self-controllers doing the best they can to improve their characters, but what sense can be made of this striving? If determinism is true, then whatever does happen is the only thing that can happen." As van Inwagen (1975, pp. 49-50) says, "To deny that men have free will is to assert that what a man does do and what he can do coincide." In a deterministic world what sense could we make of the exhortation to do the best we can? It does seem to us that sometimes people do less well than they are able to do. How can we make sense of this? If determinism is true, and if this means that the only thing one can do is what one does in fact do, then without even trying, everyone will always be doing his very best—and also his very worst. Unless there is some room between the actual and the possible, some elbow room in which to maneuver, we can make no sense of exhortation. Not only that: retrospective judgment and assessment are also apparently rendered pointless. Not only will it be true that everyone always does his best, but every thing will be as good as it can be. And as bad. Dr. Pan-gloss, the famous optimist, will be right: it is the best of all possible worlds. But his nemesis, Dr. Pang-loss the pessimist, will sigh and agree: it is the best of all possible worlds — and it couldn't be worse!* As the philosophers' saying goes, "ought" implies "can" — even in domains having nothing whatever to do with free will and moral responsibility. Even if we are right to abandon allegiance to the "could have done otherwise" principle as a prerequisite of responsible action, there is the residual problem (according to the incompatibilists) that under determinism, we can never do anything but what we in fact do.
As Slote observes, "this itself seems a sufficient challenge to deeply entrenched and cherished beliefs to make it worthwhile to see whether the recent arguments can be attacked at some point before the conclusion that all actions are necessary." (Slote 1982, p. 9) But the challenge is even more unpalatable than Slote claims. If the incompatibilists were right about us, it would be because they were right about everything: under determinism nothing can do anything other than what it in fact does. The conclusion must be that in a deterministic world, since an atom of oxygen that never links up with any hydrogen atoms is determined never to link up with any hydrogen atoms, it is physically impossible for it to link up with any hydrogen atoms. In what sense, then, could it be true that it, like any oxygen atom, can link up with two hydrogen atoms? Ayers calls this threatened implication of determinism "actualism" — only the actual is possible. (Ayers 1968, p. 6) Something is surely wrong with actualism, but actualism is so wrong that it is highly unlikely that its falsehood can be parlayed into a reductio ad absurdum of determinism. The argument would be disconcertingly short: this oxygen atom has valence 2; therefore it can unite with two hydrogen atoms to form a molecule of water (it can right now, whether or not it does); therefore determinism is false. There are impressive arguments from physics that lead to the conclusion that determinism is false — but this isn't one of them. Hume speaks of "a certain looseness" we want to exist in our world. (Treatise, II, III, 2, Selby-Bigge ed., p. 408) This is the looseness that prevents the possible from shrinking tightly around the actual, the looseness presupposed by our use of the word "can."
We need this looseness for many things, so we need to know what "can" means, not just for our account of human freedom, and for the social sciences, but for our account of biology, engineering (see chapter three), and in fact any field that relies significantly on statistics and probability theory. What could the biologist mean, for instance, when speaking of some feature of some species as better than some other "possible" feature? If the generally adaptive trend of natural selection is to be coherently described — let alone explained — we must often distinguish a design selected as better (or as no better) than other "possible" designs that selection has spurned.* Biologists assure us that unicorns are not only not actual; they are impossible — as impossible as mermaids. (It has something to do with the violation of bilaterality required for a single, centered horn, I gather.) But the biologists also assure us that there are many possible species that haven't yet existed, and probably never will: short-legged, fat horses good only for eating, say, or blotchless giraffes. Only a small portion of the possible variations ever appear. In probability theory, we take it that a coin toss has two possible outcomes: heads or tails.
When witnessing the toss of a coin, X will normally envisage as possibly true the hypothesis that the coin will land heads up and that it will land tails up. He may also envisage other possibilities — e.g., its landing on its edge. However, if he takes for granted even the crudest folklore of modern physics, he will rule out as impossible the coin's moving upward to outer space in the direction of Alpha Centauri. (Levi 1980, p. 3)

Everywhere one looks, one finds reliance on claims about what things can be in what states, what outcomes are possible, and what is impossible but not logically impossible (self-contradictory). If this elusive sense of "can" has nothing particular to do with agency, it nevertheless makes its appearance vividly in that area. In "Ifs and Cans," Austin (1961) offers a famous series of criticisms of the attempt to define "could have done otherwise" as "would have done otherwise if. . ." for various different fillings of the blank. Austin's objections to this strategy have been ably criticized by several philosophers (see especially Chisholm, 1964a). But more important than those objections and criticisms, which have received a great deal of attention from philosophers, is Austin's abrupt, unargued, and all too influential dismissal (in one footnote and one aside) of the most promising approach to the residual, froggy problem. Austin notes in passing that "There is some plausibility, for example, in the suggestion that 'I can do X' means 'I shall succeed in doing X, if I try,' and 'I could have done X' means 'I should have succeeded in doing X, if I had tried.'" But a famous long footnote dismisses the suggestion:
Plausibility, but no more. Consider the case where I miss a very short putt and kick myself because I could have holed it. It is not that I should have holed it if I had tried: I did try, and missed. It is not that I should have holed it if conditions had been different: that might of course be so, but I am talking about conditions as they precisely were, and asserting that I could have holed it. There is the rub. Nor does 'I can hole it this time' mean that I shall hole it this time if I try or if anything else: for I may try and miss, and yet not be convinced that I could not have done it; indeed, further experiments may confirm my belief that I could have done it that time, although I did not. But if I tried my hardest, say, and missed, surely there must have been something that caused me to fail, that made me unable to succeed? So that I could not have holed it. Well, a modern belief in science, in there being an explanation of everything, may make us assent to this argument. But such a belief is not in line with the traditional beliefs enshrined in the word can: according to them, a human ability or power or capacity is inherently liable not to produce success, on occasion, and that for no reason (or are bad luck and bad form sometimes reasons?). (p. 166)

But then what should give way, according to Austin—"a modern belief in science" or the "traditional beliefs enshrined in the word can"? Austin does not say, and leaves the impasse unresolved. The impasse is an illusion; modern science needs the same "can" that traditional beliefs about human agency need. And what must give is Austin's insistence that he was "talking about conditions as they precisely were." As we have seen, there is never any purchase to be gained by talking about micro-precise conditions; when we talk about what someone—or something—can do we are always interested in something general. This point is made well by Honore (1964), in a seldom-cited critical commentary on Austin's paper.
Honore proposes that we distinguish between two senses of "can": "can" (particular) and "can" (general)—and notes that the particular sense is almost degenerate: it "is almost equivalent to 'will' and has predictive force." (p. 464) In the past tense, particular "can" is only appropriate for describing success: "Thus 'I could see you in the undergrowth' is properly said only when I have succeeded in seeing you."
Success or failure, on the assumption that an effort has been or will be made, is the factor which governs the use of the notion: if the agent tried and failed, he could not do the action; if he tried and succeeded, he was able to do it. (Honore 1964, p. 464)

The more useful notion is "can" (general), which in the case of an agent imputes skill or ability, and in the case of an inanimate thing imputes the sort of potentiality discussed in chapter five (for example, the different states that something can be in). But as we saw then, that sense of "can" is a manifestly epistemic notion; that is, it is generated by any self-controlling planner's need to partition the world into those things and their "states" that are all possible-for-all-it-knows. Philosophical tradition distinguishes several varieties of possibility.
(a) logical or "alethic" possibility: the complement of logical impossibility; something is logically possible if it is consistently describable; it is logically possible that there is a unicorn in the garden, but (if the biologists are right) it is not biologically or physically possible.
(b) physical or "nomic" possibility: something is physically possible if it does not violate the laws of physics or the laws of nature (nomos = law, in Greek). It is physically impossible to travel faster than the speed of light, even though one can describe such a feat without contradicting oneself.
(c) epistemic possibility: something is epistemically possible for Jones if it is consistent with everything Jones already knows. So epistemic possibility is generally viewed as subjective and relative, unlike logical and physical possibility, which are deemed entirely objective.*

It is customary in philosophical discussions of free will to distinguish epistemic possibility from its kin, and then dismiss it as of no further interest in that context.* Austin's dismissal is one of the briefest. After considering two other senses of "could have," he mentions a third sense, in which sense 'I could have done something different' means 'I might, for all anyone could know for certain beforehand, have done something different.' This third kind of 'could have' might, I think, be held to be a vulgarism, 'could' being used incorrectly for 'might': but in any case we shall not be concerned with it here. (Austin 1961, p. 207)

It is a shame that philosophers have not been concerned with it, for it is the key to the resolution of the riddle about "can." The useful notion of "can," the notion that is relied upon not only in personal planning and deliberation but also in science, is a concept of possibility—and with it, of course, interdefined concepts of impossibility and necessity—that are, contrary to first appearances, fundamentally "epistemic." As Slote points out in his pioneering article, "Selective Necessity and Free Will" (Slote 1982), the sorts of concepts of necessity and possibility relied upon in these contexts obey different modal principles from the concept of "classical" alethic necessity. In particular, such necessity is not "agglomerative," by which Slote means closed with respect to conjunction introduction.* Slote illustrates the concept with an example of an "accidental" meeting: Jules happens to meet his friend Jim at the bank; he thinks it is a happy accident, as indeed it is.
But Jules' being at the bank is not an accident, since he always goes there on Wednesday morning as part of his job; and Jim's being there is also no accident, since he has been sent by his superior. That Jules is at L at time t is no accident; that Jim is at L at time t is no accident. But that Jules is at L at time t and Jim is at L at time t — that is an accident. (Slote 1982, esp. pp. 15-17)

This is apparently accidentality or coincidentality from-a-limited-point-of-view. We imagine that if we knew much, much more than Jules and Jim together know, we would have been able to predict their convergence at the bank; to us, their meeting would have been "no accident." But this is nevertheless just the concept of accidentality we need to describe the "independence" of a thing's powers or abilities from the initial conditions or background conditions in which those powers are exercised. For instance, it is no accident that this particular insect has just the evasive flight pattern it does have (for it was designed by evolution to have that pattern). And it is no accident that the predatory bird that catches that insect has the genes it does (for it too was designed to have those genes). But it is an accident — happy for the bird and its progeny, unhappy for the insect — that a bird with just those genes caught just that evasive insect. And out of thousands of such happy accidents better birds — and better insects — come to be designed. Out of a conspiracy of accidents, by the millions, comes the space of "possibility" within which selection can occur. The eminent biologist Jacques Monod describes the importance for evolution of chance, or what he calls "absolute coincidence" (Monod 1972, p. 112ff.), and illustrates absolute coincidence with an example strikingly like Slote's:
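Slote's non-agglomerativity can be made concrete in a toy model (the names and the "vantage point" structure below are mine, purely for illustration): relative to a limited body of information, each conjunct can be "no accident" while the conjunction is predictable from no such limited body.

```python
# Toy model of Slote's point: "no accident" is relative to a limited
# body of information, and is not closed under conjunction.
jules_knows = frozenset({"Jules goes to the bank every Wednesday morning"})
jim_knows = frozenset({"Jim's superior sent him to the bank"})

# What each limited vantage point suffices to predict:
predicts = {
    jules_knows: {"Jules at bank"},
    jim_knows: {"Jim at bank"},
}

def no_accident(event, vantage):
    """True if the vantage point's information predicts every part of the event."""
    return event <= predicts.get(vantage, set())

# Each conjunct, taken alone, is no accident from some vantage point:
assert no_accident({"Jules at bank"}, jules_knows)
assert no_accident({"Jim at bank"}, jim_knows)

# But the conjunction (the meeting) is an accident from both:
meeting = {"Jules at bank", "Jim at bank"}
assert not no_accident(meeting, jules_knows)
assert not no_accident(meeting, jim_knows)
```

Only a vantage point that pooled both bodies of information would render the meeting "no accident," which is just the from-a-limited-point-of-view relativity at issue.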
Suppose that Dr. Brown sets out on an emergency call to a new patient. In the meantime Jones the contractor's man has started making emergency repairs on the roof of a nearby building. As Dr. Brown walks past the building, Jones inadvertently lets go of his hammer, whose (deterministic) trajectory happens to intercept that of the physician, who dies of a fractured skull. We say he was a victim of chance. (p. 114)

But when Monod comes to define the conditions under which such coincidences can occur, he apparently falls into the actualist trap. Accidents must happen if evolution is to take place, Monod says, and accidents can happen—"Unless of course we go back to Laplace's world, from which chance is excluded by definition and where Dr. Brown has been fated to die under Jones' hammer ever since the beginning of time." (p. 115) If "Laplace's world" means just a deterministic world, then Monod is wrong. Natural selection does not need "absolute" coincidence. It does not need "essential" randomness or perfect independence; it needs practical independence—of the sort exhibited by Brown and Jones, and Jules and Jim, each on his own trajectory but "just happening" to intersect, like the cards being deterministically shuffled in a deck and just happening to fall into sequence. Would evolution occur in a deterministic world, a Laplacean world where mutation was caused by a nonrandom process? Yes, for what evolution requires is an unpatterned generator of raw material, not an uncaused generator of raw material. Quantum-level effects may indeed play a role in the generation of mutations, but such a role is not required by theory.* It is not clear that "genuine" or "objective" randomness of either the quantum-mechanical sort or of the mathematical, informationally incompressible sort is ever required by a process, or detectable by a process. (Chaitin (1976) presents a Gödelian proof that there is no decision procedure for determining whether a series is mathematically random.)
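The claim that selection needs an unpatterned generator rather than an uncaused one can be illustrated with a minimal sketch (a toy hill-climbing search, not a model of any real genome): the only source of variation below is a seeded, hence fully deterministic, pseudo-random generator, and adaptation proceeds anyway.

```python
# A minimal sketch: variation comes from a *seeded*, Laplacean-
# predictable pseudo-random generator, yet selection still yields
# adaptation. What matters is that the variation is unpatterned
# relative to the selection task, not that it is uncaused.
import random

rng = random.Random(42)           # deterministic "mutation generator"
TARGET = [1] * 20                 # fitness = number of bits matching this

def fitness(genome):
    return sum(g == t for g, t in zip(genome, TARGET))

genome = [rng.randint(0, 1) for _ in range(20)]
for _ in range(5000):             # generations
    mutant = list(genome)
    mutant[rng.randrange(20)] ^= 1              # one random bit-flip
    if fitness(mutant) >= fitness(genome):      # selection step
        genome = mutant
    if fitness(genome) == 20:
        break

assert fitness(genome) == 20      # full adaptation, no "absolute" chance
```

Swapping the seeded generator for a "genuinely" random source would change nothing observable about the outcome, which is the point of the passage.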
Even in mathematics, where the concept of objective randomness can find critical application within proofs, there are cases of what might be called practical indistinguishability. In number theory, the Fermat-Lagrange Theorem states that every natural number is the sum of four perfect squares:

n = x^2 + y^2 + z^2 + w^2

The theorem is easy enough to prove, I gather, but finding the values for x, y, z, and w for a given n is a tedious business. There is a straightforward, "brute force" algorithm that will always find the values by simple exhaustive trial and error, but it has the alarming property of requiring, on the average, 2^n steps to terminate. Thus, for a natural number as small as, say, 203, the algorithm could not be expected to find the answer before the heat death of the universe. It is not what the jargon calls a feasible algorithm, even though in principle (as a philosopher would note) it always yields the correct answer. But all is not lost. Rabin and others have developed so-called random algorithms, which rely in extremely counterintuitive ways on randomization. One such algorithm has been discovered by Rabin for finding values for the Fermat-Lagrange theorem. It is not logically guaranteed to find the right answer any faster than the brute force algorithm, but its expected termination time (with the right answer) is only (log n)^3 steps, a manageably small number even for large values of n. The probability of a much longer or much shorter termination time drops off so steeply as to be entirely negligible. The formal proof that this is its expected termination time makes essential mention of the invocation of random sequences in the algorithm. Question: in the actual world of hardware computers, does it make any difference whether the computer uses a genuinely random sequence or a pseudo-random sequence?
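The contrast can be sketched in a few lines (an illustration only: the randomized version below is naive guess-and-check, not Rabin's actual algorithm, and the function names are mine):

```python
# Toy contrast between exhaustive and randomized search for a
# four-square decomposition n = x^2 + y^2 + z^2 + w^2.
import math
import random

def four_squares_exhaustive(n):
    """Brute force: try every (x, y, z) and check whether the
    remainder n - x^2 - y^2 - z^2 is a perfect square."""
    r = math.isqrt(n)
    for x in range(r + 1):
        for y in range(r + 1):
            for z in range(r + 1):
                rest = n - x*x - y*y - z*z
                if rest >= 0 and math.isqrt(rest) ** 2 == rest:
                    return x, y, z, math.isqrt(rest)

def four_squares_randomized(n, rng):
    """Guess (x, y, z) at random until the remainder is a perfect
    square. Whether rng is 'genuinely' random or a seeded
    pseudo-random generator makes no observable difference here."""
    r = math.isqrt(n)
    while True:
        x = rng.randint(0, r)
        y = rng.randint(0, r)
        z = rng.randint(0, r)
        rest = n - x*x - y*y - z*z
        if rest >= 0 and math.isqrt(rest) ** 2 == rest:
            return x, y, z, math.isqrt(rest)

for n in (203, 1999):
    x, y, z, w = four_squares_randomized(n, random.Random(1))
    assert x*x + y*y + z*z + w*w == n
```

For numbers this small both searches finish instantly; the issue in the text is scaling, where exhaustive search becomes infeasible while Rabin's expected time stays polylogarithmic, and whether the rng handed to the randomized search needs to be "genuinely" random is exactly the question at issue.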
That is, if one wrote Rabin's program to run on a computer that didn't have a radium randomizer but relied instead on a pseudo-random number generating algorithm, would this cheap shortcut work? Or would attempts to find the values for a particular n run longer than the expected number of steps in virtue of the hidden, humanly undetectable nonrandomness of the sequence? Would the number system, in its hauteur, punish the mathematician for trying to plumb its secrets with mere pseudo-random exploration? As it turns out, experience to date has been that one can indeed get away with pseudo-random sequences. In the actual runs that have been attempted, it has made no difference.*

But surely mere practical indistinguishability, even in the limit, is not the Real Thing—real, objective possibility. That is the intuition we must now examine. It is at the heart of the brusque rejection, by philosophers, of epistemic possibility as a building stone in the foundation of free will. So-called "classical" or Newtonian physics is deterministic, but as several physicists have recently noted, many of the most mundane macroscopic phenomena in a Newtonian world would be, by Newtonian principles, unpredictable by any being that fell short of being an infinite Laplacean demon, for they would require infinite precision of initial observation. That is, errors in observation, however minuscule, would propagate and grow exponentially (Berry 1983 and Ford 1983). In Newtonian physics, there are stable systems (precious few of them) and unstable or chaotic systems. "For nonchaotic systems, error propagates less rapidly and . . . even a coarse-grained past suffices to determine precisely a coarse-grained future." Eclipses, for instance, may be predicted centuries in advance. But "a chaotic orbit is random and incalculable; its information content is both infinite and incompressible." (Ford 1983, p. 7) The trajectory of a pinball (the example is Berry's) after bumping, say, twenty posts (in a few seconds) is unpredictable in the limit, far outstripping the limits of accuracy of any imaginable observation devices.

Now this result is surely "just epistemic." What could it have to do with free will? Just this, I think: such chaotic systems are the source of the "practical" (but one might say infinitely practical) independence of things that shuffles the world and makes it a place of continual opportunity. The opportunities provided are not just our opportunities, but also those of Mother Nature—and of oxygen atoms, which can join forces on occasion with hydrogen atoms. It is not any parochial fact about our epistemic limitations that distinguishes the world into stable, predictable systems and unstable, chaotic systems; it is a fact about the world itself—because it is a fact about the world's predictability by any predicting system at all, however powerful. There is no higher perspective (unless we count the perspective of an infinite being) from which the "accidental" collisions of locally predictable trajectories are themselves predictable and hence "no accident" after all. It is this contrast between the stable and the chaotic that grounds our division of the world into the enduring and salient features of the world, and those features that we must treat statistically or probabilistically (in effect, either averaging over them and turning them into a blur, or treating them as equi-possible members of some ensemble of alternatives). And this division of the world is not just our division; it is, for instance, Mother Nature's division as well. Since for all Mother Nature knows (or could know) it is possible that these insects will cross paths (sometime, somewhere) with these insectivorous birds, they had better be designed with some avoidance machinery. This endows them with a certain power (a bit of "can do," as slang has it) that will serve well (in general).
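The exponential error growth that makes chaotic orbits "incalculable" can be seen in miniature with the logistic map, a standard textbook chaotic system (this sketch is mine, not Berry's or Ford's):

```python
# Two initial conditions that no realistic measurement could tell apart
# diverge until the trajectories are, for prediction purposes,
# unrelated. (Logistic map at r = 4, a standard chaotic regime.)
def logistic(x, steps, r=4.0):
    """Iterate the map x -> r*x*(1-x) for the given number of steps."""
    for _ in range(steps):
        x = r * x * (1.0 - x)
    return x

a, b = 0.400000, 0.400001   # differ by one part in a million

# Early on, the error is still tiny...
assert abs(logistic(a, 5) - logistic(b, 5)) < 0.01

# ...but it grows roughly exponentially, so a few dozen steps later two
# coarse-grained-identical pasts determine wildly different futures.
diffs = [abs(logistic(a, s) - logistic(b, s)) for s in range(40, 60)]
assert max(diffs) > 0.1
```

Nothing indeterministic is going on here; the map is as Laplacean as an eclipse. The unpredictability is a fact about how fast any finite observer's information decays, which is just the point about chaotic systems made above.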
(These all too sketchy remarks about "can" are at best a pointing gesture toward the final, finished surface of this part of my sculpted portrayal of the free agent. This is another area where much more work needs to be done, and some of the work, certainly, is quite beyond me. But if I am even approximately right in this first, rough pass over the region, the work still to be done will at least move the investigation off of stale, overworked surfaces into new spaces.)