Frank Ramsey

Frank Ramsey was the brilliant and precocious son of a Cambridge don. In his short but productive life, he made significant corrections to Russell and Whitehead's Principia Mathematica, and he was a principal translator of Wittgenstein's Tractatus Logico-Philosophicus.

Ramsey was a pragmatic thinker who frequently made references to Charles Sanders Peirce.

In his work with John Maynard Keynes, Ramsey made important contributions to economics and (as did Keynes) to probability theory. Ramsey distinguished personal probabilities from the objective probabilities in physics and logic. He argued that personal probabilities can be determined by observing an individual's actions that reflect the individual's beliefs.

Ramsey argued that the quantitative degree of probability that an individual attaches to a particular outcome can be measured by finding what odds the individual would accept when betting on that outcome. 1 This was what William James called "bettability."
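Ramsey's betting measure can be sketched as a small calculation. This is a minimal illustration, not Ramsey's own formulation: the function name and numbers are assumptions. A person just willing to stake s to receive a total payout t if the outcome occurs reveals a degree of belief of s/t in that outcome.

```python
def degree_of_belief(stake, payout):
    """Operational probability in Ramsey's spirit: if someone is just
    willing to risk `stake` to receive `payout` (stake included) when
    the outcome occurs, their degree of belief is stake / payout."""
    if payout <= 0 or stake < 0 or stake > payout:
        raise ValueError("need 0 <= stake <= payout and payout > 0")
    return stake / payout

# A bettor indifferent about risking $25 to receive $100 if it rains
# reveals a personal probability of rain of 0.25.
print(degree_of_belief(25, 100))  # 0.25
```

The same ratio read in the other direction turns a stated degree of belief back into the odds the person should accept, which is why observed betting behavior can stand in for unobservable belief.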

This is similar to establishing the presence of personal belief (knowledge?), that is, actionable information, in a mind by observing the person's actions, which is the fundamental premise of behaviorism.

In his brief remarks on knowledge, Ramsey suggested the epistemological methods of reliabilism and causality as views justifying knowledge. 2

Knowledge as a Reliable Causal Process
I have always said that a belief was knowledge if it was (i) true, (ii) certain, (iii) obtained by a reliable process. But the word 'process' is very unsatisfactory; we can call inference a process, but even then unreliable seems to refer only to a fallacious method not to a false premiss as it is supposed to do. Can we say that a memory is obtained by a reliable process? I think perhaps we can if we mean the causal process connecting what happens with my remembering it. We might then say, a belief obtained by a reliable process must be caused by what are not beliefs in a way or with accompaniments that can be more or less relied on to give true beliefs, and if in this train of causation occur other intermediary beliefs these must all be true ones.

E.g. 'Is telepathy knowledge?' may mean: (a) Taking it there is such a process, can it be relied on to create true beliefs in the telepathee (within some limits, e.g. when what is believed is about the telepathee's thoughts)? or (b) Supposing we are agnostic, does the feeling of being telepathed to guarantee truth? Ditto for female intuition, impressions of character, etc. Perhaps we should say not (iii) obtained by a reliable process but (iii) formed in a reliable way.

We say 'I know', however, whenever we are certain, without reflecting on reliability. But if we did reflect then we should remain certain if, and only if, we thought our way reliable. (Supposing us to know it; if not, taking it merely as described it would be the same, e.g. God put it into my mind: a supposedly reliable process.) For to think the way reliable is simply to formulate in a variable hypothetical the habit of following the way.

One more thing. Russell says in his Problems of Philosophy that there is no doubt that we are sometimes mistaken, so that all our knowledge is infected with some degree of doubt. Moore used to deny this, saying of course it was self-contradictory, which is mere pedantry and ignoration of the kind of knowledge meant.

But substantially the point is this: we cannot without self-contradiction say p and q and r and . . . and one of p, q, r . . . is false. (N.B.— We know what we know, otherwise there would not be a contradiction). But we can be nearly certain that one is false and yet nearly certain of each; but p, q, r are then infected with doubt. But Moore is right in saying that not necessarily all are so infected; but if we exempt some, we shall probably become fairly clear that one of the exempted is probably wrong, and so on.
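Ramsey's point that we can be nearly certain of each proposition yet nearly certain that at least one is false can be made numerically. A minimal sketch, assuming (purely for illustration) independent propositions each held to degree 0.99:

```python
# With n independent claims each believed to degree 0.99, confidence
# that every one of them is true falls quickly with n, even though
# each claim taken alone remains almost certain.
n = 100
p_each = 0.99
p_all_true = p_each ** n              # about 0.366
p_at_least_one_false = 1 - p_all_true  # about 0.634

print(round(p_all_true, 3), round(p_at_least_one_false, 3))
```

So the conjunction p and q and r and so on is "infected with doubt" even when no single conjunct is, which is exactly the asymmetry Ramsey defends against Moore.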

Cause and Effect
We have now to explain the peculiar importance and objectivity ascribed to causal laws; how, for instance, the deduction of effect from cause is conceived as so radically different from that of cause from effect. (No one would say that the cause existed because of the effect.) It is, it seems, a fundamental fact that the future is due to the present, or, more mildly, is affected by the present, but the past is not. What does this mean? It is not clear and, if we try to make it clear, it turns into nonsense or a definition: 'We speak of ratio essendi when the protasis is earlier than the apodosis.' We feel that this is wrong; we think there is some difference between before and after at which we are getting; but what can it be? There are differences between the laws deriving effect from cause and those deriving cause from effect; but can they really be what we mean? No; for they are found a posteriori, but what we mean is a priori.
[The Second Law of Thermodynamics is a posteriori; what is peculiar is that it seems to result merely from absence of law (i.e. chance), but there might be a law of shuffling.]

What then do we believe about the future that we do not believe about the past; the past, we think, is settled; if this means more than that it is past, it might mean that it is settled for us, that nothing now could change our opinion for us of any past event. But that is plainly untrue. What is true is this, that any possible present volition of ours is (for us) irrelevant to any past event. To another (or to ourselves in the future) it can serve as a sign of the past, but to us now what we do affects only the probability of the future.

This seems to me the root of the matter; that I cannot affect the past, is a way of saying something quite clearly true about my degrees of belief. Again from the situation when we are deliberating seems to me to arise the general difference of cause and effect. We are then engaged not on disinterested knowledge or classification (to which this difference is utterly foreign), but on tracing the different consequences of our possible actions, which we naturally do in sequence forward in time, proceeding from cause to effect not from effect to cause. We can produce A or A' which produces B or B' which etc. . . . ; the probabilities of A, B are mutually dependent, but we come to A first from our present volition.

Other people we say can affect only the future and not the past for two reasons; first, by analogy with ourselves we know they can affect the future and not the past from their own point of view; and secondly, if we subsume their action under the general category of cause and effect, it can only be a cause of what is later than it. This means ultimately that by affecting it we can only affect indirectly (in our calculation) events later than it. In a sense my present action is an ultimate and the only ultimate contingency.

[Ramsey added important notes to these observations.]

Reasonable Degree of Belief
When we pass beyond reasonable = my, or = scientific, to define it precisely is quite impossible. Following Peirce we predicate it of a habit, not of an individual judgment. Roughly, reasonable degree of belief = proportion of cases in which habit leads to truth. But in trying to be more exact we encounter the following difficulties:

(1) We cannot always take the actual habit: this may be correctly derived from some previous accidentally misleading experience. We then look to wider habit of forming such a habit.

(2) We cannot take proportion of actual cases; e.g. in a card game very rarely played, so that of the particular combination in question there are very few actual instances.

(3) We sometimes really assume a theory of the world with laws and chances, and mean not the proportion of actual cases but what is chance on our theory.

(4) But it might be argued that this complication was not necessary on account of (1) by which we only consider very general habits of which there are so many instances that, if chance on our theory differed from the actual proportion, our theory would have to be wrong.

(5) Also in an ultimate case like induction, there could be no chance for it: it is not the sort of thing that has a chance.
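Ramsey's rough equation, reasonable degree of belief = proportion of cases in which the habit leads to truth, can be sketched as a frequency count. The function and sample data below are illustrative assumptions, and difficulties (1) to (5) above show why such a count is only a first approximation:

```python
def habit_reliability(outcomes):
    """Ramsey's rough definition: the reasonable degree of belief
    produced by a habit of inference = the proportion of observed
    cases in which following the habit led to a true belief."""
    if not outcomes:
        raise ValueError("no cases observed")
    return sum(outcomes) / len(outcomes)

# A habit of inference applied in 8 cases, leading to truth in 6:
cases = [True, True, False, True, True, True, False, True]
print(habit_reliability(cases))  # 0.75
```

Difficulty (2) is visible immediately: with few actual cases the proportion is a poor estimate, which is why Ramsey moves from the actual habit to the "wider habit of forming such a habit" and, in (3), to chances on a theory.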

Fortunately there is no point in fixing on a precise sense of 'reasonable'; this could only be required for one of two reasons: either because the reasonable was the subject matter of a science (which is not the case); or because it helped us to be reasonable to know what reasonableness is (which it does not, though some false notions might hinder us). To make clear that it is not needed for either of these purposes we must consider (1) the content of logic and (2) the utility of logic.

THE CONTENT OF LOGIC
(1) Preliminary philosophico-psychological investigation into nature of thought, truth and reasonableness.

(2) Formulae for formal inference = mathematics.

(3) Hints for avoiding confusion (belongs to medical psychology).

(4) Outline of most general propositions known or used as habits of inference from an abstract point of view; either crudely inductive, as 'Mathematical method has solved all these other problems, therefore...' or else systematic, when it is called metaphysics. All this might anyhow be called metaphysics; but it is regarded as logic when adduced as bearing on an unsolved problem, not simply as information interesting for its own sake.

The only one of these which is a distinct science is evidently (2).

THE UTILITY OF LOGIC
That of (1) above and of (3) are evident: the interesting ones are (2) and (4). (2) = mathematics is indispensable for manipulating and systematizing our knowledge. Besides this (2) and (4) help us in some way in coming to conclusions in judgment.

LOGIC AS SELF-CONTROL (Cf. Peirce)
Self-control in general means either

(1) not acting on the temporarily uppermost desire, but stopping to think it out; i.e. pay regard to all desires and see which is really stronger; its value is to eliminate inconsistency in action;

or (2) forming as a result of a decision habits of acting not in response to temporary desire or stimulus but in a definite way adjusted to permanent desire.

The difference is that in (1) we stop to think it out but in (2) we've thought it out before and only stop to do what we had previously decided to do.

So also logic enables us

(1) Not to form a judgment on the evidence immediately before us, but to stop and think of all else that we know in any way relevant. It enables us not to be inconsistent, and also to pay regard to very general facts, e.g. all crows I've seen are black, so this one will be — No; colour is in such and such other species a variable quality. Also e.g. not merely to argue from φa . φb . . . to (x).φx probable, but to consider the bearing of a, b . . . are the class I've seen (and visible ones are specially likely or unlikely to be φ). This is the difference between biassed and random selection. (Vide infra 'Chance'.)

(2) To form certain fixed habits of procedure or interpretation only revised at intervals when we think things out. In this it is the same as any general judgment; we should only regard the process as 'logic' when it is very general, not e.g. to expect a woman to be unfaithful, but e.g. to disregard correlation coefficients with a probable error greater than themselves.

With regard to forming a judgment or a partial judgment (which is a decision to have a belief of such a degree, i.e. to act in a certain way) we must note:

(a) What we ask is p? not 'Would it be true to think p?' nor 'Would it be reasonable to think p?' (But these might be useful first steps.)

but (b) 'Would it be true to think p? ' can never be settled without settling p to which it is equivalent.

(c) 'Would it be reasonable to think p?' means simply 'Is p what usually happens in such a case?' and is as vague as 'usually'. To put this question may help us, but it will often seem no easier to answer than p itself.

(d) Nor can the precise sense in which 'reasonable' or 'usually' can usefully be taken be laid down, nor weight assigned on any principle to different considerations of such a sort. E.g. the death-rate for men of 60 is 1/10, but all the 20 red-haired 60-year-old men I've known have lived till 70. What should I expect of a new red-haired man of 60? I can but put the evidence before me, and let it act on my mind. There is a conflict of two 'usually's' which must work itself out in my mind; one is not the really reasonable, the other the really unreasonable.

(e) When, however, the evidence is very complicated, statistics are introduced to simplify it. They are to be chosen in such a way as to influence me as nearly as possible in the same way as would the whole facts they represent if I could apprehend them clearly. But this cannot altogether be reduced to a formula; the rest of my knowledge may affect the matter, thus p may be equivalent in influence to q, but not ph to qh.

(f) There are exceptional cases in which 'It would be reasonable to think p' absolutely settles the matter. Thus if we are told that one of these people's names begins with A and that there are 8 of them, it is reasonable to believe to degree 1/8 that any particular one's name begins with A, and this is what we should all do (unless we felt there was something else relevant).

(g) Nevertheless, to introduce the idea of 'reasonable' is really a mistake; it is better to say 'usually', which makes clear the vagueness of the range: what is reasonable depends on what is taken as relevant; if we take enough as relevant, whether it is reasonable to think p becomes at least as difficult a question as p. If we take everything as relevant, they are the same.

(h) What ought we to take as relevant? Those sorts of things which it is useful to take as relevant; if we could rely on being reasonable in regard to what we do take as relevant, this would mean everything. Otherwise it is impossible to say; but the question is one asked by a spectator not by the thinker himself: if the thinker feels a thing relevant he can't dismiss it; and if he feels it irrelevant he can't use it.

(i) Only then, if we in fact feel very little to be relevant, do or can we answer the question by an appeal to what is reasonable, this being then equivalent to what we know and consider relevant.

(j) What are or are not taken as relevant are not only propositions but formal facts, e.g. a = a: we may react differently to φa than to any other φx not because of anything we know about a but e.g. for emotional reasons.

'Theories', in The Foundations of Mathematics (p. 235 in the 1960 edition).
Ramsey on Identity
Ramsey criticized the section on identity in Principia Mathematica
The third serious defect in Principia Mathematica is the treatment of identity. It should be explained that what is meant is numerical identity, identity in the sense of counting as one, not as two. Of this the following definition is given:

' x = y . = : (φ) : φ ! x . ⊃ . φ ! y : Df. ' [Cf., Principia Mathematica, 13.01]

That is, two things are identical if they have all their elementary properties in common...

The real objection to this definition of identity is the same as that urged above against defining classes as definable classes: that it is a misinterpretation in that it does not define the meaning with which the symbol for identity is actually used.

For distinct objects to be identical in Ramsey's sense, we would have to ignore relational and positional properties.
This can be easily seen in the following way: the definition makes it self-contradictory for two things to have all their elementary properties in common. Yet this is really perfectly possible, even if, in fact, it never happens. Take two things, a and b. Then there is nothing self-contradictory in a having any self-consistent set of elementary properties, nor in b having this set, nor therefore, obviously, in both a and b having them, nor therefore in a and b having all their elementary properties in common. Hence, since this is logically possible, it is essential to have a symbolism which allows us to consider this possibility and does not exclude it by definition.

Is an object's name a property? It is certainly not an essential or even a peculiar quality, in Aristotle's sense.
Leibniz's Law about the indiscernibility of identicals is not enough. Some properties that differ might not be discernible.
It is futile to raise the objection that it is not possible to distinguish two things which have all their properties in common, since to give them different names would imply that they had the different properties of having those names. For although this is perfectly true—that is to say, I cannot, for the reason given, know of any two particular indistinguishable things—yet I can perfectly well consider the possibility, or even know that there are two indistinguishable things without knowing which they are.
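Ramsey's possibility of two indistinguishable things can be illustrated in code. In this sketch (my example, not Ramsey's), two objects share all their listed properties yet remain numerically distinct, so sharing of properties cannot serve as a definition of identity:

```python
# Two numerically distinct objects can share all their listed
# (monadic) properties. Python's `==` on these dicts compares the
# properties; `is` tests numerical identity. Ramsey's point is that
# the first relation cannot define the second.
a = {"color": "red", "mass_kg": 1.0}
b = {"color": "red", "mass_kg": 1.0}

print(a == b)   # True: all listed properties in common
print(a is b)   # False: still two things, not one
```

As the quoted passage says, we need a symbolism that leaves this possibility open rather than excluding it by definition, which is just what the distinction between `==` and `is` does here.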

Notes

1. Wikipedia article on Ramsey, retrieved March 24, 2009

2. Pointed out by David Armstrong, Belief, Truth and Knowledge, Cambridge University Press, 1973, p.159
