Predictability is an important characteristic of law-governed phenomena. It is an essential part of the scientific method, sometimes called the hypothetico-deductive-experimental-observation method. In the first step, freely invented hypotheses are proposed. In the second, reason, logic, and mathematics are used to deduce quantitative implications of the hypotheses. These deductions suggest observations and experiments (step three), especially those that can provide quantitative measurements. The best hypothesis is the one whose predicted observations and experiments confirm (verify) rather than deny (falsify) the implied theory, and more specifically the one that leads to quantitative measurements in agreement with the predictions.

But what does agreement mean? Predictions are never in perfect mathematical agreement with observations and experimental measurements. When measurements are repeated (as they must be, preferably by independent observers), they scatter randomly around some average value in a "normal distribution." A prediction is confirmed when it lies within an acceptable error of the average measured value, usually reported as a number of standard deviations.

Predictability is related to the idea of reproducibility. For an experiment to be accepted as scientific evidence, it must be reproducible and repeatable. But since experimental results are never exact, a reproducible result is one that gets the same result within the "error bars." Experimental science can thus offer no "proofs" of knowledge - just best predictions, best explanations, and best theories. Like all knowledge, scientific knowledge is merely probable or statistical, though it is at the same time the most reliable knowledge that we have. Probabilities are a priori, produced by theories.
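The idea of "agreement within error bars" can be made concrete in a short sketch. The measurement values and the two-sigma threshold below are illustrative assumptions, not data from the text:

```python
# A minimal sketch of confirmation within error bars: repeated measurements
# scatter around an average value, and a prediction counts as confirmed when
# it lies within an acceptable number of standard deviations of that average.
from statistics import mean, stdev

def confirms(prediction, measurements, n_sigma=2.0):
    """Return True if the prediction lies within n_sigma sample
    standard deviations of the average measured value."""
    avg = mean(measurements)
    sigma = stdev(measurements)  # sample standard deviation
    return abs(prediction - avg) <= n_sigma * sigma

# Hypothetical repeated measurements of the same quantity
measurements = [9.79, 9.82, 9.80, 9.81, 9.78]
print(confirms(9.81, measurements))   # within two sigma of the average
print(confirms(10.50, measurements))  # far outside the error bars
```

The choice of two standard deviations is conventional; a stricter test simply raises `n_sigma`.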
Statistics are the a posteriori results of experiments. The reason physical science is so well respected is that the precision and accuracy of its measurements are so high. Quantum physics, the most accurate physical theory, produces measurements accurate to more than fifteen significant figures, that is to say with standard deviations of +/- .0000000000000001. Quantum mechanical predictions are unusual in that they contain a minimal indeterminism, consistent with the Heisenberg uncertainty (or indeterminacy) principle. Some definitions of science maintain that it must be based on perfectly repeatable experiments, but at the level of quantum particles there is no such thing as a perfectly repeatable experiment.

Classical deterministic laws of nature have traditionally been thought to be infinitely accurate, that is, accurate to an infinite number of decimal places. In fact, they are only as accurate as their experimental evidence. Philosophers (including many philosophers of science) have sometimes misread physical determinism as providing an infinitely accurate predictability. This is impossible. Ludwig Boltzmann, his admirer and contemporary Franz Exner, and Exner's student Erwin Schrödinger often pointed out that deterministic theories go beyond the available evidence.

Popularization of physical theories has often confused not just the public, but even philosophers of science. On the three hundredth anniversary of Newton's Principia, Sir James Lighthill gave a lecture to the Royal Society, lamenting the confusion between Newton's classical mechanical determinism and the apparent claim of perfect predictability:
"We are all deeply conscious today that the enthusiasm of our forebears for the marvellous achievements of Newtonian mechanics led them to make generalizations in this area of predictability which, indeed, we may have generally tended to believe before 1960, but which we now recognize were false. We collectively wish to apologize for having misled the general educated public by spreading ideas about determinism of systems satisfying Newton's laws of motion that, after 1960, were to be proved incorrect..."

The Marquis de Laplace imagined an intelligent being (Laplace's Demon) who knows the positions and velocities of the constituent atoms and uses Newton's equations of motion to predict the future (and retrodict the past) of the entire universe.
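What was "proved incorrect" after 1960 was the leap from determinism to predictability: in a chaotic system, two initial conditions differing imperceptibly soon diverge completely. As an illustration (not from the text), the logistic map is a standard toy example of a perfectly deterministic rule with this behavior:

```python
# A sketch of sensitive dependence on initial conditions. The rule
# x -> 4x(1-x) is fully deterministic, yet two starting values that
# differ by one part in a billion soon produce unrelated trajectories.

def diverge_step(x0, y0, tol=0.01, max_steps=100):
    """Iterate both trajectories under the logistic map; return the
    first step at which they differ by more than tol (None if never)."""
    x, y = x0, y0
    for step in range(1, max_steps + 1):
        x = 4.0 * x * (1.0 - x)
        y = 4.0 * y * (1.0 - y)
        if abs(x - y) > tol:
            return step
    return None

# Initial conditions differing by 1e-9: the difference grows at every
# iteration until the two futures bear no resemblance to each other.
print(diverge_step(0.400000000, 0.400000001))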
Predictability, Determinism, and Free Will?Although many thinkers confuse predictability with determinism, neither Newton nor Laplace likely did so. Newton was painfully aware of the errors made in astronomical observations. Laplace invented his super intelligent being to contrast its infinite mind (like that of God) to the finite minds that must remain infinitely distant from such knowledge. He knew that the information for even one single particle is infinite mathematically. Philosophers who appreciate that determinism does not imply predictability have used the unpredictability of the future to argue for a kind of free will called epistemic freedom. It is enough freedom that we do not know what the future will be. "God only knows," say those religious thinkers who are not bothered by the foreknowledge of a God or gods. Although it is true that we cannot know what an open future will bring, to accept human ignorance as a basis for human freedom seems perverse. When physical laws are expressible as mathematical functions of time, knowledge of the initial conditions at some time allows us to predict the conditions at all later (and retrospectively earlier) times. Long before Laplace, Gottfried Leibniz imagined a scientist who could see the events of all times, just as all times are thought to be present to the mind of God.