Clark Glymour worked in the 1970s on traditional issues in the philosophy of science, especially formal accounts of the confirmation of scientific theories. In the same period he worked on philosophically interesting global properties of models of general relativity. In the 1980s, in collaboration with John Earman, he worked on historical topics in late 19th and early 20th century psychiatry and physics, especially on the genesis and testing of the special and general theories of relativity. In the same period he became interested in the possibility of automated procedures for finding causal explanations in the social sciences.
In collaboration with his students Kevin Kelly, Richard Scheines and Peter Spirtes, he developed automated heuristic procedures for the respecification of linear latent variable models. In the 1990s Scheines, Spirtes and Glymour developed the causal interpretation of Bayes nets and outlined a program of research: to find feasible search algorithms, characterize indistinguishability, and generate algorithms for prediction from interventions on partially characterized causal structures. His current research applies previous work on causal Bayes nets and formal learning theory to a variety of topics.
Here he discusses different kinds of uses of probabilities in science, causality, Hume and Bayes, why thinking causality is a fiction isn’t even wrong, causal Bayes nets, the social sciences’ poor record of making inferences, free will, why Aristotle’s approach to philosophy bests Plato’s and why there’s not enough of that approach in contemporary philosophy, Laplacian demons, why in general scientists are right to criticise contemporary philosophy on the grounds that it doesn’t do anything, and the threats that Bayesians will avert. This’ll wake you up…
3:AM: What made you become a philosopher?
Clark Glymour: When I was sixteen, after reading the Origin of Species, I decided I wanted to know everything, or at least to know what could not be known. As a freshman at the University of Montana I sat in on a one-night-a-week adult course on the history of philosophy taught by the late Cynthia Schuster, who had been Hans Reichenbach’s doctoral student. My fate was decided. I had to hide my interest from my father, who expected me to become an attorney.
3:AM: First, looking at science generally, would you say the use of probabilities is one of the biggest changes in science over the last century or so? Could you sketch for us the landscape as it looks to you now, how it developed and how you’d characterise the explanatory virtues of probability?
CG: There are two kinds of uses of probability in science. One is that probability claims may be intrinsic to a theory, as in statistical mechanics or quantum theory; the other is that probability is used in the assessment of theories, as in most of applied statistics. In the first role, probability claims are built into whatever explanation a theory provides; in the second role, they have no such function. For technical reasons, the division is not quite so sharp as I have stated it. In many forms of data assessment, the theory itself must specify a probability distribution for the data. Those specifications are usually ancillary to the “substantive” claims of a theory; for example, in the social sciences they are typically about the probability distribution of unobserved “disturbance” or “noise” variables that are themselves usually of no substantive interest. This contrasts, for example, with certain classes of theories in psychology, and of course in quantum theory, where the relations among the variables of interest are specified to be probabilistic.
It is often forgotten but should be emphasized that some of the foundational theories in scientific history were not probabilistic in either of the ways I have just described. Newtonian dynamics and Darwin’s theory of evolution are but two examples among many of a-probabilistic theories and theory assessments. Probability entered theory assessment early in the 19th century, beginning, I think, with Legendre’s appendix (1808, I think) on estimating the orbits of comets by least squares, although I believe Gauss claimed credit, as he did for much else. In the 18th century probability had a role in speculative theories of human abilities, but its first intrinsic role in physical theories seems to have been in the kinetic theory of gases in the 19th century. By the 20th century, probability was increasingly (and now almost universally) required in data assessment.
3:AM: Does this mean that really causality is no longer scientific and that what science will look at instead is probabilities connecting distinct events and so forth? Do causality and probability come apart necessarily, or can they be unified?
CG: Phooey! Try to plan getting out of a room by computing the probability that you try to turn the doorknob conditional on the doorknob turning…versus…computing the probability that the knob will turn given that you try to turn the knob. The conditional probabilities are different. Causality makes the difference, and is why when planning to get out of a room, we use the second, and not the first, conditional probability. For planning actions and policy interventions, probability is useless without causality. Once upon a time yellowed fingers were highly correlated with lung cancer later in life. The surgeon general recommended against smoking; he did not recommend that people wear gloves to prevent yellowed fingers....
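The doorknob point can be made concrete with a toy two-variable causal model, Try → Turn. The numbers below are made up for illustration (nothing in the interview specifies them): they show both that the two conditional probabilities differ, and that intervening on a variable behaves differently from conditioning on it — forcing the knob to turn tells you nothing new about whether anyone tried.

```python
# Illustrative sketch with invented numbers: a two-variable causal
# model Try -> Turn, contrasting conditioning with intervening.
p_try = 0.2                  # prior probability of trying the knob
p_turn_given_try = 0.9       # knob turns if you try
p_turn_given_not_try = 0.01  # knob rarely turns on its own

# Observational conditioning: P(Try | Turn) via Bayes' rule.
p_turn = p_try * p_turn_given_try + (1 - p_try) * p_turn_given_not_try
p_try_given_turn = p_try * p_turn_given_try / p_turn

# The two conditional probabilities come apart:
print(p_turn_given_try)            # 0.9
print(round(p_try_given_turn, 3))  # 0.957

# Intervening is different again. Forcing the knob to turn, do(Turn),
# cuts the incoming arrow from Try, so the probability of Try reverts
# to its prior; forcing a try, do(Try), propagates forward as usual.
p_try_given_do_turn = p_try              # 0.2, not 0.957
p_turn_given_do_try = p_turn_given_try   # 0.9
print(p_try_given_do_turn)
print(p_turn_given_do_try)
```

This asymmetry — conditioning flows both ways along an arrow, intervention only forward — is exactly why planning to get out of the room uses the second conditional probability and not the first.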
[…]

3:AM: When asking the question about whether there can be mental causes, why did you ask ‘Why is a brain like the planet?’ and what’s the answer to both?
CG: Damned if I remember.
HT: The Browser