The final reading assignment for Monday is posted.
Note that the Achinstein reading questions have been eliminated.
I. Case Study: The Copernican Revolution
II. Confirmation and Induction: What justifies conclusions that go beyond the data? Does anything?
III. Theory Structure: What do theoretical terms mean?
IV. Explanation: What is an explanation and is explanation an indicator of truth?
Midterm paper assignment.
Final paper assignment.
Time: Tuesday and Thursday at 1:30-2:50 PM
Room: Scaife Hall 125
Instructor: Kevin T. Kelly
T.A.: Samantha Smee
The stunning success of modern science occasions a number of very basic questions.
What is an explanation? What is a cause? What is confirmation? What is probability? What justifies drawing conclusions beyond the data? Does science aim at truth? Explanation? Prediction? Do scientific theories describe reality or do they merely predict observations? Is there a scientific method? Is there a role for subjectivity? Is there such a thing as objective evidence or justification?
Reasonable (and famous) people have disagreed sharply on each of these questions. We will survey some standard answers and the arguments for them. This is not a closed subject, so it lends itself to discussion and exploration. In order to get you thinking, I will put issues on the table in a loose lecture format. Also, I will give some lectures on some of the logical and probabilistic background as the need arises.
The class carries a 200 number. As such, it must be offered at a level accessible to Sophomores. Advanced students can pursue their interests in the rather open midterm and final essay assignments. I will also be happy to discuss further issues and details in office hours.
Although the course material is not so difficult, it demands a certain intellectual maturity to maintain a clear grasp on the point at hand. At the end of the day, you will not be told what the "right" answer is. You will have to master several conflicting positions and not confuse them. You will have to understand and construct arguments instead of calculating the answers to exercises. All of this is typical of any subject, including science, at the advanced level, but most students encounter it first in undergraduate philosophy classes.
To provide an introduction to some issues and well-known literature in the philosophy of science.
To provide practice in argumentation.
To provide practice in structuring vague issues.
To provide practice in succinct, technical writing.
It is not an aim of the course to march through a fixed range of material. The articles we will read stand alone and the schedule gives us little time to explore each topic in depth. If there is support for the idea, we can extend a topic (e.g., confirmation).
This is a discussion course. I expect enthusiastic and well-informed class discussion.
Whenever you read a philosophical article, have a notebook at hand and follow this procedure:
I may call on you in class to state an author's argument from your notes.
Reading Exercises (33% of the grade)
Simple reading questions are published with each reading assignment on the web. These exercises give you some official credit for attendance and for your preparation for class discussion. They also provide excellent practice for the concise writing style I will expect in the required essays (see below). Finally, the exercises focus attention on the more important portions of the text.
Since they account both for attendance and for preparation for the discussion, we will adhere to a strict policy for submitting them. For full credit you must either:
Turning in an exercise in any other way results in an automatic 20% deduction. Since critical discussion is a major component of the class, the 20% reduction applies also in cases of illness or other legitimate excuses. The reading exercises are just a way of recording your presence and preparation for the discussion; they are not a substitute.
Having a friend submit your exercise to cut class is a form of cheating. Don't do that.
Answers must be typed or written legibly.
Keep the answers as short and crisp as possible. I expect no more than a couple of sentences per question. Try to say something that convinces me you read the text instead of guessing.
Length: 5 pages plus references.
Essay must be typed, double spaced, 12 pt. Times Roman, with proper references. See paper writing guide.
Option 1: Philosophical theories of scientific justification are often judged by their ability to make sense of the history of science. Analyze the Copernican Revolution (or some other scientific episode you know about) from the point of view of one or more of the approaches to confirmation and induction discussed in the course. Argue either that the philosophical theory does a good job of accounting for the case or that it fails to make sense of important aspects of the case.
Option 2: Bayesianism is a very flexible approach. Try to show that some of the other confirmation theories we have looked at can be understood to be applications of Bayesian principles.
Option 3: Should scientists favor simple theories?
Option 4: Write an expository essay on any other topic that pertains to more than one of the confirmation readings. Give me a written proposal with references first.
This introductory section of the course serves several purposes. First, we will be visiting a time when no distinction was made between philosophy and science. Philosophy always becomes more important to scientists during times of revolutionary change, when the certitudes of the past come up for re-evaluation. Second, the Copernican Revolution is a celebrated example of scientific change that will serve to illustrate many ideas in the philosophy of science. It is widely agreed that philosophical theorizing must be grounded in knowledge of scientific practice. A third reason for the choice is Kuhn's admirable writing style, which you may wish to emulate.
This is an important topic. We want to figure out what was better about Copernicus' system. The philosophical question will be: does any of that mean that the theory is true?
This is another important topic. We want to see how scientists received the theory after it became known. Try to figure out whether the reception says anything about the unstated methods of the scientists involved. Think about simplistic proposals one hears all the time about scientific method. Was one of the theories refuted by data? Was there a crucial experiment? Did one theory have more positive instances than the other? Did astronomers critically suspend judgment until one side was proved right?
This section of the course will be quite different from what came before. Instead of looking at how science actually works, we will consider a few ideas about how it should work. Whatever else one might say about them, the papers we will be looking at are very widely known in the philosophy of science.
This paper shaped a whole approach to the philosophy of science. Don't be put off by all the technical-sounding stuff. The motivation and principles he discusses along the way have been very influential in the philosophy of science, so it is worth looking at. Stay awake and look for presuppositions you might challenge. The reading questions will help you to focus on some relevant points. They will be more critical in character than before.
I took a course with Hempel in 1981 and drove him to the airport when he left.
Some reading notes:
Here is a more formidable idea. Carnap still thinks of confirmation as a logical relation between hypothesis and evidence. Carnap's idea is that a valid deductive argument completely supports its conclusion, whereas in science, the evidence incompletely supports the conclusion to some degree. This degree of incomplete support is called the logical probability of the hypothesis given the data. Like many earlier writers, including the physicists Bernoulli and Laplace and the economist John Maynard Keynes, Carnap thinks of degrees of support as probabilities. The guiding analogy is:
deductive logic is to complete support as logical probability is to incomplete support.
Perhaps the first recorded exponent of the idea was the ancient skeptic Carneades, who held that complete certainty is not justified but that degrees of certainty are. All of these proposals have the property that one can talk of the probability of h given e rather than somebody's personal degrees of belief in h given e. On such a theory, the degree of support of h given e is objective; subjective differences are a matter of different evidential histories and of taste in choosing a particular method of assigning degrees of support or "confirmation".
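To anticipate the formal part of the readings, Carnap's "degree of confirmation" can be sketched as follows. A measure function m assigns probabilities to sentences, and the degree of confirmation of hypothesis h by evidence e is defined (in outline, following Carnap) by:

```latex
% Carnap's degree of confirmation, given a measure function m on sentences:
\[
  c(h, e) \;=\; \frac{m(h \wedge e)}{m(e)}, \qquad m(e) > 0.
\]
% Deduction is the limiting case: if e logically entails h, then
% h \wedge e is equivalent to e, so c(h, e) = 1 (complete support);
% partial entailment yields values strictly between 0 and 1.
```

This makes the analogy concrete: complete support is the extreme value of a quantity that ordinarily measures incomplete support.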
Carnap was a leader in the positivist movement.
You shouldn't have much trouble with the first article. The article "On Inductive Logic" is needlessly difficult to understand. The concepts are actually quite simple, so I'll explain them myself. My hints should more or less save you the work of reading the article up to section 9. If you aren't used to mathematical definitions, don't panic. The terms introduced below mean nothing more or less than what the definitions say, so no background is required.
Nelson Goodman's short article is among the most famous in the philosophy of science. It represents a direct attack on the very idea of Carnap's "logical" account of induction.
After Goodman's article, philosophers of science lowered their ambitions from finding an objective logic of empirical support to merely identifying some "rational" constraints on changes in admittedly subjective degrees of belief.
The view that probabilities on hypotheses are just somebody or other's willingness to bet on the proposition is now called Bayesian methodology, after the Reverend Thomas Bayes. The idea that there is no justification for belief but, nonetheless, we are psychologically wired to become more confident in light of increasing evidence goes back at least to David Hume in the 18th century. The view was revived in this century by the philosopher/mathematician Frank Ramsey. J. M. Keynes, the famous economist who first proposed the logical interpretation of probability pursued by Carnap, was completely converted to the Bayesian or "personalist" position by Ramsey.
L. J. Savage was a major figure in laying the foundations of Bayesian statistics, the official view of the Carnegie Mellon statistics department.
I have included some notes on Bayesian methodology that complement Savage's more foundational discussion. Please study them. They contain ideas that may be useful for your midterm paper.
Harman defends "inference to the best explanation," which is to select the theory that best accounts for the data.
Clark Glymour is a professor in our philosophy department. So if you have any complaints about his theory, go tell him! This article was very influential. Unlike the preceding papers, Glymour's emphasizes the importance for confirmation of "unification" or "harmony" of the sort we saw in Copernicus' theory. For this reason, Glymour's theory has lots of applications in real science and may be of use to you in composing your midterm paper. Glymour claimed that his theory is not Bayesian. But a Bayesian, Roger Rosenkrantz, claimed that Glymour's ideas follow from Bayesianism. While you ponder this, go back and look at my notes on Bayesianism, under the heading of "unification".
The idea is simpler than Glymour makes it sound. His idea is just that the hypotheses that are confirmed in a theory are the ones that can be "cross-checked" by using the data and other hypotheses in the theory to compute values for their theoretical quantities in different ways that might possibly agree. So for example, the observation of two planets tests Kepler's third law relative to the assumptions of Copernican astronomy, but the observation of one planet does not, since Kepler's law relates the radii and velocities of different planets. If the data could not have turned out in such a way that the hypothesis is refuted given the rest of the theory (i.e., if the hypothesis is not "at risk" from the data, given the rest of the theory), then even if the theory is consistent with the data, the hypothesis is not confirmed by the data. Even though two theories make the same predictions with equal accuracy, the hypotheses in one theory may be cross-checked against one another better than the hypotheses of its opponent.
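The Kepler example can be made vivid with a little arithmetic. The sketch below (orbital data are rounded textbook values; the function name is ours, not Glymour's) shows the sense in which a second planet puts the law "at risk" while a single planet does not:

```python
# Glymour-style cross-checking, illustrated: Kepler's third law says
# a^3 / T^2 is the same constant for every planet, so observations of
# two planets yield two independent computations of that constant,
# which "might possibly agree" -- or might not.

def kepler_constant(a_au, t_years):
    """Compute a^3 / T^2 from semi-major axis (AU) and period (years)."""
    return a_au ** 3 / t_years ** 2

earth = kepler_constant(1.000, 1.000)  # Earth
mars = kepler_constant(1.524, 1.881)   # Mars

# One planet gives one value and nothing to compare it with; a second
# planet makes disagreement possible, so agreement is a real test.
agreement = abs(earth - mars) < 0.01
print(earth, mars, agreement)
```

With only Earth's row, no cross-check is possible; the second row is what gives the data the power to refute the hypothesis relative to the rest of the theory.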
That sounds good, doesn't it? Here's a question for you to ponder. If both theories make exactly the same predictions, how could relying on "internal cross checking" possibly allow you to determine which is true unless God told you in advance that he would produce phenomena in a unified or internally cross-checkable way?
If you find the article rough sledding, make sure to look at the quote of Weyl on p. 334, which presents the idea very simply. Don't worry about the details of the generalization to logical theories under heading III.
There is a typo on p. 333: the equation should be
X(f1(E1 ... Ek), ..., fi(E1 ... Ek)) = 0.
Lakatos was concerned that philosophical pronouncements and ideals are unduly harsh on the historical practice of the best science, giving rise to needless skeptical doubts about the justification of science in general. He was particularly critical of the view that genuine scientists must seek to refute theories with experiments and are unscientific unless they specify the conditions under which the theory would be rejected and then carry through when these conditions are met. Lakatos was unhappy, however, with the idea that there are no general standards governing scientific rationality. His approach was to replace what he viewed as naive proposals with a more sophisticated one that does not require scientists to drop theories as soon as they get into trouble. I will leave it to you to figure out what his proposal is. Lakatos is very critical of Popper and of vaguely described "inductivists". But what might a sophisticated personal probabilist say about his proposal? This could be the basis of a nice midterm paper.
Kuhn agrees that rationality never forces us to drop a theory when it is refuted. But unlike Lakatos, he does not think any general rational standards dictate when the theory should be rejected. Instead, he accounts for scientific change in terms of the particular constellation of values of particular scientific communities. Since methodological standards are community-relative, choices between scientific approaches are more like political revolutions than like logic. One side survives and rewrites the textbooks and histories of science to make it look like logic forced the result. You can see how Kuhn came up with this view after writing our text on the Copernican Revolution. Recall that some astronomers emphasized the unified explanations of Copernicanism and others emphasized the incoherence of Copernicanism with any known approach to physics.
In this chapter, Howson and Urbach argue that the basic historical points urged by Lakatos agree with Bayesian recommendations, after all. One merely has to see the full implications of the concept of conditional probability. See if you agree.
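To make the talk of conditional probability concrete, here is a minimal Bayesian update. The numbers are invented for illustration: a hypothesis h with prior 0.3 that gives the observed evidence e probability 0.9, against an alternative giving e probability 0.2.

```python
# Sketch of a single Bayesian conditionalization step.

def posterior(prior_h, p_e_given_h, p_e_given_not_h):
    """Bayes' theorem: P(h|e) = P(e|h) P(h) / P(e)."""
    p_e = p_e_given_h * prior_h + p_e_given_not_h * (1 - prior_h)
    return p_e_given_h * prior_h / p_e

p = posterior(0.3, 0.9, 0.2)
print(round(p, 3))  # confidence in h rises from 0.30 to about 0.66
```

Everything in the Bayesian account of confirmation, qualitative or quantitative, is a consequence of updates of this shape.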
In this chapter, Howson and Urbach respond to critics.
For fairness, here are Glymour’s criticisms of Bayesianism in “Why I am Not a Bayesian”, a chapter of his book Theory and Evidence.
Here is a non-Bayesian explanation of simplicity in terms of minimization of risk. Risk is the expected distance of an estimate, computed from a random sample, from the truth. The idea is that risk can be analyzed into two components: the distance of the average value of the estimate from the truth (bias) and the spread or probable error (variance) of the estimate around its average value. Increasing the number of free parameters in the model used for estimation purposes decreases bias but increases variance even more, so that overall risk can be reduced by estimating by means of a simpler theory.
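The two-component analysis is the standard decomposition of mean squared error; for an estimator θ̂ of a true quantity θ it can be written out as:

```latex
% Risk (mean squared error) splits into bias-squared plus variance:
\[
  \underbrace{E\!\left[(\hat{\theta}-\theta)^2\right]}_{\text{risk}}
  \;=\;
  \underbrace{\left(E[\hat{\theta}]-\theta\right)^2}_{\text{bias}^2}
  \;+\;
  \underbrace{E\!\left[(\hat{\theta}-E[\hat{\theta}])^2\right]}_{\text{variance}}
\]
% Extra free parameters shrink the bias term but inflate the variance
% term, so the simpler model can win on the sum.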
This paper attempts to explain the role of simplicity in scientific inference in terms of efficiency of convergence to the truth, where efficiency is understood in terms of minimizing worst-case reversals of opinion prior to convergence.
The correctness of a theory depends, to be sure, on how the world is. But it also depends on how the theory is intended and understood. For example, James Clerk Maxwell constructed a system of differential equations that predicted all known macroscopic electromagnetic phenomena. The equations were discovered in terms of mechanical models of vortices in the ether, using such mechanically literal constructions as "idler wheels" to keep the ether vortices from scraping each other when they rotate in the same sense. One could choose to take all of this literally, as Maxwell seems to have at the outset. Heinrich Hertz confirmed the theory's astounding prediction of electromagnetic waves, yet remonstrated that Maxwell's theory is Maxwell's equations. For him, the mechanical metaphors might be suggestive but are speculative dead-weight better dropped from the completed theory. The same dispute occurred in the early atomic theory. Some chemists spoke of "atomic weights". Others, like Humphry Davy, insisted on the more guarded terminology of "combining weights". There has always been a tension between those who wish to interpret theoretical structures literally and those who view them as crutches or shorthand notations for more basic evidential relations.
Semantic realists think that theories should be understood literally. Scientific success is then literal truth. If the theory refers to quarks, then it is not correct unless quarks really exist and have the properties the theory says they have.
Semantic anti-realists think that theories involving unobservable entities should not be taken literally. There are several ways not to take a theory literally.
empiricism: all meaningful discourse must somehow be definable or otherwise reducible to discourse about "observables".
observables = direct sensory impressions
operationism: observables = concrete, macroscopic laboratory experiments.
instrumentalism: a theory is to be evaluated only as a calculator for computing predictions in various specific circumstances.
Another distinction concerns justification. One may be pessimistic or optimistic about our ability to justify scientific claims. Usually, higher demands on interpretation (semantic realism) give rise to concerns about justification (skepticism) and weaker demands (semantic anti-realism) lead to optimism. The possible positions are as follows:
Scientific realism = optimism + semantic realism.
Scientific anti-realism = optimism + semantic anti-realism.
Theoretical skepticism = pessimism + semantic realism.
Inductive skepticism = pessimism + semantic anti-realism.
Philosophers of science don't like skepticism very much, since they usually assume at the outset that science is great. Therefore, the realism debate is usually understood to be a debate between scientific realists and scientific anti-realists. The realism debate is perhaps the main debate in the philosophy of science. This is no doubt because it is also a perennial debate in physics.
As usual, most of the papers in our textbook are familiar classics that every philosopher of science will have read (and disagreed with).
As you may recall, Rudolf Carnap had a theory of "inductive logic" to justify empirical generalizations. Therefore, Carnap was not an inductive skeptic. His inductive logic cannot justify conclusions involving theoretical terms like "atom" that do not occur in the data. He avoided theoretical skepticism by adopting a kind of empiricism, according to which a theory is an uninterpreted formal system for deriving conclusions, together with a set of conventionally true semantical rules that interpret the formal system in terms of concrete observations. Without such an interpretation, a theory is meaningless. And with such an interpretation, each concept of the theory is logically tied to some complicated combination of observable properties.
Hempel provides an empiricist critique of physicist Percy Bridgman's operationism. Hempel's version of empiricism, which follows Carnap's, is less restrictive than Bridgman's; nonetheless, Bridgman's views are very close on the philosophical spectrum to those of Hempel and Carnap. Hempel's views are less rigidly reductionistic than Carnap's and represent the last, most liberal phase of the logical positivist movement.
By "calculus" Carnap means simply "formal axiomatic system".
The Ψ Carnap talks about is the "wave function" in quantum mechanics. The waves are not waves "in" anything. They are distributed through space and determine a probability distribution on measured values of physical variables whenever we happen to make measurements. Until we measure, there is no fact of the matter about the values of all these measurements. Since the wave function produces physical measurements by this curious procedure, it is not itself the familiar classical state characterized by these measurements. Aside from producing the measurements, its nature is fundamentally unknowable.
Carnap's system is founded on a distinction between observable and abstract vocabulary. Maxwell argues that there is no such distinction, because observability is actually a spectrum of notions. This is called a "slippery slope" argument. Hacking counters that some microscopes (those working on diffraction principles) are not really similar to ordinary vision. Nonetheless, we see with them when we can manipulate nature under them. So Carnap's distinction is still overturned.
"Ontology" is the philosophical study of what "really exists". Maxwell attacks the view that observable things "really exist" and "theoretical entities" do not.
"Diffraction" occurs when the troughs and crests of light rays add or cancel to make fringes or colors (as when you look at the grooves on the back of a CD).
N.b., The "Craigian reduction" of a theory is a theory that entails only the observational consequences of the given theory, eliminating all reference to hidden entities and processes.
Hempel tried to work out the obvious "covering law" model of explanation, which is based on the idea of exercises in a physics book. You explain a phenomenon when you "derive" it from "initial conditions" and the laws presented in the book. The trouble is, Hempel couldn't define what a "law" is supposed to be, except by pointing. Hempel also extended his view to statistical explanations. A statistical explanation is supposed to confer a high probability on the explained fact. This is analogous to the covering law model.
Wesley Salmon proposes, instead, that a statistical explanation consists of a listing of all the statistically relevant factors, even if they lower the probability of the explained fact. Relevance, rather than high probability, is the criterion. Thus, for Hempel, birth control pills explain why men taking them don't get pregnant; for Salmon, they don't. Salmon seems to have the problem that barometers are relevant to thunderstorms. He gets out of this, however, by saying that the atmospheric pressure renders barometer readings irrelevant after all.
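Salmon's screening-off move can be checked on a toy joint distribution (all numbers invented for illustration): a falling barometer B is statistically relevant to a storm S, but conditional on low atmospheric pressure L, the barometer reading becomes irrelevant, since S is independent of B given the pressure state.

```python
# A three-variable toy model: low pressure L causes both the barometer
# drop B and the storm S; B and S are independent given L.

P_L = 0.3                      # P(low pressure)
P_B = {True: 0.9, False: 0.1}  # P(barometer drops | pressure state)
P_S = {True: 0.8, False: 0.1}  # P(storm | pressure state)

def prob(event):
    """Sum the probabilities of the atoms satisfying `event`."""
    total = 0.0
    for low in (True, False):
        for drop in (True, False):
            for storm in (True, False):
                p = P_L if low else 1 - P_L
                p *= P_B[low] if drop else 1 - P_B[low]
                p *= P_S[low] if storm else 1 - P_S[low]
                if event(low, drop, storm):
                    total += p
    return total

p_storm = prob(lambda l, b, s: s)
p_storm_given_drop = prob(lambda l, b, s: s and b) / prob(lambda l, b, s: b)
p_storm_given_low = prob(lambda l, b, s: s and l) / prob(lambda l, b, s: l)
p_storm_given_drop_low = (prob(lambda l, b, s: s and b and l)
                          / prob(lambda l, b, s: b and l))

print(p_storm, p_storm_given_drop)                # barometer is relevant...
print(p_storm_given_low, p_storm_given_drop_low)  # ...until L screens it off
```

The barometer raises the probability of the storm unconditionally, but once the pressure is held fixed, it makes no difference at all, which is exactly Salmon's escape route.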
Think about what this says, from a Bayesian perspective, about quality of explanation being an indicator of truth.
Typo: condition 2 on p. 172 should read: P(B|A.C) is not equal to P(B|A.-C).
Even after all the laws and statistically relevant factors are isolated, it seems that causes explain and effects do not.
Due in class on Dec 6.
The van Fraassen piece represents an anti-realist attempt to counter realist claims that explanation justifies belief in theoretical entities. Van Fraassen wishes to argue that explanation is a purely pragmatic virtue that is independent of grounds for accepting the theory. As such, it cannot justify underdetermined theories over their observable consequences.
Length: 5 pages plus references.
Essay must be typed, double spaced, 10 pt. Times Roman, with proper references. See paper writing guide.
Relate confirmation and/or explanation to the realism/anti-realism debate.
Relate any topic in the class (realism, confirmation, explanation) to a real episode in scientific history.