Goodman’s problem for induction is often referred to as the new riddle of induction (the old riddle of induction is Hume’s) or as the grue problem.
Goodman begins by defining a description-word, “grue.” The philosophical term for a description-word is predicate; it is basically a term used to describe something. For example, “tall” and “wears glasses” are predicates. And, after Goodman introduced the term in his 1955 work Fact, Fiction, and Forecast, so is “grue.”
Goodman defines grue by saying that
it applies to all things examined before [some future time] t just in case they are green but to other things [after that future time] just in case they are blue. Then at time t we have, for each evidence statement asserting that a given emerald is green, a parallel evidence statement asserting that that emerald is grue. And the statements that emerald a is grue, that emerald b is grue, and so on, will each confirm the general hypothesis that all emeralds are grue. Thus according to our definition, the prediction that all emeralds subsequently examined will be green and the prediction that all will be grue are alike confirmed by evidence statements describing the same observations. But if an emerald subsequently examined is grue, it is blue and hence not green.
Let’s give a slightly more concrete example by defining grue as something that, whenever we examine it before midnight on January 1, 2100, appears green, but that were we to examine it on or after midnight on January 1, 2100, would appear blue.
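To make the time-indexed character of this definition vivid, here is a minimal sketch of a grue-style predicate in Python. The cutoff date, the dictionary representation of an emerald, and the observed_color helper are purely illustrative assumptions, not anything from Goodman’s text.

from datetime import datetime

CUTOFF = datetime(2100, 1, 1)  # midnight, January 1, 2100

def observed_color(thing):
    # Hypothetical stand-in for examining a thing and noting how it appears.
    return thing["appears"]

def is_grue(thing, observation_time):
    # Grue: the thing appears green when examined before the cutoff,
    # or appears blue when examined on or after the cutoff.
    if observation_time < CUTOFF:
        return observed_color(thing) == "green"
    return observed_color(thing) == "blue"

emerald = {"appears": "green"}
print(is_grue(emerald, datetime(2025, 6, 1)))  # True: green before the cutoff counts as grue
print(is_grue(emerald, datetime(2100, 1, 2)))  # False: a still-green emerald after the cutoff is not grue

The point of the sketch is simply that every green thing examined before the cutoff satisfies is_grue just as well as it satisfies “is green.”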
Because of the way we’ve defined grue, all of the experience that we’ve had so far of green things is also evidence that those things are grue.
In particular, the following two statements are both true:
I All observed emeralds are green.
II All observed emeralds are grue.
If observed correlations provide us with reasonable beliefs about unobserved cases, we have equal reason to believe the following two claims:
I' The first emerald observed on or after midnight, January 1, 2100, will be green. (Let’s call this the green-prediction.)
II' The first emerald observed on or after midnight, January 1, 2100, will be grue. (Let’s call this the grue-prediction.)
But now we have a problem. Because of how we’ve defined grue, the grue-prediction implies the following:
II" The first emerald observed after t will be blue. (Let’s call this the blue-prediction.)
Obviously, the blue-prediction contradicts the green-prediction.
So, by introducing grue, Goodman demonstrates that induction can seemingly be used to support outlandish beliefs, such as the belief that emeralds will appear blue when observed on or after January 1, 2100. Induction can even be used to support contradictory claims: by using grue, we’ve shown that the claim “All emeralds will appear green when observed on or after January 1, 2100” and the claim “All emeralds will appear blue when observed on or after January 1, 2100” are equally well supported by the evidence.
Goodman doesn’t think we can find a solution to the grue problem by focusing on the logic of the predicates themselves. Instead, he thinks the
best way to distinguish between the predicates we want to employ and the ones we don’t is to focus on our own psychology. How do we use predicates in formulating inductive arguments?
Goodman thinks it’s the answer to this question that provides the key to solving his new riddle of induction.
Goodman observes that the claims we want to test through induction don’t exist individually, in a vacuum.
Instead, those claims are often related to a number of other claims. And one of the ways in which different claims are related to each other is that we often employ the same predicates to formulate them. Goodman describes those predicates that are used in a wide variety of claims that we test through inductive inference as entrenched.
Now we have a way to distinguish between green and grue: Green is very well entrenched while grue is not entrenched at all.
Green and grue conflict with each other because they support contradictory predictions. So, if we have to choose between two claims that we’re testing against the evidence and one of those claims involves the predicate “green” while the other involves the predicate “grue,” then Goodman says that we should choose the claim that involves the better-entrenched predicate, “green.” In this case, the predicate “green” overrides the predicate “grue.”
This gives Goodman a way to characterize the kinds of predicates he thinks make good candidates for inductive inference, and it offers a solution to the grue problem: the good candidates are the predicates that override other, conflicting predicates without being overridden themselves. He calls those predicates projectible.
Goodman’s solution to the grue problem might seem too easy. What justification could we give for characterizing certain descriptions, such as “green,” as projectible? Why are others, such as “grue,” non-projectible? Wasn’t the whole point of Goodman’s puzzle that both descriptions fit our evidence equally well?
Bayes’s Theorem
One way to defend Goodman’s type of solution is to see it as part of a larger theory of inductive inference. That’s the strategy a number of contemporary philosophers pursue, by suggesting that the solution to Goodman’s grue problem is just a special application of a rule of inference known as Bayes’s theorem, named for the Reverend Thomas Bayes, who formulated it.
The insight that Bayes captured was that when we learn new information about the probability of the occurrence of some event, we often already have information about how likely that event is. Bayes’s theorem presents a precise mathematical formula for how to interpret the new information about likelihoods in light of the information that we already have.
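One standard way of writing the theorem (the lecture describes it only informally, so the notation here is supplied for reference) is:

P(H | E) = P(E | H) × P(H) / P(E)

where H is a hypothesis, E is the new evidence, P(H) is the prior probability of the hypothesis, P(E | H) is how likely the evidence would be if the hypothesis were true, and P(E) is how likely the evidence is overall.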
Suppose that your doctor suggests that you get tested
for a very rare but very deadly disease. Let’s say it’s so rare that only one in 10,000 people ever get it. The test for the disease is 99 percent reliable. A few days later, you get a call from the doctor, saying that you have this rare deadly disease. How should you feel about your chances of actually having the disease?
You might think that you have a 99 percent chance of having the disease. But you’d be wrong.
And this is where Bayes’s theorem comes in.
Imagine that we have a random sample of 100,000 people. One in 10,000 of those people will have the disease, so that means that 10 will have it and 99,990 won’t have it.
The test is supposed to be 99 percent reliable. For simplicity’s sake, let’s say that means that for every 100 people who actually do have the disease, the test gets it wrong only one time (called a false negative), and that for every 100 people who actually do not have the disease, the test again gets it wrong only one time (called a false positive).
In other words, one out of every 100 people who actually have the disease will wrongly get a negative result, and one out of every 100 people who don’t have the disease will wrongly get a positive result.
That means that out of our sample of 100,000 people, the test will say that all 10 of those who do have the disease actually do have it. However, it will also wrongly say that one percent of those who don’t have the disease actually do have it. That means it will falsely diagnose 1,000 people as having the disease when they actually don’t have it.
Let’s put those numbers together. That 99-percent-reliable test diagnoses 1,010 people out of 100,000 as having the rare deadly disease, but only 10 of those people actually have the disease, so your chances of having the disease, even if that 99-percent-reliable test says you have it, are actually only 10 out of 1,010, or about one percent.
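For readers who want to check the arithmetic, here is a minimal Python sketch of the same calculation, done with exact probabilities rather than rounded head counts; the variable names are illustrative.

prior = 1 / 10_000          # P(disease): one person in 10,000 has it
sensitivity = 0.99          # P(positive test | disease)
false_positive_rate = 0.01  # P(positive test | no disease)

# Total probability of getting a positive test result
p_positive = sensitivity * prior + false_positive_rate * (1 - prior)

# Bayes's theorem: P(disease | positive test)
posterior = sensitivity * prior / p_positive
print(round(posterior, 4))  # about 0.0098, i.e., roughly one percent

The exact answer (about 0.98 percent) agrees with the rounded 10-out-of-1,010 figure above.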
This example shows the power of using what we already know about the world, called the prior probability, to interpret any new information that we learn. In this example, the prior probability has to do with just how infrequently that deadly disease occurs. Without taking the prior probability into account, when you get the bad news of the test results for the disease, you would think that you are almost certainly a goner. Actually, though, 99 out of 100 people who get similar bad news go on to live a long life.
A number of philosophers have suggested that Bayes’s theorem is the basis for the correct understanding of how to deal with Goodman’s grue problem.
In fact, many philosophers think that by applying the insights that Bayes introduced, we may solve a number of problems in both epistemology and the philosophy of science.
Readings

Skyrms, Choice and Chance.
Stalker, ed., Grue.
We’ve seen reasons for thinking that induction can in fact be a powerful tool for acquiring new information. This tells us that internalist
philosophers who develop rules for good inferential practices are making important contributions to our understanding of induction and how to improve it.
However, we’ve also seen reasons for thinking that, as with deduction, human beings are just not very reliable inductive reasoners. This suggests that if we’re trying to evaluate the structure of human knowledge, externalism still seems like the best theory for explaining the contributions of inference to knowledge.
QUIZ
1 Which of the following is equivalent to the claim that all NBA players are tall?
a Everyone who is tall is an NBA player.
b All non-NBA players are non-tall.
c All non-tall things are non-NBA players.
d All of the above.
e None of the above.
2 True or False
According to Karl Popper,
confirmation is more important than disconfirmation.
3 True or False
According to Israel Scheffler and Nelson Goodman, the solution to Carl Hempel’s paradox relies on the fact that the existence of a black raven not only provides evidence for the claim that all ravens are black but is also incompatible with the claim that all ravens are non-black.
4 True or False
A solution to Nelson Goodman’s new riddle of induction is that “grue” is a time-indexed predicate whereas “green” is not.
5 According to Bayesian reasoning, if a person who is almost always reliable (say, 99 percent reliable)
tells you that a one-in-a-million event occurred, then the chance that the event actually occurred is which of the following?
a 99 percent
b Greater than 99 percent
c Less than 99 percent