
Normal Circumstances Reliabilism: Goldman on Reliability and Justified Belief
Alvin Goldman's paper "What Is Justified Belief?" and his book Epistemology and Cognition pioneered reliabilist theories of epistemic justifiedness. In light of counterexamples to necessity (demon worlds, brains-in-vats) and counterexamples to sufficiency (Norman the clairvoyant, Mr. Truetemp), Goldman has offered a number of refinements and modifications. This paper focuses on those refinements that relativize the justification-conferring force of a belief-forming process to its reliably producing a high ratio of true beliefs over falsehoods in special circumstances: reliability in the actual world, in normal worlds, and in nonmanipulated environments. This paper argues that Goldman's refinements fall short and suggests instead the relativization to reliability in normal circumstances. Normal circumstances are those where the belief-forming process acquired the etiological function of reliably inducing true beliefs. This theory invites the Swampman objection. Two lines of response are pursued.
Alvin Goldman's 1979 paper "What Is Justified Belief?" is one of the most influential papers in epistemology in the last fifty years. As its title advertises, the paper advanced an account of "justified" beliefs, beliefs that are "in the right" or "correctly" formed or sustained from the epistemic point of view, from the point of [End Page 33] view of promoting true belief and avoiding error. Goldman famously argued for two broad theses:
(1). That a belief, to be justified, had to have the right causal basis.
(2). That the right causal basis was a reliable basis, a process that reliably produces true beliefs.
At the time of writing it was controversial, to say the least, whether a belief, to be justified, had to have the right causal basis. Not so once the field had absorbed the paper. It is now commonplace to suppose that a "doxastically" justified belief has the "right" causal basis. Which causal basis, however, is still in dispute. Not everyone agrees that a justified belief is a belief based on a belief-forming process that reliably produces mostly true beliefs.
I'll take it for granted that Goldman's first thesis is correct. I'll also take it for granted that Goldman's second thesis is a genuine insight, along the right lines. I won't, however, take it for granted that Goldman got the details exactly right. Indeed, over the years Goldman has advanced revisions and modifications of his second thesis in the light of various challenges. In this paper I engage in a bit of Goldman scholarship, examining some of these revisions and modifications. I shall do so critically, for my overall aim is to advance a further revision of Goldman's basic insight. I shall argue in favor of normal circumstances reliabilism, where justified belief is constitutively associated with the reliability of the belief-forming process in normal conditions.
1. EXTERNALIST EPISTEMOLOGY
I shall begin by focusing on our subject matter, by sharpening our point of view (cf. Graham 2011c, 2012, 2016b).
Internalists in epistemology often connect a belief's being justified with the individual's ability to justify the belief. Justifying a belief requires the capacity to refer to the belief as a belief, to cite reasons or evidence in its favor as reasons or evidence, and so requires the capacity to represent reasons or evidence as reasons or evidence. Justifying a belief requires the capacities to meta-represent and to reason critically.
Higher nonhuman animals, we have good reason to believe, form beliefs and engage in propositional reasoning. We also have good reason to believe that among the higher nonhuman animals that can form beliefs and other propositional attitudes, not all have the capacity to represent their beliefs as beliefs, or to represent reasons or evidence as reasons or evidence. So there are higher nonhuman animals that have beliefs but lack the ability to justify their beliefs.
Very young human children—children from a few weeks to two years—form beliefs and engage in propositional reasoning. And like our nonhuman cousins, very young children lack the ability to justify their beliefs. [End Page 34]
Higher nonhuman animals and young children have beliefs. Can their beliefs be justified?
Internalists in epistemology who connect a belief's being justified with the individual's ability to justify the belief often deny that nonhuman animals and very young human children have justified beliefs. They lack justified beliefs because they lack a capacity necessary for justified beliefs. I reject this view.
Some internalists in epistemology deny that justification for a belief requires that the individual have the ability to justify the belief. These internalists, it seems, allow for the possibility that higher nonhuman animals and very small human children can have justified beliefs. This view seems more reasonable to me.
These internalists typically claim that justification supervenes on conscious sensations or conscious experiences. These internalists insist that the "justifiers" for our beliefs, especially our perceptual beliefs, consist in conscious "internal" sensations and experiences, maybe even conscious propositional attitudes other than beliefs, such as seemings that such and such is the case. No conscious justifier, no justification.
Internalists don't think much of blindsight cases. These are cases where the individual forms a belief—maybe on the basis of a reliable (even a super-reliable) belief-forming process—but the individual does not form the belief on the basis of any conscious sensation, experience, phenomenology, or seeming. In those cases, the internalist insists, the beliefs are not justified. No conscious justifier, no justification.
This is where many reliabilist externalists will disagree. Maybe there are creatures in the animal kingdom with reliable perceptual capacities that are not accompanied by consciousness. And maybe those capacities induce reliably true perceptual beliefs. Many are true. Many will amount to knowledge. And many are "justified" in a clear sense of that word.
Jack Lyons, among others, has argued for this position. Indeed, Lyons thinks "zombies" in (roughly) Chalmers's sense can have justified beliefs, despite the absence of any phenomenology at all. "[E]ven zombies, who in the philosophical literature lack conscious experiences altogether, can have basic, justified, perceptual beliefs" (Lyons 2009, Preface; cf. 52). Lyons argues such cases are even actual:
We needn't go so far into the realm of science fiction, however, to find such examples. The actual world provides us with many instances of what Gibson (1966) calls "sensationless perception." One such example (in fact, one of Gibson's own) concerns the obstacle sense, or facial vision, of the blind. Blind people (and sighted people while blindfolded, too, though less reliably) can detect obstacles—walls, chairs, and the like—without having any (conscious) sensation. In fact, they tend to think that they are picking up information somehow through the skin of the face (hence "facial vision"), when in truth the information is coming in through the ears as a subtle form of echolocation (1966, p. 2). Because the subjects don't have introspective access to any relevant sensation, this looks to be a case of sensationless perception. Though philosophical zombies may be things of fiction, such "minor zombies" actually exist.
(Lyons 2009, 52) [End Page 35]
Lyons was thinking of adults. Why not also think of very small children, or even higher nonhuman animals? They can be zombies too. And they can have justified beliefs.
Why not engage a fully externalist epistemological project? Exclude introspection, meta-awareness, discursive reasoning, critical reason, consciousness, and propositional seemings—(Do animals and small children have propositional seemings? How often do adults have them? Do they have them while driving?)—and then focus on the epistemological properties and relations of higher nonhuman animals and very small children, even the "zombie" variety. With that project accomplished, build your way back up to the epistemological properties and relations that depend on consciousness, propositional seemings, introspection, meta-awareness, and critical reasoning. That, then, is our subject matter, or our point of view, in what follows: fully externalist epistemology. I shall read Goldman in this light.
2. ARGUMENTS FOR SIMPLE RELIABILISM
I now turn to arguments Goldman has offered in favor of "simple" reliabilism. Simple reliabilism is the view that in all possible worlds W, a belief is prima facie justified iff based on a process that is reliable in W.
the master argument
In "What Is Justified Belief?" and in Epistemology and Cognition, Goldman provides what I've called "the master argument" for "simple" reliabilism. The master argument is driven by our intuitions about cases:
(1). It is intuitive that perception, memory, and reasoning confer justifiedness.
(2). It is intuitive that wishful thinking, emotional attachment, and hasty generalization produce unjustified beliefs.
(3). We know empirically that perception, memory, and reasoning are reliable.
(4). We know empirically that wishful thinking, emotional attachment, and hasty generalization are unreliable, producing more false beliefs than true.
(5). So being justification conferring covaries with reliably inducing true beliefs, and failing to justify covaries with being unreliable.
(6). The best explanation of this covariation is that conferring justifiedness just is, or strongly supervenes upon, being produced by a reliable belief-forming process.
(7). Hence "simple" reliabilism: For all possible worlds W, a belief B is prima facie justified iff based on a psychological process that reliably causes and sustains true beliefs in W. [End Page 36]
Here is a representative passage:
Granted that principles of justified belief must make reference to causes of belief, what kinds of causes confer justifiedness? We can gain insight into this problem by reviewing some faulty processes of belief-formation, i.e., processes whose belief-outputs would be classified as unjustified. Here are some examples: wishful thinking, reliance on emotional attachment, mere hunch or guesswork, and hasty generalization. What do these faulty processes have in common? They share the feature of unreliability: they tend to produce error a large proportion of the time. By contrast, which species of belief-forming (or belief-sustaining) processes are intuitively justification-conferring? They include standard perceptual processes, remembering, good reasoning, and introspection. What these processes seem to have in common is reliability: the beliefs they produce are generally true. My positive proposal, then, is this. The justificational status of a belief is a function of the reliability of the process or processes that cause it, where (at a first approximation) reliability consists in the tendency of a process to produce beliefs that are true rather than false.
the argument from degrees
Goldman often combines the Master Argument with a parallel argument "from degrees." Here is Goldman's summary from "What Is Justified Belief?":
notice that justifiedness is not a purely categorical concept, although I treat it here as categorical in the interest of simplicity. We can and do regard certain beliefs as more justified than others. Furthermore, our intuitions of comparative justifiedness go along with our beliefs about the comparative reliability of the belief-causing processes.
Also from Epistemology and Cognition:
Support for reliabilism is bolstered by reflecting on degrees of justifiedness. Talk of justifiedness commonly distinguishes different types or grades of justifiedness: 'fully' justified, 'somewhat' justified, 'slightly' justified, and the like. These distinctions appear to be neatly correlated with degrees of reliability of belief-forming processes.
the knowledge argument
In other places, especially when arguing against internalism, Goldman suggests the "knowledge argument" for reliabilism:
(1). Being reliably caused or sustained is a necessary condition for knowledge.
(2). Justifiedness is a necessary condition on knowledge.
(3). Hence justifiedness entails, or just is, being reliably produced. [End Page 37]
the argument from epistemic goods
Another argument for reliabilism appears at times in Epistemology and Cognition, and is also common in the literature. I call it the "epistemic goods" argument for reliabilism:
(1). Forming true beliefs is an epistemic good.
(2). Forming true beliefs reliably, and being based on a reliable belief-forming process, is also an epistemic good.
(3). Justifiedness is a kind of epistemic good distinct from, but associated with, truth.
(4). Hence justifiedness entails, or just is, being based on a reliable belief-forming process.
These four arguments have been influential (Graham 2011a). I do not think any of these arguments is conclusive (this is philosophy, after all). The first (and so the supplementary second) faces counterexamples (as we are about to see), and the latter two are invalid as they stand: from the fact that reliability and justifiedness are each necessary for knowledge, or are each epistemic goods, it does not follow that one entails or just is the other. Even so, I find them deeply suggestive, especially the (albeit rather vague) epistemic goods argument. They move me, among many others, to pursue the reliabilist project. Indeed, Goldman has even argued that opponents of reliabilism are implicitly moved by the kinds of considerations that support reliabilism:
Why do so many examples of non-inferential J-principles center on perceptual experience, especially where the epistemic subject is in "good" perceptual circumstances? Because these are cases in which beliefs formed in accordance with these principles are usually true … What I say is intended to apply not only to reliabilists … I speak also of epistemologists who offer an entirely different theory … I claim that the underlying appeal of these J-principles is a tacit recognition that they are truth-conducive, even when this is not the official doctrine being endorsed.
I will take these arguments, taken together, as adequately motivating Goldman's project, and so as adequately motivating our own efforts to improve on it.
3. COUNTEREXAMPLES TO SIMPLE RELIABILISM
I now turn to the familiar counterexamples to simple reliabilism. Bonjour (1980) offered Norman and his defeated colleagues Samantha, Maud, and Caspar (and Lehrer offered Mr. Truetemp [Lehrer 1990]). Goldman's former Arizona colleagues Stewart Cohen, Keith Lehrer, and John Pollock offered the demonically deceived victim, psychologically identical to you or me (Cohen and Lehrer 1983; Cohen 1984; Pollock 1984). These cases challenge the move from the fact that the property of being a justification conferring process and the property of being a reliable belief-forming process covary in the actual world (these two properties covary in [End Page 38] the actual world) to the conclusion that these two properties covary in all possible worlds, for one just is, or one strongly supervenes upon, the other.
In the demon-world case the internalist thought the internalist property he cared about was present while reliability was absent; so "justification" or "rationality" or "reasonableness" can't supervene upon, or entail, de facto reliability, but must instead supervene on conscious experiences, propositional seemings, rational arguments, rational relations among beliefs, and so on. And in the clairvoyance case Bonjour thought the beliefs were reliably formed, but everything he cared about was absent: the individual, having no argument at hand to justify his "clairvoyance" beliefs as beliefs based on a reliable process, lacked a "justification" and so was not "justified" in believing as he did.
I am not presently interested in those properties that the internalist cares so much about: conscious sensations, conscious experiences, inner phenomenology, seemings, reasons, arguments, justification, higher-order awareness, "rationality," and critical reason. Remember, I'm pursuing a "fully externalist" epistemology. Does that mean I think the reliabilist can simply ignore the counterexamples, as trading on properties a serious externalist should happily ignore?
Not at all. For I think they challenge reliability theories of "justifiedness" even so, theories of well-formed beliefs, of beliefs "formed or sustained by proper, suitable, or adequate methods, procedures or processes" (Goldman 1988, 128). Surely there is a way, I've thought, for the reliabilist to classify Norman as forming nonjustified beliefs, even though mostly true, and simultaneously a way to classify the victim of deception as forming justified beliefs, even though mostly false. I accept that there is an epistemic property—call it "justifiedness"—that "clairvoyance" beliefs lack despite being mostly true and that perceptual beliefs in a vat enjoy despite being mostly false. And so I am going to focus throughout on the counterexamples and how to treat them. I read Goldman as agreeing.
I will describe variants of the cases that shear off nearly everything the internalist cares about, variants involving very young children (we could even imagine higher nonhuman animals), and I will take at face value the alleged "intuitions" that justifiedness is lacking in the clairvoyance case but present in the demon case. Then I'll turn to Goldman's treatments of these kinds of cases over the years, and say why, though I admire them, I think they fall short. These are the revisions and modifications I referred to earlier. I'll then turn to what I think is the right approach to revising process reliabilism to respond to these kinds of cases.
NORMAN. An otherwise ordinary three-year-old human boy just so happens to have a reliable "clairvoyant" belief-forming cognitive system in his head with hidden sensory transducers, due to some bizarre and completely random mutation caused after he stepped into a pool of radioactive waste. This process reliably induces true beliefs about the whereabouts of certain people outside of visual range who send off clairvoyance waves. For example, the mutation reliably tracks Obama, partly because clairvoyance waves have recently filled our atmosphere (also by cosmic accident), and Obama emits signals carried by those waves (again by cosmic accident). [End Page 39]
Norman has no meta-beliefs about his possession of this process, nor does he have any meta-beliefs about the reliability of such processes. Indeed, he's three, and lacks meta-beliefs about any of his belief-forming processes.
Unlike many other belief-forming processes, this one entirely lacks any accompanying conscious sensations, conscious representations, or other "seeming-to-be-true" phenomenology. All the process does is stick true beliefs in Norman's head, without his awareness or acknowledgment. They don't even seem to come to him from out of the blue; he's got no clue that he's formed such a belief or why. It's as if they've been there all along.
These beliefs play no significant role in his life or overall mental economy. He receives no feedback of any sort or in any way that he's right; these beliefs are otherwise entirely idle. He doesn't watch the news, and his parents have no interest in politics. He does nothing with the information; it serves no intellectual or practical end. (Graham 2011b, 2014a)
HANNAH is an ordinary three-year-old human girl. Her sight is perfect. She reliably discriminates colors, shapes, surfaces, distances, and so on. She's developed impressive motor skills through repeated interactions with her environment, which in turn improve her depth perception and other perceptual abilities. She forms countless perceptual beliefs about her physical environment, nearly all of which are true.
Her perceptual beliefs play an enormous role in her daily life and her overall mental economy. She receives considerable feedback that she is right when she is, and considerable feedback that she is wrong when she is. She makes an enormous use of the information perception provides. Like any ordinary human child, she is going through normal stages of development and perceptual learning.
Unfortunately one day her planet is invaded by the Cartesians—a powerful alien race bent on imposing the conditions for Cartesian meditation on all creatures (until one classical internalist proves the existence and characteristics of the external world from introspectively known patterns of sensory experiences and first principles)—and she is placed in a vat of nutrients and hooked up to a massive Cartesian Coordinate supercomputer. The Cartesians possess a Laplacean Predictor 9000® and so are capable of stimulating Hannah's brain via the supercomputer with all of the proximal stimuli she would have had if the Cartesians had not invaded and placed her in a vat of nutrients. In the vat she is no longer in normal conditions. In the vat, she forms reliably false beliefs on the basis of perception.
She forms exactly the same beliefs (types) on the basis of the same perceptual representations (types) she would have formed if not envatted. Even so, her beliefs are massively in error, unreliably formed.
I will assume that there is something amiss from the fully externalist, reliabilist point of view about Norman's clairvoyance beliefs, even if his beliefs are de facto reliably formed. In other words, I will assume that there is a property understood in terms of reliably producing true beliefs that his clairvoyance beliefs lack. And then on the other hand I will assume that there is something right about Hannah's [End Page 40] perceptual beliefs from the same point of view, even if de facto unreliably formed; there is a property understood in terms of reliably inducing true beliefs that her perceptual beliefs enjoy. That's how I read Goldman, as committed to finding something wrong about Norman so that his clairvoyance beliefs are not justified, and something right about Hannah, such that her perceptual beliefs are justified.
Some philosophers tell me they think Norman's in good shape. Kent Bach (1985) even argued that Norman's clairvoyance is just like ordinary human perception. If you feel the way they do, try the following thought experiment: imagine Norman's "clairvoyance" as wholly unreliable. Maybe that's because it never worked, or because Lex Luthor has dampened the clairvoyance waves in the atmosphere, or because the Cartesians have envatted him too. Do you think, in any of these cases, that Norman's clairvoyance beliefs are epistemically on a par with Hannah's perceptual beliefs? I bet you'll say no. Hannah's perceptual beliefs have a property that survives the loss of de facto reliability; Norman's clairvoyance beliefs do not.
If we start with ordinary human perception in normal circumstances, we then have four cases to consider: (1) ordinary reliable human perception in normal conditions (Hannah in good conditions); (2) ordinary human perception in a vat (Hannah in a vat); (3) Norman's accidentally reliable clairvoyance; (4) Norman's unreliable clairvoyance (for whatever reason). We are looking for the property, understood in reliabilist terms, that's present in (1) and (2) but absent in (3) and (4).
4. SHOULD RELIABILISTS WORRY ABOUT DEMON WORLDS?
But before I turn to Goldman, I want to criticize something Lyons has recently said about "demonworlders" in his paper "Should Reliabilists Be Worried about Demon Worlds?" (Lyons 2013). Lyons distinguishes three cases:
(1). Mere demonworlders: These are creatures characterized exclusively in terms of experience-belief functional mappings. Given a certain experience E, the creature forms a belief B. They are being controlled by a demon, so that their experiences are caused by the demon, and not by the "normal" or "ordinary" causes for these creatures.
(2). Recently envatted humans: These are humans that have just been envatted, or hooked up to a computer or a virtual reality device, and are forming false perceptual beliefs; these are human demon-worlders, recently envatted.
(3). Long-term envatted humans: These are humans that have been in the vat, or hooked up to the computer, suffering from massive deception, for a very, very long time.
The bulk of Lyons's paper is designed to undermine the internalist intuition that if two creatures A and B have the same experiences, they are equally justified in forming the same belief. This is the "intuition" or "principle" that drives the [End Page 41] internalist to say, when thinking about massive deception cases, that reliability is irrelevant to justification, for A can have mostly true beliefs while B has mostly false beliefs, but A and B are equally "justified" or "rational" or "reasonable," forming beliefs as they should on their "evidence."
To undermine the intuition, Lyons focuses on mere demonworlders. He imagines Larry, a mere demonworlder, controlled by a demon at the Grand Interworld Station. At the Grand Interworld Station there are all sorts of creatures from all kinds of possible worlds, not simply demonworlders and controlling demons. Many of the creatures have gathered to observe the demon as it controls Larry's experiences. The demon is inducing in Larry a range of experiences E1, E2, E3, and so on, with the intention that Larry form a predicted set of beliefs that turn out to be "justified" (and so the "epistemically correct" response to those experiences) but without a care for their truth value while in the Station (he's trying to produce a "demonworld case"). The demon, however, expresses dismay to the observing crowd that Larry is forming the "wrong" beliefs B1, B2, B3, etc., and not the "justified" set of beliefs that he intends. Something seems to have gone wrong.
"Not so fast!" Lyons reports a creature from Tralfamadore as saying: Larry is forming exactly the beliefs he should on those experiences, for those are exactly the beliefs creatures from Tralfamadore form given those experiences. "No way!" says the Unicorn. "Larry should form a different set of beliefs altogether on those experiences." Other creatures offer different assessments of Larry's predicament.
We now have a question on our hands: given that a set of experiences can cause different sets of beliefs, which set of beliefs is the "right" set—the right or justified response—to the set of conscious experiences? Is it the set of beliefs the demon thinks are the right ones? The Tralfamadorean? The Unicorn? And what about the other possible answers one might uncover during a stop at the Grand Interworld Station? Who has the right answer?
Lyons's point in concocting this fairy tale is that there is no basis for an answer to this question, taking a set of experiences on their own as the only basis to form an answer. Suppose we say that Larry should form the beliefs that we humans would form on those experiences. That, Lyons says, would be parochial and chauvinistic. We would simply project ourselves into Larry's shoes. But there is no basis for that. After all, Larry was not described as human. But by parity of reasoning, so too for any other possible answer. Lyons's point is that as long as the experience-belief connection is entirely contingent (it is contingent whether these experiences/sensations cause those beliefs), so too is the epistemological connection (it is contingent whether these experiences justify those beliefs). But if the epistemological connection between experience-belief pairs is entirely contingent, and so the epistemological connection does not simply supervene on the contents of the pairings, internalism is false. For internalism lacks the resources to specify the "right" doxastic response to a set of experiences. Psychological duplicates (experience-belief duplicates) are not ipso facto epistemological duplicates. To specify the right pairings, more information that goes beyond the pairings themselves is required. Lyons says "anchors" of some sort or another are also required to settle the question; we need anchors, not projectors. [End Page 42]
I like what Lyons says here. I'll make a similar point myself before concluding. But whether he's right or wrong about internalism is not my present concern. And if you disagree, you can easily find him on Facebook and tell him why.
Instead my present concern is what Lyons would say as a member of the reliabilist camp about the second and third cases, cases where humans are being massively deceived. In (2) and (3) do the human "victims" have justified beliefs? I want to discuss a possible interpretation of what Lyons says.
Lyons canvasses the possibility that a recently envatted human has justified perceptual beliefs, for we might say that the victim's perceptual belief-forming processes are still reliable while envatted (Lyons 2013, 35–37). He seems to imagine the following interpretation of "reliable." A recently envatted human relying on perception is going to form one false perceptual belief after another, but there is a sense in which the process overall is still reliable (most of the individual's perceptual beliefs are true, after all, so the truth ratio is still very high). Hence we can say, on a reliabilist framework, that the recently envatted victim has justified perceptual beliefs. The long-envatted victim, however, has formed far more false perceptual beliefs than true ones (and so the truth ratio is now very low). Since de facto reliability of the belief-forming process is a necessary condition on justifiedness, the long-envatted victim lacks justified perceptual beliefs.
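To make the track-record reading vivid (the numbers are mine, purely illustrative, and not Lyons's own), suppose the victim formed 10,000 perceptual beliefs before envatment, nearly all true, and has formed 200 false ones since being envatted. The process's track-record truth ratio is then still

$$\frac{10{,}000}{10{,}000 + 200} \approx 0.98,$$

well above any plausible reliability threshold, so on the frequency reading the process still counts as reliable. After decades in the vat, false beliefs swamp the true ones and the ratio falls toward zero. On this reading, then, recent envatment leaves perception "reliable" while long envatment does not.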
This is an actual statistical frequency conception of reliability. Though this is surely a possible reading of "reliability" and Lyons is right to imagine it, I am inclined to pass. I think of reliability differently. I think the recently envatted victim's perceptual capacities, because of the circumstances, are no longer reliable. Human perception in normal circumstances is (not infallible but) reliable. Human perception in a vat is not reliable; the circumstances are wrong. Human perception lacks the tendency or the propensity to get things right while envatted. Reliability is then more like a disposition than a track record. I think most reliabilists think of reliability this way too (Goldman 1979; 1988, 137; cp. Pollock 1984, 112). Hence according to simple reliabilism, Hannah's perceptual beliefs are not justified, for they are not reliably formed while in a vat, even if her track record is still pretty good.
On my understanding of reliably produced beliefs, I don't think we are simply entitled to say the recently envatted victim has justified beliefs. I'm looking for an account where the recently envatted victim has justified beliefs, but that's not because the victim's perceptual capacities are "still" reliable while in the vat in the track-record sense that Lyons deploys. Though I agree with Lyons that this move is worth considering, I'm not going to make it myself.
5. GOLDMAN'S REPLIES TO THE COUNTEREXAMPLES
I now turn to the revisions and modifications of simple reliabilism that Goldman has advanced over the years to treat the counterexamples. [End Page 43]
I start with Goldman's reply to Bonjour-type cases. His main reply in print has been to treat them as cases of defeated justification: the individual subject has an alternative reliable belief-forming process available that they could and should have used, such that, had they used it, they would not have formed the target belief, e.g., the belief about the location of the US president (Goldman 1986, 111–12). In his Stanford Encyclopedia entry on process reliabilism he glosses the idea slightly differently as follows:
[I] proposed such a condition in Epistemology and Cognition (1986: 111–112) in the form of a non-undermining (or "anti-defeater") condition. This says that a cognizer, to be justified, must not have reason to believe that her first-order belief isn't reliably caused. This promises to handle the clairvoyance and Truetemp cases very smoothly. Surely Truetemp, like the rest of us, has reason to think that beliefs that come out of the blue—as far as one can tell introspectively—are unreliably caused. Hence he has reason to believe that his spontaneous beliefs about the precise ambient temperature are unreliably caused. So his first-order beliefs about the ambient temperature violate the supplementary condition, and therefore are unjustified.
I find this reply to the Norman case unsatisfactory for three reasons:
(a). A "defeater" or "underminer" need not be present. In the three-year-old Norman case, there is no intuitive basis for saying his belief is defeated or undermined. There is no intuitive basis for saying he should have used an alternative process.
(b). Three-year-old Norman has not developed the higher-level critical reasoning capacities that would constitute the alternative belief-forming process. There is no intuitive basis for saying he has the relevant alternative process. There is no intuitive basis for saying he has the relevant higher-order belief. Indeed, he may lack the required concepts to formulate the belief in the first place.
(c). Goldman's main reply grants prima facie justifiedness and then tries to defeat it. But intuitively the problem is the assignment of prima facie justifiedness; de facto reliability is not sufficient for prima facie justifiedness. Goldman's reply comes in a step too late. (Lyons [2009] has made this point too.)
Goldman once expressed awareness that the defeaters maneuver might not be sufficient as a reply to Norman. In a footnote to "Epistemic Folkways and Scientific Epistemology," he said, "It is not entirely clear, however, how well these qualifications [the machinery from "What Is Justified Belief?" and Epistemology and Cognition] succeeded in the Norman case" (1992, 175).
But this is not Goldman's only reply. In 1979 he imagined the possibility of a benevolent demon making wishful thinking reliable. Intuitively, he said, wishful thinking would not confer justifiedness, even if reliable. He then suggested possible solutions that also work, if they do, for counterexamples to necessity (to "demon-world" cases). So let's treat them as two sides of the same coin: maybe de [End Page 44] facto reliability is neither necessary nor sufficient for prima facie justifiedness, but some other kind of reliability is both necessary and sufficient.
Here are Goldman's suggestions, from 1979 to the present:
(1). 1979: Nonmanipulated environments. In all worlds W, a belief B is prima facie justified in W iff based on a process that is reliable in nonmanipulated (no purposeful arrangement for reliability or unreliability) environments (1979, 17). This rules out benevolent and malevolent demon scenarios.
(2). 1979: Reliability in the actual world. In all worlds W, a belief B is prima facie justified in W iff based on a process that is reliable in the actual world. This rules out wishful thinking (unreliable in the actual world) and clairvoyance (it does not exist in the actual world) and demon-worlds (for perception is reliable in the actual world) (1979; 1999/2002; 2011).
(3). 1979: Processes we believe to be reliable. In all worlds W, a belief B is prima facie justified in W iff based on a process that we believe (in the actual world) to be reliable (in the actual world) (1979, 18). This rules out wishful thinking and clairvoyance (we believe them unreliable) and demon-worlds (we believe perception is reliable).
(4). 1986: Reliability in normal worlds. In all worlds W, a belief B is prima facie justified in W iff based on a process that is reliable in normal worlds. Normal worlds are worlds that share the general characteristics that we believe about the actual world. This rules out wishful thinking and clairvoyance (they are not reliable in normal worlds) and demon-worlds (they are paradigm cases of non-normal worlds) (1986, 107, 113).
(5). 1986: Revisionary denial. "At times I have been tempted to handle demon-world cases by saying that beliefs in that world are not 'really' justified; they are merely 'apparently' justified. If this response were intuitively plausible, reliabilism could dispense with the normal-worlds theory" (Goldman 1986, ch. 5, n. 32). Maybe Norman is 'really' justified but only 'apparently' nonjustified.
(6). 1988: Two concepts solution. Perceptual beliefs in demon-worlds are weakly justified (faultless, blameless) but not strongly justified (de facto reliable). Maybe clairvoyance beliefs are strongly justified but not weakly justified (Goldman 1988). (Maybe "internalists" and "externalists" are talking past each other.)
(7). 1992: Two-stage reliabilism. This is a theory of justification evaluation/attribution. Stage one: the evaluator selects a list of processes that confer justification and a list of processes that do not, based on the believed reliability of those processes. Stage two: the list is then applied rigidly across actual and possible cases. We believe perception is reliable but not clairvoyance, so perception makes the list but not clairvoyance. In the world where perception is not reliable, we still judge it justification conferring. In the world where clairvoyance is reliable, we still do not judge it justification conferring (Goldman 1992; 1999/2002). [End Page 45]
(8). 2002: Back to actual world reliabilism. Many objected to two-stage reliabilism on the grounds that it was a theory of justification attribution, and not a theory of justification. (Compare, a theory of race attribution versus a theory of race.) Goldman replies in 1999/2002: "I have tried to reconstruct the way in which communities and individuals select and deploy their standards of … justification. But, it will be asked, when are beliefs really justified, as opposed to being held justified by this or that community? … A natural response is: a belief is 'really' justified if and only if it results from processes (or methods) that really are reliable, and not merely judged reliable by our present epistemic communities. [Footnote 10 added in 2002:] What I should have said here is that a belief is 'really' justified if (and only if) it meets a correct standard, where a correct standard specifies a process that is genuinely reliable in the actual world. Rigid use of a correct standard would render perceptual beliefs in demon worlds 'really' justified" (Goldman 1999/2002, 49, n. 10). And then in the Stanford Encyclopedia entry: "Departing now from [Epistemology and Cognition's] theory of 'normal worlds,' we can add that the right system of epistemic norms is made right in virtue of facts and regularities obtaining in the actual world. Furthermore, the system that is right in the actual world is right in all possible worlds. In other words, epistemic rightness is rigidified" (Goldman 2012, 82).
Out of these seven proposals [(2) and (8) are the same] we can extract three general strategies:
(A). Explain judgments: take as our goal explaining why people have the intuitions (and make the judgments) they do about these cases (not necessarily to say what the property of justified belief is). This is the idea behind (3) and (7), and is part of the background discussion for (4).
(B). Constrain reliabilism: keep the core doctrine intact that "real" reliability matters for justifiedness, but constrain the reliability to a special kind, or to reliability in a special set of circumstances. This is an attempt to explain what justified belief is in terms of the reliability of the belief-forming process. This is the general idea behind (1), (2), (4), and (8).
(C). Explain away counterintuitions: stick to reliabilism about justifiedness (either simple or a constrained version) and then add machinery to explain away counterintuitions. This is the general idea behind (6), compatible with (5), also compatible with (7).
6. SPECIAL CIRCUMSTANCES RELIABILISM
I shall table (A). Though that's a good project, that's not my project. I am open to (C), but I shall not pursue it here, for after all I am assuming simple reliabilism is fundamentally wrong about Norman and Hannah. I shall focus on (B). That is my [End Page 46] project: discover the right kind of reliability, the kind of reliability that constitutes the property Norman's clairvoyance beliefs lack but Hannah's perceptual beliefs enjoy.
Let's then focus on Goldman's two main versions of (B): normal-worlds reliabilism and actual-world reliabilism. These are versions of "special circumstances" reliabilism. The idea is that in all possible circumstances C, a belief is prima facie justified in C iff based on a process that is reliable in special circumstances. If human perception is reliable in special circumstances, then Hannah's perceptual beliefs are justified. Norman's beliefs aren't justified because his clairvoyant mutation isn't reliable in special circumstances. Formally the idea is straightforward. The hard part is specifying the special circumstances. The special worlds or circumstances for Goldman in 1986 were "normal worlds" and from 2002 to the present the "actual" world or circumstances.
Though Goldman soon abandoned normal-worlds reliabilism, it is worth rehearsing why. Here are three reasons Goldman (1988) raised against normal-worlds reliabilism:
(i). Which general beliefs count for determining normal worlds? There seem to be too many choices.
(ii). Whichever ones we select, it looks like dramatically different worlds might fall in the class of normal worlds. Does justification turn on reliability in all of these worlds? Is any process even a candidate for reliability in all of these worlds?
(iii). Who is the "we"? All humans ever? Does the referent change over time? Does it mean a special subset?
Here is a reason from Pollock and Cruz (1999):
(iv). The theory puts no constraints on how we get our general beliefs. What if the beliefs are unjustified? Should justification turn on crazy or wild beliefs? Do normal worlds involve wizards and witchcraft?
Here is a reason that occurred to me:
(v). What if the beliefs include (hidden, unnoticed) contradictions? Surely everything we've ever believed about the general features of the actual world can't be consistent. Does that mean there are no normal worlds? Does that mean no belief is ever justified?
Here is the "realist" sentiment:
(vi). Should justification turn on our beliefs at all? Why relativize justification to what "we" believe? Isn't that too subjective, too nonrealist, to fall within the spirit of reliabilism? Why should what we believe determine what beliefs are really justified?
And the theory faces a counterexample (from Stew Cohen) presented by Goldman:
(vii). ALIEN. "Finally, even if all of these problems could be resolved, it isn't clear that the normal-worlds approach gets things right. Consider a possible non-normal world W, significantly different [End Page 47] from ours. In W people commonly form beliefs by a process that has a very high truth-ratio in W, but would not have a high truth-ratio in normal worlds. Couldn't the beliefs formed by the process in W count as justified? To be concrete, let the process be that of forming beliefs in accord with feelings of clairvoyance. Such a process presumably does not have a high truth ratio in the actual world; nor would it have a high truth ratio in normal worlds. But suppose W contains clairvoyance waves analogous to sound or light waves. By means of clairvoyance waves people in W accurately detect features of their environments just as we detect features of our environment by light and sound. Surely, the clairvoyance belief-forming processes of people in W can yield justified beliefs" (Goldman 1988, 62).
This kind of clairvoyance is then very different from Norman's, for it functions much like ordinary perception does for us, despite its difference. After offering this case among his other criticisms, Goldman concluded that "it seems wise to abandon the normal-worlds version of reliabilism" (Goldman 1988, 62).
In 2002 Goldman embraced the actual-world reliability theory. In 1979 and 1986 he expressed doubts. (He raised the "what if the actual world is a demon world?" objection.) Not so in 2002, nor in 2011. However, actual-world reliabilism also confronts this very counterexample involving aliens. For the clairvoyance of these alien creatures does not exist in the actual world, and if it were to exist in the actual world (per impossibile), it would not be reliable. ALIEN shows that what is actual does not constrain what is possible. There are possible belief-forming processes that are correct—that render beliefs justified—even if they are not actual, and so are not reliable in the actual world. Shouldn't it seem just as wise to abandon the actual-world reliability theory?
In general, we should not try to avoid intuitive counterexamples involving possible worlds by "actualizing," for it is merely a logical trick. Take any two properties F and G that covary in the actual world. Claim that F strongly supervenes on G, so that they march in step in all possible worlds. Then in the face of possible world counterexamples where they pull apart, argue instead that F strongly supervenes on G-in-the-actual-world. F and being-G-in-the-actual-world then march in step in all possible worlds. Counterexample defused? Take this computer and the tree across the street. What is it to be a computer, or this computer? Well, in all possible worlds this computer has the property of existing at the same time (right now) as that tree in the actual world. Those two properties march in step across all possible worlds. In every world where this computer exists, it has the property of existing at a certain distance at a certain time from this tree in the actual world. Does that temporal and modal property—a property this computer has in all possible worlds—have much of anything to do with what it is to be this computer? Not at all. (This is a theme from Kit Fine. I pursue this issue in Graham 2016a. Cf. Sosa 2001.)
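The structure of the maneuver can be displayed schematically (this is my reconstruction of the reasoning just given, not a formula of Goldman's). The simple covariational thesis says

$$\forall w\,\forall x\,\bigl(Fx \text{ at } w \leftrightarrow Gx \text{ at } w\bigr),$$

and a possible world where F and G come apart refutes it. The actualized replacement says

$$\forall w\,\forall x\,\bigl(Fx \text{ at } w \leftrightarrow Gx \text{ at } @\bigr),$$

where @ names the actual world. A world where G locally fails no longer touches the thesis, since the right-hand side looks only at @. The immunity is purchased by rigidification, not by any constitutive connection between F and G; as the computer-and-tree example shows, world-indexed properties of this kind can march in step with a thing across all possible worlds while telling us nothing about what that thing is.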
I have criticized Goldman's two prominent versions of special circumstances reliabilism. What about Goldman's first suggestion, nonmanipulated environments? It does not work either. We can imagine freak cosmic accidents that result [End Page 48] in clairvoyant powers or massively deceived brain type cases. That's what happens in Norman's case; there is no benevolent manipulation of the environment. And in a freak cosmic accident that creates a brain in a vat type case, there is no malevolent manipulation of the environment.
Furthermore, what's so bad about benevolent intervention as such? What if God exists and is the designer and sustainer of the Universe? Then the reliability of perception in ordinary human circumstances is due to God's benevolent manipulation. Or what if there is a world with a very powerful being who designs and creates a race of creatures with perception and changes their environment so their capacities reliably represent the environment? Then the reliability of perception is the result of purposeful, benevolent intervention, and their reliably true perceptual beliefs are not justified on the theory.
All versions of special circumstances reliabilism face a general challenge, including the version I am about to offer. Special circumstances reliabilism starts off with the idea that reliably forming true beliefs is an epistemic good. That's the idea behind the epistemic goods argument. But the view then switches from de facto reliability (getting closer to the truth, as a matter of fact, by relying on the process) to reliability in special circumstances as the epistemic good in question; that's the theory. But then the advocate of this move needs to explain why reliability in special circumstances is an epistemic good, even when the individual is not in special circumstances. Why is being reliable there (in special circumstances) what makes a process here (outside of special circumstances) justification conferring? Why does justifiedness in all circumstances supervene on reliability in special circumstances? Why is being reliable in special circumstances an epistemic good that persists across all possible circumstances?
Think about Hannah. Her beliefs are not de facto reliable, and so lack that epistemic good, but even so are reliable in special circumstances, and so on the theory have the property we are looking for. But why should being reliable over there in special circumstances (where Hannah isn't) explain why Hannah's beliefs here (in the vat, mostly false) have the property we are looking for? Or think about Norman. According to special circumstances reliabilism, Norman's beliefs are not justified because his clairvoyant mutation is not reliable in special circumstances (whether it be the actual world, normal worlds, nonmanipulated environments, and/or some other specification). Why should Norman's beliefs fall short for lacking reliability in special circumstances, even though his clairvoyant-based beliefs are mostly true? An explanatorily adequate special circumstances reliabilism needs to provide compelling answers to these questions.
7. NORMAL CIRCUMSTANCES RELIABILISM
Let's sum up so far. Taking a fully externalist point of view, we can still agree that there is an epistemic property (an epistemic good) that is present in ordinary [End Page 49] human perception cases and even cases like Hannah's, but absent in cases like Norman's. We've compared these cases and others to four of Goldman's theories:
Simple Reliabilism: In all possible worlds W, a belief is prima facie justified iff based on a process reliable in W.
Normal-Worlds Reliabilism: In all possible worlds W, a belief is prima facie justified iff based on a process reliable in worlds that share the general features we believe to hold of the actual world.
Actual-World Reliabilism: In all possible worlds W, a belief is prima facie justified iff based on a process reliable in the actual world.
Nonmanipulated Environments Reliabilism: In all possible worlds W, a belief is prima facie justified iff based on a process reliable in nonmanipulated environments.
These four refinements of process reliabilism are all versions of special circumstances reliabilism. And all four, I've argued, extensionally fall short.
I believe the refinement of process reliabilism we are looking for adverts to normal circumstances. The right special circumstances are normal circumstances. If this is right, then the epistemic property we are looking for, the property that Hannah's beliefs possess but Norman's do not, is the normal functioning of the belief-forming process, provided the process has forming true beliefs reliably as an etiological function, such that the process is reliable in normal circumstances.
Let's call this position Normal Circumstances Reliabilism:
In all possible circumstances C, a belief is prima facie justified in C iff based on a normally functioning process that has the etiological function of reliably producing true beliefs (so that it is reliable in normal conditions when functioning normally).
This refinement of process reliabilism, I shall argue, is extensionally adequate. To telegraph: since Hannah's perception has the etiological function of reliably inducing true beliefs, it is reliable in normal conditions. Hence it is reliable in normal conditions when functioning normally. In the vat, her perceptual processes continue to function normally. If justified belief consists in normal functioning, provided the belief-forming process is reliable in normal conditions, then her normally formed beliefs in the vat continue to be justified. Norman's clairvoyance has no etiological function, and so nothing counts as normal functioning or normal conditions for his clairvoyance. Besides forming mostly true beliefs, there's no other epistemic good his beliefs enjoy. That's why Norman's beliefs fall short.
In the sections following I will elaborate on all of this: the idea of an etiological function; the interconnections between etiological functions, normal functioning and normal circumstances; and why justified belief should be partly constituted by reliability in normal conditions. (Cf. Graham 2011a, 2011b, 2011c, 2012, 2014a, 2014b, 2016, and forthcoming.) [End Page 50]
8. ETIOLOGICAL FUNCTIONS
The function of a thing is what it is for, its purpose. The heart is supposed to pump blood; that's its function; that's what it's for. Functions in this sense are effects. By beating, the heart causes the circulation of blood. But not every effect (even highly regular effects) is a function in this sense. Your heart regularly and reliably makes a rhythmic noise, but making noise is not a function of your heart; that is not what it is for. Your nose regularly and reliably holds up nose rings, but that is not what the nose is for. There are functional effects that explain why something exists, and then there are nonfunctional, "accidental" side effects that do not.
Larry Wright (1973) argued that this distinction strongly supports an etiological condition on functions, where functions are consequences that explain why the item exists. Here is Wright's analysis:
A function of X is Z if and only if:
(1). X does Z (Z is a consequence [result] of X's being there, i.e. X's are disposed, do, or can do Z).
(2). X is there because it does Z (that X's are disposed, do, or can do Z explains why X is there).
Wright's condition (2) then says that for any function, there must be some feedback mechanism that takes the satisfaction of (1) as input and generates existence or continued existence as output. Functions thus arise from consequence etiologies, etiologies that explain why something exists or continues to exist in terms of its consequences, because of a feedback mechanism that takes consequences as input and causes or sustains the item as output. Functions are then explanatory features or effects.
Nonfunctional features or effects are nonexplanatory features or effects, and so in that sense "accidental," even if nonaccidentally regular. By beating regularly, hearts pump blood, and we have hearts because they pump blood. Though by beating regularly hearts make noise, we do not have hearts because they make noise.
I have come to believe that we should include a benefit or welfare condition. Functions are not just explanatory features or effects. Functions are means to some good or benefit of the containing system, where goods or benefits are understood very broadly. In order for Z to be a function of X, doing Z must do the system of which it is a part some good, and this good must be relevant to the feedback mechanism that explains why X exists in the system. Functions arise through a feedback mechanism that involves explanatorily beneficial effects. Don't ask why I've come to believe this. That's a topic for another occasion (cf. Graham 2014b; McLaughlin 2001).
We arrive at the following abbreviated analysis of natural functions:
A function of X in S is Z iff:
(1). X does Z in S.
(2). Z benefits S. [End Page 51]
(3). X exists in S because Z benefits S (X is the product of a feedback mechanism involving the beneficial character of Z to S).
This analysis as stated is entirely neutral on possible feedback mechanisms. Natural selection is often offered as the central mechanism. However, natural selection is not the only feedback mechanism generating etiological functions. Learning is another. You try something, get some feedback, and either keep doing it or try something else. And there are other feedback mechanisms. Indeed, there is a growing literature on system maintenance as a feedback mechanism that generates etiological function attributions that does not involve reproduction, and so does not involve natural selection (e.g., McLaughlin 2001; Mossio, Saborido, Moreno 2009). Natural selection is not necessary for etiological functions.
9. FUNCTIONS, NORMAL FUNCTIONING, NORMAL CONDITIONS
The etiological account of functions entails an account of normal functioning and normal conditions. Functions arise when an item produces a beneficial effect that in turn enters into a feedback mechanism, where the mechanism explains why the item persists or reoccurs because of the beneficial effect. The full explanation for why and how all of this happened will cite how the item worked or operated so as to produce that effect and the circumstances—both internal or "inside" and external or "outside" the individual or organism.
What counts as normal functioning and normal circumstances then falls out of the historical explanation. Normal functioning is the way the item worked or operated when it underwent feedback for its beneficial effect; normal working just is working that way. Normal conditions are those circumstances (and circumstances of relevantly similar kind) where all of this happened. Look at the item's history, at the beneficial effects that help explain why it persists and recurs, at how it worked to produce these effects, and where it all happened. Voila, normal functioning and normal conditions (Millikan 1984; Graham 2012).
For example, a muscle in an organism's chest pumps blood by beating regularly. In turn it is connected in a systematic way with other parts of the organism, embedded in a certain type or kind of environment. If pumping blood explains, in part, why the muscle recurs through benefiting the kind or the individual, then it comes to have pumping blood as a function. The way the muscle worked when it entered the feedback mechanism for pumping blood equals normal functioning. Normal conditions are then those circumstances (and circumstances of similar type) where all of this occurred.
Given the way normal functioning and normal conditions are determined, normal functioning and normal conditions are then constitutively, explanatorily interrelated with function fulfillment. Normal functioning, normal conditions, and function fulfillment are all holistically interrelated. In particular, normal functioning [End Page 52] is individuated and explanatorily understood in terms of the function of the item, for normal functioning just is operating or working the way the item operated in normal conditions so as to produce the functional effect. Normal functioning is then constitutively associated with function fulfillment.
Normal functioning constitutively "aims" at, contributes to, and conduces to function fulfillment. For normal functioning is nonaccidentally and explanatorily understood in terms of the function (and so the "aim") of the item. By functioning normally, the item nonaccidentally and constitutively fulfills its function (and so achieves its "aim"). By functioning normally, it nonaccidentally and constitutively contributes to function fulfillment; normal circumstances contribute the rest. And by functioning normally in normal conditions, it nonaccidentally and constitutively conduces to function fulfillment.
Though holistically interrelated, normal functioning and function fulfillment are token distinguishable; on particular occasions you can have one without the other. Consider a world-famous surgeon who needs to remove your heart during a very complicated surgery to cure a disease in the middle of your chest. She may place your heart in a sterile dish and stimulate it with electrical wires so that it beats normally—it operates exactly the way it should—but no blood is passing through. Your heart then functions normally (it's in perfect shape), though it doesn't fulfill its function. And so on occasion a normally functioning heart may fail to fulfill its function, for it's not in normal conditions.
10. FUNCTIONAL AND EPISTEMIC GOODS
Let us say that it is good for a functional item to fulfill its function; after all, that is what the functional item is supposed to do. Let us also say that it is good for a functional item to function normally; after all, that is the way the item is supposed to work or operate, the way the item contributes to fulfilling its function. And let us further say that it is good for a functional item to fulfill its function (partly) because it is functioning normally. And so for any item with a function, we have three functional goods. And for any item with an etiological function, we have three constitutively related goods. In particular, normal functioning is a good constitutively understood in terms of function fulfillment, as just explained.
Assume that human perception, and Hannah's in particular, has the etiological function of reliably inducing true beliefs. Then we can identify three functional goods that are also epistemic goods, for these three functional goods are understood in terms of inducing true belief and avoiding error:
(1). Function fulfillment: reliably inducing true beliefs.
(2). Normal functioning: working in such a way that, in normal conditions, it reliably leads to true beliefs.
(3). Reliably inducing true beliefs because functioning normally. [End Page 53]
Hannah's beliefs before envatment enjoy all three epistemic goods. But after envatment only the second. Call that property "justifiedness." Her beliefs are correct or proper—in the right—vis-à-vis the aim of forming true beliefs and avoiding error, for in functioning normally, she is forming beliefs with that aim—that function—the way she should. (Sosa aficionados will note the parallel with accuracy, adroitness, and aptness.)
Assume that Norman's clairvoyance does not have the etiological function of reliably inducing true beliefs (no feedback, just a mutation). Then even though his beliefs so formed are mostly true, there are no functional goods, and so no functional epistemic goods, that his beliefs enjoy. That's why when we imagined it broken, unreliable, or in a vat, there was nothing to be said in favor of his clairvoyance beliefs. Because his clairvoyance has no etiological function, nothing counts as functioning normally, and his clairvoyance, even though de facto reliable, is not reliable in normal circumstances, for nothing counts as normal circumstances.
The alien race with clairvoyance that Cohen imagined (that Goldman granted and that Lyons makes a big splash about in his book), on the other hand, easily fits the present idea. They have their capacities due to feedback; their clairvoyance makes a difference in their lives. Their clairvoyance, like human perception, has the etiological function of inducing true beliefs, even if the aliens are merely possible and not actual.
We thus arrive at normal circumstances reliabilism. Here it is again:
In all possible circumstances C, a belief is prima facie justified in C iff it is based on a normally functioning process that has the etiological function of reliably producing true beliefs (so that the process is reliable in normal conditions when functioning normally).
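As a rough schematic rendering of this definition (the notation and predicate labels are mine, offered only as a gloss, not as an official formulation), let Based(b, p, C) say that belief b is based on process p in circumstances C, NormFn(p, C) say that p is functioning normally in C, and EtFn(p, R) say that p has reliably producing true beliefs as an etiological function:

\[
\forall C \, \big[ \, \mathrm{PFJ}(b, C) \;\leftrightarrow\; \exists p \, \big( \mathrm{Based}(b, p, C) \wedge \mathrm{NormFn}(p, C) \wedge \mathrm{EtFn}(p, R) \big) \, \big]
\]

The point to notice is that the circumstance variable C attaches only to basing and to normal functioning; the reliability packed into EtFn(p, R) is indexed to the process's normal conditions, not to C. That is how Hannah's envatted beliefs can satisfy the right-hand side even though her processes are unreliable in the vat.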
No consciousness required. No phenomenology. No seemings or seemings-that. No reasons, arguments, justifications, self-awareness, or critical reason. A "fully" externalist epistemology that "captures" clairvoyance and brain-in-a-vat cases. Not bad.
And this, I believe, is where the momentum of Goldman's various refinements and modifications should lead him. Our actual environments are normal environments; the very idea of "normal worlds" can be construed as an attempt to give a theory of normal environments (if not a satisfying theory in the end); and the theory of normal circumstances provides a baseline to judge whether an environment was "manipulated" or not. It also fits in with a broadly naturalist frame of mind, something Goldman would also endorse. The best version of special circumstances process reliabilism is normal circumstances reliabilism, itself a version of proper function reliabilism.
11. SHOULD PROPER FUNCTION RELIABILISTS WORRY ABOUT SWAMPMAN?
But what about Swampman?
A bolt of lightning hits a log in the swamp in Florida and lo and behold a cosmic accident of gargantuan proportions occurs: a molecule for molecule duplicate [End Page 54] of Donald Davidson appears: Swampman. Swampman bears no causal, historical, or explanatory relation to anything. He (it?) is a cosmic miracle. Having almost zero history (he just popped into existence a nanosecond ago), nothing about Swampman has entered a feedback mechanism of any kind. No matter how short the time period required for a feedback mechanism to kick in and do the work an etiological theory of functions requires, nothing about Swampman has an etiological function. Hence his cognitive capacities—if he even has any—have no etiological function, let alone the etiological function of representing reliably. Hence the normal functioning I've been going on about as the property externalist reliabilists should be looking for isn't a property of any "structure" within Swampman's brain. Swampman's beliefs at creation, if he has any, aren't justified on normal circumstances reliabilism.
But within moments, many will say, he's reliably forming true beliefs. Why can't he have justified beliefs? Can't, and shouldn't, an "externalist" allow for that? Indeed, this is just how many reliabilists have argued: Goldman, Sosa, Goldberg, Lyons. So what about Swampman?
I must confess I wasn't prepared for Swampman's popularity when I started working on this project about ten years ago. That's because I came to epistemology from the theory of mental representation. Though Swampman had a run in the mental representation literature, externalism about mental representation won the day. To have any mental states that represented external properties and relations (and not just particular, individual objects, kinds, and substances) like red, square, distant, and so on, individuals had to stand in actual causal (not merely counterfactual-supporting) explanatory relations with at least some instances of those properties and relations. So Swampman, having no explanatory connection of any kind with a broader environment, would lack mental representations of external properties and relations (and a lot more besides). Or at least that is what I thought I learned from the theory of mental representation.
Here's Davidson:
Suppose lightning strikes a dead tree in a swamp; I am standing nearby. My body is reduced to its elements, while entirely by coincidence (and out of different molecules) the tree is turned into my physical replica. My replica, The Swampman … moves into my house and seems to write articles on radical interpretation. No one can tell the difference. But there is a difference. My replica can't recognize my friends; it can't recognize anything, since it never cognized anything in the first place … I don't see how my replica can be said to mean anything by the sounds it makes, or to have any thoughts.
Here's Ruth Millikan:
Suppose that by some cosmic accident a collection of molecules formerly in random motion were to coalesce to form your exact physical double … that being would have no ideas, no beliefs, no intentions, no aspirations, no fears, and no hopes … This is because the … history of the being would be wrong … To the utterances of that being, Quine's [End Page 55] theory of the indeterminacy of translation would apply—and with a vengeance never envisioned by Quine.
Here's Fred Dretske:
A favored way of dramatizing this is by invoking Swampman, a creature imagined by Donald Davidson (1987), who materializes by chance when a bolt of lightning strikes a decaying log in a swamp. Miraculously, Swampman is, molecule for molecule, the same as Davidson. Since Swampman has no significant history—certainly none of the kind that would (according to historical accounts of the mind) give him thoughts and (maybe even) experiences—he lacks these mental states even though he is physically and (therefore) behaviorally indistinguishable from Davidson.
I worked through that literature from the 1980s and 1990s and almost started a dissertation project on the theory of mental representation before turning to epistemology full time. My first essays even brought externalism about mind to bear on epistemology (cp. Burge 2003). You can imagine my surprise when I started giving talks on my research and everyone's favorite question seemed to be: "So, what about Swampman?" To me it was sort of like being asked, "So, what about Malebranche?" when giving talks on mind-body causation.
One strategy to reply to the Swampman objection would involve shifting the burden of proof: if you are going to rely on an intuition discredited elsewhere in philosophy, then you need to show why the consensus in that other area is mistaken; you can't simply rely on the consensus of your colleagues in your subdiscipline, where your subdiscipline (in general) has not investigated the issue.
Alas, that strategy is unlikely to move minds. Appeals to the philosophy of mind have fallen largely on deaf ears in a good deal of epistemology. (Even Sandy Goldberg, an anti-individualist in the philosophy of mind, helps himself to Swampman in epistemology [Goldberg 2012].) I need to try something else.
And so for the sake of argument I will assume that Swampman can have representations and beliefs and all that. Should a normal circumstances proper function reliabilist like myself worry about Swampman, granting Swampman's possibility? I'll briefly pursue two lines. (For other replies, see Graham 2012, 2014b.)
11.1 historical reliabilism
First, I'll argue from epistemology. Goldman is a historical reliabilist (and so too are many other reliabilists). The Swampman objection also applies to the historical reliabilist. So if we assume at least that historical reliabilism is true (that we have good reasons qua epistemologists to think that historical reliabilism is true), then we have good grounds to discount Swampman intuitions about justified belief.
So what is historical reliabilism? Here is Goldman from his Stanford Encyclopedia entry describing his view in the third person: [End Page 56]
[Goldman's] process reliabilism … is a "historical" theory. A reliable inference process confers justification to an output belief, for example, only if its input beliefs were themselves justified. How could their justifiedness have arisen? By having been caused by earlier reliable processes. This chain must ultimately terminate in reliable processes having only non-doxastic inputs, such as perceptual inputs. Thus, justifiedness is often a matter of a history of personal cognitive processes. This historical nature of justifiedness implied by process reliabilism contrasts sharply with traditional theories like foundationalism and coherentism, which are "current time-slice" theories. But Goldman welcomed this implication. The traditional notion that justifiedness arises exclusively from one's momentary mental states has always been problematic.
Supposing Swampman at inception has the very same beliefs Davidson does at that very same time, does Swampman, according to historical reliabilism, have all the same justified beliefs that Davidson does?
First, Swampman's "memory" is riddled with errors. Swampman's memory "faculty" (if he has one) may be just as reliable at preserving memories as Davidson's, but since Swampman was just created, Swampman's "memory" beliefs are all false. From a historical reliabilist point of view, his memory beliefs are, at best, conditionally justified, and, at worst, not justified at all.
Second, even that point about memory isn't true, because none of his "memory" beliefs were produced by the normal operation of memory. He just popped into existence, after all, so there has not been enough time for his beliefs to be produced (or sustained) by the normal operation of memory. Memory, as a psychological capacity, is not involved in their origin.
Third, the very same point applies to any beliefs apparently based on reasoning. Davidson had tons of justified beliefs based on inductive and deductive reasoning. The "duplicate" beliefs in Swampman are not the result of any reasoning at all. They just popped into existence. So they are not even conditionally justified. Again, there has not been enough time for those beliefs to have been produced by the normal operation of reasoning.
Fourth, the very same point also applies to Swampman's "perceptual" beliefs at inception. At inception, he has perceptual beliefs that "duplicate" (in a sense) Davidson's, but at inception those beliefs are not caused by the normal operation of perception. Those beliefs, at inception, are not caused by any reliable belief-forming process. They were caused by a cosmic accident.
So for Swampman to have any justified beliefs according to historical reliabilism, time has to pass. Swampman at inception does not have justified beliefs. Hence simply being a molecule for molecule duplicate of an ordinary human being, in an ordinary human environment, where certain psychological processes over time reliably produce true beliefs, is not enough for prima facie justification, at least according to historical reliabilism.
So if Swampman at inception is a counterexample to proper function reliabilism, Swampman is also a counterexample to historical reliabilism. But thinking [End Page 57] about causal bases as a precondition for justified belief, and about the role of memory and reasoning, it seems clear that there is a very strong case for a historical requirement on justified belief. The case for historical reliabilism thus counts as evidence against a Swampman intuition. So if we take historical reliabilism to be a well-motivated constraint on a complete process reliabilist theory of justified belief, then we have motivated grounds for discounting a Swampman intuition. There is thus good reason to believe that the Swampman intuition, within the subdiscipline of epistemology, has been defeated.
11.2 avoiding projection
Second, I want to take a page out of Lyons's playbook. Remember the mere demon-worlders? They were creatures characterized by experience-belief functions: given this experience, this belief results. Lyons argued that we couldn't just assume that any old pairing was epistemically correct, such that that belief was the right "justified" response to that experience. Lyons claimed that we project from our own case: if the demon-worlder responds the way we do (forms the belief we would) in response to a particular experience, then the belief is justified, otherwise not. He went on to argue that our projection is unprincipled. We need to assume neutrality on the species, evolutionary history, learning history, innateness, environment, cognitive architecture, and any other form of "anchoring" of the demon-worlder. And once we did that, we had no basis to say whether the experience-belief pairing was correct or incorrect. Experience-belief pairings on their own are too weak to answer the epistemological question of whether the pairing is the right or the wrong one.
I think we should say the same thing about a Swampman case. To avoid projecting our own case onto Swampman—which is so very easy to do because he looks, talks, bleeds, and so on, just "like" a real human being—we need to imagine a "swamp" case that doesn't involve a human being: take some made-up creature and imagine a "swamp" version of that creature.
Imagine Barry. Barry has representational and belief-forming structures in its "brain." Barry just popped into existence when a bolt of lightning hit some dead organic matter on some planet in some possible world. Barry's representations and beliefs bear no explanatory or causal relationship to any entities anywhere in the Universe. Even so (I am granting for the sake of argument) they represent properties, relations, objects, and kinds in the environment Barry happens to be in—the Barry-world. Barry turns its "head" from side to side and forms beliefs about the environment; Barry, we will imagine, has existed long enough to form beliefs based on reliable belief-forming processes. These beliefs are all true; the structures in its "brain" reliably induce true beliefs.
Keep in mind the resolute externalism. Barry is blindsighted, has no conscious experiences, no seemings, or anything like that. Barry lacks higher-order abilities, introspection, and critical reason. Even so, Barry has all the things the simple reliabilist says are enough for justified beliefs. Even on Lyons's "inferential reliabilism" [End Page 58] Barry is covered. Remember I am arguing here with the reliabilist who rolls out Swampman cases, not the internalist (the internalist got off the bus a long time ago). So, what do you think, does Barry have justified beliefs?
First question about Barry: What distinguishes Barry from Norman? Norman's clairvoyance beliefs were all reliably true. But that's it. As it stands right now, in the example as described, Barry's beliefs only have that property, being reliably true. If that's not enough in Norman's case, then that's not enough in Barry's case. Norman's clairvoyance wasn't based on or sustained by any feedback. Barry, qua swamp creature, in effect has a full-bodied "clairvoyant" power. What, if anything, distinguishes Barry from Norman? Why say Barry's beliefs are justified but Norman's are not? Feedback? You can't say that, for that goes beyond simple reliabilism, and it is not a part of the picture just yet. And don't forget, feedback takes time, which is just the problem that led to Swampman in the first place.
Second question about Barry: What if Barry is a molecule for molecule duplicate of Terry, where Terry is a diseased, horribly malformed member of its species, where in its world (the Terry-world), Terry is very clearly supposed to form its beliefs differently? Should we say that Barry, as a molecule for molecule duplicate of Terry, is forming beliefs as it should, even though Terry is not? And what if, by some cosmic accident, Terry is reliably forming true beliefs, even though Terry is clearly malfunctioning even so? What is the basis for saying Barry is forming beliefs as it should, whereas Terry is not, even though they are exact physical duplicates, both reliably forming true beliefs? Why not say that Barry's beliefs are not justified, for Barry is a "swamp" double of Terry? Indeed, the Barry-world might just as well be the Terry-world. Why not say, because Barry is a double of Terry, that Barry's beliefs are not justified? (cf. Bergmann 2004; Boyce and Moon 2016).
Third question about Barry: What if Barry, instead of popping into existence in a world where it will soon reliably form true beliefs, pops into existence in a deceiving-vat scenario, so that the beliefs it forms will be false? Barry, a cosmic accident, by accident starts forming all false beliefs. Are they justified? Why or why not? Remember we are looking for a property that, on a reliabilist theory, an individual can still have while in a vat. What property does Barry have in the vat? Why isn't Barry in the vat just like Norman in the vat, discussed earlier?
Barry's belief-forming processes are, at best, reliable processes in an environment. But the Norman case tells us that that is not sufficient for justifiedness. And the Hannah case tells us that that is not necessary for justifiedness. Without saying more about Barry, there is no basis for saying whether its beliefs are justified or not. Just as reliabilists don't have to worry about demon-worlders like Larry, as Lyons has persuasively argued, proper function reliabilists shouldn't have to worry about swamp creatures like Barry either.
I agree with Lyons that process reliabilists need anchors (and that we need to abandon our tendency to just project). As I've argued, the best anchor involves feedback mechanisms that generate etiological functions, so that a justified belief is a belief based on a normally functioning belief-forming process that has forming [End Page 59] true beliefs reliably as an etiological function, so that the process is reliable in normal circumstances when functioning normally. In other words, the best refinement for process reliabilism is normal circumstances reliabilism, itself a version of proper function reliabilism.
ACKNOWLEDGMENTS
I presented earlier versions of this paper to a meeting of the Southern California Epistemology Network at UCLA in 2006 and to a meeting of the Work-In-Progress group at the Claremont Colleges. I remember beneficial feedback from Mikkel Gerken, Paul Hurley, Chris Kelp, Peter Kung, Nikolaj Pedersen, and Julie Tannenbaum. Most recently I presented this paper at the conference in honor of Alvin Goldman at the College of William and Mary in 2016 arranged by Chris Tucker and Jack Lyons. I remember beneficial feedback from Paul Davies, Richard Fumerton, Hilary Kornblith, Alvin Goldman, Jack Lyons, and Jennifer Nagel. Thanks also to Jack Lyons for comments that led to improvements on the penultimate draft. Alvin Goldman's influence on my intellectual trajectory has been substantial. I am grateful for his support and encouragement throughout the years.