
Can Consciousness Extend?
The extended mind thesis prompted philosophers to think about the different shapes our minds can take as they reach beyond our brains and stretch into new technologies. Some of us rely heavily on the environment to scaffold our cognition, reorganizing our homes into rich cognitive niches, for example, or using our smartphones as Swiss Army knives for cognition. But the thesis also prompts us to think about other varieties of minds and the unique forms they take. What are we to make of the exotic distributed nervous systems we see in octopuses, for example, or the complex collectives of bees? In this paper, I will argue for a robust version of the extended mind thesis that includes the possibility of extended consciousness. This thesis will open up new ways of understanding the different forms that conscious minds can take, whether human or nonhuman. The thesis will also challenge the popular belief that consciousness exists exclusively in the brain. Furthermore, despite the attention that the extended mind thesis has received, relatively little has been written about the possibility of extended consciousness. A number of prominent defenders of the extended mind thesis have even called the idea of extended consciousness implausible. I will argue, however, that extended consciousness is a viable theory and that it follows from the same ‘parity argument’ that Clark and Chalmers (1998) first advanced to support the extended mind thesis. What is more, it may even provide us with a valuable paradigm for understanding some otherwise puzzling behaviors in certain neurologically abnormal patients as well as in some nonhuman animals.
1. INTRODUCTION
Until relatively recently, most philosophers agreed that mental states existed only in the brain (even if they disagreed about mental content). It caused a stir when Andy Clark and David Chalmers (1998) gave reason to doubt that belief, arguing instead that our mental states are sometimes partially constituted by the devices we use. But, despite the many recent debates around the extended mind thesis, philosophers have continued to take solace in the thought that consciousness, at least, is still in the head. Consciousness is, after all, a neurobiological phenomenon. Of course, some argue that in the future we might be able to build minds with artificial consciousness, but rest assured, at least as matters stand, our consciousness is entirely brain based. And so too with other nonhuman animals that enjoy consciousness—their subjective experiences are also constituted by their brains. But it turns out this belief may not be so secure. I will show that Clark and Chalmers’s argument for the extended mind thesis generalizes to arguments for extended consciousness. As such, it casts into doubt the consensus view that consciousness exists only in the brain.
Advocates of the extended mind thesis, including Clark and Chalmers (1998), usually draw a line between the conscious and nonconscious, maintaining that while nonconscious mental states and processes can ‘extend’ beyond one’s biological brain and body, conscious states cannot. Extended consciousness, they argue, is implausible. I will argue, however, that once certain functionalist premises are in place, the possibility of extended consciousness seems to follow from the extended mind.1 Clark and Chalmers have resisted this by offering a number of objections to extended consciousness, but I will argue these are unsuccessful. My other aim in this paper, however, will be to show that they need not fear a looming reductio of the extended mind thesis, as extended consciousness is not as implausible as it initially seems. Rather, extended consciousness is a promising theory that has some chance of being true in both humans and nonhuman animals. I’ll end by considering some possible objections to my arguments.
2. THE EXTENDED MIND (EM) AND VEHICLE EXTERNALISM
To start, it will be helpful to provide a brief background on the extended mind thesis and to clarify some key terms. The extended mind thesis, henceforth ‘EM’, maintains that mental states and processes are sometimes partially realized by tools or artifacts outside one’s brain or body.2 To defend this thesis, Clark and Chalmers (1998, 8) argue that we should treat functionally equivalent processes with “the parity they deserve” irrespective of whether they are internal or external to the skull. Armed with this functionalist parity principle, they then give an example involving two people: Otto, who frequently carries a notebook for storing and retrieving important information (e.g. addresses, directions, phone numbers) that helps him structure his life; and Inga, who stores similar information in her internal (biological) memory. Despite many minor differences in where and how they store their personal information (e.g. Otto writes it down, whereas Inga doesn’t), Clark and Chalmers insist that the functional role the information plays in the two cases is analogous, or in other words, “the essential causal dynamics of the two cases mirror each other precisely” (1998, 12). To characterize these causal dynamics, the two authors focus on three ‘conditions’ that both Otto and Inga meet: (1) the information they access is a constant in their lives (it is there whenever they need it); (2) it is immediately available to guide their behavior; and (3) they rely on it; that is, they readily endorse the information without questioning its veracity. The example is meant to show how information-bearing structures outside of the head, such as the writing in Otto’s notebook, can serve as the vehicle of mental content in a way that is functionally equivalent to the information-bearing structures in the brain.
According to the parity principle, both the external and the internal information should therefore count equally as part of the constitutive machinery of Otto and Inga’s minds, respectively (Clark and Chalmers 1998, 12).
EM is a version of what is referred to in the literature as vehicle externalism insofar as it maintains that the vehicles of our mental representations can sometimes exist outside the biological brain and body. A representational vehicle is something that represents something else (a picture of Queen Elizabeth II, for example), whereas representational content is that which is represented (the Queen herself, in this case). Any account of the mind that commits to the representational theory of mind has to take some position on where the vehicles of those representations are (or could be) located. Vehicle externalism is to be contrasted with vehicle internalism, which maintains that all the vehicles of our mental representations are instantiated exclusively by the neurological states and processes of the brain. Vehicle internalism is the traditional view in cognitive science, regarding both conscious and nonconscious states and processes.
3. EXTENDED CONSCIOUSNESS (EC)
Let us now turn to the question of consciousness. In principle, the parity argument, as it has come to be known, could just as well support the possibility that a wide range of mental state types can extend, from nonconscious beliefs and desires to conscious ones. But Clark and Chalmers (1998, 10) have long maintained that it is “far from plausible” that conscious mental states (or processes) extend at all. They have offered several reasons for the apparent implausibility of extended consciousness, henceforth ‘EC’, which I’ll turn to in the next section. But first let us define EC. Like EM more generally, EC refers to the vehicles of mental states and processes and not to their content, the only difference being that EC specifically focuses on the vehicles of phenomenally conscious states and processes. According to EC, then, conscious states and processes can be constituted by vehicles located partially outside the brain. EC might sound like a form of naïve realism, a view which maintains (broadly) that we experience the world directly and that phenomenal contents are individuated in terms of the objects, properties, or perspectival factors that a perceptual experience directly involves (see Brewer 2011; Campbell 2009). The crucial difference, however, is that EC is representational, whereas naïve realism is decidedly nonrepresentational (Campbell 2009; Locatelli and Wilson 2017). According to EC, like other indirect realist views, our conscious experience is not of the world directly, but rather of an internal representation of the world. This internal representation has physically instantiated vehicles, which EC maintains can consist of factors that lie beyond the neurobiological processes of the brain (contra vehicle internalism).
Here it’s important to clarify what EC entails. Chalmers (2008, xiv) states that if an agent’s consciousness is extended by some device, then that agent can have a different phenomenally conscious state than his brain-duplicate who does not have the same device. This twin test, as Clark (2009a) calls it, lends itself to multiple interpretations, however. Let us imagine that Otto uses a tool to extend his conscious experiences, while twin-Otto does not. On the first interpretation, in order for the EC hypothesis to be vindicated, the use of the tool would have to give rise to a difference in phenomenal experiences even if the subjects’ brains go through an identical series of states. This would certainly seal the case for EC, but perhaps it isn’t necessary. On the second interpretation, EC might be vindicated even if it were never the case that two individuals with identical brain states differed in phenomenal experience, specifically if there are certain kinds of brain states that depend in some suitable way on the use of tools. In this instance, it may even be that Otto’s brain could not go through the series of states that it does, bringing about the conscious experiences that he has, without that tool. In both cases, the factor that makes the difference is Otto’s tool. But, while the first scenario would provide the clearest evidence for EC, it also seems less plausible, as a matter of contingent fact. In the second scenario (as we will see when we return to this issue in §5), it is contested whether the tool is really partially constitutive of Otto’s consciousness, or if it is merely contributing causally (in an instrumental way) to his conscious experience, with the brain remaining as the sole constitutive machinery.3 For now, I simply want to note the presence of these distinct interpretations.
4. DIFFERENCES IN BANDWIDTH
Let us now return to the parity argument and consider if there are compelling reasons for dismissing EC. I’ll begin with an early objection to EC that comes from Chalmers (2008). Chalmers asserts that while in principle it is possible that consciousness extends, “it is unlikely that any everyday process akin to Otto’s interaction with his notebook will yield extended consciousness, at least in our world” (2008, xiv). Here he explains why:
Perhaps part of the reason is that the physical basis of consciousness requires direct access to information on an extremely high bandwidth. Perhaps some future extended system, with high-bandwidth sensitivity to environmental information, might be able to do the job. But our low-bandwidth conscious connection to the environment seems to have the wrong form as it stands.
(Chalmers 2008, xiv–xv)
Although he does not elaborate on this, the main objection to EC here seems related to a difference in bandwidth—that is, the rate of information flow—between our brains and the environment. In a later paper surveying some of the possible arguments for EC (especially sensorimotor and dynamic entanglement arguments, which we will turn to later), Clark adopts and develops Chalmers’s objection, ultimately concluding that arguments for EM do not generalize to arguments for EC:
Perhaps conscious awareness is special among cognitive functions in so far as it requires (in us humans at least) certain information-accessing and information-integrating operations whose temporal scale makes neural (brain/CNS) processes (just as a matter of contingent fact, in us humans) the only adequate ‘vehicle’.
(Clark 2009a, 983)
Clark mentions both information access and information integration here, but he spends more time developing the first concern. Degree of information access is a feature that is closely connected to the concept of bandwidth. Clark explains, for example, that the nonneural body is slower at transferring information between the world and the brain, acting as a “low-pass filter,” slowing down the flow of information that passes through (Clark 2009a, 985). Here he is also concerned with the speed of information access. Now, one might wonder why this low-pass filter doesn’t also prevent nonconscious states (like Otto’s extended belief) and processes from extending. Clark argues that “long-term informational poise” is an important determining factor for nonconscious states and processes, and this poise is unaffected by the body’s low-pass filter; whereas consciousness requires “online informational access and integration”; hence, the body’s low-pass filter blocks the possibility of EC (2009a, 983). Clark, for this reason, does not believe that arguments for EM can generalize to EC.
In earlier work (Vold 2015), I argued against this appeal to bandwidth as an objection to EC, citing the fact that vision is an extremely high-bandwidth process that is in fact reduced by processes in the brain. Information about the surfaces of objects, for example, is transferred via electromagnetic radiation to the eye, at which point there is a drastic reduction in information flow (in this sense, Clark is right about the body acting as a low-pass filter). But this information flow is further reduced as it is subsequently transmitted from the eye to the brain. And yet, this reduction in bandwidth apparently does not impede integration, as perceptual content plays an important role both in our phenomenal conscious experiences and in our real-time capacities to interact with and navigate through the world. Hence, it is not true that perception has a lower bandwidth than the information-integrating operations of the brain; rather, perception transfers an enormous amount of information to the brain very quickly, and the brain slows down this information processing. This suggests that Chalmers and Clark’s appeal to properties like bandwidth and information access cannot serve to distinguish neural from extraneural processing. Chalmers (2019) has agreed with these concerns and offered a new objection to EC.
5. A TWO-PRONGED OBJECTION
Chalmers (2019, 19) argues that EC is rendered “impossible” due to the conjunction of two key claims. First, he claims that consciousness requires correlates that support “direct availability for global control” (2019, 19). Here he shifts the focus from bandwidth to “direct availability,” the other feature he had previously mentioned (in Chalmers 2008). And, notably, this is also a feature of consciousness that Chalmers and others have previously argued for (e.g. Chalmers 1996). Explaining what this feature amounts to, Chalmers writes, “for example, Tye (1995) suggests that consciousness requires representational states that are ‘poised’ for control of reasoning and action” (2019, 19). I will say more about this claim below, but for now just note that this claim alone is not sufficient for preventing EC. Consider the possible case that Chalmers describes in 2008 in which one’s “neural correlates of consciousness are replaced by a module on one’s belt” (xiv). In this case the neural correlates on the belt, it’s assumed, would be just as functionally integrated as the ones in the brain that they’re replacing. Hence, in this imaginary scenario, there is no reason to think that the neural correlate replacements would be unable to support direct availability for global control (or, in turn, informational integration) simply because they are located on one’s belt rather than in one’s head. A change of location, in itself, does not seem to impede function. A case like this would satisfy even the stronger interpretation of EC mentioned in §3; that is, the use of the module would make the phenomenal experience differ between brain duplicates (Otto and twin-Otto, let’s say) even if the subjects’ brains go through an identical series of states. Chalmers (2019) agrees that this direct availability claim alone does not block EC in these kinds of extended circuitry cases. So the second claim in his objection aims to do just that.
Chalmers’s second move is to reject the idea that extended circuitry cases should actually count as examples of cognitive extension. He instead defends a new criterion for genuine cases of extension, namely, the involvement of perception and action. His suggestion for this new criterion comes in response to an objection raised by Farkas (2012), who, pointing to extended circuitry cases, argues that EM in its original formulation is too weak to be of much interest. The intuition here seems to be that extended circuit cases are not significantly more interesting or controversial than, say, the silicon brain chip thought experiment proposed by Pylyshyn (1980). Extended circuit cases simply combine multiple realizability with ‘multiple localizability’: your neurons could be replaced by functionally equivalent silicon chips (i.e. a new realization) that operate outside the skull (i.e. in a new location), rather than replacing neurons within the skull. Chalmers notes that critics of the extended mind thesis, such as Adams and Aizawa (2008), who describe themselves as contingent intracranialists, could embrace the extended circuit cases that appeal to possible future technologies, but still deny that the mind actually extends in real-world cases like Otto’s, which rely on existing technologies. So, wanting to better capture what he thinks is really “interesting and controversial” about the thesis, Chalmers revised EM slightly (2019, 11). Real-world cases of EM, he says, all involve action and perception loops. Otto, for example, must write the information down in his notebook and later refer to it. And it is the use of these action and perception loops that captures the real controversy over EM. With this in mind, Chalmers articulates the new ‘sensorimotor’ version of EM:
A subject’s cognitive processes and mental states can be partly constituted by entities that are external to the subject, in virtue of the subject’s interacting with these entities via perception and action.
Armed with this new sensorimotor version, Chalmers can seemingly now block the move from EM to EC. In brief, he concludes that EC is impossible due to the conjunction of the following two claims:
Claim 1: Phenomenal consciousness requires correlates that support direct availability for global control.
Claim 2: The extended mind thesis requires that mental states and processes be extended in a certain way, namely, via perception and action.
As noted above, Claim 1 on its own does not prevent cases of extended consciousness, at least not logically possible cases such as the extended circuits cases. But, so Chalmers argues, processes that are extended via perception and action only support information that is indirectly available for global control, and hence the two claims cannot be jointly satisfied. This explains why extended consciousness is impossible.
6. REPLY TO CHALMERS
I will now turn to a few questions I have about each of Chalmers’s claims. But my main aim will be to show that even if one accepts both claims, their conjunction still does not block EC. I’ll look at each claim independently first and then turn to the conjunction. I will begin with the second claim, since it raises fewer issues and so allows for a more straightforward discussion, and then move to the direct availability criterion.
6.I THE SENSORIMOTOR VERSION OF THE EXTENDED MIND THESIS
Both Clark and Chalmers have independently maintained that extended circuit cases are sufficient for the extended mind thesis, or in other words, that they should count as genuine cases of the extended mind, even though such cases are merely logically possible and do not occur in practice today. Chalmers was the first to describe extended circuit cases (2008, quoted above), using these as an example of how consciousness might be extended in some possible world. In a later piece, Clark (2009b) describes a similar case, asking us to imagine a person who has suffered brain damage and lost the ability to perform certain cognitive functions, but who, by using an “external silicon circuit,” is able to restore the previous functionality. In this case, some of the machinery of her mind is distributed across her brain and the silicon circuit. Clark goes on to say that this extended circuit case “establishes the key principle” of his book on EM, Supersizing the Mind (2008). These examples show that the move to the sensorimotor version of EM is revisionary, which Chalmers (2019) himself acknowledges. Hence, we are left with two versions of EM: the original version supported by the parity argument, and the new sensorimotor version that adds the stipulation that the extension occur through perception and action loops.
Now it’s fair to ask what justifies this revision. Chalmers says that he wants to better illustrate what is “interesting and controversial” about it (2019, 11). The new sensorimotor version does appear to capture the difference between extended circuit cases and Otto’s case, and to highlight some of the controversy surrounding EM. After all, one common view of the mind is that it exists between action and perception—the dual interfaces that divide the mind from the world. With this backdrop, the sensorimotor version of EM is controversial precisely because it challenges this ‘mental sandwich’ view, maintaining instead that mental-to-mental relations can also be mediated by action and perception. That said, both Clark and Chalmers found extended circuitry cases interesting and controversial enough to describe them in their earlier discussions of EM. And, given the recent rapid advancements in brain-computer interfaces,4 it seems that the world of extended circuits may not be limited to science fiction for much longer. So if Chalmers’s motivation in moving to the sensorimotor view is to focus on real-world cases like Otto’s rather than on future technologies, this rationale may not hold up for much longer. But furthermore, it is not obvious that critics of the extended mind thesis, such as Adams and Aizawa (2008), would embrace the extended circuit cases if they came to fruition, or whether their skepticism is about non-brain-based mentality more generally (see also Polger and Shapiro 2017). As a final point, because Chalmers makes this revision in the same paper that he offers his new objection to EC, and the revision itself plays a critical role in his objection to EC, it risks appearing ad hoc. Despite these concerns over what justifies the move to the sensorimotor version, however, for the sake of argument I will henceforth focus on this new version of EM and try to show that even with this move, EC cannot be blocked.
This means that our discussions of EC for the rest of this paper now concern the following thesis: A subject’s conscious processes and mental states can be partly constituted by entities that are external to the subject by virtue of the subject’s interacting with these entities through perception and action.
6.II THE DIRECT AVAILABILITY CRITERION
The other part of Chalmers’s objection to EC is the claim that consciousness requires correlates that support direct availability for global control. There are a few challenges to this claim, however. First, if Chalmers is going to claim that direct availability is limited to conscious states, then extended nonconscious states must lack this kind of availability. Yet, Clark and Chalmers (1998, 17) have stated that extended nonconscious states must also have some kind of “direct availability” (as mentioned in §2). So how should we distinguish the kind of direct availability required for consciousness from the kind they attribute to extended nonconscious states?
Although Chalmers does not address this concern head on, he does state that we need to clarify this distinction. At one point, for example, he says that nonconscious states have a two-step availability, “first to consciousness, then to control.” In other words, standing states require availability to consciousness first, and then once conscious they are available for control. In Otto’s case the first step is achieved through perception: he perceives the information in the notebook, thereby making it available to consciousness, and in turn it becomes available for the control of action. By contrast, conscious beliefs, as Chalmers explains, have a one-step availability: that is, things in consciousness are directly available for global control. This still leaves us wondering what kind of direct availability Otto’s extended nonconscious state is meant to involve.
Perhaps the best approach here is for objectors to EC to simply drop that earlier criterion for extended nonconscious states. But short of that, two options remain for proponents of EC: (1) denying that consciousness requires correlates that support direct availability for global control, or (2) arguing that the criterion can be met by EC. My main strategy (in the next section) will be to argue that the criterion can be met, but let us briefly consider the first option here. Why might one question the claim that consciousness requires correlates that support direct availability for global control? Here’s one potential reason: following Block (1995), direct availability for global control is the mark of access-consciousness (henceforth, A-consciousness), so Chalmers’s claim assumes that phenomenal consciousness (henceforth, P-consciousness) must correlate with A-consciousness. Chalmers (1997) has previously argued for the plausibility of this correlation, but acknowledges nonetheless that this view fails to explain certain cases satisfactorily. Some patients, for example, report feeling pain while under general anesthesia, even though anesthesia interrupts their executive control functions.5 Ultimately, the correlation between A- and P-consciousness is an empirical claim that might turn out to be true, but if it doesn’t, then Chalmers’s objection to EC would also fail. Despite this minor point, though, something like the direct availability criterion is a common commitment across many of the leading theories of consciousness today (perhaps bearing its closest resemblance to Baars’s [1997] global workspace theory, or Dehaene’s [2015] more recent global neural workspace theory). So I will return now to the second option that remains for proponents of EC—that is, arguing (in the next section) that this criterion can still be met by extended conscious states and, hence, that Chalmers’s two-pronged objection fails to block the move from EM to EC.
6.III DENYING THE CONJUNCTION
I will now argue that even if we accept both claims 1 and 2, this does not rule out EC. Chalmers (2019, 20) reasons that the conjunction blocks EC because processes that are extended through perception and action support only information that is indirectly available for global control. He states, “perceptually mediated availability is indirect availability.” By ‘indirect’, however, Chalmers cannot mean only ‘perceptually mediated’; otherwise the move to the new sensorimotor version of EM risks begging the question against EC. It must therefore be explained how the neural correlates of P-consciousness bring about conscious experience in a ‘direct’ or ‘unmediated’ way. In other words, there must be some fact to justify the claim that any perceptually mediated form of availability is indirect (or two-step), while the way in which the internal neural correlates bring about consciousness is direct. To this end, Chalmers offers the following insight:
Processes that are extended via perception and action . . . support information that is only indirectly available for global control: in order to be used in control, it must travel causal pathways from object to eye, from eye to visual cortex, and from visual cortex to the loci of control. By contrast, the internal neural correlates of consciousness need only travel some portion of the third pathway, from certain intermediate areas of the brain to the loci of control. Since consciousness requires direct availability, extended consciousness is impossible.
(2019, 19)
The explanation, then, seems to be that external information is indirectly available due to being perceptually mediated, which means it has to travel through more “causal pathways” than information in the brain that is not perceptually mediated. But why should a higher number of causal pathways impede availability (or access) for global control? After all, even if information travels a very long distance, it does not necessarily travel slowly or at a reduced bandwidth. Also recall that Chalmers (2019) agrees that the senses provide extremely high-bandwidth connections to the environment.
At this stage in his paper, Chalmers explains how standing information is only available for global control through the two-step explanation (described in §6.ii), according to which it must enter into consciousness before becoming available for global control. Crucially, Chalmers seems to presuppose that the type of perceptual access required (for states to count as ‘extended’) needs to be a conscious process itself. This requirement would indeed appear to block cases of extended consciousness: one would require a conscious process in order to perceptually access the relevant external information. But according to a longstanding view, articulated by Helmholtz (1995/1868), perception involves subconscious processes, including nonconscious inferences. This view continues to be debated (see Shepherd and Mylopoulos 2021), but as long as one can accept the idea that subconscious processes are involved in, or underlie, our perceptual access to the environment, there is no obvious reason to assume these processes would be any less direct than the lower-level unconscious processes that take place in the brain and correlate with consciousness.
Furthermore, it is widely agreed that the neural correlates of phenomenal experience are likely to involve large collections of operations that span the brain’s network, rather than being localized to a small collection of cells. It therefore seems reasonable to wonder whether the relevant internal pathway for information that gives rise to consciousness should itself be considered a mediated step, given that it too would likely need to travel along some causal pathways. After all, it is certainly possible that some of our internal pathways are mediated in a way that seems parallel to Otto’s perception of external information. Consider a case where Inga needs to engage in introspection to access some information stored in her brain—in this case the standing information would first be brought to consciousness, and then would be available for global control.6 But this is not likely to happen in every instance. Certain information in the brain must be directly available, so perhaps some perceptual information can be too. Anderson (2017, 4) makes a similar point,7 arguing that there will always be an input with a cause more proximal to the brain than the retina or fingertips that could serve as the relevant boundary, such as “the chemicals at the nearest synapse, or the ions at the last gate,” for example. In his explanation (quoted above), Chalmers emphasizes the number of extra-neural causal pathways while minimizing the presence of internal causal pathways, but does not explain why the latter do not also constitute a relevant form of mediation. It would be a narrow view, however, to identify consciousness just with the loci of control (and their immediate inputs). In general, the number of causal pathways within the brain is only relevant if they somehow impede function, and the same is true for extra-neural pathways.
If this two-pronged objection does not block EC, then how does EC operate in practice? Is our consciousness extending into the world, unbeknownst to us? Admittedly, this does sound rather strange and implausible, but the threat of a reductio looms unless we can begin to make sense of EC. Hence, I will now turn to the second aim of the paper: to demystify EC by showing it to be a viable possibility.
7. THREE EXAMPLES OF EXTENDED CONSCIOUSNESS
Let us now return to the mental sandwich view of the mind—the idea that mind exists between action and perception, which serve as the dual interfaces that divide the mind from the world. Chalmers (2008) raises this view of a natural boundary between the mind and world as a potential objection to EM. I summarize the dual boundaries objection (as I will refer to it) as follows:
P1. The natural boundaries that separate the mind from the external environment are the dual interfaces of perception and action.
P2. We perceive through bodily senses and act through bodily motions.
C. Thus, the mind is bound to the brain and body.
An objector would argue that Otto's case violates the first premise. Because Otto has to perceive the information in his notebook, the information must be external to his mind. But the objection also highlights a lack of parity between Otto's and Inga's cases: they are functionally different because Otto has to act and perceive, whereas Inga does not. In 2008, Chalmers's suggested response to the dual boundaries objection was to reject the proposed boundary (P1), arguing instead that the parity principle still applies because both Otto and Inga engage in some form of [End Page 254] perception and action. For Otto, this takes the form of visual perception (reading information) and bodily motions (writing information down), while for Inga it takes the form of introspection and mental action. Chalmers's more recent move to the sensorimotor version of EM aligns well with this suggestion. But as we've seen, Chalmers thinks that only nonconscious states can be extended through perception and action, and hence he continues to uphold P1 for conscious states. In what follows, I'll offer three examples of EC aimed at overcoming P1.
7.I WALDO AND THE WALKING STICK
Consider the case of Waldo, a man who relies on a walking stick to get around every day.8 His habitual use of the stick helps him to accurately perceive the world, while it flexibly guides his behavior as he successfully navigates his way around complex environments. In some cases the stick physically supports or stabilizes his walking; in other cases it provides him with information about the surfaces beneath him. Like Otto, who perceives and acts with his notebook, Waldo perceives information about the world by handling his walking stick, and he also uses it to act in the world. But Waldo's relationship with the stick is also importantly different from Otto's discrete interactions with his notebook. Waldo is engaged in an ongoing dynamic cognitive process (perceiving and navigating his way around), whereas Otto's notebook stores static information. Hence, there is a much clearer separation of cognition and consciousness in Otto's case.9 Waldo, unlike Otto, forms a high-bandwidth bi-directional connection with his surroundings through his device. In these conditions we should view the stick itself as part of the physical means through which Waldo realizes his phenomenal experience. The walking stick meets both sides of Chalmers's objection. It satisfies claim 1 by supporting information that is directly available for global control, and it satisfies claim 2 because the stick is extended via perception and action. Waldo must act by holding the stick in his hand, and he must perceive the tactile information through the stick. This analysis overrides the first premise of the dual boundaries objection, thereby adopting the same strategy Chalmers had suggested in defending EM.10 [End Page 255]
Such an explanation, of course, needs to take seriously the concerns raised above: there are numerous possible proximal inputs to the correlates of consciousness in Waldo's brain that could serve as the relevant boundary for constituting consciousness. It could be the biological body (as an 'embodied consciousness' advocate might maintain), or the brain itself (as the majority of views about consciousness maintain), or some narrow area within the brain that shows the highest correlation to phenomenal experience, or even the nearest synapse outside that area. But there need to be some functional grounds on which to assert that boundary; otherwise, having conceded that we enjoy a high-bandwidth perceptual connection to the environment, it would be arbitrary to assert that any device beyond the skin should be prima facie excluded. It's true that information gained from using the stick has to travel a longer distance than corresponding pathways from correlates within the brain, but as long as the relevant information is available for global control, it shouldn't matter how far away it is. In creatures with more distributed cognitive systems, we sometimes see information traveling much longer pathways to reach availability for global control; in a giant Pacific octopus, for example, information from one neuron-packed arm can travel 10 feet to reach the brain (Godfrey-Smith 2016, 2017).
Now an objector to EC might insist that any information traveling from the stick can be directly available for local control, but not for global control. Chalmers (1997, 148) describes global control as availability "for use in directing a wide range of behaviors, especially deliberate behaviors." But many people can skillfully navigate their environments based on information provided through their walking sticks. The information Waldo's stick provides not only informs how he taps things; it also helps form his understanding of how steady the terrain is and exactly how he should step. The stick can even be used in a range of ways to adaptively guide and shape his behavior. Furthermore, while some of the information gained through the stick might come into Waldo's consciousness before it is used, thereby falling into the trap of Chalmers's two-step argument, it is unlikely that all of the informational processes involved in his perception are conscious. Hence, his nonconscious perceptual access through the stick can play the same constitutive role in bringing about Waldo's phenomenal experience of navigating around his locality with the stick as the neurons in the brain could for his brain duplicate, Waldo2, without a stick.11 [End Page 256]
7.II SPLIT-BRAIN PATIENTS
Not everyone uses tools to enhance or aid their perceptual access to the world as Waldo does with his walking stick, but Waldo's case nonetheless represents what could be a fairly commonplace example of EC. Now I want to look at another example of extended perceptual consciousness that is rarer, but a real-world case still worth considering. This one comes from occurrences of cross-cueing in split-brain patients, which Downey (2018) argues is an example of EC. Split-brain patients have a severed corpus callosum, the band of nerve fibers that acts as the main channel of communication between the left and right hemispheres of the brain. Despite having anatomically isolated hemispheres, split-brain subjects function well in everyday life and are generally indistinguishable from those of us with an intact corpus callosum. The same is true for animals that have undergone split-brain procedures. Yet split-brain patients often behave abnormally in the more constrained conditions of experiments. This has led several thinkers to defend an experimental aberration account of the mind of split-brain patients (e.g. Tye 2003). They maintain that while the consciousness of these patients is generally unified, it splits during experiments. Nagel (1971, 408), among others, however, has criticized the experimental aberration account on the grounds that it is ad hoc to state that "a second mind is brought into existence only during experimental situations."
Downey (2018), on the other hand, argues that the aberrant behavior of split-brain patients is due to the various constraints in experiments that prevent the patients from cross-cueing, a common externalized technique that allows them to function normally in everyday life. Cross-cueing occurs when one hemisphere uses external factors to pass information to the other hemisphere. For example, split-brain patients will sometimes manipulate objects in their left hand in order to communicate tactile information to their left hemisphere (Bogen 1990). Downey argues that one reason thinkers have resorted to ad hoc accounts of the mind in order to explain the aberrant behavior of split-brain patients is that they have assumed an internalist view of consciousness. EC, on the other hand, can offer a better account of split-brain patients: subjects have learned to use environmental mechanisms, such as the objects in their hands, to pass information between their brain hemispheres. In these cases, there is no principled reason to suspect that the passing of information through such external mechanisms takes longer than the passing of information through the corpus callosum—after all, split-brain patients appear to function just like healthy people in everyday life, and do not report experiencing any change after surgery. Hence, external cross-cueing mechanisms seem to play a constitutive role in unifying the conscious perceptual fields of split-brain patients. What is more, split-brain patients rely on perception and action in order to achieve cross-cueing, and their externalized cross-cueing mechanisms appear to directly support information that is available for global control. Hence, these cases too meet both parts of Chalmers's (2019) objection to EC by rejecting P1 of the dual boundaries objection. [End Page 257]
7.III NONHUMAN MINDS
Interestingly, there are further compelling cases of EC when we look to nonhuman minds. For one, cross-cueing behavior is not restricted to human subjects. Gazzaniga (1969), for example, reports finding cross-cueing behavior in split-brain monkeys, who were able to circumvent his experimental conditions by tilting their heads to pass information between their disconnected hemispheres. But beyond this, EC may offer a worthwhile paradigm for understanding some of the "weird and wondrous" kinds of intelligence that exist (Godfrey-Smith 2017, 1). Consider the embodied cognition of an octopus, in which central control has substantially devolved to the periphery, with all eight arms having their own nervous systems and demonstrating significant cognitive complexity and independence (Cheng 2018; Godfrey-Smith 2016, 2017). The peripheral nervous system in an arm can take over key controlling roles in tasks like fetching food, say, but it can also be controlled centrally via the visual system (Godfrey-Smith 2016, 2017; Gutnick et al. 2011). While an octopus's arm can usually move around successfully without oversight from the brain, relying on taste and touch to do so, one experimental task required octopuses to move an arm out of the water to fetch food, thereby depriving the arm of input from its chemical sensors. Researchers found that octopuses were reliably able to compensate for this lack of sensory input by relying on their eyes to direct the arm. Hence, in this case the relevant and necessary sensory information passes through the eyes (somewhat similar to how cross-cueing relies on the passing of information through external mechanisms). Now, one major shortcoming here is that this experiment did not involve any tool use on the part of the octopus. Another concern is that the nervous systems of octopuses are obviously complex and more distributed than ours, and there is much we still do not understand. Still, octopuses present an interesting and plausible case of some form of distributed, embodied, and perhaps sometimes extended consciousness (Godfrey-Smith 2016, 2017).
In fact, the possible nonhuman animal cases here are far too rich to all be explored in this paper. They include the complex intersignaling collectives of bees and other social invertebrates (Seeley 1995, 2010), the swarm intelligence of flocks of birds (Clark 1997), and even the web-spinning of spiders (Cheng 2018; Japyassú and Laland 2017). I have focused on the distributed nervous systems of octopuses here, but there is a more general takeaway: once we recognize EC, and not just EM, as a viable theoretical possibility, the paradigm readily lends itself as a way of interpreting some of these otherwise puzzling nonhuman cases. Indeed, there have already been recent calls to situate comparative cognition beyond its central focus on the brain, and we might now similarly redirect animal consciousness research from its neurobiological focus as well. Perhaps most critically, one's views on EC might in some cases even determine whether or not one thinks some of these nonhuman animals (or systems) are conscious at all. Finally, there are other opportunities for nonhuman minds to be explored here too. Thoughts of distributed or network-based artificial systems come readily to [End Page 258] mind. And though speculative, the theoretical possibility of EC should at least open us up to the possibility that artificial conscious minds, if they develop, could take on unfamiliar forms (Shanahan 2016), just as our own minds likely will as they continue to stretch into the new technologies we develop.
8. A FORESEEABLE OBJECTION
So far I have argued against objections to EC from both Clark and Chalmers. In particular, I have focused on Chalmers's recent objection, which relies on the implicit claim that all perceptual processes must themselves be conscious; rejecting that claim blocks his conclusion that all perceptually mediated availability consists in a two-step, indirect kind of availability. I have also outlined some real-world examples of EC in order to show how it can be a viable theory, one that may even offer a better explanation of the intriguing cases of split-brain patients and of the complex decentralized cognition of octopuses. EC is still subject to other objections, however, including some that have previously been leveled against EM (i.e. nonconscious state extension). In the remaining sections, I'll turn my focus to one such objection: the concern that EC 'overextends' or 'bloats' the conscious mind.
8.I PHENOMENAL BLOATING
One longstanding objection to EM is its liberality in what it includes as part of the mind (e.g. Gertler 2007; Rupert 2004). If any object a person uses becomes a part of their mind, this would overextend the mind in a way that no longer seems plausible. The objection typically proceeds by identifying some external object 'x' that arguably plays the same relevant functional role as an internal state that intuitively constitutes a mental state (e.g. neurons in the brain), but that intuitively is not itself a mental state, and then arguing that, according to EM, we must call x a mental state. Because it is absurd to think of x as a mental state, by reductio we can conclude that EM is false. Many of us use Google Maps to help us navigate, for example, plausibly just as Otto uses his notebook—so should Google Maps count as part of our minds? Defenders of EM try to avoid this kind of 'bloating', typically by limiting the kinds of cases that can count as genuine extensions. Clark and Chalmers (1998), for instance, offer the three conditions (discussed in §2) of constancy, direct availability, and reliability that are needed for an extended nonconscious belief. But even with these conditions in place, objectors such as Rupert (2004) and Gertler (2007) have argued that the parity argument leads to bloating, and a subsequent debate has arisen over whether the relevant functional role that the parity argument relies on can be characterized in a way that avoids bloating but still allows for some cases of extension. It can be predicted that the parity argument for extended consciousness would lead to similar objections of 'phenomenal [End Page 259] bloating', i.e. an over-ascription of states in the world that are seen as being partially constitutive of one's phenomenal states. In order to avoid this, I suggest we appeal to what Clark (2009a) calls the argument from dynamic entanglement and unique temporal signature (DEUTS).
8.II APPEAL TO DYNAMIC ENTANGLEMENT AND UNIQUE TEMPORAL SIGNATURE
The dynamic entanglement and unique temporal signature argument, which Clark (2009a) mostly attributes to Cosmelli and Thompson (2011) as well as Noë (2004), consists of two claims: (1) dynamic entanglement and (2) unique temporal signature.12 I’ll discuss each in turn.
Pushing back against the traditional ‘mental sandwich’ (or ‘input-output’) picture of the mind, dynamic entanglement stresses the looping dynamics that exist between motor processing and perceptual uptake: each unfolds courtesy of an ongoing loop of interactions in which neural processes and bodily action work together to enable the agent to structure the information flow in ways most apt to the task.13 Clark explains that the cause of this complexity is continuous reciprocal causation in nonlinear systems—when all state variables interact in such a way that any change in a single variable leads to changes in the state of the entire system. This is precisely what happens in the brain, where different regions and subregions are causally intertwined. But, as we’ve seen, the contributions of the extra-neural body and world are sometimes so complexly intertwined with this neural activity that we cannot easily delimit neural contributions from the rest (see Cosmelli and Thompson 2011). Indeed, this is precisely what we expect to see with Waldo and his walking stick. As he receives perceptual feedback from the ground via the stick, Waldo then repositions his body and the stick to structure the flow of information. Any change in the positioning of the stick (motor outputs) can dynamically change his perceptual uptake and ongoing phenomenal experience, while changes in perceptual uptake via the stick can in turn structure his motor actions, and so forth. This process of continuous reciprocal causation allows Waldo to structure the flow of information over time, thereby effectively guiding his behavior. The same is true for split-brain patients who compensate for a loss of functional communication between their two hemispheres by relying on cross-cueing, either through environmental mechanisms or through informational self-structuring via sensorimotor coordinations (e.g. head tilting).
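To make the notion of continuous reciprocal causation more concrete, here is a deliberately simple illustration. This is my own toy model, not drawn from Clark, Cosmelli and Thompson, or any empirical work: a small nonlinear system in which each variable's rate of change depends on another variable, so that perturbing any one variable eventually alters the trajectory of the whole system, just as a change in the positioning of Waldo's stick reshapes his perceptual uptake, which in turn reshapes his motor activity.

```python
import math

def step(state, dt=0.01):
    """One Euler step of a toy nonlinear system with cyclic coupling:
    each variable's rate of change depends on another variable, so a
    change anywhere propagates everywhere over time."""
    x, y, z = state
    dx = math.sin(y) - 0.1 * x
    dy = math.sin(z) - 0.1 * y
    dz = math.sin(x) - 0.1 * z
    return (x + dx * dt, y + dy * dt, z + dz * dt)

def simulate(state, n=2000):
    for _ in range(n):
        state = step(state)
    return state

baseline = simulate((0.5, 0.5, 0.5))
perturbed = simulate((0.6, 0.5, 0.5))  # nudge only the first variable

# After the run, all three variables differ between the two trajectories:
# the local perturbation has been absorbed into the state of the whole system.
diffs = [abs(a - b) for a, b in zip(baseline, perturbed)]
print(all(d > 0 for d in diffs))
```

The point of the sketch is purely structural: in a system coupled this way there is no clean input-output decomposition, because each variable is simultaneously shaping and being shaped by the others. That is the sense in which, on the DEUTS picture, neural, bodily, and environmental variables can form a single entangled system.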
The unique temporal signature (or 'UTS') component of DEUTS maintains that certain experiences require "a kind of 'signature' temporal evolution of neural [End Page 260] states that simply cannot (in the natural order) occur in the absence of the right extra-neural scaffolding" (Clark 2009a, 979). One might expect that changes in Waldo's body would be reflected in corresponding neural changes. Hence the temporal signature component of DEUTS allows the argument to meet the 'twin test', as Clark calls it—the test of whether Waldo1 (with his walking stick) and his brain duplicate Waldo2 (without the stick) are in different phenomenal states. Clark quotes Noë, who comes closest to making the 'temporal signature' suggestion:
. . . perhaps the only way—or the only biologically possible way—to produce just the flavour sensation one enjoys when one sips a wine is by rolling a liquid across one’s tongue. In that case, the liquid, the tongue, and the rolling action would be part of the physical substrate for the experience’s occurrence.
(Noë 2004, 220)
Noë wants to reject what he calls the 'snapshot picture' of perceptual experience (2004, 35–39) in favor of this temporal picture. This approach is compatible with the second interpretation of EC described in §3—namely, that the differing phenomenal experiences of the two subjects are due to their brains going through a series of states that differ, one from the other, where this difference is caused by the fact that one subject, Waldo1, is using a tool, while his brain duplicate, Waldo2, is not. Thus, I argue that the phenomenal experience had by Waldo1 could not be quite the same were it realized by neural states alone. For this reason, Waldo2 would not be in the same phenomenal state as Waldo1, and the stick itself should count as constitutive of Waldo's phenomenal state.
The DEUTS argument gives us a strong reason for thinking that Waldo's stick counts as one of the physical correlates of his consciousness, but it also suggests a way of delineating cases of extended consciousness that avoids phenomenal bloating. Because we have been working with the parity argument as a starting point, it is crucial that external connections really are functionally on a par with internal ones. Dynamic entanglement can serve as a non-question-begging criterion here. In Waldo's case there are high-bandwidth loops: he has a continuous bi-directional high-bandwidth connection with his walking stick that is characterized by continuous reciprocal causation. There is no reason to think, by contrast, that Otto has a similar relationship with his notebook. Otto acts on his notebook, but he may have written down the address of the museum years before it influenced his behavior, and the subsequent actions he takes based on the information in his notebook (i.e. heading to the museum) do not affect the entry in his notebook. It's true that Otto's perception of the information is high-bandwidth, but his downstream connection is relatively low: to record and subsequently access a small piece of information he has to perform a significant number of bodily actions. The same is true of our downstream (output) connection with our smartphones. To use a smartphone for navigation, for example, we must type in an address, often letter by letter. Once we have done so, there are no significant causally reciprocal feedback loops that sustain a connection to the device. For split-brain patients, perhaps the strongest argument [End Page 261] is their ability to incorporate—quite seamlessly into everyday life—extra-neural body parts and environmental mechanisms as functional replacements for their corpus callosum.
In effect, Clark is right in asserting that our body acts as a 'low-pass filter', but wrong about the directionality. We typically receive sensory information from the environment at very high bandwidth; so much information comes in that it must be reduced for our brains to process it. Likewise, our motor actions can sometimes convey large amounts of information too, e.g. Waldo acts on the world by tapping his stick in different ways in order to structure the perceptual uptake he needs. But this is not always the case. When our motor outputs are linguistically mediated, e.g. typing words into a computer, speaking to a partner, or writing words down, it's plausible that less is communicated despite the complex bodily actions involved. Dennett (1987, 21) captures this idea well in stating that "our linguistic environment is forever forcing us to give, or concede, precise verbal expression to convictions that lack the hard edges verbalization endows them with." This limitation of our current interfaces with the world may be a reason for building extended circuitry, such as brain-computer interfaces, which have the potential to bypass the body's downstream informational bottleneck and allow the brain to communicate directly with devices. Such a pathway would never count as a genuine case of extension on the sensorimotor version of EC, however.
9. CONCLUSION
EC initially appears to be a highly implausible view, and it is one that many prominent defenders of EM have long resisted. But one aim of this paper has been to argue that the possibility of EC follows as a surprising but sound consequence from the parity argument for EM. This claim had to be defended from various objections, mostly from Clark and Chalmers, who have long denied that their parity argument can support EC. Most recently, Chalmers has argued that consciousness requires direct availability for global control and that processes which are extended via perception and action cannot support direct availability of this kind. In this paper, I have tried to show why this objection fails to block EC. In particular, I have argued against his claim that all perceptually mediated access to information in the world must be indirect. I have offered three examples of the kinds of real-world cases I think could count as genuine extensions of consciousness: the case of Waldo, who relies on a walking stick; the case of cross-cueing in split-brain patients (following Downey 2018); and the decentralized control capacities of octopuses. The final scenario is perhaps the most interesting, revealing where EC might bear the most fruit—namely, in our understanding of exotic nonhuman forms of consciousness. Finally, I ended by considering a possible objection to EC and arguing that it can be overcome by appealing to the notion of continuous reciprocal causation. [End Page 262]
ACKNOWLEDGMENTS
This paper took a few years to develop and has benefited from many valuable comments along the way. I am grateful for the feedback I received from audiences at the University of Cambridge, Carleton University, the University of Sussex, and the Australian National University. I owe special thanks to David Chalmers, Marta Halina, Eric Schwitzgebel, and, most of all, Henry Shevlin, for their helpful feedback. This work was supported by the Leverhulme Trust, under Grant RC-2015-067.
REFERENCES
Footnotes
1. A few others have supported the possibility of extended consciousness in different ways, including Loughlin (2013), Downey (2018), and Kirchhoff and Kiverstein (2019).
2. Only ‘partially’ because according to EM, the brain is still the central locus of the mind. Hence, extended states or processes are constituted by a conjunction of neurobiological and extra-neurobiological realizers.
3. I owe thanks to an anonymous reviewer for suggesting this disambiguation between these two versions of EC.
4. See Lebedev and Nicolelis (2017) or Burwell et al. (2017) for some recent overviews.
5. Block (1995), however, offers these cases as evidence that we sometimes have P-consciousness without A-consciousness.
6. Clark and Chalmers (1998) use cases of introspection like this to draw parallels between Otto and Inga in order to defend nonconscious extended states.
7. Though in a different context, Anderson is raising an objection to Hohwy, who appeals to the presence of Markov Blankets as a way of setting the boundary of the mind. See discussion in Kirchhoff and Kiverstein (2019, 71–72), where this quote also appears.
8. The use of a walking stick is a common example employed in the embodied/extended mind literature, by Merleau-Ponty (1962), for example, and more recently by Malafouris (2013). In both these discussions the person is blind, and I have previously used the example of a blind person myself, but I wish to avoid any assumptions about the nature of disability here.
9. Loughlin (2013) makes a similar point when he argues that an artist using her sketch pad can enact both extended consciousness and an extended cognitive process, while Otto using his notebook exemplifies an extended cognitive state.
10. Type A extended mind theorists could also endorse this as a case of extended consciousness, but they might give a slightly different explanation. Some people who rely on walking sticks report perceiving the ground directly rather than merely perceiving their hand holding the stick, and Type A theorists might better capture this intuition by rejecting premise two of the objection, arguing instead that the location of the dual boundaries of perception and action are flexible. In Waldo’s case, the boundary is pushed to the end of the walking stick, such that he perceives whatever lies at the end of the stick.
11. One might worry here that whatever input Waldo's stick provides could be replaced by a brain-computer interface, thereby showing that he could have an indiscriminable experience without the stick. But such a device couldn't easily replicate the way Waldo's stick physically supports him as he walks, for example. And furthermore, if it could, then the device would count as extending Waldo's consciousness through extended circuitry. (Thanks to Henry Shevlin for this potential worry.)
12. In surveying a number of arguments for extended consciousness, Clark (2009a) sees the argument from dynamic entanglement and unique temporal signature as the most promising, but he ultimately argues that it is threatened by certain empirical facts—namely, our supposed low-bandwidth connection to the environment. But we have seen arguments against this claim above (Vold 2015); thus, the argument from dynamic entanglement still has a leg to stand on.
13. For more on the self-structuring of information flows, see Clark (2008).