Updated January 2018.
We aspire to extend empathy to every being that warrants moral concern, including animals. And while many experts, government agencies, and advocacy groups agree that some animals live lives worthy of moral concern,1 there seems to be little agreement on which animals warrant moral concern.2 Hence, to inform our long-term giving strategy, I (Luke Muehlhauser) investigated the following question: “In general, which types of beings merit moral concern?” Or, to phrase the question as some philosophers do, “Which beings are moral patients?”3
For this preliminary investigation, I focused on just one commonly endorsed criterion for moral patienthood: phenomenal consciousness, a.k.a. “subjective experience.” I have not come to any strong conclusions about which (non-human) beings are conscious, but I think some beings are more likely to be conscious than others, and I make several suggestions for how we might make progress on the question.
In the long run, to form well-grounded impressions about how much we should value grants aimed at (e.g.) chicken or fish welfare, we need to form initial impressions not just about which creatures are more and less likely to be conscious, but also about (a) other plausible criteria for moral patienthood besides consciousness, and (b) the question of “moral weight” (see below). However, those two questions are beyond the scope of this initial report on consciousness. In the future I hope to build on the initial framework and findings of this report, and come to some initial impressions about other criteria for moral patienthood and about moral weight.
This report is unusually personal in nature, as it necessarily draws heavily from the empirical and moral intuitions of the investigator. Thus, the rest of this report does not necessarily reflect the intuitions and judgments of the Open Philanthropy Project in general. I explain my views in this report merely so they can serve as one input among many as the Open Philanthropy Project considers how to clarify its values and make its grantmaking choices.
1 How to read this report
The length of this report, compared to the length of my other reports for the Open Philanthropy Project, might suggest to the reader that I am a specialist on consciousness and moral patienthood. Let me be clear, then, that I am not a specialist on these topics. This report is long not because it engages its subject with the depth of an expert, but because it engages an unusual breadth of material — with the shallowness of a non-expert.
The report’s unusual breadth is a consequence of the fact that, when it comes to examining the likely distribution of consciousness (what I call “the distribution question”), we barely even know which kinds of evidence are relevant (besides human self-report), and thus I must survey an unusually broad variety of types of evidence that might be relevant. Compare to my report on behavioral treatments for insomnia: in that case, it was quite clear which studies would be most informative, so I summarized only a tiny portion of the available literature.4 But when it comes to the distribution-of-consciousness question, there is extreme expert disagreement about which types of evidence are most informative. Hence, this report draws from a very large set of studies across a wide variety of domains — comparative ethology, comparative neuroanatomy, cognitive neuroscience, neurology, moral philosophy, philosophy of mind, etc. — and I am not an expert in any of those fields.5
Given all this, my goal for this report cannot be to argue for my conclusions, in the style of a scholarly monograph on consciousness, written by a domain expert.6 Nor is it my goal to survey the evidence which plausibly bears on the distribution question, as such a survey would likely run thousands of pages, and require the input of dozens of domain experts. Instead, my more modest goals for this report are to:
- survey the types of evidence and argument that have been brought to bear on the distribution question,
- briefly describe example pieces of evidence of each type,7 without attempting to summarize the vast majority of the evidence (of each type) that is currently available,
- report what my own intuitions and conclusions are as a result of my shallow survey of those data and arguments,
- try to give some indication of why I have those intuitions, without investing the months of research that would be required to rigorously argue for each of my many reported intuitions, and
- list some research projects that seem (to me) like they could make progress on the key questions of this report, given the current state of evidence and argument.
Given these limited goals, I don’t expect to convince career consciousness researchers of any non-obvious substantive claims about the distribution of consciousness. Instead, I focused on finding out whether I could convince myself of any non-obvious substantive claims about the distribution of consciousness. As you’ll see, even this goal proved challenging enough.
Despite this report’s length, I have attempted to keep the “main text” (sections 2-4) modular and short (roughly 20,000 words).8 I provide many clarifications, elaborations, and links to related readings (that I don’t necessarily endorse) in the appendices and footnotes.
In my review of the relevant literature, I noticed that it’s often hard to interpret claims about consciousness because they are often grounded in unarticulated assumptions, and (perhaps unavoidably) stated vaguely. To mitigate this problem somewhat for this report, section 2 provides some background on “where I’m coming from,” and can be summarized in a single jargon-filled paragraph:
This report examines which beings and processes might be moral patients given their phenomenal consciousness, but does not examine other possible criteria for moral patienthood, and does not examine the question of moral weight. I define phenomenal consciousness extensionally, with as much metaphysical innocence and theoretical neutrality as I can. My broad philosophical approach is naturalistic (a la Dennett or Wimsatt) rather than rationalistic (a la Chalmers or Chisholm),9 and I assume physicalism, functionalism, and illusionism about consciousness. I also assume the boundary around “consciousness” is fuzzy (a la “life”) rather than sharp (a la “water” = H2O), both between and within individuals. My meta-ethical approach employs an anti-realist kind of ideal advisor theory.
If that paragraph made sense to you, then you might want to jump ahead to section 3, where I survey the types of evidence and arguments that have been brought to bear on the distribution question. Otherwise, you might want to read the full report.
In section 3, I conclude that no existing theory of consciousness (that I’ve seen) is satisfying, and thus I investigate the distribution question via relatively theory-agnostic means, examining analogy-driven arguments, potential necessary or sufficient conditions for consciousness, and some big-picture considerations that pull toward or away from a “consciousness is rare” conclusion.
To read only my overall tentative conclusions, see section 4. In short, I think mammals, birds, and fishes10 are more likely than not to be conscious, while (e.g.) insects are unlikely to be conscious. However, my probabilities are very “made-up” and difficult to justify, and it’s not clear to us what actions should be taken on the basis of such made-up probabilities.
I also prepared a list of potential future investigations that I think could further clarify some of these issues for us — at least, given my approach to the problem.
This report includes several appendices:
- Appendix A explains how I use my moral intuitions, reports some of my moral intuitions about particular cases, and illustrates how existing theories of consciousness and moral patienthood could be clarified by frequent reference to code snippets or existing computer programs.
- Appendix B explains what I find unsatisfying about current theories of consciousness, and says a bit about what a more satisfying theory of consciousness could look like.
- Appendix C summarizes the evidence concerning unconscious vision and the “two streams of visual processing” theory that is discussed briefly in my section on whether a cortex is required for consciousness.
- Appendix D makes several clarifications concerning the distinction between nociception (which can be unconscious) and pain (which cannot).
- Appendix E makes some clarifications about how I’m currently estimating “neuroanatomical similarity,” which plays a role in my “theory-agnostic estimation process” for guessing whether a being is conscious (described here).
- Appendix F explains illusionism in a bit more detail, and makes some brief comments about how illusionism interacts with my intuitions about moral patienthood.
- Appendix G elaborates my views on the “fuzziness” of consciousness.
- Appendix H examines how the possibility of hidden qualia might undermine the central argument for higher-order theories.
- Appendix Z collects a variety of less-important sub-appendices, for example a list of theories of consciousness, a list of varieties of conscious experience, a list of questions about which consciousness scholars exhibit extreme disagreement, a list of candidate dimensions of moral concern (for estimating moral weight), some brief comments on unconscious emotions, some reasons for my default skepticism about published studies, and some recommended readings.
Acknowledgements: Many thanks to those who gave me substantial feedback on earlier drafts of this report: Scott Aaronson, David Chalmers, Daniel Dewey, Julia Galef, Jared Kaplan, Holden Karnofsky, Michael Levine, Buck Shlegeris, Carl Shulman, Taylor Smith, Brian Tomasik, and those who participated in a series of GiveWell discussions on this topic. I am also grateful to several people for helping me find some of the data related to potentially consciousness-indicating features presented below: Julie Chen, Robin Dey, Laura Ong, and Laura Muñoz. My thanks also to Oxford University Press and MIT Press for granting permission to reproduce some images to which they own the copyright.
2 Explaining my approach to the question
2.1 Why we care about the question of moral patienthood
How does the question of moral patienthood fit into our framework for thinking about effective giving?
The Open Philanthropy Project focuses on causes that score well on our three criteria — importance, neglectedness, and tractability. Our “importance” criterion is: “How many individuals does this issue affect, and how deeply?” Elaborating on this, we might say our importance criterion is: “How many moral patients does this issue affect, and how much could we benefit them, with respect to appropriate dimensions of moral concern (e.g. pain, pleasure, desire fulfillment, self-actualization)?”
As with many framing choices in this report, this is far from the only way to approach the question,11 but we find it to be a framing that is pragmatically useful to us as we try to execute our mission to “accomplish as much good as possible with our giving” without waiting to first resolve all major debates in moral philosophy.12 (See also our blog post on radical empathy.)
In the long run, we’d like to have better-developed views not just about which beings are moral patients, but also about how to weigh the interests of different kinds of moral patients against each other. For example: suppose we conclude that fishes, pigs, and humans are all moral patients, and we estimate that, for a fixed amount of money, we can (in expectation) dramatically improve the welfare of (a) 10,000 rainbow trout, (b) 1,000 pigs, or (c) 100 adult humans. In that situation, how should we compare the different options? This depends (among other things) on how much “moral weight” we give to the well-being of different kinds of moral patients. Or, more granularly, it depends on how much moral weight we give to various “appropriate dimensions of moral concern,” which then collectively determine the moral weight of each particular moral patient.13
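To make the arithmetic of such a comparison concrete, here is a minimal sketch in Python. The per-species “moral weights” below are arbitrary placeholders invented purely for illustration — they are not estimates I endorse — and the real comparison depends on much more than headcount and a single weight per species.

```python
# Minimal sketch (illustration only) of how per-species "moral weights" would
# combine with the number of individuals helped to compare the three options
# above. The weights are arbitrary placeholders, not estimates I endorse.

hypothetical_moral_weight = {"rainbow trout": 0.02, "pig": 0.3, "adult human": 1.0}
individuals_helped = {"rainbow trout": 10_000, "pig": 1_000, "adult human": 100}

for species, count in individuals_helped.items():
    weighted = count * hypothetical_moral_weight[species]
    print(f"{species}: {count} x {hypothetical_moral_weight[species]} = {weighted} weighted units")
```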
This report, however, focuses on articulating my early thinking about which beings are moral patients at all.14 We hope to investigate plausibly appropriate dimensions of moral concern — i.e., the question of “moral weight” — in the future. For now, I merely list some candidate dimensions in Appendix Z.7.
2.2 Moral patienthood and consciousness
2.2.1 My metaethical approach
Philosophers, along with everyone else, have very different views about “metaethics,” i.e. about the foundations of morality and the meaning of moral terms.15 In this section, I explain my own metaethical approach — not because my moral judgments depend on this metaethical approach (they don’t), but merely to give my readers a sense of “where I’m coming from.”
Terms like “moral patient” and “moral judgment” can mean different things depending on one’s metaethical views, for example whether one is a “moral realist.” I have tried to phrase this report in a relatively “metaethically neutral” way, so that e.g. if you are a moral realist you can interpret “moral judgment” to mean “my best judgment as to what the moral facts are,” whereas if you are a certain kind of moral anti-realist you can interpret “moral judgment” to mean “my best guess as to what I would personally value if I knew more and had more time to think about my values,” and if you have different metaethical views, you might mean something else by “moral judgment.” But of course, my own metaethical views unavoidably lead my report down some paths and not others.
Personally, I use moral terms in such a way that my “moral judgments” are not about objective moral facts, but instead about my own values, idealized in various ways, such as by being better-informed (more on this in Appendix A).16 Under such a view, the question of (e.g.) whether some particular fish is a moral patient, given my values, is a question about whether that fish has certain properties (e.g. conscious experience, or desires that can be satisfied or frustrated) about which I have idealized preferences. For example: if the fish isn’t conscious, I’m not sure I care whether its preferences are satisfied or not, any more than I care whether a (presumably non-conscious) chess-playing computer wins its chess matches or not. But if the fish is conscious (in a certain way), then I probably do care about how much pleasure and how little pain it experiences, for the same reason I care about the pleasure and pain of my fellow humans.
Thus, my aim is not to conduct a conceptual analysis17 of “moral patient,” nor is my aim to discover what the objective moral facts are about which beings are moral patients. Instead, my aim is merely to examine which beings I should consider to be moral patients, given what I predict my values would be if they were better-informed, and idealized in other ways.
I suspect my metaethical approach and my moral judgments overlap substantially with those of at least some other Open Philanthropy Project staff members, and also with those of many likely readers, but I also assume there will be a great deal of non-overlap with my colleagues at the Open Philanthropy Project and especially with other readers. My only means for dealing with that fact is to explain as clearly as I can which judgments I am making and why, so that others can consider what the findings of this report might imply given their own metaethical approach and their own moral judgments.
For example, in Appendix A I discuss my moral intuitions with respect to the following cases:
- An ankle injury that I don’t notice right away.
- A fictional character named Phenumb who is conscious in general but has no conscious feelings associated with the satisfaction or frustration of his desires.
- A short computer program that continuously increments a variable called my_pain. (A minimal sketch of such a program appears below this list.)
- A Mario-playing program that engages in fairly sophisticated goal-directed behavior using a simple search algorithm called A* search.
- A briefly-sketched AI program that controls the player character in a puzzle game in a way that (seemingly/arguably) exhibits some commonly-endorsed indicators of consciousness, and that (seemingly/arguably) satisfies some theories of moral patienthood.
I suspect most readers share my moral intuitions about some of these cases, and have differing moral intuitions with respect to others.
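For concreteness, here is a minimal sketch (my own illustration, not code reproduced from Appendix A) of the first program described above: it does nothing but repeatedly increment a variable with an evocative name, and the question is whether running it has any moral significance at all.

```python
# A minimal sketch of the "my_pain" program described above. It merely
# increments a variable named my_pain once per second, forever.
import time

my_pain = 0
while True:
    my_pain += 1      # the "pain" counter grows without bound
    time.sleep(1)     # wait one second, then increment again
```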
2.2.2 Proposed criteria for moral patienthood
Presumably a cognitively unimpaired adult human is a moral patient, and a rock is not.18 But what about someone in a persistent vegetative state? What about an anencephalic infant? What about a third-trimester human fetus?19 What about future humans? What about chimpanzees, dogs, cows, chickens, fishes, squid, lobsters, beetles, bees, Venus flytraps, and bacteria? What about sophisticated artificial intelligence systems, such as Facebook’s face recognition system or a self-driving car?20 What about a (so-called) self-aware, self-expressive, and self-adaptive camera network?21 What about a non-player character in a first-person shooter video game, which makes plans and carries them out, ducks for cover when the player shoots virtual bullets at it, and cries out when hit?22 What about the enteric nervous system in your gut, which employs about 5 times as many neurons as the brain of a rat, and would continue to autonomously coordinate your digestion even if its main connection with your brain was severed?23 Is each brain hemisphere in a split-brain patient a separate moral patient?24 Can ecosystems or companies or nations be moral patients?25
Such questions are usually addressed by asking whether a potential moral patient satisfies some criteria for moral patienthood. Criteria I have seen proposed in the academic literature include:
- Personhood or interests. (I won’t discuss these criteria separately, as they are usually composed of one or more of the criteria listed below.26)
- Phenomenal consciousness, a.k.a. “subjective experience.” See the detailed discussion below.27
- Valenced experience: This criterion presumes not just phenomenal consciousness but also some sense in which phenomenal consciousness can be “valenced” (e.g. pleasure vs. pain).
- Various sophisticated cognitive capacities such as rational agency, self-awareness, desires about the future, ability to abide by moral responsibilities, ability to engage in certain kinds of reciprocal relationships, etc.28
- Capacity to develop these sophisticated cognitive capacities, e.g. as is true of human fetuses.29
- Less sophisticated cognitive capacities, or the capacity to develop them, e.g. learning, nociception, memory, selective attention, etc.30
- Group membership: e.g. all members of the human species, or all living things.31
Note that moral patienthood can be seen as binary or scalar,32 and the boundary between beings that are and are not moral patients might be “fuzzy” (see below).
It is also important to remember that, whatever criteria for moral patienthood we endorse upon reflection, our intuitive attributions of moral patienthood are probably unconsciously affected33 by factors that we would not endorse if we understood how they were affecting us. For example, we might be more likely to attribute moral patienthood to something if it has a roughly human-like face, even though few if any of us would endorse “possession of a human-like face” as a legitimate criterion of moral patienthood. A similar warning can be made about factors which might affect our attributions of phenomenal consciousness and other proposed criteria for moral patienthood.34
An interesting test case is this video of a crab tearing off its own claw. To me, the crab looks “nonchalant” while doing this, which gives me the initial intuition that the crab must not be conscious, or else it would be “writhing in agony.” But crabs are different from humans in many ways. Perhaps this is just what a crab in conscious agony looks like. Or perhaps not.35
2.2.3 Why I investigated phenomenal consciousness first
The only proposed criterion of moral patienthood I have investigated in any depth thus far is phenomenal consciousness. I chose to examine phenomenal consciousness first because:
- My impression is that phenomenal consciousness is perhaps the most commonly-endorsed criterion of moral patienthood, and that it is often considered to be the most important such criterion (by those who use multiple criteria). Self-awareness (sometimes called “self-consciousness”) and valenced experience are other contenders for being the most commonly-endorsed criterion of moral patienthood, but in most cases it is assumed that the kinds of self-awareness or valenced experience that confer moral patienthood necessarily involve phenomenal consciousness as well.
- My impression is that phenomenal consciousness, or a sort of valenced experience that presumes phenomenal consciousness, is an especially commonly-endorsed criterion of moral patienthood among consequentialists, whose normative theories most easily map onto our mission to “accomplish as much good as possible with our giving.”
- Personally, I’m not sure whether consciousness is the only thing I care about, but it is the criterion of moral patienthood I feel most confident about (for my own values, anyway).
However, it’s worth bearing in mind that most of us probably intuitively morally care about other things besides consciousness. I focus on consciousness in this report not because I’m confident it’s the only thing that matters, but because my report on consciousness alone is long enough already! I hope to investigate other potential criteria for moral patienthood in the future.
I’m especially eager to think more about valenced experiences such as pain and pleasure. As I explain below, my own intuitions are that if a being had conscious experience, but literally none of it was “valenced” in any way, then I might not have any moral concern for such a creature. But in this report, I focus on the issue of phenomenal consciousness itself, and say very little about the issue of valenced experience.
2.3 My approach to thinking about consciousness
In consciousness studies there is so little consensus on anything — what’s meant by “consciousness,” what it’s made of, what kinds of methods are useful for studying it, how widely it is distributed, which theories of consciousness are most promising, etc. (see Appendix Z.5) — that there are no safe guesses about “where someone is coming from” when they write about consciousness. This often made it difficult for me to understand what writers on consciousness were trying to say, as I read through the literature.
To mitigate this problem somewhat for this explanation of my own tentative views about consciousness, I’ll try to explain “where I’m coming from” on consciousness, even if I can’t afford the time to explain in much detail why I make the assumptions I do.
2.3.1 Consciousness, innocently defined
Van Gulick (2014) describes six different senses in which “an animal, person, or other cognitive system” can be regarded as “conscious,” and the four I can explain most quickly are:
- Sentience: capable of sensing and responding to its environment
- Wakefulness: awake (e.g. not asleep or in a coma)
- Self-consciousness: aware of itself as being aware
- What it is like: subjectively experiencing36 a certain “something it is like to be” (Nagel 1974), a.k.a. “phenomenal consciousness” (Block 1995), a.k.a. “raw feels” (Tolman 1932)
When I say “consciousness,” I have in mind the fourth concept.37
In particular, I have in mind a relatively “metaphysically and epistemically innocent” definition, a la Schwitzgebel (2016):38
Phenomenal consciousness can be conceptualized innocently enough that its existence should be accepted even by philosophers who wish to avoid dubious epistemic and metaphysical commitments such as dualism, infallibilism, privacy, inexplicability, or intrinsic simplicity. Definition by example allows us this innocence. Positive examples include sensory experiences, imagery experiences, vivid emotions, and dreams. Negative examples include growth hormone release, dispositional knowledge, standing intentions, and sensory reactivity to masked visual displays. Phenomenal consciousness is the most folk psychologically obvious thing or feature that the positive examples possess and that the negative examples lack…
There are many other examples we can point to.39 For example, when I played sports as a teenager, I would occasionally twist my ankle or acquire some other minor injury while chasing after (e.g.) the basketball, and I didn’t realize I had hurt myself until after the play ended and I exited my flow state. In these cases, a “rush of pain” suddenly “flooded” my conscious experience — not because I had just then twisted my ankle, but because I had twisted it 5 seconds earlier, and was only just then becoming aware of it. The pain I felt 5 seconds after I twisted my ankle is a positive example of conscious experience, and whatever injury-related processing occurred in my nervous system during those initial 5 seconds is, as far as I know, a negative example.
However, I would qualify Schwitzgebel’s extensional definition of consciousness by noting that the negative examples, in particular, are at least somewhat contested. A rock is an obvious negative example for most people, but panpsychists disagree, and it is easy to identify other contested examples.40
More plausible than rock consciousness, I think, is the possibility that somewhere in my brain, there was a conscious experience of my injured ankle before “I” became aware of it. Indeed, there may be many conscious cognitive processes that “I” never have cognitive access to. If this is the case, it can in principle be weakly suggested by certain kinds of studies (see Appendix H), and could in principle be strongly suggested once we have a compelling theory of consciousness.41 But for now, I’ll count the injury-related cognitive processing that happened “before I noticed it” as a likely negative example of conscious experience, while allowing that it could be discovered to be a positive example due to future scientific progress.
So perhaps we should say that “Phenomenal consciousness is the most folk psychologically obvious thing (or set of things) that the uncontested positive examples possess, and that the least-contested negative examples plausibly lack,”42 or something along those lines. Similarly, when I use related terms like “qualia” and “phenomenal properties,” I intend them to be defined by example as above, with as much metaphysical innocence as possible. Ideally, one would “flesh out” these definitions with many more examples and clarifications, but I shall leave that exercise to others.43
Importantly, this definition is as “innocent” and theory-neutral as I know how to make it. On this definition, consciousness could still be physical or non-physical, scientifically tractable or intractable, ubiquitous or rare, ineffable or not-ineffable, “real” or “illusory” (see next section), and so on. And in my revised version of Schwitzgebel’s definition, we are not committed to absolute certainty that purported negative examples will turn out to actually be negative examples as we learn more.
Furthermore, I do not define consciousness as “cognitive processes I morally care about,” as that blends together scientific explanation and moral judgment (see Appendix A) in a way that can be confusing to disentangle and interpret.
No doubt our concept of “consciousness” and related concepts will evolve over time in response to new discoveries, and our evolving concepts will influence which empirical inquiries we prioritize, and those inquiries will suggest further revisions to our concepts, as is typically the case.44 But in our current state of ignorance, I prefer to use a notion of “consciousness” that is defined as innocently as I can manage.
I must also stress that my aim here is not to figure out what we “mean” by “consciousness,” any more than Antonie van Leeuwenhoek (1632-1723) was, in studying microbiology, trying to figure out what people meant by “life.”45 Rather, my aim is to understand how the cluster of stuff we now naively call “consciousness” works. Once we understand how those things work, we’ll be in a better position to make moral judgments about which beings are and aren’t moral patients (insofar as consciousness-related properties affect those judgments, anyway). Whether we continue to use the concept of “consciousness” at that point is of little consequence. But for now, since we don’t yet know the details of how consciousness works, I will use terms like “consciousness” and “subjective experience” to point at the ill-defined cluster of stuff I’m talking about, as defined by example above.
2.3.2 My assumptions about the nature of consciousness
Despite preferring a metaphysically innocent definition of consciousness, I will, for this report, make four key assumptions about the nature of consciousness. It is beyond the scope of this report to survey and engage with the arguments for or against these assumptions; instead, I merely report what my assumptions are, and provide links to the relevant scholarly debates. My purpose here isn’t to contribute to these debates, but merely to explain “where I’m coming from.”
First, I assume physicalism. I assume consciousness will turn out to be fully explained by physical processes.46 Specifically, I lean toward a variety of physicalism called “type A materialism,” or perhaps toward the varieties of “type Q” or “type C” materialism that threaten to collapse into “type A” materialism anyway (see footnote47).
Second, I assume functionalism. I assume that anything which “does the right thing” — e.g., anything which implements a certain kind of information processing — is an example of consciousness, regardless of what that thing is made of.48 Compare to various kinds of memory, attention, learning, and so on: these processes are found not just in humans and animals, but also in, for example, some artificial intelligence systems.49 These kinds of memory, attention, and learning are implemented by a wide variety of substrates (but, they are all physical substrates). On the case for functionalism, see footnote.50
Third, I assume illusionism, at least about human consciousness. What this means is that I assume that some seemingly-core features of human conscious experience are illusions, and thus need to be “explained away” rather than “explained.” Consider your blind spot: your vision appears to you as continuous, without any spatial “gaps,” but physiological inspection shows us that there aren’t any rods and cones where your optic nerve exits the eyeball, so you can’t possibly be transducing light from a certain part of your (apparent) visual field. Knowing this, the job of cognitive scientists studying vision is not to explain how it is that your vision is really continuous despite the existence of your physiological blind spot, but instead to explain why your visual field seems to you to be continuous even though it’s not.51 We might say we are “illusionists” about continuous visual fields in humans.
Similarly, I think some core features of consciousness are illusions, and the job of cognitive scientists is not to explain how those features are “real,” but rather to explain why they seem to us to be real (even though they’re not). For example, it seems to us that our conscious experiences have “intrinsic” properties beyond that which could ever be captured by a functional, mechanistic account of consciousness. I agree that our experiences seem to us to have this property, but I think this “seeming” is simply mistaken. Consciousness (as defined above) is real, of course. There is “something it is like” to be us, and I doubt there is “something it is like” to be a chess-playing computer, and I think the difference is morally important. I just think our intuitions mislead us about some of the properties of this “something it’s like”-ness. (For elaborations on these points, see Appendix F.)
Fourth, I assume fuzziness about consciousness, both between and within individuals.52 In other words, I suspect that once we understand how “consciousness” works, there will be no clear dividing line between individuals that have no conscious experience at all and individuals that have any conscious experience whatsoever (I’ll call this “inter-individual fuzziness”),53 and I also suspect there will be no clear dividing line, within a single individual, between mental states or processes that are “conscious” and those which are “not conscious” (I’ll call this “intra-individual fuzziness”).54
Unfortunately, assuming fuzziness means that “wondering whether it is ‘probable’ that all mammals have [consciousness] thus begins to look like wondering whether or not any birds are wise or reptiles have gumption: a case of overworking a term from folk psychology that has [lost] its utility along with its hard edges.”55 One could say the same of questions about which cognitive processes within an individual are “conscious” vs. “unconscious,” which play a key role in arguments about which beings are conscious.
As the scientific study of “consciousness” proceeds, I expect our naive concept of consciousness to break down into a variety of different capacities, dispositions, representations, and so on, each of which will vary along many different dimensions in different beings. As that happens, we’ll be better able to talk about which features we morally care about and why, and there won’t be much utility to arguing about “where to draw the line” between which beings and processes are and aren’t “conscious.” But, given that we currently lack such a detailed decomposition of “consciousness,” I reluctantly organize this report around the notion of “consciousness,” and I write about “which beings are conscious” and “which cognitive processes are conscious” and “when such-and-such cognitive processing becomes conscious,” while pleading with the reader to remember that I think the line between what is and isn’t “conscious” is extremely “fuzzy” (and as a consequence I also reject any clear-cut “Cartesian theater.”)56 For more on the fuzziness of consciousness, see Appendix G.
My assumptions of physicalism and functionalism are quite confident, but probably don’t affect my conclusions about the distribution of consciousness very much anyway, except to make some common pathways to radical panpsychism less plausible.57 My assumption of illusionism is also quite confident, at least about human consciousness, but I’m not sure it implies much about the distribution question (see Appendix F). My assumption of fuzziness is moderately confident, and implies that the distribution question is difficult even to formulate, let alone answer, though I’m not sure it directly implies much about how extensive we should expect “consciousness” to be.
As with any similarly-sized set of assumptions about consciousness (see Appendix Z.5), my own set of assumptions is highly debatable. Physicalism and functionalism are fairly widely held among consciousness researchers, but are often debated and far from universal.58 Illusionism seems to be an uncommon position.59 I don’t know how widespread or controversial “fuzziness” is.
I’m not sure what to make of the fact that illusionism seems to be endorsed by a small number of theorists, given that illusionism seems to me to be “the obvious default theory of consciousness,” as Daniel Dennett argues.60 In any case, the debates about the fundamental nature of consciousness are well-covered elsewhere,61 and I won’t repeat them here.
A quick note about “eliminativism”: the physical processes which instantiate consciousness could turn out to be so different from our naive guesses about their nature that, for pragmatic reasons, we might choose to stop using the concept of “consciousness,” just as we stopped using the concept of “phlogiston.” Or, we might find a collection of processes that are similar enough to those presumed by our naive concept of consciousness that we choose to preserve the concept of “consciousness” and simply revise our definition of it, as happened when we eventually decided to identify “life” with a particular set of low-level biological features (homeostasis, cellular organization, metabolism, reproduction, etc.) even though life turned out not to be explained by any élan vital or supernatural soul, as many people throughout history62 had assumed.63 But I consider this only a possibility, not an inevitability. In other words, I’m not trying to take a strong position on “eliminativism” about consciousness here — I see that as a pragmatic issue to be decided later (see Appendix Z.6). For now, I think it’s easiest to talk about “consciousness,” “qualia,” and so on as truly existing phenomena that can be defined by example as above, despite those concepts having very “fuzzy” boundaries.
3 Specific efforts to sharpen my views about the distribution question
Now we turn to the key question: What is the likely distribution of phenomenal consciousness — as defined by example — across different taxa? (I call this the “distribution question.”)
Note that in this report, I’ll use “taxa” very broadly to mean “classes of systems,” including:
- Phylogenetic taxa, such as “primates,” “fishes,” “rainbow trout,” “plants,” and “bacteria.”
- Subsets of phylogenetic taxa, such as “humans in a persistent vegetative state” and “anencephalic infants.”
- Biological sub-systems, such as “human enteric nervous systems” and “non-dominant brain hemispheres of split-brain patients.”
- Classes of computer software and/or hardware, such as “deep reinforcement learning agents,” “industrial robots,” “versions of Microsoft Windows,” and “such-and-such application-specific integrated circuit.”
In the academic literature on the distribution question, the three most common argumentative strategies I’ve seen are:64
- Theory: Assume a particular theory of consciousness, then consider whether a specific taxon is likely to be conscious if that theory is true. [More]
- Potentially consciousness-indicating features: Rather than relying on a specific theory of consciousness, instead suggest a list of behavioral and neurobiological/architectural features which intuitively suggest a taxon might be conscious. Then, check how many of those potentially consciousness-indicating features (PCIFs) are possessed by a given taxon. If the taxon possesses all or nearly all the PCIFs, conclude that its members are probably conscious. If the taxon possesses very few of the proposed PCIFs, conclude that its members probably are not conscious. [More]
- Necessary or sufficient conditions: Another approach is to argue that some feature is likely necessary for consciousness (e.g. a neocortex), or that some feature is likely sufficient for consciousness (e.g. mirror self-recognition), without relying on any particular theory of consciousness. If successful, such arguments might not give us a detailed picture of which systems are and aren’t conscious, but they might allow us to conclude that some particular taxa either are or aren’t conscious. [More]
Below, I consider each of these approaches in turn, and then I consider various big-picture considerations that “pull” toward or away from a “consciousness is rare” conclusion (here).
3.1 Theories of consciousness
I briefly familiarized myself with several physicalist functionalist theories of consciousness, listed in Appendix Z.1. Overall, my sense is that the current state of our scientific knowledge is such that it is difficult to tell whether any currently proposed theory of consciousness is promising. My impression from the literature I’ve read, and from the conversations I’ve had, is that many (perhaps most) consciousness researchers agree,65 even though some of the most well-known consciousness researchers are well-known precisely because they have put forward specific theories they see as promising. But if most researchers agreed with their optimism, I would expect theories of consciousness to have been winnowed over the last couple decades, rather than continuing to proliferate,66 under a huge variety of metaphysical and methodological assumptions, as they currently do. (In other words, consciousness studies seems to be in what Thomas Kuhn called a “pre-paradigmatic stage of development.”67)
One might also argue about the distribution question not from the perspective of theories of how consciousness works, but from the perspective of theories of how consciousness evolved (see the list in Appendix Z.5). Unfortunately, I didn’t find any of these theories any more convincing than currently available theories of how consciousness works.68
Given the unconvincing-to-me nature of current theories of consciousness (see also Appendix B), I decided to pursue investigation strategies that do not require me to put much emphasis on any specific theories of consciousness, starting with the “potentially consciousness-indicating features” strategy described in the next section.
First, though, I’ll outline one example theory of consciousness, so that I can explain what I find unsatisfying about currently-available theories of consciousness. In particular, I’ll describe Michael Tye’s PANIC theory,69 which is an example of “first-order representationalism” (FOR) about consciousness.
3.1.1 PANIC as an example theory of consciousness
To explain Tye’s PANIC theory, I need to explain what philosophers mean by “representation.”70 For philosophers, a representation is a thing that carries information about something else. An image of a flower carries information about a flower. The sentence “The flower smells good” carries information about a flower — specifically, the information that it smells good. Perhaps a nociceptive signal represents, to some brain module, that there is tissue damage of a certain sort occurring at some location on the body. There can also be representations that carry information about things that don’t exist, such as Luke Skywalker. If a representation mischaracterizes the thing it is about in some important way, we say that it is misrepresenting its target. Representational theories of consciousness, then, say that if a system does the right kind of representing, then that system is conscious.
Michael Tye’s PANIC theory, for example, claims that a mental state is phenomenally conscious if it has some Poised, Abstract, Nonconceptual, Intentional Content (PANIC). I’ll briefly summarize what that means.
To simplify just a bit, “intentional content” is just a phrase that (in philosophy) means “representational content” or “representational information.” What about the other three terms?
- Poised: Conscious representational contents, unlike unconscious representational contents, must be suitably “poised” to play a certain kind of functional role. Specifically, they are poised to impact beliefs and desires. E.g. conscious perception of an apple can change your belief about whether there are apples in your house, and the conscious feeling of hunger can create a desire to eat.
- Abstract: Conscious representational contents are representations of “general features or properties” rather than “concrete objects or surfaces,” e.g. because in hallucinatory experiences, “no concrete objects need be present at all,” and because under some circumstances two different objects can “look exactly alike phenomenally.”71
- Nonconceptual: The representational contents of consciousness are more detailed than anything we have words or concepts for. E.g. you can consciously perceive millions of distinct colors, but you don’t have separate concepts for red17 and red18, even though you can tell them apart when they are placed next to each other.
So according to Tye, conscious experiences have poised, abstract, nonconceptual, representational contents. If a representation is missing one of these properties, then it isn’t conscious. For example, consider how Tye explains the consistency of his PANIC theory of consciousness with the phenomenon of blindsight:72
…given a suitable elucidation of the “poised” condition, blindsight poses no threat to [my theory]. Blindsight subjects are people who have large blind areas or scotoma in their visual fields due to brain damage… They deny that they can see anything at all in their blind areas, and yet, when forced to guess, they produce correct responses with respect to a range of simple stimuli (for example, whether an X or an O is present, whether the stimulus is moving, where the stimulus is in the blind field).
If their reports are to be taken at face value, blindsight subjects… have no phenomenal consciousness in the blind region. What is missing, on the PANIC theory, is the presence of appropriately poised, nonconceptual, representational states. There are nonconceptual states, no doubt representationally impoverished, that make a cognitive difference in blindsight subjects. For some information from the blind field does reach the cognitive centers and controls their guessing behavior. But there is no complete, unified representation of the visual field, the content of which is poised to make a direct difference in beliefs. Blindsight subjects do not believe their guesses. The cognitive processes at play in these subjects are not belief-forming at all.
Now that I’ve explained Tye’s theory, I’ll use it to illustrate why I find it (and other theories of consciousness) unsatisfying.
In my view, a successful explanation of consciousness would show how the details of some theory (such as Tye’s) predict, with a fair amount of precision, the explananda of consciousness — i.e., the specific features of consciousness that we know about from our own phenomenal experience and from (reliable, validated) cases of self-reported conscious experience (e.g. in experiments, or in brain lesion studies). For example, how does Tye’s theory of consciousness explain the details of the reports we make about conscious experience? What concept of “belief” does the theory refer to, such that the guesses of blindsight subjects do not count as “beliefs,” but (presumably) some other weakly-held impressions do count? Does PANIC theory make any testable, fairly precise predictions akin to the testable prediction Daniel Dennett made on the final page of Consciousness Explained?73
In short, I think that current theories of consciousness (such as Tye’s) simply do not “go far enough” — i.e., they don’t explain enough consciousness explananda, with enough precision — to be compelling (yet). In Appendix B, I discuss this issue in more detail, and describe how one might construct a theory of consciousness that explains more consciousness explananda, with more precision, than Tye’s theory (or any other theory I’m aware of) does.
3.2 Potentially consciousness-indicating features (PCIFs)
3.2.1 How PCIF arguments work
The first theory-agnostic approach to the distribution question that I examined was the approach of “arguments by analogy” or, as I call them, “arguments about potentially consciousness-indicating features (PCIFs).”74
As Varner (2012) explains, analogy-driven arguments appeal to the principle that because things P and Q share many “seemingly relevant” features (a, b, c, …n), and we know that P has some additional property x, we should infer that Q probably has property x, too.
After all, it is by such an analogy that I believe other humans are conscious. I cannot directly observe that my mother is conscious, but she talks about consciousness like I do, she reacts to stimuli like I do, she has a brain that is virtually identical to my own in form and function and evolutionary history, and so on. And since I know I am conscious, I conclude that my mother is conscious as well.75 The analogy between myself and a chimpanzee is weaker than that between myself and my mother, but it is, we might say, “fairly strong.” The analogy between myself and a pig is weaker still. The analogies between myself and a fish are even weaker but, some argue, still strong enough that we should put some substantial probability on fish consciousness.
One problem with analogy-driven arguments, and one reason they are difficult to fully separate from theory-driven arguments, is this: to decide how salient a given analogy between two organisms is, we need some “guiding theory” about what consciousness is, or what its function is. Varner (2012) explains:76
[The] point about needing such a “guiding theory” can be illustrated with this obviously bad argument by analogy:
- Both turkeys (P) and cattle (Q) are animals, they are warm blooded, they have limited stereoscopic vision, and they are eaten by humans (a, b, c, …, and n).
- Turkeys are known to hatch from eggs (x).
- So probably cattle hatch from eggs, too.
One could come up with more and more analogies to list (e.g., turkeys and cattle both have hearts, they have lungs, they have bones, etc., etc.). The above argument is weak, not because of the number of analogies considered, but because it ignores a crucial disanalogy: that cattle are mammals, whereas turkeys are birds, and we have very different theories about how the two are conceived, and how they develop through to birth and hatching, respectively. Another way of putting the point would be to say that the listed analogies are irrelevant because we have a “guiding theory” about the various ways in which reproduction occurs, and within that theory the analogies listed above are all irrelevant…
So in assessing an argument by analogy, we do not just look at the raw number of analogies cited. Rather, we look at both how salient are the various analogies cited and whether there are any relevant disanalogies, and we determine how salient various comparisons are by reference to a “guiding theory.”
Unfortunately, as explained above, it isn’t clear to me what our guiding theory about consciousness should be. Because of this, I present below a table that includes an unusually wide variety of PCIFs that I have seen suggested in the literature, along with a few of my own. From this initial table, one can use one’s own guiding theories to discard or de-emphasize various PCIFs (rows), perhaps temporarily, to see what doing so seems to suggest about the likely distribution of consciousness.
Another worry is that one’s choices about which PCIFs to include, and which taxa to check for those PCIFs, can bias the conclusions of such an exercise.77 To mitigate this problem, my table below is unusually comprehensive with respect to both PCIFs and taxa. As a result, I could not afford the time to fill out most cells in the table, and thus my conclusions about the utility and implications of this approach (below) are limited.
3.2.2 A large (and incomplete) table of PCIFs and taxa
The taxa represented in the table below were selected either (a) for comparison purposes (e.g. human, bacteria), or (b) because they are killed or harmed in great numbers by human activity and are thus plausible targets of welfare interventions if they are thought of as moral patients (e.g. chickens, fishes), or (c) a mix of both. To represent each very broad taxon of interest (e.g. fishes, insects), I chose a representative sub-taxon that has been especially well-studied (e.g. rainbow trout, common fruit fly). More details on the taxa and PCIFs I chose, and why I chose them, are provided in a footnote.78
One column below — “Function sometimes executed non-consciously in humans?” — requires special explanation. Many behavioral and neurofunctional PCIFs can be executed by humans either consciously or non-consciously. In fact, most cognitive processing in humans seems to occur non-consciously, and humans sometimes engage in fairly sophisticated behaviors without conscious awareness of them, as in (it is often argued) cases of sleepwalking, or when someone daydreams while driving a familiar route, or in cases of absence seizures involving various “automatisms” like this one described by Antonio Damasio:79
…a man sat across from me… [and] we talked quietly. Suddenly the man stopped, in midsentence, and his face lost animation; his mouth froze, still open, and his eyes became vacuously fixed on some point on the wall behind me. For a few seconds he remained motionless. I spoke his name but there was no reply. Then he began to move a little, he smacked his lips, his eyes shifted to the table between us, he seemed to see a cup of coffee and a small metal vase of flowers; he must have, because he picked up the cup and drank from it. I spoke to him again and again he did not reply. He touched the vase. I asked him what was going on, and he did not reply, his face had no expression. He did not look at me. Now, he rose to his feet and I was nervous; I did not know what to expect. I called his name and he did not reply. When would this end? Now he turned around and walked slowly to the door. I got up and called him again. He stopped, he looked at me, and some expression returned to his face — he looked perplexed. I called him again, and he said, “What?”
For a brief period, which seemed like ages, this man suffered from an impairment of consciousness. Neurologically speaking, he had an absence seizure followed by an absence automatism, two among the many manifestations of epilepsy…
If such PCIFs are observed in humans both with and without consciousness, then perhaps the case for treating them as indicative of consciousness in other taxa is weaker than one might think:80 e.g. are fishes conscious of their behavior, or are they continuously “sleepwalking”?81
In the table below, a cell is left blank if I didn’t take the time to investigate, or in some cases even think about, what its value should be, or if I investigated briefly but couldn’t find a clear value for the cell. A cell’s value is “n/a” when a PCIF is not applicable to that taxon, and it is “unavailable” when I’m fairly confident the relevant data has not (as of December 2016) been collected. In some cases, data are not available for my taxon of choice, but I guess or estimate the value of that cell from data available for a related taxon (e.g. a closely related species), and in cases where this leaves me with substantial uncertainty about the appropriate value for that cell, I indicate my extra uncertainty with a question mark, an “approximately” tilde symbol (“~”) for scalar data, or both. To be clear: a question mark does not necessarily indicate that domain experts are uncertain about the appropriate value for that cell of the table; it merely means that I am substantially uncertain, given the very few sources I happened to skim. Sources and reasoning for the value in each cell are given in the footnote immediately following each row’s PCIF.
The values of the cells in this table have not been vetted by any domain experts. In many cases, I populated a cell with a value drawn from a single study, without reading the study carefully or trying hard to ensure I was interpreting it correctly. Moreover, I suspect many of the studies used to populate the cells in this table would not hold up under deeper scrutiny or upon a rigorous replication attempt (see below). Hence, the contents of this table should be interpreted as a set of tentative estimates and guesses, collected hastily by a non-expert.
Because the table doesn’t fit on the page, it must be scrolled horizontally and vertically to view all its contents.
POTENTIALLY CONSCIOUSNESS-INDICATING FEATURE | HUMAN | CHIMPANZEE | COW | CHICKEN | RAINBOW TROUT | GAZAMI CRAB | COMMON FRUIT FLY | E. COLI | FUNCTION SOMETIMES EXECUTED NON-CONSCIOUSLY IN HUMANS? | HUMAN ENTERIC NERVOUS SYSTEM |
---|---|---|---|---|---|---|---|---|---|---|
Last common ancestor with humans (Mya)82 | n/a | 6.7 | 96.5 | 311.9 | 453.3 | 796.6 | 796.6 | 4290 | n/a | n/a |
Category: Neurobiological features | ||||||||||
Adult average brain mass (g)83 | 1509 | 385 | 480.5 | 3.5 | 0.2 | n/a | n/a | |||
Neurons in brain (millions)84 | 86060 | unavailable | unavailable | ~221 | unavailable | unavailable | 0.12 | n/a | n/a | 400 |
Neurons in pallium (millions)85 | 16340 | unavailable | unavailable | ~60.7 | unavailable | n/a | n/a | n/a | n/a | n/a |
Encephalization quotient86 | 7.6 | 2.35 | n/a | n/a | n/a | |||||
Has a neocortex87 | Yes | Yes | Yes | No | No | n/a | n/a | n/a | ||
Has a central nervous system88 | Yes | Yes | Yes | Yes | Yes | Yes | Yes | No | n/a | n/a |
Category: Nociceptive features | ||||||||||
Has nociceptors89 | Yes | Yes | Yes | Yes | Yes | Yes? | Yes | Yes? | n/a | Yes? |
Has neural nociceptors90 | Yes | Yes | Yes | Yes | Yes | Yes? | Yes | No | n/a | Yes? |
Nociceptive reflexes91 | Yes | Yes | Yes | Yes | Yes | Yes? | Yes? | Yes? | Yes | |
Physiological responses to nociception or handling92 | Yes | Yes | Yes | Yes | Yes | n/a | n/a | |||
Long-term alteration in behavior to avoid noxious stimuli93 | Yes | |||||||||
Taste aversion learning94 | Yes | |||||||||
Protective behavior (e.g. wound guarding, limping, rubbing, licking)95 | Yes | Yes | Yes? | Yes? | Yes? | |||||
Nociceptive reflexes or avoidant behaviors reduced by analgesics96 | Yes | Yes | Yes | Yes | Yes | |||||
Self-administration of analgesia97 | Yes | Yes | Yes | |||||||
Will pay a cost to access analgesia98 | Yes | |||||||||
Selective attention to noxious stimuli over other concurrent events99 | Yes | Yes | ||||||||
Pain-relief learning100 | Yes | Yes | ||||||||
Category: Other behavioral/cognitive features | ||||||||||
Reports details of conscious experiences to scientists101 | Yes | No | No | No | No | No | No | No | No | No |
Cross-species measures of general cognitive ability102 | ||||||||||
Plastic behavior103 | Yes | |||||||||
Detour behaviors104 | Yes | Yes | ||||||||
Play behaviors105 | Yes | Yes | Yes | Yes? | Yes? | n/a | ||||
Grief behaviors106 | Yes | |||||||||
Expertise107 | Yes | |||||||||
Goal-directed behavior108 | Yes | |||||||||
Mirror self-recognition109 | Yes | Yes | unavailable? | unavailable? | unavailable? | unavailable? | unavailable? | n/a | n/a | |
Mental time travel110 | Yes | |||||||||
Distinct sleep/wake states111 | Yes | |||||||||
Advanced social politics112 | Yes | |||||||||
Uncertainty monitoring113 | Yes | probably? | unavailable? | unavailable? | unavailable? | unavailable? | unavailable? | |||
Intentional deception114 | Yes | |||||||||
Teaching others115 | ||||||||||
Abstract language capabilities116 | Yes | |||||||||
Intentional agency117 | Yes | |||||||||
Understands pointing at distant objects118 | Yes | |||||||||
Non-associative learning119 | Yes | |||||||||
Tool use120 | Yes | |||||||||
Can spontaneously plan for future days without reference to current motivational state121 | Yes | unavailable? | unavailable? | unavailable? | unavailable? | unavailable? | ||||
Can take into account another’s spatial perspective122 | Yes | Yes? | unavailable? | unavailable? | unavailable? | unavailable? | n/a | n/a | ||
Theory of mind123 | Yes |
A fuller examination of the PCIFs approach, which I don’t conduct here, would involve (1) explaining these PCIFs in some detail, (2) cataloging and explaining the strength of the evidence for their presence or absence (or scalar value) for a wide variety of taxa, (3) arguing for some set of “weights” representing how strongly each of these PCIFs indicate consciousness and why, with some PCIFs perhaps being assigned ~0 weight, and (4) arguing for some resulting substantive conclusions about the likely distribution of consciousness.
3.2.3 My overall thoughts on PCIF arguments
Given that my table of PCIFs and taxa is so incomplete, not much can be concluded from it concerning the distribution question. However, my investigation into analogy-driven arguments, and my incomplete attempt to construct my own table of analogies, left me with some impressions I will now share (but not defend).
First, I think that analogy-driven arguments about the distribution of consciousness typically draw from far too narrow a range of taxa and PCIFs. In particular, it seems to me that analogy-driven arguments, as they are typically used, do not take seriously enough the following points:
- Many commonly-used PCIFs are executed both with and without conscious awareness in humans (e.g. at different times), and are thus not particularly compelling evidence for the presence of consciousness in non-humans (without further argument).124
- Many commonly-used PCIFs are possessed by biological subsystems which are typically thought to be non-conscious, for example the enteric nervous system and the spinal cord.125
- Many commonly-used PCIFs are possessed by simple, short computer programs, or in other cases by more complicated programs in widespread use (such as Microsoft Windows or policy gradients). Yet, these programs are typically thought to be non-conscious, even by functionalists.126
- Many commonly-used PCIFs are possessed by plants and bacteria and other very “simple” organisms, which are typically thought to be non-conscious.127 For example, a neuron-less slime mold can store memories, transfer learned behaviors to conspecifics, escape traps, and solve mazes.128
- Analogy-driven arguments typically make use of a very short list of PCIFs, and a very short list of taxa. Including more taxa and PCIFs would, I think, give a more balanced picture of the situation.
Second, I think analogy-driven arguments about consciousness too rarely stress the general point that “functionally similar behavior, such as communicating, recognizing neighbors, or way finding, may be accomplished in different ways by different kinds of animals.”129 This holds true for software as well130 — consider the many different algorithms that can be used to sort information, or implement a shared memory system, or make complex decisions,131 or learn from data.132 Clearly, many behavioral PCIFs can be accomplished by many different means, and for any given behavioral PCIF, it may be the case that it is achieved with the help of conscious awareness in some cases, and without conscious awareness in other cases.
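To make this point concrete, here is a minimal sketch (my own illustration, not drawn from the cited sources): two standard sorting routines that are indistinguishable from their input-output behavior alone, even though their internal sequences of processing are entirely different.

```python
# Two routines with identical input-output behavior but very different
# internal processing. From the "outside" (inputs and outputs alone),
# nothing distinguishes them.

def insertion_sort(items):
    """Builds the result by repeatedly inserting each element into place."""
    result = []
    for x in items:
        i = 0
        while i < len(result) and result[i] < x:
            i += 1
        result.insert(i, x)
    return result

def merge_sort(items):
    """Recursively splits the list and merges the sorted halves."""
    if len(items) <= 1:
        return list(items)
    mid = len(items) // 2
    left, right = merge_sort(items[:mid]), merge_sort(items[mid:])
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

data = [5, 2, 9, 1, 5]
assert insertion_sort(data) == merge_sort(data) == sorted(data)
```

From the outside, nothing about the sorted output reveals which internal process produced it, which is the analogous worry about inferring consciousness from behavioral PCIFs alone.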
Third, analogy-driven arguments typically do not (in my opinion) take seriously enough the possibility of hidden qualia (i.e. qualia inaccessible to introspection; see Appendix H), which has substantial implications for how one should weight the evidential importance of various PCIFs against each other. For example, one might argue that the presence of feature A should be seen as providing stronger evidence of consciousness than the presence of feature B does, because in humans we only observe feature A with conscious accompaniments, but we do sometimes (in humans) observe feature B without conscious accompaniments. But if hidden qualia exist, then perhaps our observations of B “without conscious accompaniments” are mistaken, and these are merely observations of B without conscious accompaniments accessible to introspection.
Fourth, analogy-driven arguments typically do not (in my opinion) take seriously enough the possibility that the ethology literature would suffer from its own “replication crisis” if rigorous attempts at mass replication were undertaken (see here).
3.3 Searching for necessary or sufficient conditions
How else might we learn something about the distribution question, without putting much weight on any single, specific theory of consciousness?
One possibility is to argue that some structure or capacity is likely necessary for consciousness — without relying much on any particular theory of consciousness — and then show that this structure or capacity is present for some taxa and not others. This wouldn’t necessarily prove which taxa are conscious, but it would tell us something about which ones aren’t.
Another possibility is to argue that some structure or capacity is likely sufficient for consciousness, and then show that this structure or capacity is present for some taxa and not others. This wouldn’t say much about which taxa aren’t conscious, but it would tell us something about which ones are.
(Technically, all potential necessary or sufficient conditions are just PCIFs, but with a different “strength” to their indication of consciousness.133 I discuss them separately in this report mainly for organizational reasons.)
Of course, we’d want such necessary or sufficient conditions to be “substantive.” For example, I think there’s a pretty strong case that information processing of some sort is a necessary condition for consciousness, but this doesn’t tell me much about the distribution of consciousness: even bacteria process information. I also think there’s a pretty strong case that “human neurobiology plus detailed self-report of conscious experience” should be seen as sufficient evidence for consciousness, but again this doesn’t tell me anything novel or interesting about the distribution question.
I assume that at this stage of scientific progress we cannot definitively prove a “substantive” necessary or sufficient condition for consciousness, but can we make a “moderately strong argument” for some such necessary or sufficient condition?
Below I consider the case for just one proposed necessary or sufficient condition for consciousness.134 There are other candidates I could have investigated,135 but decided not to at this time.
3.3.1 Is a cortex required for consciousness?
One commonly-proposed necessary condition for phenomenal consciousness is possession of a cortex, or sometimes possession of a neocortex, or possession of a specific part of the neocortex such as the association cortex. Collectively, I’ll refer to these as “cortex-required views” (CRVs). Below, I report my findings about the credibility of CRVs.
Even sources arguing against CRVs often acknowledge that, for many years, it has been commonly believed by cognitive scientists and medical doctors that the cortex is the organ of consciousness in humans,136 though it’s not clear whether they would have also endorsed the much stronger claim that a cortex is required for consciousness in general. However, some experts have recently lost confidence that the cortex is required for consciousness (even just in humans), for several reasons (which I discuss below).137
One caveat about the claims above, and throughout the rest of this section, is that different authors appeal to slightly different definitions of “consciousness,” and so it is not always the case that the authors I cite or quote explicitly argued for or against the view that a cortex is required for “consciousness” as defined above. Still, these arguments are sometimes used by others to make claims about the dependence or non-dependence of consciousness (as defined above) on a cortex, and certainly the arguments could easily be adapted to make such claims.
In this section, I describe some of the evidence used to argue for and against a variety of cortex-required views. As with the table of PCIFs and taxa above, please keep in mind that I am not an expert on the topics reviewed below, and my own understanding of these topics is based on a quick and shallow reading of various overview books and articles, plus a small number of primary studies.138
3.3.1.1 Arguments for cortex-required views
For decades, much (but not all) of the medical literature and the bioethics literature more-or-less assumed one or another CRV, at least in the case of humans, without much argument.139 In those sources which argue for a CRV,140 I typically see two types of arguments:
1. For multiple types of cognitive processing (visual processing, emotional processing, etc.), we have neuroimaging evidence and other evidence showing that we are consciously aware of activity occurring in (some regions of) the cortex, but we are not aware of activity occurring outside (those regions of) the cortex.
2. In cases where (certain kinds of) cortical operations are destroyed or severely disrupted, conscious experience seems to be abolished. When those cortical operations are restored, conscious experience returns.
According to James Rose141 and some other proponents of one or another CRV, the case for CRVs about consciousness comes not from any one line of evidence, but from several converging lines of evidence of types (1) and (2), all of which somewhat-independently suggest that conscious processing must be subserved by certain regions of the cortex, whereas unconscious processing can be subserved by other regions. If this is true, this could provide a very suggestive case in favor of some kind of CRV about consciousness. (Though, this suggestive case would still be undercut somewhat by the possibility of hidden qualia.)
Unfortunately, I did not have the time to survey several different lines of evidence to check whether they converged in favor of some kind of CRV. Instead, I examine below just one of these lines of evidence — concerning conscious and unconscious vision — in order to illustrate how the case for a CRV could be constructed, if other lines of evidence showed a similar pattern of results.
3.3.1.2 Unconscious vision
Probably the dominant142 (but still contested) theory of human visual processing holds that most human visual processing occurs in two largely (but not entirely) separate streams of processing. According to this theory, the ventral stream, also known as “vision for perception,” serves to recognize and identify objects and people, and typically leads to conscious visual experience. The dorsal stream, also known as “vision for action,” serves to locate objects precisely and interact with them, but is not part of conscious experience.143 Below, I summarize what this theory says, but not the evidence in its favor, which I summarize in Appendix C.
These two streams are thought to be supported by different regions of the cortex, as shown below:
In primates, most visual information from the retina passes through the lateral geniculate nucleus (LGN) in the thalamus on its way to the primary visual cortex (V1) in the occipital lobe at the back of the skull.144 From there, visual information is passed from V1 to two separate streams of processing. The ventral stream leads into the inferotemporal cortex in the temporal lobe, while the dorsal stream leads into the posterior parietal cortex in the parietal lobe. (The dorsal stream also receives substantial input from several subcortical structures in addition to its inputs from V1, whereas the ventral stream depends almost entirely on inputs from V1.145)
To illustrate how these systems are thought to interact, consider an analogy to the remote control of a robot in a distant or hostile environment (e.g. Mars):146
In tele-assistance, a human operator identifies and “flags” the goal object, such as an interesting rock on the surface of Mars, and then uses a symbolic language to communicate with a semi-autonomous robot that actually picks up the rock.
A robot working with tele-assistance is much more flexible than a completely autonomous robot… Autonomous robots work well in situations such as an automobile assembly line, where the tasks they have to perform are highly constrained and well specified… But autonomous robots… [cannot] cope with events that its programmers [have] not anticipated…
At present, the only way to make sure that the robot does the right thing in unforeseen circumstances is to have a human operator somewhere in the loop. One way to do this is to have the movements or instructions of the human operator… simply reproduced in a one-to-one fashion by the robot… [but this setup] cannot cope well with sudden changes in scale (on the video monitor) or with a significant delay between the communicated action and feedback from that action [as with a Mars robot]. This is where tele-assistance comes into its own.
In tele-assistance the human operator doesn’t have to worry about the real metrics of the workspace or the timing of the movements made by the robot; instead, the human operator has the job of identifying a goal and specifying an action toward that goal in general terms. Once this information is communicated to the semi-autonomous robot, the robot can use its on-board range finders and other sensing devices to work out the required movements for achieving the specified goal. In short, tele-assistance combines the flexibility of tele-operation with the precision of autonomous robotic control.
…[By analogy,] the perceptual systems in the ventral stream, along with their associated memory and other higher-level cognitive systems in the brain, do a job rather like that of the human operator in tele-assistance. They identify different objects in the scene, using a representational system that is rich and detailed but not metrically precise. When a particular goal object has been flagged, dedicated visuomotor networks in the dorsal stream, in conjunction with output systems elsewhere in the brain… are activated to perform the desired motor act. In other words, dorsal stream networks, with their precise egocentric coding of the location, size, orientation, and shape of the goal object, are like the robotic component of tele-assistance. Both systems have to work together in the production of purposive behavior — one system to help select the goal object from the visual array, the other to carry out the required metrical computations for the goal-directed action.
If something like this account is true — and it might not be; see Appendix C — then it could be argued to fit with a certain kind of CRV, according to which some parts of the cortex — those which include the ventral stream but not the dorsal stream — are required for conscious experience (at least in humans).147
3.3.1.3 Suggested other lines of evidence for CRVs
On its own, this theory of conscious and unconscious vision is not very suggestive, but if several different types of cognitive processing tell a similar story — with all of them seeming to depend on certain areas of the cortex for conscious processing, with unconscious processing occurring elsewhere in the brain — then this could add up to a suggestive argument for some kind of CRV.
Here are some other bodies of evidence that could (but very well might not) turn out to collectively suggest some sort of CRV about consciousness:
- Preliminary evidence suggests there may be multiple processing streams for other sense modalities, too, but I haven’t checked whether this evidence is compatible with CRVs.148
- There is definitely “unconscious pain” (technically, unconscious nociception), but I haven’t checked whether the evidence is CRVs-compatible. (See my linked sources on this in Appendix D.)
- There are both conscious and unconscious aspects to our emotional responses, but I haven’t checked whether the relevant evidence is CRVs-compatible. (See Appendix Z.4.)
- Likewise, there are both conscious and unconscious aspects of (human) learning and memory,149 but I haven’t checked whether the relevant evidence is CRVs-compatible.
- According to Laureys (2005), patients in a persistent vegetative state (PVS), who are presumed to be unconscious, show greatly reduced activity in the associative cortices, and also show disrupted cortico-cortical and thalamo-cortical connectivity. Laureys also says that recovery from PVS is accompanied by restored connectivity of some of these thalamo-cortical pathways.150
- The mechanism by which general anesthetics abolish consciousness in humans isn’t well-understood, but at least one live hypothesis is that (at least some) general anesthetics abolish consciousness primarily by disrupting cortical functioning. If true, perhaps this account would lend some support to some CRVs.151
- Coma states, in which consciousness is typically assumed to be absent, seem to be especially associated with extensive cortical damage.152
3.3.1.4 Overall thoughts on arguments for CRVs
I have not taken the time to assess the case for CRVs about consciousness. I can see how such a case could be made, if multiple lines of evidence about a variety of cognitive functions aligned with the suggestive evidence concerning the neural substrates of conscious vs. unconscious vision. On the other hand, my guess is that if I investigated these additional lines of evidence, I would find the following:
- I expect I would find that the evidence base on these other topics is less well-developed than the evidence base concerning conscious vs. unconscious vision, since vision neuroscience seems to be the most “developed” area within cognitive neuroscience.
- I expect I would find that the evidence concerning which areas of the brain subserve specifically conscious processing of each type would be unclear, and subject to considerable expert debate.
- I expect I would find that the underlying studies often suffer from the weaknesses described in Appendix Z.8.
And, as I mentioned earlier, the possibility of hidden qualia undermines the strength of any pro-CRVs argument one could make from empirical evidence about which processes are conscious vs. unconscious, since the “unconscious” processes might actually be conscious, but in a way that is not accessible to introspection.
Overall, then, my sense is that the case for CRVs about consciousness is currently weak or at least equivocal, though I can imagine how the case could turn out to be quite suggestive in the future, after much more evidence is collected in several different subdomains of neurology and cognitive neuroscience.
3.3.1.5 Arguments against cortex-required views
One influential paper, Merker (2007), pointed to several pieces of evidence that seem, to some people, to argue against CRVs. One piece of evidence is the seemingly-conscious behavior of hydranencephalic children,153 whose cerebral hemispheres are almost entirely missing and replaced by cerebrospinal fluid filling that part of the skull:
These children are not only awake and often alert, but show responsiveness to their surroundings in the form of emotional or orienting reactions to environmental events…, most readily to sounds, but also to salient visual stimuli… They express pleasure by smiling and laughter, and aversion by “fussing,” arching of the back and crying (in many gradations), their faces being animated by these emotional states. A familiar adult can employ this responsiveness to build up play sequences predictably progressing from smiling, through giggling, to laughter and great excitement on the part of the child. The children respond differentially to the voice and initiatives of familiars, and show preferences for certain situations and stimuli over others, such as a specific familiar toy, tune, or video program, and apparently can even come to expect their regular presence in the course of recurrent daily routines.
…some of these children may even take behavioral initiatives within the severe limitations of their motor disabilities, in the form of instrumental behaviors such as making noise by kicking trinkets hanging in a special frame constructed for the purpose (“little room”), or activating favorite toys by switches, presumably based upon associative learning of the connection between actions and their effects… The children are, moreover, subject to the seizures of absence epilepsy. Parents recognize these lapses of accessibility in their children, commenting on them in terms such as “she is off talking with the angels,” and parents have no trouble recognizing when their child “is back.”
In a later survey of 108 primary caregivers of hydranencephalic children (Aleman & Merker 2014), 94% of respondents said they thought their child could feel pain, and 88% said their child takes turns with the caregiver during play activities.
However, these findings are not a certain refutation of CRVs, for at least three reasons. First, hydranencephalic children cannot provide verbal reports of conscious experiences (if they have any). Second, it is typically the case that hydranencephaly allows for small portions of the cortex to develop, which might subserve conscious experience. Third, there is the matter of plasticity: perhaps consciousness normally requires certain regions of the cortex, but in cases of hydranencephaly, other regions are able to support conscious functions. Nevertheless, these observations of hydranencephalic children are suggestive to many people that CRVs cannot be right.
Another line of evidence against CRVs comes from isolated case studies in which conscious experience remains despite extensive cortical damage.154 But once again these cases are not a definitive refutation of CRVs, because “extensive cortical damage” is not the same as “complete destruction of the cortex,” and also because of the issue of plasticity mentioned above.
Or, consider the case of Mike the headless chicken. On September 10, 1945, a Colorado farmer named Lloyd Olsen decapitated a chicken named “Mike.” The axe removed most of Mike’s head, but left intact the jugular vein, most of the brain stem, and one ear. Mike got back up and began to strut around as normal. He survived another 18 months, being fed with milk and water from an eyedropper, as well as small amounts of grit (to help with digestion) and small grains of corn, dropped straight into the exposed esophagus. Mike’s behavior was reported to be basically normal, albeit without sight. For example, according to various reports, he tried to crow (which made a gurgling sound), he could hear and respond to other chickens, and he tried to preen himself (which didn’t accomplish much without a beak). He was taken on tour, photographed for dozens of magazines and newspapers, and examined by researchers at the University of Utah.155
For those who endorse a CRV, Mike could be seen as providing further evidence that a wide variety of behaviors can be produced without any conscious experience. For those who reject CRVs, Mike could be seen as evidence that the brain stem alone can be sufficient for consciousness.
Another problem remains. Even if it could be proved that particular cortical structures are required to produce conscious experiences in humans, this wouldn’t prove that other animals can’t be phenomenally conscious via other brain structures. For example, it might be the case that once the cortex evolved in mammals, some functions critical to consciousness “migrated” from subcortical structures to cortical ones.156 To answer this question, we’d need to have a highly developed theory of how consciousness works in general, and not just evidence about its necessary substrates in humans.
Several authors have summarized additional arguments against CRVs,157 but I don’t find any of them to be even moderately conclusive. I do, however, think all this is sufficient to conclude that the case for CRVs is unconvincing. Hence, I don’t think there is even a “moderately strong” case for the cortex as a necessary condition for phenomenal consciousness (in humans and animals). But, I could imagine the case becoming stronger (or weaker) with further research.
3.4 Big-picture considerations that pull toward or away from “consciousness is rare”
Given that (1) I lack a satisfying theory of consciousness, (2) I don’t know which PCIFs are actually consciousness-indicating, and (3) I haven’t found any convincing and substantive necessary or sufficient conditions for consciousness, my views about the distribution of consciousness at this point seem to be quite sensitive to how much weight I assign to various “big picture considerations.” I explain four of these below.
Putting substantial weight on some of these considerations pulls me toward a “consciousness is rare” conclusion, whereas putting more weight on other considerations pulls me toward a “consciousness is extensive” conclusion.
3.4.1 Consciousness inessentialism
How did artificial intelligence (AI) researchers build machines that outperform many or most humans (and in some cases all humans) at intelligent tasks such as playing Go or DOOM, driving cars, reading lips, and translating texts between languages? They did not do it by figuring out how consciousness works, even though consciousness might be required for how we do those things. In my experience, most AI researchers don’t think they’ll need to understand consciousness to successfully automate other impressive feats of human intelligence, either,158 and that fits with my intuitions as well (though I won’t argue the point here).
That said, AI scientists might produce consciousness as a side effect of trying to automate certain intelligent behaviors, without first understanding how consciousness works, just as AI researchers at Google DeepMind produced a game-playing AI that learned to exploit the “tunneling” strategy in Breakout!, even though the AI programmers didn’t know about that strategy themselves, and didn’t specifically write the AI to use it. Perhaps it is even the case that certain intelligent behaviors can only be achieved with the participation of conscious experience, even if the designers don’t need to understand consciousness themselves to produce a machine capable of exhibiting those intelligent behaviors.
My own intuitions lean the other way, though. I think it’s plausible that, “for any intelligent activity i, performed in any cognitive domain d, even if we do i with conscious accompaniments, i can in principle be done without these conscious accompaniments.” Flanagan (1992) called this view “conscious inessentialism,”159 but I think it is more properly called “consciousness inessentialism.”160
Defined this way, consciousness inessentialism is not the same as epiphenomenalism about consciousness, nor does it require that one think philosophical zombies are empirically possible. (Indeed, I reject both those views.) Instead, consciousness inessentialism merely requires that it be possible in principle for a system to generate the same input-output behavior as a human (or some other conscious system), without that system being conscious.
To illustrate this view, imagine replacing a human brain with a giant lookup table:
A Giant Lookup Table… is when you implement a function as a giant table of inputs and outputs, usually to save on runtime computation. If my program needs to know the multiplicative product of two inputs between 1 and 100, I can write a multiplication algorithm that computes each time the function is called, or I can precompute a Giant Lookup Table with 10,000 entries and two indices. There are times when you do want to do this, though not for multiplication — times when you’re going to reuse the function a lot and it doesn’t have many possible inputs; or when clock cycles are cheap while you’re initializing, but very expensive while executing.
Giant Lookup Tables [GLUTs] get very large, very fast. A GLUT of all possible [twenty-remark] conversations with ten words per remark, using only 850-word Basic English, would require 7.6 * 10^585 entries.
Replacing a human brain with a Giant Lookup Table of all possible sense inputs and motor outputs (relative to some fine-grained digitization scheme) would require an unreasonably large amount of memory storage. But “in principle”… it could be done.
The GLUT is not a [philosophical] zombie… because it is microphysically dissimilar to a human.
A GLUT of a human brain is not physically possible because it is too large to fit inside the observable universe, let alone inside a human skull, but it illustrates the idea of consciousness inessentialism: if it were possible, for example via a hypercomputer, a GLUT would exhibit all the same behavior as a human — including talking about consciousness, writing articles about consciousness, and so on — without (I claim) being conscious.
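To make the lookup-table idea concrete, here is a minimal sketch of the multiplication example from the quoted passage (the function names are mine): a precomputed table and an ordinary computed function whose input-output behavior is identical.

```python
# A lookup table for multiplication of inputs 1..100, precomputed in advance
# (10,000 entries), versus a function that computes each product on demand.
# Their input-output behavior is identical; only the internal process differs.

LOOKUP = {(a, b): a * b for a in range(1, 101) for b in range(1, 101)}

def multiply_via_table(a, b):
    """Returns the product by table lookup, doing no arithmetic at call time."""
    return LOOKUP[(a, b)]

def multiply_via_computation(a, b):
    """Returns the product by ordinary arithmetic."""
    return a * b

assert multiply_via_table(37, 42) == multiply_via_computation(37, 42) == 1554
```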
Can we imagine a physically possible system that would exhibit all human input-output behavior without consciousness? Unfortunately, answering that question seems to depend on knowing how consciousness works. But, I think the answer might very well turn out to be “yes,” largely because, as every programmer knows, there are almost always many, many ways to write any given computer program (defined in terms of its input-output behavior), and those different ways of writing the program typically use different internal sequences of information processing and different internal representations. In most contexts, what separates a “good” programmer from a mediocre one is not that the good programmer can find a way to write a program satisfying some needed input-output behavior while the mediocre programmer cannot; rather, what separates them is that the good programmer can write the needed program using particular sequences of information processing and internal representations that (1) are easy to understand and debug, (2) are modular and thus easy to modify and extend, (3) are especially computationally efficient, and so on.
Similarly, if consciousness is instantiated by some kinds of sequences of information processing and internal representations but not others, then it seems likely to me that there are many cognitive algorithms that could give rise to my input-output behavior without the particular sequences of information processing and internal representations that instantiate consciousness. (Again, remember that as with the GLUT example, this does not imply epiphenomenalism, nor the physical possibility of zombies.)
For example, suppose (for the sake of illustration) that consciousness is only instantiated in a human brain if, among other necessary conditions, some module A shares information I with some module B in I’s “natural” form. Afterward, module B performs additional computations on I, and passes along the result to module C, which eventually leads to verbal reports and stored memories of conscious experience. But now, suppose that my brain is rewired such that module A encrypts I before passing it to B, and B knows how to perform the requisite computations on I via fully homomorphic encryption, but B doesn’t know how to decrypt the encrypted version of I. Next, B passes the result to module C which, as a result of the aforementioned rewiring, does know how to decrypt the encrypted version of I, and passes it along (in unencrypted form) so that it eventually results in verbal reports and stored memories of conscious experience. In this situation (with the hypothesized “rewiring”), the same input-output behavior as before is always observed, even my verbal reports about conscious experience, but consciousness is never instantiated inside my brain, because module B never sees information I in its natural form.161
Of course, it seems unlikely in the extreme that the human brain implements fully homomorphic encryption — this is just an illustration of the general principle that there are many ways to compute a quite sophisticated behavioral function, and it’s plausible that not all of those methods also compute an internal cognitive function that is sufficient for consciousness.
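Purely as an illustration of the structure of this hypothetical rewiring, here is a sketch in which the middle module computes on information it never sees in its “natural” form. This is a toy additive cipher, not real fully homomorphic encryption, and the module names are my own invention.

```python
# Toy illustration of the "rewired" pipeline: A encrypts, B computes on the
# ciphertext without being able to read it, C decrypts and reports.
# E(x) = (x + KEY) mod N is additively homomorphic for adding constants:
# adding c to the ciphertext yields the encryption of (x + c).

N = 10_000
KEY = 4321  # shared by modules A and C, but unknown to module B

def module_a_encrypt(info):
    """Module A: passes information I along only in encrypted form."""
    return (info + KEY) % N

def module_b_compute(ciphertext, constant):
    """Module B: performs its computation (adding a constant) directly on the
    ciphertext; it never sees the information in its natural form."""
    return (ciphertext + constant) % N

def module_c_decrypt(ciphertext):
    """Module C: decrypts the result and makes it available for report."""
    return (ciphertext - KEY) % N

info = 17
result = module_c_decrypt(module_b_compute(module_a_encrypt(info), 25))
assert result == info + 25  # same input-output behavior as an unencrypted pipeline
```

The pipeline’s input-output behavior matches that of an unencrypted version, even though the intermediate module only ever handles an encrypted representation.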
Unfortunately, it’s still unclear whether there is any system of a reasonable scale that would replicate my behavior without any conscious experience. That seems to depend in part on whether the computations necessary for human consciousness are highly specific (akin to those of a specific, small-scale device driver but not other device drivers), or whether consciousness is a result of relatively broad, general kinds of information processing (a la global workspace theory). In the former case, one might imagine relatively small (but highly unlikely to have evolved) tweaks to my brain that would result in identical behavior without conscious experience. In the latter case, this is more difficult to imagine, at least without vastly more computational resources, e.g. the computational resources required to execute a large fraction of my brain’s total information processing under fully homomorphic encryption.
Moreover, even if I’m right about consciousness inessentialism, I also think it’s quite plausible that, as a matter of contingent fact, many animals do have conscious experiences (of some sort) accompanying some or all of their sophisticated behaviors. In fact, at least for the animals with relatively similar brains to ours (primates, and probably all mammals), it seems more reasonable than not to assume they do have conscious experiences (at least, before consulting additional evidence), simply because we have conscious experiences, we share a not-so-distant common ancestor with those animals,162 and their brains seem similar to ours in many ways (see Appendix E).
Still, if you find consciousness inessentialism as plausible as I do, then you can’t take for granted that if an animal exhibits certain sophisticated behaviors, it must be conscious.163 On the other hand, if you find consciousness inessentialism highly implausible, then perhaps at least some sophisticated behaviors should be taken (by you) to be very strong evidence of consciousness. In this way, putting more weight on consciousness inessentialism should shift one’s view in the direction of “consciousness might be rare,” whereas giving little credence to consciousness inessentialism should shift one’s view in the direction of “consciousness might be widespread.”
3.4.2 The complexity of consciousness
How complex is consciousness? That is, how many “components” are needed, and how precisely must they be organized, for a conscious experience to be instantiated? The simpler consciousness is, the more extensive it should be (all else equal), for the same reason that both “unicellular life” and “multicellular life” are rarer than “life,” and for the same reason that instances of both “Microsoft Windows” and “Mac OS” are rarer than instances of “personal computer operating systems.”
To illustrate this point, I’ll very briefly survey some families of theories of consciousness which differ in how complex they take consciousness to be.
Panpsychism posits that consciousness is a fundamental feature of reality, e.g. a fundamental property in physics.164 This, of course, is as simple (and therefore ubiquitous) as consciousness could possibly be.
Compared to panpsychism, first-order representationalism (FOR) posits a substantially more complex account of consciousness. Above, I described an example FOR theory, Michael Tye’s PANIC theory. On Tye’s theory of consciousness (and on other FOR theories), consciousness is a much more specific, complicated, and rare sort of thing than it is on a panpsychist view. If conscious states are states with PANIC, then we are unlikely to find them in carbon dioxide molecules, stars, and rocks,165 as the panpsychist claims we would. Nevertheless, the PANIC theory seems to imply that consciousness might be relatively extensive within the animal kingdom. When Tye applies his own theory to the distribution question, he concludes that even some insects are clearly conscious.166 Personally, I think a PANIC theory of consciousness also seems to imply that some webcams and many common software programs are also conscious, though I suspect Tye disagrees that his theory has that implication.
Meanwhile, higher-order approaches to consciousness typically posit an even more complex account of consciousness, relative to FOR theories like PANIC. Carruthers (2016) explains:167
According to first-order views, phenomenal consciousness consists in analog or fine-grained contents that are available to the first-order processes that guide thought and action. So a phenomenally-conscious percept of red, for example, consists in a state with the analog content red which is tokened in such a way as to feed into thoughts about red, or into actions that are in one way or another guided by redness…
The main motivation behind higher-order theories of consciousness… derives from the belief that all (or at least most) mental-state types admit of both conscious and unconscious varieties. Almost everyone now accepts, for example, …that beliefs and desires can be activated unconsciously. (Think, here, of the way in which problems can apparently become resolved during sleep, or while one’s attention is directed to other tasks. Notice, too, that appeals to unconscious intentional states are now routine in cognitive science.) And then if we ask what makes the difference between a conscious and an unconscious mental state, one natural answer is that conscious states are states that we are aware of… That is to say, these are states that are the objects of some sort of higher-order representation…
As a further example, consider the case of unconscious vision, discussed here. Visual processing in the dorsal stream seems to satisfy something very close to Tye’s PANIC criteria,168 and yet these processes are unconscious (as far as anyone can tell). Hence the suggestion that more is required — specifically, that some “higher-order” processing is required. For example, perhaps it’s the case that for visual processing to be conscious, some circuits of the brain need to represent parts of that processing as being attended-to by the self, or something like that.
It’s easy to see, then, why higher-order theories will tend to be more complex than FOR theories. Basically, higher-order theories tend to assume FOR-style information processing, but they say that some additional processing is required in order for consciousness to occur. If higher-order theories are right, then (all else equal) we should expect consciousness to be rarer than if first-order theories are right.
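As a purely schematic sketch of that structural relationship (the classes and criteria below are my own toy constructions, not a statement of any particular theory), the higher-order criterion can be written as the first-order criterion plus an extra requirement, which is why, all else equal, fewer systems will satisfy it.

```python
# Purely schematic: the point is only that the higher-order criterion includes
# everything the first-order criterion requires, plus a further condition,
# so (all else equal) fewer states/systems will satisfy it.

class MentalState:
    def __init__(self, content, guides_action, has_higher_order_representation):
        self.content = content                      # e.g. analog content "red"
        self.guides_action = guides_action          # available to guide thought/action?
        self.has_higher_order_representation = has_higher_order_representation

def conscious_by_first_order_criterion(state):
    return state.guides_action

def conscious_by_higher_order_criterion(state):
    return state.guides_action and state.has_higher_order_representation

percept = MentalState("red", guides_action=True, has_higher_order_representation=False)
assert conscious_by_first_order_criterion(percept)
assert not conscious_by_higher_order_criterion(percept)
```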
What about illusionist theories? As of today, most illusionist theories seem to be at least as complex as higher-order theories tend to be.169 But perhaps in the future illusionists will put forward compelling illusionist theories which do not imply a particularly complex account of consciousness.170
So, how complex will consciousness turn out to be?
Much of the academic debate over the complexity of consciousness and the distribution question has taken place in the context of the debate between first-order and higher-order approaches to consciousness, and experts seem to agree that higher-order theories imply a less-extensive distribution of consciousness than first-order theories do. If I assume this framing for the debate, I generally find myself more sympathetic with higher-order theories (for the usual reason summarized by Carruthers above), though I think there are some reasons to take a first-order (or at least “lower-order”171) approach as a serious possibility (see Appendix H).
However, I think the first-order / higher-order dichotomy is a very limiting way to argue about theories of consciousness, the complexity of consciousness, and the distribution question. I would much rather see these debates transition to being debates about proposed (and at least partially coded) cognitive architectures — architectures which don’t neatly fit into the first-order / higher-order dichotomy. (I say more about this here.)
One final comment on the likely complexity of consciousness is that, as far as I can tell, early scientific progress (outside physics) tends to lead to increasingly complex models of the phenomena under study. If this pattern is real, and holds true for the study of consciousness, then perhaps future accounts of consciousness will tend to be more complex than the accounts we have come up with thus far. (For more on this, see Appendix Z.9.)
3.4.3 We continue to find that many sophisticated behaviors are more extensive than we once thought
My third “big-picture consideration” is motivated by the following fact: turn to almost any chapter of a recent textbook on animal cognition,172 check the publication years of the cited primary studies, and you’ll find an account that could be summarized like this: “A few decades ago, we didn’t know that [some animal taxon] exhibited [some sophisticated behavior], suggesting they may have [some sophisticated cognitive capacity]. Today, we have observed multiple examples.” In this sense, at least, it seems true that “research constantly moves forward, and the tendency of research is to extend the number of animals that might be able to suffer, not decrease it.”173
Consider, for example, these (relatively) recent reported discoveries:174
- Several forms of tool use and tool manufacture by insects and other invertebrates175
- Complex food preparation by a wild dolphin176
- The famous feats of Alex the parrot
- Fish using transitive inference to learn their social rank177
- Fairly advanced puzzle-solving by New Caledonian crows178
- Western scrub-jays planning for future days without reference to their current motivational states179
Are these observations of sophisticated animal behavior trustworthy? In many cases, I have my doubts. Studies of animal behavior often involve very small sample sizes, no controls (or poorly constructed controls), and inadequate reporting. Many studies fail to replicate.180 In general, the study of animal behavior seems to suffer from many of the weaknesses of scientific methodology that I summarize (for other fields) in Appendix Z.8.181 On the other hand, the trend in the reported sophistication of animal behaviors seems clear. Can it all be a mirage?
I suspect not, for at least two reasons.
First, one skeptical explanation of the trend described above might be the following: “People who decide to devote their careers to ethology are more likely to be people who are intuitively empathic toward (and think highly of) animals, and they’re just ‘finding’ what they want to find, and the reason for the trend is just that it takes time for a small number of scientists to get around to running the right sorts of experiments with an expanding set of species.” The part about “it takes time” is almost certainly true whether the field is generally biased or not, but what can we say about the likelihood of the proposed bias itself?
One piece of evidence is this: for most of the 20th century, ethologists were generally reluctant to attribute sophisticated cognitive capacities to animals, in part due to the dominating influence182 of Morgan’s Canon of 1894, which states that “In no case is an animal activity to be interpreted in terms of higher psychological processes if it can be fairly interpreted in terms of processes which stand lower in the scale of psychological evolution and development.” Or as Breed (2017) puts it: “Do not over-credit animals with human-like capacities, look for the simplest possible explanations for animal behavior.” According to Breed, “It really has been only in the last 20 years that consideration of animal cognition, thoughts and feelings has gained substantial scientific credibility.”183
Given this history, it doesn’t seem that those who devote their careers to the study of animal behavior are in general heavily biased toward ‘finding’ more sophisticated cognitive capacities in animals than those animals actually possess.184 If anything, my quick read of the history of the field is more consistent with a story according to which ethologists have generally been too biased against the possibility of sophisticated animal capacities, and are only recently overcoming that initial bias.
A second reason I suspect the trend of discovering sophisticated capacities in an ever-widening set of species is not entirely a mirage is that even the most skeptical ethologists seem to accept the general trend. For example, consider Clive Wynne, a comparative psychologist at Arizona State University. Wynne avoids talking of animal “thought” or “intelligence,” refers to Morgan’s Canon as “the most awesome weapon in animal psychology,” remains agnostic about whether animal behavior is driven by internal representations of the world, thinks that not even chimpanzees have been shown (yet) to have a theory of mind, does not think mirror self-recognition demonstrates the possession of a self-concept, does not think teaching of conspecifics has yet been demonstrated in apes, and does not think language-trained apes like Kanzi have demonstrated grammatical competence.185 And yet, his textbook on animal cognition (Wynne & Udell 2013) exhibits the same trend as the other textbooks do, and he seems to more-or-less accept the reported evidence concerning a variety of relatively sophisticated cognitive capacities, for example: the ability to count, in Alex the Parrot; the ability to form concepts of individual persons in a natural environment, in northern mockingbirds; the ability of pigeons to discriminate paintings by which school of art produced them (the Monet school vs. the Picasso school); the ability of some birds to modify and/or use tools to retrieve food, without training; the ability of several species to perform transitive inference; the ability of several species to follow human pointing; the teaching of conspecifics by meerkats; and the dog Chaser’s 1000+ word vocabulary of both nouns and verbs.186
Of course, it’s possible that even relatively skeptical ethologists like Wynne are still not skeptical enough. Indeed, I suspect this is generally true, given that the ethology literature seems to suffer from the same problems as medical and social science literatures do (see Appendix Z.8), but there is not yet as much discussion of these problems and how to solve them (in ethology) as there now is in medicine and the social sciences.187 Even still, I suspect a large fraction of past findings (concerning sophisticated animal behavior) — at least, the findings which have persuaded even relatively skeptical ethologists such as Wynne — would be mostly upheld by rigorous replication attempts. I don’t know which findings would be retained, and that makes it difficult for me to fill out my table of PCIFs with much confidence, but I suspect the broad trend would survive, even if (say) 60% of past findings accepted by Wynne failed to replicate when using more rigorous study designs.
If I’m right, and the general trend is real, then I have every reason to think the trend will continue: the more we look, the more we’ll find that a wide variety of animals, including many simple animals, engages in fairly sophisticated behaviors. The question is: Exactly which behaviors will eventually be observed in which taxa, and how indicative of consciousness are those behaviors?
3.4.4 Rampant anthropomorphism
My fourth “big-picture consideration” is this: we humans are nearly-incorrigible anthropomorphizers. We seem to be hard-wired to attribute human-like cognitive traits and emotions to non-humans, including animals, robots, chatbots, inanimate objects, and even simple geometric shapes. Indeed, after extensive study of the behavior of unicellular organisms, the microbiologist H.S. Jennings was convinced that (e.g.) if an amoeba was large enough that humans came into regular contact with it, we would assume it is conscious for the same reasons we instinctively assume a dog is conscious.188 As cognitive scientist Dan Sperber put it, “Attribution of mental states is to humans as echolocation is to the bat.”189
Of course, a general warning about anthropomorphism is no substitute for reading through a great many examples of (false) anthropomorphisms, from Clever Hans onward, which you can find in the sources I list in a footnote.190
Here, I’ll give but one example of flawed anthropomorphism. My sense is that many people, when imagining what it must be like for an animal to undergo some experience, imagine that the animal’s subjective experience is similar to their own, minus various kinds of “sophisticated reasoning,” such as long-term planning, a stream of thoughts in a syntactically advanced language, and occasional use of explicit logical reasoning about the expected results of different possible actions one could take. However, studies of animal behavior and neurology tend to suggest the differences between human and animal experiences are much more profound than this. Consider, for example, studies of “interocular transfer.” Dennett gives an example:191
What is it like to be a rabbit? Well you may think that it’s obvious that rabbits have an inner life that’s something like ours. Well, it turns out that if you put a patch over a rabbit’s left eye and train it in a particular circumstance to be (say) afraid of something, and then you move the patch to the right eye, so that… the very same circumstance that it has been trained to be afraid of [is now] coming in the other eye, you have a naive rabbit [i.e. the rabbit isn’t afraid of the stimulus it had previously learned to be afraid of], because in the rabbit brain the connections that are standard in our brains just aren’t there, there isn’t that unification. What is it like to be which rabbit? The rabbit on the left, or the rabbit on the right? The disunity in a rabbit’s brain is stunning when you think about it….
On the basis of many decades of such counterintuitive studies of animal behavior, I suspect that if there is “something it’s like” to be a rabbit, it is not “roughly like my own subjective experience, minus various kinds of sophisticated reasoning.”192
The biologist and animal welfare advocate Marian Dawkins has expressed something close to my view on anthropomorphism, in her book Why Animals Matter (2012):
Anthropomorphic interpretations may be the first ones to spring to mind and they may, for all we know, be correct. But there are usually other explanations, often many of them, and the real problem with anthropomorphism is that it discourages, or even disparages, a more rigorous exploration of these other explanations. Rampant anthropomorphism threatens the very basis of ethology by substituting anecdotes, loose analogies, and an ‘I just know what the animal is thinking so don’t bother me with science’ attitude to animal behaviour.
…
We need all the scepticism we can muster, precisely because we are all so susceptible to the temptation to anthropomorphize. If we don’t resist this temptation, we risk ending up being seriously wrong.
My guess is that most people anthropomorphize animals far too quickly — including, by attributing consciousness to them193 — and as such, a proper undercutting of these anthropomorphic tendencies should pull one’s views about the distribution of consciousness toward a “consciousness is rare” conclusion, relative to where one’s views were before.
4 Summary of my current thinking about the distribution question
4.1 High-level summary
Below is a high-level summary of my current thinking about the distribution-of-consciousness question (with each point numbered for ease of reference):
1. Given that we don’t yet have a compelling theory of consciousness, and given that just about any behavior194 could (as far as I know) be accomplished with or without consciousness (consciousness inessentialism), it seems to me that we can’t know which potentially consciousness-indicating features (PCIFs) are actually consciousness-indicating,195 except insofar as we continue to get evidence about how consciousness works from the best source of evidence about consciousness we have: human self-report.
2. Unfortunately, as far as I can tell, studies of human consciousness haven’t yet confidently identified any particular “substantive” PCIFs as necessary for consciousness, sufficient for consciousness, or strongly indicative of consciousness.
3. Still, there are some limits to my uncertainty about the distribution question, for reasons I give below.
4. As far as we know,196 the vast majority of human cognitive processing is unconscious, including a large amount of fairly complex, “sophisticated” processing. This suggests that consciousness is the result of some particular kinds of information processing, not just any information processing.
5. Assuming a relatively complex account of consciousness, I find it intuitively hard to imagine how (e.g.) the 302 neurons of C. elegans could support cognitive algorithms which instantiate consciousness. However, it is more intuitive to me that the ~100,000 neurons of the Gazami crab might support cognitive algorithms which instantiate consciousness. But I can also imagine it being the case that not even a chimpanzee happens to have the right organization of cognitive processing to have conscious experiences.
6. Given the uncertainties involved, it is hard for me to justify assigning a “probability of consciousness” lower than 5% to any creature with a neuron count at least a couple orders of magnitude larger than that of C. elegans, and it is hard for me to justify assigning a “probability of consciousness” higher than 95% to any non-human species, including chimpanzees. Indeed, I think I can make a weakly plausible case for (e.g.) Gazami crab consciousness, and I think I can make a weakly plausible case for chimpanzee non-consciousness.
7. When introspecting about how I was intuitively assigning “probabilities of consciousness” (between 5% and 95%) to various species within (say) the “Gazami crabs to chimpanzees” range, it seemed that the four most important factors influencing my “wild guess” probabilities were:
- evolutionary distance from humans (years since last common ancestor),
- neuroanatomical similarity with humans (see Appendix E),
- apparent cognitive-behavioral “sophistication” (advanced social politics, mirror self-recognition, abstract language capabilities, and some other PCIFs197), and
- total “processing power” (neurons, and maybe especially pallial neurons198).
8. But then, maybe I’m confused about consciousness at a fairly basic level, and consciousness isn’t at all complicated (see Appendix H), as a number of scholars of consciousness currently think. I should give some weight to such views, nearly all of which would imply higher probabilities of consciousness for most animal taxa than more complex accounts of consciousness typically do.
I should say a bit more about the four factors mentioned in (7). Each of these factors provides very weak evidence concerning the distribution question, and can be thought of as providing one component of a four-factor “theory-agnostic estimation process” for the presence of consciousness in some animal.199
The reasoning behind the first two factors is this: Given that I know very little about consciousness beyond the fact that humans have it, and it is implemented by information processing in brains,200 then all else equal, creatures that are more similar to humans, especially in their brains, are more likely to be conscious.
The reasoning behind the third factor is twofold. First: in humans, consciousness seems to be especially (but not exclusively) associated with some of our most “sophisticated” behaviors, such as problem-solving and long-term planning. (For example, we have many cases of apparently unconscious simple nocifensive behaviors, but I am not aware of any cases of unconscious long-term logical planning.) Second, suppose we give each extant theory of consciousness a small bit of consideration. Some theories assume that consciousness requires only some very basic supporting functions (e.g. some neural information processing, a simple body schema, and some sensory inputs), whereas others assume that consciousness requires a fuller suite of supporting functions (e.g. a more complex self-model, long-term memory, and executive control over attentional mechanisms). As a result, the total number of theories which predict consciousness in an animal that exhibits both simple and “sophisticated” behaviors is much greater than the number of theories which predict consciousness in an animal that exhibits only simple behaviors.
The reasoning behind the fourth factor is just that a brain with more total processing power is (all else equal) more likely to be performing a greater variety of computations (some of which might be conscious), and is also more likely to be conscious if consciousness depends on a brain passing some threshold of repeated, recursive, or “integrated” computations.
Here is a table showing how the animals I ranked compare on these factors (according to my own quick, non-expert judgments):
| | Evolutionary distance from humans | Neuroanatomical similarity with humans (see Appendix E) | Apparent cognitive-behavioral sophistication (see PCIFs table) | Total processing power (neurons) |
|---|---|---|---|---|
| Humans (for comparison) | 0 | ∞ | Very high | 86 billion201 |
| Chimpanzees | 6.7 Mya | High | High | ~28 billion??202 |
| Cows | 96.5 Mya | Moderate/high | Low | ~10 billion??203 |
| Chickens | 311.9 Mya | Low/moderate | Low | ~221 million |
| Rainbow trout | 453.3 Mya | Low | Low | ~12 million??204 |
| Common fruit flies | 796.6 Mya | Very low | Very low | 0.12 million |
| Gazami crabs | 796.6 Mya | Very low | Very low | 0.1 million??205 |
But let me be clear about my process: I did not decide on some particular combination rule for these four factors, assign values to each factor for each species, and then compute a resulting probability of consciousness for each taxon. Instead, I used my intuitions to generate my probabilities, then reflected on what factors seemed to be affecting my intuitive probabilities, and then filled out this table. However, once I created this table the first time, I continued to reflect on how much I think such weak sources of evidence should be affecting my probabilities, and my probabilities shifted around a bit as a result.
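To make the shape of such a combination rule concrete, here is a minimal toy sketch of what one formalization could look like. To be clear, this is not the process I actually used to generate my probabilities (as explained above), and every weight, factor score, and squashing choice below is a hypothetical placeholder:

```python
# Hypothetical sketch only: a toy "combination rule" for the four factors above.
# The weights, factor scores, and 5%-95% squashing are illustrative placeholders,
# not values I actually used or endorse.

def toy_probability_of_consciousness(evolutionary_proximity,
                                     neuroanatomical_similarity,
                                     behavioral_sophistication,
                                     processing_power,
                                     weights=(0.2, 0.3, 0.3, 0.2)):
    """Combine four factor scores (each scaled to 0-1) into one number,
    then map it into the 5%-95% range discussed above."""
    scores = (evolutionary_proximity, neuroanatomical_similarity,
              behavioral_sophistication, processing_power)
    combined = sum(w * s for w, s in zip(weights, scores))  # weighted average, in [0, 1]
    return 0.05 + 0.90 * combined                           # squash into [0.05, 0.95]

# Purely made-up factor scores for a chimpanzee-like and a crab-like profile:
print(toy_probability_of_consciousness(0.9, 0.8, 0.8, 0.3))    # ~0.70
print(toy_probability_of_consciousness(0.1, 0.05, 0.05, 0.0))  # ~0.10
```

A better-developed version of something like this would need principled ways of scoring each factor and choosing the weights, which is exactly what I lack; see the project ideas in section 5.2.2 below.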
Given the uncertainties involved, and given how ad-hoc and unjustified the reasoning process described in this section is, and given that consciousness is likely a “fuzzy” concept, it might seem irresponsible or downright silly to say “There’s an X% chance that chickens are ‘conscious,’ a Y% chance that rainbow trout are ‘conscious,’ and a Z% chance that the Tesla Autopilot algorithms are ‘conscious.’”
Nevertheless, I will make some such statements in the next section, for the following reasons:
- As subjective Bayesians would point out, my ongoing decisions imply that I already implicitly assign (something like) probabilities to consciousness-related or moral patienthood-related statements. I treat rocks differently than fishes, and fishes differently than humans. Also, there are some bets on this topic I would take, and some I would not. For example, given a suitably specified arbiter (e.g. a well-conducted poll of relevant leading scientists, taken 40 years from now), if someone wanted to bet me, at 100-to-1 odds, that no fishes are “conscious” (as determined by a plurality of relevant leading scientists, 40 years from now), I would take the bet — meaning I think there’s better than a roughly 1-in-100 chance that scientists will conclude at least one species of fish is conscious. (The implied odds-to-probability arithmetic is sketched just after this list.)
- Even if my probabilities have no principled justification, and even if they aren’t straightforwardly action-guiding (see below), putting “made-up” numbers on my beliefs makes it easier for others to productively disagree with my conclusions, and argue against them.
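As promised in the first point above, here is a minimal sketch of the odds-to-probability arithmetic implied by the betting framing. The 100-to-1 figure is the one from my fish example; the assumption that I stake 1 unit to win 100 is mine:

```python
# Minimal sketch of the odds-to-probability conversion behind the fish bet above.
# Assumes I stake 1 unit and win `odds_against` units if the proposition I am
# backing (here: "at least one species of fish is conscious") turns out true.

def break_even_probability(odds_against):
    """Smallest probability at which taking the bet has non-negative expected value."""
    return 1.0 / (odds_against + 1.0)

print(break_even_probability(100))  # ~0.0099, i.e. roughly a 1-in-100 chance
```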
4.2 My current probabilities
Below, I list some of my current probabilities for the possession of consciousness by normally-functioning adult members of several different animal taxa, and also for the possession of consciousness by an example AI program: DeepMind’s AlphaGo.
I always assigned a lower probability to “consciousness as loosely defined by example above” than I did to “consciousness of a sort I intuitively morally care about,” since I suspect the latter (given my moral intuitions) will end up being a slightly (but perhaps not hugely) broader concept than the former, which is defined with reference to the human example even though it is typically meant to apply substantially beyond it.
| | Probability of consciousness as loosely defined by example above | Probability of consciousness of a sort I intuitively morally care about |
|---|---|---|
| Humans | >99% | >99% |
| Chimpanzees | 85% | 90% |
| Cows | 75% | 80% |
| Chickens | 70% | 80% |
| Rainbow trout | 60% | 70% |
| Common fruit flies | 10% | 25% |
| Gazami crabs | 7% | 20% |
| AlphaGo206 | <5% | <5% |
Unfortunately, I don’t have much reason to believe my judgments about consciousness are well-calibrated (such that statements I make with 70% confidence turn out to be correct roughly 70% of the time, etc.).207 But then, neither does anybody else, as far as I know.
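For readers unfamiliar with calibration, here is a minimal sketch of what checking it involves: group one’s forecasts by stated confidence and compare each stated confidence to the observed frequency of being correct. The forecast data below are entirely made up:

```python
# Toy calibration check with made-up data: compare stated confidence to the
# observed frequency of being correct within each confidence bucket.

from collections import defaultdict

forecasts = [  # (stated probability, whether the statement turned out true)
    (0.7, True), (0.7, False), (0.7, True),
    (0.9, True), (0.9, True),
    (0.5, False),
]

buckets = defaultdict(list)
for stated, correct in forecasts:
    buckets[stated].append(correct)

for stated, outcomes in sorted(buckets.items()):
    observed = sum(outcomes) / len(outcomes)
    print(f"stated {stated:.0%} -> correct {observed:.0%} of the time (n={len(outcomes)})")
```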
Please keep in mind that I don’t think this report argues for my stated probabilities. Rather, this report surveys the kinds of evidence and argument that have been brought to bear on the distribution question, reports some of my impressions about those bodies of evidence and argument, and then reports what my own intuitive probabilities seem to be at this time. Below, I try — to a limited degree — to explore why I self-report these probabilities rather than others, but of course, I have limited introspective access to the reasons why my brain has produced these probabilities rather than others.208 I assume the evidence and argument I’ve surveyed here has a substantial impact on my current probabilities, but I do not think my brain even remotely approximates an ideal Bayesian integrator of evidence (at this scale, anyway), and I do not think my brief and shallow survey of such a wide range of complicated evidence (from fields in which I have little to no expertise) argues for the probabilities I’ve given here. Successfully arguing for any set of probabilities about the distribution of consciousness would, I think, require a much larger effort than I have undertaken here.
Also, remember that whichever of these taxa turn out to actually be “conscious,” they could vary by several orders of magnitude in moral weight. In particular, I suspect the arthropods on this list, if they are conscious, might be several orders of magnitude lower in moral weight (given my moral judgments) than (e.g.) most mammals, given the factors listed briefly in Appendix Z.7. (But that is just a hunch; I haven’t yet thought about moral weight much at all.)
4.3 Why these probabilities?
It is difficult to justify, or even to explain, why I give these probabilities and not others, beyond what I’ve already said above. My hope is that the rest of this report gives some insight into why I report these probabilities, but there is no clear weighted combination rule for synthesizing the many different kinds of argument and evidence I survey above, let alone the many considerations that are affecting my judgments but which I did not have time to explain in this report, and (in some cases) that I don’t even know about myself. Nevertheless, I offer a few additional explanatory comments below.
My guess is that to most consciousness-interested laypeople, the most surprising facts about the probabilities I state above will be that my probability of chimpanzee consciousness is so low, and that my probability of Gazami crab consciousness is so high. In a sense, these choices may simply reflect my view that, as Dawkins (2012) put it, “consciousness is harder [to understand] than you think”209 — i.e., that I’m unusually uncertain about my attributions of consciousness, which pulls the probability of consciousness for a wide range of taxa closer to some kind of intuitive “total ignorance prior” around 50%.210
What might experts in animal consciousness think of my probabilities? My guess is that most of them would think that my probabilities are too low, at least for the mammalian taxa, and probably for all the animal taxa I listed except for the arthropods. If that’s right, then my guess is that our disagreements are largely explained by (1) my greater uncertainty about ~all attributions of consciousness, and (2) selection effects on the field of animal consciousness studies. (If you don’t think it’s likely that many animals are conscious, you’re unlikely to devote a large fraction of your career to studying the topic!)211
I should say a bit more about why I might be less confident in “~all attributions of consciousness” than most experts in consciousness are. In part, this may be a result of the fact that, in my experience, I seem to be more skeptical of published scientific findings than most working scientists and philosophers are. Hence, whereas some people are convinced by (for example) the large, diverse, and cohesive body of evidence for global neuronal workspace theory assembled in Dehaene (2014), I read a book like that and think “Based on my experience, I’d guess that many of the cited studies would fail to hold up under attempted replication, or even under close scrutiny of the methods used, and I’m not sure how much that would affect the overall conclusions.”212 I could be wrong, of course, and I haven’t scrutinized the studies cited in Dehaene’s book myself; I’m just making a prediction based on the base rate for how often studies of various kinds fail to “hold up” upon closer inspection (by me), or upon attempted replication. I don’t defend my general skepticism of published studies here, but I list some example sources of my skepticism in Appendix Z.8.
In any case, heightened skepticism of published studies — e.g. studies offered as support for some theory of consciousness, or for the presence of some cognitive or behavioral feature in some animal taxon — will tend to pull one’s views closer to a “total ignorance” prior, relative to the views of someone who takes the published studies more seriously.
What about AlphaGo? My very low probability for AlphaGo consciousness is obviously not informed by most of the reasoning that informs my probabilities for animal species. AlphaGo has no evolutionary continuity with humans, it has no neuroanatomical similarity with humans (except for AlphaGo’s “neurons,” which are similar to human neurons only in a very abstract way), and its level of “cognitive-behavioral sophistication” is essentially “none” except for the very narrow task at which it is specifically programmed to excel (playing Go). Also, unlike with animal brains, I can trace, to a large extent, what AlphaGo is doing, and I don’t think it does anything that looks to me like it could instantiate consciousness (e.g. on an illusionist account of consciousness). Nevertheless, I feel I must admit some non-negligible probability that AlphaGo is conscious, given how many scholars of consciousness endorse views that seem to imply AlphaGo is conscious (see below). Though even if AlphaGo is conscious, it might have negligible moral weight.
4.4 Acting on my probabilities
Should one take action based on such made-up, poorly-justified probabilities? I’m genuinely unsure. There are many different kinds of uncertainty, and I’m not sure how to act given uncertainty of this kind.213 (We hope to write more about this issue in the future.)
4.5 How my mind changed during this investigation
First, a note on how my mind did not change during this investigation. By the time I began this investigation, I had already found persuasive my four key assumptions about the nature of consciousness: physicalism, functionalism, illusionism, and fuzziness. During this investigation I studied the arguments for and against these views more deeply than I had in the past, and came away more convinced of them than I was before. Perhaps that is because the arguments for these views are stronger than the arguments against them, or perhaps it is because I am roughly just as subject to confirmation bias as nearly all people seem to be (including those who, like me, know about confirmation bias and actively try to mitigate it).214 In any case: as you consider how to update your own views based on this report, keep in mind that I began this investigation as a physicalist functionalist illusionist who thought consciousness was likely a very fuzzy concept.
How did my mind change during this investigation? First, during the first few months of this investigation, I raised my probability that a very wide range of animals might be conscious. However, this had more to do with a “negative” discovery than a “positive” one, in the following sense: Before I began this investigation, I hadn’t studied consciousness much, and I held out some hope that there would turn out to be compelling reasons to “draw lines” at certain points in phylogeny, for example between animals which do and don’t have a cortex, and that I could justify a relatively sharp drop in probability of consciousness for species falling “below” those lines. But, as mentioned above, I eventually lost hope that there would (at this time) be compelling arguments for drawing any such lines in phylogeny (short of having a nervous system at all). Hence, my probability of a species being conscious now drops gradually as the values of my “four factors” decrease,215 with no particularly “sharp” drops in probability among creatures with a nervous system.
A few months into the investigation, I began to elicit my own intuitive probabilities about the possession of consciousness by several different animal taxa. I did this to get a sense of how my opinions were changing during the investigation, and perhaps also to harness a single-person “wisdom of crowds” effect (though, I don’t think this worked very well).216 Between July and October of 2016, my probabilities changed very little (see footnote for details217). Then, in January 2017, I finally got around to investigating the arguments for “hidden qualia” (and thus for the plausibility of relatively simple accounts of consciousness; see Appendix H), and this moved my probabilities upward a bit, especially for “consciousness of a sort I intuitively morally care about.”
There are some other things on which my views shifted noticeably as a result of this investigation:
- During the investigation, I became less optimistic that philosophical arguments of the traditional analytic kind will contribute much to our understanding of the distribution question on the present margin. I see more promise in scientific work — such as the scientific work which would feed into my four-factor “theory-agnostic estimation process” described above, that which could contribute toward progress on theories of consciousness (see Appendix B), and that which can provide the raw data that can be used in arguments about whether specific taxa are conscious (such as those in Tye 2016, chs. 5-9).218 I also see some promise in certain kinds of “non-traditional philosophical work,” such as computational modeling of theories of consciousness, and direct collaborations between philosophers and scientists so that some scientific work can target philosophically important hypotheses as directly as possible.219 Some philosophers are likely well-positioned to do some of this work, regardless of how well it resembles “traditional” philosophical argumentation.
- During the investigation, it became clear to me that I think too much professional effort is being spent on different schools of thought arguing with each other, and not enough effort spent on schools of thought ignoring each other and making as much progress as they can on their own assumptions to see what those assumptions can lead to. The latter practice seems necessary in order to have much hope of refining one’s views on the central question of this report (“Which beings should we consider to be moral patients?”), and seems neglected relative to the former practice. For example, I would like to see more books and articles similar to Prinz (2012), Dehaene (2014), and Kammerer (2016).220
- When I began this investigation, I felt fundamentally confused about consciousness in a way that I did not feel confused about other classically confusing phenomena, such as free will or quantum mechanics. I couldn’t grok how any set of cognitive algorithms could ever “add up to” the phenomenality of phenomenal consciousness, though I assumed, via a “system 2 override” of my dualistic intuitions, that somehow, some set of cognitive algorithms must add up to phenomenal consciousness. Now, having spent so much time trying to both solve and dissolve the perplexities of consciousness, I no longer feel confused about them in that way. Of course, I still don’t know which cognitive systems are conscious, and I don’t know which cognitive-behavioral evidence is most indicative of consciousness, and so on — but the puzzle of consciousness now feels to me more like the puzzle of how different cognitive systems achieve different sorts of long-term hierarchical planning, or the puzzle of how different cognitive systems achieve different sorts of metacognition (see here). This loss of confusion might be mistaken, of course; perhaps I ought to feel more confused about consciousness than I now do!
4.6 Some outputs from this investigation
The first key output from the investigation is my stated set of probabilities, but — as mentioned above — I’m not sure they’re of much value for decision-making at this point.
Another key output of this investigation is a partial map of which activities might give me greater clarity about the distribution of consciousness (see the next section).
A third key output from this investigation is that we decided (months ago) to begin investigating possible grants targeting fish welfare. This is largely due to my failure to find any compelling reason to “draw lines” in phylogeny (see previous section). As such, I could find little justification for suggesting that there is a knowably large difference between the probability of chicken consciousness and the probability of fish consciousness. Furthermore, humans harm and kill many, many more fishes than chickens, and some fish welfare interventions appear to be relatively cheap. (We’ll write more about this later.)
Of course, this decision to investigate possible fish welfare grants could later be shown to have been unwise, even if the Open Philanthropy Project assumes my personal probabilities of consciousness in different taxa, and even if those probabilities don’t change. For example, I have yet to examine other potential criteria for moral patienthood besides consciousness, and I have not yet examined the question of moral weight (see above). The question of moral weight, especially, could eventually undermine the case for fish welfare grants, even if the case for chicken welfare grants remains robust. Nevertheless, and consistent with our strategies of hits-based giving and worldview diversification, we decided to seek opportunities to benefit fishes in case they should be considered moral patients with non-negligible weight.
5 Potential future investigations
5.1 Things I considered doing, but didn’t, due to time constraints
There are many things I considered doing to reduce my own uncertainty about the likely distribution of morally-relevant consciousness, but which I ended up not doing, due to time constraints. I may do some of these things in the future.221 In no particular order:
- I’d like to speak to consciousness experts about, and think through more thoroughly, which potentially fundable projects seem as though they’d shed the most light on the likely distribution of morally-relevant consciousness.
- I’d like to get more feedback on this report from long-time “consciousness experts” of various kinds. (So far, the only long-time “consciousness expert” from whom I’ve gotten extensive feedback is David Chalmers.)
- I’d like to think through more carefully whether my four-factor “theory-agnostic estimation process” described above makes sense given my current state of ignorance, get help from some ethologists and comparative neuroanatomists to improve the “accuracy” of my ratings for “neuroanatomical similarity with humans” and “apparent cognitive-behavioral sophistication,” and explore what my updated factors and ratings suggest about the distribution question.
- As mentioned elsewhere, I’d like to work with a more experienced programmer to sketch a toy program that I think might be conscious if elaborated, coded fully, and run. Then, I’d like to adjust the details of its programming so that it more closely matches my own first-person data222 and the data gained from others’ self-reports of conscious experience (e.g. in experiments and in brain damage cases), and then check how my intuitions about the program’s moral patienthood respond to various tweaks to its design. I would especially like to think more carefully about algorithms that might instantiate conscious “pain” or “pleasure,” and how they might be dissociated from behavior. We have begun to collaborate with a programmer on such a project, but we’re not sure how much effort we will put into it at this time.
- I’d like to collect others’ moral intuitions, and their explanations of those intuitions, with respect to many cases I have also considered, possibly including different versions of the MESH: Hero program described here (or something like it).
- I’d like to check my moral intuitions against many more cases — including those proposed by philosophers,223 and further extensions of the MESH: Hero exercise I started elsewhere — and, when making my moral judgments about each case and each version of the program, expend more effort to more closely approximate the “extreme effort” version of my process for making moral judgments than I did for this report.
- I’d like to research and make the best case I can for insect consciousness, and also research and make the best case I can for chimpanzee non-consciousness, so as to test my intuition that weakly plausible cases can be made for both hypotheses.224
- I’d like to more closely examine current popular theories (including computational models) of consciousness, and write down what I do and don’t find to be satisfying about them, in a much more thorough and less hand-waving way than I do in Appendix B. In particular, I’d like to evaluate Dehaene (2014) more closely, as it seems to have been convincing to a decent number of theorists who previously endorsed other theories, e.g. see Carruthers (2017).
- I’d like to more closely study the potential and limits of current methods for studying consciousness in humans, e.g. the psychometric validity of different self-report schemes,225 interpretations of neuroimaging data,226 and the feasibility of different strategies for making progress toward a satisfying theory of consciousness via triangulation of data coming from “the phenomenological reports of patients, psychological testing at the cognitive/behavioral level, and neurophysiological and neuroanatomical findings.”227
- I’d like to make the exposition of my views on consciousness and moral patienthood more thoroughly-argued and better-explained, so that others can more easily and productively engage with them.
- I’d like to more thoroughly investigate the current state of the arguments about what the human unconscious can and can’t do.
- I’d like to expand on my “big picture considerations” and think through more carefully and thoroughly what I should think about them, and what they imply about the distribution question, and which other “big picture considerations” seem most important (that I haven’t already listed).
- I’d like to more closely examine the arguments concerning “hidden qualia” (see Appendix H).
- I’d like to more carefully examine current theories about how consciousness evolved.
- I’d like to think more about what my intuitions about consciousness suggest about consciousness during early human development, current AI systems, and other potential moral patients besides non-human animals, since this report mostly focused on animals.
- I’d like to study arguments and evidence about the unity of consciousness more closely.228
- I’d like to study the arguments for and against illusionism more closely, and consider in more depth how illusionism and other approaches should affect my views on the distribution question.
- I’d like to think more about “valenced” experience (e.g. pain and pleasure), and how it might interact with “basic” consciousness and behavior.
- I’d like to get a better sense of the likely robustness / reproducibility of empirical work on human consciousness, given the general concerns I outline in Appendix Z.8.
5.2 Projects that others could conduct
During this investigation, I came to think of progress on the “distribution of morally relevant consciousness” question as occurring on three “fronts”:
- Progress on theories of consciousness: If we can arrive at a convincing theory of consciousness, or at least a convincing theory of human consciousness, then we can apply that theory to the distribution question. This is the most obvious way forward, and the way science usually works.
- Progress on our best theory-agnostic guess about the distribution of consciousness: Assuming we are several decades away from having a convincing theory of consciousness, what should our “best theory-agnostic guess” about the distribution question be in the meantime? Should it be derived from something like a better-developed version of the four-factor approach I described above? Which other factors should be added, and what is our best guess for the value of each variable in that model? Etc.
- Progress on our moral judgments about moral patienthood: How do different judges’ moral intuitions respond to different first-person and third-person cases of possible moral patienthood? Do those intuitions change when people temporarily adopt my approach, after engaging in some training for forecasting accuracy? Do we know enough about how moral intuitions vary over time and in different contexts to say much about which moral intuitions we should see as “legitimate,” and which ones we shouldn’t, when thinking about which beings are moral patients? Etc.
Below are some projects that seem like they’d be useful, organized by the “front” each one is advancing.
5.2.1 Projects related to theories of consciousness
- Personally, I’m most optimistic about illusionist theories of consciousness, so I think it could be especially useful for illusionists to gather and discuss how to develop their theories further, especially in collaboration with computer programmers, along the lines described here.
- Of course, it could also be useful to engage in similar projects to better develop other types of theories.
- It could be useful to produce a large reference work on “the explananda of human consciousness.” (“Explananda” means “things to be explained.”) Each chapter would summarize what is currently known about (self-reported) human conscious experience under various conditions. For example there could be chapters on auto-activation deficit, various sensory agnosias, lucid dreams, absence seizures, pain asymbolia, blindsight, split-brain patients, locked-in syndrome, masking studies on healthy subjects, and many other “natural” or experimentally manipulated conditions. Ideally, each chapter would be co-authored by multiple subject-matter experts, including experts who disagree about the interpretation of the primary studies, and would survey expert disagreement about how to interpret those studies. It might also be best if each chapter explained many of the primary studies in some detail, with frank acknowledgment of their design limitations. This reference work could be updated every 5-10 years, and (I hope) would make it much easier for consciousness theorists to understand the full body of evidence that a successful theory of consciousness should be consistent with, and ideally explain.
- It could be useful for a “neutral” party (not an advocate of one of the major theories of consciousness) to summarize each major existing theory of consciousness in a fair but critical way, list the predictions they seem to make (according to the theory as stated, not necessarily according to each theory’s advocates, and with a focus on predictions for which the “ground truth” is not already known but could be tested by future studies), and critically examine how well those predictions match the available data plus data produced by novel experiments. They could also argue for some list of key consciousness explananda, and critically examine how thoroughly and precisely each major theory explains those explananda. Ideally, the author(s) of this synthesis would collaborate with both advocates and critics of these theories to help ensure they are interpreting the theories, and the relevant evidence, accurately.
- Given my general study quality concerns (see Appendix Z.8), it could be useful to try to improve study quality standards for the types of studies that are used to support theories of (human) consciousness, for example by organizing a conference attended both by experimentalists studying human consciousness and by experts in study robustness / replicability.
For a higher-level overview of scientific work that can contribute to the development of more satisfying theories of consciousness, see Chalmers (2004).
5.2.2 Projects related to theory-agnostic guesses about the distribution of consciousness
1. It could be helpful for researchers to collect counts of neurons (especially pallial neurons) for a much wider variety of species, since “processing power” is probably the cheapest of the four factors contributing to my “theory-agnostic estimation process” to collect additional data on — except perhaps for “evolutionary distance from humans,” which is already widely measured. (Note: I’d rather not measure brain masses alone, because neuronal scaling rules vary widely across different taxa.229 A toy illustration of this point appears after this list.)
2. It could be useful for a group of neuroscientists, ethologists, and other relevant experts to collaborate on a large reference book that collects data about a long list of PCIFs in a wide variety of taxa, organized similarly to how Shumaker et al. (2011) organizes different types of animal tool behavior by taxon.230 Ideally, the book would explain how each PCIF and taxon was chosen and defined, fairly characterize any ongoing expert debates about whether those PCIFs should have been chosen and how they should be defined, and also fairly characterize ongoing expert debates about the absence, presence, or scalar value of each PCIF in each taxon. Besides its contribution to “theory-agnostic guesses” about the distribution question, such a book would also make it easier to construct and critique theories of consciousness, by gathering lots of relevant data across disparate fields into one place.
3. After project (2) is completed, it could be useful for several different consciousness experts to make their own extended arguments about which PCIFs should be considered most strongly consciousness-indicating, and what those conclusions imply about the distribution question.
4. It could be helpful for someone to write a detailed analysis of the case for and against 3-5 potential (non-obvious and substantive) necessary or sufficient conditions for consciousness, along the lines of (a more thorough version of) my analysis of the case for and against “cortex-required views” above. Two additional potential necessary conditions that could be examined in this way are (1) language and (2) a bilaterally symmetric nervous system.
5. It could be useful for someone to conduct a high-response-rate survey of a wide variety of “consciousness experts,” asking a variety of questions about phenomenal consciousness, consciousness-derived moral patienthood, and their guesses about the distribution of each.231
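As promised in item 1 above, here is a toy illustration of why brain mass alone is a poor proxy for neuron counts. The power-law form and all parameter values below are purely illustrative placeholders, not measured scaling rules for any real taxa:

```python
# Toy illustration (with made-up parameters) of why brain mass alone is a poor
# proxy for neuron counts: if neuron number scales with brain mass differently
# in two lineages, equal-mass brains can contain very different neuron counts.

def neurons_from_mass(brain_mass_grams, neurons_at_one_gram, scaling_exponent):
    """Hypothetical power-law scaling rule: neurons = k * mass ** exponent."""
    return neurons_at_one_gram * brain_mass_grams ** scaling_exponent

# Two hypothetical lineages with the same 10 g brain mass:
lineage_a = neurons_from_mass(10.0, 1e8, 1.0)  # neuron count scales linearly with mass
lineage_b = neurons_from_mass(10.0, 1e8, 0.6)  # neuron count scales sub-linearly with mass

print(f"Lineage A, 10 g brain: {lineage_a:.2e} neurons")  # 1.00e+09
print(f"Lineage B, 10 g brain: {lineage_b:.2e} neurons")  # ~3.98e+08
```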
5.2.3 Projects related to moral judgments about moral patienthood
- It could be useful for several people to more-or-less independently try the “extreme effort” version of my process for making moral judgments, and publish the detailed results of this exercise for dozens of variations on dozens of cases. Ideally, each report would include a summary table of the author’s moral judgments with respect to each variation of each case, as in Beckstead (2013), chs. 4 & 5.
- It could be useful for a programmer to do something similar to my incomplete MESH: Hero exercise here, but with a new program written from scratch, and with many more (increasingly complicated) versions of it coded, and with the source code of every version released publicly. Then, the next step could be to gather various “consciousness experts” and moral philosophers at a conference and, over the course of a couple days, have the programmer walk them through how each (progressively more complex) version of the program works, answering questions as needed, and taking a silent electronic survey after each version is explained, so that the consciousness experts and moral philosophers can indicate for each version of the program whether they think it is “conscious,” whether they consider it a moral patient (assuming functionalism), and why. All survey responses could then be published (after being properly anonymized, as desired) and analyzed in various ways. After the conference participants have had several months to digest these results, a follow-up conference could feature public debates about whether specific versions of the program are moral patients or not, and why — again with the programmer present to answer any questions about exactly how the program works. In the event that no non-panpsychist participants think any version of the program is conscious or a moral patient (assuming functionalism), the project could shift to a focus on collecting detailed reasons and intuitions about why no versions of the program are conscious or have moral status, and what changes would be required to (maybe) make some version of the program that is conscious or has moral status.
5.2.4 Additional thoughts on useful projects
One project that seems useful but doesn’t fit into the above categorization is further work addressing the triviality objection to functionalism232 — which in my view may be the most compelling objection to physicalist functionalism I’ve seen — e.g. perhaps via computational complexity theory, as Aaronson (2013) suggests.233
In addition to the specific projects listed above, basic “field-building” work is likely also valuable.234 We’ll make faster progress on the likely distribution of phenomenal consciousness if there are a greater number of skilled researchers devoted to the problem than there are today. So far, the topic has been fairly neglected, though several recent books on the topic235 may begin to help change that. On the “theory of consciousness” front, illusionist approaches seem especially neglected relative to how promising they seem (to me) to be. Efforts on the distribution question and illusionist approaches to consciousness could be expanded via workshops, conferences, post-doctoral positions, etc.
There are also many projects that I would likely suggest as high-priority if I knew more than I do now. I share Daniel Dennett’s intuition that perhaps the most promising path forward on the distribution question is to devise a theory focused on human consciousness — because humans are the taxon for which we can get the strongest evidence about consciousness and its character (self-report) — and then “look and see which features of that account apply to animals, and why.”236 According to that approach, much of the most important work to be done on the distribution of consciousness will take the form of consciousness-related experiments conducted on humans. However, I’m not sure which specific studies I’d most like to see conducted, because I haven’t yet taken the time to deeply familiarize myself with the latest studies and methods of human consciousness research.
It also seems likely that we need fundamental breakthroughs in “tools and techniques” to make truly substantial progress in understanding the mechanisms of consciousness. Consciousness is very likely a phenomenon to be explained at the level of neural networks and the information processes they instantiate, but our current tools are not equipped to probe that level effectively.237 As such, much of the important progress that could be made in the study of consciousness would come not from consciousness-specific work, but from the development of new tools and techniques that are useful for understanding the brain at the level of information processing in neural networks.238
Many of the projects I’ve suggested above are quite difficult. Some of them would require steady, dedicated work from a moderately large team of experts over the course of many years. But, it seems to me the problem of consciousness is worth a lot of work, especially if you share my intuition that it may be the most important criterion for moral patienthood. A good theory of human consciousness could help us understand which animals and computer programs we should morally care about, and what we can do to benefit them. Without such knowledge, it is difficult for altruists to target their limited resources efficiently.
6 Appendices
See here for a brief description of each appendix.
6.1 Appendix A. Elaborating my moral intuitions
In this appendix, I describe my process for making moral judgments, and then report the outputs of that process for some particular cases, so as to further explain “where I’m coming from” on the topic of consciousness and moral patienthood.
6.1.1 Which kinds of consciousness-related processes do I morally care about?
Given my metaethical approach, when I make a “moral judgment” about something (e.g. about which kinds of beings are moral patients), I don’t conceive of myself as perceiving an objective moral truth, or coming to know an objective moral truth via a series of arguments. Nor do I conceive of myself as merely expressing my moral feelings as they stand today. Rather, I conceive of myself as making a conditional forecast about what my values would be if I underwent a certain “idealization” or “extrapolation” procedure (coming to know more true facts, having more time to consider moral arguments, etc.).
This metaethical approach begins to look a bit like something worthy of being called “moral realism” if you are optimistic that all members of a certain broad class of moral reasoners would converge on roughly the same values if all of them underwent a similar extrapolation procedure (one that was designed “sensibly” rather than designed merely to ensure convergence).239 I think there would be some convergence of values among moral reasoners, but not enough for me to expect that, say, everyone who reads this report within 5 years of its publication would, upon completing a “sensible” extrapolation procedure, converge on roughly the same values.240
Hence, in sharing my intuitions about moral patients below, I see no way to escape the limitation that they are merely my moral judgments. Nevertheless, I suspect many readers will feel that they have similar but not identical moral intuitions. Moreover, as mentioned earlier, I think that sharing my intuitions about moral patients is an important part of being clear about “where I’m coming from” on consciousness, especially since my moral intuitions no doubt affect my preliminary guesses about the distribution of consciousness even if I do not explicitly refer to my moral intuitions in justifying those guesses.
6.1.2 The “extreme effort” version of my process for making moral judgments
To provide more detail on my (ideal) process for making moral judgments, I provide below a description of the “extreme effort” version of my process for making moral judgments. However, I should note that I very rarely engage all the time-consuming cognitive operations described below when making moral judgments, and I did not engage all of them when making the moral judgments reported in this appendix. Rather, I made those moral judgments after running a small subset of the processes described below — whichever processes intuitively seemed, in the moment and for each given case, as though they were likely to quickly and noticeably improve my approximation of the “extreme effort” process described below.
I expect most readers will want to skip to the next subsection, and not bother to read the bullet-points summary of my “extreme effort” process for making moral judgments described below. Nevertheless, here it is:
- I try to make the scenario I’m aiming to forecast as concrete as possible, so that my brain is able to treat it as a genuine forecasting challenge, akin to participating in a prediction market or forecasting tournament, rather than as a fantasy about which my brain feels “allowed” to make up whatever story feels nice, or signals my values to others, or achieves something else that isn’t forecasting accuracy.241 In my case, I concretize the extrapolation procedure as one involving a large population of copies of me who learn many true facts, consider many moral arguments, and undergo various other experiences, and then collectively advise me about what I should value and why.242
- However, I also try to make forecasts I can actually check for accuracy, e.g. about what my moral judgment about various cases will be 2 months in the future.
- When making these forecasts, I try to draw on the best research I’ve seen concerning how to make accurate estimates and forecasts. For example I try to “think like a fox, not like a hedgehog,” and I’ve engaged in several hours of probability calibration training, and some amount of forecasting training.243
- Clearly, my current moral intuitions serve as one important source of evidence about what my extrapolated values might be. However, recent findings in moral psychology and related fields lead me to assign more evidential weight to some moral intuitions than to others. More generally, I interpret my current moral intuitions as data generated partly by my moral principles and partly by various “error processes” (e.g. a hard-wired disgust reaction to spiders, which I don’t endorse upon reflection). Doing so allows me to make use of some standard lessons from statistical curve-fitting when thinking about how much evidential weight to assign to particular moral intuitions.244 (A toy sketch of this curve-fitting analogy appears after this list.)
- As part of forecasting what my extrapolated values might be, I like to consider different processes and contexts that could generate alternate moral intuitions in moral reasoners both similar and dissimilar to my current self, and consider how I feel about the “legitimacy” of those mechanisms as producers of moral intuitions. For example I ask myself questions such as “How might I feel about that practice if I was born into a world for which it was already commonplace?” and “How might I feel about that case if my built-in (and largely unconscious) processes for associative learning and imitative learning had been exposed to different life histories than my own?” and “How might I feel about that case if I had been born in a different century, or a different country, or with a greater propensity for clinical depression?” and “How might a moral reasoner on another planet feel about that case if it belonged to a more strongly r-selected species (compared to humans) but had roughly human-like general reasoning ability?”245
- Observable patterns in how people’s values change (seemingly) in response to components of my proposed extrapolation procedure (learning more facts, considering moral arguments, etc.) serve as another source of evidence about my extrapolated values. For example, the correlation between aggregate human knowledge and our “expanding circle of moral concern” (Singer 2011) might (very weakly) suggest that, if I continued to learn more true facts, my circle of moral concern would continue to expand. Unfortunately, such correlations are badly confounded, and might not provide much evidence at all with respect to my extrapolated values.246
- Personal facts about how my own values have evolved as I’ve learned more, considered moral arguments, and so on, serve as yet another source of evidence about my extrapolated values. Of course, these relations are likely confounded as well, and need to be interpreted with care.247
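Here is a toy sketch of the statistical curve-fitting analogy mentioned in the list above. Everything in it is made up; the point is only that a low-complexity fit treats some individual intuitions as noise from error processes, whereas a highly flexible fit chases the individual intuitions much more closely:

```python
# Toy illustration of treating moral intuitions as noisy data: an underlying
# "principle" plus an error process. A low-degree fit discounts some intuitions
# as noise; a high-degree fit chases nearly every intuition. All data made up.

import numpy as np

rng = np.random.default_rng(0)
cases = np.linspace(0.0, 1.0, 15)                          # hypothetical cases along one dimension
principle = 2.0 * cases                                    # the (unknown) underlying principle
intuitions = principle + rng.normal(0.0, 0.3, cases.size)  # principle + error process

simple_fit = np.polyfit(cases, intuitions, deg=1)    # treats residual wiggles as noise
flexible_fit = np.polyfit(cases, intuitions, deg=8)  # chases the individual intuitions

print("simple fit coefficients:  ", np.round(simple_fit, 2))
print("flexible fit coefficients:", np.round(flexible_fit, 2))
```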
6.1.3 My moral judgments about some particular cases
6.1.3.1 My moral judgments about some first-person cases
What, then, do my moral intuitions say about some specific cases? I’ll start with some “first-person” cases that involve my own internal experiences. The next section will discuss some “third-person” cases, which I can only judge “from the outside,” by guessing about what those algorithms might feel like “from the inside.”
The starting point for my moral intuitions is my own phenomenal experience. The reason I don’t want others to suffer is that I know what it feels like when I cut my hand, or when I feel sad, or when my goals are thwarted, and I don’t want others to have experiences like that. Likewise, the reason I want others to flourish is that I know what it feels like when I taste chocolate ice cream, or when I feel euphoric, or when I achieve my goals, and I do want others to have experiences like that.
What if I am injured, or my goals are thwarted, but I don’t have a subjective experience of that? Earlier, I gave the example of injuring myself while playing sports, but not noticing my injury (and its attendant pain) until 5 seconds after the injury occurred, when I exited my flow state. Had such a moment been caught on video, I suspect the video would show that I had been unconsciously favoring my hurt ankle while I continued to chase after the ball, even before I realized I was injured, and before I experienced any pain. So, what if a fish’s experience of nociception is like my “experience” of nociception before exiting the flow state?248 If that’s how it works, then I’m not sure I care about such fish “experiences,” for the same reason I don’t care about my own “experience” of nociception before I exited the flow state. (Of course, I care about the conscious pain that came after, and I care about the conscious experience of sadness at having to sit out the rest of the game as a result of my injury, but I don’t think I care about whatever nociception-related “experience” I had during the 5 seconds before I exited the flow state.)
Next, what if I was conscious, but there was no positive or negative “valence” to any part of my conscious experience? Suppose I was consciously aware of nociceptive signals, but they didn’t bother me at all, as pain asymbolics report.249 Suppose I was similarly aware of sensations that would normally be “positive,” but I didn’t experience them as either positive or negative, but rather experienced them as I experience neutral touch, for example how it feels when my fingers tap away at my keyboard as I write this sentence. Moreover, suppose I had goals, and I had the conscious experience of making plans that I predict would achieve those goals, and I consciously knew when I had achieved or not-achieved those goals, but I didn’t emotionally care whether I achieved them or not, I didn’t feel any happiness or disappointment upon achieving or not-achieving them, and so on. Would I consider such a conscious existence to have moral value? Here again, I’m unsure, but my guess is that I wouldn’t consider such conscious existence to have moral value. If fishes are conscious, but the character of their conscious experience is like this, then I’m not sure I care about fishes. (Keep in mind this is just an illustration: if fishes are conscious at all, then my guess is that they experience at least some nociception as unpleasant pain rather than as an unbothersome signal like the pain asymbolic does.)
This last example is similar to a thought experiment invented by Peter Carruthers, which I consider next.
6.1.3.2 The Phenumb thought experiment
Carruthers (1999) presents an interesting intuition pump concerning consciousness and moral patienthood:
Let us imagine, then, an example of a conscious, language-using, agent — I call him ‘Phenumb’ — who is unusual only in that satisfactions and frustrations of his conscious desires take place without the normal sorts of distinctive phenomenology. So when he achieves a goal he does not experience any warm glow of success, or any feelings of satisfaction. And when he believes that he has failed to achieve a goal, he does not experience any pangs of regret or feelings of depression. Nevertheless, Phenumb has the full range of attitudes characteristic of conscious desire-achievement and desire-frustration. So when Phenumb achieves a goal he often comes to have the conscious belief that his desire has been satisfied, and he knows that the desire itself has been extinguished; moreover, he often believes (and asserts) that it was worthwhile for him to attempt to achieve that goal, and that the goal was a valuable one to have obtained. Similarly, when Phenumb fails to achieve a goal he often comes to believe that his desire has been frustrated, while he knows that the desire itself continues to exist (now in the form of a wish); and he often believes (and asserts) that it would have been worthwhile to achieve that goal, and that something valuable to him has now failed to come about.
Notice that Phenumb is not (or need not be) a zombie. That is, he need not be entirely lacking in phenomenal consciousness. On the contrary, his visual, auditory, and other experiences can have just the same phenomenological richness as our own; and his pains, too, can have felt qualities. What he lacks are just the phenomenal feelings associated with the satisfaction and frustration of desire. Perhaps this is because he is unable to perceive the effects of changed adrenaline levels on his nervous system, or something of the sort.
Is Phenumb an appropriate object of moral concern? I think it is obvious that he is. While it may be hard to imagine what it is like to be Phenumb, we have no difficulty identifying his goals and values, or in determining which of his projects are most important to him — after all, we can ask him! When Phenumb has been struggling to achieve a goal and fails, it seems appropriate to feel sympathy: not for what he now feels — since by hypothesis he feels nothing, or nothing relevant to sympathy — but rather for the intentional state which he now occupies, of dissatisfied desire. Similarly, when Phenumb is engaged in some project which he cannot complete alone, and begs our help, it seems appropriate that we should feel some impulse to assist him: not in order that he might experience any feeling of satisfaction — for we know by hypothesis that he will feel none — but simply that he might achieve a goal which is of importance to him. What the example reveals is that the psychological harmfulness of desire-frustration has nothing (or not much — see the next paragraph) to do with phenomenology, and everything (or almost everything) to do with thwarted agency.
The qualifications just expressed are necessary, because feelings of satisfaction are themselves often welcomed, and feelings of dissatisfaction are themselves usually unwanted. Since the feelings associated with desire-frustration are themselves usually unpleasant, there will, so to speak, be more desire-frustration taking place in a normal person than in Phenumb in any given case. For the normal person will have had frustrated both their world-directed desire and their desire for the absence of unpleasant feelings of dissatisfaction. But it remains true that the most basic, most fundamental, way in which desire-frustration is bad for, or harmful to, the agent has nothing to do with phenomenology.
My initial intuitions agree with Carruthers, but upon reflection, I lean toward thinking that Phenumb is not a moral patient (at least, not via the character of his consciousness), so long as he does not have any sort of “valenced” or “affective” experiences. (Phenumb might, of course, be a moral patient via other criteria.)
Carruthers suggests a reason why some people (like me) might have a different moral intuition about this case than he does:
What emerges from the discussions of this paper is that we may easily fall prey to a cognitive illusion when considering the question of the harmfulness to an agent of non-conscious frustrations of desire. In fact, it is essentially the same cognitive illusion which makes it difficult for people to accept an account of mental-state consciousness which withholds conscious mental states from non-human animals. In both cases the illusion arises because we cannot consciously imagine a mental state which is unconscious and lacking any phenomenology. When we imagine the mental states of non-human animals we are necessarily led to imagine states which are phenomenological; this leads us to assert… that if non-human animals have any mental states at all…, then their mental states must be phenomenological ones. In the same way, when we try to allow the thought of non-phenomenological frustrations of desire to engage our sympathy we initially fail, precisely because any state which we can imagine, to form the content of the sympathy, is necessarily phenomenological; this leads us… to assert that if non-human animals do have only non-conscious mental states, then their states must be lacking in moral significance.
In both cases what goes wrong is that we mistake what is an essential feature of (conscious) imagination for something else — an essential feature of its objects, in the one case (hence claiming that animal mental states must be phenomenological); or for a necessary condition of the appropriateness of activities which normally employ imagination, in the other case (hence claiming that sympathy for non-conscious frustrations is necessarily inappropriate). Once these illusions have been eradicated, we see that there is nothing to stand in the way of the belief that the mental states of non-human animals are non-conscious ones, lacking in phenomenology. And we see that this conclusion is perfectly consistent with according full moral standing to the [non-conscious, according to Carruthers] sufferings and disappointments of non-human animals.
It is interesting to consider the similarities between Carruthers’ fictional Phenumb and the real-life cases of auto-activation deficit (AAD) described in Appendix G. These patients are (as far as we can tell) phenomenally conscious like normal humans are, but — at least during the period of time when their AAD symptoms are most acute — they report having approximately no affect or motivation about anything. For example, one patient “spent many days doing nothing, without initiative or motivation, but without getting bored. The patient described this state as ‘a blank in my mind’ ” (Laplane et al. 1984).
Several case reports (see Appendix G) describe AAD patients as being capable of playing games if prompted to do so. Suppose we could observe an AAD patient named Joan, an avid chess player. Next, suppose we prompted her to play a game of chess, waited until some point in the midgame, and then asked her why she had made her latest move. To pick a dramatic example, suppose her latest move was to take the opponent’s Queen with her Rook. Given the case reports I’ve read, it sounds as though Joan might very well be able (like Phenumb) to explain why her latest move was instrumentally useful for the goal of checkmating the opponent’s King. Moreover, she might be able to explain that, of course, her goal at the moment is to checkmate the opponent’s King, because that is the win condition for a chess game. But, if asked if she felt (to use Carruthers’ phrase) “a warm glow of success” as a result of taking the opponent’s Queen, it sounds (from the case reports) as though Joan would say she did not feel any such thing.250
Or, suppose Joan had her Queen taken by the opponent’s Rook. If asked, perhaps she could report that this event reduced her chances of checkmating the opponent’s King, and that her goal (for the moment) was still to checkmate the opponent’s King. But, based on the AAD case reports I’ve seen, it seems that she would probably report that she felt no affective pang of disappointment or regret at the fact that her Queen had just been captured. Has anything morally negative happened to Joan?251 My intuitions say “no,” but perhaps Carruthers’ intuitions would say “yes.”
So as to more closely match Joan’s characteristics to Phenumb’s, we might also stipulate that Joan is a pain asymbolic (Grahek 2007) and also, let’s say, a “pleasure asymbolic.” Further, let’s stipulate that we can be absolutely certain Joan cannot recover from her conditions of AAD, pain asymbolia, and pleasure asymbolia. Is there now a moral good realized when Joan, say, wins a chess game or accomplishes some other goal? Part of me wants to say “Yes, of course! She has goals and aversions, and she can talk to you about them.” But upon further reflection, I’m not sure I should endorse those empathic impulses in the very strange case of Joan, and I’m not so sure I should think that moral good or harm is realized when Joan’s goals are realized or frustrated — putting aside her earlier experiences, including whatever events led to her AAD, pain asymbolia, and pleasure asymbolia.
6.1.3.3 My moral judgments about some third-person cases
It is difficult to state my moral intuitions about whether specific (brained) animals are moral patients or not, because I don’t know what their brains are doing. Neuroscientists know many things about how individual neurons work, and they are starting to learn a few things about how certain small populations of neurons work, and they can make some observations about how the brain works at the “macro” scale (e.g. via fMRI), but they don’t yet know which particular algorithms brains use to accomplish their tasks.252
Hence, it is easier to state my moral intuitions about computer programs, especially when I have access to their source code, or at least have a rough sense of how they were coded. (As a functionalist, I believe that the right kind of computer program would be conscious, regardless of whether it was implemented via a brain or brain-like structure or implemented some other way.) In the course of reporting some of my moral intuitions, I will also try to illustrate the problematic vagueness of psychological terms (more on this below).
For example, consider the short program below, written in Python (version 3).253 My hope is that even non-programmers will be able to understand what the code below does, especially with the help of my comments. (In Python code, any text following a # symbol is a “comment,” which means it is there to be read by human readers of the source code, and is completely ignored by the interpreter or compiler program that translates the human-readable source code into bytecode for the computer to run. Thus, comments do not affect how the program runs.)
# increment_my_pain.py

my_pain = 0  # Create a variable called my_pain, store the value 0 in it.

while True:  # Execute the code below, in a continuous loop, forever.
    my_pain += 1  # Increment my_pain by 1.
    print("My current pain level is " + str(my_pain) + ".")  # Print current value of my_pain.
If you compile and run this source code, it will continuously increment the value of my_pain by 1, and print the value of my_pain to the screen after each increment, like this:

My current pain level is 1.
My current pain level is 2.
My current pain level is 3.
…and so on, until you kill the process, the computer runs out of memory and crashes, or the process hits some safeguard built into the browser or operating system from which you are running the program.
My moral intuitions are such that I do not care about this program, at all. This program does not experience pain. It does not “experience” anything. There is nothing that it “feels like” to be this program, running on my computer. It has no “phenomenal consciousness.”
To further illustrate why I don’t care about this program, consider running the following program instead:
# increment_my_pleasure.py

my_pleasure = 0

while True:
    my_pleasure += 1
    print("My current pleasure level is " + str(my_pleasure) + ".")
Is the moral value of this program any different than that of increment_my_pain.py? I think not. The compiler doesn’t know what English speakers mean when we use the strings of letters “pleasure” and “pain.” In fact, if I hadn’t hard-coded the words “pleasure” and “pain” into the variable names and printed strings of each program, the compiler would transform increment_my_pain.py and increment_my_pleasure.py into the exact same bytecode, which would run exactly the same on the same virtual machine.254
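One way to check a claim like this is with Python’s built-in compile() and dis tools. The following sketch (the file name compare_bytecode.py is just illustrative, and exact outputs may vary slightly across CPython versions) compiles both toy programs and confirms that they produce identical opcode sequences, differing only in their tables of names and string constants:

# compare_bytecode.py -- an illustrative sketch, not part of the programs above.
import dis

pain_src = (
    'my_pain = 0\n'
    'while True:\n'
    '    my_pain += 1\n'
    '    print("My current pain level is " + str(my_pain) + ".")\n'
)
pleasure_src = pain_src.replace("pain", "pleasure")

pain_code = compile(pain_src, "increment_my_pain.py", "exec")
pleasure_code = compile(pleasure_src, "increment_my_pleasure.py", "exec")

# On CPython, the raw opcode sequences are identical; only the tables of
# variable names and string constants differ.
print(pain_code.co_code == pleasure_code.co_code)   # expected: True
print(pain_code.co_names)       # e.g. ('my_pain', 'print', 'str')
print(pleasure_code.co_names)   # e.g. ('my_pleasure', 'print', 'str')

# dis.dis() prints the same instruction stream for both code objects.
dis.dis(pain_code)
dis.dis(pleasure_code)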
The same points hold true for a similar program using a nonsensical variable name:
# increment_flibbertygibbets.py

flibbertygibbets = 0

while True:
    flibbertygibbets += 1
    print("My current count of flibbertygibbets is " + str(flibbertygibbets) + ".")
While this simple illustration is fairly uninformative and (I hope) uncontroversial, I do think that testing one’s moral intuitions against snippets of source code — or against existing programs for which one has some idea of how they work — is a useful way to make progress on questions of moral patienthood.255 Most discussions of the criteria for moral patienthood use vague psychological language such as “goals” or “experience,” which can be interpreted in many different ways. In contrast, computer code is precise.
To illustrate how problematic vague psychological language can be when discussing theories of consciousness and moral patienthood, I consider below how some computer programs could be said to qualify as conscious on some (perhaps not very charitable) interpretations of vague terms like “goals.”256 (Hereafter on this point, I’ll just say “moral patienthood,” since it is a common view, and the one temporarily assumed for this report, that consciousness is sufficient for moral patienthood.)
I don’t know whether advocates of these theories would agree that the programs I point to below satisfy their verbal description of their favorite theory. My guess is that in most cases, they wouldn’t think these programs are conscious. But, it’s hard to know for sure, and theories of consciousness could be clarified by pointing to existing programs or snippets of code that do and don’t satisfy various components of these theories.257 Such an exercise would provide a clearer account of theories of consciousness than is possible using vague terms such as “goal” and “self-modeling.”
Consider, for example, the algorithm controlling Mario in the animation below:258
In the full video, Mario dodges bullets, avoids falling into pits, runs toward the goal at the end of the level, stomps on the heads of some enemies but “knows” to avoid doing so for other enemies (e.g. ones with spiky shells), kills other enemies by throwing fireballs at them, intelligently “chooses” between many possible paths through the level (indicated by the red lines), and more. Very sophisticated behavior! And yet it is all a consequence of a very simple search algorithm called A* search.
I won’t explain how the A* search algorithm works, but if you take the time to examine it — see Wikipedia’s article on A* search for a general explanation, or GitHub for the source code of this Mario-playing implementation — I suspect you’ll be left with the same intuition I have: that the algorithm controlling Mario has no conscious experience, and is not a moral patient.259 And yet, this Mario-controlling algorithm arguably exhibits many of the features that are often considered to be strong indicators of consciousness.
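To give a concrete sense of how mechanical such a search is, here is a minimal A* implementation for a static 2D grid (my own illustrative sketch; the Mario-playing agent linked above searches over simulated future game states rather than a static grid, but the core algorithm is the same):

import heapq

def a_star(grid, start, goal):
    """Find a shortest path on a grid of strings, where '#' marks a wall.
    start and goal are (row, col) tuples; returns a list of cells, or None."""
    def h(cell):  # heuristic: Manhattan distance to the goal (admissible here)
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    frontier = [(h(start), 0, start, [start])]  # (estimated total cost, cost so far, cell, path)
    visited = set()
    while frontier:
        _, cost, cell, path = heapq.heappop(frontier)
        if cell == goal:
            return path
        if cell in visited:
            continue
        visited.add(cell)
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) and grid[nr][nc] != '#':
                heapq.heappush(frontier, (cost + 1 + h((nr, nc)), cost + 1,
                                          (nr, nc), path + [(nr, nc)]))
    return None  # no path exists

# A tiny level: '.' is open floor, '#' is a wall.
print(a_star(["....",
              ".##.",
              "...."], (0, 0), (2, 3)))

Everything the planner does reduces to pushing and popping entries on a priority queue.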
But these are just isolated cases, and there is a more systematic way we can examine our intuitions about moral patients, and explore the problematic vagueness of psychological terms, using computer code — or at least, using a rough description of code that we are confident experienced programmers could figure out how to write. We can start with a simple program, and then gradually add new features to the code, and consult our moral intuitions at each step along the way. That is the exercise I begin (but don’t finish) in the next section. Along the way, it will probably become clearer why I have a “fuzzy” view about consciousness. The next section probably also helps to illustrate what I find unsatisfying about all current theories of consciousness, a topic I discuss in more detail in Appendix B.
6.1.3.4 My moral judgments, illustrated with the help of a simple game
Many years ago, I discovered a series of top-down puzzle games called MESH: Hero. To get to the exit of each tile-based level, you must navigate the Hero character through the level, picking up items (e.g. keys), using those items (e.g. to open doors), avoiding obstacles and enemies (e.g. fire), and interacting with objects (e.g. pushing a slanted mirror in front of a laser so that the laser beam is redirected and burns through an obstacle for you). Each time the player moves the Hero character by one tile, everything else in the game “progresses” one step, too — for example, enemies move forward one step. (See animated screenshot.)
I wrote the code to add some additional interactive objects to the game,260 so I have some idea of how the game works at a source-code level. To illustrate, I’ll describe what happens when the Hero is standing on a tile that is within the blast zone of the Bomb when it explodes. First, a message is passed to check whether the Hero object has a Shield in its inventory. If it does, nothing happens. If the Hero object does not have a Shield, then the Hero object is removed from the level and a new HeroDead object — which looks like the Hero lying down beneath a gravestone — is placed on the same tile.
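To make that description concrete, here is a rough sketch of the logic (with hypothetical class and method names of my own invention — the real game’s source looks different, but the structure is the same kind of thing):

# Illustrative sketch only; these are not the actual MESH: Hero class names.
class HeroDead:
    """Placeholder object that takes the Hero's place on its tile when the Hero 'dies'."""

class Hero:
    def __init__(self, inventory=()):
        self.inventory = list(inventory)

    def on_bomb_blast(self, level, tile):
        # Message handler called when this Hero's tile is inside a Bomb's blast zone.
        if "Shield" in self.inventory:
            return                      # the Hero is shielded: nothing happens
        level[tile] = HeroDead()        # otherwise, swap in a HeroDead object

level = {(3, 4): Hero()}                # a toy "level": tile coordinates -> objects
level[(3, 4)].on_bomb_blast(level, (3, 4))
print(type(level[(3, 4)]).__name__)     # prints "HeroDead"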
Did anything morally bad happen, there? I think clearly not, for reasons pretty similar to why I don’t morally care about increment_my_pain.py. But, we can use this simplified setup to talk concretely — including with executable source code, if we want — about what we do and don’t intuitively morally care about.
In MESH: Hero, some enemies’ movements can be predicted using (roughly) Daniel Dennett’s “physical stance” (or perhaps his “design stance”). For example, at each time step (when the player moves), the Creeper — the pink object moving about in the animated screenshot — works like this: (1) If there is no obstacle one tile to the left, move one tile to the left, now facing that direction; (2) if there’s an obstacle to the left but no obstacle one tile straight ahead, move one tile straight ahead; (3) if there are obstacles to the left and straight ahead, but no obstacle one tile to the right, move one tile to the right and face that direction; (4) if there are obstacles to the left, straight ahead, and to the right, but not behind, move one space backward, now facing that direction; (5) if there are obstacles on all sides, do nothing.261 Now: is the Creeper a moral patient? I think not.
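Written out as code, the Creeper’s rule might look something like this (an illustrative sketch with names of my own choosing, assuming screen-style coordinates in which y increases downward):

# Illustrative sketch of the Creeper's movement rule (hypothetical names).
def creeper_step(position, facing, is_obstacle):
    """position: (x, y) tile; facing: unit vector (dx, dy), with y increasing downward;
    is_obstacle: function that says whether a given tile is blocked."""
    def turn_left(d):  return (d[1], -d[0])
    def turn_right(d): return (-d[1], d[0])

    # Try, in order: left, straight ahead, right, backward; if all are blocked, stay put.
    for new_facing in (turn_left(facing), facing, turn_right(facing), turn_left(turn_left(facing))):
        target = (position[0] + new_facing[0], position[1] + new_facing[1])
        if not is_obstacle(target):
            return target, new_facing
    return position, facing

# Facing "up" at (5, 5), with the tile to its left blocked: it moves straight ahead.
print(creeper_step((5, 5), (0, -1), lambda tile: tile == (4, 5)))   # ((5, 4), (0, -1))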
Some other enemies can be predicted using (roughly) Dennett’s “intentional stance.” For example the Worm, in action, looks as though it wants to get to the Hero. (The Worm is the purple moving object in the animated screenshot.) At each time step, the Worm retrieves the current X/Y coordinates of itself and the Hero (in the level’s grid of tiles), then moves one tile closer to the Hero, so long as there isn’t an obstacle in the way. For example, let’s designate columns with letters and rows with numbers, and say that the Worm is on G5 and the Hero is on E3. In this case, the Worm will be facing diagonally toward the Hero, and will try to move to F4 (diagonal moves are allowed). But if there is an obstacle on F4, it will instead try to move one tile “right and forward” (to G4). But if there’s also an obstacle on G4, it will try to move “left and forward” (to F5). And if there are obstacles on all those tiles, it will stay put. Given that the Worm could be said to have a “goal” — to reach the same tile as the Hero — is the Worm a moral patient? My moral judgment is “no.”
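The Worm’s rule is only slightly more complicated. Here is an illustrative sketch (again with hypothetical names, and with columns and rows given as numbers rather than letter/number pairs):

# Illustrative sketch of the Worm's movement rule (hypothetical names).
def worm_step(worm, hero, is_obstacle):
    """worm, hero: (col, row) tile coordinates; is_obstacle: function on tiles."""
    def sign(n):
        return (n > 0) - (n < 0)

    dx = sign(hero[0] - worm[0])    # one step toward the Hero along each axis
    dy = sign(hero[1] - worm[1])

    # Try the diagonal toward the Hero, then each single-axis step toward it; else stay put.
    for step in ((dx, dy), (0, dy), (dx, 0)):
        target = (worm[0] + step[0], worm[1] + step[1])
        if step != (0, 0) and not is_obstacle(target):
            return target
    return worm

# Worm on G5 = (7, 5), Hero on E3 = (5, 3), with F4 = (6, 4) blocked: the Worm moves to G4.
print(worm_step((7, 5), (5, 3), lambda tile: tile == (6, 4)))   # (7, 4)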
I imagine you have these same intuitions. Now, let’s imagine adding new features to the game, and consider at each step whether our moral intuitions change.262
1. Planning Hero: Imagine the Hero object is programmed to find its own path through the levels. This could essentially work the same way a chess-playing computer does: the Hero object would be programmed with knowledge of how all the objects in the game work, and then it would search all possible “paths” the game would take — including e.g. picking up keys and using them on doors, how each Worm would move in response to each of the Hero’s possible moves, and so on — and find at least one path to the Exit.263 The program could use A* search, or alpha-beta pruning, perhaps with some heuristic improvements. Alternately, the program could use a belief-desire-intention (BDI) architecture to enable its planning.264 Pathfinding or BDI-based versions of the Hero object would be even more tempting to interpret using Dennett’s “intentional stance” than the Worm is. Now is the Hero object a moral patient? (I think not.)
Does this version of the program satisfy any popular accounts of consciousness or moral patienthood? Again, it depends on how we interpret vague psychological terms. For example, Peter Carruthers argues that a being can be morally harmed by “the known or believed frustration of first-order desires,” and he is explicit that this does not require phenomenal consciousness.265 If the Hero object has an explicitly-programmed goal to reach the Exit object, and its (non-conscious) first-order desire to achieve this goal is frustrated (e.g. by obstacle or enemy objects), and the Hero object’s BDI architecture stores the fact that this desire was frustrated as one of its “beliefs,” has the Hero object been harmed in a morally relevant way? I would guess Carruthers thinks the answer is “no,” but why? Why wouldn’t the algorithm I’ve described count as having a belief that its first-order desire was frustrated? How would the program need to be different in order for it to have such a belief?
One might also wonder whether the Hero object in this version of the program satisfies (some interpretations of) the core Kantian criterion for moral patienthood, that of rational agency.266 Given that this Hero object is capable of its own means-end reasoning, is it thus (to some Kantians) an “end in itself,” whose dignity must be respected? Again, I would guess the answer is “no,” but why? What counts as “rational agency,” if not the means-end reasoning of the Hero object described above? What computer program would count as exhibiting “rational agency,” if any?
2. Partially observable environment: Suppose the Hero still uses a pathfinding algorithm to decide its next move, except that instead of having access to the current location and state of every object in the level, it only has access to the location and state of every object “within the Hero’s direct line of sight” — that is, not on the other side of a wall or some other opaque object, relative to the Hero’s position. Now the environment is only “partially observable.” In cases where a path to the Exit is not findable via the objects the Hero can “see,” the Hero object will systematically explore the space (via its modified pathfinding algorithm) until its built-up “knowledge” of the level is complete enough for its pathfinding algorithm to find a path to the Exit. Is the Hero object now a moral patient?
3. Non-discrete movement and collision detection: Suppose that objects in the game “progress” not whenever the Hero moves, but once per second. (The Hero also has one opportunity to move per second.) Moreover, when objects move, they do not “jump” discretely from one tile to the next, but instead their location changes “continuously” (i.e. one pixel at a time; think of a pixel as the smallest possible area in a theory of physics that quantizes area, such as loop quantum gravity) from the center of one tile to the center of the next tile. Let’s say tiles are 1000×1000 pixels (it’s now a very high-resolution game), and since objects move at one tile-width per second, that means they move one pixel per millisecond (ms). Now, instead of objects interacting by checking (at each time step) whether they are located on the same tile as another object, there is a collision detection algorithm run by every object to check whether another object has at least one pixel overlapping with one of its own pixels. Each object checks a ten-pixel-deep layer of pixels running around its outermost edge (let’s call this layer the “skin” of each object), each millisecond. So e.g. if the Hero’s collision detection algorithm detects that a pixel on the Hero’s “face” is overlapping with a pixel of a Worm, then the Hero object is removed from the level and replaced with the HeroDead object immediately, without waiting until both the Hero and the Worm have completed their moves to the center of the same tile. Is the Hero object now a moral patient? (I still think not.)
4. Nociception and nociceptive reflexes: Now, suppose we give the Hero object nociceptors. That is: 1/100th of the pixels in the Hero’s “skin” layer are designated as “nociceptors.” Once per ms, the Hero’s CheckNociception() function checks those pixels for collisions with the pixels of other objects, and if it detects such a “collision,” it runs the NociceptiveReflex() function, which moves the Hero “away” from that collision at a speed of 1 pixel per 0.5ms. By “away,” I mean that, for example, if the collision happened in a pre-defined region of the Hero’s skin layer that is sensibly called the “top-right” region, the Hero moves toward the center of the tile that is one tile down and left from the tile that the center of the Hero is currently within. Naturally, the Hero might fail to move in this direction because it detects an obstacle on that tile, in which case it will stay put. Or there might be a Worm or other enemy on that tile. In any case, another new function executed by the Hero object, CheckInjury(), runs a collision check for all pixels “inside” (closer to the center than) the skin layer, and if there are any such collisions detected, the Hero object is replaced with HeroDead. Is the Hero object a moral patient now? (My moral judgment remains “no.”)
5. Health meter: Next, we give the Hero object an integer-type variable called SelfHealth, which initializes at 1000. When it reaches 0, the Hero object is replaced with the HeroDead object. Each collision detection in the Hero’s skin layer reduces the SelfHealth variable by 1, and each collision detection “inside” the Hero’s skin layer reduces the SelfHealth variable by 5. Now is the Hero a moral patient? (I still think “no.”)
6. Nociception sent to a brain: Now, a new sub-object of the Hero, called Brain, is the object that can call the NociceptiveReflex() function. It also runs its own collision detection for a 50×50 box of pixels (the “brain pixels”) in the middle of the Hero’s “head,” and if it detects collisions with other “external” objects (e.g. a Worm) there, SelfHealth immediately goes to 0. Moreover, rather than a single Hero-wide CheckNociception() function checking for pixel collisions at each of the pixels designated “nociceptors,” each nociceptor is instead defined in the game as its own object, and it runs its own collision detection function. If a nociceptor detects a collision, it creates a new object called NociceptiveSignal, which thereafter moves at a speed of 1 pixel per 0.1ms toward the nearest of the “brain pixels.” If the Brain object’s CheckNociception() function detects a collision between a “brain pixel” and a NociceptiveSignal object (instead of with an “external” object like a Worm), then it executes the NociceptiveReflex() function, using data stored in the NociceptiveSignal object to determine which edge of the Hero to move “away” from. Is the Hero object, finally, a moral patient?
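To emphasize how little machinery these additions actually require, here is a compressed, schematic sketch of the nociceptor, health-meter, and Brain features from steps 4–6 (with hypothetical names of my own, and with the pixel-level and timing details above collapsed into a few method calls):

# Schematic sketch of steps 4-6 (hypothetical names; pixel-level and timing details omitted).
class NociceptiveSignal:
    def __init__(self, region):
        self.region = region             # which region of the "skin" detected the collision

class Brain:
    """Receives NociceptiveSignal objects and triggers the Hero's reflex (step 6)."""
    def __init__(self, hero):
        self.hero = hero

    def check_nociception(self, signals):
        for signal in signals:
            self.hero.nociceptive_reflex(signal.region)

class Hero:
    # Move one tile "away" from the region of skin where the collision occurred
    # (columns increase to the right, rows increase downward).
    REFLEX_DIRECTION = {"top-right": (-1, +1), "top-left": (+1, +1),
                        "bottom-right": (-1, -1), "bottom-left": (+1, -1)}

    def __init__(self, position):
        self.position = position
        self.self_health = 1000          # step 5: the health meter
        self.brain = Brain(self)         # step 6: nociceptive signals are routed to a Brain

    def check_nociception(self, skin_collisions):
        # steps 4 and 6: each skin collision costs 1 health point and spawns a signal
        signals = []
        for region in skin_collisions:
            self.self_health -= 1
            signals.append(NociceptiveSignal(region))
        self.brain.check_nociception(signals)

    def nociceptive_reflex(self, region):
        dx, dy = self.REFLEX_DIRECTION[region]
        self.position = (self.position[0] + dx, self.position[1] + dy)   # obstacle checks omitted

hero = Hero(position=(4, 4))
hero.check_nociception(["top-right"])    # something brushed the Hero's top-right "skin"
print(hero.position, hero.self_health)   # (3, 5) 999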
By now, the program I’m describing seems like it might satisfy several of the criteria that Braithwaite (2010) uses to argue that fishes are conscious, including the presence of nociceptors, the transmission of nociceptive signals to a brain for central processing, the ability to use mental representations, a rudimentary form of self-modeling (e.g. via the SelfHealth variable, and via making plans to navigate the Hero object to the Exit while avoiding events that would cause the Hero object to be replaced with HeroDead), and so on.267 And yet, I don’t think this version of the Hero object is conscious, and I’d guess that Braithwaite would agree. But if this isn’t what Braithwaite means by “nociception,” “mental representations,” and so on, then what does she mean? What program would satisfy one or more of her indicators of consciousness?
I think this exercise can be continued, in tiny steps, until we’ve described a sophisticated 2D Hero object that seems to exhibit many commonly-endorsed criteria for (or indicators of) moral patienthood or consciousness.268 Moreover, such sophisticated Hero objects could not just be described, but (I claim) programmed and run. And yet, when I carry out that exercise (in my head), I typically do not end up having the intuition that any of those versions of the MESH: Hero code — especially those described above — are conscious, or moral patients.
There are, however, two kinds of situations, encountered when continuing this exercise in my head, in which I begin to worry that the program I’m imagining might be a phenomenally conscious moral patient if it was coded and run.
First, I begin to worry about the Hero object’s moral patienthood when the program I’m imagining gets so complicated that I can no longer trace what it’s doing, e.g. if I control the Hero agent using a very large deep reinforcement learning agent that has learned to navigate the game world via millions of play-throughs using only raw pixel data, or if I control the Hero object using a complicated candidate solution discovered via an evolutionary algorithm.269
Second, I begin to worry about the Hero object’s moral patienthood when it begins to look like the details of my own phenomenal experience might be pretty fully captured by how the program I’m imagining works, and thus I start to worry it might be a moral patient precisely because I can trace what it’s doing. My approach assumes that phenomenal consciousness is how a certain kind of algorithm feels “from the inside,”270 and, after some thought, I was able to piece together (in my head) a very rough sketch of a program that, from the outside, looks to me like it might, with some elaboration and careful design beyond what I was able to sketch in my head, feel (from the inside) something like my own phenomenal experience feels to me. (Obviously, this conclusion is very speculative, and I don’t give it much weight, and I don’t make use of it in the rest of this report, but it is also quite different from my earlier state of understanding, under which no theory or algorithm I had read about or considered seemed to me like it might even come close to feeling from the inside like my own phenomenal experience feels to me.)
Unfortunately, it would require a very long report for me to explore and then explain what I think such a program looks like (given my intuitions), so for this report all I’ve done is point to some of the key inspirations for my intuitive, half-baked “maybe-conscious” program (my “GDAK” account, sketched briefly in Appendix B below). In the future, I hope to describe this program in some detail, and then show how my moral intuitions respond to various design tweaks, but we decided this exercise fell beyond the scope of this initial report on moral patienthood.
In any case, I hope I have explained at least a few things about how my moral intuitions work with respect to moral patienthood and consciousness, so that my readers have some sense of “where I’m coming from.”
6.2 Appendix B. Toward a more satisfying theory of consciousness
In this appendix, I describe some popular theories of (human) consciousness, explain the central reason why I find them unsatisfying, and conclude with some thoughts about how a more satisfying theory of consciousness could be constructed. (See also my comments above about Michael Tye’s PANIC theory.)
In short, I think even the most compelling extant theories of consciousness are, in the words of Cohen & Dennett (2011):
…merely the beginning, rather than the end, of the study of consciousness. There is still much work to be done…
Neuroscientist Michael Graziano states the issue more vividly (and less charitably):271
I was in the audience watching a magic show. Per protocol a lady was standing in a tall wooden box, her smiling head sticking out of the top, while the magician stabbed swords through the middle.
A man sitting next to me whispered to his son, “Jimmy, how do you think they do that?”
The boy must have been about six or seven. Refusing to be impressed, he hissed back, “It’s obvious, Dad.”
“Really?” his father said. “You figured it out? What’s the trick?”
“The magician makes it happen that way,” the boy said.
Graziano’s point is that “the magician makes it happen” is not much of an explanation. There is still much work to be done. Current theories of consciousness take a few steps toward explaining the details of our conscious experience, but at some point they end up saying “and then [such-and-such brain process] makes consciousness happen.” And I want to say: “Well, that might be right, but how do those processes make consciousness happen?”272 Or in some cases, a theory of consciousness might not make any attempt to explain some important feature of consciousness, not even at the level of “[such-and-such brain process] makes it happen.”
As I said earlier, I think a successful explanation of consciousness would show how the details of some theory predict, with a fair amount of precision, the explananda of consciousness — i.e., the specific features of consciousness that we know about from our own phenomenal experience and from (reliable, validated) cases of self-reported conscious experience (e.g. in experiments, or in brain lesion studies).
Current theories of consciousness, I think, do not “go far enough” — i.e., they don’t explain enough consciousness explananda, with enough precision — to be compelling (yet).273 Below, I elaborate this issue with respect to three popular theories of consciousness (for illustrative purposes): temporal binding theory, integrated information theory, and global workspace theory.274
It’s possible this “doesn’t go far enough” complaint would be largely accepted by the leading proponents of these theories, because (I would guess) none of them think they have described a “final” theory of consciousness, and (I would guess) all of them would admit there are many details yet to be filled in. This is, after all, a normal way to make progress in science: propose a simple model, use the model to make novel predictions, test those predictions, revise the model in response to experimental results, and so on. Nevertheless, in some cases the leading proponents of these theories write as though they have already put forward a near-final theory of consciousness, and I hope to illustrate below why I think we have “a long way to go,” even if these theories are “on the right track,” and then explain how I think we can do better (with a lot of hard work).
6.2.1 Temporal binding theory
Of the modern theories of consciousness, the first one Graziano (2013) complains about (ch. 1) is Francis Crick and Christof Koch’s temporal binding theory:
[Crick and Koch] suggested that when the electrical signals in the brain oscillate they cause consciousness. The idea… goes something like this: the brain is composed of neurons that pass information among each other. Information is more efficiently linked from one neuron to another, and more efficiently maintained over short periods of time, if the electrical signals of neurons oscillate in synchrony. Therefore, consciousness might be caused by the electrical activity of many neurons oscillating together.
This theory has some plausibility. Maybe neuronal oscillations are a precondition for consciousness. But note that… the hypothesis is not truly an explanation of consciousness. It identifies a magician. Like the Hippocratic account, “The brain does it” (which is probably true)… this modern theory stipulates that “the oscillations in the brain do it.” We still don’t know how. Suppose that neuronal oscillations do actually enhance the reliability of information processing. That is impressive and on recent evidence apparently likely to be true. But by what logic does that enhanced information processing cause the inner experience? Why an inner feeling? Why should information in the brain — no matter how much its signal strength is boosted, improved, maintained, or integrated from brain site to brain site — become associated with any subjective experience at all? Why is it not just information without the add-on of awareness?
I should note that Graziano is too harsh, here. Crick & Koch (“C&K”) make more of an effort to connect the details of their model to the explananda of consciousness than Graziano suggests. There is more to C&K’s account than just “the oscillations in the brain do it.”275 But, in the end, I agree with Graziano that C&K do not “go far enough” with their theory to make it satisfying. As Graziano says elsewhere:
…the theory provides no mechanism that connects neuronal oscillations in the brain to a person being able to say, “Hey, I have a conscious experience!” You couldn’t give the theory to an engineer and have her understand, even in the foggiest way, how one thing leads to the other.
I think this is a good test for theories of consciousness: If you described your theory of consciousness to a team of software engineers, machine learning experts, and roboticists, would they have a good idea of how they might, with several years of work, build a robot that functions according to your theory? And would you expect it to be phenomenally conscious, and (additionally stipulating some reasonable mechanism for forming beliefs or reports) to believe or report itself to have phenomenal consciousness for reasons that are fundamentally traceable to the fact that it is phenomenally conscious?
For a similar attitude toward theories of consciousness, see also the (illusionist-friendly) introductory paragraph of Molyneux (2012):
…Instead of attempting to solve what appears unsolvable, an alternative reaction is to investigate why the problem seems so hard. In this way, Minsky (1965) hoped, we might at least explain why we are confused. Since a good way to explain something is often to build it, a good way to understand our confusion [about consciousness] may be to build a robot that thinks the way we do… I hope to show how, by attempting to build a smart self-reflective machine with intelligence comparable to our own, a robot with its own hard problem, one that resembles the problem of consciousness, may emerge.
6.2.2 Integrated information theory
Another popular theory of consciousness is Integrated Information Theory (IIT), according to which consciousness is equal to a measure of integrated information denoted Φ (“phi”). Oizumi et al. (2014) explains the basics:
Integrated information theory (IIT) approaches the relationship between consciousness and its physical substrate by first identifying the fundamental properties of experience itself: existence, composition, information, integration, and exclusion. IIT then postulates that the physical substrate of consciousness must satisfy these very properties. We develop a detailed mathematical framework in which composition, information, integration, and exclusion are defined precisely and made operational. This allows us to establish to what extent simple systems of mechanisms, such as logic gates or neuron-like elements, can form complexes that can account for the fundamental properties of consciousness. Based on this principled approach, we show that IIT can explain many known facts about consciousness and the brain, leads to specific predictions, and allows us to infer, at least in principle, both the quantity and quality of consciousness for systems whose causal structure is known. For example, we show that some simple systems can be minimally conscious, some complicated systems can be unconscious, and two different systems can be functionally equivalent, yet one is conscious and the other one is not.
I won’t explain IIT any further; see other sources for more detail.276 Instead, let me jump straight to my reservations about IIT.277
I have many objections to IIT, for example that it predicts enormous quantities of consciousness in simple systems for which we have no evidence of consciousness.278 But here, I want to focus on the issue that runs throughout this section: IIT does not predict many consciousness explananda with much precision.
Graziano provides the following example:279
[One way to test IIT] would be to test whether human consciousness fades when integration in the brain is reduced. Tononi emphasizes the case of anesthesia. As a person is anesthetized, integration among the many parts of the brain slowly decreases, and so does consciousness… But even without doing the experiment, we already know what the result must be. As the brain degrades in its function, so does the integration among its various parts and so does the intensity of awareness. But so do most other functions. Even many unconscious processes in the brain depend on integration of information, and will degrade as integration deteriorates.
The underlying difficulty here is… the generality of integrated information. Integrated information is so pervasive and so necessary for almost all complex functions in the brain that the theory is essentially unfalsifiable. Whatever consciousness may be, it depends in some manner on integrated information and decreases as integration in the brain is compromised.
In other words, IIT doesn’t do much to explain why some brain processes are conscious and others are not, since all of them involve integrated information. Indeed, as far as I can tell, IIT proponents think that a great many brain processes typically thought of as paradigm cases of unconscious cognitive processing are in fact conscious, but we are unaware of this.280 In principle, I agree that a well-confirmed theory could make surprising predictions about things we can’t observe (yet, or possibly ever), and that if the theory is well-enough supported then we should take those predictions quite seriously, but I don’t think IIT is so well-confirmed yet. In the meantime, IIT seems unsatisfying to the extent that it fails to predict some fairly important explananda of consciousness, for example that some highly “integrated” cognitive processing is, as far as we know, unconscious.
Moreover, Graziano says, IIT doesn’t do much to explain the reportability of consciousness (in any detail281):
The only objective, physically measurable truth we have about consciousness is that we can, at least sometimes, report that we have it. I can say, “The apple is green,” like a well-regulated wavelength detector, providing no evidence of consciousness; but I can also claim, “I am sentient; I have a conscious experience of green.”
…The integrated information [theory]… is silent on how we get from being conscious to being able to report, “I have a conscious experience.” Yet any serious theory of consciousness must explain the one objective fact that we have about consciousness: that we can, in principle, at least sometimes, report that we have it.
In discussion with colleagues, I have heard the following argument… The brain has highly integrated information. Highly integrated information is (so the theory goes) consciousness. Problem solved. Why do we need a special mechanism to inform the brain about something that it already has? The integrated information is already in there; therefore, the brain should be able to report that it has it.
…[But] the brain contains a lot of items that it can’t report. The brain contains synapses, but nobody can introspect and say, “Yup, those serotonin synapses are particularly itchy today.” The brain regulates the flow of blood through itself, but nobody has cognitive access to that process either. For a brain to be able to report on something, the relevant item can’t merely be present in the brain but must be encoded as information in the form of neural signals that can ultimately inform the speech circuitry.
The integrated information theory of consciousness does not explain how the brain, possessing integrated information (and, therefore, by hypothesis, consciousness), encodes the fact that it has consciousness, so that consciousness can be explicitly acknowledged and reported. One would be able to report, “The apple is green,” like a well-calibrated spectral analysis machine… One would be able to report a great range of information that is indeed integrated. The information is all of a type that a sophisticated visual processing computer, attached to a camera, could decode and report. But there is no proposed mechanism for the brain to arrive at the conclusion, “Hey, green is a conscious experience.” How does the presence of conscious experience get turned into a report?
To get around this difficulty and save the integrated information theory, we would have to postulate that the integrated information that makes up consciousness includes not just information that depicts the apple but also information that depicts what a conscious experience is, what awareness itself is, what it means to experience. The two chunks of information would need to be linked. Then the system would be able to report that it has a conscious experience of the apple…
These examples illustrate (but don’t exhaust) the ways in which IIT doesn’t predict the explananda of consciousness in as much detail as I’d like.
What about global workspace theory?
6.2.3 Global workspace theory
One particularly well-articulated theory of consciousness is Bernard Baars’ Global Workspace Theory (GWT), including variants such as Stanislas Dehaene’s Global Neuronal Workspace Theory (Dehaene 2014), and GWT’s implementation in the LIDA cognitive architecture (Franklin et al. 2012).282
Weisberg (2014), ch. 6, explains the basics of GWT succinctly:283
Perhaps the best developed empirical theory of consciousness is the global workspace view (Baars 1988; 1997). The basic idea is that conscious states are defined by their “promiscuous accessibility,” by being available to the mind in ways that nonconscious states are not. If a state is nonconscious, you just can’t do that much with it. It will operate automatically along relatively fixed lines. However, if the state is conscious, it connects with the rest of our mental lives, allowing for the generation of far more complex behavior. The global workspace (GWS) idea takes this initial insight and develops a psychological theory – one pitched at the level of cognitive science, involving a high-level decomposition of the mind into functional units. The view has also been connected to a range of data in neuroscience, bolstering its plausibility…
So, how does the theory go? First, the GWS view stresses the idea that much of our mental processing occurs modularly. Modules are relatively isolated, “encapsulated” mechanisms devoted to solving limited, “domain-specific” problems. Modules work largely independently from each other and they are not open to “cross talk” coming from outside their focus of operation. A prime example is how the early vision system works to create the 3-D array we consciously experience. Our early-vision modules automatically take cues from the environment and deliver rapid output concerning what’s in front of us. For example, some modules detect edges, some the intersection of lines or “vertices,” some subtle differences in binocular vision, and so on. To work most efficiently, these modules employ built-in assumptions about what we’re likely to see. In this way, they can quickly take an ambiguous cue and deliver a reliable output about what we’re seeing. But this increase in speed leads to the possibility of error when the situation is not as the visual system assumes. In the Müller-Lyer illusion [see right], two lines of the same length look unequal because of either inward- or outward-facing “points” on the end of the lines. And even if we know they’re the same length, because we’ve seen these dang lines hundreds of times, we still consciously see them as unequal. This is because the process of detecting the lines takes the vertices where the points attach to the lines as cues about depth. In the real world, when we see such vertices, we can reliably use them to tell us what’s closer to what. But the Müller-Lyer illusion uses this fact to trick early vision into seeing things incorrectly. The process is modular because it works automatically and it’s immune to correction from our conscious beliefs about the lines.
Modularity is held to be a widespread phenomenon in the mind. Just how widespread is a matter of considerable debate, but most researchers would accept that at least some processes are modular, and early perceptual processes are the best candidates. The idea of the GWS is that the workspace allows us to connect and integrate knowledge from a number of modular systems. This gives us much more flexible control of what we do. And this cross-modular integration would be especially useful to a mind more and more overloaded with modular processes. Hence, we get an evolutionary rationale for the development of a GWS: when modular processing becomes too unwieldy and when the complexity of the tasks we must perform increases, there will be advantages to having a cross-modular GWS.
Items in the global workspace are like things posted on a message board or a public blog. All interested parties can access the information there and act accordingly. They can also alter the info by adding their own input to the workspace. The GWS is also closely connected to short-term working memory. Things held in the workspace can activate working memory, allowing us to keep conscious percepts in mind as we work on problems. Also, the GWS is deeply intertwined with attention. We can activate attention to focus on specific items in the network. But attention can also influence what gets into the workspace in the first place. Things in the network can exert a global “top-down” influence on the rest of the mind, allowing for coordination and control that couldn’t be achieved by modules in isolation. To return to a functionalist way of putting things, if a system does what the GWS does, then the items in that system are conscious. That’s what consciousness amounts to [according to GWT].
[To sum up:] Much mental activity is nonconscious, occurring in low-level modules. However, when modular information is “taken up” by the GWS, it becomes available to a wide range of mental systems, allowing for flexible top-down control. This is the functional mark of consciousness.
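To connect this verbal description to code — in the spirit of the “engineer’s test” I suggested above — here is one toy way to render the “message board” metaphor (a sketch of my own, with invented names, that is in no way faithful to Baars’ or Dehaene’s actual models; its point is only to show how little the verbal description by itself constrains an implementation):

# A toy rendering of the "message board" metaphor (invented names; not a faithful model of GWT).
class Module:
    def __init__(self, name, salience):
        self.name = name
        self.salience = salience          # stand-in for how strongly this module's content competes
        self.received = []

    def propose(self, stimulus):
        return (self.salience, self.name + ": " + stimulus)

    def receive_broadcast(self, content):
        self.received.append(content)     # every module can now act on the broadcast content

class GlobalWorkspace:
    def __init__(self, modules):
        self.modules = modules

    def cycle(self, stimulus):
        # Modules compete; only the most salient content "enters" the workspace...
        salience, winner = max(module.propose(stimulus) for module in self.modules)
        # ...and is then broadcast back to every module ("promiscuous accessibility").
        for module in self.modules:
            module.receive_broadcast(winner)
        return winner

modules = [Module("early vision", 0.9), Module("episodic memory", 0.4), Module("speech", 0.2)]
print(GlobalWorkspace(modules).cycle("a green apple"))   # early vision: a green apple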
I won’t explain GWT any further here; see other sources for more detail.284 Instead, I jump once again to the primary issue that runs throughout this section,285 this time applied to GWT.
To be concrete, I’ll address Dehaene’s neurobiological version of GWT. What, exactly, is a conscious state, according to Dehaene?286
…a conscious state is encoded by the stable activation, for a few tenths of a second, of a subset of active workspace neurons. These neurons are distributed in many brain areas, and they all code for different facets of the same mental representation. Becoming aware of the Mona Lisa involves the joint activation of millions of neurons that care about objects, fragments of meaning, and memories.
During conscious access, thanks to the workspace neurons’ long axons, all these neurons exchange reciprocal messages, in a massively parallel attempt to achieve a coherent and synchronous interpretation. Conscious perception is complete when they converge.
Perhaps Dehaene is right that a conscious state results from the stable activation of workspace neurons that collectively code for all the different facets of that state, which occurs when the messages being passed by these neurons “converge.” But I still want to know: how does merely pooling information into a global workspace, allowing that information to be accessed by diverse cognitive modules, result in a phenomenal experience? Why should this make the brain insist that it is “conscious” of some things and not others? Why does this result in the intuition of an explanatory gap (the “hard problem”)? And so on.
6.2.4 What a more satisfying theory of consciousness could look like
I could make similar comments about many other theories of consciousness, for example the theories which lean heavily on prediction error minimization (Hohwy 2012; Clark 2013), recurrent processing (Lamme 2010), higher-order representations (Carruthers 2016), and “multiple drafts” (Dennett 1991). In all these cases, my concern is not so much that they are wrong (though they may be), but instead that they don’t “go far enough.”
In fact, I think it’s plausible that several of these theories say something important about how various brain functions work, including brain functions that are critical to conscious experience (in humans, at least). Indeed, on my view, it is quite plausibly the case that consciousness depends on integrated information and higher-order representations.287 And it would not surprise me if human consciousness also depends on prediction error minimization, recurrent processing, “multiple drafts,” and a global workspace. The problem is just that none of these ideas, or even all of these ideas combined, seem sufficient to explain, with a decent amount of precision, most of the key features of consciousness we know about.
Graziano’s own “attention schema theory” (described in a footnote288) has this problem, too, but (in my opinion) it “goes further” than most theories do (though, not by much).289 In fact, it does so in part by assuming that integrated information, higher-order representations, a global workspace, and some features of Dennett’s “multiple drafts” account do play a role in consciousness, and then Graziano adds some details to that foundation, to construct a theory of consciousness which (in my opinion) explains the explananda of consciousness a bit more thoroughly, and with a bit more precision, than any of those earlier theories do on their own.
Note that I likely have this opinion of Graziano’s theory largely because it offers an (illusionist) explanation of our dualist intuitions, and our dualist intuitions constitute one explanandum of consciousness that, as far as I can tell, the theories I briefly surveyed above (temporal binding theory, IIT, GWT) don’t do much to explain.
Furthermore, I can think of ways to supplement Graziano’s theory with additional details that explain some additional consciousness explananda beyond what Graziano’s theory (as currently stated) can explain. For example, Graziano doesn’t say much about the ineffability of qualia, but I think a generalization of Gary Drescher’s “qualia as gensyms” account,290 plus the usual points about how the fine-grained details of our percepts “overflow” the concepts we might use to describe those percepts,291 explain that explanandum pretty well, and could be added to Graziano’s account. Graziano also doesn’t explain why we have the conviction that qualia cannot be “just” brain processes and nothing more, but intuitively it seems to me that an inference algorithm inspired by Armstrong (1968) might explain that conviction pretty well.292 But why do we find it so hard to even make sense of the hypothesis of illusionism about consciousness, even though we don’t have trouble understanding how other kinds of illusions could be illusions? Perhaps an algorithm inspired by Kammerer (2016) could instantiate this feature of human consciousness.293
And so on. I take this to be the sort of work that Marinsek & Gazzaniga (2016) call for in response to Frankish (2016b)’s defense of illusionism about consciousness:
One major limitation of [illusionism as described by Frankish] is that it does not offer any mechanisms for how the illusion of phenomenal feelings works. As anyone who has seen a magic trick knows, it’s quite easy to say that the trick is an illusion and not the result of magical forces. It is much, much harder to explain how the illusion was created. Illusionism can be a useful theory if mechanisms are put forth that explain how the brain creates an illusion of phenomenal feelings…
…phenomenal consciousness may not be the product of one grand illusion. Instead, phenomenal consciousness may be the result of multiple ‘modular illusions’. That is, different phenomenal feelings may arise from the limitations or distortions of different cognitive modules or networks… Illusionism therefore may not have to account for one grand illusion, but for many ‘modular illusions’ that each have their own neural mechanisms.
If I were a career consciousness theorist, I think this is how I would try to make progress toward a theory of consciousness, given my current intuitions about what is most likely to be successful:
- First, I’d write some “toy programs” that instantiate some of the key aspects of a Graziano / Drescher / Armstrong / Kammerer (GDAK) account of consciousness.294
- If step (1) seemed productive, I’d consider taking on the more ambitious project of working with a team of software engineers and machine learning experts to code a GDAK-inspired cognitive architecture295 for controlling an agent in a simple virtual 3D world. We’d share the source code, and we’d write an explanation of how we think it explains, with some precision, many of the key explananda of consciousness.
- We’d think about which features of our own everyday internal experiences, including our felt confusions about consciousness, don’t yet seem to be captured by the cognitive architecture we’ve coded, and we’d try to find ways to add those features to the cognitive architecture, and then explain how we think our additions to the cognitive architecture capture those additional features of consciousness.
- We’d do the same thing for additional consciousness explananda drawn not from our own internal experiences, but from (reliable, validated) self-reports from others, e.g. from experimental studies and from brain lesion cases.296
- We’d invite others to explain why they don’t think this cognitive architecture captures the explananda we claim it captures, and which additional most-important explananda are still not captured by the architecture, and we’d try to modify and extend the cognitive architecture accordingly, and then explain why we think those modifications are successful.
- We’d use the latest version of the cognitive architecture to make novel predictions about what human subjects will self-report under various experimental conditions if their consciousness is similar in the right ways to our cognitive architecture, and then test those predictions, and modify the cognitive architecture in response to the experimental results.
One caveat to all this is that I’m not sure the cognitive architecture could ever be run in this case, as some parts of the code would have to be left as “black boxes” that we don’t know how to code. Coding a virtual agent that really acted like a conscious human, including in its generated speech about qualia, might be an AI-complete problem. However, the hope would be that all the incomplete parts of the code wouldn’t be specific to consciousness, but would concern other features, such as general-purpose learning. As a result, the predictions generated from the cognitive architecture couldn’t be directly computed, but would instead need to be argued for, as in usual scientific practice.297
Perhaps this process sounds like a lot of work. Surely, it is. But it does not seem impossible. In fact, it is not too dissimilar from the process Bernard Baars, Stan Franklin, and others have used to implement global workspace theory in the LIDA cognitive architecture.
6.3 Appendix C. Evidence concerning unconscious vision
In this appendix, I summarize much of the evidence cited in favor of the theory that human visual processing occurs in multiple streams, only one of which leads to conscious visual experience, as described briefly in an earlier section. To simplify the exposition, I present here only the positive case for this theory, even though there is also substantial evidence that challenges the theory (see below), and thus I think we should only assign it (or something like it) moderate credence.
My primary source for most of what follows is Goodale & Milner (2013).298 (Hereafter, I refer to Goodale & Milner as “G&M,” and I refer to their 2013 book as “G&M-13.”)
6.3.1 Multiple vision systems in simpler animals
First, consider “vision for action” in organisms much simpler than humans:299
A single-cell organism like the Euglena, which lives in ponds and uses light as a source of energy, changes its pattern of swimming according to the different levels of illumination it encounters in its watery world. Such behavior keeps Euglena in regions of the pond where an important resource, sunlight, is available. But although this behavior is controlled by light, no one would seriously argue that the Euglena “sees” the light or that it has some sort of internal model of the outside world. The simplest and most obvious way to understand this behavior is that it works as a simple reflex, translating light levels into changes in the rate and direction of swimming. Of course, a mechanism of this sort, although activated by light, is far less complicated than the visual systems of multicellular organisms. But even in complex organisms like vertebrates, many aspects of vision can be understood entirely as systems for controlling movement, without reference to perceptual experience or to any general-purpose representation of the outside world.
Vertebrates have a broad range of different visually guided behaviors. What is surprising is that these different patterns of activity are governed by quite independent visual control systems. The neurobiologist, David Ingle, for example, showed during the 1970s that when frogs catch prey they use a quite separate visuomotor “module” from the one that guides them around visual obstacles blocking their path [Ingle (1973)]. These modules run on parallel tracks from the eye right through the brain to the motor output systems that execute the behavior. Ingle demonstrated the existence of these modules by taking advantage of the fact that nerves… in the frog’s brain, unlike those in the mammalian brain, can regenerate new connections when damaged. In his experiments, he was able to “rewire” the visuomotor module for prey catching by first removing a structure called the optic tectum on one side. The optic nerves that brought information from the eye to the optic tectum on the damaged side of the brain were severed by this surgery. A few weeks later, however, the cut nerves re-grew, but finding their normal destination missing, crossed back over and connected with the remaining optic tectum on the other side of the brain. As a result, when these “rewired” frogs were later tested with artificial prey objects, they turned and snapped their tongue to catch the prey — but in the opposite direction… This “mirror-image” behavior reflected the fact that the prey-catching system in these frogs was now wired up the wrong way around.
But this did not mean that their entire visual world was reversed. When Ingle tested the same frogs’ ability to jump around a barrier blocking their route, their movements remained quite normal, even when the edge of the barrier was located in the same part of space where they made prey-catching errors… It was as though the frogs saw the world correctly when skirting around a barrier, but saw the world mirror-imaged when snapping at prey. In fact, Ingle discovered that the optic nerves were still hooked up normally to a separate “obstacle avoidance module” in a part of the brain quite separate from the optic tectum. This part of the brain, which sits just in front of optic tectum, is called the pretectum. Ingle was subsequently able to selectively rewire the pretectum itself in another group of frogs. These animals jumped right into an obstacle placed in front of them instead of avoiding it, yet still continued to show normal prey catching.
So what did these rewired frogs “see”? There is no sensible answer to this. The question only makes sense if you believe that the brain has a single visual representation of the outside world that governs all of an animal’s behavior. Ingle’s experiments reveal that this cannot possibly be true. Once you accept that there are separate visuomotor modules in the brain of the frog, the puzzle disappears. We now know that there are at least five separate visuomotor modules in the brains of frogs and toads, each looking after a different kind of visually guided behavior and each having distinct input and output pathways. Obviously the outputs of these different modules have to be coordinated, but in no sense are they all guided by a single visual representation of the world residing somewhere in the frog’s brain.
The same kind of visuomotor “modularity” exists in mammals as well. Evidence for this can be seen even in the anatomy of the visual system. …[The neurons] in the retina send information (via the optic nerve) directly to a number of different sites in the brain. Each of these brain structures in turn gives rise to a distinctive set of outgoing connections. The existence of these separate input–output lines in the mammalian brain suggests that they may each be responsible for controlling a different kind of behavior — in much the same way as they are in the frog. The mammalian brain is more complex than that of the frog, but the same principles of modularity still seem to apply. In rats and gerbils, for example, orientation movements of the head and eyes toward morsels of food are governed by brain circuits that are quite separate from those dealing with obstacles that need to be avoided while the animal is running around. In fact, each of these brain circuits in the mammal shares a common ancestor with the circuits we have already mentioned in frogs and toads. For example, the circuit controlling orientation movements of the head and eyes in rats and gerbils involves the optic tectum (or “superior colliculus” as it is called in mammals), the same structure in the frog that controls turning and snapping the tongue at flies.
The fact that each part of the animal’s behavioral repertoire has its own separate visual control system refutes the common assumption that all behavior is controlled by a single, general-purpose representation of the visual world. Instead, it seems, vision evolved, not as a single system that allowed organisms to “see” the world, but as an expanding collection of relatively independent visuomotor modules.
According to G&M, at least, “vision for action” systems seem to be primary in most animals, while “vision for perception” systems are either absent entirely or much less developed than what we observe in primates:300
…vision in vertebrates evolved in response to the demands of motor output, not for perceptual experience. Even with the evolution of the cerebral cortex this remained true, and in mammals such as rodents the major emphasis of cortical visual processing still appears to be on the control of navigation, prey catching, obstacle avoidance, and predator detection [Dean (1990)]. It is probably not until the evolution of the primates, at a late stage of phylogenetic history, that we see the arrival on the scene of fully developed mechanisms for perceptual representation. The transformations of visual input required for perception would often be quite different from those required for the control of action. They evolved, we assume, as mediators between identifiable visual patterns and flexible responses to those patterns based on higher cognitive processing.
6.3.2 Two vision systems in primates
Given the evidence for multiple, largely independent vision systems in simpler animals, it should be no surprise that primates, too, have multiple, largely independent vision systems.
The direct evidence for two (mostly) functionally and anatomically distinct vision systems in the primate brain — one serving “vision for action” and the other serving “vision for perception” — comes from several sources, including:
- Lesion studies in humans and monkeys.
- Dissociation studies in healthy humans and monkeys.
- Single-neuron recordings, mostly in monkeys.
- Brain imaging studies.
- Studies which induce “temporary lesions” via transcranial magnetic stimulation (TMS).
Below, I summarize some of this evidence.
6.3.3 Visual form agnosia in Dee Fletcher
Let’s start with G&M’s most famous lesion patient, Dee Fletcher.301 In February 1988, Dee collapsed into a coma as a result of carbon monoxide poisoning caused by an improperly vented water heater in her home. Fortunately, her partner Carlo soon arrived home and rushed her to the hospital.
After a few days of recovery, it became clear that Dee’s vision was impaired. She could see colors and surface textures (e.g. the tiny hairs on someone’s hand), but she couldn’t recognize shapes, objects, or people unless (1) she could identify them via another sense (e.g. hearing someone’s voice, or touching a hand), or (2) she could guess the object or person’s identity from color and surface texture information alone, for example if a close friend visited her while wearing a distinctively blue sweater.
This was confirmed in formal testing. For example, she performed just as well “as a normally-sighted person in detecting a circular ‘Gabor’ patch of closely spaced fine lines on a background that had the same average brightness” (see right), but she had no idea whether the lines were horizontal or vertical. Hence, it wasn’t that her vision was a blur. She could see detail. She just couldn’t see edges and outlines that would allow her to identify shapes, objects, and people.
When G&M showed Dee a flashlight made of shiny metal and red plastic, she said: “It’s made of aluminium. It’s got red plastic on it. Is it some sort of kitchen utensil?” Given that she could see only the object’s surface colors and textures, not its shape, this was a sensible guess, since many kitchen tools are made of metal and plastic. As soon as G&M placed the flashlight in her hand, she immediately recognized it as a flashlight.303
Dee often had trouble separating an object from the background. According to her, objects seemed to “run into each other,” such that “two adjacent objects of similar color, such as a knife and fork, will often look to her like a single entity.”
G&M showed Dee shapes whose edges were defined in four different ways: by color contrast, by differences in luminance, by differences in texture, and by way of some dots remaining still while others moved (see left). In none of these cases was she able to reliably detect objects or shapes, though she could report the colors accurately.
G&M also tested Dee on “Efron shapes,” a series of rectangles that differ in shape but not in total surface area. For each round of the test, Dee was shown a pair of these shapes and asked to say whether they were the same or different. G&M-13 reports:
When we used any of the three rectangles that were most similar to the square, she performed at chance level. She sometimes even made mistakes when we used the most elongated rectangle, despite taking a long time to decide. Under each rectangle [in the image below] is the number of correct judgments (out of 20) that Dee made in a test run with that particular rectangle.
Dee’s problem is not that she struggles to verbally name shapes or objects, nor is it a deficit in remembering what common objects look like. G&M-13 reports:
Dee has great difficulties in copying drawings of common objects or geometric shapes [see image below]. Some brain-damaged patients who are unable to identify pictures of objects can still slavishly copy what they see, line by line, and produce something recognizable. But Dee can’t even pick out the individual edges and contours that make up a picture in order to copy them. Presumably, unlike those other patients, Dee’s problem is not one of interpreting a picture that she sees clearly — her problem is that she can’t see the shapes in the picture to start with.
Dee couldn’t recognize any of the drawings in the left-most column above. When she tried to copy those objects (middle column), she could incorporate some elements of the drawing (such as the small dots representing text), but her overall copies were unrecognizable. However, when asked to draw objects from memories she formed before her accident (right-most column), she did just fine, except that when she lifted her pencil, she sometimes put it back down in the wrong place (presumably due to her inability to see the shapes and edges even as she was drawing them). When she was later shown the objects she had drawn from memory, she couldn’t identify them.
Dee’s ability to draw objects from memory suggests that she can see things “in her mind’s eye” just fine. So do her correct responses to queries like this: “Think of the capital letter D; now imagine that it has been rotated flat-side down; now put it on top of the capital letter V; what does it look like?” Most people say “an ice cream cone,” and so does Dee.
Dee also still dreams normally:
[Dee] still sometimes reports experiencing a full visual world in her dreams, as rich in people, objects, and scenes as her dreams used to be before the accident. Waking up from dreams like this, especially in the early years, was a depressing experience for her. Remembering her dream as she gazed around [her now edgeless, shapeless, object-less] bedroom, she was cruelly reminded of the visual world she had lost.
However, despite her severe deficits in identifying shapes, objects, and people, Dee displayed a nearly normal ability to walk around in her environment and use her hands to pick things up and interact with them. G&M report the moment they realized just how striking the difference was between Dee’s ability to recognize objects and her ability to interact with them:307
[In the summer of 1988] we were showing [Dee] various everyday objects to see whether she could recognize them, without allowing her to feel what they were. When we held up a pencil, we were not surprised that she couldn’t tell us what it was, even though she could tell us it was yellow. In fact, she had no idea whether we were holding it horizontally or vertically. But then something quite extraordinary happened. Before we knew it, Dee had reached out and taken the pencil, presumably to examine it more closely… After a few moments, it dawned on us what an amazing event we had just witnessed. By performing this simple everyday act she had revealed a side to her vision which, until that moment, we had never suspected was there. Dee’s movements had been quick and perfectly coordinated, showing none of the clumsiness or fumbling that one might have expected in someone whose vision was as poor as hers. To have grasped the pencil in this skillful way, she must have turned her wrist “in flight” so that her fingers and thumb were well positioned in readiness for grasping the pencil — just like a fully sighted person. Yet it was no fluke: when we took the pencil back and asked her to do it again, she always grabbed it perfectly, no matter whether we held the pencil horizontally, vertically, or obliquely.
How could Dee do this? She had to be using vision; a blind person couldn’t have grabbed the pencil so effortlessly. But she couldn’t have been relying on her conscious visual experience, either, as that experience contained no information about the pencil’s orientation or exact shape.
G&M soon put this difference to a more formal test. They built a simple mailbox-like slot that could be rotated to any angle (while Dee closed her eyes), and then they gave Dee a thin card to “post” into the slot. When asked to “post” the card, she had no difficulty. However, when she was asked to merely turn the card so that it matched the orientation of the slot, without reaching toward the slot, she performed no better than chance.309 She couldn’t consciously see the orientation of the slot, but nevertheless when posting the card into the slot, she had no trouble rotating the card properly so that it went into the slot. The diagrams on the right310 show Dee’s performance relative to healthy control subjects, with the “correct” orientation always shown as vertical even though the slot was rotated to many different orientations. Video showed that when posting the card, Dee rotated it well before reaching the slot — clearly, a visually-guided behavior, even if it wasn’t guided by conscious vision.
G&M also tested Dee’s grasping movements. When a normal patient is asked to reach out and grab an object on a table, they open their fingers and thumb as soon as their hand leaves the table. About 75% of the way to the object, the gulf between fingers and thumb is as wide as it gets — the “maximum grip aperture” (MGA). Thereafter, they begin to close their fingers and thumb so that a good grasp is achieved (see right). The MGA is always larger than the width of the target object, but the two are related: the bigger the object, the bigger the MGA.
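To make this grip-scaling pattern concrete, here is a minimal sketch in Python (my own toy model with made-up numbers, not data or code from G&M’s studies) of how grip aperture might unfold over a reach, peaking at about 75% of the way to the object and scaling with object width:

```python
def grip_aperture(t: float, object_width_mm: float, safety_margin_mm: float = 20.0) -> float:
    """Hand opening (mm) at normalized reach time t in [0, 1]. Toy numbers only."""
    mga = object_width_mm + safety_margin_mm      # MGA scales with object width
    t_peak = 0.75                                 # aperture peaks ~75% of the way through the reach
    if t <= t_peak:
        # open from a small resting aperture up to the maximum grip aperture
        return 10.0 + (mga - 10.0) * (t / t_peak)
    # then close down onto the object itself
    return mga - (mga - object_width_mm) * ((t - t_peak) / (1.0 - t_peak))

for width in (20.0, 40.0, 60.0):                  # bigger object -> bigger MGA
    peak = max(grip_aperture(t / 100.0, width) for t in range(101))
    print(f"object width {width} mm -> maximum grip aperture {peak:.0f} mm")
```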
G&M tested Dee’s grasping behavior using some 3D wooden blocks they called “Efron blocks,” because they were modeled after the Efron shapes (again, with the same overall size but different dimensions). As expected, her grasping motions showed the same mid-flight grip scaling as those of healthy controls, and she grasped the Efron blocks just as smoothly as anyone else. She performed just fine regardless of the orientation of the Efron blocks, and she effortlessly rotated her wrist to grasp them width-wise rather than length-wise (just like healthy subjects).312 She did this despite the fact that she performed very poorly when asked to distinguish the blocks when they were presented as pairs, and despite the fact that she could not show G&M how wide each block was by looking at it and then using her fingers and thumb to indicate its width. When asked to estimate, with her thumb and forefinger, the width of a familiar object stored in her memory, such as a golf ball, she did fine.
G&M also tested Dee on “Blake shapes,” a set of pebble-like objects that are smooth and rounded but irregular in shape, and thus are stably grasped at some points but not others. Again, Dee could reach out and grasp these objects just as well as healthy controls, even though she was unable to say whether pairs of the Blake shapes were the same or different.
G&M also tested Dee’s ability to navigate obstacles. They visited a laboratory in which obstacles of various heights could be placed along a path, and sophisticated equipment could precisely measure the adjustments people made to their gait to step over the obstacles. Once again, Dee performed just like healthy subjects, stepping confidently over the obstacles without tripping, just barely clearing them (again, like healthy subjects). However, when asked to estimate the height of these obstacles, she performed terribly.
In short, as G&M-13 puts it:
The most amazing thing about Dee is that she is able to use visual properties of objects such as their orientation, size, and shape, to guide a range of skilled actions — despite having no conscious awareness of those same visual properties. This… indicates that some parts of the brain (which we have good reason to believe are badly damaged in Dee) play a critical role in giving us visual awareness of the world while other parts (relatively undamaged in her) are more concerned with the immediate visual control of skilled actions.
Dee’s condition is now known as “visual form agnosia” (an inability to see “forms” or shapes), and a few other cases besides Dee’s have been reported.313
6.3.4 Optic ataxia
The case of Dee Fletcher raises the question: are there patients with the “opposite” condition, such that they can recognize shapes, objects, and people just fine, but have difficulty with visually-guided behavior, such as when grasping and manipulating objects?
Indeed there are:314
The Hungarian neurologist Rudolph Bálint was the first to document a patient with this kind of problem, in 1909. The patient was a middle-aged man who suffered a massive stroke to both sides of the brain in a region called the parietal lobe… He could recognize objects and people, and could read a newspaper. He did tend to ignore objects on his left side and had some difficulty moving his eyes from one object to another. But his big problem was not a failure to recognize objects, but rather an inability to reach out and pick them up. Instead of reaching directly toward an object, he would grope in its general direction much like a blind man, often missing it by a few inches. Unlike a blind man, however, he could see the object perfectly well — he just couldn’t guide his hand toward it. Bálint coined the term “optic ataxia”… to refer to this problem in visually guided reaching.
Bálint’s first thought was that this difficulty in reaching toward objects might be due to a general failure in his patient to locate where the objects were in his field of vision. But it turned out that the patient showed the problem only when he used his right hand. When he used his left hand to reach for the same object, his reaches were pretty accurate. This means that there could not have been a generalized problem in seeing where something was. The patient’s visual processing of spatial location per se was not impaired. After further testing, Bálint discovered that the man’s reaching difficulty was not a purely motor problem either — some kind of generalized difficulty in moving his right arm correctly. He deduced this from asking the patient to point to different parts of his own body using his right hand with his eyes closed: there was no problem.
…It was not until the 1980s that research on patients with optic ataxia was kick-started again, mostly by Marc Jeannerod and his group in Lyon, France. In one landmark study, his colleagues Marie-Thérèse Perenin and Alain Vighetto made detailed video recordings of a sizeable group of patients with optic ataxia performing a number of different visuomotor tests… Like Bálint, they observed that although their patients couldn’t accurately point to the targets, they were able to give pretty accurate verbal reports of where those same objects were located. Also like Bálint, Perenin and Vighetto demonstrated that the patients had no difficulty in directing hand movements toward different parts of their own body. Subsequent work in their laboratory went on to show that the reaching and pointing errors made by many patients with optic ataxia are most severe when they are not looking directly at the target. But even when pointing at a target in the center of the visual field, the patients still make bigger errors than normal people do, albeit now on the order of millimeters rather than centimeters. In short, Perenin and Vighetto’s research confirms Bálint’s original conclusion: optic ataxia is a deficit in visually guided reaching, not a general deficit in spatial vision.
Patients with optic ataxia also have difficulty avoiding collisions with obstacles as they reach for an object. For example, neuroscientist Robert McIntosh designed a test in which subjects are asked to reach from a fixed starting point to a strip 25 cm away, between two vertical rubber cylinders. The location of the cylinders is varied, and healthy control subjects always vary their reach trajectory so as to stay well clear of the rubber cylinders. In contrast, optic ataxia patients do not vary their reach trajectory in response to where the rubber cylinders are located, and thus often come close to knocking over the cylinders as they reach for the strip at the back of the table.
However, the failure of patients with optic ataxia to adjust their reach trajectory in response to the location of the cylinders is not due to a failure to (consciously) see where the cylinders are. When asked to point to the midpoint between the two cylinders, patients with optic ataxia are just as accurate as healthy controls.
G&M were also able to run this test on a patient whose optic ataxia affects only one hand. Morris Harvey has damage in his left parietal lobe, which means that his optic ataxia affects only his right hand, and only when reaching toward objects in his right visual field. How did Morris perform at the cylinders task? When reaching with his left hand, his reach trajectory was the same as healthy subjects’, adjusted to maximally avoid the cylinders. But when reaching with his right hand, he studiously avoided the cylinder on the left, but took no account of the cylinder on the right.
(In contrast to those with optic ataxia, Dee Fletcher avoided the cylinders as normal when reaching out to the strip at the back of the table, but she performed poorly when asked to point to the midpoint between the two cylinders.)
Some optic ataxia patients also have trouble changing their reach trajectory mid-flight:
Our French colleagues Laure Pisella and Yves Rossetti had [optic ataxia patient] Irène make a series of reaches to touch a small LED target. From time to time, however, the target would unpredictably shift leftwards or rightwards at the very instant Irène’s hand started to move toward it. Healthy volunteers doing this task had no problem in making the necessary in-flight corrections to their reach, and in fact they adjusted their reaches seamlessly as if their movements were on “automatic pilot,” particularly when under time pressure to move quickly. Yet Irène found these changes in target location frustratingly impossible to deal with. It was as if she no longer had that automatic pilot. To put it another way, Irène’s reaches seemed to be entirely predetermined at the outset of the movement, and remained impervious to unexpected changes in the position of the target, even though she could see them clearly enough and knew they might occur. On occasions when the target moved, she found herself reaching first to its original location, and only then shifting her finger to the new location.
How do patients with optic ataxia perform on the mail slot task described in the previous section? Just as you’d expect:
…[Perenin and Vighetto] examined the ability of their [optic ataxia] patients to reach out and pass their hand through an open slot cut in a disk, which could be positioned at different orientations at random… Remarkably, not only did the patients tend to make the expected spatial errors, in which their hand missed the slot altogether, but they also made orientation errors, in which the hand would approach the slot at the wrong angle. Yet most of these same patients could easily tell one orientation of the slot from another when asked to do so. So again we see a familiar story unfolding. The failure of the patients to rotate their hand as they reached out to pass it through a slot was not due to a difficulty in perceiving the orientation of the slot — the problem was visuomotor in nature, not perceptual. (Of course when their hand made contact with the disk they could correct themselves using touch, and then pass their hand through the slot. In other words the deficit was restricted to the modality of sight, and did not extend to touch.)
What about the measures of grasping movements described in the previous section? Again, patients with optic ataxia perform just as you’d expect:
Instead of first opening the hand during the early part of the reach, and then gradually closing it as it moved toward the target object, the optic ataxia patient would keep the hand widely opened throughout the movement, much as a person would do if reaching blindfolded toward the object… Jeannerod and his colleagues were the first to carry out systematic tests with Anne Thiérry, the optic ataxia patient we described earlier in this chapter. They used similar matching and grasping tasks to those we had used earlier with Dee… Anne was found to show poor scaling of her grip when reaching for objects of different sizes, while remaining well able to demonstrate the sizes of the objects by use of her forefinger and thumb. Again, the pattern of deficits and spared abilities in Anne and the pattern in Dee complement each other perfectly.
Next, what about Blake shapes? Again, the optic ataxia patient’s performance seems to be the mirror image of Dee Fletcher’s:
Although [Ruth Vickers’] symptoms had cleared to some degree by the time we saw her, it was obvious that she still had severe optic ataxia. She could not reach with any degree of accuracy to objects that she could see but was not looking at directly. She could, however, reach reasonably accurately to objects directly in her line of sight.
Nevertheless, the reaches Ruth made to pick up an object she was looking at, although spatially accurate, were far from normal. Like Anne Thiérry, she would open her hand wide as she reached out, no matter how big or small the objects were, showing none of the grip scaling seen in healthy people… Yet despite this, when asked to show us how big she thought the object was using her finger and thumb, she performed quite creditably, again just like Anne. And she could describe most of the objects and pictures we showed her without any difficulty. In fact, although her strokes had left her unable to control a pencil or pen very well, she could draw quite recognizable copies of pictures she was shown… In other words, Ruth’s visual experience of the world seemed pretty intact, and she could readily convey to us what she saw — in complete contrast to Dee Fletcher.
Because Ruth could distinguish between many different shapes and patterns, we did not expect her to have much difficulty with the smooth pebble-like shapes we had tested Dee with earlier. We were right — when she was presented with a pair of “Blake shapes” she could generally tell us whether or not the two shapes were the same. Although she sometimes made mistakes, particularly when two identical shapes were presented in different orientations, her performance was much better than Dee’s. When it came to picking up the shapes, however, the opposite was the case. Ruth had real problems. Instead of gripping the Blake shapes at stable “grasp points,” she positioned her finger and thumb almost at random… This inevitably meant that after her fingers contacted the pebble she had to correct her grip by means of touch — if she did not, the pebble would often slip from her grasp. In other words, although some part of Ruth’s brain could code the shape of these objects to inform her visual experience, her hand was unable to use such shape information to guide its actions.
6.3.5 Lesions in monkeys
These lesion studies in humans provide suggestive evidence for two different streams of visual processing, one of which (the “vision for action” system) seems to be unconscious. Now we turn to the evidence from lesion studies in monkeys, which, I was surprised to learn, goes back to the 1860s:315
During the 1860s, [neurologist David Ferrier] removed what we now call the dorsal stream in a monkey, and discovered that the animal would misreach and fumble for food items set out in front of it. In a similar vein, recent work by Mitchell Glickstein in England has shown that small lesions in the dorsal stream can make a monkey unable to pry food morsels out of narrow slots set at different orientations. The monkey is far from blind, but it cannot use vision to insert its finger and thumb at the right angle to get the food. It eventually does it by touch, but its initial efforts, under visual guidance, fail. Yet the same monkey has no difficulty in telling apart different visual patterns, including lines of different orientation. These observations, and a host of others, have demonstrated that dorsal-stream damage in the monkey results in very similar patterns of disabilities and spared abilities to those we saw in [patients with optic ataxia]. In other words, monkeys with dorsal-stream lesions show major problems in vision for action but evidently not in vision for perception.
In direct contrast, Heinrich Klüver and Paul Bucy, working at the University of Chicago in the 1930s, found that monkeys with lesions of the temporal lobes, including most of what we now know as the ventral stream, did not have any visuomotor problems at all, but did have difficulties in recognizing familiar objects, and in learning to distinguish between new ones. Klüver and Bucy referred to these problems as symptoms of “visual agnosia,” and indeed they do look very like the problems that Dee Fletcher has. Moreover, like Dee, these monkeys with ventral-stream lesions had no problem using their vision to pick up small objects. The influential neuroscientist, Karl Pribram, once noted that monkeys with ventral-stream lesions that had been trained for months to no avail to distinguish between simple visual patterns, would sit in their cages snatching flies out of the air with great dexterity. Mitchell Glickstein recently confirmed that such monkeys do indeed retain excellent visuomotor skills. He found that monkeys with ventral-stream damage had no problem at all using their finger and thumb to retrieve food items embedded in narrow slots — quite unlike his monkeys with dorsal-stream lesions.
Such studies in monkeys are widely thought to be informative for our understanding of human neuroscience, given the many similarities between human brains and monkey brains.
6.3.6 Dissociation studies in healthy subjects
G&M hypothesize that the dorsal and ventral streams of visual processing use different frames of reference, in part due to computational constraints:316
When we perceive the size, location, orientation, and geometry of an object, we implicitly do so in relation to other objects in the scene we are looking at. In contrast, when we reach out to grab that same object, our brain needs to focus on the object itself and its relationship to us — most particularly, to our hand — without taking account of the… scene in which the object is embedded. To put it a different way, perception uses a scene-based frame of reference while the visual control of action uses egocentric frames of reference.
…The use of scene-based metrics means that the brain can construct this representation in great detail without having to compute the absolute size, distance, and geometry of each object in the scene. To register the absolute metrics of the entire scene would in fact be computationally impossible, given the rapidity with which the pattern of light changes on our retina. It is far more economical for perception to compute just the relational metrics of the scene, and even these computations do not generally need to be precise. It is this reliance on scene-based frames of reference that lets us watch the same scene unfold on a small television or on a gigantic movie screen without being confused by the differences in scale.
…But… scene-based metrics are the very opposite of what you need when you act upon the world. It is not enough to know that an object you wish to pick up is bigger or closer than a neighboring object. To program your reach and scale your grasp, your brain needs to compute the size and distance of the object in relation to your hand. It needs to use absolute metrics set within an egocentric frame of reference. It would be a nuisance, and potentially disastrous, if the illusions of size or distance that are a normal part of [scene and object] perception were to intrude into the visual control of your movements.
If this account is right, it suggests a way to test for dorsal-ventral dissociation even in healthy subjects, since object and scene recognition should be subject to certain kinds of visual illusions that visually-guided action is not.
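As a toy illustration of the computational contrast G&M describe (my own sketch with hypothetical numbers, not code from their work), a scene-relative “perceptual” judgment needs only comparisons within the scene, while programming a grasp needs absolute, hand-centered quantities:

```python
# Toy illustration with hypothetical numbers: the same target, represented two ways.

target_width_mm = 40.0            # true, absolute width of the target object
neighbor_width_mm = 60.0          # a nearby "companion" object in the scene
hand_position_mm = (0.0, 0.0)     # egocentric origin: the observer's hand
target_position_mm = (250.0, 120.0)

# "Vision for perception": scene-relative judgment. It only needs to know how the
# target compares with its neighbors, not its absolute size or distance.
looks_smaller_than_neighbor = target_width_mm < neighbor_width_mm

# "Vision for action": egocentric, absolute metrics. Programming a reach and grasp
# requires the target's actual width and its distance from the hand.
reach_distance_mm = ((target_position_mm[0] - hand_position_mm[0]) ** 2 +
                     (target_position_mm[1] - hand_position_mm[1]) ** 2) ** 0.5
required_grip_aperture_mm = target_width_mm + 20.0   # width plus a safety margin

print(looks_smaller_than_neighbor, round(reach_distance_mm), required_grip_aperture_mm)
```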
One way to test for this dissociation is to use virtual reality displays. In one study, Hu & Goodale (2000) used a virtual reality display to show healthy subjects a series of 3D images of target blocks (marked with a red spot), each of which was displayed along with another “virtual” block that was either 10% wider or narrower than the target block. These blocks were shown for a half second or less, and then the subject was asked to either (1) reach out and grab the target block using their thumb and index finger, or to (2) indicate the size of the target block using their thumb and index finger, but not reach out toward it. To ensure a “real” (not pantomimed) grasping motion, a physical but unseen block was placed exactly where the virtual target block appeared to be (see right).
The point of having two (virtual) blocks of slightly different sizes was to induce a “size-contrast effect,” akin to the effect observed when a person you normally think of as tall stands next to a professional basketball player and suddenly seems shorter than usual. Hu & Goodale’s expectation was that this size-contrast effect would affect the subject’s (ventral) perception of the target block, and thus their attempt to indicate its size with their thumb and index finger (without reaching for it), but would not affect their (dorsally-guided) attempt to grasp the target block.
And this is just what happened. When the target block was paired with a larger companion block, subjects consistently judged it to be smaller than when the same target block was paired with a smaller companion block. But when subjects reached out to grasp the target block, they opened their thumb and index finger to an identical degree no matter which companion block appeared. In other words, the size-contrast effect affected the subjects’ perception of the target block, but didn’t affect their physical interaction with the target block.
Another proposed difference between the dorsal and ventral streams is that the dorsal system should operate only in real-time, whereas the ventral stream interacts with short- and long-term memory to help guide decision-making and action over a longer period of time. Consistent with this hypothesis, the subjects’ grip size calibration was affected by the size-contrast illusion when a delay was inserted between viewing the (virtual) blocks and reaching toward the target block:
When the students [subjects] had to wait for five seconds before picking up the target object that they had just seen, the scaling of their grasp now fell prey to the influence of the companion block. Just as they did when they made perceptual judgments, they opened their hand wider when the target block was accompanied by a small block than when it was accompanied by a large block. This intrusion of the size-contrast effect into grip scaling after a delay is exactly what we had predicted. Since the dedicated visuomotor systems in the dorsal stream operate only in real time, the introduction of a delay disrupts their function. Therefore when a delay is introduced, the calibration of the grasp has to depend on a memory derived from perceptual processing in the ventral stream, and becomes subject to the same size-contrast illusions that perception is prone to.
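The predicted pattern across these conditions can be summarized schematically (my own encoding of the description above, not the study’s data):

```python
# Schematic summary of the pattern described above: does the size-contrast effect
# intrude on each kind of response?
size_contrast_intrudes = {
    ("manual size estimate", "no delay"): True,    # perception-based, so biased by the companion block
    ("grasp aperture",       "no delay"): False,   # real-time dorsal control, so immune
    ("manual size estimate", "5 s delay"): True,
    ("grasp aperture",       "5 s delay"): True,   # delay forces reliance on ventral/perceptual memory
}

for (response, timing), biased in size_contrast_intrudes.items():
    print(f"{response:>20} | {timing:<9} | biased by companion block: {biased}")
```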
In another experiment, Haffenden & Goodale (1998) tested for a ventral-dorsal dissociation using the well-known Ebbinghaus illusion, which also makes use of size-contrast effects. In one version, two physically identical circles are perceived as being of different size (top right). In another version, two physically different circles are perceived as being of identical size (bottom right).
To test for an action-perception dissociation, Haffenden & Goodale placed some flat disks atop the classic Ebbinghaus backgrounds, and then asked subjects to either “match” (indicate the size of) or “grasp” (reach out to grab) the target disk. As expected, subjects’ “match” attempts were affected by the visual illusion, but their “grasp” attempts were not.
Several experiments with other visual illusions have demonstrated similar results. Furthermore, just as G&M’s theory predicts, both perception and visuomotor control are affected if the illusion used is one that results from early visual processing (before the dorsal-ventral split).319
6.3.7 Single-neuron recordings
Further evidence for the “two streams” hypothesis comes from single-neuron recordings in monkeys:320
The 1981 Nobel laureates David Hubel and Torsten Wiesel… found that neurons in primary visual cortex [V1] would [fire] every time a visual edge or line was shown to the eye, so long as it was shown at the right orientation and in the right location within the field of view. The small area of the retina where a visual stimulus can activate a given neuron is called the neuron’s “receptive field.” Hubel and Wiesel discovered, in other words, that these neurons are “encoding” the orientation and position of particular edges that make up a visual scene out there in the world. Different neurons prefer (or are “tuned” to) different orientations of edges… Other neurons are tuned for the colors of objects, and still others code the direction in which an object is moving…
…The 1960s and early 1970s heralded great advances in single-cell recording as investigators pushed well beyond the early visual areas, out into the dorsal and ventral streams. It soon became apparent that neurons in the two streams coded the visual world very differently…
To be more specific, while still oversimplifying: neurons in the ventral stream tend to respond to fairly complex visual patterns (e.g. entire objects, or even specific faces), but many of them don’t “care” much about details such as the angle of the object, its lighting conditions, or how far the object is from the eye: just the sort of behavior you’d expect from neurons in a pathway that specializes in perceiving objects. In contrast, neurons in the dorsal stream typically code for more action-specific features, for example the motion of objects or small differences in their orientation, and they often fire only when the monkey acts on a visual target, for example by reaching out to it or tracking its motion with its eyes.321
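As a crude caricature of this coding difference (my own toy sketch, not a model of real neurons), one can contrast a unit that signals object identity regardless of viewing conditions with a unit that is tuned to a precise orientation and fires only during action toward the target:

```python
# Toy caricature of the coding difference described above (not a model of real neurons).

def ventral_unit(stimulus: dict) -> bool:
    """Fires for a particular object identity, largely regardless of viewpoint or distance."""
    return stimulus["identity"] == "face_A"

def dorsal_unit(stimulus: dict, acting_on_target: bool) -> bool:
    """Fires for a narrow band of orientations, and only while acting on the target."""
    return acting_on_target and abs(stimulus["orientation_deg"] - 45.0) < 10.0

stim = {"identity": "face_A", "orientation_deg": 47.0, "distance_m": 2.0}
print(ventral_unit(stim))                          # True, whether or not the target is acted on
print(dorsal_unit(stim, acting_on_target=False))   # False: no action underway
print(dorsal_unit(stim, acting_on_target=True))    # True: right orientation, action underway
```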
What about single neuron recordings in humans? Such studies are still rare for ethical reasons,322 but their findings are illuminating. For example, some cells just downstream of the inferotemporal cortex (in the ventral stream), in the medial temporal lobe (MTL), have been found to respond only to specific faces. For example, in one patient, a specific neuron responded most strongly to pictures (from any angle) of either Jennifer Aniston or Lisa Kudrow, both actresses on Friends. Another cell responded to any picture of actress Halle Berry, even when she was masked as Catwoman (a character she played), and also to the written words “HALLE BERRY,” but not to pictures of other people, or to other written names.323 Unfortunately, I don’t know whether single-neuron recordings have been made in the human dorsal stream.
6.3.8 Challenges
There is additional evidence for this “two streams” account of visual processing, for example from fMRI studies,324 but I won’t describe that evidence here (see G&M-13). Instead, I’d like to briefly mention some challenges for the two streams theory:325
- Many of the relevant primary studies can be interpreted to support alternate hypotheses.326
- Several studies suggest that the division of labor between the dorsal and ventral streams is not clear-cut: e.g. some neurons in the dorsal stream seem to subserve object recognition, and some neurons in the ventral stream seem to subserve visually-guided motor control.327
- Personally, I would not be surprised if some of the neuroimaging studies used to argue in favor of G&M’s view could be undermined by a careful examination of interpretive complications328 and statistical errors329 — though this worry is not unique to imaging studies of conscious and unconscious vision (see Appendix Z.8).
Considering all the evidence I’ve studied or skimmed, my impression is that something like G&M’s “two streams” account of visual processing has a good chance of being true (with many complications), but also has a good chance of being quite mistaken.
If something like the “two streams” account is right, then it could provide some evidence in favor of certain kinds of “cortex-required views” about consciousness, especially if we observe other kinds of cognitive processing having both conscious and unconscious components, with the conscious components being computed by the same broad regions of the brain that compute conscious vision.
6.4 Appendix D. Some clarifications on nociception and pain
In this appendix, I clarify how I use nociception-related and pain-related terms in this report, and provide my sources for the judgments I made in two rows of my table of PCIFs: those for “Has nociceptors” and “Has neural nociceptors.”
As I use the terms, nociception is the encoding and processing of noxious stimuli, where a noxious stimulus is an actually or potentially body-damaging event (either external or internal, e.g. cutaneous or visceral). A body-damaging event can be chemical (e.g. a strong acid), mechanical (e.g. pinching), or thermal (e.g. excessive heat). A sensory receptor that responds only or preferentially to noxious stimuli is a nociceptor. Not all noxious stimuli are successfully detected by nociceptors, e.g. when no nociceptor is present at the site where a noxious stimulus contacts the body. Those noxious stimuli that are detected are called nociceptive stimuli.
These definitions are identical to those of the International Association for the Study of Pain (IASP) — see Loeser & Treede (2008) — except that I have dropped the word “neural,” and replaced the phrase “tissue-damaging” with “body-damaging.” I made both these modifications because I want to use nociception-related terms in the context of a wide variety of cognitive systems, including e.g. personal computers and robots, which in some cases have nociception-specific sensory receptors but not “neurons” in the usual sense of that word, and which are more typically said to have “bodies” than “tissues.” Also, in accordance with many other definitions (e.g. Ringkamp et al. 2013), I have clarified that nociceptors can in some cases respond to both noxious and non-noxious stimuli, but must respond preferentially to noxious stimuli to be counted as “nociceptors.”
Some nociceptors respond to only one kind of noxious stimulus, while other (“polymodal”) nociceptors respond to multiple kinds of noxious stimuli. Some polymodal nociceptors are dedicated to noxious stimuli only, whereas other polymodal nociceptors — called wide dynamic range (WDR) neurons — respond to both noxious and non-noxious stimuli. (See e.g. Derbyshire 2014; Gebhart & Schmidt 2013, p. 4266; Walters 1996, p. 97.)
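To keep these distinctions straight, here is a purely illustrative Python sketch (my own, not a biological model) encoding the taxonomy just described: unimodal vs. polymodal nociceptors, and WDR-style receptors that also respond to non-noxious stimuli:

```python
from dataclasses import dataclass

# Purely illustrative encoding of the terminology above; not a biological model.

@dataclass
class Stimulus:
    kind: str        # "chemical", "mechanical", or "thermal"
    noxious: bool    # actually or potentially body-damaging?

@dataclass
class Receptor:
    detected_kinds: set            # which stimulus kinds the receptor detects
    fires_for_non_noxious: bool    # WDR-style receptors also respond to innocuous stimuli

    def responds(self, s: Stimulus) -> bool:
        if s.kind not in self.detected_kinds:
            return False
        return s.noxious or self.fires_for_non_noxious

thermal_nociceptor = Receptor({"thermal"}, fires_for_non_noxious=False)        # unimodal
polymodal_nociceptor = Receptor({"thermal", "mechanical", "chemical"}, fires_for_non_noxious=False)
wdr_receptor = Receptor({"thermal", "mechanical"}, fires_for_non_noxious=True)  # WDR-like

pinch = Stimulus("mechanical", noxious=True)
light_touch = Stimulus("mechanical", noxious=False)
print(polymodal_nociceptor.responds(pinch), polymodal_nociceptor.responds(light_touch))  # True False
print(wdr_receptor.responds(light_touch))                                                # True
```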
Pain, in contrast to mere nociception, is an unpleasant conscious experience associated with actual or potential body damage (or akin to unpleasant experiences associated with noxious stimuli). The IASP’s definition of pain (Loeser & Treede 2008) is “an unpleasant sensory and emotional experience associated with actual or potential tissue damage or described in terms of such damage.” I have dropped the phrase about description because I want to talk about pain in cognitive systems that may or may not be able to describe their pain to others. Following Price (1999), ch. 1, I have also added the parenthetical phrase “or akin to such experiences” so as to capture pain that “feels like” the unpleasant experiences normally associated (in humans, at least) with actual or potential body damage, even if those experiences are not in fact associated with such actual or potential damage, as with some cases of neuropathic pain, and perhaps also as with some cases of psychologically-created experiences of pain, e.g. when a subject is hallucinating or dreaming a painful experience. But I keep my definition simpler than that of e.g. Sneddon (2009), which adds that animals in pain should “quickly learn to avoid the noxious stimulus and demonstrate sustained changes in behaviour that have protective function to reduce further injury and pain, prevent the injury from recurring, and promote healing and recovery.” Whether such phenomena are indicative of pain or just nociception is an empirical question, and I do not wish to burden my definition of pain with such assumptions.
Nociception can occur without pain, and pain can occur without nociception. Loeser & Treede (2008) provide examples: “after local anesthesia of the mandibular nerve for dental procedures, there is peripheral nociception without pain, whereas in a patient with thalamic pain [a kind of neuropathic pain resulting from stroke], there is pain without peripheral nociception.” Rose et al. (2014) provide another example of nociception without pain: “carpal tunnel surgery is sometimes performed in awake patients following axillary local anesthetic injection, which blocks conduction in axons passing from receptors in the hand and arm to the spinal cord. Consequently, the patient can watch the surgery but feel nothing, in spite of intense nociceptor activation.” See p. 127 of Le Neindre et al. (2017) for a table summarizing several types of nociception-related processing that (in humans) are varyingly conscious or unconscious. (But, my usual caveats about the possibility of hidden qualia apply.)
The ability to detect and react to noxious stimuli is a basic adaptive capability, and thus nociceptors are found in humans (Purves et al. 2011) and many other species (Sneddon et al. 2014; Smith & Lewin 2009), including fruit flies (Im & Galko 2012), nematode worms (Wittenberg & Baumeister 1999), and bacteria (Paoni et al. 1981). Not all of these nociceptors are neural nociceptors, however.
Different species have evolved different nociceptors, presumably because they are exposed to different noxious stimuli, and because a stimulus that is noxious for one species might not be noxious for another species (Smith & Lewin 2009). For example, the threshold for noxious heat in one species of trout is ~33 °C or perhaps ~25 °C (Ashley et al. 2007), while in chickens it is ~49 °C (Gentle et al. 2001), and it may be higher still in the Pompeii worms that live near hydrothermal vents and regularly experience temperatures above 80 °C (Cary et al. 1998). Another example is this: while most mammalian species have acid-detecting nociceptors — and so do many species of other phyla, including e.g. the leech H. medicinalis (Pastor et al. 1996) — African naked mole-rats do not have acid-detecting nociceptors (Park et al. 2008; Smith et al. 2011).
Non-neural nociceptors are also built into many personal computers (PCs). For example, many PCs contain a sensor which detects whether the computer’s central processing unit (CPU) is becoming dangerously hot, such that if it is, a fan can be signalled to turn on, cooling the CPU (Mueller 2013, chapter 3, section: “Processor Cooling”). Some robots are equipped with sensors that detect both noxious and non-noxious stimuli (Dahl et al. 2011; Kühn & Haddadin 2017), somewhat analogous to the previously-mentioned WDR neurons in humans.
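The CPU-temperature example can be sketched as a simple threshold rule; the following Python snippet is a hypothetical illustration of the idea (real thermal management is handled by firmware and the operating system, not by code like this):

```python
# Hypothetical sketch of the "non-neural nociceptor" idea above: a sensor reading is
# compared with a noxious-heat threshold and a protective response (the fan) is triggered.

NOXIOUS_TEMP_C = 90.0   # illustrative threshold for "potentially damaging" heat

def thermal_nociceptor(cpu_temp_c: float) -> bool:
    """Fires (returns True) only for readings at or above the noxious threshold."""
    return cpu_temp_c >= NOXIOUS_TEMP_C

def control_loop(readings):
    for temp in readings:
        fan_on = thermal_nociceptor(temp)   # protective response to the "nociceptive" signal
        print(f"{temp:5.1f} C -> fan {'ON' if fan_on else 'off'}")

control_loop([55.0, 72.5, 91.3, 88.0, 95.0])
```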
Some readers may be skeptical of (non-neural) nociception in bacteria. For an overview of bacterial signal transduction and subsequent chemotaxis (movement in response to chemical stimuli), see Wadhams & Armitage (2004); Wuichet & Zhulin (2010); Sourjik & Wingreen (2012). In the literature on bacterial chemotaxis, a noxious stimulus is typically called a “repellent,” and the behavioral response away from noxious chemical stimuli is sometimes called “negative chemotaxis.” Example papers describing negative chemotaxis in bacteria include Shioi et al. (1987); Yamamoto et al. (1990); Kaempf & Greenberg (1990); Ohga et al. (1993); Khan et al. (1995); Liu & Fridovich (1996); Karmakar et al. (2015). For a more general account of how a single-celled organism can engage in fairly sophisticated computation, see Bray (2011).
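To give a flavor of how negative chemotaxis can emerge from very simple machinery, here is a toy one-dimensional “run-and-tumble” simulation (my own sketch, not drawn from the cited papers): the cell tumbles more often when the repellent concentration it senses is rising, which statistically carries it away from the repellent source.

```python
import random

def repellent(x: float) -> float:
    """Toy repellent concentration, peaking at x = 0 and falling off with distance."""
    return max(0.0, 100.0 - abs(x))

random.seed(0)
x = 5.0                              # start close to the repellent source
direction = random.choice([-1, 1])
previous = repellent(x)

for _ in range(200):
    x += direction * 1.0             # "run" one step in the current direction
    current = repellent(x)
    rising = current > previous      # is the noxious signal getting stronger?
    # Tumble (re-randomize direction) often when conditions are worsening and rarely
    # when they are improving: a biased random walk that drifts away from the repellent.
    tumble_probability = 0.8 if rising else 0.1
    if random.random() < tumble_probability:
        direction = random.choice([-1, 1])
    previous = current

print(f"final position: {x:+.1f}, repellent concentration there: {repellent(x):.1f}")
```

Real bacterial chemotaxis is implemented quite differently (temporal comparisons of receptor occupancy via a phosphorylation cascade, with adaptation through receptor methylation), but the basic logic of biasing a random walk using the recent change in stimulus intensity is similar.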
As for nociceptors in the human enteric nervous system, Wood (2011) writes: “Presence in the gastrointestinal tract of pain receptors (nociceptors) equivalent to those connected with C-fibers and A-δ fibers elsewhere in the body is likely…”
My sources for the presence or absence of neural nociceptors in multiple taxa are Purves et al. (2011), Sneddon et al. (2014), Smith & Lewin (2009), and Wood (2011). My source for neural nociceptors in the common fruit fly is Terada et al. (2016).
Because my investigation at one point paid special attention to similarities between rainbow trout and chickens, I collected additional specific sources for those taxa, listed in a footnote.330
Neural nociceptors in humans come in many types. The varieties of human nociceptors are described succinctly by Basbaum et al. (2009):
The cell bodies of nociceptors are located in the dorsal root ganglia (DRG) for the body and the trigeminal ganglion for the face, and have both a peripheral and central axonal branch that innervates their target organ and the spinal cord, respectively… There are two major classes of nociceptors… The first includes medium diameter myelinated [insulated] (Aδ) afferents [“afferent” means “nerve fiber of a sensory neuron”] that mediate acute, well-localized “first” or fast pain. These myelinated afferents differ considerably from the larger diameter and rapidly conducting Aβ fibers that respond to innocuous mechanical stimulation (i.e., light touch). The second class of nociceptor includes small diameter unmyelinated “C” fibers that convey poorly localized, “second” or slow pain.
Electrophysiological studies have further subdivided Aδ nociceptors into two main classes. Type I (HTM: high-threshold mechanical nociceptors) respond to both mechanical and chemical stimuli but have relatively high heat thresholds (>50 °C). If, however, the heat stimulus is maintained, these afferents will respond at lower temperatures. And most importantly, they will sensitize (i.e., the heat or mechanical threshold will drop) in the setting of tissue injury. Type II Aδ nociceptors have a much lower heat threshold, but a very high mechanical threshold. Activity of this afferent almost certainly mediates the “first” acute pain response to noxious heat… By contrast, the type I fiber likely mediates the first pain provoked by pinprick and other intense mechanical stimuli.
The unmyelinated C fibers are also heterogeneous. Like the myelinated afferents, most C fibers are polymodal, that is, they include a population that is both heat and mechanically sensitive (CMHs)… Of particular interest are the heat-responsive, but mechanically insensitive, unmyelinated afferents (so-called silent nociceptors) that develop mechanical sensitivity only in the setting of injury… These afferents are more responsive to chemical stimuli (capsaicin or histamine) compared to the CMHs and probably come into play when the chemical milieu of inflammation alters their properties. Subsets of these afferents are also responsive to a variety of itch-producing [stimuli]. It is worth noting that not all C fibers are nociceptors. Some respond to cooling, and [others]… appear to mediate pleasant touch…
The variety of nociceptors across the entire animal kingdom is of course much broader.
6.5 Appendix E. Some clarifications on “neuroanatomical similarity”
In this appendix, I clarify how I estimated the values for “neuroanatomical similarity” when applying my “theory-agnostic estimation process” described above.
As far as I know, there is no widely used measure of neuroanatomical similarity. Moreover, hypotheses about structural homologies are often highly uncertain, and revised over time.
I am not remotely an expert in comparative neuroanatomy, and my very rough ratings for “neuroanatomical similarity with humans” are drawn from my cursory understanding of the field, and would likely be disputed by experts in comparative neuroanatomy. For a brief overview of the major “similarity” factors I’m considering, see e.g. Powers (2014).331
To illustrate some of the similarities and differences I have in mind, I present below a table of the animal taxa I rated for “neuroanatomical similarity with humans” above, shown alongside my rating of neuroanatomical similarity, and some (but not all) of the factors influencing that rating judgment.332 Note that the “processing power” measures (e.g. brain mass, neuron counts, or neuronal scaling rules) are excluded here, because they are captured by a separate column in my previous table.
TAXON | MY RATING | BILATERAL OR RADIAL SYMMETRY? | GANGLIA OR BRAIN? | MIDBRAIN? | RETICULAR FORMATION? | DIENCEPHALON? | TELENCEPHALON? | NEOCORTEX? | DORSOLATERAL PREFRONTAL CORTEX? | EXTREME HEMISPHERIC SPECIALIZATION?
---|---|---|---|---|---|---|---|---|---|---
Humans (for comparison) | — | Bilateral | Brain | Yes | Yes | Yes | Yes, via evagination | Yes, disproportionately large | Yes | Yes333
Chimpanzees | High | Bilateral | Brain | Yes | Yes | Yes | Yes, via evagination | Yes, disproportionately large | Yes | No
Cows | Moderate/high | Bilateral | Brain | Yes | Yes | Yes | Yes, via evagination | Yes | Debated, but probably not334 | No
Chickens | Low/moderate | Bilateral | Brain | Yes | Yes | Yes | Yes, via evagination | Debated, but “not really”335 | No | No
Rainbow trout | Low | Bilateral | Brain | Yes | Yes | Yes | Yes, via eversion336 | No | No | No
Gazami crabs | Very low | Bilateral | Ganglia | No | No | No | No | No | No | No
Common fruit flies | Very low | Bilateral | Ganglia | No | No | No | No | No | No | No
6.6 Appendix F. Illusionism and its implications
In this appendix, I elaborate on my earlier brief explanation of illusionism, and say more about its implications for my tentative conclusions in this report.
6.6.1 What I mean by “illusionism”
Frankish (2016b) explains illusionism this way:
Suppose we encounter something that seems anomalous, in the sense of being radically inexplicable within our established scientific worldview. Psychokinesis is an example. We would have, broadly speaking, three options. First, we could accept that the phenomenon is real and explore the implications of its existence, proposing major revisions or extensions to our science… In the case of psychokinesis, we might posit previously unknown psychic forces and embark on a major revision of physics to accommodate them. Second, we could argue that, although the phenomenon is real, it is not in fact anomalous and can be explained within current science. Thus, we would accept that people really can move things with their unaided minds but argue that this ability depends on known forces, such as electromagnetism. Third, we could argue that the phenomenon is illusory and set about investigating how the illusion is produced. Thus, we might argue that people who seem to have psychokinetic powers are employing some trick to make it seem as if they are mentally influencing objects.
The first two options are realist ones: we accept that there is a real phenomenon of the kind there appears to be and seek to explain it. Theorizing may involve some modest reconceptualization of the phenomenon, but the aim is to provide a theory that broadly vindicates our pre-theoretical conception of it. The third position is an illusionist one: we deny that the phenomenon is real and focus on explaining the appearance of it. The options also differ in explanatory strategy. The first is radical, involving major theoretical revision and innovation, whereas the second and third are conservative, involving only the application of existing theoretical resources.
Turn now to consciousness. Conscious experience has a subjective aspect; we say it is like something to see colours, hear sounds, smell odours, and so on. Such talk is widely construed to mean that conscious experiences have introspectable qualitative properties, or ‘feels’, which determine what it is like to undergo them. Various terms are used for these putative properties. I shall use ‘phenomenal properties’, and, for variation, ‘phenomenal feels’ and ‘phenomenal character’, and I shall say that experiences with such properties are phenomenally conscious… Now, phenomenal properties seem anomalous. They are sometimes characterized as simple, ineffable, intrinsic, private, and immediately apprehended, and many theorists argue that they are distinct from all physical properties, inaccessible to third-person science, and inexplicable in physical terms… Again, there are three broad options.
First, there is radical realism, which treats phenomenal consciousness as real and inexplicable without radical theoretical innovation. In this camp I group dualists, neutral monists, mysterians, and those who appeal to new physics… Second, there is conservative realism, which accepts the reality of phenomenal consciousness but seeks to explain it in physical terms, using the resources of contemporary cognitive science or modest extensions of it. Most physicalist theories fall within this camp, including the various forms of representational theory. Both radical and conservative realists accept that there is something real and genuinely qualitative picked out by talk of the phenomenal properties of experience, and they adopt this as their explanandum. That is, both address [Chalmers’] hard problem.
The third option is illusionism. This shares radical realism’s emphasis on the anomalousness of phenomenal consciousness and conservative realism’s rejection of radical theoretical innovation. It reconciles these commitments by treating phenomenal properties as illusory. Illusionists deny that experiences have phenomenal properties and focus on explaining why they seem to have them. They typically allow that we are introspectively aware of our sensory states but argue that this awareness is partial and distorted, leading us to misrepresent the states as having phenomenal properties… Whatever the details, they must explain the content of the relevant states in broadly functional terms, and the challenge is to provide an account that explains how real and vivid phenomenal consciousness seems. This is the illusion problem.
Illusionism comes in many varieties. Here is an example of what Frankish calls “weak illusionism,” from Carruthers (2000), pp. 93-94:
What would it take for [an explanatory theory] of phenomenal consciousness to succeed? What are the desiderata for a successful theory? I suggest that the theory would need to explain, or explain away, those aspects of phenomenal consciousness which seem most puzzling and distinctive, of which there are five:
- Phenomenally conscious states have a subjective dimension; they have feel; there is something which it is like to undergo them.
- The properties involved in phenomenal consciousness seem to their subjects to be intrinsic and non-relationally individuated.
- The properties distinctive of phenomenal consciousness can seem to their subjects to be ineffable or indescribable.
- Those properties can seem in some way private to their possessors.
- It can seem to subjects that we have infallible (as opposed to merely privileged) knowledge of phenomenally conscious properties.
Note that only (1) is expressed categorically, as a claim about the actual nature of phenomenal consciousness. The other strands are expressed in terms of ‘seemings’, or what the possessors of phenomenally conscious mental states may be inclined to think about the nature of those states. This is because (1) is definitive of the very idea of phenomenal consciousness… whereas (2) to (5), when construed categorically, are the claims concerning phenomenal consciousness which raise particular problems for physicalist and functionalist conceptions of the mind…
Aspect (1) therefore needs to be explained in any successful account of phenomenal consciousness; whereas (2) to (5) – when transposed into categorical claims about the nature of phenomenal consciousness – should be explained away. If we can explain (2) to (5) in a way which involves no commitment to the truth of the things people are inclined to think about phenomenal consciousness, then we can be qualia irrealists (in the strong sense of ‘qualia’…). But if we can explain (1), then we can maintain that we are, nevertheless, naturalistic realists concerning phenomenal consciousness itself.
Frankish calls Carruthers’ theory an example of “weak illusionism” because, while Carruthers suggests that several key features of consciousness are illusions, he seems to accept that perhaps the most central feature of consciousness — its “subjective,” “something it’s like” nature — is real, and needs to be “explained” rather than “explained away.”337 In contrast, Frankish stipulates, a “strong illusionist” would say that even the subjective, qualitative, “what it’s like”-ness of consciousness is an illusion.
My own view is probably best described as a variant of “strong illusionism,” and hereafter I will (like Frankish) use “illusionism” to mean “strong illusionism,” unless otherwise specified. (As Frankish argues, weak illusionism may collapse into strong illusionism anyway.)
However, unlike Frankish, I avoid saying things like “phenomenal consciousness is an illusion” or “phenomenal properties are illusory,” because whereas Frankish defines “phenomenal consciousness” and “phenomenal properties” in a particular philosophical way, I’m instead taking Schwitzgebel’s approach of defining these terms by example (see above).338 On this way of talking, phenomenal consciousness is real, and so are phenomenal properties, and there’s “something it’s like” to be me, and probably there’s “something it’s like” to be a chimpanzee, and probably there isn’t “something it’s like” to be a chess-playing computer, and these “phenomenal properties” and this “something it’s like”-ness aren’t what they seem to be when we introspect about them, and they don’t have the properties that many philosophers have assumed they must have, and that is the sense in which these features of consciousness are “illusory.”
Frankish sounds like he would likely accept this way of talking, too, so long as we have some way to distinguish what the phenomenal realist means by phenomenality-related terms and what the illusionist means by them.339 Below and elsewhere in this report, I use terms like “consciousness” and “qualia” and “phenomenal properties” to refer to the kinds of experiences defined by example above, which both “realists” and “illusionists” agree exist. To refer to the special kinds of qualia and phenomenal properties that “realists” think exist (and that illusionists deny), I use phrases such as “the realist’s notion of qualia.”
Frankish (2016b) and the other papers in the same journal issue do a good job of explaining the arguments for and against illusionism, and I won’t repeat them here. I will, however, make some further effort to explain what illusionism is, since the idea can be difficult to wrap one’s head around, and also these issues are hard to talk about clearly.340 After that, I’ll make some brief comments about what implications illusionism seems to have for the distribution question and for my moral intuitions about moral patienthood.
6.6.2 Other cognitive illusions
First, it’s worth calling to mind some cognitive illusions about other things, which can help to set the stage for understanding how some “more central” features of consciousness might also be illusory.
Consider the “tabletops” illusion on the right. Would you believe that these two tabletops are exactly the same shape? When I first saw this illusion, I couldn’t make my brain believe they were the same shape no matter what I tried. Clearly, the table on the right is longer and narrower than the one on the left. To test this, I cut a piece of paper to be the same shape as the first tabletop, and then I moved and rotated it to cover the second tabletop. Sure enough, they were the same shape! After putting away the piece of paper, my brain still cannot perceive them as the same shape. But, with help from the piece of paper, I can convince myself they are the same shape, even though I may never be able to perceive them as the same shape.
This example illustrates a lesson that will be useful later: we can know something is an illusion even though our direct perception remains as fooled as ever, and even though we do not know how the illusion is produced.342
Of course, our brains don’t merely trick us about particular objects or stimuli. Other cognitive illusions affect us continuously, from birth to death, and some of these illusions have only been discovered quite recently. For example, the human eye’s natural blind spot — mentioned above — wasn’t discovered until the 1660s.343 Or, consider the fact that your entire visual field seems to be “in color,” but in fact you have greatly diminished color perception in the periphery of your visual field, such that you cannot distinguish green and red objects at about 40° eccentricity (away from the center of your visual field), depending on the size of the objects.344 As far as I know, this basic fact about our daily visual experience, which is very easy to test, wasn’t discovered until the 19th century.345
Next, consider a class of pathological illusions, in which patients are either convinced they have a disability they don’t have, or convinced they don’t have a disability they do have. For example, patients with Anton’s syndrome are blind, but they don’t think they are blind:346
Patients with Anton’s syndrome… cannot count fingers or discriminate objects, shapes, or colors… Some patients with Anton’s syndrome cannot even correctly tell if the room lights are on or off. Despite being profoundly… blind, patients typically deny having any visual difficulty. They confabulate responses such that they guess how many fingers the examiner is holding up or whether the lights are on or off. When confronted with their errors, they often make excuses such as “The lights are too dim” or “I don’t have my glasses.”
Or, consider a patient with inverse Anton’s syndrome:347
Although denying visual perception, [the patient] correctly named objects, colors, and famous faces, recognized facial emotions, and read various types of single words with greater than 50% accuracy when presented in the upper right visual field. Upon confrontation regarding his apparent visual abilities, the patient continued to deny visual perceptual awareness… [and] alternatively replied “I feel it,” “I feel like something is there,” “it clicks,” or “I feel it in my mind.”
A patient can even be wrong about the most profound disability of all, death. From a case report on a patient called “JK”:348
When she was most ill, JK claimed that she was dead. She also denied the existence of her mother, and believed that her (JK’s) body was going to explode. On one occasion JK described herself as consisting of mere fresh air and on another she said that she was “just a voice and if that goes I won’t be anything … if my voice goes I will be lost and I won’t know where I have gone”…
…[JK felt] guilty about having claimed social security benefits (to which she was fully entitled) on the grounds that she was dead while she was claiming…
…Her subjective experience of eating was similarly unreal; she felt as though she were “just placing food in the atmosphere”, rather than into her body…
We wanted to know whether the fact that JK had thoughts and feelings (however abnormal) struck her as being inconsistent with her belief that she was dead. We therefore asked her, during the period when she claimed to be dead, whether she could feel her heart beat, whether she could feel hot or cold, and whether she could feel when her bladder was full. She said she could. We suggested that such feelings surely represented evidence that she was not dead, but alive. JK said that since she had such feelings even though she was dead, they clearly did not represent evidence that she was alive. She said she recognised that this was a difficult concept for us to grasp and one which was equally difficult for her to explain, partly because the experience was unique to her and partly because she did not fully understand it herself.
We then asked JK whether she thought we would be able to feel our hearts beat, to feel hunger, and so on if we were dead. JK said that we wouldn’t, and repeated that this experience was unique to her; no one else had ever experienced what she was going through. However, she eventually agreed that it “might be possible”. Hence, JK recognised the logical inconsistency between someone’s being dead and yet remaining able to feel and think, but thought that she was none the less in this state.
What is it like to be a patient with one of these illusions?349 In at least some such cases, a plausible interpretation seems to be that when the patient introspects about whether they are blind or dead, they seem to just know, “directly,” that they are dead, or blind, or not blind, and this feeling of “knowing” trumps the evidence presented to them by the examiner. (Perhaps if these patients had been trained as philosophers, they would claim they had “direct acquaintance”350 with the fact that they were dead, or blind, or not blind.)
In any case, it’s clear that we can be subject to profound illusions about the external world, about ourselves, and about our own capacities. But can we be wrong about our own subjective experience? When perceiving the tabletops illusion above, we are wrong about the shape of the tabletops, but presumably we are right about what our subjective experience of the tabletops is like — right? Isn’t it the case that “where consciousness is concerned, the existence of the appearance is the reality”?351
In fact, I think we are very often wrong about our own subjective experiences.352 To get a sense for why I think so, try this experiment: close your eyes, picture in as much detail as you can the front of your house or apartment building from across the street, and ask a friend to read you these questions one at a time (pausing for several seconds between each question, so you have a chance to think about the answer):353
How much of the scene can you vividly visualize at once? Can you keep the image of the chimney vividly in mind at the same time that you vividly imagine your front door, or how does the image of the chimney fade as you begin to think about the door? How much detail does your image have? How stable is it? If you can’t visually imagine the entire front of your house in rich detail all at once, what happens to the aspects of the image that are relatively less detailed? If the chimney is still experienced as part of the imagery when your imagemaking energies are focused on the front door, how exactly is it experienced? Does it have determinate shape, determinate color? In general, do the objects in your image have color before you think to assign color to them, or do some of the colors remain indeterminate, at least for a while…? If there is indeterminacy of color, how is that indeterminacy experienced? As gray? Does your visual image have depth in the same way that your sensory experience does… or is your imagery somehow flatter…? …Do you experience the image as located somewhere in egocentric space – inside your head, or before your eyes, or in front of your forehead – or does it make no sense to attempt to assign it a position in this way?
When questioned in this way, I suspect many people will quickly become quite uncertain about the subjective character of their own conscious experience of imagining the front of their house or apartment building.
Here is another exercise from Dennett (1991), ch. 4:
…would you be prepared to bet on the following propositions? (I made up at least one of them.)
- You can experience a patch that is red and green all over at the same time — a patch that is both colors (not mixed) at once.
- If you look at a yellow circle on a blue background (in good light), and the luminance or brightness of the yellow and blue are then adjusted to be equal, the boundary between the yellow and blue disappears.
- There is a sound, sometimes called the auditory barber pole, which seems to keep on rising in pitch forever, without ever getting any higher.
- There is an herb an overdose of which makes you incapable of understanding spoken sentences in your native language. Until the effect wears off, your hearing is unimpaired, with no fuzziness or added noise, but the words you hear sound to you like an entirely foreign language, even though you somehow know they aren’t.
- If you are blindfolded, and a vibrator is applied to a point on your arm while you touch your nose, you will feel your nose growing like Pinocchio’s; if the vibrator is moved to another point, you will then have the eerie feeling of pushing your nose inside out, with your index finger coming to rest somewhere inside your skull.
Do you know which one Dennett fabricated? I reveal the answer in a footnote.354
To try additional exercises of this sort, see Schwitzgebel (2011).355
6.6.3 Where do the illusionist and the realist disagree?
In sum, the illusions to which we are susceptible are deep and pervasive.356 But even given all this, what could it mean for the realist’s notion of the “what it’s like”-ness of conscious experience to be an illusion? What is it, exactly, that the realist and the weak illusionist both think exists, but which the strong illusionist does not?357
This question is difficult to answer clearly because, after all, the realist (and perhaps the weak illusionist) is claiming that (their notion of) “phenomenal property” is sui generis, and rather unlike anything else we understand.
More centrally, it is not clear what the realist and the weak illusionist think is left to explain once the strong illusionist has explained (or explained away) the apparent privacy, ineffability, subjectivity, and intrinsicness of qualia. Frankish (2012a)358 makes this point by distinguishing three (soda-inspired) notions of “qualia” that have been discussed by philosophers:
- Classic qualia: “Introspective qualitative properties of experience that are intrinsic (see footnote 359), ineffable, and subjective.” (Classic qualia are widely thought to be incompatible with physicalism.)
- Diet qualia: “The phenomenal characters (subjective feels, what-it-is-likenesses, etc.) of experience.”
- Zero qualia: “The properties of experiences that dispose us to judge that experiences have introspectable qualitative properties that are intrinsic, ineffable, and subjective.”
Which of these notions of qualia should be the core “explanandum” (thing to be explained) of consciousness? In my readings, the most popular option these days seems to be diet qualia, since assuming classic qualia as the explanandum seems rather presumptuous, and would seem to beg the question against the physicalist. Diet qualia, in contrast, appears to be a theory-neutral explanandum of consciousness. One can take diet qualia to be the core explanandum of consciousness, and then argue that the best explanation of that explanandum is that classic qualia exist, or argue for a physicalist account of diet qualia, or argue something else.
Frankish, however, argues that the notion of diet qualia, when we look at it carefully, turns out to have no distinctive content beyond that of zero qualia. Frankish asks: if an experience could have zero qualia without having diet qualia,
…what exactly would be missing? Well, a phenomenal character, a subjective feel, a what-it-is-likeness. But what is that supposed to be, if not some intrinsic, ineffable, and subjective qualitative property? This is the crux of the matter. I can see how the properties that dispose us to judge that our experiences have classic qualia might not be intrinsic, ineffable, and subjective, but I find it much harder to understand how a phenomenal character itself might not be. What could a phenomenal character be, if not a classic quale? How could a phenomenal residue remain when intrinsicality, ineffability, and subjectivity have been stripped away?
The worry can be put another way. There are competing pressures on the concept of diet qualia. On the one hand, it needs to be weak enough to distinguish it from that of classic qualia, so that functional or representational theories of consciousness are not ruled out a priori. On the other hand, it needs to be strong enough to distinguish it from the concept of zero qualia, so that belief in diet qualia counts as realism about phenomenal consciousness. My suggestion is that there is no coherent concept that fits this bill. In short, I understand what classic qualia are, and I understand what zero qualia are, but I do not understand what diet qualia are; I suspect the concept has no distinctive content.
So what might the “something it’s like”-ness of diet qualia be, if it is more than zero qualia and less than classic qualia? Frankish surveys several possible answers, and finds all of them wanting. He concludes that, as far as he can tell, “there is no viable ‘diet’ notion of qualia which is stronger than that of zero qualia yet weaker than that of classic qualia and which picks out a theory-neutral explanandum [of consciousness].”
Frankish then complains about “the diet/zero shuffle”:
I have argued that the notion of diet qualia has no distinctive content. If there are no classic qualia, then all that needs explaining (as far as ‘what-it-is-likeness’ goes) are zero qualia. This is not a popular view, but it is one that is tacitly reflected in the practice of philosophers who offer reductive accounts of consciousness. Typically, these accounts involve a three-stage process. First, diet qualia are introduced as a neutral explanandum. Second, diet qualia are identified with some natural, usually relational, property of experience, such as possession of a form of non-conceptual intentional content or availability to higher-order thinking. Third, this identification is defended by arguing that we would be disposed to judge that experiences with this property have intrinsic, ineffable, and subjective qualitative properties. In the end, diet qualia are not explained at all but simply identified with some other feature, and what actually get explained are zero qualia. I shall call this the diet/zero shuffle.
If Frankish is right, the upshot is that there is no coherent “what it’s like”-ness that needs explaining above and beyond zero qualia, unless non-physicalists can argue convincingly for the existence of classic qualia. And if zero qualia are all that need to be explained, then the strong illusionist still has lots of work to do, but it looks like a much less mysterious kind of work than one might have previously thought, and perhaps fairly similar to the kind of work that remains to explain human memory, attentional systems, and the types of cognitive illusions described above.
Now, supposing (strong) illusionism is right, what are the implications for the distribution question, and for my intuitions about which properties of consciousness are important for moral patienthood? I discussed the former question here, and I discuss the latter question next.
6.6.4 Illusionism and moral patienthood
What are the implications of illusionism for my intuitions about moral patienthood? In one sense, there might not be any.360 After all, my intuitions about (e.g.) the badness of conscious pain and the goodness of conscious pleasure were never dependent on the “reality” of specific features of consciousness that the illusionist thinks are illusory. Rather, my moral intuitions work more like the example I gave earlier: I sprain my ankle while playing soccer, don’t notice it for 5 seconds, and then feel a “rush of pain” suddenly “flood” my conscious experience, and I think “Gosh, well, whatever this is, I sure hope nothing like it happens to fish!” And then I reflect on what was happening prior to my conscious experience of the pain, and I think “But if that is all that happens when a fish is physically injured, then I’m not sure I care.” And so on. (For more on how my moral intuitions work, see Appendix A.)
But if we had a better-developed understanding of how consciousness works, this could of course have important implications for my intuitions about moral patienthood. Perhaps a satisfactory illusionist theory of consciousness developed 20 years from now will show that some of the core illusions of human consciousness are irrelevant to what I morally care about, such that animals without those particular illusions of human consciousness still have a morally relevant sort of consciousness. This is, indeed, the sort of reasoning that leads me to assign a higher probability that various animal taxa will have “consciousness of a sort I morally care about” than “consciousness as defined by example above.” But getting further clarity about this must await more clarity about how consciousness works.
Personally, my hunch is that the cognitive algorithms which produce (e.g.) the illusion of an explanatory gap, and other “core” illusions of (human) consciousness, are going to be pretty important to what I find morally important about consciousness. In other words, my hunch is that if we could remove those particular illusion-producing algorithms from how a human mind works, then all “pain” might be to that person like the “first 5 seconds of unnoticed nociception” from my sprained-ankle example.361 But this is just a hunch, and my hunch could easily be wrong, and I could imagine ending up with a different hunch if I spent ~20 hrs trying to extract where my intuitions on this are coming from, and I could also imagine myself having different moral intuitions about the sprained-ankle example and other issues if I took the time to do something closer to my “extreme effort” process.
Also, it could even be that a brain similar to mine except that it lacked particular illusions might actually be a “moral superpatient,” i.e. a moral patient with especially high moral weight. For example, suppose a brain like mine was modified such that its introspective processes had much greater access to, and understanding of, the many other cognitive processes going on within the brain, such that its experiences seemed much less “ineffable.” Arguably, such a being would have especially morally weighty experiences.
6.7 Appendix G. Consciousness and fuzziness
In this appendix, I elaborate on my earlier comments about the “fuzziness” of consciousness.
6.7.1 Fuzziness and moral patienthood
First, how does a “fuzzy” view of consciousness (both between and within individuals) interact with the question of moral patienthood?
The question of which beings should be considered moral patients by virtue of their phenomenal consciousness requires the collaboration of two kinds of reasoning:
- Scientific explanation: How does consciousness work? How can it vary? How can we measure it?
- Moral judgment: Which kinds of processes — in this case, related to consciousness — are sufficient for moral patienthood?
In theory, a completed scientific explanation of consciousness could reveal a relatively clear dividing line between the conscious and the not-conscious. The discovery of a clear dividing line could allow for relatively straightforward moral judgments about which beings are moral patients (via their phenomenal consciousness). But if we discover (as I expect we will) that there is no clear dividing line between the conscious and the not-conscious — i.e. that consciousness is “fuzzy” — this could make our consciousness-derived moral judgments quite difficult.362 To illustrate this point, I’ll tell a story about two fictional pre-scientific groups of people, each united by a common “sacred” value.
The first group, the water-lovers, have a sacred value for water. They don’t know what water is made of or how it works, but they have positive and negative (or probably negative) examples of it. The clear liquid in lakes is water; the clear liquid that falls from the sky sometimes is water; the discolored liquid in a pond is probably water with extra stuff in it, but they’re not absolutely certain; the red liquid that comes out of a stab wound probably isn’t water, but it might be water with some impurities, like the water in a pond. The water-lovers also have some observations and ideas about how water seems to work. It can quench thirst; it slips and slides over the body; it can’t be walked on; if it gets very cold it turns to ice; it seems to eventually float away into the air if placed in a pot with a fire beneath it; if snow falls on one’s hand it seems to transform into water; etc.
Then, some scientists invent modern chemistry and discover that water is H₂O, that it can be separated into hydrogen and oxygen via electrolysis, that it has particular boiling and melting points, that the red liquid that flows from a stab wound is partly water but partly other things, and so on. In this case, the water-lovers have little trouble translating their sacred value for “water” — defined in a pre-scientific ontology — into a value for certain things in the ontology of the real world discovered by science. The resolution to this “cross-ontology value translation” challenge is fairly straightforward: they value H₂O.363
A few of the water-lovers conclude that what scientists have really shown is that water doesn’t exist — only H₂O exists. They suffer an existential crisis and become nihilists. But most water-lovers carry on as before, assigning sacred value to water in lakes, rain that falls from the sky, and so on. A few of the group’s intellectuals devote themselves to the task of making judgments about edge cases, such as whether “heavy water” (²H₂O) should be thought to have sacred value.
Another group of people at the time are the life-lovers, who have a sacred value for life.364 They don’t know how life works, but they have many positive and probably-negative examples of it, and they have some observations and ideas about how it seems to work. Humans are alive, up to the point where they stop responding even upon being dunked in water. Animals are also alive, up to the point where they stop responding even when poked with a knife. Plants are alive, but they move much more slowly than animals do. The distinction between living and non-living things is thought to be that living things are possessed by a “life force,” or perhaps by some kind of supernatural soul, which allows living things — unlike non-living things — to grow and reproduce and respond to the environment.
Then, some scientists invent modern biology. They discover that the ill-defined set of processes previously gestured at with terms like “life” is fully explained by the mechanistic activity of atoms and molecules, with no role for a separate life force or soul. This set of processes includes the development and maintenance of a cellular structure, homeostasis, metabolism, reproduction, growth, a wide variety of mechanisms for responding to stimuli, and more. A beginner’s introduction to how these processes commonly work is not captured by a simple chemical formula, but by an 800-page textbook.
Moreover, the exact set of processes at work, and the parameters of those processes, vary widely across systems. Virtually all “living systems” depend on photosynthesis-derived materials, but some don’t. Most survive in a relatively narrow range of temperatures, chemical environments, radiation levels, and gravity strengths, but many thrive in more extreme environments. Most actively regulate internal variables (e.g. temperature) within a relatively narrow range and die if they fail to do so, but some can instead enter a potentially years-long state of non-activity, from which they can later be revived.365 Most age, but some don’t. They range from ~0.5 µm to 2.8 km in length, and from mere weeks to thousands of years in life span.366 There are many edge cases, such as viruses, parasites, and bacterial spores.367
All this leaves the life-lovers with quite a quandary. How should they preserve their sacred value for “life” — defined in a pre-scientific ontology — as a value for some but not all of these innumerable varieties of life-related processes discovered by science? How should they resolve their cross-ontology value translation problem?
Some conclude they never should have cared about living things in the first place. Others conclude that any carbon-based thing which grows and reproduces should have sacred value — but one consequence of this is that they assign sacred value to certain kinds of crystals,368 and some life-lovers find this a bit nutty. Others assign sacred value only to systems which satisfy a longer list of life-related criteria that includes “autonomous reproduction,” but this excludes the cuckoo yellowjacket wasp and many other animals, which again seems strange to many other life-lovers. The life-lovers experience a series of schisms over which life-related processes have sacred value.
Like many physicalists,369 I expect the scientific explanation of phenomenal consciousness to look less like the scientific explanation of water and more like the scientific explanation of life. That is, I don’t expect there to be a clear dividing line between the conscious and the non-conscious. And as scientists continue to decompose consciousness into its component processes and reveal their great variety, I expect people to come to radically different moral judgments about which kinds of consciousness-related processes are moral patients and which are not. I suspect this will be true even for people who started out with very similar values (as defined in a first-person ontology, or as defined in the inchoate scientific ontologies available to us in 2017).370
I don’t have a knock-down argument for the fuzzy view, but consider: does it seem likely there was a single gene mutation in phylogeny such that earlier creatures had no conscious experience at all, while carriers of the mutation do have some conscious experience? Does it seem likely there is some moment in the development of a human fetus or infant before which it has no conscious experience at all, and after which it does have some conscious experience? Is there a clear dividing line between what is and isn’t alive, or between software that does and doesn’t implement some form of “attention,” “memory,” or “self-modeling”? Is “consciousness” likely to be as simple as electrons, crystals, and water, or is it more likely to be a more complex set of interacting processes like “life” or “vision” or “language use” or “face-recognition,” which even in fairly “minimal” form can vary along many different dimensions such that there is no obvious answer as to whether some things should count as belonging to one of these classes or not, except by convention? (I continue this line of thinking below.)
As I mentioned above, one consequence of holding a fuzzy view of consciousness is that it can be hard to give a meaningful response to questions like “How likely do you think it is that chickens / fishes / fruit flies are conscious?” Or as Dennett (1995) puts it,
Wondering whether it is “probable” that all mammals have [consciousness] thus begins to look like wondering whether or not any birds are wise or reptiles have gumption: a case of overworking a term from folk psychology that has [lost] its utility along with its hard edges.
I considered multiple options for how to proceed given this difficulty,371 and in the end I decided to take the following approach for this report: I temporarily set my moral judgments aside, and investigated the likely distribution of “consciousness” (as defined above), while acknowledging the extreme fuzziness of the concept. Then, at the end, I brought my moral judgments back into play, and explored what my empirical findings might imply given my moral judgments.
If you’re interested, I explain a few of my moral intuitions in Appendix A. I suspect these intuitions affect this report substantially, because they probably affect my intuitions about which beings are “conscious,” even when I try to pursue that investigation with reference to consciousness as defined above rather than with reference to “types of consciousness I intuitively morally care about.”372
6.7.2 Fuzziness and Darwin
Here, I’d like to say a bit more about why I expect consciousness to be fuzzy.
In part, this is because I think we should expect the vast majority of biological concepts — including concepts defined in terms of biological cognition (as consciousness is defined above, even if it can also be applied to non-biological systems) — to be at least somewhat “fuzzy.”
One key reason for this, I think, is Darwin. Dennett (2016b) explains:
Ever since Socrates pioneered the demand to know what all Fs have in common, in virtue of which they are Fs, the ideal of clear, sharp boundaries has been one of the founding principles of philosophy. Plato’s forms begat Aristotle’s essences, which begat a host of ways of asking for necessary and sufficient conditions, which begat natural kinds, which begat difference-makers and other ways of tidying up the borders of all the sets of things in the world. When Darwin came along with the revolutionary discovery that the sets of living things were not eternal, hard-edged, in-or-out classes but historical populations with fuzzy boundaries… the main reactions of philosophers were to either ignore this hard-to-deny fact or treat it as a challenge: Now how should we impose our cookie-cutter set theory on this vague and meandering portion of reality?
“Define your terms!” is a frequent preamble to discussions in philosophy, and in some quarters it counts as Step One in all serious investigations. It is not hard to see why. The techniques of argumentation inaugurated by Socrates and Plato and first systematized by Aristotle are not just intuitively satisfying… but demonstrably powerful tools of discovery… Euclid’s plane geometry was the first parade case, with its crisp isolation of definitions and axioms, inference rules, and theorems. If only all topics could be tamed as thoroughly as Euclid had tamed geometry! The hope of distilling everything down to the purity of Euclid has motivated many philosophical enterprises over the years, different attempts to euclidify all the topics and thereby impose classical logic on the world. These attempts continue to this day and have often proceeded as if Darwin never existed…
…
An argument that exposes the impact of Darwinian thinking is David Sanford’s (1975) nice “proof” that there aren’t any mammals:
1. Every mammal has a mammal for a mother.
2. If there have been any mammals at all, there have been only a finite number of mammals.
3. But if there has been even one mammal, then by (1), there have been an infinity of mammals, which contradicts (2), so there can’t have been any mammals. It’s a contradiction in terms.
Because we know perfectly well that there are mammals, we take this argument seriously only as a challenge to discover what fallacy is lurking within it. And we know, in a general way, what has to give: if you go back far enough in the family tree of any mammal, you will eventually get to the therapsids, those strange, extinct bridge species between the reptiles and the mammals… A gradual transition occurred over millions of years from clear reptiles to clear mammals, with a lot of intermediaries filling in the gaps. What should we do about drawing the lines across this spectrum of gradual change? Can we identify a mammal, the Prime Mammal, that didn’t have a mammal for a mother, thus negating premise (1)? On what grounds? Whatever the grounds are, they will compete with the grounds we could use to support the verdict that that animal was not a mammal – after all, its mother was a therapsid. What could be a better test of therapsid-hood than that? Suppose that we list ten major differences used to distinguish therapsids from mammals and declare that having five or more of the mammal marks makes an animal a mammal. Aside from being arbitrary – why ten instead of six or twenty, and shouldn’t they be ordered in importance? – any such dividing line will generate lots of unwanted verdicts because during the long, long period of transition between obvious therapsids and obvious mammals there will be plenty of instances in which mammals (by our five + rule) mated with therapsids (fewer than five mammal marks) and had offspring that were therapsids born of mammals, mammals born of therapsids born of mammals, and so forth! …What should we do? We should quell our desire to draw lines. We can live with the quite unshocking and unmysterious fact that, you see, there were all these gradual changes that accumulated over many millions of years and eventually produced undeniable mammals.
The insistence that there must be a Prime Mammal, even if we can never know when and where it existed, is an example of hysterical realism. It invites us to reflect that if we just knew enough, we’d see – we’d have to see – that there is a special property of mammal-hood – the essence of mammal-hood – that defines mammals once and for all. To deny that there is such an essence, philosophers sometimes say, is to confuse metaphysics with epistemology: the study of what there (really) is with the study of what we can know about what there is. I reply that there may be occasions when thinkers do go off the rails by confusing a metaphysical question with a (merely) epistemological question, but this must be shown, not just asserted. In this instance, the charge of confusing metaphysics with epistemology is just a question-begging way of clinging to one’s crypto essentialism in the face of difficulties.
…
In particular, the demand for essences with sharp boundaries blinds thinkers to the prospect of gradualist theories of complex phenomena, such as life, intentions, natural selection itself, moral responsibility, and consciousness.
If you hold that there can be no borderline cases of being alive (such as, perhaps, viruses or even viroids or motor proteins), you are more than halfway to élan vital before you start thinking about it. If no proper part of a bacterium, say, is alive, what “truth maker” gets added that tips the balance in favor of the bacterium’s being alive? The three more or less standard candidates are having a metabolism, the capacity to reproduce, and a protective membrane, but since each of these phenomena, in turn, has apparent borderline cases, the need for an arbitrary cutoff doesn’t evaporate. And if single-celled “organisms” (if they deserve to be called that!) aren’t alive, how could two single-celled entities yoked together with no other ingredients be alive? And if not two, what would be special about a three-cell coalition? And so forth.
Of course, “fuzziness” is not limited to biological and biology-descended concepts. Mathematical concepts and some concepts used in fundamental physics have sharp boundaries, but the further from these domains we travel, the less sharply defined our concepts tend to be.373
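To make the arbitrariness of such threshold rules vivid, here is a minimal, purely illustrative Python sketch of Dennett’s “five or more mammal marks” rule applied to a gradually changing lineage. Everything in it is an invented assumption for illustration: the ten “marks,” the cutoff of five, and the noisy upward drift of the lineage. The point is only that any sharp cutoff imposed on a gradual transition will generate “mammals” born of “therapsids” and “therapsids” born of “mammals.”

```python
# Illustrative sketch only: an arbitrary "5 of 10 marks" cutoff applied to a
# gradually changing lineage. The marks, threshold, and lineage are invented.
import random

random.seed(0)
THRESHOLD = 5  # the arbitrary "five or more mammal marks" rule


def is_mammal(mark_count: int) -> bool:
    """Apply the arbitrary cutoff to a creature's number of mammal marks."""
    return mark_count >= THRESHOLD


def simulate_lineage(generations: int = 40) -> list[int]:
    """Generate mammal-mark counts that drift upward across generations,
    with a little generation-to-generation noise (a crude stand-in for the
    therapsid-to-mammal transition)."""
    counts = []
    for g in range(generations):
        trend = g / (generations - 1) * 10      # gradual drift from 0 toward 10 marks
        noise = random.choice([-1, 0, 0, 1])    # offspring vary slightly from the trend
        counts.append(max(0, min(10, round(trend) + noise)))
    return counts


lineage = simulate_lineage()
for parent, child in zip(lineage, lineage[1:]):
    if is_mammal(child) and not is_mammal(parent):
        print(f"'mammal' ({child} marks) born of 'therapsid' ({parent} marks)")
    elif is_mammal(parent) and not is_mammal(child):
        print(f"'therapsid' ({child} marks) born of 'mammal' ({parent} marks)")
```

Running the sketch prints several category crossings in both directions, which is just Dennett’s “unwanted verdicts” point restated: the cutoff, not the biology, is doing the classifying.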
6.7.3 Fuzziness and auto-activation deficit
Finally, here is one more “intuition pump” (Dennett 2013) in favor of a “fuzzy” view about consciousness.
Consider a form of akinetic mutism known variously as “athymhormia,” “psychic akinesia,” or “auto-activation deficit.” Leys & Henon (2013) explain:
Patients with akinetic mutism appear alert or at least wakeful, because their eyes are open and they have active gaze movements. They are mute and immobile, but they are able to follow the observer or moving objects with their eyes, to whisper a few monosyllables, and to have slow feeble voluntary movements under repetitive stimuli. The patients can answer questions, but otherwise never voluntarily start speaking. In extreme circumstances such as noxious stimuli, they can become agitated and even say appropriate words. This neuropsychological syndrome occurs despite the lack of obvious alteration of sensory motor functions. This syndrome results in impaired abilities in communicating and initiating motor activities.
…Akinetic mutism is due to disruption of reticulo-thalamo-frontal and extrathalamic reticulo-frontal afferent pathways…
…A discrepancy may be present between hetero- and auto-activation, the patient having an almost normal behavior under external stimuli. This peculiar form of akinetic mutism has been reported as [athymhormia], “loss of psychic self-activation” or “pure psychic akinesia”…
Below, I quote some case reports of auto-activation deficit, and then explain why I think they (weakly) suggest a fuzzy view of consciousness.
Case report #1: A 35-year-old woman
From ch. 4 of Heilman & Satz (1983), by Damasio & Van Hoesen, on pp. 96-99:
J is a 35-year-old woman…
On the night of admission she was riding in a car driven by her husband and talking normally, when she suddenly slumped forward, interrupted her conversation, and developed weakness of the right leg and foot. On arrival at the hospital she was alert but speechless…
…There was a complete absence of spontaneous speech. The patient lay in bed quietly with an alert expression and followed the examiner with the eyes. From the standpoint of affect her facial expression could be best described as neutral. She gave no reply to the questions posed to her, but seemed perplexed by this incapacity. However, the patient did not appear frustrated… She never attempted to mouth words… She made no attempt to supplement her verbal defect with the use of gesture language. In striking contrast to the lack of spontaneous speech, the patient was able, from the time of admission, to repeat words and sentences slowly, but without delay in initiation. The ease in repetition was not accompanied by echolalia [unsolicited repetition of vocalizations made by someone else], and the articulation and melody of repeated speech were normal. The patient also gave evidence of good aural comprehension of language by means of nodding behavior… Performance on the Token Test was intact and she performed normally on a test of reading comprehension…
Spontaneous and syntactically organized utterances to nurses and relatives appeared in the second week postonset, in relation to immediate needs only. She was at this point barely able to carry a telephone conversation using mostly one- and two-word expressions. At 3 weeks she was able to talk in short simple but complete sentences, uttered slowly… Entirely normal articulation was observed at all times…
On reevaluation, 1 month later, the patient was remarkably recovered. She had considerable insight into the acute period of the illness and was able to give precious testimony about her experiences then. Asked if she ever suffered anguish for being apparently unable to communicate she answered negatively. There was no anxiety, she reported. She didn’t talk because she had “nothing to say.” Her mind was “empty.” Nothing “mattered.” She apparently was able to follow our conversations even during the early period of the illness, but felt no “will” to reply to our questions. In the period after discharge she continued to note a feeling of tranquility and relative lack of concern…
Case report #2: A 61-year-old clerk
Bogousslavsky et al. (1991) explain:
A 61-year old clerk with known [irregular heartbeat] was admitted because of “confusion” and [a drooping eyelid]…
According to his family, [following a stroke] he had become “passive” and had lost any emotional concern. He was [drowsy] and was orientated in time and place…, but remained apathetic and did not speak spontaneously. He moved very little unless asked to do so, only to go to the bathroom three or four times a day. He would sit down at the table to eat only when asked by the nurses or family and would stop eating after a few seconds unless repeatedly stimulated. During the day, he would stay in bed or in an armchair unless asked to go for a walk. He did not react to unusual situations in his room, such as Grand Mal seizures in another patient. He did not read the newspapers and did not watch the television. This behaviour contrasted with preserved motor and speech abilities when he was directly stimulated by another person: with constant activation, he was able to move and walk normally, he could play cards, answer questions, and read a text and comment on it thereafter; however, these activities would stop immediately if the external stimulation disappeared. He did not show imitation and utilization behaviour [imitation behavior occurs when patients imitate an examiner’s behavior without being instructed to do so; utilization behavior occurs when patients try to grab and use everyday objects presented to them, without being instructed to do so], and could inhibit socially inadequate acts, even when asked to perform them by an examiner (shouting in the room, undressing during daytime); however, he did not react emotionally to such orders. Also, he showed no emotional concern [about] his illness, though he did not deny it, and he remained indifferent when he had visitors or received gifts. He did not smile, laugh or cry. He never mentioned his previous activities and, when asked about his job, he answered he had no project to go back to work. When asked about his private thoughts, he just said “that’s all right”, “I think of nothing,” “I don’t want anything.” Because his motor and mental abilities seemed normal when stimulated by another person, his family and friends wondered whether he was really ill or was inactive on purpose, to annoy them. They complained he had become “a larva.”
Formal neuropsychological examination using a standard battery of tests was performed 3, 10, 25 and 60 days after stroke, including naming, repetition, comprehension, writing, reading, facial recognition, visuospatial recognition, topographic orientation on maps, drawing, copy of the Rey-Osterrieth figure, which were normal. No memory dysfunction was found: the patient could evoke remote and recent events of his past, visual… and verbal… learning, delayed reproduction of the Rey-Osterrieth figure showed normal results for age. Only minor disturbances were found on “frontal lobe tests”… His symbolic understanding of proverbs was preserved. The patient could cross out 20 lines distributed evenly on a sheet of paper; with no left- or right-side preference. [Editor’s note: see the paper for sources describing these tests.]
The patient was discharged unchanged two months after stroke to a chronic care institution, because his family could not cope with his behavioural disturbances, though they recognized that his intellect was spared.
I quote from five additional AAD case studies in a footnote.374
Also, Laplane & Dubois (2001) summarize findings from several other cases that I have not read because they were published in French. For example, citing the case study by Damasio & Van Hoesen plus three French papers containing additional case studies, they write:
It is surprising that subjects who are cognitively unimpaired can remain inactive for hours without complaining of boredom. Their mind is “empty, a total blank,” they say. In the most typical cases, they have no thoughts and no projections in the future. Although purely subjective, this feeling of emptiness seems to be a reliable symptom, since it has been reported in almost the same terms by numerous patients.
What can we learn from these case studies? One lesson, I think, is that phenomena we might have previously thought were inseparable are, in fact, separable, as Watt & Pincus (2004) argue. They suggest that “milder versions” of akinetic mutism, such as cases in which patients “respond to verbal inquiry” (as in the cases above), “appear to offer evidence of the independence of consciousness from an emotional bedrock, and that the former can exist without the latter.” They also offer a slightly different interpretation, according to which
lesser versions of [akinetic mutism]… may allow some phenomenal content, while the more severe versions… may show a virtual “emptying out” of consciousness. In these cases, events may be virtually meaningless and simply don’t matter anymore… [In more severe cases] patients [might] live in a kind of strange, virtually unfathomable netherworld close to the border of a persistent vegetative state.
They go on to say that their summary of disorders of consciousness “emphasizes their graded, progressive nature and eschews an all-or-nothing conceptualization. While intuitively appealing, an all-or-nothing picture of consciousness provides a limited basis for heuristic empirical study of the underpinnings of consciousness from a neural systems point of view, as compared to a graded or hierarchical one that emphasizes the core functional envelopes of emotion, intention, and attention.”
For a similar illustration involving various pain phenomena rather than AAD, see Corns (2014).
6.8 Appendix H. First-order views, higher-order views, and hidden qualia
In this appendix, I describe what I see as some of the most important arguments in the debate between first-order and higher-order theorists, which (in the present literature) is perhaps the most common way for theorists to argue about the complexity of consciousness and the distribution question (as mentioned above). See here for an explanation of what distinguishes first-order and higher-order views about consciousness.
Let’s start with the case for a relatively complex account of consciousness. In short, a great deal of cognitive processing that seems to satisfy (some) first-order accounts of consciousness — e.g. the processing which enables the blindsight subject to correctly guess the orientation of lines in her blind spot, or which occurs in the dorsal stream of the human visual system,375 or which occurs in “smart” webcams or in Microsoft Windows — is, as far as we know, unconscious. This is the basic argument that consciousness must be more complex than first-order theorists suggest, whether or not “higher-order” theories, as narrowly defined in e.g. Block (2011), are correct. What can the first-order theorist say in reply?
Most responses I’ve seen seem unpersuasive to me.376 However, there is at least one reply that seems (to me) to have substantial merit, though it does not settle the issue.377
This reply says that there may be “hidden qualia” (perhaps including even “hidden conscious subjects”), in the sense that there may be conscious experiences — in the human brain and perhaps in other minds — that are not accessible to introspection, verbal report, and so on. If so, then this would undermine the basic argument (outlined above) that consciousness must be more complex than first-order theorists propose. Perhaps (e.g.) the dorsal stream is conscious, but its conscious experiences simply are not accessible to “my” introspective processes and the memory and verbal reporting modules that are hooked up to “my” introspective processes.
This “hidden qualia” view certainly seems coherent to me.378 The problem, of course, is that (by definition) we can’t get any introspective evidence of hidden qualia, and without a stronger model of how consciousness works, we can’t use third-person methods to detect hidden qualia, either. Nevertheless, there are some (inconclusive) reasons to think that hidden qualia may exist, given what little we know so far about consciousness.379
6.8.1 Block’s overflow argument
First, consider the arguments in Block (2007b):
No one would suppose that activation of the fusiform face area all by itself is sufficient for face-experience. I have never heard anyone advocate the view that if a fusiform face area were kept alive in a bottle, that activation of it would determine face-experience – or any experience at all… The total neural basis of a state with phenomenal character C is itself sufficient for the instantiation of C. The core neural basis of a state with phenomenal character C is the part of the total neural basis that distinguishes states with C from states with other phenomenal characters or phenomenal contents, for example the experience as of a face from the experience as of a house… So activation of the fusiform face area is a candidate for the core neural basis – not the total neural basis – for experience as of a face…
Here is the illustration I have been leading up to. There is a type of brain injury which causes a syndrome known as visuo-spatial extinction. If the patient sees a single object on either side, the patient can identify it, but if there are objects on both sides, the patient can identify only the one on the right and claims not to see the one on the left… With competition from the right, the subject cannot attend to the left. However… when [a patient named] G.K. claims not to see a face on the left, his fusiform face area (on the right, fed strongly by the left side of space) lights up almost as much as when he reports seeing the face… Should we conclude that [a] G.K. has face experience that – because of lack of attention – he does not know about? Or that [b] the fusiform face area is not the whole of the core neural basis for the experience, as of a face? Or that [c] activation of the fusiform face area is the core neural basis for the experience as of a face but that some other aspect of the total neural basis is missing? How are we to answer these questions, given that all these possibilities predict the same thing: no face report?
Block argues that option [a] is often what’s happening inside our brains (whether or not it happens to be happening for G.K. and face experiences in particular). In other words, he thinks there are genuine phenomenal experiences going on inside our heads to which we simply don’t have cognitive access, because the capacity/bandwidth of the cognitive access mechanisms are more limited than the capacity/bandwidth of the phenomenality mechanisms — in other words, that “phenomenality overflows access.”
Clark & Kiverstein (2007) summarize Block’s “overflow argument” like this:
The psychological data seem to show that subjects can see much more than working memory enables them to report. Thus, in the Landman et al. (2003) experiments, for instance, subjects show a capacity to identify the orientation of only four rectangles from a group of eight. Yet they typically report having seen the specific orientation of all eight rectangles. Working memory here seems to set a limit on the number of items available for conceptualization and hence report.
Work in neuroscience then suggests that unattended representations, forming parts of strong-but-still-losing clusters of activation in the back of the head, can be almost as strong as the clusters that win, are attended, and hence get to trigger the kinds of frontal activity involved in general broadcasting (broadcasting to the “global workspace”). But whereas Dehaene et al. (2006) treat the contents of such close-seconds as preconscious, because even in principle (given their de facto isolation from winning frontal coalitions) they are unreportable, Block urges us to treat them as phenomenally conscious, arguing that “the claim that they are not conscious on the sole ground of unreportability simply assumes metaphysical correlationism”… That is to say, it simply assumes what Block seeks to question – that is, that the kind of functional poise that grounds actual or potential report is part of what constitutes phenomenology. Contrary to this way of thinking, Block argues that by treating the just-losing coalitions as supporting phenomenally conscious (but in principle unreportable) experiences, we explain the psychological results in a way that meshes with the neuroscience.
The argument from mesh (which is a form of inference to the best explanation) thus takes as its starting point the assertion that the only grounds we have for treating the just-losing back-of-the-head coalitions as non-conscious is the unreportability of the putative experiences.
Block’s arguments don’t settle the issue, of course. As the numerous replies to Block (2007b) in that same journal issue point out, there are a great many models which fit the data Block describes (plus other data reported by others). I haven’t evaluated these experimental data and these models in enough detail to have a strong opinion about the strength of Block’s argument, but at a glance it seems to deserve at least some weight, at least in our current state of ignorance about how consciousness works.380
6.8.2 Split-brain patients
Second, consider the famous studies of split-brain patients, conducted by Michael Gazzaniga and others, which have often been argued to provide evidence of at least two separate conscious subjects in a single human brain, with each one (potentially) lacking introspective access to the other.
Gazzaniga himself originally interpreted his split-brain studies to indicate two separate streams of consciousness (Gazzaniga & LeDoux 1978), but later (Gazzaniga 1992) rejected the “double-consciousness” view, and suggested instead that consciousness is computed by the left hemisphere. Later (Gazzaniga 2002), he “conceded that the right hemisphere might be conscious to some degree, but the left hemisphere has a qualitatively different kind of consciousness, which far exceeds what’s found in the right.”381 Most recently, in 2016, Gazzaniga wrote that “there is ample evidence suggesting that the two hemispheres possess independent streams of consciousness following split-brain surgery.”382
Glancing at the literature myself, I find the evidence unclear. For example, as far as I know, we do not have detailed verbal reports of conscious experience from both hemispheres, even though three different split-brain patients have been able to learn to engage in some limited verbal communication from the right (and not just the left) hemisphere.383 I don’t know whether we lack this evidence because those patients just weren’t asked the relevant questions (e.g. they weren’t asked to describe their experience), or because their right-hemisphere verbal communication is too deficient (as with the “verbal communication” of the tiny portion of people with hydranencephaly who can utter word-ish vocalizations at all). See also the arguments back and forth in Schechter (2012) and Pinto et al. (2017).
Also note that if the evidence ends up supporting the view that there are (at least) two streams of consciousness in split-brain patients, this would seem to undermine the “basic argument” for relatively complex theories of consciousness (outlined above) less than, e.g., Block’s overflow argument does. A “double consciousness” view might show only that each hemisphere has the resources to support a (still quite complex) stream of consciousness when the two hemispheres are disconnected, rather than suggesting that the human brain may support a multitude of (potentially quite simple) streams of conscious processing.
6.8.3 Other cases of hemisphere disconnection
As Blackmon (2016) points out, in addition to split-brain patients there are also patients who have undergone a variety of “hemisphere disconnection” procedures:
surgical hemisphere disconnections are distinct from the more familiar “split-brain” phenomenon in which both hemispheres, despite having their connection via the corpus callosum severed, are connected with the rest of the brain as well as with the body via functioning sensory and motor pathways. Split-brain patients have two functioning hemispheres which receive sensory data and send motor commands; hemispherectomy patients do not.
Hemisphere disconnection procedures include:
- Anatomical hemispherectomy, in which an entire hemisphere is surgically removed from the cranium.
- Functional hemispherectomy, in which some of a hemisphere is removed while the rest is left inside the cranium (but still disconnected).
- Hemispherotomy, in which a hemisphere is disconnected entirely from the rest of the brain but left entirely in the cranium.
- The Wada test, which (successively) anesthetizes each hemisphere while the other hemisphere remains awake, in order to test how well each hemisphere does with memory, language, etc. without the help of the other hemisphere. We can think of this as a “temporary” hemisphere disconnection.
Like split-brain patients, patients undergoing these procedures seem to remain conscious, recover quickly, hold regular jobs, describe their experiences on online forums, and so on.
Studies of hemisphere disconnection provide data which complement the consciousness-relevant data we have from studies of split-brain patients. For example, hemisphere disconnection studies provide unambiguous data about the capabilities of the surviving hemisphere, whereas there is some ambiguity on this matter from split-brain studies, since interhemispheric cortical transfer of information remains possible in split-brain patients (via the superior colliculus), and the outputs of each hemisphere in split-brain patients might be integrated elsewhere in the brain (e.g. in the cerebellum).
I haven’t examined the hemisphere disconnection literature. But as with split-brain research, it seems as though it might (upon further examination) do some work to undermine the “basic argument” for relatively complex theories of consciousness outlined above.
6.8.4 Shiller’s arguments
Shiller (2016) makes two additional arguments that it is plausible that hidden qualia actually exist.384
His exceptionality argument goes like this: just like our other senses didn’t evolve to detect and make use of all potentially available information (e.g. about very small things, or far away things, or very high-pitched sounds, or ultraviolet light), because doing so would be more costly than it’s worth (evolutionarily), probably our introspective powers also haven’t evolved to access all the qualia and other processes going on in our heads. Unless introspection is exceptional (in a certain sense) among our senses, we should expect there to be qualia (and other cognitive processes) that our introspection just doesn’t have access to.
Next, Shiller’s argument from varieties goes like this:
The exceptionality argument centered on the likely limitations of introspection. The second argument that I will present focuses on the variety of kinds of hidden qualia that we might possibly have. Since it is independently plausible that we have many different varieties of qualia, it is more than merely plausible that we have at least one variety of hidden qualia. I have in mind a probabilistic argument: if we judge that there is a not too small probability that we have hidden qualia of each kind, then (given their independence) we are committed to thinking that it is fairly probable that we have at least one kind of hidden qualia. I will briefly describe five kinds of hidden qualia and present a few considerations for thinking that we might have hidden qualia of each kind…
At a glance, these two arguments seem to have some force, especially the first one. But I haven’t spent much time evaluating them in detail.
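To see the probabilistic structure of the argument from varieties, consider a toy calculation. The sketch below is a minimal illustration with made-up probabilities (they are not Shiller’s estimates), and it assumes the five kinds of hidden qualia are probabilistically independent, as the argument requires:

```python
# Purely illustrative, made-up probabilities for five hypothetical kinds of
# hidden qualia, assumed (as the argument requires) to be roughly independent.
p_each_kind = [0.15, 0.10, 0.20, 0.10, 0.15]

p_none = 1.0
for p in p_each_kind:
    p_none *= (1 - p)  # probability that this particular kind does NOT exist

print(f"P(at least one kind of hidden qualia) = {1 - p_none:.2f}")
# With these numbers, five individually unlikely kinds (10-20% each) combine
# into a roughly even chance (~0.53) that at least one kind of hidden qualia exists.
```

Whether the real probabilities are anywhere near this range, and whether the different kinds really are (approximately) independent, is of course exactly what would need to be evaluated.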
Of course, even if these and other arguments related to the complexity of consciousness turned out to strongly favor either a “relatively simple” or a “relatively complex” account of consciousness as defined by example above, one could still argue that “morally-relevant consciousness” can be more or less simple than “consciousness as defined by example above.”
6.9 Appendix Z. Miscellaneous elaborations and clarifications
This appendix collects a variety of less-important sub-appendices.
6.9.1 Appendix Z.1. Some theories of consciousness
Below, I list some theories of consciousness that I either considered briefly or read about in some depth.
The items in the table below overlap one another; they are not all theories of the exact same bundle of phenomena; and several of them are families of theories rather than individual theories. In some cases, the author(s) of a theory might not explicitly endorse both physicalism and functionalism about consciousness, but it seems to me their theories could be adapted in some way to become physicalist functionalist theories.
Obviously, this list is not comprehensive.
In no particular order:
Other theories I looked at briefly include prediction error minimization (Hohwy 2012), Nicholas Humphrey’s theory (Humphrey 2011), sensorimotor theory (O’Regan & Noe 2001; O’Regan 2011, 2012), the geometric theory (Fekete et al. 2016), semantic pointer competition (Thagard & Stewart 2014), the radical plasticity thesis (Cleeremans 2011), Björn Merker’s theory (Merker 2005, 2007), Markkula’s narrative behavior theory (Markkula 2015), Derek Denton’s theory (Denton 2006), attention schema theory (Graziano 2013; Webb & Graziano 2015; Graziano & Webb 2017), Julian Jaynes’ theory (Jaynes 1976; Kuijsten 2008), Gary Drescher’s theory (Drescher 2006, ch. 2; notes from my conversation with Gary Drescher), and Orch-OR theory (Hameroff & Penrose 2014).
What, exactly, must a theory of phenomenal consciousness explain? Example lists of desiderata include Van Gulick (1995), pp. 93-95 of Carruthers (2000), chapter 3 of Metzinger (2003), table 1 of Seth & Baars (2005), Aleksander (2007), chapter 1 of Prinz (2012), much of Baars (1988), and section 4.3 of Shevlin (2016).
For some other thoughts on theories of consciousness, see Appendix B.
6.9.2 Appendix Z.2. Some varieties of conscious experience
In the table below, I list some varieties of conscious experience, illustrating the diversity of conscious states for which we have some evidence about the subjective qualities of the phenomena from human verbal self-report.
These and other phenomena which reveal the variety of human conscious experience are collected in Kunzendorf & Wallace (2000), Vaitl et al. (2005), Bayne et al. (2009), Windt (2011) pp. 238-244, Cardeña & Winkelman (2011), Cardeña et al. (2014), Giacino et al. (2014), Bayne & Hohwy (2016), Laureys et al. (2015), Kriegel (2015); ch. 4 of Gennaro (2016), chs. 3-19 of Perry et al. (2002), Part III of Schneider & Velmans (2017), and other sources.
Phenomena not included in the table above, because we don’t (to my knowledge) have human verbal self-report of what it is like to subjectively experience them, include:
- Possible consciousness detected via neuroimaging of vegetative state patients: see e.g. Owen (2013); Chaudhary et al. (2017); Klein (2017a).
- Hydranencephaly: According to Aleman & Merker (2014), a small number of persons with hydranencephaly seem to use words meaningfully, and a small (perhaps overlapping) number also survive into their teenage or later years. However, I doubt there are any cases in which hydranencephalics can verbally report some details of their subjective experiences (if they have any).392
6.9.3 Appendix Z.3. Challenging dualist intuitions
I suspect that our dualist intuitions are the biggest barrier to embracing a physicalist, functionalist, illusionist theory of consciousness, which might otherwise be, as Dennett (2016a) puts it, “the obvious default theory of consciousness.”
Some sources that might be especially useful here include Dennett (1991, 2017), chapter 2 of Drescher (2006), Metzinger (2010), and Graziano (2013).
I think it can also be helpful to think about what kind of scientific progress might be needed for one to grok, at a “gut level,” how phenomenal consciousness could be “just” a set of physical processes and nothing more — even though the feeling of understanding is not necessarily strong evidence of accurate understanding (Trout 2007, 2016).
In my experience, the subjective feeling of understanding often comes when I can “see” (visualize) how a system of processes I already understand could “add up to” the system I’m trying to understand. Eliezer Yudkowsky illustrates this phenomenon with two examples, heat and socks:
On a high level, we can see heat melting ice and flowing from hotter objects to cooler objects. We can, by imagination, see how vibrating particles could actually constitute heat rather than causing a mysterious extra ‘heat’ property to be present. Vibrations might flow from fast-vibrating objects to slow-vibrating objects via the particles bumping into each other and transmitting their speed. Water molecules vibrating quickly enough in an ice cube might break whatever bonds were holding them together in a solid object.
…
For an even more transparent reductionist identity, consider, “You’re not really wearing socks, there are no socks, there’s only a bunch of threads woven together that looks like a sock.” Your visual cortex can represent this identity directly, so it feels immediately [obvious] that the sock just is the collection of threads; when you imagine sock-shaped woven threads, you automatically feel your visual model recognizing a sock.
…
The gap between mind and brain is larger than the gap between heat and vibration, which is why humanity understood heat as disordered kinetic energy long before anyone had any idea how ‘playing chess’ could be decomposed into non-mental simpler parts [as when chess-playing computers were invented].
Similarly, Dennett (1986) relates the following:
Sherry Turkle… talks about the reactions small children have to computer toys when they open them up and look inside. What they see is just an absurd little chip and a battery and that’s all. They are baffled at how that could possibly do what they have just seen the toy do. Interestingly, she says they look at the situation, scratch their heads for a while, and then they typically say very knowingly: “It’s the battery!” (A grown-up version of the same fallacy is committed by the philosopher John Searle, 1980, when he, arriving at a similar predicament, says: “It’s the mysterious causal powers of the brain that explain consciousness.”) Suddenly facing the absurdly large gap between what we know from the inside about consciousness and what we see if we take off the top of somebody’s skull and look in can provoke such desperate reactions. When we look at a human brain and try to think of it as the seat of all that mental activity, we see something that is just as incomprehensible as the microchip is to the child when she considers it to be the seat of all the fascinating activity that she knows so well as the behavior of the simple toy.
Unfortunately, current theories of consciousness aren’t yet detailed enough for most or all of us to visualize how the processes they posit could “add up to” subjective experience. (See also Appendix B.)
When I feel as though no amount of functional cognitive processing could ever “add up to” the phenomenality of phenomenal consciousness, I try to remind myself that some ancient biologists probably felt the same way when they tried to imagine how the interaction of inanimate, non-living parts could ever “add up to” living systems. I also remind myself of Edgar Allan Poe, in 1836, failing to see how mechanical parts could ever “add up to” the intelligent play of chess, despite his awareness of Charles Babbage’s Analytical Engine:
Arithmetical or algebraical calculations are, from their very nature, fixed and determinate. Certain data being given, certain results necessarily and inevitably follow… But the case is widely different with the Chess-Player [i.e. von Kempelen’s “Mechanical Turk”]. With him there is no determinate progression. No one move in chess necessarily follows upon any one other. From no particular disposition of the [chess pieces] at one period of a game can we predicate their disposition at a different period… Now even granting that the movements of the Automaton Chess-Player were in themselves determinate, they would be necessarily interrupted and disarranged by the indeterminate will of his antagonist. There is then no analogy whatever between the operations of the Chess-Player, and those of the calculating machine of Mr. Babbage… It is quite certain that the operations of the Automaton are regulated by mind, and by nothing else. Indeed this matter is susceptible of a mathematical demonstration, a priori.393
So, if you cannot now intuitively grok how phenomenal consciousness could be “just” a set of physical processes and nothing more, perhaps you can nevertheless take it to be a lesson of history that, once the mechanisms of consciousness are much more thoroughly understood and described than they are now, you may then be able to “see” how they add up to phenomenal consciousness, just as you can now see how some computer hardware and software can add up to intelligent chess play. (If you’re not familiar with how computers play chess, see e.g. Levy & Newborn 1991.)
Of course, nothing I’ve said in this section engages the arguments that have been put forward for why a physicalist, functionalist explanation of consciousness may not be forthcoming (e.g. Chalmers 1997, 2010). I don’t discuss those arguments in this report (see above);394 the purpose of this appendix is merely to give the reader a sense of why I distrust my dualistic intuitions about consciousness, and point to some recommended readings (see above).
6.9.4 Appendix Z.4. Brief comments on unconscious emotions
Previously, I mentioned that research on unconscious emotions might lend support to some “cortex-required views” about consciousness. I did not investigate this literature thoroughly, but I make some brief comments on unconscious emotions below.
In common parlance, typical emotion words such as “fear” and “desire” and “excitement” refer to a particular kind of conscious experience in addition to a set of physiological and behavioral responses. However, most scientific studies of “emotion” do not measure (self-reported) conscious experience, but instead measure only physiological or behavioral responses. This is obviously true for studies of animal emotion, since animals cannot report conscious experiences. But it is also true of many studies of human emotion.395 Hence, most scientific studies of “emotion,” especially in animals, don’t necessarily assume that “emotions” must involve conscious experiences.
Thus, in this report, I use “emotion” to refer to certain kinds of cognitive processing, physiological responses, and behavioral responses, which might or might not also involve conscious experience. But, in keeping with everyday usage, I reserve the word “feelings,” and terms for specific emotions such as “fear,” for emotional responses that do involve certain kinds of conscious experiences. In contrast, when referring only to cognitive processing, physiological responses, and behavioral responses — e.g. as examined in animal studies — I avoid consciousness-implying terms like “fear” in favor of more neutral terms like “threat response.”396
Under this terminology, an “unconscious emotion” is an emotional response — involving certain kinds of cognitive processing, physiological response, and/or behavioral response — without an accompanying conscious experience of that emotion. This is analogous to the above discussion of “unconscious vision,” which involves certain kinds of cognitive processing and behavioral response without any conscious experience of that visual processing.
If humans exhibit genuinely unconscious emotions, and the conscious experience of emotion seems to depend on neural circuits in certain cortical structures, this could lend some further support to some “cortex-required” views.
So, what is the current state of the evidence? As far as I can tell, the matter of unconscious emotions remains under considerable debate.397 Moreover, my sense is that the neuroscience of emotions is less well-developed than the neuroscience of vision (likely because vision is easier to study).
Below, I make some brief remarks related to just one example of “unconscious emotion”: namely, unconscious “pleasure.”
Perhaps the leading theory in the neuroscience of pleasure is the liking/wanting theory developed by Kent Berridge and others. In short, this theory claims that (Berridge & Kringelbach 2016):
Affective neuroscience studies have further indicated that even the simplest pleasant experience, such as a mere sensory reward, is actually a more complex set of processes containing several psychological components, each with distinguishable neurobiological mechanisms… These include, in particular, distinct components of reward wanting versus reward liking (as well as reward learning), and each psychological component has both conscious and nonconscious subcomponents. Liking is the actual pleasure component or hedonic impact of a reward; wanting is the motivation for reward; and learning includes the associations, representations, and predictions about future rewards based on past experiences.
We distinguish between the conscious and nonconscious aspects of these subcomponents because both exist in people… At the potentially nonconscious level, we use quotation marks to indicate that we are describing objective, behavioral, or neural measures of these underlying brain processes. As such, “liking” reactions result from activity in identifiable brain systems that paint hedonic value on a sensation such as sweetness, and produce observable affective reactions in the brain and in behavior such as facial expressions. Similarly, “wanting” includes incentive salience or motivational processes within reward that mirror hedonic “liking” and make stimuli into motivationally attractive incentives. “Wanting” helps spur and guide motivated behavior, when incentive salience is attributed to stimulus representations by the mesolimbic brain systems. Finally, “learning” includes a wide range of processes linked to implicit knowledge as well as associative conditioning, such as basic Pavlovian and instrumental associations.
…By themselves, core “liking” and “wanting” processes can occur nonconsciously, even in normal people.
Berridge & Kringelbach (2015) summarizes the evidence on unconscious vs. conscious “liking” and “wanting”:
…in humans, the [unconscious and conscious] forms of hedonic reaction can be independently measured. For example, objective hedonic “liking” reactions can sometimes occur alone and unconsciously in ordinary people without any subjective pleasure feeling at all, at least in particular situations (e.g., evoked by subliminally brief or mild affective stimuli)… Unconscious “liking” reactions still effectively change goal-directed human behavior, though those changes may remain undetected or be misinterpreted even by the person who has them… More commonly, “liking” reactions occur together with conscious feelings of liking and provide a hedonic signal input to cognitive ratings and subjective feelings. However, dissociations between [unconscious and conscious] hedonic reaction[s] can still sometimes occur in normal people due to the susceptibility of subjective ratings of liking to cognitive distortions by framing effects, or as a consequence of theories concocted by people to explain how they think they should feel… For example, framing effects can cause two people exposed to the same stimulus to report different subjective ratings, if one of them had a wider range of previously experienced hedonic intensities (e.g., pains of childbirth or severe injury)… In short, there is a difference between how people feel and report subjectively versus how they objectively respond with neural or behavioral affective reactions. Subjective ratings are not always more accurate about hedonic impact than objective hedonic reactions and the latter can be measured independently of the former.
I have not read the primary studies cited in these review articles, but my sense is that there is much less evidence concerning unconscious pleasure than there is concerning unconscious vision.
6.9.5 Appendix Z.5. The lack of consensus in consciousness studies
Here I use a small set of examples to illustrate the lack of consensus about several different aspects of consciousness.
Note that while I have made some effort to ensure that the sources cited below are “talking about the same thing” — phenomenal consciousness, as opposed to e.g. self-consciousness or the capacity for distinct waking/sleeping states — there is (perhaps unavoidably) much ambiguous language in the consciousness literature, and thus no doubt some of the apparent diversity of opinion results from experts talking about different things rather than having different views about “the same thing.” For example, some of the discussions below may have been intended as accounts of a limited class of phenomenal consciousness (e.g. human phenomenal consciousness), rather than as accounts of all types of phenomenal consciousness.
Example disagreements in consciousness studies:
Is consciousness physical? The largest survey of professional philosophers I’ve seen — the PhilPapers Survey, conducted in late 2009 (results; paper) — found that among “Target Faculty” (i.e. “all regular faculty members in 99 leading departments of philosophy”), 56.5% of respondents accepted or leaned toward physicalism about the mind, 27.1% of respondents accepted or leaned toward non-physicalism about the mind, and 16.4% of respondents gave an “Other” response.
What is the extent of consciousness in the natural world / when did it evolve? For overviews, see Velmans (2012), Swan (2013), and Godfrey-Smith (2017) (from Andrews & Beck 2017). Some specific possibilities include:
- Consciousness precedes the emergence of living systems (Chalmers 2015; Howe 2015).
- Consciousness evolved at least as early as some single-celled organisms (Baluškaa & Mancuso 2014; Braun 2015; and for context see Lyon 2015 and Bray 2011).
- Consciousness evolved at least as early as some plants (Nagel 1997; Smith 2016).
- Consciousness evolved ~520 million years ago, during or just before the Cambrian explosion (Feinberg & Mallatt 2016; Graziano 2014; Ginsburg & Jablonka 2007).
- Consciousness evolved at least as early as some insects (Barron & Klein 2016; maybe also Merker 2005).
- Consciousness evolved after fishes but before birds and mammals (Cabanac et al. 2009; Rose et al. 2014; Key 2016).
- Consciousness evolved very late, in humans and perhaps in some apes (Macphail 2000; Carruthers 2000; Dennett 1995).
Was consciousness selected for, or is it a spandrel? For an overview of the debate, see Robinson et al. (2015).
What is the functional role of consciousness? Metzinger (2010), p. 55, summarizes some of the proposed options:
Today, we have a long list of potential candidate functions of consciousness: Among them are the emergence of intrinsically motivating states, the enhancement of social coordination, a strategy for improving the internal selection and resource allocation in brains that got too complex to regulate themselves, the modification and interrogation of goal hierarchies and long-term plans, retrieval of episodes from long-term memory, construction of storable representations, flexibility and sophistication of behavioral control, mind reading and behavior prediction in social interaction, conflict resolution and troubleshooting, creating a densely integrated representation of reality as a whole, setting a context, learning in a single step, and so on.
Which theory of consciousness is most likely correct? Lists, taxonomies, and collections of theories of consciousness include chapter 3 of Carruthers (2005), Katz (2013), Cavanna & Nani (2014), Sun & Franklin (2007), McGovern & Baars (2007), Van Gulick (2014), chs. 10-11 of Revonsuo (2009), and Part IV of Schneider & Velmans (2017). Some introductory texts on consciousness also capably survey the leading theories, e.g. Weisberg (2014).
Finally, to illustrate the methodological diversity of consciousness studies, compare:
- David Chalmers’ metaphysical thought experiments (Chalmers 1997, 2010).
- The largely science-driven arguments of Dennett (1991), Metzinger (2003), Merker (2005), Prinz (2012), Dehaene (2014), and Tye (2016), chs. 5-9.
- The paradigms for probing human consciousness discussed in sources such as Miller (2015), Overgaard (2015), Goodale & Milner (2013), Laureys et al. (2015), and Breitmeyer & Ogmen (2006).
- Consciousness-relevant investigations in comparative ethology and comparative neuroanatomy, such as can be found in Shettleworth (2009), Vonk & Shackelford (2012), Pearce (2008), Wynne & Udell (2013), and Herculano-Houzel (2016).
- Standish (2013)’s anthropic argument against ant consciousness.
Obviously, this list of examples is far from exhaustive.
6.9.6 Appendix Z.6. Against hasty eliminativism
Earlier, I argued against “hasty” eliminativism. Here, I develop this line of thinking a bit further.
I agree with a great deal of Irvine (2013)’s survey of the scientific and measurement difficulties currently obstructing progress in consciousness studies, but whereas Irvine is ready to embrace eliminativism about consciousness, I prefer to wait to see how our concept of consciousness evolves in response to empirical discoveries, and decide later whether we want to modify the concept or toss it in the trash next to “phlogiston.”
One reason to resist quick elimination of “consciousness” is articulated nicely by Flanagan (1992), pp. 23-24:
Two formidable arguments for [eliminating] consciousness involve attempts to secure an analogy between the concept of consciousness and some other concept in intellectual ruin. It has been suggested that the concept of consciousness is like the concept of phlogiston or the concept of karma. One shouldn’t think in terms of such concepts as phlogiston or karma. And it would be a philosophical embarrassment to try to develop a positive theory of karma or phlogiston. There simply are no such phenomena for there to be theories about. Let me explain why the analogies with karma and phlogiston do not work to cast doubt on the existence of consciousness or on the usefulness of the concept.
…There is no single orthodox concept of consciousness. Currently afloat in intellectual space are several different conceptions of consciousness, many of them largely inchoate. The Oxford English Dictionary lists 11 senses for ‘conscious’ and 6 for ‘consciousness’. The meanings cover self-intimation, being awake, phenomenal feel, awareness of one’s acts, intentions, awareness of external objects, and knowing something with others. The picture of consciousness as a unified faculty has no special linguistic privilege, and none of the meanings of either ‘conscious’ or ‘consciousness’ wear any metaphysical commitment to immaterialism on its sleeve. The concept of consciousness is neither unitary nor well regimented, at least not yet.
This makes the situation of the concept of consciousness in the late twentieth century very different from that of the concept of phlogiston in the 1770s. I don’t know if ordinary folk had any views whatsoever about phlogiston at that time. But the concept was completely controlled by the community of scientists who proposed it in the late 1600s and who characterized phlogiston as a colorless, odorless, and weightless “spirit” that is released rapidly in burning and slowly in rusting. Once the spirit is fully released, we are left with the “true material,” a pile of ash or rust particles.
…if I am right that the concept of consciousness is simply not owned by any authoritative meaning-determining group in the way the concept of phlogiston was owned by the phlogiston theorists, then it will be harder to isolate any single canonical concept of consciousness that has recently come undone or is in the process of coming undone, and thus that deserves the same tough treatment that the concept of phlogiston received.
For additional elaboration of (something like) my reasons for resisting quick eliminativism about consciousness (among other concepts), see e.g. Chang (2011), Arabatzis (2011), Ludwig (2014), Ludwig (2015), and Taylor & Vickers (2016). In particular, I want to highlight my agreement with Ludwig (2014) that
Scientific ontologies are constantly changing through the introduction of new entities and the elimination of old entities that have become obsolete… The ubiquity of elimination controversies in the human sciences raises the general but rarely discussed… question [of when] scientists should eliminate an entity from their ontology. Typically, elimination controversies focus on one specific entity and consider other cases of ontological elimination only briefly through analogies to obsolete entities in the history of science such as the élan vital, ether, phlogiston, phrenological organs, or even witchcraft… I want to argue that this situation is unfortunate as it often leads to the implicit use of an oversimplified “phlogiston model” of ontological elimination… that proves inadequate for many debates in the human sciences… [I propose] a more complex model that interprets ontological elimination as typically located on gradual scale between criticism of empirical assumptions and conceptual choices…
Another way to express my attitude on the matter is to express sympathy for the view that Baars & McGovern (1993) attribute to “average cognitive psychologists”:
…if we were to compel the average cognitive psychologists to give an opinion on the matter, we would no doubt hear something like… “Yes, of course, we will ultimately be able to explain human behavior and experience in neural terms. In the meantime, it is useful, and perfectly good science, to work at a higher level of analysis, in which we postulate mental events such as thoughts, percepts, goals, and the like.”
6.9.7 Appendix Z.7. Some candidate dimensions of moral concern
In this appendix I briefly remark on some candidate dimensions of moral concern that could be combined to estimate the relative “moral weight” of a species or other taxon of cognitive system, as mentioned briefly above.
Many commonly-discussed candidate dimensions of moral concern are captured by theories of well-being. Crisp (2013) organizes philosophical theories of well-being into three categories: hedonistic theories according to which well-being is the presence of pleasure and the absence of pain, desire theories according to which well-being is getting what one wants, and objective list theories according to which well-being is the presence or absence of certain objective characteristics (potentially including hedonistic and desire-related characteristics). Fletcher (2015) offers a different categorization, including chapters on hedonistic theories, perfectionistic theories, desire-fulfillment theories, objective list theories, hybrid theories, subject-sensitive theories, and eudaimonistic theories. In the social sciences, human objective well-being is often measured using variables such as education, health status, personal security, income, and political freedom (see e.g. OECD 2014), while human subjective well-being is typically conceived of in terms of life satisfaction, hedonic affect, and eudaimonia (psychological “flourishing”): see e.g. OECD (2013). For another overview of several approaches to well-being, see the chapters in Part II of Adler & Fleurbaey (2016).398
Some candidate dimensions of moral concern captured by these theories of well being — e.g. pain and pleasure — are widely thought to vary in their moral importance depending on other parameters such as “intensity” and duration.399 With respect to duration, there is also a question about objective vs. subjective duration.400
Another set of candidate dimensions of moral concern involves various ways in which consciousness can be “unified” or “disunified.” Following Bayne (2010)’s taxonomy (ch. 1), we might consider the moral relevance of “subject unity,” “representational unity,” and “phenomenal unity” — each of which has a “synchronic” (momentary) and “diachronic” (across time) aspect (see footnote 401). For example, Daniel Dennett seems to appeal to some kinds of disunity when explaining why he doesn’t worry much about the conscious experiences of most animals (if they have any).402
In a cost-benefit framework, one’s estimates concerning the moral weight of various taxa are likely more important than one’s estimated probabilities of the moral patienthood of those taxa. This is because, for the range of possible moral patients of most interest to us, it seems very hard to justify probabilities of moral patienthood much lower than 1% or much higher than 99%. In contrast, it seems quite plausible that the moral weights of different sorts of beings could differ by several orders of magnitude. Unfortunately, estimates of moral weight are trickier to make than, and in many senses depend upon, one’s estimates concerning moral patienthood.
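To illustrate why the weight term can dominate, here is a minimal expected-value sketch. Every number in it is hypothetical — nothing below is an estimate of any actual taxon’s probability of patienthood or moral weight:

```python
# Toy comparison, with entirely hypothetical numbers, of how patienthood
# probabilities and moral weights combine in a simple cost-benefit framework.

def expected_moral_value(p_patienthood, moral_weight, individuals_affected):
    """Expected moral value under a simple 'probability x weight x scale' model."""
    return p_patienthood * moral_weight * individuals_affected

n = 1_000_000  # same number of individuals affected in both hypothetical scenarios

# Patienthood probabilities plausibly range only from ~1% to ~99% (about two
# orders of magnitude), whereas moral weights might differ by several orders
# of magnitude. These particular values are made up for illustration.
high_probability_low_weight = expected_moral_value(0.99, 0.001, n)  # = 990
low_probability_high_weight = expected_moral_value(0.01, 1.0, n)    # = 10,000

print(high_probability_low_weight, low_probability_high_weight)
# A 99x difference in probability is swamped by a 1,000x difference in weight.
```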
6.9.8 Appendix Z.8. Some reasons for my default skepticism of published studies
In several places throughout this report, I’ve mentioned my relatively high degree of skepticism about the expected reliability of consciousness-related scientific studies (e.g. on animal behavior, or self-reported human consciousness) that I have cited but not personally examined beyond a cursory look. Here, I provide some examples of published findings or personal research experiences that have led me to become unusually skeptical about the likely robustness of most published scientific studies (that I haven’t examined personally):
- Nosek et al. (2015) conducted high-powered replications of 100 studies from three top-ranked psychology journals, and found that “the mean effect size (r) of the replication effects… was half the magnitude of the mean effect size of the original effects,” 97% of the original studies had significant results (P < .05) whereas only 36% of replications had significant results, and only “39% of effects were subjectively rated to have replicated the original result.” Attempts to downplay the significance of this result have not been persuasive (to me).
- This “replication crisis” is not limited to psychology.403 Similarly discouraging results have been observed across many studies in cancer biology (Prinz et al. 2011; Begley & Ellis 2012; Kaiser 2017), economics (Hou et al. 2017; Camerer et al. 2016; Duvendack et al. 2015; Hubbard & Vetter 1992), genetics (Hirschhorn et al. 2002; Siontis et al. 2010; Benjamin et al. 2012; Farrell et al. 2015), marketing (Hubbard & Armstrong 1994), forecasting (Evanschitzky & Armstrong 2010), and other fields. For comments on how the replication crisis and some of the specific issues listed below may affect neuroimaging studies in particular, see e.g. Poldrack et al. (2017); Vul & Pashler (2017); Uttal (2012). For a popular introduction to the replication crisis in biomedicine, see Harris (2017).
- Similar results seem to often hold beyond the domain of strict replications. For example, a systematic review (Ioannidis 2005a) of highly-cited, top-journal clinical research studies published from 1990-2003 found that, among those 34 studies which could be compared against a later study (or meta-analysis) of the same questions and comparable or larger sample size or a better-controlled design, 41% were either contradicted by subsequent studies or found noticeably smaller effects than subsequent studies did. (The other 59% were supported by the later studies.)
- Or, consider the case of “medical reversal,” in which a current “standard of care” medical therapy, which has already undergone enough testing to be widely provided — sometimes to millions of patients and at the cost of billions of dollars — is found (usually via a large randomized controlled trial) to actually be inferior to a lesser or prior standard of care (e.g. not effective at all, and sometimes even harmful). In a systematic review of published articles testing a current standard of care, 40.2% of these studies resulted in medical reversal (Prasad et al. 2013; Prasad & Cifu 2015).
- Many (most?) published studies suffer from low power. It is commonly thought (and taught) that low power is a problem primarily because it can lead to a failure to detect true effects, resulting in a waste of research funding. But low power also has two arguably more insidious consequences: it increases the probability that a statistically significant finding is a false positive rather than a true effect, and it exaggerates the size of measured true effects (Button et al. 2013); a minimal simulation illustrating both consequences appears after this list. A systematic review of power issues in 10,000 papers in psychology and cognitive neuroscience suggested that the rate of false positives for this literature likely exceeds 50%. The problem seems to be especially bad in cognitive neuroscience (Szucs & Ioannidis 2017), with obvious implications for the scientific study of consciousness.
- Even merely re-analyzing the data from a study can often produce a different result. For example, in a systematic review of reanalyses of data from randomized clinical trials, 35% of the re-analyses “led to interpretations different from that of the original article” (Ebrahim et al. 2014).
- In my experience, when I read review articles in the life and social sciences, and then examine the primary evidence myself, I almost always come away with a lower opinion of the strength of the reported evidence than is suggested in the relevant review articles. For example, compare the review articles cited in my report on behavioral treatments for insomnia to my own conclusions in that report.
- Given the strong incentives to publish positive results, and the many available methods for doing so even in the absence of “true” positive results (via “researcher degrees of freedom”404), some simulations405 suggest that we should expect many (perhaps most) published research findings to be false. Moreover, researchers do seem to make extensive use of these researcher degrees of freedom: for example, in a systematic review of 241 fMRI studies, there were “nearly as many unique analysis pipelines as there were studies in the sample” (Carp 2012).
- In my experience, most top-journal primary studies and meta-analyses in the life and social sciences (that I read closely) turn out to rely heavily on statistical techniques that are inappropriate given the design of the study and/or the nature of the underlying data. Here is one example: in the life sciences and social sciences, the most common algorithm for quantitative synthesis of effect sizes from multiple primary studies is probably the DerSimonian-Laird (DL) algorithm (DerSimonian & Laird 1986), even though (a) it was never shown (in simulations) to be appropriate for use in most of the situations for which it is used406 (e.g. when primary studies vary greatly in sample size, or when primary studies are few and small and heterogeneous), (b) better-performing algorithms are available (e.g. IntHout et al. 2014), and (c) even the authors of the algorithm seem to concede it is not appropriate for establishing the statistical significance of summary effects (DerSimonian & Laird 2015, p. 142). Here is a second example: when studying the literature on subjective well-being, I learned (from Cranford et al. 2006) that studies using ecological momentary assessment (EMA) had, for decades, typically used statistical tests appropriate for studying between-person differences, whereas EMA focuses on within-person changes. A long-time leader in the field confirmed this in conversation.407
- In line with my personal experience, systematic reviews of the appropriateness of statistical tests used in published papers find high rates of straightforward statistical errors, for example inappropriate use of parametric tests, failure to account for multiple comparisons,408 the use of “fail-safe N” to protect against publication bias, and other problems (Assmann et al. 2000; Scales et al. 2005; Whittingham et al. 2006; Heene 2010; Button et al. 2013; Nuijten et al. 2016; Eklund et al. 2016; Westfall & Yarkoni 2016).409
- In surveys, a substantial fraction of researchers from a variety of fields admit to engaging in “questionable research practices” (QRPs) that may undermine the validity of their published results (Martinson et al. 2005; John et al. 2012; Necker 2014; Agnoli et al. 2017).410 This general finding is bolstered by a variety of analytic attempts to estimate the rate of QRPs in various fields, for example by examining how reported findings change from the dissertation version of a study to the published article version (Mazzola & Deuling 2013), or by directly inspecting published correlation matrices (Bosco et al. 2015). For an overall review, see Banks et al. (2016).
- Huge swaths of literature in the social sciences depend on self-report measures that have not been validated using the standards typically required (Reise & Revicki 2015; Millsap 2011) for measures used in (e.g.) high-stakes testing or patient-reported outcomes in health care. Also, systematic comparisons of self-reported data against data collected using “gold standard” objective measures (e.g. administrative data) suggest that self-report measures across a variety of fields result in substantial measurement error (see the sources in this footnote).
- Failure to share study data is widespread,411 and (in psychology, at least) predicts less-robust results (Wicherts et al. 2011). I would guess the same is true in many other fields.
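As promised above, here is a minimal simulation sketch of the two consequences of low power. All of the numbers in it (the true effect size, the sample sizes, the proportion of hypotheses that are true, the number of studies) are made-up assumptions for illustration, not estimates from Button et al. (2013) or Szucs & Ioannidis (2017):

```python
# A minimal simulation sketch of two consequences of low statistical power.
# All numbers here are made-up assumptions for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def simulate_literature(n_per_group, n_studies=20_000, true_effect=0.3,
                        prop_true=0.25, alpha=0.05):
    """Simulate a literature of two-group studies and summarize what the
    statistically significant results look like."""
    sig_true, sig_false, sig_estimates = 0, 0, []
    for _ in range(n_studies):
        has_real_effect = rng.random() < prop_true
        d = true_effect if has_real_effect else 0.0
        control = rng.normal(0.0, 1.0, n_per_group)
        treatment = rng.normal(d, 1.0, n_per_group)
        p_value = stats.ttest_ind(treatment, control).pvalue
        if p_value < alpha:
            if has_real_effect:
                sig_true += 1
                sig_estimates.append(treatment.mean() - control.mean())
            else:
                sig_false += 1
    share_false = sig_false / (sig_true + sig_false)
    mean_sig_effect = float(np.mean(sig_estimates))
    return share_false, mean_sig_effect

for n in (20, 200):  # low-powered vs. better-powered studies of the same effect
    share_false, mean_sig_effect = simulate_literature(n)
    print(f"n={n:>3} per group: {share_false:.0%} of significant results are false "
          f"positives; significant true effects average {mean_sig_effect:.2f} "
          f"(true effect = 0.30)")
```

With these assumptions, the low-powered (n = 20) arm should produce a literature in which roughly half of the “significant” findings are false positives and the true effects that do reach significance are substantially overestimated; both problems should be much milder in the better-powered (n = 200) arm.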
Of course, failed replications and flawed methods don’t prove the original reported results are false, merely that they are not adequately supported. But that is just what I mean when I say that, prior to examining them myself, I expect most published studies not to “hold up” under attempted replication or close scrutiny.
Unfortunately, these points also undermine my ability to fully trust the studies I’ve cited above about the trustworthiness of published studies, only a few of which I’ve personally examined “somewhat closely” (i.e. for more than 30 minutes). Obviously, such “meta-research” is not immune from replication failures and flawed methodology.
To illustrate: when I first began to study the rate at which non-randomized studies are confirmed or contradicted by later (and presumably more trustworthy) randomized controlled trials, one of the first studies I came across was Young & Karr (2011). After reading the study, I excitedly tweeted it, along with the following comment: “Reminder: very few correlations from epidemiological studies hold up in RCTs — in this study, an impressive 0/52.” However, Young & Karr (2011) itself suffers multiple flaws. For example, the selection of primary studies was not systematic, and the authors provide no detail on how the studies they examined were chosen. Moreover, as I sought out additional reviews of this issue, I found that every large-scale review of this question that was conducted in a systematic, transparently-reported way found very different results than Young & Karr did (e.g. Deeks et al. 2003; Odgaard-Jensen et al. 2011; Anglemyer et al. 2014412). Today, I do not trust Young & Karr’s result.413
6.9.9 Appendix Z.9. Early scientific progress tends to lead to more complicated models of phenomena
I remarked above that, as far as I can tell, early scientific progress on a then-mysterious phenomenon tends to make our models of that phenomenon increasingly complicated (except in fundamental physics, and I strongly doubt that consciousness is a feature of fundamental physics414). If this pattern holds true for the scientific study of consciousness, too, then consciousness could turn out to be a relatively complicated phenomenon, and this would push me in the direction of thinking that consciousness can be found in fewer systems than if consciousness turned out to be simple.
Consider the case of recognizing oneself in the mirror. From the inside, it feels as though this is a fairly simple process: I see myself in the mirror, and I immediately recognize that it’s me. When I introspect about this process, I don’t detect any “steps” to it, or any series of complicated manipulations of data. Moreover, anyone trying to construct a theory of how mirror self-recognition works, before the age of computers and algorithms, wouldn’t have known enough about how it might work to propose any complicated theory for it. But of course we now know — both from the neuroscientific study of vision and from attempts to build machine vision systems that can recognize objects (including themselves)415 — that mirror self-recognition is actually a highly specific and complex computational process, one that very few computational systems can execute. Indeed, while chimpanzees seem to recognize themselves in mirrors, even gorillas generally do not.416
That said, complexity doesn’t necessarily imply rarity. Consider again the case of “life,” which turned out to be both more complicated and more extensive than we had once supposed.417
Back when life was fairly mysterious to us, any specific proposed theory of life was almost unavoidably simple. When we know so little about something that it is “mysterious” to us, we often don’t even know enough detail to propose a specific and complicated model, so we either propose specific and simple models, or we say “it’s probably complex, but I can’t propose any specific theory of that complexity at this time.”
So for example in the case of life, Xavier Bichat proposed in 1801 that living things are distinguished by certain “vital properties,” which he described as fundamental properties of the universe alongside gravity.418 Some of Bichat’s contemporaries instead argued that organic and inorganic processes differed in complexity rather than in fundamental properties or substances, but in my understanding, they couldn’t propose specific complicated theories of life — all they could say was that life would probably turn out to be complicated,419 as I am saying for consciousness.
Of course, early scientific progress on life revealed that life is, in fact, fairly complex: it is made of many parts, interacting in particular, complicated ways that were impossible to predict in detail prior to their discovery.
This pattern — that early scientific progress tends to lead to more complicated models of phenomena — seems especially true of behavioral and cognitive phenomena. In early theorizing about these phenomena, there were some who proposed specific and relatively simple models of a given phenomenon, and there were those who said “It’s probably complicated, but I don’t know enough detail to specify a complicated model,” and as far as I know, these phenomena have always turned out to be more complicated than any simple, specific model we could propose early on. (Of course, the early, simple, false models were often useful for guiding scientific progress,420 but that is different from saying that the early, simple models have tended to be correct.)
If this story is right — and I’m not sure it is, as I’m not an historian of science — then it should not be a surprise that most currently proposed and highly specific theories are quite simple,421 and thus imply a relatively extensive distribution of consciousness. That is what early specific models always look like, and if we consider the history of scientific progress on other behavioral and cognitive mechanisms, consciousness seems likely to turn out to be a good deal more complicated than our early specific models suggest.
Of course, as the example of life shows, we might discover that consciousness is highly complex and yet still surprisingly extensive.422 Or perhaps it will be complex and rare (like mirror self-recognition).
6.9.10 Appendix Z.10. Recommended readings
In this appendix, I provide links to some “especially recommended readings” related to a few of the major topics covered in this report. Many topics covered in this report are not listed below because I haven’t yet found introductory sources on those topics that I “especially recommend” — for example, this is the case for moral patienthood in general, PCIF arguments in general, cortex-required views in general, and the question of moral weight.
IF YOU WANT TO READ… | I RECOMMEND… |
---|---|
…a brief introduction to consciousness studies, focused on metaphysical debates and contemporary theories of consciousness | Weisberg, Consciousness (2014) |
…a detailed, reasonably theory-neutral discussion of the distribution question, across many animal taxa, which comes to different conclusions than I have | Tye, Tense Bees and Shell-Shocked Crabs (2016) |
…a book on consciousness I admire despite disagreeing with it at a fairly fundamental level | Chalmers, The Conscious Mind (1997) |
…a detailed effort to (properly, in my view) undermine commonly held, “default” intuitions about consciousness | Dennett, Consciousness Explained (1991) |
…the basic case for illusionism about consciousness | Frankish, “Quining diet qualia” (2012) and then “Illusionism as a theory of consciousness” (2016) |
…an introduction to animal cognition and behavior | Shettleworth, Cognition, Evolution, and Behavior, 2nd edition (2009) or Wynne & Udell, Animal Cognition, 2nd edition (2013) |
…a short introduction to some basic issues related to the evolution of consciousness | Godfrey-Smith, “The evolution of consciousness in phylogenetic context” (2017) |
…an introduction to unconscious vision | Goodale & Milner, Sight Unseen, 2nd edition (2013) |
7 Sources
DOCUMENT | SOURCE |
---|---|
Aaronson (2013) | Source (archive) |
Aaronson (2014a) | Source |
Aaronson (2014b) | Source |
Achen & Bartels (2006) | Source (archive) |
Ackerman (2016) | Source (archive) |
Adamo et al. (2009) | Source (archive) |
Adler & Fleurbaey (2016) | Source (archive) |
Agnoli et al. (2017) | Source (archive) |
Alanen (2003) | Source (archive) |
Alberts et al. (2016) | Source |
Aleksander (2007) | Source (archive) |
Aleksander (2017) | Source (archive) |
Aleman & Larøi (2008) | Source |
Aleman & Merker (2014) | Source (archive) |
Alkire et al. (2008) | Source (archive) |
Allen (2013) | Source (archive) |
Allen et al. (2009) | Source (archive) |
Allen-Hermanson (2008) | Source (archive) |
Allen-Hermanson (2016) | Source (archive) |
Almond (2008) | Source |
Althaus (2003) | Source (archive) |
Amazon Web Services | Source (archive) |
Amazon, “The Official Mike the Headless Chicken Book” | Source (archive) |
Amting et al. (2010) | Source (archive) |
Anderson (2004) | Source (archive) |
Anderson & Gallup Jr. (2015) | Source (archive) |
Andreotti et al. (2014) | Source (archive) |
Andrews & Beck (2017) | Source (archive) |
Anglemyer et al. (2014) | Source (archive) |
Anselme & Robinson (2016) | Source (archive) |
Antony (2008) | Source (archive) |
Arabatzis (2011) | Source (archive) |
Arbital, “Executable philosophy” | Source |
Arbital, “Extrapolated volition (normative moral theory)” | Source |
Arbital, “Ontology identification problem” | Source |
Arbital, “Rescuing the utility function” | Source |
Arizona State University, Department of Psychology, Clive Wynne | Source (archive) |
Armknecht et al. (2015) | Source (archive) |
Armstrong (1968) | Source (archive) |
Aranyosi (2013) | Source (archive) |
Arrabales (2010) | Source (archive) |
Arzimanoglou & Ostrowsky-Coste (2010) | Source (archive) |
Ashley et al. (2007) | Source (archive) |
Assael et al. (2016) | Source (archive) |
Assmann et al. (2000) | Source (archive) |
Baars (1988) | Source (archive) |
Baars & McGovern (1993) | Source (archive) |
Baars et al. (2003) | Source (archive) |
Baars et al. (2013) | Source (archive) |
Bach-y-Rita & Kercel (2003) | Source (archive) |
Bain (2014) | Source (archive) |
Baird (1905) | Source (archive) |
Baker (2016) | Source (archive) |
Bakker (2017) | Source (archive) |
Balcombe (2006) | Source (archive) |
Balcombe (2016) | Source (archive) |
Ballarin et al. (2016) | Source (archive) |
Baluška & Mancuso (2014) | Source (archive) |
Banissy et al. (2009) | Source (archive) |
Banissy et al. (2014) | Source (archive) |
Banks et al. (2016) | Source (archive) |
Bargh & Morsella (2010) | Source (archive) |
Barnow & Greenberg (2014) | Source (archive) |
Baron-Cohen (1995) | Source (archive) |
Barrett (2014) | Source |
Barrett (2017) | Source (archive) |
Barrett et al. (2005) | Source (archive) |
Barron & Klein (2016) | Source (archive) |
Bartlett & Youngner (1988) | Source (archive) |
Barton (2011) | Source (archive) |
Basbaum et al. (2009) | Source (archive) |
Basl (2014) | Source (archive) |
Bateson (1991) | Source (archive) |
Bauer et al. (1979) | Source (archive) |
Baumgarten (2013) | Source (archive) |
Bayne (2008) | Source (archive) |
Bayne (2010) | Source (archive) |
Bayne (2013) | Source (archive) |
Bayne & Hohwy (2016) | Source (archive) |
Bayne et al. (2009) | Source (archive) |
BBC, “The chicken that lived for 18 months without a head” | Source (archive) |
Beaney (2014) | Source (archive) |
Beauchamp & Childress (2012) | Source (archive) |
Beauchamp & Frey (2011) | Source (archive) |
Bechtel & Richardson (1998) | Source (archive) |
Beckstead (2013) | Source (archive) |
Begley & Ellis (2012) | Source (archive) |
Behavioural Ecology Research Group at the University of Oxford, “Tool Manufacture” | Source (archive) |
Behrmann & Nishimura (2010) | Source (archive) |
Bender et al. (2008) | Source (archive) |
Benjamin et al. (2012) | Source (archive) |
Bennett & Hill (2014) | Source (archive) |
Bermudez (2014) | Source (archive) |
Bernstein (1998) | Source (archive) |
Berridge & Kringelbach (2015) | Source (archive) |
Berridge & Kringelbach (2016) | Source (archive) |
Berridge & Winkielman (2003) | Source (archive) |
Beshkar (2008) | Source (archive) |
Best et al. (2008) | Source (archive) |
Bhandari & Wagner (2006) | Source (archive) |
Bhat & Rockwood (2007) | Source (archive) |
Bicchieri & Mercier (2014) | Source (archive) |
Bird (2011) | Source |
Biro & Stamps (2015) | Source (archive) |
Bishop (2004) | Source (archive) |
Bishop (2015) | Source (archive) |
Bishop & Trout (2004) | Source (archive) |
Bishop & Trout (2008) | Source (archive) |
Blackmon (2013) | Source (archive) |
Blackmon (2016) | Source (archive) |
Blackmore (2016) | Source (archive) |
Blanke & Metzinger (2009) | Source (archive) |
Blanke et al. (2015) | Source (archive) |
Block (1978) | Source (archive) |
Block (1993) | Source (archive) |
Block (1995) | Source (archive) |
Block (2007a) | Source (archive) |
Block (2007b) | Source (archive) |
Block (2011) | Source (archive) |
Block (2014) | Source (archive) |
Blom (2010) | Source (archive) |
Blom (2014) | Source (archive) |
Blom & Sommer (2012) | Source (archive) |
Bloom (2000) | Source (archive) |
Boeve (2010) | Source (archive) |
Bogosian (2016) | Source (archive) |
Bogousslavsky et al. (1991) | Source (archive) |
Boller & Grafman (2000) | Source (archive) |
Boly et al. (2013) | Source (archive) |
Borries et al. (2016) | Source (archive) |
Bosco et al. (2015) | Source (archive) |
Bostrom (2006) | Source (archive) |
Bostrom & Yudkowsky (2014) | Source (archive) |
Botha & Everaert (2013) | Source (archive) |
Bound et al. (2001) | Source (archive) |
Bourget & Chalmers (2013) | Source (archive) |
Braithwaite (2010) | Source (archive) |
Braun (2015) | Source (archive) |
Bray (2011) | Source (archive) |
Breed (2017) | Source (archive) |
Breitmeyer & Ogmen (2006) | Source (archive) |
Brennan & Lo (2015) | Source (archive) |
Bridgeman (1992) | Source (archive) |
Briscoe & Schwenkler (2015) | Source (archive) |
Brogaard (2016) | Source (archive) |
Brook & Raymont (2017) | Source |
Brown et al. (2011) | Source (archive) |
Bryant et al. (2014) | Source (archive) |
Buchanan & Powell (2016) | Source (archive) |
Bunge (1980) | Source (archive) |
Burghardt (2005) | Source (archive) |
Burkart et al. (forthcoming) | Source (archive) |
Burkeman (2015) | Source (archive) |
Button et al. (2013) | Source (archive) |
Buzzi (2011) | Source (archive) |
Bykvist (2017) | Source (archive) |
Cabanac et al. (2009) | Source (archive) |
Caetano & Aisenberg (2014) | Source (archive) |
Camerer et al. (2016) | Source (archive) |
Campbell (2002) | Source (archive) |
Campbell (2013) | Source (archive) |
Cardeña & Winkelman (2011) | Source (archive) |
Cardeña et al. (2013) | Source (archive) |
Cardeña et al. (2014) | Source (archive) |
Cardoso-Leite & Gorea (2010) | Source (archive) |
Carey (2001) | Source |
Carp (2012) | Source (archive) |
Carroll (2016) | Source (archive) |
Carruthers (1989) | Source (archive) |
Carruthers (1992) | Source (archive) |
Carruthers (1996) | Source (archive) |
Carruthers (1999) | Source (archive) |
Carruthers (2000) | Source (archive) |
Carruthers (2002) | Source (archive) |
Carruthers (2004) | Source (archive) |
Carruthers (2005) | Source (archive) |
Carruthers (2011) | Source (archive) |
Carruthers (2016) | Source |
Carruthers (2017) | Source (archive) |
Carruthers & Schier (2017) | Source (archive) |
Cary et al. (1998) | Source (archive) |
Castelvecchi (2016) | Source (archive) |
Cavanna & Nani (2014) | Source (archive) |
Center for Deliberative Democracy, “What is Deliberative Polling?” | Source |
Cerullo (2015) | Source (archive) |
Chafe (1996) | Source (archive) |
Chalmers (1990) | Source (archive) |
Chalmers (1995) | Source (archive) |
Chalmers (1996) | Source (archive) |
Chalmers (1997) | Source (archive) |
Chalmers (2003) | Source (archive) |
Chalmers (2004) | Source (archive) |
Chalmers (2009) | Source (archive) |
Chalmers (2010) | Source (archive) |
Chalmers (2011) | Source (archive) |
Chalmers (2012) | Source (archive) |
Chalmers (2015) | Source (archive) |
Chan (2009) | Source |
Chang (2004) | Source (archive) |
Chang (2011) | Source (archive) |
Chang (2012) | Source (archive) |
Chang & Li (2015) | Source (archive) |
Chappell (2010) | Source (archive) |
Chaudhary et al. (2017) | Source (archive) |
Chechlacz & Humphreys (2014) | Source (archive) |
Cheke & Clayton (2010) | Source (archive) |
Chen et al. (2013) | Source (archive) |
Chen et al. (2014) | Source (archive) |
Chen et al. (2016) | Source (archive) |
Cheng (2016) | Source (archive) |
Chervova et al. (1994) | Source |
Chrisley & Sloman (2016) | Source (archive) |
Christen et al. (2014) | Source (archive) |
Christensen et al. (2012) | Source (archive) |
Churchland (1988) | Source (archive) |
Clark (1996) | Source (archive) |
Clark (2001) | Source (archive) |
Clark (2009) | Source (archive) |
Clark (2013) | Source (archive) |
Clark & Kiverstein (2007) | Source (archive) |
Clarke & Harris (2001) | Source (archive) |
Cleeremans (2011) | Source (archive) |
Clegg (2001) | Source (archive) |
Cloney (1982) | Source (archive) |
Cohen & Dennett (2011) | Source (archive) |
Collerton et al. (2015) | Source (archive) |
Collignon et al. (2011) | Source (archive) |
Copeland (1996) | Source (archive) |
Corns (2014) | Source (archive) |
Coslett (2011) | Source |
Coslett & Lie (2008) | Source |
Cowey (2004) | Source (archive) |
Cowey (2010) | Source (archive) |
Crane & Piantanida (1983) | Source (archive) |
Cranford & Smith (1987) | Source (archive) |
Cranford et al. (2006) | Source (archive) |
Crew (2014) | Source (archive) |
Crick & Koch (1990) | Source (archive) |
Crick & Koch (1998) | Source (archive) |
Crisp (2013) | Source |
Crisp & Pummer (2016) | Source |
Crook & Walters (2011) | Source (archive) |
Dahl et al. (2011) | Source (archive) |
Dahlsgaard et al. (2005) | Source (archive) |
Damasio (1999) | Source (archive) |
Damasio (2010) | Source (archive) |
Damasio & Carvalho (2013) | Source (archive) |
Damasio & Van Hoesen (1983) | Source |
Damasio et al. (2013) | Source (archive) |
Daniels (2016) | Source |
Daswani & Leike (2015) | Source (archive) |
Davies & Levy (2016) | Source (archive) |
Davies et al. (2006) | Source (archive) |
Dawkins (2012) | Source (archive) |
Dawkins (2015) | Source (archive) |
Dawkins (2017) | Source (archive) |
de Blanc (2011) | Source (archive) |
de Gelder et al. (2002) | Source (archive) |
de Ribaupierre & Delalande (2008) | Source (archive) |
de Waal (1992) | Source (archive) |
de Waal (2001) | Source |
de Waal (2007) | Source (archive) |
de Waal (2016) | Source (archive) |
de Waal et al. (2014) | Source (archive) |
Dean (1990) | Source |
Deaner et al. (2007) | Source (archive) |
Debruyne (2009) | Source (archive) |
Deeks et al. (2003) | Source (archive) |
Degenaar & Lokhorst (2014) | Source |
DeGrazia (2016) | Source |
Dehaene (2014) | Source (archive) |
Dehaene et al. (1998) | Source (archive) |
Dehaene et al. (2014) | Source (archive) |
Dennett (1986) | Source (archive) |
Dennett (1988) | Source (archive) |
Dennett (1991) | Source (archive) |
Dennett (1993a) | Source (archive) |
Dennett (1993b) | Source (archive) |
Dennett (1994) | Source (archive) |
Dennett (1995) | Source (archive) |
Dennett (2005) | Source (archive) |
Dennett (2007) | Source (archive) |
Dennett (2013) | Source (archive) |
Dennett (2016a) | Source (archive) |
Dennett (2016b) | Source (archive) |
Dennett (2017) | Source (archive) |
Denton (2006) | Source (archive) |
Derbyshire (2014) | Source (archive) |
Deroy & Spence (2016) | Source (archive) |
DerSimonian & Laird (1986) | Source (archive) |
DerSimonian & Laird (2015) | Source (archive) |
Devor et al. (2014) | Source (archive) |
Dewsbury (1984) | Source (archive) |
Di Virgilio & Clarke (1997) | Source (archive) |
Dickinson (2011) | Source (archive) |
Dijkerman & De Haan (2007) | Source (archive) |
Dillon (2014) | Source |
Dittrich (2016) | Source (archive) |
Domhoff (2007) | Source (archive) |
Dominus (2011) | Source (archive) |
Donaldson & Grant-Vallone (2002) | Source (archive) |
Dorahy et al. (2014) | Source (archive) |
Doris (2010) | Source (archive) |
Dorsch (2015) | Source (archive) |
Dragoi (2016) | Source (archive) |
Drescher (2006) | Source (archive) |
Dretske (1995) | Source (archive) |
Droege & Braithwaite (2015) | Source (archive) |
Dugas-Ford et al. (2012) | Source (archive) |
Dugatkin (2013) | Source (archive) |
Duvendack et al. (2015) | Source (archive) |
Dyer (1994) | Source (archive) |
Ebrahim et al. (2014) | Source (archive) |
Edelman (1990) | Source (archive) |
Edelman (2008) | Source (archive) |
Edwards (2005) | Source (archive) |
Eeles et al. (2013) | Source (archive) |
Egan (2012a) | Source (archive) |
Egan (2012b) | Source (archive) |
Egger (1978) | Source (archive) |
Egger et al. (2014) | Source (archive) |
Eklund et al. (2016) | Source (archive) |
Emery (2016) | Source (archive) |
Engelborghs et al. (2000) | Source (archive) |
Epley (2011) | Source (archive) |
Epley et al. (2007) | Source |
Evanschitzky & Armstrong (2010) | Source (archive) |
Everett Kaser Software, “MESH” | Source (archive) |
Executable Philosophy | Source (archive) |
Farah (2004) | Source (archive) |
Farrell et al. (2015) | Source (archive) |
Favre (2011) | Source (archive) |
Fayers & Machin (2016) | Source (archive) |
Fazekas & Overgaard (2016) | Source (archive) |
Feinberg & Mallatt (2016) | Source (archive) |
Feinstein et al. (2016) | Source (archive) |
Fekete et al. (2016) | Source (archive) |
Fernandes et al. (2014) | Source (archive) |
Fernandez-Ballesteros & Botella (2007) | Source (archive) |
Feuillet et al. (2007) | Source (archive) |
Fidler et al. (2017) | Source (archive) |
Fiedler & Schwarz (2015) | Source (archive) |
Finn et al. (2009) | Source (archive) |
Finnerup & Jensen (2004) | Source (archive) |
Fireman et al. (2003) | Source (archive) |
Fischer & Collins (2015) | Source (archive) |
Fitch (2010) | Source (archive) |
FiveThirtyEight, “Hack Your Way To Scientific Glory” | Source (archive) |
Flanagan (1992) | Source (archive) |
Flanagan (2016) | Source (archive) |
Fleming (2014) | Source (archive) |
Fleming et al. (2007) | Source (archive) |
Fletcher (2015) | Source (archive) |
Foreign and Commonwealth Office London, “Consolidated Texts of the EU Treaties as Amended by the Treaty of Lisbon” | Source (archive) |
Frankish (2005) | Source (archive) |
Frankish (2012a) | Source (archive) |
Frankish (2012b) | Source (archive) |
Frankish (2016a) | Source (archive) |
Frankish (2016b) | Source (archive) |
Frankish (2016c) | Source (archive) |
Frankish & Dennett (2004) | Source (archive) |
Franklin et al. (2012) | Source (archive) |
Franklin et al. (2016) | Source (archive) |
Frasch et al. (2016) | Source (archive) |
Freud et al. (2016) | Source (archive) |
Friedman (2005) | Source (archive) |
Friedman-Hill et al. (1995) | Source (archive) |
Fruita Colorado, “Fruita Community Center” | Source (archive) |
Furness et al. (2014) | Source (archive) |
Gagliano et al. (2016) | Source (archive) |
Gallup Jr. et al. (2011) | Source (archive) |
Gamez (2008) | Source (archive) |
Gangopadhyay et al. (2010) | Source (archive) |
Garamszegi (2016) | Source (archive) |
Garfield (2016) | Source (archive) |
Gazzaniga (1992) | Source (archive) |
Gazzaniga (2000) | Source (archive) |
Gazzaniga (2002) | Source (archive) |
Gazzaniga & Campbell (2015) | Source (archive) |
Gazzaniga & LeDoux (1978) | Source (archive) |
Gebhart & Schmidt (2013) | Source (archive) |
Gelman (2016) | Source (archive) |
Gelman & Geurts (2017) | Source (archive) |
Gelman & Loken (2013) | Source |
Gelman & Loken (2014) | Source |
Gennaro (2011) | Source (archive) |
Gennaro (2016) | Source (archive) |
Gentle (2011) | Source (archive) |
Gentle et al. (2001) | Source (archive) |
Gerber et al. (2014) | Source (archive) |
Giacino et al. (2014) | Source (archive) |
Giambra (2000) | Source |
Gigerenzer (2007) | Source (archive) |
Gili et al. (2013) | Source (archive) |
Ginsburg & Jablonka (2007) | Source |
Godfrey-Smith (2009) | Source (archive) |
Godfrey-Smith (2016a) | Source (archive) |
Godfrey-Smith (2016b) | Source (archive) |
Godfrey-Smith (2017) | Source |
Goff (2017a) | Source (archive) |
Goff (2017b) | Source (archive) |
Goodale & Ganel (2016) | Source (archive) |
Goodale & Milner (2013) | Source (archive) |
Goodale et al. (2001) | Source (archive) |
Goodfellow et al. (2016) | Source (archive) |
Goodwin (2015) | Source (archive) |
Gorber et al. (2007) | Source (archive) |
Gorber et al. (2009) | Source (archive) |
Gorea (2015) | Source (archive) |
Gotman & Kostopoulos (2013) | Source (archive) |
Graham (2001) | Source (archive) |
Graham & Burghardt (2010) | Source (archive) |
Graham & Kennedy (2004) | Source (archive) |
Graham et al. (2013) | Source (archive) |
Grahek (2007) | Source (archive) |
Gray (2004) | Source (archive) |
Gray & Wegner (2009) | Source (archive) |
Graziano (2013) | Source (archive) |
Graziano (2014) | Source (archive) |
Graziano (2016a) | Source (archive) |
Graziano (2016b) | Source (archive) |
Graziano & Webb (2017) | Source (archive) |
Greaves & Ord (2016) | Source |
Green & Wikler (1980) | Source (archive) |
Greenwald (2012) | Source (archive) |
Grim (2004) | Source (archive) |
Grimes (1996) | Source (archive) |
Grisdale (2010) | Source (archive) |
Grosenick et al. (2007) | Source (archive) |
Groves et al. (2009) | Source (archive) |
Grzybowski & Aydin (2007) | Source (archive) |
Guthrie (1993) | Source (archive) |
Güzeldere et al. (2000) | Source (archive) |
Gyulai et al. (1996) | Source (archive) |
Haffenden & Goodale (1998) | Source (archive) |
Hajdin (1994) | Source |
Hájek (2011) | Source |
Hall (2007) | Source (archive) |
Halligan (2002) | Source (archive) |
Hameroff & Penrose (2014) | Source (archive) |
Hancock (2002) | Source (archive) |
Hankins (2012) | Source (archive) |
Hansen et al. (2009) | Source (archive) |
Hanson (2002) | Source (archive) |
Hanson (2016) | Source (archive) |
Hardin (1988) | Source (archive) |
Harman (1990) | Source (archive) |
Harman & Lepore (2014) | Source (archive) |
Harris (2017) | Source (archive) |
Hartmann et al. (1991) | Source (archive) |
Hassin (2013) | Source (archive) |
Hatzimoysis (2007) | Source (archive) |
HealthMeasures | Source (archive) |
Healy et al. (2013) | Source (archive) |
Heene (2010) | Source (archive) |
Heider (2000) | Source (archive) |
Heil (2013) | Source (archive) |
Heilman (1991) | Source (archive) |
Heilman & Satz (1983) | Source (archive) |
Held & Špinka (2011) | Source (archive) |
Held et al. (2011) | Source (archive) |
Helen Yetter-Chappell, “Published Papers” | Source (archive) |
Helton (2005) | Source (archive) |
Herbet et al. (2014) | Source (archive) |
Herculano-Houzel (2011) | Source (archive) |
Herculano-Houzel (2016) | Source (archive) |
Herculano-Houzel (2017a) | Source (archive) |
Herculano-Houzel (2017b) | Source (archive) |
Herculano-Houzel & Kaas (2011) | Source (archive) |
Hernandez-Orallo (2017) | Source (archive) |
Herndon et al. (1999) | Source (archive) |
Herzog et al. (2007) | Source (archive) |
Hesse et al. (2011) | Source (archive) |
Hirschhorn et al. (2002) | Source (archive) |
Hirstein (2010) | Source (archive) |
Hirstein (2012) | Source (archive) |
Hobson et al. (2000) | Source (archive) |
Hofree & Winkielman (2012) | Source (archive) |
Hohwy (2012) | Source (archive) |
Holt (2003) | Source (archive) |
Horowitz (2010) | Source (archive) |
Hou et al. (2017) | Source (archive) |
Howe (2015) | Source (archive) |
Howick & Mebius (2015) | Source (archive) |
Hu & Goodale (2000) | Source (archive) |
Hubbard & Armstrong (1994) | Source (archive) |
Hubbard & Vetter (1992) | Source (archive) |
Hudetz (2012) | Source (archive) |
Huemer (2016) | Source (archive) |
Humphrey (2011) | Source (archive) |
Humphreys (1999) | Source (archive) |
Humphreys & Riddoch (2013) | Source (archive) |
Hurlbert (2009) | Source (archive) |
Hurlburt & Schwitzgebel (2007) | Source (archive) |
Husain (2008) | Source (archive) |
Hutson (2012) | Source (archive) |
Hylton (2014) | Source (archive) |
Ihle et al. (2017) | Source (archive) |
Im & Galko (2012) | Source (archive) |
Ingle (1973) | Source (archive) |
Inglehart & Welzel (2010) | Source (archive) |
IntHout et al. (2014) | Source (archive) |
Introspection and Consciousness, Oxford University Press | Source (archive) |
Ioannidis (2005a) | Source (archive) |
Ioannidis (2005b) | Source (archive) |
Irvine (2013) | Source (archive) |
Jack & Robbins (2012) | Source (archive) |
Jackendoff (2007) | Source (archive) |
Jackson (1998) | Source (archive) |
Jackson & Lorber (1984) | Source (archive) |
James et al. (2009) | Source (archive) |
Jamieson (2007) | Source (archive) |
Jardri et al. (2013) | Source (archive) |
Jarvis et al. (2005) | Source (archive) |
Jaworska & Tannenbaum (2013) | Source |
Jaynes (1976) | Source (archive) |
Jaynes (2003) | Source (archive) |
Jelbert et al. (2014) | Source (archive) |
Jennings (1906) | Source |
Jennions & Møller (2003) | Source (archive) |
JetBrains, “Download PyCharm” | Source (archive) |
Jiang et al. (2016) | Source (archive) |
John et al. (2012) | Source (archive) |
Johnson (1993) | Source (archive) |
Jonas & Kording (2017) | Source (archive) |
Jørgensen et al. (2016) | Source (archive) |
Jøsang (2016) | Source (archive) |
Journal of Consciousness Studies, Volume 18, Number 1 (2011) | Source (archive) |
Journal of Consciousness Studies, Volume 23, Numbers 11-12 (2016) | Source (archive) |
Joyce (2005) | Source (archive) |
Kaas (2009) | Source (archive) |
Kabadayi et al. (2016) | Source (archive) |
Kaempf & Greenberg (1990) | Source (archive) |
Kagan (2016) | Source (archive) |
Kaiser (2017) | Source (archive) |
Kaminski (2016) | Source (archive) |
Kammerer (2016) | Source (archive) |
Kapur et al. (1994) | Source (archive) |
Kardish et al. (2015) | Source (archive) |
Karmakar et al. (2015) | Source (archive) |
Karp (2016) | Source (archive) |
Karpathy (2016) | Source (archive) |
Katz (2000) | Source (archive) |
Katz (2013) | Source (archive) |
Keijzer (2012) | Source (archive) |
Keltner et al. (2013) | Source (archive) |
Kemmerer (2015) | Source (archive) |
Kent Berridge Affective Neuroscience & Biopsychology Lab | Source (archive) |
Key (2015) | Source (archive) |
Key (2016) | Source (archive) |
Khan et al. (1995) | Source (archive) |
Kihlstrom (2013) | Source (archive) |
Kilteni et al. (2015) | Source (archive) |
Kim (2010) | Source (archive) |
Kim et al. (2014) | Source (archive) |
King (2013) | Source (archive) |
King (2016a) | Source (archive) |
King (2016b) | Source (archive) |
Kirk (1994) | Source (archive) |
Kirk (2007) | Source (archive) |
Kirk (2015) | Source (archive) |
Kirk (2017) | Source (archive) |
Kitano (2007) | Source (archive) |
Klein (2010) | Source (archive) |
Klein (2015) | Source (archive) |
Klein (2017a) | Source (archive) |
Klein (2017b) | Source (archive) |
Klein & Barron (2016) | Source (archive) |
Klein & Hirachan (2014) | Source (archive) |
Klein & Hohwy (2015) | Source (archive) |
Koch (2004) | Source (archive) |
Koch et al. (2016) | Source (archive) |
Konishi & Smallwood (2016) | Source (archive) |
Kotseruba et al. (2016) | Source (archive) |
Kowalski et al. (2012) | Source (archive) |
Kravitz et al. (2011) | Source (archive) |
Kriegel (2015) | Source (archive) |
Kubovy & Pomerantz (1981) | Source (archive) |
Kuhn (2012) | Source (archive) |
Kühn & Haddadin (2017) | Source (archive) |
Kuijsten (2008) | Source (archive) |
Kuncel et al. (2005) | Source (archive) |
Kunzendorf & Wallace (2000) | Source (archive) |
Kyselo & Paolo (2015) | Source (archive) |
LaBerge & DeGracia (2000) | Source (archive) |
LaChat (1996) | Source (archive) |
Lackner (1988) | Source (archive) |
Ladner et al. (2016) | Source (archive) |
Lagercrantz (2016) | Source (archive) |
Lambert & Kinsley (2004) | Source (archive) |
Lamme (2010) | Source (archive) |
Lample & Chaplot (2016) | Source (archive) |
Langdon et al. (2014) | Source (archive) |
Langland-Hassan (2015) | Source (archive) |
Långsjö et al. (2012) | Source (archive) |
Laplane & Dubois (2001) | Source (archive) |
Laplane et al. (1984) | Source (archive) |
Lau & Rosenthal (2011) | Source (archive) |
Laureys (2005a) | Source (archive) |
Laureys (2005b) | Source (archive) |
Laureys et al. (2015) | Source (archive) |
Le Neindre et al. (2009) | Source (archive) |
Le Neindre et al. (2017) | Source (archive) |
Lecours (1998) | Source (archive) |
LeDoux (2015) | Source (archive) |
Lee (2014) | Source (archive) |
Lee (2015) | Source (archive) |
Leek & Jager (2017) | Source (archive) |
Leisman & Koch (2009) | Source (archive) |
Lenay et al. (2003) | Source (archive) |
Lessells & Boag (1987) | Source (archive) |
Leu-Semenescu et al. (2013) | Source (archive) |
Levin (2013) | Source |
Levy & Newborn (1991) | Source (archive) |
Lewin (1980) | Source (archive) |
Lewis (2001) | Source (archive) |
Lewis (2013) | Source (archive) |
Leys & Henon (2013) | Source (archive) |
Liao (2016) | Source (archive) |
Lieberman (2013) | Source (archive) |
Life, “Headless Rooster: Beheaded chicken lives normally after freak decapitation by ax” | Source (archive) |
Lin (2015) | Source (archive) |
Lin et al. (2006) | Source (archive) |
Liu & Fridovich (1996) | Source (archive) |
Liu & Schubert (2010) | Source |
Lockhart (2000) | Source (archive) |
Loeser & Treede (2008) | Source (archive) |
Lomber & Malhotra (2008) | Source (archive) |
Loosemore (2012) | Source (archive) |
Loukola et al. (2017) | Source (archive) |
Low (2012) | Source |
Ludwig (2014) | Source (archive) |
Ludwig (2015) | Source (archive) |
Luhrmann (2011) | Source (archive) |
Lui et al. (2011) | Source (archive) |
Luijtelaar et al. (2014) | Source (archive) |
Luke Muehlhauser, “Other Writings” | Source (archive) |
Lurz (2009) | Source (archive) |
Lycan (1996) | Source (archive) |
Lynn et al. (2014) | Source (archive) |
Lyon (2015) | Source (archive) |
MacAskill (2014) | Source (archive) |
Macchi et al. (2016) | Source (archive) |
Machado (2007) | Source (archive) |
Mackie & Burighel (2005) | Source (archive) |
MacLean et al. (2014) | Source (archive) |
Macphail (1987) | Source (archive) |
Macphail (1998) | Source (archive) |
Macphail (2000) | Source (archive) |
MacQueen (2015) | Source (archive) |
Maginnis (2006) | Source (archive) |
Mahowald (2011) | Source (archive) |
Maidenbaum et al. (2014) | Source (archive) |
Maley & Piccinini (2013) | Source (archive) |
Mallatt & Feinberg (2016) | Source (archive) |
Mallinson (2016) | Source |
Mandik (2013) | Source (archive) |
Marblestone et al. (2016) | Source (archive) |
Marino (2017a) | Source (archive) |
Marino (2017b) | Source (archive) |
Marinsek & Gazzaniga (2016) | Source (archive) |
Markkula (2015) | Source (archive) |
Markowitsch (2008) | Source (archive) |
Marshall (2010) | Source (archive) |
Marshall (2014) | Source (archive) |
Marshall (2016) | Source (archive) |
Martinson et al. (2005) | Source (archive) |
Mashour (2009) | Source (archive) |
Mashour & Alkire (2013) | Source (archive) |
Mashour & LaRock (2008) | Source (archive) |
Matheny & Chan (2005) | Source (archive) |
Matthews & Dresner (2016) | Source (archive) |
Mautner (2009) | Source (archive) |
Mazzola & Deuling (2013) | Source (archive) |
McCarthy-Jones (2012) | Source (archive) |
McDermott (2001) | Source (archive) |
McDermott (2007) | Source (archive) |
McGinn (2004) | Source (archive) |
McGovern & Baars (2007) | Source (archive) |
McLaughlin et al. (2009) | Source (archive) |
McNamara & Butler (2013) | Source (archive) |
Menzel & Fischer (2011) | Source (archive) |
Merker (2005) | Source (archive) |
Merker (2007) | Source (archive) |
Merker (2013) | Source (archive) |
Merker (2016) | Source (archive) |
Metzinger (2003) | Source (archive) |
Metzinger (2010) | Source (archive) |
Metzinger (2013) | Source (archive) |
Meyer et al. (2009) | Source (archive) |
Michael Bach, “Visual Phenomena & Optical Illusions: 132 of them” | Source (archive) |
Mike the Headless Chicken, “History” | Source (archive) |
Miklósi & Soproni (2006) | Source (archive) |
Miller (2000) | Source (archive) |
Miller (2013) | Source (archive) |
Miller (2015) | Source (archive) |
Millsap (2011) | Source (archive) |
Milner & Goodale (2006) | Source (archive) |
Minds and Machines, Volume 4, Issue 4 (1994) | Source |
MIT Press | Source (archive) |
Mitchell (2005) | Source (archive) |
Mole (2013) | Source (archive) |
Möller (2016) | Source (archive) |
Molyneux (2012) | Source (archive) |
Moro et al. (2011) | Source (archive) |
Morris (2011) | Source (archive) |
Morris (2015) | Source (archive) |
Muehlhauser (2010) | Source (archive) |
Muehlhauser (2011) | Source (archive) |
Muehlhauser (2015) | Source (archive) |
Muehlhauser & Williamson (2013) | Source (archive) |
Muehlhauser, Animal consciousness elicitation survey, 2016 | Source (archive) |
Muehlhauser, Animal consciousness self-elicitations chart, 2016 | Source |
Muehlhauser, Animal consciousness self-elicitations spreadsheet, 2016 | Source |
Mueller (2013) | Source (archive) |
Munevar (2012) | Source (archive) |
Nagel (1974) | Source (archive) |
Nagel (1997) | Source (archive) |
Nahm et al. (2012) | Source (archive) |
Nakagawa & Parker (2015) | Source (archive) |
Nakagawa & Santos (2012) | Source (archive) |
Nakagawa et al. (2017) | Source (archive) |
Nash & Barnier (2008) | Source (archive) |
Necker (2014) | Source (archive) |
Newcombe & Johnson (1999) | Source (archive) |
Newell & Shanks (2014) | Source (archive) |
Newson (2007) | Source (archive) |
Ng (1995) | Source (archive) |
Nichols & Stich (2003) | Source (archive) |
Nicol (2015) | Source (archive) |
Nissen et al. (2016) | Source (archive) |
Norberg (2016) | Source (archive) |
Norwood & Lusk (2011) | Source (archive) |
Nosek et al. (2015) | Source (archive) |
Nuijten et al. (2016) | Source (archive) |
O’Connor et al. (2007) | Source (archive) |
O’Neill (2015) | Source (archive) |
O’Regan (2011) | Source (archive) |
O’Regan (2012) | Source (archive) |
O’Regan & Noe (2001) | Source (archive) |
Odgaard-Jensen et al. (2011) | Source (archive) |
OECD (2013) | Source (archive) |
Oesterheld (2016) | Source (archive) |
Ohga et al. (1993) | Source |
Oizumi et al. (2014) | Source (archive) |
Olkowicz et al. (2016) | Source (archive) |
Olmstead & Kuhlmeier (2015) | Source (archive) |
Ord (2006) | Source (archive) |
Ord (2015) | Source (archive) |
Ortega (2005) | Source |
Ostrovsky et al. (2006) | Source (archive) |
Osvath & Karvonen (2012) | Source (archive) |
Our non-verbatim summary of a conversation with Aaron Sloman, July 3, 2016 | Source |
Our non-verbatim summary of a conversation with Brian Tomasik, October 6, 2016 | Source |
Our non-verbatim summary of a conversation with Carl Shulman, August 19, 2016 | Source |
Our non-verbatim summary of a conversation with David Chalmers, May 20, 2016 | Source |
Our non-verbatim summary of a conversation with Derek Shiller, January 24, 2017 | Source |
Our non-verbatim summary of a conversation with Gary Drescher, July 18, 2016 | Source |
Our non-verbatim summary of a conversation with James Rose, November 18, 2016 | Source |
Our non-verbatim summary of a conversation with Joel Hektner, December 17, 2015 | Source |
Our non-verbatim summary of a conversation with Keith Frankish, January 24, 2017 | Source |
Our non-verbatim summary of a conversation with Michael Tye, August 24, 2016 | Source |
Overgaard (2011) | Source (archive) |
Overgaard (2015) | Source (archive) |
Owen (2013) | Source (archive) |
Owen et al. (2002) | Source (archive) |
Oxford University Press | Source (archive) |
Palazzo et al. (2013) | Source (archive) |
Panayiotopoulos (2008) | Source (archive) |
Panksepp (2008) | Source |
Paoni et al. (1981) | Source (archive) |
Papineau (1993) | Source (archive) |
Papineau (2002) | Source (archive) |
Papineau (2003) | Source (archive) |
Papineau (2009) | Source (archive) |
Park et al. (2008) | Source (archive) |
Parker (2003) | Source (archive) |
Parker et al. (2016) | Source (archive) |
Pärnpuu (2016) | Source (archive) |
Parvizi & Damasio (2001) | Source (archive) |
Pastor et al. (1996) | Source (archive) |
Paul et al. (2007) | Source (archive) |
Pearce (2008) | Source (archive) |
Pearce (2013) | Source (archive) |
Pekala & Kumar (2000) | Source (archive) |
Penry & Dreifuss (1969) | Source (archive) |
Pereboom (2011) | Source (archive) |
Perry (2009) | Source (archive) |
Perry et al. (2002) | Source (archive) |
Pessoa & Weerd (2003) | Source (archive) |
PETRL, “An Interview with Eric Schwitzgebel and Mara Garza” | Source (archive) |
Philippi et al. (2012) | Source (archive) |
Phillips (2014) | Source (archive) |
Phillips (2017a) | Source (archive) |
Phillips (2017b) | Source (archive) |
Philomel Records, “Diana Deutsch’s Audio Illusions” | Source (archive) |
Philosophy, et cetera, “The Cartesian Theatre” | Source (archive) |
PhilPapers, “The PhilPapers Surveys” | Source (archive) |
Pinker (2007) | Source (archive) |
Pinker (2011) | Source (archive) |
Pinto et al. (2017) | Source (archive) |
Pistorius (2013) | Source (archive) |
Pitts et al. (2014) | Source (archive) |
Place (1956) | Source (archive) |
PNAS, “Rights and Permissions” | Source (archive) |
Poe (2014) | Source (archive) |
Pokahr et al. (2005) | Source (archive) |
Poldrack et al. (2017) | Source (archive) |
Polger (2017) | Source (archive) |
Polger & Shapiro (2016) | Source (archive) |
Politis & Loane (2012) | Source (archive) |
Post (2004) | Source (archive) |
Powers (2014) | Source (archive) |
Prasad & Cifu (2015) | Source (archive) |
Prasad et al. (2013) | Source (archive) |
Preston, “Analytic Philosophy” | Source (archive) |
Preti (2007) | Source (archive) |
Preti (2011) | Source (archive) |
Price (1999) | Source |
Prigatano (2010) | Source (archive) |
Prince et al. (2008) | Source (archive) |
Prinz (2007) | Source (archive) |
Prinz (2012) | Source (archive) |
Prinz (2015a) | Source (archive) |
Prinz (2015b) | Source (archive) |
Prinz (2016) | Source (archive) |
Prinz et al. (2011) | Source (archive) |
PsychonautWiki, “Psychedelic” | Source (archive) |
Puccetti (1998) | Source (archive) |
Purves et al. (2011) | Source (archive) |
Putnam (1988) | Source (archive) |
Pyke (2014) | Source (archive) |
Qadri & Cook (2015) | Source (archive) |
Quian Quiroga (2012) | Source (archive) |
Quora, “What is the most intelligent thing a non-human animal has done?” | Source |
Raby et al. (2007) | Source (archive) |
Rachels (1990) | Source (archive) |
Rachels (2004) | Source (archive) |
Ramachandran & Brang (2009) | Source (archive) |
Random.org | Source (archive) |
Rao & Gershon (2016) | Source (archive) |
Reggia (2013) | Source (archive) |
Rehkämper et al. (2003) | Source (archive) |
Reilly & Schachtman (2008) | Source (archive) |
Reiner et al. (2004) | Source (archive) |
Reinhart et al. (2015) | Source (archive) |
Reise & Revicki (2015) | Source (archive) |
Remmer (2015) | Source (archive) |
Remy & Watanabe (1993) | Source |
Revonsuo (2009) | Source (archive) |
Rey (1983) | Source (archive) |
Rey (1992) | Source (archive) |
Rey (1995) | Source (archive) |
Rey (2007) | Source (archive) |
Rey (2015) | Source (archive) |
Rey (2016) | Source (archive) |
Rey et al. (2014) | Source (archive) |
Rial et al. (2008) | Source (archive) |
Ricciardelli (1993) | Source (archive) |
Rich (1997) | Source (archive) |
Riley & Freeman (2004) | Source (archive) |
Ringkamp et al. (2013) | Source |
Rinner et al. (2015) | Source (archive) |
Ritchie (2017) | Source (archive) |
Robinson (2015) | Source |
Robinson et al. (2015) | Source (archive) |
Rodd (1990) | Source (archive) |
Roelofs (2016) | Source (archive) |
Rolls (2013) | Source (archive) |
Romeijn & Roy (2014) | Source (archive) |
Rosati (1995) | Source (archive) |
Rose (2002) | Source (archive) |
Rose (2016) | Source (archive) |
Rose & Dietrich (2009) | Source (archive) |
Rose et al. (2014) | Source (archive) |
Rosenthal (1990) | Source |
Rosenthal (2006) | Source (archive) |
Rosenthal (2009) | Source (archive) |
Rossano (2003) | Source (archive) |
Roth & Dickie (2005) | Source (archive) |
Rowlands (2001) | Source (archive) |
Rusanen & Lappi (2016) | Source (archive) |
Russell & Norvig (2009) | Source (archive) |
Rutiku et al. (2015) | Source (archive) |
Rutledge et al. (2014) | Source (archive) |
Ryder (1996) | Source (archive) |
Sachs (2011) | Source (archive) |
Sachs (2015) | Source (archive) |
Safina (2015) | Source (archive) |
Sandberg (2014) | Source (archive) |
Sangiao-Alvarellos et al. (2004) | Source (archive) |
Sanz et al. (2013) | Source (archive) |
Sapontzis (2004) | Source (archive) |
Sato & Aoki (2006) | Source (archive) |
Sayre-McCord (2012) | Source (archive) |
Scales et al. (2005) | Source (archive) |
Schechter (2012) | Source (archive) |
Schechter (2014) | Source |
Schenck (2015) | Source (archive) |
Schenk & McIntosh (2009) | Source (archive) |
Schiff (2010) | Source (archive) |
Schiller & Tehovnik (2015) | Source (archive) |
Schneider & Velmans (2017) | Source (archive) |
Schubert & Masters (1991) | Source (archive) |
Schulze-Makuch & Irwin (2008) | Source (archive) |
Schwarz et al. (2008) | Source (archive) |
Schwitzgebel (2007a) | Source (archive) |
Schwitzgebel (2007b) | Source (archive) |
Schwitzgebel (2008) | Source (archive) |
Schwitzgebel (2011) | Source (archive) |
Schwitzgebel (2012) | Source (archive) |
Schwitzgebel (2015) | Source (archive) |
Schwitzgebel (2016) | Source (archive) |
Schwitzgebel & Garza (2015) | Source (archive) |
Seager & Allen-Hermanson (2010) | Source |
Searle (1992) | Source (archive) |
Searle (1997) | Source (archive) |
Searle (2002) | Source (archive) |
Sellars (1962) | Source (archive) |
Seth & Baars (2005) | Source (archive) |
Seth et al. (2006) | Source (archive) |
Shagrir (2012) | Source (archive) |
Shanahan (2010) | Source (archive) |
Shanon (2002) | Source (archive) |
Shapiro & Todorovic (2017) | Source (archive) |
Shepard (1964) | Source (archive) |
Shepard (1990) | Source (archive) |
Shepherd (2015) | Source (archive) |
Shepherd & Levy (forthcoming) | Source (archive) |
Shermer (2015) | Source (archive) |
Shettleworth (2009) | Source (archive) |
Shevlin (2016) | Source (archive) |
Shiller (2016) | Source (archive) |
Shioi et al. (1987) | Source (archive) |
Shor & Orne (1965) | Source (archive) |
Shriver (2014) | Source (archive) |
Shulman (2015) | Source (archive) |
Shumaker et al. (2011) | Source (archive) |
Siclari et al. (2017) | Source (archive) |
Siegel (2008) | Source (archive) |
Simmons et al. (2011) | Source (archive) |
Simon (2016) | Source (archive) |
Simonsohn et al. (2015) | Source (archive) |
Singer (2011) | Source (archive) |
Sinnott-Armstrong (2016) | Source (archive) |
Sinnott-Armstrong & Miller (2007) | Source (archive) |
Siontis et al. (2010) | Source (archive) |
Sittler-Adamczewski (2017) | Source (archive) |
Slate Star Codex, “Devoodooifying Psychology” | Source (archive) |
Smaldino & McElreath (2016) | Source (archive) |
Smallwood (2015) | Source (archive) |
Smart (1959) | Source (archive) |
Smart (2006) | Source (archive) |
Smart (2007) | Source (archive) |
Smith (1988) | Source (archive) |
Smith (2011) | Source (archive) |
Smith (2016a) | Source (archive) |
Smith (2016b) | Source (archive) |
Smith & Boyd (1991) | Source (archive) |
Smith & Lewin (2009) | Source (archive) |
Smith & Washburn (2005) | Source (archive) |
Smith et al. (2011) | Source (archive) |
Sneddon (2002) | Source (archive) |
Sneddon (2009) | Source (archive) |
Sneddon (2015) | Source (archive) |
Sneddon et al. (2003) | Source (archive) |
Sneddon et al. (2014) | Source (archive) |
Snowden et al. (2012) | Source (archive) |
Soares (2015) | Source (archive) |
Soares (2016) | Source (archive) |
Sobel (1999) | Source (archive) |
Sobel (2001) | Source (archive) |
Sobel (2017) | Source (archive) |
Software Engineering Stack Exchange, “Is Python Interpreted or Compiled?” | Source (archive) |
Sourjik & Wingreen (2012) | Source (archive) |
Spillmann & Werner (1990) | Source (archive) |
Squair (2012) | Source (archive) |
Stalans (2012) | Source (archive) |
Stamenov (1997) | Source (archive) |
Standish (2013) | Source (archive) |
Stanovich (2004) | Source (archive) |
Stanovich (2013) | Source (archive) |
Stanovich et al. (2016) | Source (archive) |
Starr et al. (2009) | Source (archive) |
Steegen et al. (2014) | Source (archive) |
Steele-Russell (1994) | Source (archive) |
Steele-Russell et al. (1979) | Source (archive) |
Sterzer (2013) | Source (archive) |
Stiles & Shimojo (2015) | Source (archive) |
Stone et al. (1999) | Source (archive) |
Stone et al. (2007) | Source (archive) |
Strausfeld (2012) | Source (archive) |
Strayer & Hummon (2001) | Source (archive) |
Streiner & Norman (2008) | Source (archive) |
Stumbrys et al. (2014) | Source (archive) |
Subitzky (2003) | Source (archive) |
Sun & Franklin (2007) | Source (archive) |
Sunstein (2005) | Source (archive) |
Sutton et al. (1980) | Source (archive) |
Suziedelyte & Johar (2013) | Source (archive) |
Swan (2013) | Source (archive) |
Swanton (1996) | Source (archive) |
Sytsma (2014) | Source (archive) |
Szucs & Ioannidis (2017) | Source (archive) |
Taborsky (2010) | Source (archive) |
Takeno (2012) | Source |
Tamietto & de Gelder (2010) | Source (archive) |
Tartaglia (2013) | Source (archive) |
Taurek (1977) | Source (archive) |
Taylor & Vickers (2016) | Source (archive) |
Tendal et al. (2011) | Source (archive) |
Tenney & Glauser (2013) | Source (archive) |
Terada et al. (2016) | Source (archive) |
Tesla, “Autopilot” | Source (archive) |
Thagard (1992) | Source (archive) |
Thagard (1999) | Source (archive) |
Thagard (2008) | Source (archive) |
Thagard & Stewart (2014) | Source (archive) |
The Official Mike the Headless Chicken Book, “Home Page” | Source |
The Open University, “Thought and Experience: Track 14” | Source (archive) |
Theiner (2014) | Source (archive) |
Thomas & Frankenberg (2002) | Source (archive) |
Thompson (1993) | Source (archive) |
Thompson (2009) | Source (archive) |
TimeTree | Source (archive) |
TimeTree, “Human versus Bantam” | Source (archive) |
TimeTree, “Human versus Bovine” | Source (archive) |
TimeTree, “Human versus Chimpanzee” | Source (archive) |
TimeTree, “Human versus E. coli” | Source (archive) |
TimeTree, “Human versus Fruit Fly” | Source (archive) |
TimeTree, “Human versus Japanese Blue Crab” | Source (archive) |
TimeTree, “Human versus Rainbow Trout” | Source (archive) |
Tittle (2004) | Source (archive) |
Togelius et al. (2010) | Source (archive) |
Tolman (1932) | Source (archive) |
Tomasik (2014a) | Source (archive) |
Tomasik (2014b) | Source (archive) |
Tomasik (2014c) | Source (archive) |
Tomasik (2014d) | Source (archive) |
Tomasik (2015a) | Source (archive) |
Tomasik (2015b) | Source (archive) |
Tomasik (2016a) | Source (archive) |
Tomasik’s hard-problem agent (switched to Python 3 syntax and more thoroughly commented) by Luke Muehlhauser | Source |
Tononi (2004) | Source (archive) |
Tononi (2014) | Source |
Tononi (2015) | Source (archive) |
Tononi & Koch (2015) | Source (archive) |
Tononi et al. (2015a) | Source (archive) |
Tononi et al. (2015b) | Source (archive) |
Tononi et al. (2016) | Source (archive) |
Trestman (2012) | Source (archive) |
Trevarthen & Reddy (2017) | Source (archive) |
Trewavas (2005) | Source (archive) |
Trout (2007) | Source (archive) |
Trout (2014) | Source (archive) |
Trout (2016) | Source (archive) |
Truog & Fackler (1992) | Source (archive) |
Tsakiris (2010) | Source (archive) |
Turner (2013) | Source (archive) |
Tutorials Point, “Execute Python-3 Online” | Source (archive) |
Tuyttens et al. (2016) | Source (archive) |
Tye (1995) | Source (archive) |
Tye (2000) | Source (archive) |
Tye (2009a) | Source (archive) |
Tye (2009b) | Source (archive) |
Tye (2015) | Source |
Tye (2016) | Source (archive) |
Unger (1988) | Source (archive) |
University of Toronto, “Deep Learning in Computer Vision: Winter 2016” | Source (archive) |
Urquiza-Haas & Kotrschal (2015) | Source (archive) |
Uttal (2011) | Source (archive) |
Uttal (2012) | Source (archive) |
Uttal (2015) | Source (archive) |
Uttal (2016) | Source (archive) |
Uttal & Campbell (2012) | Source (archive) |
Vaitl et al. (2005) | Source (archive) |
Vallar & Ronchi (2006) | Source (archive) |
Vallortigara (2000) | Source (archive) |
van Duijn et al. (2006) | Source (archive) |
Van Gulick (1995) | Source (archive) |
Van Gulick (2009) | Source (archive) |
Van Gulick (2014) | Source |
van Wilgenburg & Elgar (2013) | Source (archive) |
van Zanden et al. (2014) | Source (archive) |
Vanpaemel et al. (2015) | Source (archive) |
Varner (2012) | Source (archive) |
Veatch (1975) | Source (archive) |
Velmans (2012) | Source (archive) |
Verschure et al. (2014) | Source (archive) |
Vimal (2009) | Source (archive) |
Višak (2013) | Source (archive) |
Vito & Bartolomeo (2016) | Source (archive) |
Vonk & Shackelford (2012) | Source (archive) |
Voss & Hobson (2015) | Source (archive) |
Vuilleumier (2004) | Source (archive) |
Vul & Pashler (2008) | Source (archive) |
Vul & Pashler (2017) | Source (archive) |
Wadhams & Armitage (2004) | Source (archive) |
Waisman et al. (2014) | Source (archive) |
Waller et al. (2013) | Source (archive) |
Walsh et al. (2015) | Source (archive) |
Walters (1996) | Source (archive) |
Walters et al. (1983) | Source (archive) |
Ward (2011) | Source (archive) |
Ward (2013) | Source (archive) |
Warren (1997) | Source (archive) |
Wasserman et al. (2012) | Source (archive) |
Watt & Pincus (2004) | Source (archive) |
Waytz et al. (2012) | Source (archive) |
Webb & Graziano (2015) | Source (archive) |
Weisberg (2011) | Source (archive) |
Weisberg (2014) | Source (archive) |
Weiskrantz (1997) | Source (archive) |
Weiskrantz (2007) | Source (archive) |
Weiskrantz (2008) | Source (archive) |
Westfall & Yarkoni (2016) | Source (archive) |
Wetlesen (1999) | Source (archive) |
White (1991) | Source (archive) |
Whitehead & Rendell (2014) | Source (archive) |
Whittingham et al. (2006) | Source (archive) |
Wicherts et al. (2011) | Source (archive) |
Wicherts et al. (2016) | Source (archive) |
Wikimedia Commons, “File:1424 Visual Streams.jpg” | Source (archive) |
Wikipedia, “A* search algorithm” | Source (archive) |
Wikipedia, “AI-complete” | Source (archive) |
Wikipedia, “Alex (parrot)” | Source (archive) |
Wikipedia, “Alpha–beta pruning” | Source (archive) |
Wikipedia, “AlphaGo versus Lee Sedol” | Source (archive) |
Wikipedia, “AlphaGo” | Source (archive) |
Wikipedia, “Analytical Engine” | Source (archive) |
Wikipedia, “Anatomically modern human” | Source (archive) |
Wikipedia, “Anencephaly” | Source (archive) |
Wikipedia, “Anthropomorphism” | Source (archive) |
Wikipedia, “Antonie van Leeuwenhoek” | Source (archive) |
Wikipedia, “Application-specific integrated circuit” | Source (archive) |
Wikipedia, “Autonomous car” | Source (archive) |
Wikipedia, “Autotomy” | Source (archive) |
Wikipedia, “Belief–desire–intention software model” | Source (archive) |
Wikipedia, “Biological immortality” | Source (archive) |
Wikipedia, “Björn Merker” | Source (archive) |
Wikipedia, “Blind spot (vision)” | Source (archive) |
Wikipedia, “Caenorhabditis elegans” | Source (archive) |
Wikipedia, “Cattle” | Source (archive) |
Wikipedia, “Chaser (dog)” | Source (archive) |
Wikipedia, “Chicken” | Source (archive) |
Wikipedia, “Chimpanzee” | Source (archive) |
Wikipedia, “Clever Hans” | Source (archive) |
Wikipedia, “Collision detection” | Source (archive) |
Wikipedia, “David Marr (neuroscientist): Levels of analysis” | Source (archive) |
Wikipedia, “Device driver” | Source (archive) |
Wikipedia, “Drosophila melanogaster” | Source (archive) |
Wikipedia, “Élan vital” | Source (archive) |
Wikipedia, “Enteric nervous system” | Source (archive) |
Wikipedia, “Escherichia coli” | Source (archive) |
Wikipedia, “Evidence-based medicine” | Source (archive) |
Wikipedia, “Evolutionary algorithm” | Source (archive) |
Wikipedia, “Extremophile” | Source (archive) |
Wikipedia, “False awakening” | Source (archive) |
Wikipedia, “Flow (psychology)” | Source (archive) |
Wikipedia, “Global workspace theory (GWT)” | Source (archive) |
Wikipedia, “Homomorphic encryption: Fully homomorphic encryption” | Source (archive) |
Wikipedia, “Hydrocephalus” | Source (archive) |
Wikipedia, “Integrated development environment” | Source (archive) |
Wikipedia, “Intentional stance: Dennett’s three levels” | Source (archive) |
Wikipedia, “Knowledge argument” | Source (archive) |
Wikipedia, “Life: Biology” | Source (archive) |
Wikipedia, “Lisp (programming language)” | Source (archive) |
Wikipedia, “List of animal welfare groups” | Source (archive) |
Wikipedia, “List of animals by number of neurons” | Source (archive) |
Wikipedia, “List of longest-living organisms” | Source (archive) |
Wikipedia, “List of people with locked-in syndrome” | Source (archive) |
Wikipedia, “Loop quantum gravity” | Source (archive) |
Wikipedia, “Microsoft Windows” | Source (archive) |
Wikipedia, “Mike the Headless Chicken” | Source (archive) |
Wikipedia, “Mirror test” | Source (archive) |
Wikipedia, “Neuropathic pain” | Source (archive) |
Wikipedia, “Obligate parasite” | Source (archive) |
Wikipedia, “Pareto principle” | Source (archive) |
Wikipedia, “Particle horizon” | Source (archive) |
Wikipedia, “Pathfinding” | Source (archive) |
Wikipedia, “Pelagibacter ubique” | Source (archive) |
Wikipedia, “Persistent vegetative state” | Source (archive) |
Wikipedia, “Person-affecting view” | Source (archive) |
Wikipedia, “Philosophical Investigations” | Source (archive) |
Wikipedia, “Phlogiston theory” | Source (archive) |
Wikipedia, “Portunus trituberculatus” | Source (archive) |
Wikipedia, “Python (programming language)” | Source (archive) |
Wikipedia, “r/K selection theory” | Source (archive) |
Wikipedia, “Rainbow trout” | Source (archive) |
Wikipedia, “Rapid eye movement sleep behavior disorder” | Source (archive) |
Wikipedia, “Shared memory” | Source (archive) |
Wikipedia, “Sorting algorithm” | Source (archive) |
Wikipedia, “Split-brain” | Source (archive) |
Wikipedia, “Standard Model” | Source (archive) |
Wikipedia, “Symbolic artificial intelligence” | Source (archive) |
Wikipedia, “Vespula austriaca” | Source (archive) |
Wikipedia, “Von Neumann architecture” | Source (archive) |
Wilczek (2008) | Source (archive) |
Wilson (2004) | Source (archive) |
Wilson (2016) | Source (archive) |
Wimsatt (1976) | Source (archive) |
Wimsatt (2007) | Source (archive) |
Windt (2011) | Source (archive) |
Windt et al. (2016) | Source (archive) |
Winkielman & Berridge (2004) | Source (archive) |
Wise (2003) | Source (archive) |
Wittenberg & Baumeister (1999) | Source (archive) |
Wolpert (2011) | Source (archive) |
Wood (2011) | Source (archive) |
Wooldridge (1963) | Source (archive) |
Wooldridge (2000) | Source (archive) |
WorldAnimal.net, “Animal Protection in World Constitutions” | Source (archive) |
Wright et al. (2012) | Source (archive) |
Wuichet & Zhulin (2010) | Source (archive) |
Wulff (2014) | Source |
Wynne (2004) | Source (archive) |
Wynne & Udell (2013) | Source (archive) |
Xiao & Güntürkün (2009) | Source (archive) |
Yamamoto et al. (1990) | Source (archive) |
Yong (2016) | Source (archive) |
Yong (2017) | Source (archive) |
Young (2008) | Source (archive) |
Young (2012) | Source (archive) |
Young & Karr (2011) | Source (archive) |
Young & Leafhead (1996) | Source |
YouTube, “Computer teaches itself to play games – BBC News” | Source |
YouTube, “Crab amputates his own claw” | Source |
YouTube, “Fluent Aphasia (Wernicke’s Aphasia)” | Source |
YouTube, “Infinite Mario AI – Long Level” | Source |
YouTube, “Interview with Jesse Prinz” | Source |
Yudkowsky (2007) | Source (archive) |
Yudkowsky (2008a) | Source (archive) |
Yudkowsky (2008b) | Source (archive) |
Yudkowsky (2008c) | Source (archive) |
Yudkowsky (2008d) | Source (archive) |
Zeman et al. (2015) | Source (archive) |
Zhao (2016) | Source (archive) |
Zihl (2013) | Source (archive) |