Brains are not conscious: Part 2

The idea that brains produce conscious experience runs up against an unsolvable logical problem: how can the same brain that generates an experience also experience that experience?

Cognitive psychology ran into this early on with what is called the homunculus infinite regress problem. If the idea of mental representation—a foundational principle in cognitive psychology—is taken seriously, then you need both a process that creates the representation and a “something else” to which it is represented. But in order for this something else to know what it is observing when it interprets the representation, it needs a way of representing the representation, and so on in an infinite regress.

Stated in neurological terms, in order to be conscious, you need something to be conscious of (the output of brain activity) and something (other brain activity) to register awareness of it. But this registration of awareness is just more brain activity, not the awareness itself. For that you need something else, which can only be additional brain activity requiring yet another something else, and so on.

It’s turtles all the way down.

The only real solution to the infinite regress problem is to dislodge consciousness from the organism. Material engagement theory reflects one interesting attempt to articulate this. In (grossly) simplified terms, material engagement theory (MET) considers conscious experience as a phenomenological property of the three-way interaction (interpenetration?) between culture, the person, and the material world. [In his book, How Things Shape the Mind, Lambros Malafouris provides an in-depth look at material engagement theory, and, in particular, how it relates to human tool use.]   

But the notion that consciousness is a brain product is a hard one to shake for most folks. Part of the reason for this may have to do with the way civilized life is structured, and the fact that (a point that MET drives home) the way we think about the world is not independent of what we are actually doing and how we are doing it.

Consider how a logical error called the mereological fallacy plays into this. The mereological fallacy, as it applies to living organisms, is the tendency to ascribe characteristics of the whole creature to a single component part: to say, for example, that stomachs digest, that legs run, or that brains remember. The world of cognitive neuroscience is replete with examples of this logical error, conscious brains being just one.

The mereological fallacy reflects a thoughtform that is endemic to the civilized mind, and its parts-standing-in-for-the-whole nature reflects the constituent partitioning of civilized activity. Civilization is a mechanistically structured, systemized way of living in which various parts of the system are designed to play circumscribed roles or carry out specific isolated functions. In addition, the civilized perspective ascribes ontological priority to abstract categories rather than to concrete material events. In reality, of course, consciousness is just an idea, a concept, an abstraction. Awareness is never without content. Awareness is not to be found outside of the content. But consciousness is nonetheless mistaken for a thing in itself, is reified, is granted “thingness,” and its existence then becomes a problem to be solved.

So, what difference does it make how we think about consciousness? What does it matter and why should anyone care?

Seeing consciousness as something produced by brains, something happening inside us, separate and independent of the world itself, supports the irrational notion that our experience exists independent of the various environments we inhabit; that we could be who we are apart from the larger contexts in which who we are expresses itself; that we could somehow exist outside of any context at all, or in an artificial context of our own choosing. Not only does this predispose us to undervalue the rest of living nature, but it can ultimately lead to absurd and potentially dangerous notions, such as the idea that physical reality doesn’t matter at all and that we could, for example, upload our consciousness into an artificial medium and become inhabitants of an electronic “metaverse.”

Consciousness doesn’t happen in brains. First-person experience is not actually first-person, but something that emerges from our shared participation in a world inhabited by numerous others. And, perhaps most importantly, who we are at any point in time depends critically on those numerous others.  Those numerous others are, in the most intimate and inseparable sense, us.  

Brains are not conscious: Part 1

According to a recent theory, the capacity for conscious awareness traces to the evolution of primitive brain areas designed to predict the organism’s likely future movements by generating a dynamic simulation or model of the organism itself. The appearance of the cortex added a whole new layer (literally) to the complexity and sophistication of this self-simulation ability, and consciousness emerged as a result.

A somewhat compelling theory, perhaps. But describing brain areas and their putative activities says absolutely nothing about consciousness.

Our first-person experiential awareness of the world is simply not something that can be explained by what happens inside brains. Consciousness cannot be found lurking within neural connections or patterns of brain activity. No matter how complex or nuanced or sophisticated this brain activity is, it is still just organized clusters of neurons engaged in interactive twitching.

Now, I understand and accept that my conscious awareness is correlated with the activity and structures of my nervous system. Pull out parts of my brain, and my conscious experience is likely to change. But that doesn’t mean that my conscious experience is being created by neural activity.

Perhaps, as panpsychist theories claim, consciousness is an immanent feature of the universe itself: a fundamental quality of matter, something that permeates everything, part of the substrate of reality, a primary feature of what the universe is. Physical brains, on this view, act to compartmentalize this universal consciousness, temporarily sequestering slivers of it.

Panpsychist notions seem at least as reasonable as the idea that the organized twitching of neurons can summon consciousness out of the void. And there are likely other possibilities that are just as reasonable but have yet to be conceived.

To explain consciousness in terms of brain activity is merely to restate the question.

Edit: here is a good, easy-to-understand article about the ridiculousness of thinking about brains as computers.