What Is Consciousness in Philosophy, Explained

In philosophy, consciousness refers to the subjective, first-person quality of experience. It is what the philosopher Thomas Nagel famously described as there being “something it is like” to be a particular organism. There is something it is like to taste coffee, to feel pain, to see the color red. That inner experience, and the deep puzzles it creates about the nature of mind and reality, is what philosophers mean when they talk about consciousness.

This sounds simple, but it leads to some of the hardest questions in all of philosophy: Is consciousness a product of the physical brain, or something beyond it? Can a machine be conscious? Could consciousness exist in places we don’t expect? These questions have produced centuries of debate and a rich landscape of competing theories.

Two Kinds of Consciousness

One of the most useful distinctions in the field comes from philosopher Ned Block, who in 1995 split consciousness into two types. The first, phenomenal consciousness, is the raw feeling of experience: the redness of red, the sting of a paper cut, the way a melody sounds in your head. Most philosophers and scientists agree that this refers to “what it is like” to be in a particular mental state.

The second type, access consciousness, is more functional. A mental state counts as access-conscious when it is available for reasoning, decision-making, and speech. Block argued that information in access consciousness is essentially the content of working memory: whatever your mind can grab and use right now to guide your behavior or form a sentence. If you can report what you’re experiencing, that experience is access-conscious.

The critical insight is that these two types might not always overlap. Block proposed a phenomenon called “overflow,” where you are genuinely experiencing something (phenomenally conscious of it) but it hasn’t yet entered working memory and you can’t report it or act on it. Think of a crowded room full of conversations. You hear all of them at some level, but you can only report the one you’re paying attention to. Whether that background auditory experience counts as “conscious” depends on which definition you use, and this disagreement drives much of the current research.

Descartes and the Mind-Body Problem

The modern philosophical debate about consciousness traces back to René Descartes in the 17th century. Descartes argued that the mind and body are completely different in nature and that each could exist by itself. The body is a physical thing, subject to physical laws. The mind, or soul, is a non-physical thinking thing. His famous declaration, “I think, therefore I am,” was built on the observation that you cannot deny the existence of your own mind while using your mind to do the denying.

This view, called substance dualism, has an obvious appeal. If the mind is non-physical, then the destruction of the body doesn’t necessarily destroy the person. It leaves room for survival after death, which is one reason dualism has remained culturally influential for centuries. But it also creates a stubborn problem: how does a non-physical mind interact with a physical body? If the soul can direct the body to move and the body can cause the soul to feel pain, then there must be some two-way connection between the physical and non-physical. No one has ever given a fully satisfying account of how that connection works.

Worse, as a Yale philosophy course on the subject points out, once you admit that physical processes can influence the soul, you’ve opened the door to physical processes potentially destroying the soul. The immateriality of the mind doesn’t automatically protect it.

The Physicalist Response

The dominant alternative to dualism is physicalism (also called materialism), which holds that there is only one kind of stuff in the universe: physical matter. On this view, a person is not a body plus a soul. A person is a body, full stop: a body that can think, plan, feel, dream, communicate, and be creative. Consciousness, whatever it is, must arise from physical processes in the brain.

Physicalism lines up well with neuroscience, which has mapped countless correlations between brain activity and conscious experience. Damage a specific brain region and a specific aspect of experience changes or disappears. This is exactly what you’d expect if consciousness is a physical process. But physicalism faces its own deep challenge: explaining why and how physical processes give rise to subjective experience at all. You can describe every neuron firing in someone’s brain when they see a sunset, but that description doesn’t seem to capture what the sunset looks like to them.

Mary’s Room and the Limits of Physical Knowledge

The philosopher Frank Jackson made this challenge vivid in 1982 with a thought experiment known as Mary’s Room. Imagine a scientist named Mary who has spent her entire life in a black-and-white room. She studies the neuroscience of color vision through a black-and-white monitor and learns every physical fact there is to know about how humans see color: which wavelengths stimulate the retina, how the nervous system processes them, which brain regions activate. She knows everything physics, chemistry, and neuroscience can tell her.

Then one day, Mary steps outside and sees a red tomato for the first time. Does she learn something new?

Most people’s intuition says yes. She learns what red looks like. If that’s correct, Jackson argued, then there are facts about conscious experience that aren’t captured by physical facts. The logical conclusion is stark: there are non-physical facts about the world, and physicalism is incomplete. This argument remains one of the most debated in philosophy of mind, with physicalists offering various responses (for instance, that Mary doesn’t learn a new fact but gains a new ability, like recognizing red on sight).

Functionalism: Consciousness as a Job Description

Functionalism offers a different angle entirely. Rather than asking what consciousness is made of, it asks what consciousness does. On this view, what makes something a mental state isn’t its physical makeup but the role it plays in a system. A mental state is defined by its causal relationships: what triggers it, what other mental states it produces, and what behavior it leads to.

Pain, for example, would be defined as the state typically caused by bodily injury, which produces the belief that something is wrong, creates a desire for the pain to stop, generates anxiety, and (unless overridden by a stronger motivation) causes wincing or moaning. Anything that fills that functional role counts as pain, regardless of whether it runs on neurons, silicon chips, or something else entirely. This makes functionalism especially relevant to debates about artificial intelligence, since it implies that a machine could, in principle, be conscious if its internal states play the right functional roles.
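The functionalist idea that a mental state is defined by its causal role, independent of substrate, can be sketched in code. This is a toy illustration of multiple realizability under my own made-up names (`PainRole`, `on_injury`), not a claim about how minds actually work: the same causal profile, realized once in "neurons" and once in "silicon chips."

```python
from dataclasses import dataclass, field

@dataclass
class PainRole:
    """Functionalist sketch: 'pain' defined entirely by its causal
    role (typical causes and effects), not by what realizes it."""
    realized_in: str                       # the substrate: irrelevant to the role
    beliefs: list = field(default_factory=list)
    desires: list = field(default_factory=list)
    behavior: list = field(default_factory=list)

    def on_injury(self):
        # Typical cause (bodily damage) triggers the state, which in
        # turn produces the beliefs, desires, and behavior that
        # jointly define it.
        self.beliefs.append("something is wrong")
        self.desires.append("make it stop")
        self.behavior.append("wince")

carbon = PainRole(realized_in="neurons")
silicon = PainRole(realized_in="silicon chips")
for system in (carbon, silicon):
    system.on_injury()
# Both systems occupy the same functional role, so for the
# functionalist both count as being in pain, whatever the substrate.
```

The point of the sketch is that nothing in the role's definition mentions the `realized_in` field: swap the substrate and the state's identity, on this view, is unchanged.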

Eliminativism: Is Consciousness an Illusion?

Some philosophers take a more radical position. Eliminative materialists, most notably Paul and Patricia Churchland, argue that our everyday understanding of the mind is a kind of folk theory, and like many folk theories, it may be fundamentally wrong. We talk about “beliefs,” “desires,” and “conscious experiences” as if they are well-defined things, but these categories might not map onto anything real happening in the brain.

The Churchlands warn that introspection is deeply unreliable. What we “observe” when we look inward may be shaped more by our inherited framework of folk psychology than by what is actually going on in the brain. Philosopher Georges Rey has gone further, suggesting that our ordinary concept of consciousness may correspond to no actual process at all. The “inner light” we associate with being conscious could be a leftover from outdated Cartesian intuitions, much like how people once “saw” celestial spheres in the night sky because their framework told them to.

Panpsychism and Integrated Information

At the opposite extreme from eliminativism sits panpsychism, the view that mental properties are a fundamental feature of the physical world, present even at very basic levels of reality. Rather than trying to explain how consciousness emerges from unconscious matter, panpsychism starts from the premise that some form of experience is woven into the fabric of nature itself.

This idea has gained renewed attention partly through Integrated Information Theory (IIT), developed by neuroscientist Giulio Tononi. IIT proposes that consciousness corresponds to integrated information: a system is conscious to the degree that it generates information as a unified whole, over and above the information its parts generate independently. The philosopher David Chalmers has described IIT as a form of “emergent panpsychism,” meaning it doesn’t attribute consciousness to individual particles but to certain structures that emerge from the relationships between particles. IIT is more conservative than classic panpsychism in this way. It limits consciousness to physical systems with the right kind of interconnected architecture, rather than assigning it to every atom.
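IIT’s actual measure, phi, involves partitioning a system’s cause-effect structure and is far more involved than anything shown here. But the core intuition, a whole carrying structure beyond its parts taken separately, can be illustrated with a much simpler and older quantity, total correlation. The sketch below is only an illustration of that intuition, not IIT itself, and the function names are my own:

```python
import math
from collections import Counter

def entropy(samples):
    """Shannon entropy (in bits) of a list of observed states."""
    counts = Counter(samples)
    n = len(samples)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def total_correlation(joint_samples):
    """Sum of the parts' entropies minus the whole system's entropy.
    Zero when the parts are independent; positive when the whole
    carries structure beyond its parts taken separately."""
    parts = list(zip(*joint_samples))
    return sum(entropy(p) for p in parts) - entropy(joint_samples)

# Two independent coin flips: the whole is just the sum of its parts.
independent = [(0, 0), (0, 1), (1, 0), (1, 1)]
# Two perfectly coupled bits: knowing one tells you the other,
# so the joint system is "more than" its parts viewed in isolation.
coupled = [(0, 0), (1, 1), (0, 0), (1, 1)]
```

Here `total_correlation(independent)` is zero while `total_correlation(coupled)` is positive: the coupled pair, considered as a whole, has structure that disappears if you describe each bit on its own. That asymmetry, vastly elaborated, is the kind of property IIT proposes to track.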

Can Machines Be Conscious?

The question of artificial consciousness puts all these theories to the test. Alan Turing proposed what became known as the Turing Test in 1950: if a machine can fool a human interrogator into thinking it is a person through conversation alone, does that count as evidence of intelligence, or even of mind? Turing himself thought the question “can machines think?” was too vague to be useful, and preferred this more concrete behavioral test.

Critics have raised several objections that cut to the heart of what consciousness is. One is that the only way to truly know whether a machine thinks is to be that machine and feel yourself thinking. Another is that genuine consciousness requires self-awareness: not just writing a sonnet, but knowing that you wrote it because of thoughts and emotions you actually felt. A third objection holds that intelligence without desire, emotion, and embodied experience isn’t really a mind at all.

Where you land on machine consciousness depends largely on which theory of consciousness you find most convincing. A functionalist might say yes, a sufficiently complex machine could be conscious. A dualist would say no, because machines lack a soul. A panpsychist might say that a machine already has some flicker of experience by virtue of being a physical system, but whether it has the integrated kind that matters is an open question. And an eliminativist might say we’re asking the wrong question entirely, because “consciousness” as we imagine it doesn’t exist in the way we think it does, in humans or machines.

Philosophy has not settled the question of what consciousness is. But it has mapped the terrain with remarkable precision, clarifying exactly where the disagreements lie and what is at stake in each one. The reason the problem endures is not that philosophers have been careless. It is that consciousness sits at the intersection of everything we find hardest to explain: the relationship between mind and matter, the reliability of self-knowledge, and what it ultimately means to be a subject in a world of objects.