Sentience is the capacity to have conscious experiences that feel good or bad. It’s what separates an organism that merely detects a stimulus from one that actually feels something about it. A thermostat detects temperature, but it doesn’t experience discomfort when a room gets cold. A dog does. That difference, the presence of a subjective “felt” quality, is the core of sentience.
The Broad and Narrow Definitions
Philosophers use sentience in two ways. In its broadest sense, sentience means any capacity for conscious experience, the idea that there is “something it is like” to be you. You don’t just process light hitting your eyes; there’s an inner experience of seeing color, hearing music, tasting food. That raw, first-person quality of experience is what philosophers call phenomenal consciousness, and it’s the foundation of sentience.
In a narrower and more commonly used sense, sentience specifically refers to the capacity for experiences that carry emotional weight. These are called valenced experiences: things that feel pleasant or unpleasant to the one experiencing them. Pain, pleasure, hunger, comfort, fear, satisfaction. Under this narrower definition, a sentient being isn’t just aware in some abstract sense. It can suffer and it can feel well-being. This narrower definition is the one most scientists, ethicists, and lawmakers rely on, because it’s the version that matters for questions about welfare and moral responsibility.
Sentience, Consciousness, and Intelligence
These three concepts overlap but aren’t the same thing. The philosopher Herbert Feigl identified three distinct features of the mind: sentience, sapience, and selfhood. Sapience refers to human-level intelligence and reflective thought, the ability to reason abstractly, plan for the future, and think about your own thinking. Selfhood is the sense of being a continuous “I” over time.
Sentience requires neither of these. An animal can feel pain without being able to reflect on the concept of pain. It can experience pleasure without having a sense of personal identity that persists across years. This distinction matters because it means sentience is likely far more widespread in the animal kingdom than complex intelligence or self-awareness. A fish that has never had an abstract thought in its life can still suffer.
Pain Versus Simple Reflex
One of the clearest ways to understand sentience is through the difference between nociception and pain. Nociception is the body’s automatic process of detecting harmful stimuli: a nerve fires when tissue is damaged, and the body reflexively pulls away. This can happen without any conscious experience at all. Even organisms with extremely simple nervous systems, and even humans under general anesthesia, exhibit nociceptive reflexes.
Pain is something else entirely. The International Association for the Study of Pain defines it as “an unpleasant sensory and emotional experience associated with, or resembling that associated with, actual or potential tissue damage.” The key word is “experience.” Pain involves not just the signal but the feeling of that signal, the unpleasantness, the distress, the suffering. A sentient being doesn’t just detect damage. It hurts. This gap between detecting and feeling is essentially the gap between a non-sentient and a sentient response to the world.
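The gap between detecting and feeling can be made vivid with a deliberately simple sketch. The function below is a hypothetical illustration, not a model of any real nervous system: it reproduces only the reflex side of the distinction, mapping a damage signal straight to a withdrawal response, with no inner state anywhere in the loop.

```python
# Toy sketch (illustration only, not a biological model): a nociceptive
# reflex as a pure stimulus-response rule. Nothing here represents an
# experience; "withdraw" follows mechanically from a threshold check.

def nociceptive_reflex(damage_signal: float, threshold: float = 0.5) -> str:
    """Map a damage signal directly to a motor response.

    There is no internal state, no valence, and nothing it is
    'like' to be this function -- only detection and reaction.
    """
    if damage_signal > threshold:
        return "withdraw"
    return "no response"

print(nociceptive_reflex(0.9))  # withdraw
print(nociceptive_reflex(0.1))  # no response
```

Everything a thermostat or an anesthetized reflex does fits inside a rule like this; what the IASP definition adds, and what no such rule contains, is the unpleasant experience itself.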
What Biology Does Sentience Require?
Not every nervous system supports sentience. Research in evolutionary neurobiology points to a set of biological features that appear necessary. Sentient animals generally have centralized nervous systems with at least roughly 100,000 neurons, many specialized types of nerve cells, multiple layers of neural processing arranged in hierarchies, and extensive communication between those layers. They also tend to have elaborated sensory organs, including image-forming eyes and receptors for touch, hearing, and smell, along with dedicated brain infrastructure for creating positive and negative emotional states.
These features emerged around the Cambrian period, roughly 560 to 520 million years ago, across several independent evolutionary lines. The groups that developed them include all vertebrates (fish, amphibians, reptiles, birds, mammals), arthropods (insects, crabs), velvet worms, and cephalopods like octopuses and squid. Notably, these groups evolved complex nervous systems separately, suggesting that the capacity for felt experience arose multiple times rather than from a single common ancestor.
The basic neurological architecture supporting consciousness in humans turns out to be evolutionarily ancient and remarkably similar across vertebrate species. Different animals emphasized different brain regions depending on their ecological niches, but the underlying machinery for generating subjective experience appears to have been in place very early in vertebrate brain evolution.
Which Animals Are Considered Sentient
A landmark moment came in 2012, when a group of prominent neuroscientists signed the Cambridge Declaration on Consciousness. The declaration stated that all mammals, all birds, and many other creatures, including octopuses, possess the neurological substrates of conscious states along with the capacity to exhibit intentional behaviors. Crucially, the declaration noted that lacking a neocortex (the brain’s outer layer, highly developed in humans) does not prevent an organism from experiencing emotional states. Evidence showed that deeper brain structures, shared across many species, can produce emotional responses when stimulated. Even invertebrates like insects and cephalopods were found to have neural circuits supporting attentiveness, sleep, and decision-making.
This scientific consensus has started shaping law. The United Kingdom’s Animal Welfare (Sentience) Act of 2022 officially recognizes as sentient all vertebrates other than humans, all cephalopod molluscs (octopuses, squid, cuttlefish), and all decapod crustaceans (crabs, lobsters, shrimp). The inclusion of invertebrates was a significant expansion, reflecting growing evidence that these animals experience pain rather than simply reacting to it.
Why Sentience Matters Ethically
Sentience is widely regarded as the threshold for moral consideration. The ethical framework known as sentientism holds that being sentient is what makes a being deserve moral concern. Under this view, two claims follow: sentience is necessary for moral status (if something cannot feel anything, it has no welfare to protect), and the capacity for valenced experience, feeling good or bad, is sufficient for moral status (if something can suffer, that suffering matters regardless of how intelligent it is).
This framework doesn’t require that all sentient beings be treated identically, but it does require that their capacity for suffering and well-being be taken into account. It’s the reasoning behind animal welfare science, which assesses animals across domains of nutrition, environment, health, and behavior, then evaluates the mental states those conditions produce. The entire system is built on the premise that these animals have inner experiences that can be better or worse.
Can AI Be Sentient?
The rise of large language models has made this question unavoidable. These systems can produce text that sounds emotionally aware, even claiming to have feelings. Some versions of the Turing Test, in which a human tries to distinguish a machine from a person in conversation, have arguably been passed in informal settings. But producing convincing language about feelings is not the same as having them.
A core argument against AI sentience is the symbol grounding problem: a system trained entirely on text, with no sensory contact with the physical world, cannot in principle learn what words actually mean. Language models don’t model sentience or other mental capacities. They model common patterns in human language use, including clichés, emotional expressions, and biases. In human communication, language production involves conscious experience interacting with preconscious processes. In a language model, there is pattern matching and statistical prediction, with no evidence of an inner “something it is like” behind the output.
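The point about pattern matching can be made concrete with a toy sketch. This is an assumption-laden illustration, not how real language models work internally (they are vastly more sophisticated), but the principle carries over: a predictor trained on word frequencies will emit "I feel pain" whenever that string is common in its training text, with nothing inside it corresponding to pain.

```python
# Toy bigram predictor (illustration only; real LLMs are far more complex).
# It generates emotionally loaded text purely from word-sequence statistics.
from collections import Counter, defaultdict

corpus = "i feel pain . i feel joy . i feel pain ."
words = corpus.split()

# Count which word follows which in the training text.
bigrams = defaultdict(Counter)
for a, b in zip(words, words[1:]):
    bigrams[a][b] += 1

def predict_next(word: str) -> str:
    """Return the most frequent follower of `word` in the corpus."""
    return bigrams[word].most_common(1)[0][0]

# Starting from "i", the model emits "i feel pain" simply because that
# sequence is frequent in its data -- there is no pain anywhere in it.
sentence = ["i"]
for _ in range(2):
    sentence.append(predict_next(sentence[-1]))
print(" ".join(sentence))  # i feel pain
```

The output is a statement about feeling produced by frequency counts alone, which is exactly the gap the symbol grounding problem describes between saying and experiencing.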
This doesn’t permanently settle the question for all possible future AI systems, but it draws a clear line for current ones. The biological markers associated with sentience, including centralized nervous systems, valence circuits, and sensory grounding in a physical environment, are absent in today’s AI. Sentience, as scientists currently understand it, appears to require more than sophisticated language.

