Categorization is the mental process of grouping objects, people, events, or ideas into classes based on shared features or relationships. It is one of the most fundamental cognitive operations: every time you recognize something as a “chair,” label an animal as a “dog,” or sort a stranger into “friendly” or “unfriendly,” your brain is categorizing. This process gives you cognitive economy, letting you generalize knowledge from a few examples to a huge number of new situations instead of treating every encounter as completely novel.
Categories also serve as a basis for inference. Knowing that a berry belongs to the category “poisonous” lets you infer that eating it would be harmful, even if you have never seen that specific berry before. In this way, categorization is tightly linked to decision making: you identify candidate categories, evaluate how well each fits, select one, and act on what membership in that category implies.
The Classical View: Definitions and Their Limits
The oldest theory of categorization, sometimes called the classical or definitional view, proposes that every category is defined by a set of features that are individually necessary and jointly sufficient for membership. Under this view, something either meets the definition and belongs in the category, or it doesn’t. There are no borderline cases and no better or worse members. A triangle, for example, must have three sides and three angles, and anything meeting those criteria is equally a triangle.
The classical view works well for mathematical or legal categories, but it struggles with the categories people actually use in everyday life. Decades of research have shown that people consistently rate some category members as more “typical” than others. A robin is a more typical bird than a penguin, and people verify the robin’s membership faster. The classical view has no mechanism to explain these typicality effects, since every member that meets the definition should be equally representative. This shortcoming pushed psychologists toward alternative theories.
Prototype Theory
Prototype theory, developed largely from the work of Eleanor Rosch in the 1970s, proposes that categories are organized around an abstract mental average of typical instances, called a prototype. You don’t check a rigid checklist of necessary features. Instead, you compare a new item to the prototype and judge whether it shares enough of the important (though not strictly required) characteristics. For the category “bird,” the prototype might include features like flies, nests in trees, sings, and has feathers. A robin matches most of these features and is classified quickly. A penguin matches fewer, so it takes longer and feels like a less natural fit.
This framework neatly explains the typicality effect. When you encounter a highly typical item, the threshold of “enough matching features” is reached almost immediately, so classification is fast. For an atypical item, your brain has to check more features before reaching a conclusion, which takes measurably longer. Studies have confirmed this pattern across many domains. When people classify basic human needs, for instance, items like food and water (highly typical) are sorted faster than items like entertainment or love (atypical), because the typical items share more features with the mental prototype.
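The matching process described above can be sketched as a simple feature-overlap computation. This is a minimal illustration, not a model from the literature; the feature sets and the threshold value are invented for the example.

```python
# Minimal sketch of prototype matching: classify an item as a category
# member if it shares enough features with the category prototype.
# The feature sets and threshold here are illustrative assumptions.

BIRD_PROTOTYPE = {"flies", "nests in trees", "sings", "has feathers"}

def prototype_match(item_features, prototype, threshold=0.5):
    """Return (is_member, overlap) based on the proportion of
    prototype features the item shares."""
    overlap = len(item_features & prototype) / len(prototype)
    return overlap >= threshold, overlap

robin = {"flies", "nests in trees", "sings", "has feathers"}
penguin = {"has feathers", "swims"}

print(prototype_match(robin, BIRD_PROTOTYPE))    # (True, 1.0)
print(prototype_match(penguin, BIRD_PROTOTYPE))  # (False, 0.25)
```

A real prototype model would weight features by importance rather than counting them equally, but even this toy version reproduces the typicality pattern: the robin clears the threshold immediately, while the penguin falls short.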
Exemplar Theory
Exemplar theory offers a different account. Rather than comparing a new item to a single abstract prototype, your brain compares it to stored memories of every specific example you have previously encountered. When you see a new dog, you don’t compare it to some averaged “ideal dog.” You compare it to the memory of your neighbor’s golden retriever, the stray you saw last week, the German shepherd in a movie, and every other dog you have experienced. The probability that you classify the new animal as a dog increases with its combined similarity to all those stored examples.
Every new encounter creates a new memory representation, so your category knowledge keeps expanding. This theory is particularly good at explaining how people remain sensitive to the variability within categories. If you have seen many small dogs and only one Great Dane, your sense of what a “typical” dog looks like will reflect that distribution. Prototype theory and exemplar theory often make similar predictions, and much of the research in this area has focused on designing experiments that can tease the two apart.
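The summed-similarity idea behind exemplar theory can be made concrete with a toy computation in the spirit of Nosofsky's Generalized Context Model: similarity decays exponentially with distance in a feature space, and evidence for a category is the sum of similarities to its stored exemplars. The feature dimensions, exemplar values, and sensitivity parameter below are illustrative assumptions.

```python
import math

# Toy exemplar model: evidence for a category is the summed similarity
# of a new item to every stored example of that category. Similarity
# decays exponentially with distance (GCM-style). The exemplars and
# sensitivity value are illustrative assumptions.

def similarity(a, b, sensitivity=1.0):
    distance = math.dist(a, b)          # Euclidean distance in feature space
    return math.exp(-sensitivity * distance)

def category_evidence(item, exemplars):
    return sum(similarity(item, ex) for ex in exemplars)

# Stored memories as (size, ear_floppiness) feature vectors.
dogs = [(0.3, 0.8), (0.4, 0.9), (0.9, 0.7)]   # mostly small dogs, one Great Dane
cats = [(0.2, 0.1), (0.3, 0.2)]

new_animal = (0.35, 0.85)
dog_evidence = category_evidence(new_animal, dogs)
cat_evidence = category_evidence(new_animal, cats)
p_dog = dog_evidence / (dog_evidence + cat_evidence)  # Luce choice rule
```

Note how the distribution of stored exemplars matters: because most stored dogs are small, a small new animal accumulates more dog evidence than it would if the memory store were dominated by Great Danes, which is exactly the sensitivity to within-category variability described above.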
Levels of Categorization
Not all categories sit at the same level of abstraction. Psychologists distinguish three hierarchical levels. Looking at the food in front of you, you could call it “fruit” (the superordinate level), “apple” (the basic level), or “Golden Delicious” (the subordinate level). Each level carves the world differently, and your brain does not treat them equally.
The basic level holds a privileged position in cognition. People verify objects fastest at the basic level, produce the most shared features for basic-level categories, and tend to default to basic-level labels in everyday speech. You are far more likely to say “I bought an apple” than “I bought a fruit” or “I bought a Golden Delicious.” Rosch and colleagues argued this happens because basic-level categories maximize two things at once: they capture the most features that members have in common, while still distinguishing themselves clearly from neighboring categories. Superordinate categories like “fruit” are too broad, so their members share relatively few features. Subordinate categories like “Golden Delicious” are so narrow that they overlap heavily with neighboring categories like “Gala” or “Fuji,” making them less informative for quick identification.
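Rosch's differentiation argument can be illustrated with set operations: a category level is useful to the extent that its members share features that also set them apart from a contrast category. All feature sets below are invented for the example.

```python
# Illustration of the differentiation idea: a category level is useful
# when its members share many features (commonality) that are also
# absent from contrast categories (distinctiveness).
# All feature sets here are illustrative assumptions.

def distinctive_commonality(members, contrast_members):
    """Features shared by all members but absent from every contrast member."""
    common = set.intersection(*members)
    contrast = set.union(*contrast_members)
    return common - contrast

apples = [{"round", "stem", "grows on trees", "edible"},
          {"round", "stem", "grows on trees", "edible", "red"}]
bananas = [{"curved", "peelable", "grows on trees", "edible"}]

# Basic level ("apple" vs "banana"): shared features that also
# distinguish the category from its neighbor.
print(distinctive_commonality(apples, bananas))
```

Here “grows on trees” and “edible” drop out because the contrast category shares them (they behave like superordinate-level features), leaving “round” and “stem” as the informative, basic-level core.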
Research on actions, not just objects, confirms the same pattern. When people are shown images of actions and asked to match them to labels, they respond faster and more accurately at the basic and subordinate levels than at the superordinate level.
The Role of Causal Knowledge
Similarity alone does not explain all categorization. People also use their theories about how the world works. If you know that a certain disease is caused by a virus, you will group symptoms differently than if you believe the same symptoms are caused by stress. This knowledge-based (sometimes called theory-based) approach proposes that people represent the causal mechanisms linking a category’s features and classify new items by evaluating whether those mechanisms could have produced them.
In experiments where participants learn causal relationships between the features of a novel category, this causal knowledge reliably changes which features they treat as important and how much weight they give to correlations between features. A feature that plays a causal role in producing other features becomes more central to the category than one that is merely common. This helps explain why some features feel more “essential” than others, even when they occur at similar rates.
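The intuition that causally central features carry more weight can be sketched with a toy causal graph: a feature gains importance for each other feature it produces. The graph and the weighting scheme are illustrative assumptions, not a published model.

```python
# Toy version of the causal-centrality idea: a feature that causes
# other features gets more weight in categorization than one that is
# merely common. The causal graph and weights are illustrative.

causes = {
    "virus": ["fever", "cough"],   # the virus produces the symptoms
    "fever": [],
    "cough": [],
}

def feature_weight(feature, base=1.0, bonus_per_effect=0.5):
    """More downstream effects -> more central to the category."""
    return base + bonus_per_effect * len(causes.get(feature, []))

print(feature_weight("virus"))  # 2.0
print(feature_weight("fever"))  # 1.0
```

Even though fever and cough might occur in every observed case, the virus feature ends up more “essential” because it sits upstream in the causal structure, which mirrors the experimental finding described above.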
Multiple Systems in the Brain
A growing body of evidence suggests the brain does not rely on a single categorization system. One influential model, called COVIS, proposes at least two competing systems. One is a frontal-lobe-based system that learns explicit rules and draws on declarative memory, the kind of knowledge you can consciously state (“if it has four legs and barks, it’s a dog”). The other is a procedural-learning system mediated by the basal ganglia, the same structures involved in habit learning, which picks up on patterns that are difficult to put into words.
Different types of categorization tasks recruit different systems. A task with a clean, verbalizable rule (“all red objects go in group A”) engages the rule-based system. A task where the boundary between categories is complex and hard to describe engages procedural learning. No single neuroscience-based model currently accounts for all types of categorization at once, which reflects just how many cognitive tools the brain brings to this seemingly simple job.
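The contrast between the two task types can be made concrete. A rule-based task has a boundary you can state in words; an information-integration task combines dimensions in a way that accurate performers typically cannot verbalize. The stimulus dimensions and boundary parameters below are illustrative, in the spirit of tasks used to probe the two COVIS systems.

```python
# A verbalizable rule vs. a hard-to-verbalize boundary, illustrating
# the two task types associated with the COVIS model's systems.
# Stimulus dimensions and boundary parameters are illustrative.

def rule_based(stimulus):
    """Explicit, statable rule: 'all red objects go in group A'."""
    return "A" if stimulus["color"] == "red" else "B"

def information_integration(stimulus):
    """Boundary combining dimensions pre-decisionally; people who
    master such tasks usually cannot state the rule they are using."""
    return "A" if 0.6 * stimulus["size"] + 0.4 * stimulus["brightness"] > 0.5 else "B"

print(rule_based({"color": "red"}))                               # A
print(information_integration({"size": 0.9, "brightness": 0.3}))  # A
```

The first function is the kind of hypothesis the explicit, frontal system can test and declare; the second is the kind of weighted boundary the basal-ganglia procedural system gradually learns from feedback.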
How Categorization Develops in Childhood
Categorization appears remarkably early. By six months of age, infants can form categories of animals and vehicles. In habituation studies, researchers show babies a series of different animals until the babies lose interest, then introduce a vehicle. The babies perk up, looking longer at the new item, which tells researchers the infants had grouped the animals together and recognized the vehicle as something different. This works even when the objects appear in varied and mismatched backgrounds, suggesting that infants are not just responding to surface-level scene similarities.
Social categorization emerges early too. By 17 months, infants show expectations about group loyalty. In one study, toddlers were familiarized with two novel groups marked by labels and visual features. When an agent chose to help a member of the other group instead of their own, infants looked longer, a sign of surprise. This suggests that even before children can articulate the concept of a “team” or “group,” they already expect group members to support one another.
Social Categorization and Bias
The same cognitive machinery that sorts apples from oranges also sorts people into social groups, and this has significant consequences. Adults spontaneously classify people into social categories based on race, gender, age, and many other dimensions, and these classifications guide learning, perception, and behavior, often without conscious awareness.
Research using the minimal group paradigm, where people are divided into groups on the basis of something trivial like a coin flip, consistently shows that merely being placed in a group is enough to trigger an own-group positivity bias. People extend more positive qualities to members of their group and more negative qualities to outsiders, even when the groups are arbitrary and meaningless. This bias appears in children as well, suggesting it is deeply rooted in how the mind organizes social information.
Group membership functions as a kind of schema that organizes knowledge, directs attention, and reinforces existing beliefs. Throughout childhood and into adulthood, the practice of social categorization is shaped by social motivations, including the desire to identify others as either in-group or out-group members and a sensitivity to intergroup dynamics that emerges long before children have the vocabulary to describe it.