Bottom-up processing builds perception from raw sensory data, starting with what your eyes, ears, or skin detect and working upward toward meaning. Top-down processing works in the opposite direction, using your existing knowledge, expectations, and goals to shape how you interpret that sensory data. Your brain uses both simultaneously, and the interplay between them is what creates your moment-to-moment experience of the world.
How Bottom-Up Processing Works
Bottom-up processing is sometimes called “stimulus-driven” because it starts with the stimulus itself. Light hits your retina, sound waves vibrate your eardrum, pressure activates nerve endings in your skin. Those signals travel through a hierarchy of increasingly complex processing stages before you consciously recognize what you’re perceiving. No prior knowledge is required at the start. The raw data does the work.
The classic demonstration of this hierarchy comes from the visual system. Neurons in the earliest stages of your visual cortex respond to simple features: edges, lines at specific angles, contrasts between light and dark. These were first mapped by neuroscientists David Hubel and Torsten Wiesel, who showed that individual brain cells in the visual cortex fire in response to oriented slits of light rather than simple dots. Their work revealed that visual processing is built in layers. “Simple cells” detect edges at particular angles, and their outputs feed into “complex cells” that respond to those same edges regardless of exact position. Each layer combines the outputs of the layer below it, progressively assembling more complex representations. A set of edges becomes a shape. A set of shapes becomes an object.
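To make the layering concrete, here is a minimal Python sketch of the simple-cell/complex-cell idea: toy oriented filters play the role of simple cells, and position-invariant pooling plays the role of complex cells. The image, kernels, and function names are invented for illustration; this is a cartoon of the architecture, not a model of real neurons.

```python
# A minimal sketch of the simple-cell -> complex-cell hierarchy, not a model
# of real neurons. The toy image, kernels, and names are invented.
import numpy as np

# Toy 8x8 grayscale image containing a vertical edge (dark left, bright right).
image = np.zeros((8, 8))
image[:, 4:] = 1.0

# "Simple cells": oriented edge detectors that fire at specific positions.
vertical_kernel = np.array([[-1.0, 1.0]])      # responds to left-to-right brightening
horizontal_kernel = np.array([[-1.0], [1.0]])  # responds to top-to-bottom brightening

def simple_cell_responses(img, kernel):
    """Slide the kernel over the image; each output value is one simple cell."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def complex_cell_response(simple_map):
    """'Complex cell': pool simple-cell outputs so the response no longer
    depends on exactly where in the image the edge fell."""
    return np.max(np.abs(simple_map))

print(complex_cell_response(simple_cell_responses(image, vertical_kernel)))    # strong: edge present
print(complex_cell_response(simple_cell_responses(image, horizontal_kernel)))  # ~0: no horizontal edge
```

Each stage only ever sees the outputs of the stage below it, which is exactly the stimulus-driven, layer-by-layer assembly the paragraph above describes.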
This stepwise construction from simple features to complex wholes is the essence of bottom-up processing. It’s also fast, at least initially: bottom-up attention captures you within about 120 milliseconds of a salient stimulus appearing, though the capture fades quickly, typically within 300 milliseconds. A loud bang, a flash of color in your peripheral vision, a sudden movement: these all seize your attention automatically before you’ve had time to think about them.
How Top-Down Processing Works
Top-down processing flows in the opposite direction. Higher brain areas, particularly the prefrontal cortex and regions in the frontal and parietal lobes, send signals back down to earlier sensory areas, effectively telling them what to look for. These feedback signals carry information about your current goals, expectations, memories, and the context of the situation. The result is that the same sensory input can be perceived differently depending on what you already know or what you’re trying to do.
This isn’t a subtle effect. Neurons in your primary visual cortex literally change what they respond to based on top-down instructions. When an animal is cued to look for a specific shape, neurons in the earliest visual processing areas shift their selectivity to match components of the expected shape. In other words, your brain pre-builds a set of filters tuned to what it thinks is coming, then uses incoming sensory data to confirm or revise that expectation. The psychologist Richard Gregory described this as “hypothesis testing”: your brain generates a best guess about what’s happening and then checks it against the data streaming in from your senses.
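Gregory’s hypothesis-testing idea can be sketched as a simple Bayesian update, where a prior expectation is combined with ambiguous sensory evidence. The hypotheses, priors, and likelihood numbers below are invented for illustration; the point is only that the same input yields different percepts under different expectations.

```python
# A toy sketch of Gregory-style hypothesis testing as a Bayesian update.
# The hypotheses, priors, and likelihoods here are invented for illustration.

def interpret(evidence_likelihood, prior):
    """Combine a prior expectation with sensory evidence; return posterior beliefs."""
    unnormalized = {h: prior[h] * evidence_likelihood[h] for h in prior}
    total = sum(unnormalized.values())
    return {h: p / total for h, p in unnormalized.items()}

# The same ambiguous sensory data: it fits "face" and "vase" almost equally well.
likelihood = {"face": 0.55, "vase": 0.45}

# Two different top-down contexts, i.e. two different priors.
expecting_faces = {"face": 0.8, "vase": 0.2}
expecting_vases = {"face": 0.2, "vase": 0.8}

print(interpret(likelihood, expecting_faces))  # posterior favors "face"
print(interpret(likelihood, expecting_vases))  # same input, posterior favors "vase"
```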
Top-down attention takes longer to deploy than bottom-up attention, roughly 300 milliseconds, but it can be sustained for as long as a task demands. This is the kind of attention you use when scanning a crowded parking lot for your car or listening for your name in a noisy room.
Everyday Examples of Each Process
One of the clearest demonstrations of top-down processing is the word superiority effect. You can identify a letter faster when it appears inside a word than when it appears alone. Research using precise timing measurements found that single words are processed at about 114 items per second, compared to just 68 items per second for individual letters. Your knowledge of language and word patterns speeds up your perception of the component parts. The whole, in this case, genuinely helps you see the pieces.
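The rates quoted above are easier to feel as time per item. A quick conversion (simple arithmetic on the reported figures; the comparison itself comes from the research, not from this snippet):

```python
# Converting the reported processing rates into time per item.
word_rate = 114    # words identified per second
letter_rate = 68   # isolated letters identified per second

print(f"{1000 / word_rate:.1f} ms per word")      # ~8.8 ms
print(f"{1000 / letter_rate:.1f} ms per letter")  # ~14.7 ms
```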
The Stroop effect shows what happens when top-down and bottom-up signals collide. If the word “red” is printed in blue ink and you’re asked to name the ink color, you’ll be slower and more error-prone. Your automatic reading ability (a deeply ingrained top-down process) fires off the meaning “red” before you can focus on the bottom-up sensory information about the actual color of the ink. The two streams conflict, and resolving that conflict takes measurable extra time.
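One way to picture the collision is as a toy race model: the automatic reading response arrives first, and on incongruent trials it has to be overridden before the color response can be given. All the timings below are made up for illustration; real Stroop costs are measured empirically, not derived from numbers like these.

```python
# A toy race model of the Stroop conflict. All timings are invented.

READING_MS = 100       # automatic word reading (fast, deeply ingrained)
COLOR_NAMING_MS = 150  # naming the ink color (slower, bottom-up judgment)
CONFLICT_COST_MS = 80  # extra time to suppress a mismatched reading response

def name_ink_color(word, ink):
    """Return (response, total time in ms) for naming the ink color."""
    time = COLOR_NAMING_MS
    if word != ink:                # the faster reading response has already
        time += CONFLICT_COST_MS   # fired and must be overridden
    return ink, time

print(name_ink_color("red", "red"))   # congruent: ('red', 150)
print(name_ink_color("red", "blue"))  # incongruent: ('blue', 230)
```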
The hollow-face illusion is another striking example. When you look at the inside of a hollow mask, your brain “corrects” the concave shape and perceives a normal convex face instead. Your lifelong experience with faces is so powerful that it overrides the actual sensory data about depth and shadow. This is top-down processing rejecting accurate bottom-up input in favor of what it “knows” should be there.
How the Two Processes Work Together
In everyday life, bottom-up and top-down processing don’t operate in isolation. They run simultaneously, and their relative influence shifts depending on the situation. Research on visual attention has found that the two systems have a complementary, almost compensatory relationship: when bottom-up processes have already done a good job of resolving what’s in a scene (separating objects from backgrounds, identifying salient features), top-down attentional modulation decreases. When the bottom-up signal is ambiguous or cluttered, top-down influence ramps up to fill in the gaps.
Think of walking through a familiar room in dim light. The bottom-up signal is weak and noisy, so your brain leans heavily on its stored model of the room to fill in details. Now think of stepping into a completely unfamiliar space in bright daylight. Bottom-up data is rich and detailed, so your brain relies more on the incoming signal and less on prediction. The balance between the two shifts constantly, hundreds of times per second, calibrated to the reliability of each information stream.
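That reliability-based shifting can be sketched with standard cue-combination math, weighting each source by the inverse of its noise. The scenarios and variance numbers below are invented to mirror the two rooms described above.

```python
# A sketch of reliability-weighted combination (standard cue-combination math,
# applied here as an illustration; the scenarios and numbers are invented).

def combine(prior_value, prior_var, sense_value, sense_var):
    """Weight each source by its reliability (inverse variance)."""
    w_prior = (1 / prior_var) / (1 / prior_var + 1 / sense_var)
    w_sense = 1 - w_prior
    return w_prior * prior_value + w_sense * sense_value

# Familiar room, dim light: stored model is reliable, sensory data is noisy.
print(combine(prior_value=1.0, prior_var=0.1, sense_value=0.0, sense_var=2.0))
# ~0.95: the estimate leans heavily on the prior

# Unfamiliar room, bright light: weak prior, rich sensory data.
print(combine(prior_value=1.0, prior_var=2.0, sense_value=0.0, sense_var=0.1))
# ~0.05: the estimate leans heavily on the senses
```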
This dynamic is well described by predictive coding theory, which frames the brain as a prediction machine. Your cortex continuously generates models of the world by integrating present sensory input (carried by bottom-up, feedforward connections) with prior knowledge and context (conveyed through top-down, feedback connections). Perception is what emerges when these two streams meet. An equilibrium between them is necessary for adaptive functioning, preventing perception from being either too rigidly locked to expectations or too overwhelmed by raw sensory noise.
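A one-variable toy version of a predictive-coding loop looks something like this: the model predicts, the prediction error flows up, and the error revises the model. The signal values and learning rate are arbitrary illustrations, not parameters from the predictive coding literature.

```python
# A minimal predictive-coding loop: predict, compare, update.
# This is a toy one-variable version; the numbers are invented.

sensory_input = 5.0   # the bottom-up signal
prediction = 0.0      # the brain's current top-down model
learning_rate = 0.3   # how strongly prediction errors revise the model

for step in range(10):
    error = sensory_input - prediction   # bottom-up: what the prediction missed
    prediction += learning_rate * error  # top-down: model revised by the error
    print(f"step {step}: prediction={prediction:.2f}, error={error:.2f}")

# The prediction converges on the input; perception, on this view, is the
# settled estimate where prediction and sensory evidence agree.
```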
What Happens When the Balance Shifts
The balance between bottom-up and top-down processing isn’t identical in everyone, and differences in this balance may underlie some of the perceptual traits associated with autism. Research using brain connectivity measurements has found that individuals with higher autistic traits show a pattern consistent with dominant bottom-up processing and relatively weaker top-down influence. In practical terms, this means placing more weight on present sensory stimuli and less weight on contextual information or prior expectations.
This processing style has measurable consequences. People on the autism spectrum tend to be less susceptible to visual illusions (which typically rely on top-down expectations overriding sensory data) and show a stronger bias toward local than global processing, meaning they’re more attuned to fine details than to the overall gestalt of a scene. The detail-driven perceptual style that many autistic people describe, including heightened sensitivity to specific textures, sounds, or visual patterns, aligns with a system where raw sensory input carries more weight relative to top-down filtering.
The Brain Pathways Involved
The two processing directions use physically distinct pathways in the brain. Bottom-up sensory information flows from primary sensory areas (like the visual cortex at the back of the brain) forward through the parietal cortex and then to frontal motor areas. This parietofrontal pathway drives fast, relatively automatic responses to what you see or hear. It’s the route that lets you duck before you’ve consciously registered that something is flying toward your head.
Top-down signals travel the reverse route, from the prefrontal cortex and premotor areas back toward sensory regions. This frontoparietal pathway carries cognitive, rule-based information: your current task, the rules you’re following, what you’re searching for. Studies in animals have directly demonstrated that top-down projections from frontal motor and decision-making areas to primary sensory cortex can modulate how strongly neurons respond to sensory input, effectively turning up the volume on relevant signals and turning it down on irrelevant ones.
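The “volume” metaphor corresponds to what is often modeled as gain modulation: the same bottom-up drive, scaled by a top-down gain signal. A minimal sketch, with illustrative numbers:

```python
# A sketch of top-down gain modulation: identical sensory drive, scaled up or
# down by task relevance. The gain values are illustrative, not measured.

def neuron_response(sensory_drive, gain):
    """Firing rate as sensory drive scaled by a top-down gain signal."""
    return max(0.0, gain * sensory_drive)  # rectified: rates can't go negative

drive = 10.0  # fixed bottom-up input to a sensory neuron

print(neuron_response(drive, gain=1.5))  # attended/relevant: boosted to 15.0
print(neuron_response(drive, gain=0.6))  # ignored/irrelevant: damped to 6.0
```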
These aren’t just theoretical distinctions. The two pathways have different speeds, different triggering conditions, and different purposes. Bottom-up captures your attention reflexively and briefly. Top-down directs it voluntarily and for as long as needed. Together, they allow you to both react instantly to unexpected events and stay focused on long-term goals, often within the same second.