Audition is the psychological term for the sense of hearing. It covers everything from how sound waves enter the ear to how the brain interprets those signals as speech, music, alarms, or a dog barking across the street. In psychology, audition is studied not just as a biological process but as a perceptual experience, exploring how the brain transforms raw vibrations in the air into meaningful sound. Humans can detect sounds ranging from about 20 Hz to 20,000 Hz, though most adults lose some high-frequency sensitivity over time, with the upper limit often settling closer to 15,000–17,000 Hz.
How Sound Becomes Perception
Sound starts as vibrations traveling through the air. These vibrations have physical properties that map directly onto what you experience. Frequency, the number of oscillations per second, determines pitch. The A above middle C on a piano, for example, vibrates 440 times per second. Amplitude, the height of the wave, determines loudness. And the overall shape of the wave, its complexity, determines timbre: the quality that lets you tell a guitar from a flute even when both play the same note at the same volume.
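These relationships can be sketched with a toy waveform model: a pure tone is a sine wave, loudness scales with its amplitude, and timbre falls out of how overtones are mixed. The frequency–amplitude pairs below are illustrative, not measurements of real instruments.

```python
import math

def pure_tone(frequency_hz, amplitude, t):
    """Displacement of a pure tone at time t: A * sin(2*pi*f*t)."""
    return amplitude * math.sin(2 * math.pi * frequency_hz * t)

def complex_tone(partials, t):
    """A complex wave is the sum of its partials, given as
    (frequency, amplitude) pairs. Same fundamental with a different
    overtone mix means same pitch but different timbre."""
    return sum(pure_tone(f, a, t) for f, a in partials)

# Two made-up instruments playing A440 at comparable loudness:
# they share a pitch but differ in overtone content, hence in timbre.
guitar_like = [(440, 1.0), (880, 0.5), (1320, 0.3)]
flute_like = [(440, 1.0), (880, 0.1), (1320, 0.02)]
```

Sampling `complex_tone` over a few milliseconds for each instrument would trace two visibly different wave shapes built on the same 440 Hz fundamental.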
Psychology treats these as two separate layers. The physical layer (frequency, amplitude, waveform) belongs to acoustics. The psychological layer (pitch, loudness, timbre) belongs to perception. Audition research sits at the boundary, asking how and why the brain translates one into the other, and where that translation can go wrong.
From Sound Wave to Electrical Signal
The outer ear funnels sound waves into the ear canal, where they hit the eardrum and cause it to vibrate. Three tiny bones in the middle ear amplify those vibrations and pass them to the cochlea, a snail-shaped, fluid-filled structure in the inner ear. This is where the critical conversion happens.
Inside the cochlea, the vibrations create waves in the fluid that travel along the basilar membrane, carrying thousands of tiny hair cells up and down with it. Each hair cell has microscopic projections on top called stereocilia. As these projections bend against an overlying structure, tiny channels at their tips open up, allowing chemicals to rush in and generate an electrical signal. That signal is the language the brain can actually read. Different hair cells respond to different frequencies: cells near the wide end of the cochlea (the base) pick up high-pitched sounds like a baby crying, while those closer to the center of the spiral (the apex) respond to low-pitched sounds like a large dog barking.
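This frequency-to-place mapping is often approximated by the Greenwood function. A minimal sketch, using Greenwood's fitted constants for the human cochlea (the constants are standard published values, but the function here is a simplified illustration, not a clinical tool):

```python
def greenwood_frequency_hz(position):
    """Approximate best frequency at a relative position along the
    human cochlea, where 0.0 is the apex (center of the spiral,
    low pitches) and 1.0 is the base (wide end, high pitches)."""
    A, a, k = 165.4, 2.1, 0.88  # Greenwood's constants for humans
    return A * (10 ** (a * position) - k)

print(round(greenwood_frequency_hz(0.0)))  # apex: about 20 Hz
print(round(greenwood_frequency_hz(1.0)))  # base: about 20,677 Hz
```

Notice that the two endpoints land almost exactly on the 20 Hz to 20,000 Hz range of human hearing: the cochlea's physical layout spans the audible spectrum.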
How the Brain Determines Pitch
Psychologists have proposed two main theories to explain how you perceive pitch, and both turn out to be partially correct.
Place theory suggests that pitch depends on where along the cochlea the hair cells are activated. The base of the basilar membrane responds to high frequencies, the tip responds to low frequencies, and the brain reads pitch based on which location is firing. Temporal theory takes a different approach: it proposes that hair cells fire at the same rate as the sound wave’s frequency, and the brain reads pitch from that firing rate. The problem with temporal theory on its own is that neurons can only fire so fast. There’s a physical ceiling imposed by the way nerve cells reset between signals.
In practice, both mechanisms work together for sounds up to about 4,000 Hz. Above that threshold, the brain relies almost entirely on place cues to determine pitch.
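A toy model can make the ceiling argument concrete. The cutoffs below are rough textbook figures rather than precise physiological constants, and the middle case assumes groups of neurons taking turns across cycles, which goes slightly beyond this article's summary:

```python
def dominant_pitch_cue(frequency_hz,
                       single_neuron_limit_hz=1000,
                       temporal_code_limit_hz=4000):
    """Which cue(s) can plausibly signal pitch at a given frequency.

    A neuron's reset (refractory) period caps its firing rate near
    1,000 spikes per second, and rate-based temporal coding as a
    whole breaks down around 4,000 Hz.
    """
    if frequency_hz <= single_neuron_limit_hz:
        return "temporal and place: single neurons can fire once per cycle"
    if frequency_hz <= temporal_code_limit_hz:
        return "temporal and place: neurons share the load across cycles"
    return "place only: firing rates cannot keep up with the waveform"

for f in (200, 2500, 8000):
    print(f, "Hz ->", dominant_pitch_cue(f))
```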
The Neural Pathway to the Brain
Once hair cells convert sound into electrical signals, those signals travel along the auditory nerve through a series of relay stations in the brainstem. The route passes through several processing nuclei before reaching a structure called the medial geniculate nucleus, which acts as the brain’s auditory switchboard. From there, signals are sent to the primary auditory cortex, located in the temporal lobe on each side of the brain.
One important feature of this pathway is that most of the nerve fibers cross over to the opposite side of the brain early in the journey. This means sound entering your right ear is primarily processed by your left hemisphere, and vice versa. Some fibers stay on the same side, though, so both hemispheres receive input from both ears. This bilateral wiring is part of what makes sound localization possible.
How You Locate Where Sound Comes From
Your brain pinpoints the source of a sound by comparing what arrives at each ear. Two cues drive this process. The first is a tiny difference in arrival time: sound from your left reaches your left ear a fraction of a millisecond before it reaches your right. The second is a difference in intensity: the ear closer to the source receives a slightly louder signal because your head partially blocks the sound wave traveling to the far ear.
In the brainstem, these two cues are initially processed by separate specialized circuits. By the time the information reaches the auditory cortex, however, the brain appears to merge them into a single, integrated sense of where the sound is coming from. Research using brain imaging has shown that the cortex also retains some independent information about each cue, possibly to check whether both cues point to the same source or whether something unusual is happening in the environment.
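The arrival-time cue lends itself to a quick back-of-the-envelope calculation. A minimal sketch, assuming a straight-line path difference (real heads diffract sound around them, so this only approximates the true delay) and an ear separation of roughly 0.18 m:

```python
import math

SPEED_OF_SOUND_M_S = 343.0  # in air at roughly 20 degrees Celsius
EAR_SEPARATION_M = 0.18     # approximate adult value; an assumption here

def interaural_time_difference_s(azimuth_deg):
    """Arrival-time difference between the two ears for a distant source.

    azimuth_deg: 0 = straight ahead, 90 = directly to one side.
    Straight-line path-difference model: d * sin(azimuth) / c.
    """
    extra_path_m = EAR_SEPARATION_M * math.sin(math.radians(azimuth_deg))
    return extra_path_m / SPEED_OF_SOUND_M_S

print(f"{interaural_time_difference_s(90) * 1000:.3f} ms")  # -> 0.525 ms
```

Even a sound directly to one side produces a delay of only about half a millisecond, which is the "fraction of a millisecond" the brainstem circuits have to work with.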
Echoic Memory: The Auditory Buffer
When you hear a sound, your brain holds onto a brief, nearly exact copy of it for a few seconds. This is called echoic memory, and it’s the auditory version of sensory memory. Research measuring brain responses to sound changes has found that echoic memory traces last roughly 4 to 6 seconds before fading. A single presentation of a sound is enough to create a stored trace, and that trace is immediately replaced when the brain detects a new, different sound.
Echoic memory functions as a real-time monitor of your acoustic environment. It’s what allows you to “replay” a sentence someone just said, even if you weren’t paying full attention. It also helps the brain flag new auditory events that might need your focus, acting as a bridge between raw sensation and conscious awareness.
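The behavior described above, single-shot storage, overwrite on new input, and decay after a few seconds, can be sketched as a tiny data structure. This is a toy abstraction, not a model used in the research; the 4-second default follows the lower end of the estimate above:

```python
import time

class EchoicBuffer:
    """Toy model of echoic memory: holds one trace, overwritten by
    each new sound, and readable only within a short decay window."""

    def __init__(self, decay_seconds=4.0):
        self.decay_seconds = decay_seconds
        self._trace = None
        self._stored_at = None

    def hear(self, sound):
        # A single presentation stores a trace; a new sound replaces it.
        self._trace = sound
        self._stored_at = time.monotonic()

    def replay(self):
        # The trace is retrievable only until it fades.
        if self._trace is None:
            return None
        if time.monotonic() - self._stored_at > self.decay_seconds:
            self._trace = None
            return None
        return self._trace

buffer = EchoicBuffer()
buffer.hear("did you say Tuesday?")
print(buffer.replay())  # within the window, the sentence can be replayed
buffer.hear("phone ringing")
print(buffer.replay())  # the new sound has replaced the old trace
```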
When Auditory Processing Breaks Down
Hearing problems generally fall into two categories based on where the system fails. Conductive hearing loss happens when something prevents sound from traveling through the outer or middle ear, such as fluid buildup, earwax, or damage to the tiny bones. Sensorineural hearing loss occurs when the problem is in the inner ear or the auditory nerve itself, often from damage to the hair cells. Since hair cells in humans don’t regenerate, sensorineural loss is typically permanent.
There’s also a less straightforward condition called auditory processing disorder, where the ears function normally but the brain has difficulty making sense of what it hears. People with this condition may struggle to follow conversations in noisy environments or to distinguish similar-sounding words. Diagnosis rates vary enormously depending on which criteria are used, with studies finding rates anywhere from 7% to 96% in assessed populations. This wide range reflects genuine disagreement among professionals about how the condition should be defined and measured, and fewer than half of hearing healthcare professionals report routinely screening for it.
Why Audition Matters in Psychology
Audition is central to how people communicate, navigate their surroundings, and respond to danger. It operates continuously, even during sleep, making it one of the most persistent channels of sensory input. Language acquisition in children depends heavily on intact auditory processing, and disruptions in audition can cascade into difficulties with reading, social interaction, and learning. In cognitive psychology, audition provides a window into how the brain handles competing streams of information, how attention is directed, and how memory stores and retrieves sensory experience. It is, in short, far more than just hearing: it’s a complex chain of physical, neural, and cognitive events that shapes how you understand the world around you.

