What Is Sound in Music? Waves, Pitch, and Timbre

Sound in music is vibration organized into patterns your brain interprets as pitch, rhythm, and tone. At the most basic level, every musical note begins as a physical disturbance: a guitar string vibrates, a drum skin flexes, a column of air oscillates inside a flute. These vibrations travel through the air as pressure waves, reach your ear, and your brain assembles them into what you experience as music. What separates musical sound from random noise is structure. The frequencies in music are discrete and related to each other by simple mathematical ratios, while noise contains random, continuous frequencies with no dominant tone.

How Sound Travels as a Wave

Sound is a mechanical wave, meaning it needs a physical medium to travel through, whether that’s air, water, or a solid surface. When a musician plucks a string, the string pushes against nearby air molecules, compressing them together. Those molecules then push against their neighbors, creating a chain reaction of compressions and expansions (called rarefactions) that ripples outward from the source.

This type of wave is called a longitudinal wave because the air particles move back and forth in the same direction the wave travels, rather than up and down like a wave on water. The particles themselves don’t travel across the room. They simply oscillate around their resting position. What moves is the pattern of compression, and that pattern is what your ear detects as sound. The speed of this wave depends on the medium: sound travels roughly 343 meters per second in air at room temperature, faster through water, and faster still through steel.
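
To put rough numbers on this, here is a minimal Python sketch. It uses a common textbook approximation for the speed of sound in air (about 331.3 + 0.606 × T meters per second, with T in degrees Celsius) and the basic wave relation wavelength = speed ÷ frequency, which isn't stated above but follows from the definitions; the function names are illustrative.

```python
# Sketch: estimate the speed of sound in air and the wavelength of a note.
# Uses the common linear approximation v ~ 331.3 + 0.606 * T (T in deg C)
# and the basic wave relation: wavelength = speed / frequency.

def speed_of_sound(temp_c: float) -> float:
    """Approximate speed of sound in air, in meters per second."""
    return 331.3 + 0.606 * temp_c

def wavelength(freq_hz: float, temp_c: float = 20.0) -> float:
    """Wavelength in meters of a tone at the given frequency."""
    return speed_of_sound(temp_c) / freq_hz

print(speed_of_sound(20.0))   # ~343.4 m/s at room temperature
print(wavelength(440.0))      # one cycle of A4 spans ~0.78 m in air
```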

Pitch: Why Notes Sound High or Low

Pitch is your brain’s interpretation of how frequently a sound wave vibrates, measured in hertz (Hz), or cycles per second. A sound vibrating 440 times per second produces the note A above middle C, known as A4. This frequency, 440 Hz, is the international tuning standard (ISO 16), used across the United Kingdom and the United States. In continental Europe, orchestras often tune slightly higher, typically between 442 and 444 Hz.

Western music divides each octave into 12 steps called semitones. These steps aren’t evenly spaced in terms of raw frequency. Instead, they follow a logarithmic pattern where each semitone is the twelfth root of 2, about 1.0595, times the frequency of the one below it. This means jumping from one octave to the next always doubles the frequency: twelve of those small multiplications compound into an exact factor of 2. The A below middle C vibrates at 220 Hz, A above middle C (A4) at 440 Hz, and the A above that at 880 Hz. Your ear perceives each of these doublings as the “same” note, just higher or lower.
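
To make the semitone math concrete, here is a minimal sketch of the equal-temperament relation implied above: the frequency of a note n semitones away from A4 is 440 × 2^(n/12). The function name is illustrative.

```python
# Sketch: equal-temperament pitch math around the A4 = 440 Hz standard.
# Each semitone multiplies frequency by 2 ** (1/12) ~ 1.0595, so twelve
# semitones (one octave) exactly double it.

def note_frequency(semitones_from_a4: int, a4_hz: float = 440.0) -> float:
    """Frequency of the note n semitones above (or below) A4."""
    return a4_hz * 2 ** (semitones_from_a4 / 12)

print(note_frequency(0))     # 440.0   -> A4
print(note_frequency(-12))   # 220.0   -> A3, one octave down
print(note_frequency(12))    # 880.0   -> A5, one octave up
print(note_frequency(3))     # ~523.25 -> C5, three semitones up
```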

Human hearing spans roughly 20 Hz to 20,000 Hz, but most musical fundamentals sit comfortably between about 100 Hz and 1,000 Hz. Frequencies below that range are often felt physically more than heard, which is why bass-heavy music vibrates your chest at a concert. Frequencies above about 1,500 Hz become less pleasant as standalone musical tones, though they remain important as overtones that shape the character of instruments.

Volume and Dynamics

If pitch corresponds to a wave’s frequency, volume corresponds to its amplitude: how much the air pressure changes with each compression. A louder sound pushes air molecules together more forcefully, creating bigger pressure swings. In music, these changes in volume are called dynamics, and they range from barely audible passages to full-force climaxes.

Your perception of loudness is logarithmic, not linear. Doubling the amplitude of a sound wave increases the perceived loudness by a consistent step regardless of where you started. This is why sound is measured in decibels (dB), a logarithmic scale. A change of about 6 dB corresponds to doubling the amplitude, while a 3 dB change (a doubling of sound power) is a clearly noticeable step in loudness. In modern music production, a technique called dynamic range compression shrinks the gap between the quietest and loudest moments, making everything sound uniformly louder. This is why a pop song on the radio can sound louder than a classical recording even when your volume knob hasn’t moved.
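
These relationships fall straight out of the decibel’s definition: 20 × log10 of an amplitude ratio, or equivalently 10 × log10 of a power ratio. A quick sketch:

```python
import math

# Sketch: the decibel is 20 * log10 of an amplitude ratio (or 10 * log10
# of a power ratio), which is why doubling amplitude adds about 6 dB.

def amplitude_change_db(new_amp: float, old_amp: float) -> float:
    """Level change in dB for a change in wave amplitude."""
    return 20 * math.log10(new_amp / old_amp)

print(amplitude_change_db(2.0, 1.0))   # ~6.02 dB: amplitude doubled
print(amplitude_change_db(0.5, 1.0))   # ~-6.02 dB: amplitude halved
print(10 * math.log10(2))              # ~3.01 dB: sound power doubled
```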

Timbre: Why Instruments Sound Different

A piano and a violin can play the exact same note at the same volume and still sound completely different. The quality that distinguishes them is called timbre (pronounced “TAM-ber”). Timbre comes from the specific mixture of frequencies an instrument produces beyond its fundamental note.

When a guitar string vibrates at 440 Hz, it doesn’t produce only that single frequency. It also generates quieter vibrations at 880 Hz, 1,320 Hz, 1,760 Hz, and so on. These additional frequencies, called harmonics, are whole-number multiples of the fundamental. Every instrument produces a unique recipe of harmonics at different strengths, and that recipe is what gives each instrument its recognizable voice. A flute emphasizes the fundamental with few harmonics, producing a pure tone. A trumpet loads up on upper harmonics, creating a brighter, brassier character. Through a mathematical process called Fourier analysis, any complex musical sound can be broken down into this series of simple component frequencies.
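
As a rough illustration of such a recipe, the sketch below sums whole-number multiples of a 440 Hz fundamental into a single complex tone. The harmonic amplitudes here are invented for illustration, not measured instrument spectra.

```python
import numpy as np

# Sketch: build a complex tone as a sum of harmonics (whole-number
# multiples of the fundamental). The amplitude "recipes" below are
# illustrative, not measured from real instruments.

SAMPLE_RATE = 44_100
t = np.linspace(0, 1.0, SAMPLE_RATE, endpoint=False)  # one second of time

def tone(fundamental_hz: float, harmonic_amps: list[float]) -> np.ndarray:
    """Sum sine waves at 1x, 2x, 3x... the fundamental frequency."""
    wave = np.zeros_like(t)
    for n, amp in enumerate(harmonic_amps, start=1):
        wave += amp * np.sin(2 * np.pi * n * fundamental_hz * t)
    return wave / np.max(np.abs(wave))  # normalize to the range [-1, 1]

flute_like = tone(440.0, [1.0, 0.2, 0.05])              # fundamental dominates
trumpet_like = tone(440.0, [0.5, 0.9, 1.0, 0.8, 0.6])   # strong upper harmonics
```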

Timbre isn’t just about which harmonics are present. It also depends on how a sound evolves over time. Synthesizers and sound designers describe this using four stages: attack (how quickly the sound reaches full volume), decay (how fast it drops from that peak), sustain (the steady level held while you keep playing), and release (how the sound fades after you stop). A piano has a sharp attack and a long, gradual decay. A bowed violin has a slow attack and a steady sustain. These differences in the shape of sound over time are a major part of why each instrument feels distinct, even on the same note.
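
Here is a minimal sketch of such an envelope, built from straight-line segments with invented timing values; real synthesizers often use exponential curves instead.

```python
import numpy as np

# Sketch: a piecewise-linear ADSR (attack-decay-sustain-release) envelope.
# Durations and sustain level are arbitrary illustrative values.

SAMPLE_RATE = 44_100

def adsr(attack_s, decay_s, sustain_level, sustain_s, release_s):
    """Envelope rising 0 -> 1, falling to the sustain level, then fading out."""
    a = np.linspace(0.0, 1.0, int(attack_s * SAMPLE_RATE), endpoint=False)
    d = np.linspace(1.0, sustain_level, int(decay_s * SAMPLE_RATE), endpoint=False)
    s = np.full(int(sustain_s * SAMPLE_RATE), sustain_level)
    r = np.linspace(sustain_level, 0.0, int(release_s * SAMPLE_RATE))
    return np.concatenate([a, d, s, r])

piano_like = adsr(0.005, 1.5, 0.0, 0.0, 0.1)   # sharp attack, long gradual decay
violin_like = adsr(0.2, 0.1, 0.8, 1.0, 0.3)    # slow attack, steady sustain
```

Multiplying a tone sample-by-sample with one of these envelopes shapes it over time, which is how a synthesizer makes the same harmonic recipe feel percussive or bowed.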

What Makes Music Different From Noise

The distinction between musical sound and noise is mathematical. Musical sounds are periodic, meaning their waveforms repeat in a regular cycle. The component frequencies are discrete, separable, and related by simple ratios. A major chord sounds pleasing in part because the frequencies of its notes form clean mathematical relationships with each other.
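
For example, in just intonation the notes of a major triad sit in the simple frequency ratio 4:5:6. The sketch below builds one from C4 (about 261.63 Hz) and compares it with equal-tempered tuning, which lands close to, but not exactly on, those ratios.

```python
# Sketch: a just-intonation major triad has frequencies in the ratio 4:5:6.
# Starting from the equal-tempered value for C4:

c4 = 261.63
just_triad = [c4 * r / 4 for r in (4, 5, 6)]   # C, E, G
print(just_triad)                              # [261.63, ~327.04, ~392.45]

# Equal-tempered E4 and G4 are close to, but not exactly, those ratios:
print(c4 * 2 ** (4 / 12))   # E4 ~ 329.63 Hz (4 semitones above C4)
print(c4 * 2 ** (7 / 12))   # G4 ~ 392.00 Hz (7 semitones above C4)
```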

Noise, by contrast, contains a continuous spread of frequencies with no dominant tone and no repeating pattern. Think of the hiss of static or the roar of wind. Every frequency within a range is present, distributed randomly. Some sounds fall between the two extremes: a cymbal crash begins as something close to noise, with a wash of unpitched frequencies, but certain resonant peaks give it a recognizable character. Percussion instruments in general live in this gray area, blending periodic and non-periodic vibrations.
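
One way to see the contrast is to compare spectra: a pure tone concentrates its energy at a single frequency, while white noise spreads it roughly evenly across the whole range. A minimal NumPy sketch (the one-second duration and 440 Hz tone are arbitrary choices):

```python
import numpy as np

# Sketch: compare the spectrum of a periodic tone with white noise.
# The tone's energy piles up in one frequency bin; the noise's energy
# spreads roughly evenly, with no dominant peak.

SAMPLE_RATE = 44_100
t = np.linspace(0, 1.0, SAMPLE_RATE, endpoint=False)

tone = np.sin(2 * np.pi * 440.0 * t)                           # periodic: A4
noise = np.random.default_rng(0).uniform(-1, 1, SAMPLE_RATE)   # aperiodic

tone_spectrum = np.abs(np.fft.rfft(tone))
noise_spectrum = np.abs(np.fft.rfft(noise))

# With a 1-second signal, FFT bin k corresponds to k Hz.
print(np.argmax(tone_spectrum))    # 440: one dominant spike at 440 Hz
print(np.argmax(noise_spectrum))   # an arbitrary bin; nothing dominates
```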

How Your Brain Turns Waves Into Music

Raw sound waves are just pressure changes in the air. The transformation into music happens in your brain. The auditory cortex, located along the upper surface of the temporal lobe, contains at least 15 subdivisions that handle different aspects of sound processing. Neurons in the core region respond to simple, pure tones, while those in surrounding areas are better activated by complex sounds like speech and music.

Your brain processes both the timing and frequency content of incoming sound simultaneously. It groups individual notes into larger units, recognizing patterns of pitch, intensity, timbre, and rhythm that form melodies and harmonies. This is why you can follow a single violin line through a full orchestra, or why a familiar song triggers an emotional response before you consciously identify it. Music engages both the analytical and emotional processing systems of your brain, conveying meaning that goes beyond the sum of its acoustic parts.

How Space Shapes Musical Sound

The same piece of music sounds radically different in a tiled bathroom, a carpeted living room, and a cathedral. When sound waves hit hard surfaces like concrete or glass, they reflect back and overlap with the original signal. These overlapping reflections are called reverberation, and they add a sense of richness and space to music. A concert hall with good acoustics lets notes resonate and blend naturally, while a room full of soft, absorbing materials like curtains and carpet deadens the sound, making it feel flat and dry.
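
Digital reverbs often imitate this by convolving a dry signal with a room’s impulse response. In the toy sketch below, a burst of exponentially decaying noise stands in for the room, since it mimics the dense wash of overlapping reflections; the decay time and wet/dry mix are illustrative values, not a production algorithm.

```python
import numpy as np

# Sketch: a toy convolution reverb. Real reverbs use measured room
# impulse responses; here the "room" is exponentially decaying noise.

SAMPLE_RATE = 44_100

def toy_reverb(dry: np.ndarray, decay_s: float = 1.5, mix: float = 0.3) -> np.ndarray:
    """Convolve the dry signal with a synthetic decaying-noise impulse response."""
    n = int(decay_s * SAMPLE_RATE)
    rng = np.random.default_rng(0)
    impulse = rng.uniform(-1, 1, n) * np.exp(-5 * np.arange(n) / n)
    wet = np.convolve(dry, impulse)[: len(dry)]
    wet /= np.max(np.abs(wet))          # normalize the reverberant signal
    return (1 - mix) * dry + mix * wet  # blend dry and reverberant sound

t = np.linspace(0, 1.0, SAMPLE_RATE, endpoint=False)
dry_note = np.sin(2 * np.pi * 440.0 * t) * np.exp(-3 * t)  # plucked-string-ish A4
wet_note = toy_reverb(dry_note)
```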

Musicians and producers have long exploited this relationship. Orchestras choose performance venues partly based on how the room’s shape and materials complement their sound. In recording studios, engineers use both physical spaces and digital tools to add or remove reverberation, shaping the sense of depth and atmosphere in a recording. The reverb on a vocal track can make a singer sound like they’re performing in a vast stone church or a tight, intimate room, all without changing a single note.