What Is the Difference Between Phonetics and Phonology?

Phonetics studies the physical sounds of speech, while phonology studies how those sounds are organized into patterns within a particular language. The simplest way to think about it: phonetics asks “what sounds can humans make and how?” while phonology asks “which sounds matter in this language and why?” Both fields deal with speech sounds, but they approach them from fundamentally different angles.

What Phonetics Covers

Phonetics is concerned with the concrete, measurable side of speech. It examines how your mouth, tongue, and vocal cords physically produce sounds, what those sounds look like as acoustic waves, and how your ear perceives them. Because every human shares roughly the same vocal anatomy, phonetics tends to be universal. The same physical principles govern how a speaker of Japanese and a speaker of Swahili produce a vowel sound, even if the languages use that sound differently.

The field breaks into three main branches. Articulatory phonetics describes the movements of your vocal organs: where your tongue sits, whether your lips are rounded, whether air flows through your nose. Acoustic phonetics measures the physical properties of the sound wave itself, things like frequency and amplitude. And auditory phonetics focuses on how listeners perceive and process those signals. Researchers use software like Praat to analyze spectrograms, pitch contours, formant frequencies, and other acoustic details that would be impossible to detect by ear alone.

When you transcribe speech phonetically, you capture as much physical detail as possible. This kind of transcription goes inside square brackets and can include fine-grained details like whether a sound is aspirated (produced with a puff of air), nasalized, or subtly shifted in position. For example, a narrow phonetic transcription of the word “pin” in English would note that the “p” is aspirated, something a phonological transcription wouldn’t bother recording.

What Phonology Covers

Phonology zooms out from the physical sound and asks a more abstract question: how does this language organize its sounds into a system? Every language picks a subset of the sounds humans can produce and treats some differences between sounds as meaningful and others as irrelevant. Phonology is the study of those choices.

The central concept in phonology is the phoneme, a sound unit that can change the meaning of a word. In English, /t/ and /d/ are separate phonemes because swapping one for the other creates a different word: “train” versus “drain.” That’s called a minimal pair, and it’s the classic test for whether two sounds are distinct phonemes in a language.
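The minimal-pair test is mechanical enough to sketch in a few lines of code. Here is a toy version in Python; the phoneme lists for “train” and “drain” are simplified illustrative transcriptions, not authoritative ones:

```python
def is_minimal_pair(a, b):
    """Two words form a minimal pair if their phoneme sequences
    have the same length and differ in exactly one position."""
    if len(a) != len(b):
        return False
    differences = sum(1 for x, y in zip(a, b) if x != y)
    return differences == 1

# Simplified phonemic transcriptions, treating each phoneme as one token
train = ["t", "r", "eɪ", "n"]
drain = ["d", "r", "eɪ", "n"]
print(is_minimal_pair(train, drain))  # True: only /t/ vs /d/ differs
```

Note that the comparison runs over phonemes, not letters: “train” and “drain” differ by one sound even though spelling could mislead you in other pairs.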

But not every physical difference between sounds matters at the phoneme level. English has two versions of the “L” sound: a “light L” that appears before vowels (as in “lamp”) and a “dark L” that appears at the end of words (as in “hill”). These are physically different sounds, and a phonetician would transcribe them differently. But no English word changes its meaning based on which L you use. You could pronounce “hill” with a light L and it would sound odd, but everyone would still understand you. In phonological terms, these two sounds are allophones, predictable variants of a single phoneme. Phonology cares about the phoneme; phonetics cares about both versions.
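Because allophones are predictable from context, the light/dark L pattern can be written as a rule. A minimal sketch, assuming a deliberately tiny vowel inventory and treating “before a vowel” as the conditioning environment:

```python
VOWELS = {"æ", "ɪ", "ɛ", "i", "ɑ"}  # tiny illustrative subset of English vowels

def realize_l(phonemes):
    """Map phonemic /l/ to its allophone: light [l] before a vowel,
    dark [ɫ] elsewhere (e.g., at the end of a word)."""
    output = []
    for i, p in enumerate(phonemes):
        if p == "l":
            next_is_vowel = i + 1 < len(phonemes) and phonemes[i + 1] in VOWELS
            output.append("l" if next_is_vowel else "ɫ")
        else:
            output.append(p)
    return output

print(realize_l(["l", "æ", "m", "p"]))  # "lamp": /l/ stays light [l]
print(realize_l(["h", "ɪ", "l"]))       # "hill": final /l/ surfaces as dark [ɫ]
```

The key point the code makes concrete: the speaker stores one phoneme, /l/, and the grammar fills in the phonetic variant automatically.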

This is why phonology is language-specific. Two sounds that are separate phonemes in one language can be allophones of the same phoneme in another. The distinction between “L” and “R” is phonemic in English (think “light” versus “right”) but not in some other languages, such as Japanese, which is exactly why that contrast can be difficult for some language learners to hear or produce.

Phonological Rules Shape How Sounds Behave

Phonology also studies the rules that govern how sounds interact with each other in connected speech. These rules are patterns that native speakers follow automatically, usually without realizing it.

One of the most common patterns is assimilation, where a sound changes to become more like a neighboring sound. In the indigenous Brazilian language Ka’apor, for instance, vowels become nasalized when they follow a nasal consonant like “m” or “n.” The word for “duck,” /uruma/, is actually pronounced with a nasalized final vowel. English does this too: say “input” naturally, and you’ll likely pronounce it “imput,” with the “n” shifting to match the “p” that follows it. You’re not being lazy. You’re following a phonological rule of English.
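Both assimilation patterns above can be expressed as simple rewrite rules over a phoneme sequence. The sketch below is a toy model: the “~” marking nasalization and the five-vowel set are assumptions chosen for readability, not standard notation:

```python
NASALS = {"m", "n"}
VOWELS = {"a", "e", "i", "o", "u"}

def nasalize_after_nasal(phonemes):
    """Ka'apor-style rule: a vowel is nasalized after a nasal consonant."""
    out = []
    for i, p in enumerate(phonemes):
        if p in VOWELS and i > 0 and phonemes[i - 1] in NASALS:
            out.append(p + "~")  # "~" marks nasalization in this sketch
        else:
            out.append(p)
    return out

def place_assimilate(phonemes):
    """English-style rule: /n/ surfaces as [m] before /p/, as in 'input'."""
    out = list(phonemes)
    for i in range(len(out) - 1):
        if out[i] == "n" and out[i + 1] == "p":
            out[i] = "m"
    return out

print(nasalize_after_nasal(list("uruma")))  # final /a/ surfaces nasalized
print(place_assimilate(list("input")))      # 'n' becomes 'm' before 'p'
```

Real phonological rules are stated over sound features rather than individual symbols, but the logic is the same: an environment triggers a predictable change.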

Other common rules include elision (dropping sounds in casual speech, like the middle syllable in “chocolate”) and processes that change how vowels are produced depending on the consonants around them. These rules are what phonologists spend much of their time identifying and formalizing. They help explain why the “same” word can sound different in different contexts, and why learning a new language means learning not just its sounds but its sound rules.

Different Notation, Different Goals

One of the most practical differences between the two fields shows up in how they write things down. Phonetics uses square brackets to enclose transcriptions: [pʰɪn] for “pin,” capturing the aspiration on the P. Phonology uses forward slashes: /pɪn/, representing only the meaningful sound categories without allophonic details like aspiration or nasalization. The International Phonetic Association maintains the IPA chart, a standardized set of symbols that both fields rely on, but how much detail you include depends on whether you’re doing phonetic or phonological work.

A phonemic transcription inside slashes deliberately strips away predictable variation. If you know the rules of English phonology, you already know the P in “pin” will be aspirated, so there’s no need to write it. A phonetic transcription inside square brackets captures exactly what was said, including all of that predictable detail and more. Think of phonological transcription as the blueprint and phonetic transcription as the photograph.
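The blueprint-to-photograph step can itself be modeled as rule application. A minimal sketch that derives a phonetic form from a phonemic one, handling only the word-initial aspiration rule mentioned above (the simplified transcriptions are assumptions for the example):

```python
ASPIRATABLE = {"p", "t", "k"}  # English voiceless stops

def phonemic_to_phonetic(phonemes):
    """Add aspiration ([ʰ]) to a word-initial voiceless stop.
    A full model would also check syllable stress and other contexts."""
    out = list(phonemes)
    if out and out[0] in ASPIRATABLE:
        out[0] = out[0] + "ʰ"
    return out

pin = ["p", "ɪ", "n"]
print("/" + "".join(pin) + "/")                        # phonemic: /pɪn/
print("[" + "".join(phonemic_to_phonetic(pin)) + "]")  # phonetic: [pʰɪn]
```

The phonemic form stores only what is contrastive; the rule supplies the predictable detail on the way to the phonetic form.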

How the Two Fields Connect

Despite their different focus, phonetics and phonology are deeply intertwined. Phonological categories are ultimately realized as physical sounds, and the physical properties of speech influence which phonological patterns tend to appear across languages. Certain patterns show up again and again worldwide, like the fact that higher vowels (where the tongue is raised, as in “beat”) tend to be slightly shorter than lower vowels (where the tongue drops, as in “bot”). That’s a phonetic universal rooted in biomechanics, but it feeds directly into the phonological systems languages build.

At the same time, languages take these universal tendencies and run with them in different directions. Similar phonological categories can have noticeably different phonetic realizations across languages. The “t” sounds of English, French, and Hindi are all stops made with the tongue tip, but the exact tongue placement, the amount of aspiration, and the timing differ in ways that are systematic and language-specific. Children learning to speak must acquire both: the abstract phonological system of their language and the precise phonetic details of how that system is physically produced in their speech community.

The simplest summary: phonetics gives you the raw materials of human speech. Phonology tells you what each language builds with them.