Language is neither fully innate nor fully learned. The current scientific picture points to a biological foundation for language, including brain structures and genes that prepare humans to acquire it, but the specific language you speak and the grammar you master depend heavily on the input you receive as a child. The real debate among linguists and cognitive scientists is about how much of the machinery is built in versus assembled through experience.
The Case for Innate Language
The most influential argument that language is innate comes from the linguist Noam Chomsky, who proposed that all human languages, despite their surface differences, share a deep set of structural principles. He called this Universal Grammar: a system of categories, mechanisms, and constraints shared by every human language and considered to be hardwired into the brain. Under this view, children don’t learn language from scratch. They arrive equipped with a kind of blueprint, and their exposure to a particular language simply activates the right settings within that blueprint.
Three observations support this idea. First, universality: all known human languages share certain structural properties, such as the distinction between nouns and verbs, or the ability to form questions and embed one clause inside another. Second, convergence: children raised in vastly different environments, hearing different languages with different amounts of input, all arrive at grammatical competence on a remarkably similar timeline. Third, and most debated, is the “poverty of the stimulus” argument, which holds that children master grammatical rules they could never have figured out from the speech they actually hear.
Grammar Children Shouldn’t Be Able to Learn
The poverty of the stimulus argument is worth unpacking because it’s one of the strongest cards in the nativist hand. Consider how English speakers form yes/no questions. A child hears sentences like “Ali is happy” and the corresponding question “Is Ali happy?” She also hears “That man can sing” and “Can that man sing?” From these examples, a reasonable guess would be: take the first verb in the sentence and move it to the front. This simple rule works perfectly for short sentences.
But now take a more complex sentence: “The man who is happy is singing.” If you follow the simple rule and move the first verb, you get the ungrammatical “Is the man who happy is singing?” The correct question is “Is the man who is happy singing?” To get this right, a child needs to track the structure of the sentence, not just scan for the first verb. Chomsky’s point is that children rarely if ever make this kind of error, even though nothing in their input explicitly teaches them the structural rule. Young children also respect subtle restrictions on contraction: they know that “Who do you want to kiss?” can shorten to “Who do you wanna kiss?”, but that “Who do you want to kiss you?” cannot become “Who do you wanna kiss you?” These patterns suggest children come equipped with expectations about how linguistic structure works.
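The failure of the linear rule is easy to make concrete. The sketch below (a toy illustration in Python; the function name and the tiny auxiliary list are invented for this example) implements “move the first verb to the front” literally and shows exactly where it goes wrong:

```python
def front_first_aux(sentence, auxiliaries=("is", "can")):
    """Linear rule: find the first auxiliary and move it to the front."""
    words = sentence.lower().split()
    for i, word in enumerate(words):
        if word in auxiliaries:
            return " ".join([word] + words[:i] + words[i + 1:]) + "?"
    return sentence + "?"  # no auxiliary found: leave the sentence unchanged

# The rule works on simple sentences...
print(front_first_aux("Ali is happy"))                    # is ali happy?
# ...but fronts the wrong "is" when a relative clause intervenes:
print(front_first_aux("the man who is happy is singing")) # is the man who happy is singing?
```

The second call produces the ungrammatical string because the rule has no notion of clause boundaries; picking the right “is” requires knowing which one belongs to the main clause, which in turn requires a parse of the sentence’s structure, not a left-to-right scan.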
Biological Evidence for Language Readiness
The brain appears to be prepared for language from birth. Newborns respond preferentially to speech over other sounds and show a measurable preference for input at the frequencies typical of human speech. Brain-imaging studies of young infants have shown that babies listening to their mother’s language activate the same left-hemisphere regions (including areas corresponding to Wernicke’s area) that adults use for language comprehension. Experiments with infants as young as 22 days old have found that they process speech sounds better in the right ear, which feeds into the left hemisphere, suggesting an inborn brain asymmetry for language.
Genetics tells a similar story. The gene FOXP2 encodes a transcription factor involved in the learning and execution of speech. In the well-studied KE family, a mutation in this gene caused severe speech difficulties in about half the family members, a condition called verbal dyspraxia, in which the motor coordination needed for speech breaks down while other cognitive abilities remain largely intact. FOXP2 shows consistent expression patterns in brain regions tied to motor learning across species, from songbirds to primates, and it regulates other genes involved in neural connectivity and synaptic plasticity. Animal models carrying mutated versions of the gene show impaired vocalization and motor learning. This doesn’t mean FOXP2 is “the language gene” in any simple sense, but it demonstrates that specific genetic pathways underpin our capacity for speech.
The Case Against a Built-In Grammar
Not everyone accepts that children are born with an abstract grammar waiting to be activated. Usage-based theories, most prominently associated with the developmental psychologist Michael Tomasello, argue that children build language from the ground up using general cognitive abilities: pattern recognition, imitation, and social understanding of what other people intend to communicate.
Observational studies of children’s natural speech reveal that early language is surprisingly narrow and formulaic. Rather than applying broad grammatical rules, young children seem to organize their speech around specific high-frequency words and phrases. A child might master “I’m pushing it” and “I’m eating it” without any general concept of a verb or of sentence structure. These “slot-and-frame” patterns, such as “I’m VERB-ing it,” look like grammar but are actually built from the repetition and systematic variation children encounter in what adults say to them. Only gradually do children generalize across these specific patterns to form more abstract rules.
Tomasello’s Verb Island Hypothesis captures this idea: children initially learn individual verbs tied to the specific sentence structures in which they’ve heard them, rather than slotting verbs into a pre-existing grammar. The consistent parts of phrases are learned through imitation, while the variable slots emerge from noticing communicative parallels across different situations. Under this view, what looks like innate grammar is actually the product of powerful statistical learning applied to rich, structured input.
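Slot-and-frame learning of this sort can be caricatured in a few lines of code. The sketch below (a toy illustration; the utterances, function name, and threshold are invented) finds frames by looking for a word position that varies while the surrounding words stay constant:

```python
from collections import defaultdict

def extract_frames(utterances, min_fillers=2):
    """Find frames where one position varies across otherwise-identical utterances."""
    frames = defaultdict(list)
    for utterance in utterances:
        words = utterance.split()
        for i in range(len(words)):
            # Replace position i with a slot marker and record what filled it.
            frame = tuple(words[:i] + ["_"] + words[i + 1:])
            frames[frame].append(words[i])
    # Keep only frames attested with several distinct fillers: a productive slot.
    return {" ".join(frame): fillers
            for frame, fillers in frames.items()
            if len(set(fillers)) >= min_fillers}

child_input = ["I'm pushing it", "I'm eating it", "I'm kicking it"]
print(extract_frames(child_input))  # {"I'm _ it": ['pushing', 'eating', 'kicking']}
```

The learner ends up with a frame “I’m _ it” plus a list of words that fill the slot, without ever representing “verb” as an abstract category; on the usage-based view, the abstraction comes later, when frames themselves are compared across many such islands.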
What Nicaraguan Sign Language Revealed
One of the most striking natural experiments in language history occurred in Nicaragua in the 1970s and 1980s, when deaf children who had never been exposed to any formal sign language were brought together in new schools. The first generation of students developed a shared communication system, but it was relatively simple and inconsistent. When a second generation of younger children entered the community and learned from the first generation, something remarkable happened: they spontaneously added grammatical complexity that their older peers had never used.
The second cohort developed consistent spatial modulations on signs to indicate who does what to whom, a grammatical device found in established sign languages worldwide. The first cohort had used these spatial markers inconsistently, sometimes one way, sometimes another; the younger learners regularized them into a reliable grammatical device. Crucially, the innovations came primarily from children, not adults. This case is often cited as evidence for innate grammatical capacity because the children created structure that wasn’t present in their input. But it also illustrates how social interaction and generational transmission shape language, lending support to both sides of the debate.
The Sensitive Period for Learning
One piece of evidence that cuts across both camps is the existence of a sensitive period for language acquisition. The neurologist Eric Lenneberg argued in 1967 that language acquisition needs to take place between roughly age two and puberty, a window he believed coincides with the brain’s lateralization process. After this period, achieving native-level proficiency becomes dramatically harder.
The pattern shows up clearly in second-language learning: the earlier someone begins, the higher their ultimate attainment. Some researchers describe this as a ceiling effect, where performance doesn’t vary much with age during the sensitive period, followed by a steep decline. Others find a more gradual slope that steepens around puberty. Either way, the broader generalization holds: children’s brains are uniquely receptive to language in ways adult brains are not. This time-limited receptivity suggests biological programming, even if the specific content of language is learned from the environment.
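The two hypothesized shapes can be sketched as simple functions of starting age. The numbers below are purely illustrative (not fitted to any dataset), and both function names are invented for this example:

```python
def ceiling_then_decline(start_age):
    """Flat ceiling through the sensitive period, then a steep drop."""
    if start_age <= 12:
        return 1.0
    return max(0.3, 1.0 - 0.08 * (start_age - 12))

def gradual_decline(start_age):
    """A gentle slope from the start that steepens around puberty."""
    attainment = 1.0 - 0.02 * start_age
    if start_age > 12:
        attainment -= 0.05 * (start_age - 12)
    return max(0.3, attainment)

for age in (3, 10, 14, 25):
    print(age, round(ceiling_then_decline(age), 2), round(gradual_decline(age), 2))
```

Both curves predict the same broad fact, that earlier starters reach higher ultimate attainment, but they disagree about learners who begin just inside the window, which is exactly where the empirical debate between the two descriptions lies.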
What Makes Human Language Unique
One feature consistently proposed as the dividing line between human language and animal communication is recursion: the ability to embed one structure inside another, and to apply that same operation again and again without limit. This is what lets you produce a sentence like “I read the article that the journalist wrote that explained the theory that Chomsky proposed that Fitch argued made human language unique.” Each “that” clause is nested inside the previous one, and in principle, you could keep going forever.
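Recursion in this sense is the same device programmers use: a rule that invokes itself. The toy function below (the phrase fragments are invented for illustration) applies one embedding rule over and over to build sentences of arbitrary depth:

```python
def embed(depth):
    """One rule, applied recursively: NP -> "the claim that " + NP + " is wrong"."""
    if depth == 0:
        return "the theory"
    return f"the claim that {embed(depth - 1)} is wrong"

print(embed(1))  # the claim that the theory is wrong
print(embed(2))  # the claim that the claim that the theory is wrong is wrong
```

Nothing in the rule limits the depth: human working memory gives out after a few levels of embedding, but the grammar itself imposes no bound, and that unboundedness is the point of the recursion argument.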
While animal communication systems show cultural transmission, reference to external objects, and even basic syntax, no non-human species has demonstrated the open-ended recursive capacity that characterizes human language. Comparative brain studies point to specific evolutionary divergences that may explain this gap, including the distinct structural organization of Broca’s area in humans, a more extensive and specialized bundle of nerve fibers connecting language regions (the arcuate fasciculus), and overall greater cortical connectivity.
Where the Science Stands Now
The current consensus is moving away from the extremes. The old framing of “innate grammar module” versus “everything is learned” has given way to a more nuanced picture. Most researchers accept that humans have biological predispositions that make language acquisition possible: left-hemisphere dominance detectable from birth, genetic infrastructure like FOXP2, and neural architecture that no other species shares. At the same time, the idea that a detailed Universal Grammar is encoded in our genes has lost ground. The trend in neuroscience is to view complex linguistic abilities like syntax not as localized in a single brain region but as emerging from dynamic, large-scale network interactions that develop through experience.
The practical answer is that your brain comes wired to expect language and to extract its patterns with extraordinary efficiency, especially during childhood. But which patterns you extract, which sounds you distinguish, and which grammatical structures you internalize depend entirely on the language community you grow up in. Biology provides the engine. Experience provides the fuel and the destination.