Research in human development began not in laboratories but in nurseries, with parents writing detailed diaries about their own children. For centuries, questions about how children grow and learn belonged to philosophy. The shift toward scientific study started in the mid-1800s, when naturalists and physicians began applying systematic observation to infant behavior, eventually building the foundations of developmental psychology as a formal discipline.
Philosophers Asked the Questions First
Before anyone tried to study children scientifically, thinkers like John Locke and Jean-Jacques Rousseau shaped how Western culture understood childhood. Locke, writing in the 1690s, described the infant mind as a blank slate shaped entirely by experience. Rousseau, almost a century later, argued the opposite: children are born with an innate goodness that unfolds naturally through stages. Neither philosopher conducted anything resembling an experiment, but their competing ideas framed a debate that would drive research for the next 200 years. Is development shaped more by nature or by experience? That question gave early scientists something concrete to investigate.
Darwin Brought Child Observation Into Science
Charles Darwin is best known for evolutionary theory, but he also played a surprisingly direct role in launching developmental research. Starting in 1839, when his first child was born on December 27, Darwin began keeping a detailed notebook of observations. He tracked reflexes, emotional expressions, early vocalizations, and what appeared to be the first signs of intentional behavior. He continued recording observations of his children for nearly two decades.
Darwin didn’t publish these notes until 1877, when a French philosopher named Hippolyte Taine published his own account of infant mental development. Prompted by Taine’s work, Darwin wrote up his decades-old diary as “A Biographical Sketch of an Infant” for the journal Mind. In it, he described how reflex actions such as sneezing, yawning, stretching, sucking, and screaming were already well performed during his newborn’s first week of life, while voluntary movements remained clumsy. He concluded that the imperfection of voluntary actions wasn’t due to weak muscles but to the immaturity of the brain’s capacity for will.
Darwin recorded the specific day his son first laughed (day 113), the day the infant began making sounds “without any meaning to please himself” (day 46), and the moment at 114 days old when the baby deliberately maneuvered Darwin’s finger into his own mouth, repeating the action several times in an act that, Darwin wrote, “evidently was not a chance but a rational one.” He even speculated that children’s seemingly irrational fears might be inherited traces of dangers faced during humanity’s ancient past. His goal was to show that emotional expression, like physical traits, had “a gradual and natural origin” shaped by evolution. Diary studies similar to Darwin’s and Taine’s continued to appear in Mind from 1878 onward, establishing what historians call the “baby biography” tradition.
Preyer Made It Systematic
Baby biographies were valuable but limited. Each parent observed their own child, using their own methods, with no standardized framework. The German physiologist Wilhelm Preyer changed that. In 1882, he published “The Mind of the Child,” a book based on careful, systematic observations of his son’s development. Unlike earlier diary-keepers, Preyer brought the rigor of a trained scientist to the task, recording detailed data on language, motor skills, and sensory responses with consistent methods.
Preyer’s contribution went beyond his own data. He proposed a connection between brain development and language acquisition, and his methods offered a template that other researchers could replicate. The book stimulated infant development studies across Europe and America, and his work was widely acknowledged by the American child study movement. Historians credit Preyer’s systematic approach as a key step in the emergence of developmental psychology as its own field, distinct from philosophy or general biology.
G. Stanley Hall Built a Movement
If Darwin and Preyer laid the scientific groundwork, the American psychologist G. Stanley Hall turned child study into a public cause. In the final decades of the 1800s, Hall was among the most prominent experts on education and child development in the United States. He pioneered the use of questionnaires to collect data from large numbers of children and parents, moving the field beyond single-child case studies toward broader, more representative samples.
Hall also did something no developmental researcher had done before: he actively courted public support. He gave lectures, published popular writing, and enlisted parents and teachers as collaborators in his research. Through these efforts, he helped establish psychology’s relevance to everyday parenting and education. His “child study movement” attracted thousands of participants and created an infrastructure of parent groups, teacher organizations, and university programs dedicated to understanding how children develop. Hall’s lasting influence was institutional. He showed that studying children wasn’t just an academic exercise but something with direct practical value for families and schools.
Intelligence Testing Changed the Scale
A parallel thread emerged in France at the turn of the 20th century. In 1905, psychologist Alfred Binet and his collaborator Theodore Simon developed a test designed to measure the attention, memory, and verbal skills of schoolchildren. Their original purpose was practical: the French government needed a way to identify students who required extra help in school. The Binet-Simon scale became the first standardized tool for assessing cognitive development, and it shifted the field toward quantitative measurement.
The test was later adapted at Stanford University and became the Stanford-Binet Intelligence Scale, one of the most widely used psychological instruments of the 20th century. Whatever its later controversies, the Binet-Simon scale introduced a critical idea to human development research: that mental abilities could be measured, compared across individuals, and tracked over time. This opened the door to studying not just what children do at various ages, but how quickly and in what patterns their abilities grow.
Gesell Mapped What “Normal” Looks Like
Arnold Gesell, working at the Yale Child Study Center in the early 1900s, took the measurement impulse further than anyone before him. He conducted a landmark national study observing more than 10,000 children, recording their verbal, motor, social, emotional, and cognitive development at specific ages. From this enormous dataset, he created the Gesell Developmental Schedules, the first standardized set of milestones describing what typical development looks like at each stage of childhood.
Gesell’s norms gave pediatricians, parents, and educators a common reference point. For the first time, a parent could compare their child’s progress against a scientifically derived timeline. His work also popularized the concept of maturation, the idea that much of development follows a biologically driven timetable rather than being entirely shaped by environment. The milestone charts used in pediatric offices today are direct descendants of Gesell’s original schedules.
Longitudinal Studies Tracked Lives Over Decades
Early developmental research captured snapshots: what a child does at six months, at two years, at school age. But understanding how development unfolds required following the same individuals over long periods. In 1921, psychologist Lewis Terman initiated one of the first major longitudinal studies, tracking more than 1,500 children with IQs above 140. The study followed participants as they aged, observing not just intellectual achievement but mental health, social adjustment, and career outcomes across their lifetimes.
Terman found that the gifted group showed greater drive to achieve along with better mental and social adjustment compared to nongifted peers, challenging the popular stereotype that highly intelligent children were fragile or socially awkward. More importantly for the field, his study demonstrated the power of longitudinal methods. Watching the same people grow and change over decades revealed patterns that no single-point observation could capture, and longitudinal research became a cornerstone of developmental science.
Women Researchers Shaped the Field
The early history of developmental research includes significant contributions from women whose influence has been largely overlooked. Leona Mayer Bayer, who earned her medical degree from Stanford in 1928 as one of just five women in a class of sixty, spent decades studying children’s physical growth. Working with growth specialist Nancy Bayley, she developed methods for predicting adult height and coined the term “growth diagnosis” to describe how a child’s development compares to established norms. In 1953, she co-founded a center for children with developmental delays.
Helen Gofman became a program director and associate professor of pediatrics at UCSF, focusing on children’s developmental needs. Selma Fraiberg advanced the understanding of infant mental health. Carol Hardgrove championed the inclusion of families in pediatric care and the importance of unstructured play. Collectively, these women pushed the field toward treating the whole child, attending to learning differences, complex health needs, disability rights, and the emotional lives of infants at a time when developmental research was still heavily focused on physical milestones and cognitive testing.
From Diaries to a Discipline
The trajectory from Darwin’s nursery notebook to Gesell’s study of 10,000 children took roughly 60 years. In that span, the study of human development moved from philosophical speculation to parent diaries, from single-child observations to large-scale standardized measurement, and from cross-sectional snapshots to lifelong longitudinal tracking. Each step built on the last. Darwin showed that infant behavior was worth recording scientifically. Preyer showed it could be done systematically. Hall showed it mattered to the public. Binet showed it could be quantified. Gesell showed what the norms looked like. And Terman showed that development could be traced across an entire life. By the mid-20th century, human development was no longer a curiosity for naturalists but a fully established scientific field with its own methods, institutions, and practical applications.