The first IQ test was invented in 1905 in France by psychologist Alfred Binet and his collaborator Theodore Simon. Their original test contained 30 questions designed to measure the attention, memory, and verbal skill of schoolchildren. Within a decade, the test crossed the Atlantic, was revised for American use, and eventually became the foundation for every major intelligence test still administered today.
The Original 1905 Binet-Simon Scale
Alfred Binet originally trained as a lawyer before turning to psychology. The French government commissioned him to solve a practical problem: identifying which schoolchildren needed extra academic support. Binet and Theodore Simon responded by creating a 30-item test calibrated to what children could “normally” be expected to do at particular ages. They published the scale in 1905 in a journal Binet himself founded and edited, L’Année psychologique.
The test wasn’t designed to measure some fixed, inborn trait. Binet saw it as a diagnostic snapshot of where a child stood relative to peers, not a permanent label. His items ranged from simple sensory tasks for very young children to more complex reasoning problems for older ones. In 1908, Binet and Simon revised the scale and introduced a concept that would reshape the entire field: “mental age.” A child who passed all the tasks typically solved by eight-year-olds was assigned a mental age of eight, regardless of their actual birth date. Examiners recorded chronological age in years and months, then compared it to mental age to identify children who were ahead or behind.
How the Test Came to America
In 1916, Stanford University psychologist Lewis Terman released the “Revised Stanford-Binet Scale,” adapting Binet’s French test for American children. Terman didn’t just translate it. He restandardized the questions on American populations, adjusted difficulty levels, and expanded the test’s range. His version became known as the Stanford-Binet Intelligence Scale, and it was the first widely used test to produce a single number: the intelligence quotient, or IQ. The score was calculated by dividing a child’s mental age by their chronological age and multiplying by 100.
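The ratio formula can be made concrete with a short sketch. This is an illustration of the calculation as described above, not any historical scoring procedure; the function name and the sample ages are invented for the example. Ages are converted to months, since examiners recorded years and months.

```python
# Illustrative sketch of Terman's ratio IQ: mental age divided by
# chronological age, multiplied by 100. Function name and sample
# ages are hypothetical, chosen only to demonstrate the arithmetic.

def ratio_iq(mental_years, mental_months, chrono_years, chrono_months):
    """Ratio IQ = (mental age / chronological age) * 100, with ages in months."""
    mental_age = mental_years * 12 + mental_months
    chrono_age = chrono_years * 12 + chrono_months
    return round(100 * mental_age / chrono_age)

# A 10-year-old performing at the level typical of an 8-year-old:
print(ratio_iq(8, 0, 10, 0))   # 80

# An 8-year-old performing at the level typical of a 10-year-old:
print(ratio_iq(10, 0, 8, 0))   # 125
```

Note how the same two-year gap between mental and chronological age yields different scores depending on the child's actual age, one reason the ratio approach later proved fragile.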
Terman’s version went through multiple forms. “Form L” took its initial from Lewis Terman himself, while “Form M” was named for his graduate student Maud Merrill, who helped develop it as a parallel version. The Stanford-Binet became the dominant intelligence test in the United States for decades and is still in use today, now in its fifth edition.
World War I and Mass Testing
The real turning point for IQ testing came during World War I, when the U.S. military needed a fast way to sort nearly two million recruits into appropriate roles. Individual testing was far too slow, so psychologists developed two group tests: the Army Alpha for men who could read and write English fluently, and the Army Beta for men who could not. The Beta relied on pictures, symbols, and pattern recognition rather than written language.
Between April 1918 and January 1919 alone, over 483,000 men took the Army Beta. Roughly 30% of all tested draftees in 1918 were given the Beta version, and about 20% of those also took the Alpha. This was intelligence testing at an entirely new scale. It proved that large groups could be assessed quickly and cheaply, which opened the door for IQ tests to move into schools, workplaces, and immigration processing in the years that followed.
Spread to Britain and Beyond
Binet’s methods were quickly picked up outside France and the United States. In Britain, psychologist Cyril Burt became the first part-time school educational psychologist appointed by the London County Council in 1913, just eight years after Binet’s original publication. Burt used intelligence testing to study children who were struggling academically and those involved in the juvenile justice system. His 1925 book, The Young Delinquent, helped cement the acceptance of psychometric testing in British education for decades to come.
By the 1920s and 1930s, intelligence testing had become a standard tool in educational systems across Europe and North America. Countries used the tests for school placement, identifying learning disabilities, and sorting students into academic tracks. The tests were influential, but also controversial from the start, particularly when results were used to make sweeping claims about racial or national differences in intelligence.
Modern IQ Tests
Today’s IQ tests look quite different from Binet’s original 30 questions. The most widely used test for adults is the Wechsler Adult Intelligence Scale, now in its fifth edition (WAIS-5), published in 2024. Rather than producing a single mental-age calculation, modern tests break intelligence into multiple areas: verbal comprehension, working memory, processing speed, and visual-spatial reasoning, among others. The Stanford-Binet is also still updated and administered, particularly for children.
Scoring has changed too. Modern IQ tests use a statistical method where 100 represents the average score for a given age group, and each 15-point jump above or below represents one standard deviation from that average. About 68% of people score between 85 and 115. This replaced the old mental-age-divided-by-chronological-age formula, which broke down when applied to adults.
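The deviation-scoring idea can be checked directly against the normal distribution. The sketch below uses Python’s standard-library NormalDist with the mean of 100 and standard deviation of 15 described above; it simply verifies the 68% figure and, as an extra illustration, the percentile of a score two standard deviations above the mean.

```python
from statistics import NormalDist

# Deviation IQ: scores are scaled so that the age-group mean is 100
# and one standard deviation is 15 points.
iq = NormalDist(mu=100, sigma=15)

# Share of the population within one standard deviation (85 to 115):
within_one_sd = iq.cdf(115) - iq.cdf(85)
print(f"{within_one_sd:.1%}")   # 68.3%

# Percentile rank of a score of 130, two standard deviations above the mean:
print(f"{iq.cdf(130):.1%}")     # 97.7%
```

Because the scale is defined relative to each age group’s distribution, the adult-scoring problem of the old ratio formula disappears: a 40-year-old and a 10-year-old are each compared only to their own peers.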
So while the basic idea of a standardized intelligence test dates to 1905, the tests themselves have been continuously rebuilt. What started as a simple tool to help French schoolchildren has become one of the most widely administered psychological assessments in the world, with a history shaped as much by military needs and political pressures as by scientific advances.