IQ, or intelligence quotient, is a standardized score designed to measure cognitive ability relative to others your age. Most modern IQ tests set the average score at 100, with a standard deviation of 15 points. That means roughly 68% of people score between 85 and 115, and scores outside that range become increasingly rare in either direction.
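Because the scale is standardized this way, the share of people in any score range falls directly out of the normal curve. Here’s a minimal sketch in Python; the numbers are properties of the distribution itself, not of any particular test:

```python
from statistics import NormalDist

# IQ scores are standardized to a normal distribution
# with mean 100 and standard deviation 15.
iq = NormalDist(mu=100, sigma=15)

# Share of people scoring between 85 and 115 (within one SD):
print(f"85-115: {iq.cdf(115) - iq.cdf(85):.1%}")  # ~68.3%

# Scores thin out fast: only about 1 in 44 people score
# above 130, two standard deviations over the mean.
print(f"above 130: {1 - iq.cdf(130):.1%}")  # ~2.3%
```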
The number itself doesn’t capture everything about how smart someone is, but it remains one of the most widely studied and consistently predictive measurements in psychology. Here’s what it actually measures, where it comes from, and what it can (and can’t) tell you.
How IQ Testing Started
The idea of measuring intelligence formally goes back to the late 1800s, when Francis Galton made the first systematic attempts to measure cognitive ability, mostly through tests of reaction time and sensory acuity. But practical testing didn’t take off until the early 1900s, when the French government asked psychologist Alfred Binet to create a test that could identify children likely to struggle in school. Binet’s test relied heavily on verbal tasks and introduced the concept of “mental age,” comparing a child’s performance to what was typical for their age group.
Lewis Terman, a professor at Stanford, later refined Binet’s work by testing thousands of children across different ages and establishing average scores for each. This became the Stanford-Binet test, one of the two major IQ tests still used today. The other, the Wechsler scales, breaks intelligence into separate components rather than treating it as a single number. Both tests are regularly updated and re-standardized so that 100 always represents the current average.
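Binet’s “mental age” concept implied the original IQ formula: mental age divided by chronological age, multiplied by 100. Modern tests have replaced that ratio with “deviation IQ,” which instead places a raw score on the bell curve for the test-taker’s age group. Here is a sketch of both calculations, with made-up inputs for illustration:

```python
def ratio_iq(mental_age: float, chronological_age: float) -> float:
    """The original formula: performance age relative to actual age."""
    return mental_age / chronological_age * 100

def deviation_iq(raw_score: float, age_mean: float, age_sd: float) -> float:
    """The modern approach: position on the bell curve for your age group."""
    z = (raw_score - age_mean) / age_sd
    return 100 + 15 * z

# A 10-year-old performing like a typical 12-year-old:
print(ratio_iq(mental_age=12, chronological_age=10))  # 120.0

# A hypothetical raw score one SD above the age-group average:
print(deviation_iq(raw_score=42, age_mean=36, age_sd=6))  # 115.0
```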
What IQ Tests Actually Measure
A common misconception is that IQ tests measure one thing. In practice, modern tests like the Wechsler Adult Intelligence Scale assess several distinct cognitive skills and combine them into a full-scale score. The main areas tested are:
- Verbal comprehension: vocabulary, general knowledge, and the ability to reason with language
- Fluid reasoning: solving novel problems without relying on prior knowledge, such as identifying patterns in abstract shapes
- Visual-spatial processing: mentally manipulating objects and understanding spatial relationships
- Working memory: holding information in your mind and manipulating it over short periods
- Processing speed: how quickly you can scan, identify, and respond to simple information
Each of these areas generates its own index score, so two people with the same full-scale IQ of 110 might have very different cognitive profiles. One might excel at verbal reasoning but struggle with processing speed, while the other shows the opposite pattern. The full-scale number is useful as a summary, but the subscores often reveal more about how a person actually thinks.
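To make that concrete, here’s a toy illustration of two such profiles. The index scores and the simple averaging are invented for demonstration; real full-scale scores come from norm tables, not a plain mean:

```python
# Two invented profiles with the same summary score but opposite
# strengths. (Real full-scale IQs are derived from norm tables,
# not by averaging index scores; this is illustrative only.)
profile_a = {"verbal": 125, "fluid": 112, "spatial": 110,
             "working_memory": 108, "speed": 95}
profile_b = {"verbal": 95, "fluid": 108, "spatial": 110,
             "working_memory": 112, "speed": 125}

def summary_score(profile: dict) -> float:
    return sum(profile.values()) / len(profile)

print(summary_score(profile_a), summary_score(profile_b))  # 110.0 110.0
```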
The General Intelligence Factor
One of the most consistent findings in intelligence research is that people who score well on one type of cognitive test tend to score well on others. In 1904, Charles Spearman noticed positive correlations across a variety of academic domains, from math to languages, and developed an early form of factor analysis to extract what he called the “g-factor,” a general factor of intelligence that underlies performance on all cognitive tasks.
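Spearman’s extraction can be reproduced with modern tools: simulate a battery of test scores driven by one shared factor, and the first principal component of their correlation matrix recovers it. A sketch using simulated data, not real test results:

```python
import numpy as np

rng = np.random.default_rng(0)
n_people = 500

# One latent "g" per person, plus task-specific noise, drives
# scores on five different simulated cognitive tests.
g = rng.normal(size=n_people)
loadings = np.array([0.8, 0.7, 0.75, 0.6, 0.65])  # how strongly each test taps g
scores = g[:, None] * loadings + rng.normal(scale=0.6, size=(n_people, 5))

# Spearman-style extraction: the largest eigenvalue of the
# correlation matrix is the shared factor underlying all tests.
corr = np.corrcoef(scores, rowvar=False)
eigenvalues = np.linalg.eigvalsh(corr)  # sorted ascending
g_share = eigenvalues[-1] / eigenvalues.sum()
print(f"first factor explains {g_share:.0%} of the variance")
```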
Not everyone agrees that g tells the whole story. Some researchers argue for multiple relatively independent intelligences, and the subscores on modern IQ tests do show meaningful variation within individuals. Still, the g-factor remains one of the most replicated findings in psychology. It’s the reason a single IQ number can be meaningful at all, even if it inevitably glosses over individual strengths and weaknesses.
What Shapes Your IQ
Large-scale twin studies consistently show that about half the variation in IQ scores across a population is attributable to genetic differences. But that figure changes dramatically with age. In infancy, genetics accounts for roughly 20% of the variation in intelligence. By childhood, it rises to about 40%, and by adulthood, it reaches around 60%. At the same time, the influence of the shared family environment (the home you grew up in, the parenting style, the neighborhood) shrinks to nearly zero by adulthood. This doesn’t mean upbringing doesn’t matter. It means that once people leave home and start selecting their own environments, genetic tendencies increasingly express themselves.
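Those heritability figures come largely from comparing identical twins, who share essentially all of their DNA, with fraternal twins, who share about half. A common back-of-envelope estimate, Falconer’s formula, doubles the gap between the two twin-pair correlations. The correlation values below are ballpark illustrations, not figures from any specific study:

```python
def falconer_heritability(r_mz: float, r_dz: float) -> float:
    """Falconer's formula: h^2 = 2 * (r_MZ - r_DZ).

    r_mz: IQ correlation between identical (monozygotic) twin pairs
    r_dz: IQ correlation between fraternal (dizygotic) twin pairs
    """
    return 2 * (r_mz - r_dz)

# Illustrative adult twin correlations (not from a specific paper):
print(falconer_heritability(r_mz=0.85, r_dz=0.55))  # 0.6 -> ~60% heritable
```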
Environmental factors still play a large role, especially early in life. Nutrition, exposure to toxins like lead, access to education, and even stress all influence cognitive development. One striking piece of evidence for environmental effects is the Flynn effect: IQ scores worldwide rose by about 2 to 3 points per decade throughout the 20th century. A meta-analysis covering 70 years of data and 300,000 test scores found an average gain of 2.2 points per decade. Since human DNA doesn’t change that fast, these gains reflect improved nutrition, schooling, and familiarity with the abstract thinking that IQ tests reward.
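This is also why tests have to be re-normed: a score is only meaningful against the norms of its era. Here’s a rough sketch of the arithmetic using that 2.2-point figure; actual re-norming is done with fresh standardization samples, not a formula like this:

```python
FLYNN_GAIN_PER_DECADE = 2.2  # average gain from the meta-analysis above

def flynn_adjusted(score: float, norm_year: int, current_year: int) -> float:
    """Roughly deflate a score taken against old norms to today's baseline."""
    decades_elapsed = (current_year - norm_year) / 10
    return score - FLYNN_GAIN_PER_DECADE * decades_elapsed

# An average score against 1955 norms lands well below average today:
print(flynn_adjusted(100, norm_year=1955, current_year=2025))  # 84.6
```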
What Your IQ Score Predicts
IQ scores correlate with a range of real-world outcomes, though not always in the ways people expect. Income is one of the clearest links. Data from a large U.S. longitudinal study found that each additional IQ point is associated with roughly $200 to $600 more in annual income, even after controlling for other factors. Over a 30-point spread, that translates to a meaningful difference in lifetime earnings.
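The per-point arithmetic is simple enough to sketch, though the career-length extrapolation below is illustrative rather than a figure reported by the study:

```python
# Cited range: each IQ point is worth roughly $200-$600 per year.
low_per_point, high_per_point = 200, 600
spread = 30  # e.g., comparing IQ 90 to IQ 120

annual_low = low_per_point * spread    # $6,000 per year
annual_high = high_per_point * spread  # $18,000 per year

# Naive 40-year extrapolation, ignoring inflation, raises, and
# compounding (an illustration, not a result from the study):
print(f"${annual_low * 40:,} to ${annual_high * 40:,} over a career")
```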
Interestingly, the same research found no statistically significant relationship between IQ and total wealth. High earners don’t necessarily accumulate more assets, likely because spending habits, financial decisions, and inheritance muddy the picture.
Health is another area where IQ has surprising predictive power. A study tracking people with childhood IQ scores of 135 or higher over 64 years found that every 15-point increase in childhood IQ was associated with a 32% lower risk of death, independent of social class. This relationship held until scores reached about 163, at which point the protective effect leveled off. The reasons likely involve a mix of healthier behaviors, better access to information, and the kinds of environments that high-scoring individuals tend to end up in.
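Risk reductions like this are conventionally treated as multiplicative per increment, so a 32% drop per 15 points implies compounding relative risks. The sketch below illustrates that convention; the compounding itself is a modeling assumption, not a result reported by the study:

```python
# Hazard ratio per 15 IQ points, from the cited finding:
HR_PER_15_POINTS = 1 - 0.32  # 0.68

# Assuming the usual multiplicative convention for hazard ratios:
for points in (15, 30, 45):
    relative_risk = HR_PER_15_POINTS ** (points / 15)
    print(f"+{points} points -> {relative_risk:.2f}x baseline mortality risk")
```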
What Happens in the Brain
Neuroimaging research points to a network of regions in the front and sides of the brain that work together to support the kind of thinking IQ tests measure. People with higher scores tend to have denser gray matter (the brain tissue where processing happens) and stronger white matter connections (the wiring between regions) in frontal and parietal areas. A leading model called the parieto-frontal integration theory proposes that intelligence depends not on any single brain region, but on how efficiently these areas communicate with each other.
Studies in children have found that stronger connectivity between right-side parietal and frontal regions is associated with higher nonverbal intelligence. Connectivity between parietal regions and a midline area involved in attention and error monitoring also correlated with higher scores. In short, intelligence appears to be less about the size or activity of any one brain area and more about how well the network functions as a whole.
Known Limitations and Biases
IQ tests have real predictive value, but they also have well-documented problems. Achievement gaps between racial, ethnic, and socioeconomic groups have persisted for decades, with Black and Hispanic students consistently scoring lower on average than White and Asian students on standardized cognitive assessments. These gaps reflect systemic differences in educational opportunity, exposure to environmental stressors, and the cultural assumptions baked into test questions, not innate differences in ability.
Research has shown that even the experience of racial bias can impair working memory, one of the core skills IQ tests measure. Immigrants and non-native speakers also tend to score lower, not because they’re less intelligent, but because the tests were designed and normed primarily within Western, English-speaking contexts. Cognitive skills develop in context, and people from different cultural backgrounds may deploy reasoning strategies in ways that don’t map neatly onto a standardized test format.
In schools, IQ-based placement has disproportionately funneled low-income and minority students into special education, leading to fewer and less enriching educational opportunities. The tests themselves have been updated over the years to reduce obvious cultural bias, but the deeper issue, that any single test inevitably favors certain backgrounds and experiences, remains unresolved. An IQ score is best understood as a measure of specific cognitive skills valued in specific contexts, not a comprehensive verdict on a person’s intellectual potential.