Moore’s Law predicts that the number of transistors fitting on a computer chip doubles approximately every two years. It’s not a law of physics but an observation about the pace of semiconductor manufacturing, first made in 1965 and still roughly holding six decades later.
The Original Observation
In April 1965, Gordon Moore, then at Fairchild Semiconductor, published a short paper in Electronics magazine titled “Cramming More Components onto Integrated Circuits.” He plotted the number of components on a chip going back to 1959 and noticed they doubled every year. He extrapolated that trend forward for the next decade.
By 1975, the pace had slowed slightly, and Moore revised his forecast to a doubling roughly every two years. That two-year cadence is the version most people mean when they reference Moore’s Law today. You’ll sometimes hear “18 months” cited instead, but that figure is commonly attributed to Intel executive David House, who factored in both transistor density and performance improvements. Moore himself settled on the two-year number.
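The two-year cadence is just compound growth: N(t) = N0 · 2^((t - t0)/2). Here is a quick sketch in Python; the 1971 Intel 4004 starting point is illustrative (Moore's original paper predates that chip):

```python
def projected_transistors(n0: float, start_year: int, year: int,
                          doubling_years: float = 2.0) -> float:
    """Project a transistor count forward under a fixed doubling cadence."""
    return n0 * 2 ** ((year - start_year) / doubling_years)

# Illustrative starting point: the Intel 4004 shipped in 1971 with
# about 2,300 transistors. A strict two-year doubling projects that
# into the tens of billions by 2021, which is the right ballpark
# for flagship chips of that era.
print(f"{projected_transistors(2300, 1971, 2021):,.0f}")
```

Swapping in `doubling_years=1.5` reproduces the more aggressive 18-month variant of the forecast.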
Why Transistor Count Matters
Transistors are the tiny electrical switches inside a processor that represent the 0s and 1s of digital computing. The more you can pack onto a single chip, the more calculations it can handle, the less power each operation requires, and the smaller (or more capable) the device can be. This is why the doubling trend has had such dramatic real-world effects: it’s the reason a smartphone today has more computing power than the room-sized machines of the 1960s.
When Moore made his original observation, a cutting-edge chip held a few dozen transistors. By the mid-1980s, that number was in the hundreds of thousands. By 2000, it reached tens of millions. Today’s chips are measured in billions.
Where Transistor Counts Stand Today
As of 2024, NVIDIA’s Blackwell B100 GPU holds about 208 billion transistors, making it the GPU with the highest count ever produced. It’s built on a custom version of TSMC’s 4-nanometer process. On the consumer side, Apple’s M3 Ultra system-on-chip holds around 184 billion transistors, fabricated on TSMC’s 3-nanometer process. Even mid-range GPUs like AMD’s Navi 48, released in 2025, pack nearly 54 billion transistors onto a chip roughly the size of a large postage stamp.
These numbers would have been unimaginable even 15 years ago. For context, the original iPhone’s processor in 2007 had around 100 million transistors. The leap from 100 million to 208 billion in less than two decades is a roughly 2,000-fold increase, or about eleven doublings. That works out to a doubling every year and a half or so, actually a shade ahead of the two-year cadence Moore settled on.
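That back-of-the-envelope comparison can be checked directly: given a start count, an end count, and the elapsed years, the implied doubling period is years · ln 2 / ln(end/start). A small sketch using the figures above:

```python
import math

def implied_doubling_years(n_start: float, n_end: float, years: float) -> float:
    """Doubling period implied by growth from n_start to n_end over `years`."""
    return years * math.log(2) / math.log(n_end / n_start)

# ~100 million transistors in 2007 to 208 billion in 2024:
# the implied doubling period comes out to roughly a year and a half.
print(round(implied_doubling_years(100e6, 208e9, 17), 2))
```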
What “Nanometers” Actually Mean
You’ll often see chip manufacturing described by a number in nanometers: 7 nm, 5 nm, 3 nm. These labels suggest the physical size of features on the chip, but the reality is more complicated. Most critical features of a transistor on a so-called 7-nanometer process are considerably larger than 7 nm. This disconnect between the marketing name and actual dimensions has existed for about two decades.
The naming convention dates back to the 1970s and ’80s, when two key measurements on a chip, the gate length and the metal half-pitch, happened to be roughly the same number. That number became the “node” name. But starting in the mid-1990s, chipmakers began shrinking the gate length more aggressively than other features to boost speed and efficiency. By the 130-nm node, for instance, actual gate lengths were closer to 70 nm. The industry kept using the old naming cadence anyway, so today’s node names are more like branding than literal measurements. A “3-nm” chip from TSMC and a “3-nm” chip from Samsung don’t necessarily have the same physical dimensions.
This matters for understanding Moore’s Law because the real measure of progress is transistor density (how many transistors per square millimeter) rather than the marketing label on the manufacturing process.
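Density itself is a simple ratio: transistor count divided by die area. A minimal sketch; the 20-billion-transistor, 150 mm² figures below are hypothetical, chosen only to illustrate the usual unit (millions of transistors per square millimeter, often written MTr/mm²):

```python
def density_mtr_per_mm2(transistors: float, die_area_mm2: float) -> float:
    """Transistor density in millions of transistors per square millimeter."""
    return transistors / die_area_mm2 / 1e6

# Hypothetical chip: 20 billion transistors on a 150 mm^2 die.
print(round(density_mtr_per_mm2(20e9, 150.0), 1))  # → 133.3
```

Comparing this number across vendors is more meaningful than comparing node names, since the names are no longer tied to a shared physical measurement.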
How the Industry Keeps Pace
For decades, the strategy was straightforward: make transistors smaller, fit more on the same size chip. But as features approach the scale of individual atoms, that approach gets exponentially harder and more expensive. The semiconductor industry has responded by getting creative.
One major shift is building chips in three dimensions rather than on a flat surface. Instead of cramming everything onto a single slab of silicon, manufacturers now stack multiple chips vertically or connect them side by side in a single package. TSMC, Intel, and Samsung all have competing versions of this approach. TSMC’s CoWoS technology places chips laterally on an interposer and is evolving toward full 3D stacking. Intel’s Foveros stacks logic chips directly on top of each other. Samsung’s X-Cube takes a similar vertical approach, targeting tight integration between logic and memory layers.
These advanced packaging techniques allow chipmakers to combine specialized components, like a CPU, GPU, memory, and AI accelerators, into a single package without needing to shrink every transistor further. Die-to-die communication in these packages can exceed 2 terabits per second, fast enough that the separate chips behave almost like a single piece of silicon. This modular approach also lets manufacturers mix and match different manufacturing processes for different parts of the chip, using the most advanced (and expensive) process only where it matters most.
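For a sense of scale, 2 terabits per second is a quarter of a terabyte per second, in the same league as main-memory bandwidth, which is why the dies can behave as one. A quick unit-conversion sketch (decimal units assumed):

```python
def tbit_s_to_gbyte_s(tbit_per_s: float) -> float:
    """Convert terabits per second to gigabytes per second (decimal units)."""
    return tbit_per_s * 1e12 / 8 / 1e9

# 2 Tbit/s of die-to-die bandwidth is 250 GB/s.
print(tbit_s_to_gbyte_s(2.0))  # → 250.0
```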
Is Moore’s Law Still Alive?
It depends on how strictly you define it. If you’re counting transistors per chip, the doubling trend has continued remarkably well into the 2020s, especially when multi-die designs are included. NVIDIA’s jump from around 80 billion transistors in its previous-generation Hopper GPU to 208 billion in Blackwell happened in roughly two years, more than doubling the count.
If you define Moore’s Law more narrowly as shrinking a single slab of silicon, the pace has slowed. Each new process node takes longer to develop, costs more to build, and delivers smaller percentage improvements than the generation before it. The shift toward 3D stacking and chiplet architectures is, in part, an acknowledgment that the original playbook of “just make things smaller” is running out of room.
What hasn’t changed is the underlying economic dynamic Moore originally described: the semiconductor industry keeps finding ways to deliver more computing power per dollar, year after year. The methods have evolved from simple miniaturization to a combination of smaller transistors, smarter chip design, and three-dimensional packaging. The trajectory Moore identified in 1965 has proven remarkably durable, even if the path forward looks different than it did a decade ago.