Moore’s Law, in its original strict form, is no longer holding. Transistor counts are still climbing, but the pace has slowed, the costs have shifted dramatically, and the industry is increasingly relying on creative workarounds rather than pure shrinking to keep progress moving. The short answer: the spirit of Moore’s Law survives, but the letter of it does not.
What Moore’s Law Actually Predicted
In 1965, Gordon Moore observed that the number of transistors on a chip was doubling roughly every year. A decade later, in 1975, he revised that estimate to a doubling every two years. That revised pace became the benchmark the semiconductor industry used for decades, and it held remarkably well through the 1990s and 2000s. It was never a law of physics. It was an observation about engineering progress and economic incentives, which made it both powerful and fragile.
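The two-year cadence is just a simple exponential. As a rough illustration (the starting point and the 50-year projection are mine, not Moore's):

```python
def transistors(n0: float, years: float, doubling_period: float = 2.0) -> float:
    """Project a transistor count forward, doubling every `doubling_period` years."""
    return n0 * 2 ** (years / doubling_period)

# The Intel 4004 (1971) carried roughly 2,300 transistors. A strict
# two-year doubling projects, 50 years later (2021):
projected = transistors(2_300, 50)  # 2,300 * 2^25 ≈ 7.7e10
```

That lands at tens of billions of transistors, the same order of magnitude as the largest chips actually shipping around 2021, which is why the two-year cadence held up as an industry benchmark for so long.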
Where Transistor Scaling Stands Today
Chipmakers are still shrinking transistors, but the gains per generation have gotten smaller. At 28nm, a typical chip packed about 14 million transistors per square millimeter. At 3nm, that figure is around 300 million. That’s impressive progress over a decade, but the most dramatic density jumps happened earlier, during transitions like 28nm to 20nm and then to 16nm/14nm. Recent jumps between nodes (5nm to 4nm to 3nm) show noticeably slower improvement.
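The density figures above imply a doubling period you can back out directly. A quick sketch (treating the gap between 28nm and 3nm volume production as roughly ten years, per the paragraph above):

```python
import math

def implied_doubling_years(d0: float, d1: float, years: float) -> float:
    """Doubling period implied by density growing from d0 to d1 over `years` years."""
    return years * math.log(2) / math.log(d1 / d0)

# 14 million/mm² at 28nm to 300 million/mm² at 3nm over ~10 years:
period = implied_doubling_years(14e6, 300e6, 10)  # ≈ 2.3 years
```

A doubling every ~2.3 years instead of every 2 may not sound like much, but compounded over decades the gap becomes enormous, and the per-node slowdown in the most recent generations suggests the effective period is still lengthening.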
TSMC, the world’s leading chip manufacturer, has confirmed that its 2nm process is on track for mass production in 2025, and its even smaller 1.4nm node is expected to enter mass production in 2028. So the roadmap hasn’t hit a dead end. But each step forward now demands dramatically more effort and investment than the last.
The Physics Problem
Silicon transistors face hard physical limits. When the channel that electrons flow through gets extremely short, quantum effects start to interfere. Electrons can “tunnel” through barriers they’re supposed to be blocked by, causing current to leak in ways that make the transistor unreliable. Research has identified an effective channel length of roughly 100nm as the point where conventional transistor designs start breaking down as reliable switches. Engineers have blown past that number using clever structural tricks (like wrapping the gate around the channel in three dimensions), but each workaround adds complexity and cost.
Researchers have demonstrated transistors with gate lengths as small as 1 nanometer using exotic two-dimensional materials like molybdenum disulfide, with graphene serving as electrical contacts. These materials are atomically thin, which gives them natural advantages at tiny scales. They’re compatible with existing manufacturing processes in principle, but scaling them up from lab demonstrations to billions of transistors on a production chip remains a massive engineering challenge.
The Cost Problem
Even where physics cooperates, economics increasingly does not. One of Moore’s Law’s most important side effects was that cost per transistor dropped with each generation. That trend has stalled or reversed. TSMC’s wafer prices climbed from about $5,000 at the 7nm node to $18,000 at 3nm, a more than threefold increase across just a few node generations. Over the same span, the cost per square millimeter of chip area rose from $0.07 to $0.25.
There’s a counterargument: because 3nm packs so many more transistors into each square millimeter, the cost per individual transistor did still fall, by about 7x over the decade from 28nm to 3nm. But that rate of decline is far slower than historical norms, and some analysts argue the cost per transistor hasn’t meaningfully dropped in 15 years once you account for all the additional manufacturing steps. The result is that only the highest-value chips (flagship phone processors, AI accelerators, cutting-edge GPUs) can justify the most advanced nodes; cheaper products stick with older, more economical processes.
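The per-transistor arithmetic can be reproduced from the figures quoted above. Pairing the 28nm/3nm density numbers with the per-square-millimeter cost numbers is my assumption about what the ~7x claim combines:

```python
# Cost per transistor implied by the figures in this article:
# 14M transistors/mm² at $0.07/mm², versus 300M transistors/mm² at $0.25/mm².
cost_28nm = 0.07 / 14e6    # dollars per transistor at 28nm-era density
cost_3nm = 0.25 / 300e6    # dollars per transistor at 3nm-era density
decline = cost_28nm / cost_3nm
print(round(decline, 1))   # 6.0 — in the ballpark of the ~7x figure
```

In other words, density gains still outran cost increases, just barely, and nowhere near the historical pace where each node made transistors several times cheaper.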
How the Industry Is Working Around It
Next-generation lithography tools are buying the industry more room. ASML’s High-NA extreme ultraviolet (EUV) systems can print features 1.7 times smaller than previous EUV machines; because density scales with the square of the linear feature size, that works out to transistor densities roughly 1.7² ≈ 2.9 times higher. These machines cost hundreds of millions of dollars each, but they extend the runway for continued scaling.
The bigger shift, though, is in packaging. Instead of cramming everything onto one monolithic chip, manufacturers are splitting designs into smaller “chiplets” and connecting them inside a single package using advanced interposer technology. This approach lets companies mix and match components built on different process nodes, combining a cutting-edge AI core with a cheaper memory controller, for example. Packaging, once considered a boring back-end step, is now a primary driver of system performance. Japanese chipmaker Rapidus is pursuing panel-level interposer production at 600mm scale specifically to make this approach more economical.
Intel has committed to a goal of fitting 1 trillion transistors on a single package by 2030. That target relies heavily on chiplet integration and 3D stacking rather than shrinking alone. It’s a sign that the industry has redefined what “more transistors” means: not necessarily smaller ones, but more of them connected in smarter ways.
What Industry Leaders Are Saying
The debate plays out in public among the people who would know best. NVIDIA CEO Jensen Huang declared Moore’s Law “dead” in 2022, citing the physical limits of transistor miniaturization. Then he reversed course, predicting a “Hyper Moore’s Law” era driven by AI-specific hardware gains. That reversal captures the tension well: traditional scaling is stalling, but performance improvements from architectural innovation, software optimization, and new packaging techniques are accelerating.
Intel, whose co-founder coined the term, continues to publicly champion Moore’s Law as a guiding principle rather than a precise prediction. Its trillion-transistor-by-2030 roadmap is ambitious but depends on stacking and integration technologies that Moore never envisioned.
Energy Efficiency Tells a Similar Story
A related trend called Koomey’s Law tracks how much computation you can do per unit of energy. Historically, computational energy efficiency doubled every 1.57 years. Recent analysis found that efficiency is still improving exponentially, but the pace has slowed to a doubling every 2.29 years. Performance itself doubles roughly every 1.85 years. Both metrics are still improving, just not as fast as they used to, which mirrors the transistor story almost exactly.
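The difference between those doubling periods is easier to feel as an annual growth rate. A small sketch converting the figures above:

```python
def annual_gain(doubling_years: float) -> float:
    """Yearly multiplicative improvement implied by a given doubling period."""
    return 2 ** (1 / doubling_years)

efficiency_then = annual_gain(1.57)  # ≈ 1.56x per year, the historical pace
efficiency_now = annual_gain(2.29)   # ≈ 1.35x per year, the recent pace
performance = annual_gain(1.85)      # ≈ 1.45x per year
```

Roughly 35% better efficiency per year is still a remarkable rate by the standard of any other industry; it just compounds noticeably slower than the 56% per year the computing world grew accustomed to.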
The Honest Assessment
If you define Moore’s Law strictly as transistor density doubling every two years at decreasing cost per transistor, it no longer holds. Density gains have slowed, costs per wafer have surged, and the cost-per-transistor decline has flattened. If you define it more loosely as the idea that computing capability keeps advancing at an exponential pace, there’s still truth in it, thanks to architectural changes, chiplet packaging, AI-optimized designs, and new lithography tools. The semiconductor industry hasn’t stopped progressing. It’s just progressing differently than it did for the first 50 years.

