Moore’s Law, in its original sense, has already ended. The cost per transistor stopped decreasing at the 28-nanometer node roughly a decade ago, breaking the economic engine that made the observation so powerful. The physical ability to shrink transistors further is also approaching hard limits set by quantum physics. What remains is a slower, more expensive, and increasingly creative effort to keep squeezing more performance out of chips, but the era of predictable doubling every 18 to 24 months is over.
The Physical Wall: Quantum Tunneling Below 5nm
Silicon transistors work by using a gate to control the flow of electrons. Shrink that gate far enough and electrons stop obeying it entirely. Below roughly 5 nanometers, a quantum-mechanical effect called tunneling allows electrons to pass straight through the gate whether it’s “on” or “off.” At that point, the transistor can no longer function as a reliable switch, and silicon hits a fundamental physical limit.
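The key point is that tunneling leakage grows exponentially as the barrier thins, which is why shrinking goes from harmless to fatal over just a few nanometers. A minimal sketch of that intuition, using the textbook WKB approximation for a rectangular barrier (the 1 eV barrier height and the widths are illustrative numbers, not parameters of any real device):

```python
import math

# WKB estimate of electron tunneling through a rectangular barrier:
# T ~ exp(-2 * kappa * d), with kappa = sqrt(2 * m * U) / hbar.
# Illustrative physics sketch, not a device simulation.

HBAR = 1.054571817e-34   # reduced Planck constant, J*s
M_E = 9.1093837015e-31   # electron rest mass, kg
EV = 1.602176634e-19     # joules per electronvolt

def tunneling_probability(barrier_ev: float, width_nm: float) -> float:
    """Approximate transmission probability through a rectangular barrier."""
    kappa = math.sqrt(2 * M_E * barrier_ev * EV) / HBAR  # decay constant, 1/m
    return math.exp(-2 * kappa * width_nm * 1e-9)

# Leakage rises exponentially as the barrier thins from 5 nm to 1 nm.
for width in (5.0, 3.0, 1.0):
    print(f"{width:.0f} nm barrier: T ~ {tunneling_probability(1.0, width):.2e}")
```

Because the exponent scales linearly with width, each nanometer removed multiplies the leakage by a large constant factor; there is no gradual degradation, just a cliff.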
Researchers have pushed experimental gates far smaller than 5nm using exotic materials. A team demonstrated a gate effectively 0.34 nanometers wide, the thickness of a single layer of carbon atoms, using graphene instead of silicon. But the lead researcher on that project put it bluntly: “This could be the last node for Moore’s Law.” Even if new materials allow one or two more generations of shrinking, the road ends at the atomic scale. You cannot make a gate smaller than an atom.
The Economic Wall Hit First
Long before transistors reached their physical limits, the economics broke. For decades, each new generation of chip manufacturing made transistors both smaller and cheaper; the falling cost per transistor, as much as the rising density, is what made Moore’s Law transformative. That stopped at the 28-nanometer node. From that point forward, cost per transistor has been flat to slightly rising.
This matters enormously. Building a cutting-edge chip fabrication plant now costs upward of $20 billion. Only three companies in the world (TSMC, Samsung, and Intel) can even attempt to manufacture at the most advanced nodes. TSMC officially began volume production of 2-nanometer chips in late 2025, with an enhanced version slated for the second half of 2026. Each of these jumps takes longer, costs more, and delivers smaller improvements than the generation before it. The predictable, affordable doubling cycle that defined Moore’s Law for 50 years no longer exists.
The Heat Problem
Packing more transistors into the same area generates more heat. By the early 2000s, chip power densities had already surpassed that of a hot plate, and projections showed them heading toward nuclear reactor territory if the trend continued unchecked. That forced chipmakers to stop increasing clock speeds around 2004 and shift to multi-core designs instead. Most consumer chips today are limited to roughly 100 watts by practical cooling, power supply, and reliability constraints. You can keep adding transistors, but if you can’t remove the heat they generate, they can’t all run at full speed simultaneously.
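The hot-plate comparison comes down to power per unit area, not total watts. A back-of-envelope sketch (the wattages and areas below are rough illustrative figures, not measurements of any specific product):

```python
# Back-of-envelope power density comparison. All figures are
# illustrative assumptions, not specs of any particular chip.

def power_density(watts: float, area_cm2: float) -> float:
    """Power density in watts per square centimeter."""
    return watts / area_cm2

hot_plate = power_density(1000, 100)  # ~1 kW element spread over ~100 cm^2
cpu_die = power_density(100, 1.5)     # ~100 W concentrated on a ~1.5 cm^2 die

print(f"hot plate: ~{hot_plate:.0f} W/cm^2")
print(f"CPU die:   ~{cpu_die:.0f} W/cm^2")
```

Even at a modest 100 watts, the die’s tiny area puts its power density several times above the hot plate’s, which is why heat removal, not transistor count, became the binding constraint.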
What the Industry Leaders Actually Say
Nvidia CEO Jensen Huang has been one of the most prominent voices declaring Moore’s Law dead. His core argument: “The ability for Moore’s Law to deliver twice the performance at the same cost, or at the same performance, half the cost, every year and a half, is over.” He’s not wrong about the traditional definition.
But Huang also points to what replaced it for GPUs specifically. Nvidia’s approach is to optimize across the entire stack simultaneously: chip architecture, system design, software libraries, and algorithms all engineered together. The result, Huang claims, is that Nvidia’s GPUs were “25 times faster than five years ago,” where traditional Moore’s Law scaling over the same period would have delivered only about a 10x improvement. This kind of full-stack optimization is increasingly how performance gains happen, rather than through transistor shrinking alone.
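The 10x baseline in that comparison is just compound doubling: two-fold improvement every 18 months, carried over five years. A quick sanity check of the arithmetic (the 25x figure is the claim quoted above, not an independent measurement):

```python
# Sanity-check the scaling comparison: doubling every 18 months
# compounds to roughly 10x over five years, versus the claimed 25x.

def compound_gain(doubling_period_years: float, years: float) -> float:
    """Total speedup from doubling once every `doubling_period_years`."""
    return 2 ** (years / doubling_period_years)

moore_5yr = compound_gain(1.5, 5)    # 2^(5/1.5), roughly 10x
claimed = 25.0                       # Huang's figure, taken from the quote
implied_annual = claimed ** (1 / 5)  # annual rate needed to hit 25x in 5 years

print(f"Moore's Law pace over 5 years: ~{moore_5yr:.1f}x")
print(f"Annual rate implied by 25x over 5 years: ~{implied_annual:.2f}x")
```

In other words, full-stack optimization would need to sustain nearly a 1.9x annual improvement versus the roughly 1.59x per year that classic doubling every 18 months implies.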
How Chipmakers Are Working Around the Limits
The end of simple shrinking doesn’t mean the end of progress. The industry has shifted to several strategies that extend chip performance without relying on smaller transistors.
- 3D stacking: Instead of making transistors smaller on a flat surface, manufacturers are stacking layers of circuitry vertically. This approach, called 3D heterogeneous integration, connects multiple chip layers using tiny vertical tunnels through the silicon. It increases transistor density, reduces power consumption, and sidesteps the need for ever-smaller feature sizes.
- Chiplets: Rather than building one massive chip, companies now assemble smaller specialized chips (chiplets) into a single package. AMD’s latest processors use this approach, combining computing cores, memory controllers, and other components manufactured at different process nodes, using whichever is most cost-effective for each function.
- New transistor architectures: The shift from flat (planar) transistors to 3D FinFET designs bought the industry about a decade of additional scaling. The next transition, to gate-all-around transistors, wraps the gate completely around the channel for better control at small sizes. These architectural changes deliver real gains even when the raw dimensions barely shrink.
- Software and algorithm improvements: Specialized chips for AI workloads, better compilers, and smarter algorithms can deliver massive performance leaps that have nothing to do with transistor counts. Much of the recent explosion in AI capability came from software and architectural innovation, not from Moore’s Law.
The Realistic Timeline
If you define Moore’s Law strictly as doubling transistor density every two years at decreasing cost, it ended around 2014 when cost scaling stalled at 28nm. If you define it more loosely as continued transistor shrinking at any cost, the physical endpoint arrives within this decade as gate lengths approach the 1-nanometer range. Experimental devices have already reached the single-atom scale in the lab, leaving nowhere further to go.
What replaces it is a messier, more fragmented picture. Performance will keep improving through 3D packaging, chiplets, specialized accelerators, and software optimization. But the improvements will be uneven, expensive, and unpredictable. The simple, elegant observation that Gordon Moore made in 1965, that chips would get reliably smaller, faster, and cheaper on a fixed schedule, belongs to the past.