Will Moore’s Law End? A Realistic Timeline

Moore’s Law is slowing down, but it hasn’t hit a hard wall yet. The original observation, that the number of transistors on a chip doubles roughly every two years, held remarkably steady for decades. Now, each new generation takes longer to arrive and costs significantly more to produce. Whether that counts as “ending” depends on how strictly you define the law, but the era of easy, predictable scaling is already over.

What Moore’s Law Actually Predicts

In 1965, Intel co-founder Gordon Moore noticed that the number of components on a chip was doubling every year. By 1975, he revised the pace to roughly every two years, and an informal compromise of 18 months became the popular shorthand. Analyses of Intel's actual chip density over the decades have found doubling times of roughly 14 and 25 months in the corresponding eras, closely matching Moore's 1965 and 1975 estimates.
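The difference between those doubling periods compounds dramatically. A quick sketch of the arithmetic (the starting count and ten-year horizon are arbitrary round numbers for comparison):

```python
# Illustrative arithmetic: growth under different doubling periods.

def transistors_after(years, doubling_months, start=1_000):
    """Count after `years` if the total doubles every `doubling_months` months."""
    return start * 2 ** (years * 12 / doubling_months)

decade = 10
print(transistors_after(decade, 24))  # 1975 pace: 32x in a decade
print(transistors_after(decade, 18))  # 18-month shorthand: ~100x
print(transistors_after(decade, 12))  # 1965 pace: 1,024x
```

Over a single decade, the 1965 pace yields a thousandfold increase while the 1975 pace yields thirty-two-fold, which is why the choice of shorthand matters when people argue about whether the law still "holds."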

It’s worth remembering that Moore’s Law was never a law of physics. It was an economic observation about what the semiconductor industry could achieve when billions of dollars flowed into manufacturing research. The “law” held because companies made it hold, investing in each successive generation of tooling and materials to keep the trend alive.

The Physics Getting in the Way

Silicon has a hard floor. Below roughly 5 nanometers, silicon transistor gates can no longer reliably control the flow of electrons because of quantum tunneling: electrons slip through barriers they shouldn’t be able to cross, causing the transistor to leak current even when it’s supposed to be off. Today’s leading-edge chips are marketed as 2 to 3 nanometer nodes (the names are marketing labels rather than literal gate dimensions, but real feature sizes are shrinking in step), which means the industry is approaching that boundary fast.
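The reason thinning barriers are so punishing is that tunneling probability falls off exponentially with barrier width. A back-of-the-envelope sketch using the textbook WKB estimate for a rectangular barrier, T ≈ exp(−2κd), illustrates the scaling; the 1 eV barrier height is an illustrative assumption, not a real silicon band offset, and this is not a device model:

```python
import math

HBAR = 1.054_571_8e-34        # reduced Planck constant, J*s
M_ELECTRON = 9.109_383_7e-31  # electron mass, kg
EV = 1.602_176_6e-19          # one electronvolt in joules

def tunneling_probability(width_nm, barrier_ev=1.0):
    """WKB estimate T = exp(-2*kappa*d) for a rectangular barrier."""
    kappa = math.sqrt(2 * M_ELECTRON * barrier_ev * EV) / HBAR  # decay constant, 1/m
    return math.exp(-2 * kappa * width_nm * 1e-9)

for d in (5, 3, 1):
    print(f"{d} nm barrier: T ~ {tunneling_probability(d):.1e}")
```

Shrinking the barrier from 5 nm to 1 nm raises the tunneling probability by many orders of magnitude, which is why leakage goes from negligible to dominant over just a few process generations.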

Researchers in China recently built an experimental transistor gate just 0.34 nanometers wide, the thickness of a single layer of carbon atoms, using graphene instead of silicon. That result demonstrates the absolute physical extreme of how small a gate can get, but it’s a lab demonstration, not a commercial product. It highlights that silicon itself is nearing its useful limit, even if exotic materials can push further in controlled settings.

How Chipmakers Are Buying More Time

The industry has a long history of engineering around physical limits rather than surrendering to them. The latest example is the shift from FinFET transistors to gate-all-around (GAA) designs, also called nanosheet transistors. In older designs, the gate that switches the transistor on and off touches the channel from one or two sides. In a GAA transistor, the gate wraps completely around the channel, giving it much tighter control over electron flow. That means less leakage, more stable performance, and the ability to keep shrinking for a few more generations.

TSMC began production of its 2-nanometer node using GAA architecture in late 2024, with high-volume manufacturing expected through 2025 and 2026. Intel is ramping its competing 18A process (roughly equivalent to 1.8 nm) on a similar timeline. Samsung adopted GAA earlier but stumbled on yields, illustrating how difficult each new step has become. Beyond GAA, researchers are already sketching out successor architectures, but nothing past the current GAA generation is close to production.

The Machines That Print Chips

Shrinking transistors is only possible if you can print patterns that small. The entire advanced chipmaking world depends on a single company, ASML, for the extreme ultraviolet (EUV) lithography systems that pattern features onto silicon wafers. ASML’s latest generation, called High-NA EUV, uses a larger numerical aperture (the “NA” in the name) to sharpen the optics and can print features down to 8 nanometers in resolution. The first of these machines was delivered in December 2023, and they’re expected to support high-volume manufacturing of sub-2-nanometer chips starting around 2025 to 2026.

That equipment will enable continued geometric scaling “into the next decade,” according to ASML. But each generation of lithography machine is more complex and more expensive than the last. There is no announced successor technology beyond High-NA EUV, which means the industry will eventually need yet another breakthrough in how patterns are physically printed onto wafers.

The Rising Cost Problem

Even when physics cooperates, economics may not. Moore originally noted that cost per transistor dropped with each generation, making newer chips not just faster but cheaper per unit of computing. That economic half of the equation is under serious strain.

A 3-nanometer wafer from TSMC currently costs around $25,000 to $27,000. The new 2-nanometer wafers are priced at roughly $30,000, a 10 to 20 percent jump. That’s less dramatic than early rumors of a 50 percent increase, but the trend is clear: each node costs more. TSMC is also raising prices on its older 3, 5, and 7 nanometer processes by single-digit percentages in 2026. The number of companies that can afford to design and manufacture chips at the leading edge has shrunk to a handful. When only a few customers can pay the bill, the broad economic engine that powered Moore’s Law for decades starts to stall.
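Whether cost per transistor still falls depends on how the wafer-price increase compares with the density gain of the new node. A sketch of that calculation, using the wafer prices above; the ~1.15x density gain from 3 nm to 2 nm is an illustrative assumption (real gains vary by design and circuit type):

```python
# Does a pricier wafer still mean cheaper transistors? Compare the price
# ratio against the density gain of the newer node.

def relative_cost_per_transistor(old_wafer_usd, new_wafer_usd, density_gain):
    """New node's cost per transistor relative to the old node (1.0 = equal)."""
    return (new_wafer_usd / old_wafer_usd) / density_gain

# ~$26k (midpoint of $25k-27k) for 3 nm vs. ~$30k for 2 nm,
# assumed 1.15x more transistors per wafer at 2 nm
ratio = relative_cost_per_transistor(26_000, 30_000, density_gain=1.15)
print(f"2 nm cost per transistor ~ {ratio:.2f}x the 3 nm cost")
```

Under these assumptions the ratio lands almost exactly at 1.0: the new node delivers more transistors per wafer, but no longer delivers them more cheaply, which is precisely the strain on the economic half of Moore's Law described above.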

What Might Replace Traditional Scaling

If cramming more transistors onto a flat chip gets too hard or too expensive, there are other paths to more computing power. Chiplet designs, where multiple smaller chips are packaged together as one unit, sidestep the need for a single enormous die manufactured at the most expensive node. Advanced 3D stacking puts memory directly on top of processors, cutting the distance data has to travel and dramatically improving energy efficiency.

New materials could also extend transistor scaling beyond silicon’s limits. Carbon nanotube transistors have been a research favorite for years. Recent work has produced circuits with over a thousand carbon nanotube transistors that function reliably even under extreme radiation, pointing toward potential use in space and military applications as an early commercial foothold. But large-scale carbon nanotube chips that compete with silicon on cost and complexity remain years away from any consumer product.

Performance Is Growing Faster Than Transistor Counts

One reason Moore’s Law matters less than it used to is that raw transistor count is no longer the best measure of computing progress. Software optimizations, specialized chip architectures, and AI-specific hardware have decoupled performance gains from transistor scaling. GPU-driven AI computing, sometimes informally called “Huang’s Law” after Nvidia’s CEO, has seen effective compute efficiency grow by roughly 10 to 13 times per year, a pace that dwarfs the old doubling every two years. That growth comes from a combination of better chip design, smarter software, and architectural innovations rather than just smaller transistors.
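The gap between those two growth rates is larger than it may sound. A quick comparison over a five-year horizon (an arbitrary window, using the low end of the 10-to-13x figure cited above):

```python
# Compound growth: Moore-style doubling every two years vs. a
# "Huang's Law" style ~10x per year.

def growth(factor_per_period, periods):
    """Total multiplier after `periods` periods at `factor_per_period` each."""
    return factor_per_period ** periods

years = 5
moore = growth(2, years / 2)   # doubling every 2 years
huang = growth(10, years)      # ~10x per year
print(f"Moore pace over {years} years: {moore:.1f}x")
print(f"Huang pace over {years} years: {huang:,.0f}x")
```

Doubling every two years compounds to under 6x in five years; 10x per year compounds to 100,000x, which is why effective compute for AI workloads has kept climbing even as transistor scaling slows.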

This shift matters practically. Even if transistor density plateaus in the next decade, the computing power available for tasks like AI training, scientific simulation, and everyday applications can keep climbing through other means. The question “will Moore’s Law end?” increasingly has a follow-up: “and will it matter when it does?”

A Realistic Timeline

The semiconductor industry has firm plans through roughly 2027 to 2028, with 2-nanometer and smaller nodes in active development at TSMC, Intel, Samsung, and Japan’s Rapidus (targeting 2-nanometer production by 2027). Beyond that, the roadmap gets hazy. Most industry analysts expect traditional transistor scaling to reach practical limits sometime in the late 2020s to early 2030s, not because it becomes physically impossible in a lab, but because the combination of quantum effects, manufacturing complexity, and cost makes further shrinking uneconomical for most applications.

Moore’s Law won’t end with a single announcement. It will fade gradually, as the time between nodes stretches, the cost per transistor stops falling, and the industry redirects its energy toward packaging, architecture, and new computing paradigms. In many ways, that transition is already underway.