What Is a Petaflop? Floating-Point Ops Explained

A petaflop is one quadrillion (1,000,000,000,000,000) floating-point operations per second. It’s a unit used to measure the raw computational speed of supercomputers and, increasingly, of AI hardware. To put that number in perspective: if every person on Earth could do one math calculation per second, you’d need about 130,000 Earths working simultaneously to match a single petaflop.
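That analogy holds up to a quick back-of-the-envelope check in Python (the world-population figure of roughly 7.8 billion is an assumption used for illustration):

```python
petaflop = 1e15            # floating-point operations per second
world_population = 7.8e9   # assumed: ~7.8 billion people, 1 calculation/sec each

earths_needed = petaflop / world_population
print(f"{earths_needed:,.0f} Earths")  # ~128,000
```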

What “Floating-Point Operations” Actually Means

The “flop” in petaflop stands for floating-point operation, and the trailing “s” in “flops” means “per second.” A floating-point operation is any math calculation involving numbers with decimal points, like 3.14 × 2.718. These calculations are the backbone of scientific simulations, 3D graphics rendering, and AI training. Every weather forecast, every physics simulation, every frame of a CGI movie is built from billions of these decimal-point calculations.
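To make “operations per second” concrete, here is a minimal Python sketch that times a stream of multiply-adds and reports the resulting rate. Interpreted Python carries heavy overhead, so this lands far below the hardware’s true capability; it simply illustrates what counting flops means:

```python
import time

n = 5_000_000
x, y, acc = 3.14, 2.718, 0.0

start = time.perf_counter()
for _ in range(n):
    acc += x * y              # one multiply + one add = 2 floating-point ops
elapsed = time.perf_counter() - start

rate = 2 * n / elapsed        # operations per second
print(f"~{rate / 1e6:.0f} megaflops (interpreted Python, single core)")
```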

The prefix “peta” means 10 to the 15th power, or one quadrillion. The full scale of computing measurements runs from megaflops (millions) through gigaflops (billions), teraflops (trillions), and petaflops (quadrillions). The term “flops” itself was coined in 1974 by computer scientist David Kuck to describe supercomputer performance in a standardized way.
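Because those prefixes are just powers of ten, converting between them is a single division. A small helper makes the scale explicit (the names here are illustrative, not from any standard library):

```python
# Powers of ten behind each flops prefix.
SCALE = {
    "megaflops": 1e6,   # million
    "gigaflops": 1e9,   # billion
    "teraflops": 1e12,  # trillion
    "petaflops": 1e15,  # quadrillion
    "exaflops":  1e18,  # quintillion
}

def convert(value: float, from_unit: str, to_unit: str) -> float:
    """Convert a performance figure from one flops prefix to another."""
    return value * SCALE[from_unit] / SCALE[to_unit]

print(convert(1, "petaflops", "teraflops"))     # 1000.0
print(convert(12.1, "teraflops", "petaflops"))  # 0.0121
```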

How a Petaflop Compares to Everyday Hardware

Gaming consoles offer a useful reference point. The PlayStation 5 delivers about 10.28 teraflops, and the Xbox Series X hits roughly 12.1 teraflops. A petaflop is 1,000 teraflops, so you’d need about 83 Xbox Series X consoles working in perfect unison to reach a single petaflop of performance.
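The console math is a straight division, rounded up to whole machines:

```python
import math

PETAFLOP_IN_TERAFLOPS = 1000
xbox_series_x = 12.1   # teraflops
playstation_5 = 10.28  # teraflops

print(math.ceil(PETAFLOP_IN_TERAFLOPS / xbox_series_x))  # 83 consoles
print(math.ceil(PETAFLOP_IN_TERAFLOPS / playstation_5))  # 98 consoles
```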

High-end consumer graphics cards are closing the gap, though only for specific types of work. NVIDIA’s RTX 4090 delivers up to 83 teraflops for standard 32-bit (FP32) graphics calculations. For AI-specific tasks using lower-precision math (called FP8), its specialized tensor cores can reach a peak of 1.32 petaflops. That distinction matters: AI workloads don’t always need the same decimal-point precision as scientific simulations, so hardware makers can trade precision for speed in those cases. When people compare petaflop numbers across different systems, the type of calculation being measured makes a significant difference.
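NumPy has no FP8 type, but the same trade-off is easy to see with half precision (FP16) standing in for the low-precision end. A sketch of how significant digits disappear as the format shrinks:

```python
import numpy as np

pi = 3.141592653589793

print(np.float64(pi))  # 3.141592653589793  (~16 significant digits)
print(np.float32(pi))  # 3.1415927          (~7 significant digits)
print(np.float16(pi))  # 3.14               (~3 significant digits)
```

A neural network shrugging off that rounding is what lets tensor cores run so much faster at low precision; a climate model accumulating the same rounding error over millions of time steps cannot.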

The First Petaflop Computer

IBM’s Roadrunner supercomputer became the first machine to break the petaflop barrier on May 25, 2008. It was a landmark moment in computing, comparable to breaking the sound barrier in aviation. Today, the fastest supercomputers operate at over one exaflop, which is 1,000 petaflops. That jump to exascale required a fundamentally new class of machine, with roughly a billion parallel threads of execution, spread across millions of processing cores, working simultaneously.

What Petaflop-Scale Computing Makes Possible

The reason petaflops matter isn’t the number itself. It’s what becomes solvable at that scale. Many scientific problems are essentially impossible below a certain computational threshold, and petascale computing unlocked several of them.

Climate science is one of the clearest examples. Earlier climate models had to simplify atmospheric chemistry and use coarse geographic resolution because the computers couldn’t handle more detail. Petascale systems let researchers simulate climate at regional and continental scales with far more realistic physics, producing predictions accurate enough to guide policy decisions.

In biology, petaflop-level computing enables researchers to predict protein structures with greater accuracy and simulate complex biological systems like viruses and ribosomes. These simulations help scientists understand fundamental life processes that can’t be observed directly in a lab. In seismology, petascale machines simulate earthquake wave propagation across the entire globe at resolutions that weren’t previously possible, improving both earthquake source models and our understanding of Earth’s interior structure. Cosmologists use the same class of hardware to build virtual universes, comparing simulated structures against real observations from ground and space telescopes to test theories about how the cosmos formed and evolved.

Petaflops, Exaflops, and What Comes Next

Petaflop-class machines are no longer the frontier. The current benchmark for cutting-edge supercomputers is the exaflop: one quintillion operations per second, or 1,000 petaflops. Reaching that level wasn’t simply a matter of adding more processors. It required roughly 1,000 times more parallelism than the fastest petascale machines, meaning on the order of a billion cores and hardware threads coordinating calculations simultaneously.
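That billion-way figure falls out of a crude model: divide the target rate by what one thread can sustain. The per-thread number below is an assumption of roughly one flop per cycle at about 1 GHz; real cores do better with vector units, so treat this as an order-of-magnitude sketch rather than a machine specification:

```python
exaflop = 1e18        # target: operations per second
per_thread = 1e9      # assumed: ~1 flop per cycle at ~1 GHz per thread

threads_needed = exaflop / per_thread
print(f"{threads_needed:.0e} concurrent threads")  # 1e+09, i.e. one billion
```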

Still, petaflops remain the most common unit you’ll encounter when reading about computing performance. Most of the world’s supercomputers operate in the tens to hundreds of petaflops range, and AI training clusters from companies like Google, Meta, and OpenAI are typically described in petaflops. When you see a headline about a new AI chip or supercomputer, the petaflop count is the quickest way to gauge where it sits on the scale of computational power.