A supercomputer is a machine built to solve problems that would take ordinary computers years or decades to finish. It does this by linking thousands of processors together so they work on the same problem simultaneously. The fastest supercomputer in the world right now, El Capitan at Lawrence Livermore National Laboratory, can perform 1.809 quintillion calculations per second.
How Speed Is Measured
Supercomputer performance is measured in FLOPS: floating-point operations per second. A floating-point operation is an arithmetic operation (an addition, a multiplication, and so on) performed on numbers with decimal points, the kind of math that dominates scientific simulations. A regular laptop might handle billions of these operations per second (gigaFLOPS). A supercomputer handles quadrillions or quintillions.
The scale runs from megaFLOPS (millions) through gigaFLOPS (billions), teraFLOPS (trillions), petaFLOPS (quadrillions), and now exaFLOPS (quintillions). Reaching that exascale threshold of one exaFLOPS, a billion billion calculations per second, was a major engineering milestone. Three U.S. Department of Energy systems have crossed it: Frontier at Oak Ridge National Laboratory, Aurora at Argonne National Laboratory, and El Capitan at Lawrence Livermore. El Capitan delivers more than 20 times the speed of the system it replaced.
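The jump between those prefixes is easy to underestimate because each step is a factor of a thousand. A minimal sketch in Python makes the arithmetic concrete (the 100-gigaFLOPS laptop figure is an assumption for illustration, not a benchmark result):

```python
# Named FLOPS scales from the text; each prefix is 1,000x the one before.
SCALES = {
    "megaFLOPS": 1e6,   # millions
    "gigaFLOPS": 1e9,   # billions
    "teraFLOPS": 1e12,  # trillions
    "petaFLOPS": 1e15,  # quadrillions
    "exaFLOPS":  1e18,  # quintillions
}

def speedup(fast_flops: float, slow_flops: float) -> float:
    """How many times faster one machine is than another."""
    return fast_flops / slow_flops

# A 1-exaFLOPS machine vs. a hypothetical 100-gigaFLOPS laptop:
ratio = speedup(1 * SCALES["exaFLOPS"], 100 * SCALES["gigaFLOPS"])
print(f"{ratio:,.0f}x")  # 10,000,000x
```

In other words, a job that takes an exascale machine one second would occupy that laptop for roughly four months.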
It’s worth noting that FLOPS measures a specific type of work. Not every operation inside a computer takes the same effort: a division might take 4 to 20 times as many processing cycles as a simple addition. And the theoretical peak performance of a supercomputer is almost never achieved in real workloads. The Top500 list, which ranks the world’s fastest machines, uses a standardized benchmark called High-Performance Linpack (HPL) to level the playing field.
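The gap between theoretical peak and measured performance is usually expressed as an efficiency ratio. A quick sketch, using illustrative numbers rather than official Top500 figures:

```python
def hpl_efficiency(rmax_pflops: float, rpeak_pflops: float) -> float:
    """Fraction of theoretical peak (Rpeak) a machine sustains
    on the HPL benchmark (Rmax)."""
    return rmax_pflops / rpeak_pflops

# Hypothetical exascale system: 1,700 PFLOPS measured vs. 2,700 PFLOPS peak.
eff = hpl_efficiency(1700.0, 2700.0)
print(f"{eff:.0%}")  # 63%
```

Sustained fractions well below 100% are normal even on this highly tuned benchmark; real scientific applications typically achieve a smaller fraction still.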
What’s Inside a Supercomputer
A supercomputer doesn’t use exotic, alien hardware. It uses the same basic building blocks as a regular computer: processors, memory, storage, and network connections. The difference is scale and coordination. Where your laptop has one processor with a handful of cores, a supercomputer strings together thousands of individual computers, called nodes, each containing multiple processors. A modest cluster of 64 nodes, each with a four-core processor, has 256 processing cores working in tandem. The largest machines have millions of cores.
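The core count is just multiplication across the levels of that hierarchy. A small sketch (the 10,000-node configuration is a made-up example, not a specific machine):

```python
def total_cores(nodes: int, processors_per_node: int, cores_per_processor: int) -> int:
    """Total CPU cores in a cluster:
    nodes x processors per node x cores per processor."""
    return nodes * processors_per_node * cores_per_processor

# The modest cluster from the text: 64 nodes, one 4-core processor each.
print(total_cores(64, 1, 4))         # 256

# A hypothetical large system: 10,000 nodes, two 64-core processors each.
print(f"{total_cores(10_000, 2, 64):,}")  # 1,280,000
```

This is how machines reach millions of cores without any single chip being exotic: the multiplication does the work.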
Each node is essentially a standalone computer that handles its own chunk of a problem. The nodes communicate through high-speed networks, often using a technology called InfiniBand, which transfers data between machines far faster than standard Ethernet. This network fabric is critical. If the processors can’t share results quickly enough, the whole system stalls. Getting thousands of nodes to synchronize their work is one of the hardest challenges in supercomputing. Every processor in the system has to time its computation and communication like instruments in an orchestra. If one section falls behind, the performance breaks down.
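The compute-then-communicate pattern described above can be sketched on a single machine using Python's standard multiprocessing module, with worker processes standing in for nodes. This is only an analogy: real clusters use message-passing libraries such as MPI over the network fabric, not shared-machine process pools.

```python
from multiprocessing import Pool

def partial_sum(bounds):
    """Each worker (a stand-in for one node) handles its own
    chunk of the problem independently."""
    lo, hi = bounds
    return sum(i * i for i in range(lo, hi))

if __name__ == "__main__":
    n, workers = 1_000_000, 4
    step = n // workers
    # Split the problem into one chunk per "node".
    chunks = [(i * step, (i + 1) * step) for i in range(workers)]
    with Pool(workers) as pool:
        # Workers compute in parallel; combining their results is the
        # step where a real cluster depends on its network.
        total = sum(pool.map(partial_sum, chunks))
    print(total)
```

Note that the final sum can only be produced once the slowest worker finishes. That is the orchestra problem in miniature: one lagging section holds up everyone.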
Modern supercomputers also rely heavily on GPUs, the same type of chip originally designed for video game graphics. GPUs excel at running thousands of simple calculations at the same time, which makes them ideal for the massively parallel workloads supercomputers handle.
What Supercomputers Actually Do
Supercomputers exist because some problems are too large or too complex for any other approach. Climate modeling is a classic example: simulating the Earth’s atmosphere, oceans, and land surfaces at fine resolution requires processing enormous grids of data through physics equations at each point, over and over, across simulated decades. No single processor could finish that work in a useful timeframe.
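The "physics at every grid point, over and over" pattern can be shown with a toy heat-diffusion model, a drastically simplified stand-in for the equations a real climate code solves. Everything here (grid size, the alpha constant, the update rule) is illustrative:

```python
def diffuse(grid, steps, alpha=0.1):
    """Apply a simple diffusion rule at every interior grid point,
    repeatedly -- a miniature version of a climate model's time loop."""
    rows, cols = len(grid), len(grid[0])
    for _ in range(steps):
        new = [row[:] for row in grid]
        for r in range(1, rows - 1):
            for c in range(1, cols - 1):
                neighbors = (grid[r-1][c] + grid[r+1][c] +
                             grid[r][c-1] + grid[r][c+1])
                new[r][c] = grid[r][c] + alpha * (neighbors - 4 * grid[r][c])
        grid = new
    return grid

# A tiny 5x5 "atmosphere" with one hot cell in the middle:
field = [[0.0] * 5 for _ in range(5)]
field[2][2] = 100.0
result = diffuse(field, steps=10)  # heat spreads outward over time
```

A production climate model replaces this 5x5 grid with millions of points in three dimensions, swaps the toy rule for coupled fluid-dynamics equations, and runs for millions of timesteps, which is why the work must be split across thousands of nodes.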
Drug discovery is another major application. Researchers use supercomputers to screen millions of chemical compounds against a protein target through virtual simulations, identifying promising drug candidates before any lab work begins. This involves molecular docking, where software predicts how a small molecule fits into the pocket of a protein, and molecular dynamics simulations, which model how proteins fold and flex over time. One research group used this approach to identify compounds from traditional Chinese medicinal plants as potential inhibitors of a coronavirus protein. These simulations generate massive amounts of pharmacological data that would be impossible to produce through physical experiments alone.
National security is the original driver behind many top machines. El Capitan was built for the National Nuclear Security Administration’s stockpile stewardship program, which uses simulations to ensure the reliability of the U.S. nuclear arsenal without underground testing. Other common workloads include astrophysics simulations, genomics, materials science, and weather forecasting.
The Role of Software
Hardware alone doesn’t make a supercomputer useful. The software layer determines whether thousands of processors actually cooperate effectively. Every machine on the Top500 list runs Linux as its operating system, largely because it’s open source and highly customizable for different hardware configurations.
On top of the operating system sits a job scheduler, software that manages which tasks run on which nodes and when. Slurm, which stands for Simple Linux Utility for Resource Management, is the dominant scheduler, used by over 60% of the world’s top clusters. When researchers submit a simulation, the scheduler allocates the right number of nodes, launches the job, and frees those resources when it finishes.
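The allocate-launch-free cycle can be sketched as a toy scheduler. This is a deliberately minimal model of the idea, not how Slurm is actually implemented (real schedulers track priorities, time limits, and jobs that run for hours):

```python
from collections import deque

class ToyScheduler:
    """Minimal sketch of a cluster scheduler's core loop:
    allocate nodes to a job, run it, free the nodes for the next job."""

    def __init__(self, total_nodes):
        self.free_nodes = total_nodes
        self.queue = deque()   # (job_name, nodes_needed), first come first served
        self.finished = []

    def submit(self, name, nodes_needed):
        self.queue.append((name, nodes_needed))
        self._dispatch()

    def _dispatch(self):
        # Launch queued jobs in order while enough nodes are free.
        while self.queue and self.queue[0][1] <= self.free_nodes:
            name, n = self.queue.popleft()
            self.free_nodes -= n          # allocate
            self._run(name, n)

    def _run(self, name, n):
        self.finished.append(name)        # (a real job would run here)
        self.free_nodes += n              # free resources on completion
        self._dispatch()                  # let waiting jobs proceed

sched = ToyScheduler(total_nodes=64)
sched.submit("climate_model", 48)   # takes most of the machine
sched.submit("docking_screen", 32)  # waits until nodes are free
print(sched.finished)
```

The job names and the 64-node machine are invented for the example; the point is the bookkeeping, where the second job cannot start until the first releases enough nodes.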
Writing software that runs efficiently across thousands of processors is genuinely difficult. Code for a single computer is straightforward, but code that distributes work across 1,000 machines simultaneously requires careful coordination of timing, data movement, and error handling. This is one reason supercomputing remains a specialized field even as the hardware has become more standardized.
Power and Cooling
Running a supercomputer takes staggering amounts of electricity. Frontier, the exascale machine at Oak Ridge, can demand up to 30 megawatts of power. To put that in perspective, one megawatt can power roughly 800 homes, and at Oak Ridge’s local electricity rates, each megawatt costs about $1 million per year. Early projections estimated that a basic exascale system would need 150 megawatts, so engineers spent years driving that number down to make exascale computing financially viable.
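Putting the figures in that paragraph together is simple arithmetic, sketched here using the text's own rough numbers:

```python
HOMES_PER_MW = 800             # rough equivalence cited in the text
COST_PER_MW_YEAR = 1_000_000   # dollars/year at Oak Ridge's local rates, per the text

def power_footprint(megawatts):
    """Household equivalent and annual electricity cost for a given power draw."""
    return megawatts * HOMES_PER_MW, megawatts * COST_PER_MW_YEAR

homes, dollars = power_footprint(30)    # Frontier's peak demand
print(homes, dollars)                   # 24,000 homes, $30,000,000/year

homes, dollars = power_footprint(150)   # the early exascale projection
print(homes, dollars)                   # 120,000 homes, $150,000,000/year
```

At the projected 150 megawatts, electricity alone would have cost on the order of $150 million a year, which is why driving the power budget down was essential to making exascale viable.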
All that electricity becomes heat, and removing it is an engineering problem in its own right. Older supercomputers used loud fans and air conditioning. Today’s top machines use liquid cooling, pumping water directly through the compute nodes. Frontier is completely fan-free, relying on liquid cooling with water temperatures around 90 degrees Fahrenheit, a dramatic shift from earlier systems that required water chilled to 42 degrees. Warmer cooling water is actually more efficient because it takes less energy to produce, and it can reject heat to the outside air more easily.
New data centers built for high-performance computing and AI workloads are being designed with capacities of 100 to 1,000 megawatts, enough to power 80,000 to 800,000 homes. The capital expenditures for data center infrastructure globally are projected to reach trillions of dollars by 2030, driven largely by demand for AI processing.
Supercomputing and AI Training
Training large AI models, including the large language models behind tools like ChatGPT, is fundamentally a supercomputing problem. These models learn by processing enormous datasets across thousands of GPUs working in parallel, exactly the kind of coordinated computation supercomputers are built for. The infrastructure is similar: clusters of GPU-equipped nodes connected by high-speed networks, running parallel code that distributes the training workload.
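The coordinated pattern behind distributed training, where each worker computes on its own slice of the data and then gradients are averaged across all workers, can be sketched in miniature. This toy fits a one-parameter model with plain gradient descent; the shard/all-reduce structure mirrors real GPU-cluster training, but the model, data, and function names are invented for illustration:

```python
def local_gradient(shard, w):
    """Each 'GPU' computes the gradient of mean squared error
    for the model y = w * x on its own shard of the data."""
    return sum(2 * (w * x - y) * x for x, y in shard) / len(shard)

def all_reduce_mean(grads):
    """The communication step: average gradients across all workers,
    as collective operations do on a real GPU cluster."""
    return sum(grads) / len(grads)

# Toy dataset following y = 3x, split round-robin across 4 workers:
data = [(x, 3.0 * x) for x in range(1, 9)]
shards = [data[i::4] for i in range(4)]

w = 0.0
for _ in range(200):  # synchronized training steps
    grads = [local_gradient(s, w) for s in shards]
    w -= 0.01 * all_reduce_mean(grads)

print(round(w, 3))  # converges near the true slope, 3.0
```

Every step ends with a synchronization point where all workers exchange gradients, which is why the same network-fabric and coordination problems that define scientific supercomputing also govern AI training at scale.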
Interestingly, the AI models themselves struggle with the kind of programming supercomputers require. A large language model can write code for a single computer without much trouble, but when asked to write parallel code that coordinates 1,000 machines simultaneously, the results are often broken or confused. Researchers at the University of Maryland are working on bridging this gap, using AI to help write better parallel programs, which could make supercomputers more accessible to scientists who aren’t parallel programming experts.
Quantum-Classical Hybrid Systems
The next frontier in supercomputing involves integrating quantum processors alongside traditional hardware. Rather than replacing classical supercomputers, quantum processing units are being developed as specialized accelerators that handle specific parts of a problem conventional chips struggle with. IBM and partners at Oak Ridge National Laboratory and the Japanese research institute RIKEN have already demonstrated systems where classical processors, GPUs, and quantum processors work together on the same computation.
In these hybrid setups, each type of hardware does what it’s best at. Quantum processors handle calculations involving quantum circuits that would require impossibly large amounts of memory on a traditional chip. Classical processors and GPUs take over for parts of the problem that involve simpler, highly parallel operations or general coordination tasks. GPUs also help clean up errors from the quantum processor, which remains inherently noisy with current technology. IBM has stated a goal of delivering a system capable of fault-tolerant quantum computation by the end of the decade, with classical and GPU hardware integrated directly into the system for error correction.