Supercomputers are used by a surprisingly wide range of organizations, from government weather agencies and nuclear weapons labs to pharmaceutical companies, oil explorers, Wall Street firms, and the tech giants building AI. What ties them together is a need to solve problems too large or too complex for ordinary computers: simulating the physics of a nuclear warhead, predicting a hurricane’s path five days out, or training an AI model with a trillion parameters.
Weather Agencies and Climate Scientists
National weather services are among the longest-standing and most visible users of supercomputers. NOAA operates twin supercomputers named Dogwood and Cactus, located in Virginia and Arizona, each running at 14.5 petaflops (roughly 14.5 quadrillion calculations per second). Combined with research supercomputers in West Virginia, Tennessee, Mississippi, and Colorado, NOAA’s total supercomputing capacity reaches 49 petaflops. All of that power feeds global weather models that divide the atmosphere into a grid and simulate how temperature, pressure, wind, and moisture will evolve over time. The U.S. Global Forecast System is being upgraded to a horizontal resolution of 9 kilometers, down from 13, meaning the model can now distinguish weather features roughly the size of a small city. The European Centre for Medium-Range Weather Forecasts runs a comparable operation and consistently ranks among the most accurate forecast producers in the world.
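Why that seemingly modest resolution bump is expensive: shrinking the grid spacing multiplies the number of cells in each horizontal direction, and stability constraints force a shorter time step on top of that. Here is a back-of-the-envelope sketch in Python; the cell counts and the cubic cost scaling are rough illustrative estimates, not NOAA’s actual figures:

```python
EARTH_SURFACE_KM2 = 510_072_000  # approximate surface area of Earth

def horizontal_cells(resolution_km: float) -> int:
    """Approximate number of grid columns needed to tile the globe."""
    return round(EARTH_SURFACE_KM2 / resolution_km**2)

old, new = horizontal_cells(13), horizontal_cells(9)
print(f"13 km grid: ~{old:,} columns")
print(f" 9 km grid: ~{new:,} columns")

# Finer grids also need shorter time steps to stay numerically stable
# (the CFL condition), so compute cost grows roughly as the cube of
# the resolution ratio.
print(f"rough cost multiplier: ~{(13 / 9) ** 3:.1f}x")
```

By this estimate, the 13-to-9 kilometer upgrade roughly triples the arithmetic before counting extra vertical levels or richer model physics.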
Climate research pushes the same hardware even harder. Where a five-day weather forecast might take hours to compute, a century-long climate projection can run for weeks, simulating ocean currents, ice sheet dynamics, carbon cycles, and atmospheric chemistry all interacting together.
National Security and Nuclear Stockpile Programs
The U.S. stopped underground nuclear testing in 1992. Since then, the National Nuclear Security Administration has relied on supercomputers to certify that the country’s nuclear weapons still work as intended. Its Advanced Simulation and Computing (ASC) program, run across the national laboratories with Lawrence Livermore as a major hub, is the primary source of high-performance computing for this mission. Scientists run high-fidelity, three-dimensional, physics-based simulations to model everything from the aging of warhead materials to the performance of a weapon under extreme conditions. These simulations replace the data that underground tests once provided and are essential for life-extension programs that keep decades-old warheads safe and reliable without ever detonating one.
The computational demands are enormous. A single simulation might track millions of interacting physical processes across microseconds, requiring some of the most powerful machines on the planet. Los Alamos, Sandia, and Lawrence Livermore national labs have historically housed many of the world’s top-ranked supercomputers for exactly this reason.
Tech Companies Training AI Models
The recent explosion in artificial intelligence has turned tech companies into some of the world’s largest supercomputer operators. Microsoft has built what it calls the world’s most powerful AI datacenter, where hundreds of thousands of NVIDIA GPUs are interconnected to function as a single massive supercomputer. At the rack level, GPUs communicate at terabytes per second. Across racks, networking fabrics deliver 800 gigabits per second in an architecture designed so every GPU can talk to every other GPU at full speed without congestion. The result: tens of thousands of accelerators training a single AI model in parallel.
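To put that parallelism in perspective, a widely used rule of thumb puts the cost of training a transformer at roughly 6 × parameters × training tokens floating-point operations. A hedged sketch follows; the parameter count, token count, and per-GPU throughput are illustrative assumptions, not figures for any particular model or datacenter:

```python
# Back-of-the-envelope training time using the common ~6 * N * D
# FLOPs rule of thumb for transformer training. All inputs assumed.
params = 1e12                  # 1 trillion parameters
tokens = 10e12                 # 10 trillion training tokens
total_flops = 6 * params * tokens   # ~6e25 floating-point operations

single_gpu = 1e15              # ~1 petaflop/s sustained per accelerator
cluster = 50_000 * single_gpu  # tens of thousands of GPUs in parallel

seconds_per_year = 3600 * 24 * 365
for name, rate in [("one GPU", single_gpu), ("50,000-GPU cluster", cluster)]:
    years = total_flops / rate / seconds_per_year
    print(f"{name}: ~{years:,.2f} years")
```

Under these assumptions, a single accelerator would grind away for nearly two millennia, while the cluster finishes in about two weeks.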
Training a large language model with a trillion or more parameters requires this kind of scale. The model’s “learning” process involves repeatedly adjusting those trillion-plus numerical weights across enormous datasets, a task that would take a single computer centuries. Meta, Google, and other companies operate similar GPU clusters for their own AI research. Eli Lilly recently announced it is building the most powerful supercomputer owned by a pharmaceutical company, in collaboration with NVIDIA, specifically to train AI models on millions of drug experiments.
Pharmaceutical and Biomedical Research
Drug discovery has become one of the fastest-growing areas of supercomputer use. Developing a new medicine traditionally takes over a decade and billions of dollars, partly because researchers must test enormous numbers of molecular candidates against biological targets. Supercomputers compress that timeline by simulating how drug molecules interact with proteins, how diseases progress at a cellular level, and which candidates are most likely to succeed before they ever enter a lab.
Lilly’s new supercomputer, for example, will let scientists train AI models on millions of experimental results to predict which compounds are worth pursuing. Advanced medical imaging processed on these systems gives researchers a clearer picture of how diseases change over time, helping develop biomarkers for more personalized treatments. Other pharmaceutical companies use similar computing power for protein folding simulations, where understanding the three-dimensional shape of a protein can reveal exactly where a drug molecule needs to attach to be effective.
Oil, Gas, and Energy Exploration
Finding oil and gas deposits miles below the Earth’s surface requires processing staggering volumes of seismic data. Ships or ground-based equipment send sound waves deep underground, then sensors record the reflections. Turning those echoes into a detailed 3D map of subsurface rock formations is computationally brutal. BP recently built its first production-scale GPU cluster for exactly this purpose. Testing on real datasets from the Thunder Horse and Herschel fields in the Gulf of Mexico showed a 90-fold speedup and 13-fold improvement in energy efficiency compared to traditional computing hardware, while maintaining accuracy with errors below 1%.
These imaging techniques, known as reverse time migration and full-waveform inversion, essentially replay the physics of sound waves traveling through rock to reconstruct what lies beneath. The faster and more accurately companies can process this data, the better they can target drilling locations and avoid costly dry wells.
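At the core of both techniques is a numerical wave-equation solver stepped through time. Below is a minimal sketch of 1D acoustic wave propagation with a second-order finite-difference stencil; the velocities and grid sizes are made up, and production seismic codes solve the same physics in 3D over billions of cells:

```python
import numpy as np

# Toy 1D acoustic wave equation, u_tt = c^2 * u_xx, advanced with a
# standard second-order leapfrog finite-difference scheme.
nx, nt = 500, 1000          # grid points, time steps
dx, dt = 10.0, 0.001        # 10 m spacing, 1 ms steps
c = np.full(nx, 3000.0)     # sound speed in rock, ~3000 m/s
c[nx // 2:] = 4500.0        # a faster layer at depth (the "reflector")

u_prev = np.zeros(nx)
u_curr = np.zeros(nx)
u_curr[50] = 1.0            # impulsive source near the surface

r2 = (c * dt / dx) ** 2     # squared Courant number; must stay <= 1
for _ in range(nt):
    u_next = np.zeros(nx)
    u_next[1:-1] = (2 * u_curr[1:-1] - u_prev[1:-1]
                    + r2[1:-1] * (u_curr[2:] - 2 * u_curr[1:-1] + u_curr[:-2]))
    u_prev, u_curr = u_curr, u_next

print("final wavefield energy:", float((u_curr ** 2).sum()))
```

Reverse time migration, roughly speaking, runs a stencil like this forward from the source and backward from the recorded data, then correlates the two wavefields to locate reflectors.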
Financial Services and Risk Modeling
Major banks and investment firms use supercomputing-class hardware to manage financial risk. The core technique involves running millions of simulated future scenarios to evaluate how a portfolio or institution would perform under different economic conditions. Monte Carlo simulations, which generate vast numbers of random possible outcomes to estimate probabilities, are a staple of this work. One recent study evaluated risk models on simulated and historical market data, subjecting 200,000 portfolios to Monte Carlo simulations and stress testing across multiple economic scenarios.
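In miniature, the core computation looks like the sketch below: a Monte Carlo estimate of one-day value at risk for a single toy portfolio. The weights, mean returns, and covariances are invented for illustration; production systems repeat this across hundreds of thousands of portfolios with far richer market models:

```python
import numpy as np

rng = np.random.default_rng(42)

# Assumed daily return statistics for a toy three-asset portfolio.
weights = np.array([0.5, 0.3, 0.2])
mean = np.array([0.0004, 0.0003, 0.0005])      # mean daily returns
cov = np.array([[1.0e-4, 2.0e-5, 1.0e-5],
                [2.0e-5, 8.0e-5, 1.5e-5],
                [1.0e-5, 1.5e-5, 1.2e-4]])     # return covariance

# Draw one million random market scenarios and price the portfolio.
n_scenarios = 1_000_000
asset_returns = rng.multivariate_normal(mean, cov, size=n_scenarios)
portfolio_returns = asset_returns @ weights

# 99% one-day value at risk: the loss exceeded in only 1% of scenarios.
var_99 = -np.percentile(portfolio_returns, 1)
print(f"99% one-day VaR: {var_99:.2%} of portfolio value")
```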
Beyond risk management, high-frequency trading firms rely on extreme computing power and ultra-low-latency networks to execute trades in microseconds. Fraud detection systems at large banks also process millions of transactions in near real time, flagging suspicious patterns using models too complex for standard servers.
Particle Physics and Academic Research
CERN’s Large Hadron Collider generates roughly 200 petabytes of data every year of operation. No single institution could store or analyze that volume, so CERN coordinates the Worldwide LHC Computing Grid, a collaboration of around 160 computing centers in more than 40 countries. Universities and national labs contribute processing power to sift through collision data, searching for rare particle interactions that might reveal new physics.
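The case for distributing the work follows from rough arithmetic; the per-site figures below are simple averages, for illustration only:

```python
# 200 PB of LHC data per year spread across ~160 computing centers.
pb_per_year = 200
sites = 160

print(f"average inflow: ~{pb_per_year * 1000 / 365:,.0f} TB per day")
print(f"average share per site: ~{pb_per_year / sites:.2f} PB per year")
```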
Academic supercomputer use extends far beyond particle physics. Universities run simulations of galaxy formation, earthquake propagation, protein dynamics, fluid mechanics, and hundreds of other research areas. Many countries fund national supercomputing centers that provide free or subsidized access to researchers, making it possible for a graduate student studying turbulence or a biologist modeling viral evolution to tap into world-class computing power.
How Smaller Organizations Get Access
You don’t need to build your own supercomputer anymore. Cloud providers like AWS, Microsoft Azure, Google Cloud, and Oracle rent high-performance computing resources on demand. A basic virtual machine with 4 CPUs and 16 GB of memory costs roughly $100 to $134 per month depending on the provider. For serious computational work, a high-performance GPU instance with 8 GPUs, 96 CPUs, and over a terabyte of memory runs between $24,000 and $30,000 per month. Committing to a one- or three-year contract can reduce those prices by 32% to 55%.
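A quick sketch of what those commitment discounts mean in dollar terms, using the midpoint of the GPU-instance range quoted above:

```python
# Monthly cost of an 8-GPU instance, on demand vs. committed pricing,
# using the ranges quoted in this section.
on_demand = 27_000                 # midpoint of the $24k-$30k/month range
for discount in (0.32, 0.55):
    committed = on_demand * (1 - discount)
    print(f"{discount:.0%} discount: ~${committed:,.0f}/month, "
          f"saving ~${(on_demand - committed) * 12:,.0f}/year")
```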
This pay-as-you-go model has opened supercomputing to startups, mid-sized companies, and research groups that would never have the budget to buy and maintain their own hardware. A biotech startup can rent a GPU cluster for a week to run molecular simulations, then shut it down. An engineering firm can spin up thousands of cores to test aerodynamic designs, paying only for the hours used. The barrier to entry has dropped dramatically, which means the list of who uses supercomputers is growing every year.

