Peak load is the highest level of electricity demand drawn from a power system during a given period, whether that’s an hour, a day, a month, or a year. It represents the moments when the most people are using the most power at the same time, and it drives some of the biggest decisions in how utilities build, manage, and price their electricity. The concept also applies to computing, where it describes the maximum demand placed on servers during traffic spikes.
How Peak Load Works on the Grid
Every electrical grid has a baseline level of demand that stays relatively constant: the power needed to keep hospitals, streetlights, refrigerators, and industrial processes running around the clock. This is called base load. Peak load sits on top of that baseline, representing the surge in consumption that happens when millions of people turn on air conditioners on a hot afternoon or flip on lights, ovens, and heaters in the early evening.
The electricity industry generally defines on-peak hours as 7:00 a.m. to 11:00 p.m. on weekdays. Off-peak hours run from 11:00 p.m. to 7:00 a.m. on weekdays and all day on weekends and holidays. The exact windows vary by utility and region, but the pattern holds: demand climbs when people wake up, stays elevated through the workday, spikes again in the evening, and drops overnight.
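That weekday window can be expressed as a simple rule. The sketch below uses the 7:00 a.m. to 11:00 p.m. convention described above; the exact hours are an assumption, since real windows vary by utility.

```python
from datetime import datetime

def is_on_peak(ts: datetime) -> bool:
    """Return True if the timestamp falls in on-peak hours
    (7 a.m.-11 p.m. weekdays, per the common industry convention)."""
    if ts.weekday() >= 5:       # Saturday (5) and Sunday (6) are off-peak
        return False
    return 7 <= ts.hour < 23    # 7:00 a.m. up to (not including) 11:00 p.m.

print(is_on_peak(datetime(2024, 7, 15, 18, 30)))  # Monday evening -> True
print(is_on_peak(datetime(2024, 7, 13, 12, 0)))   # Saturday noon -> False
```

A real implementation would also check the utility's holiday calendar, which this sketch omits.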
During peak hours, utilities must supply power at a significantly higher rate than normal. To do this, they rely on power plants that can start up and ramp quickly, often natural gas turbines that sit idle most of the year and only fire up when demand surges. Base load plants, by contrast, run continuously for extended periods. The distinction matters because those quick-start peak plants are more expensive to operate per unit of electricity, and that cost gets passed along to customers.
How Peak Load Is Measured
Electrical power is measured in watts. Small devices are rated in watts directly, larger appliances in kilowatts (1,000 watts), and the generation capacity of power plants in megawatts (MW) or gigawatts (GW). Peak load for a household might be a few kilowatts; for a city, it could be thousands of megawatts.
Utilities track peak demand in real time using electronic smart meters that wirelessly transmit power usage data. For commercial and industrial customers, the utility records the highest average demand over short intervals, typically 15 or 30 minutes, during a billing cycle. That single highest reading becomes the basis for demand charges on the bill.
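The interval calculation is straightforward to sketch. Here, hypothetical one-minute meter samples are averaged over 15-minute windows, and the highest window becomes the billing demand; the readings are made up for illustration.

```python
# Two hours of hypothetical per-minute meter samples, in kW.
minute_readings_kw = [50.0] * 60 + [120.0] * 20 + [60.0] * 40

INTERVAL = 15  # minutes per demand interval

# Average each 15-minute window, then take the highest window.
interval_averages = [
    sum(minute_readings_kw[i:i + INTERVAL]) / INTERVAL
    for i in range(0, len(minute_readings_kw), INTERVAL)
]
billing_demand_kw = max(interval_averages)
print(f"Billing demand: {billing_demand_kw:.1f} kW")  # 120.0 kW
```

Note that a brief spike shorter than the interval gets diluted by averaging: the 120 kW draw only counts in full because it spans an entire 15-minute window.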
Why Peak Load Costs You More
If you pay a flat residential rate, peak load still affects you indirectly because the utility’s cost of maintaining extra generation capacity gets built into everyone’s rates. But if you’re on a commercial rate or a time-of-use residential plan, the impact is direct and sometimes dramatic.
Commercial demand charges range from nothing to more than $6 per kilowatt, depending on the utility. A business that hits 800 kW of demand in December at $5 per kW pays a $4,000 demand charge for that month alone. Many utilities also apply what’s called a ratchet charge: the highest monthly demand you hit in a year becomes your benchmark for the next 11 months. So if that same business drops to just 150 kW the following June, it doesn’t pay $750. Instead, it pays $2,400, because the minimum is set at 60% of the December peak (480 kW times $5). A single spike in demand can inflate your utility bill for an entire year.
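The arithmetic in that example can be checked directly. This sketch reproduces the $5/kW rate and 60% ratchet described above; both numbers are taken from the example, not from any particular tariff.

```python
RATE_PER_KW = 5.00   # demand charge, $ per kW
RATCHET = 0.60       # billed minimum: 60% of the 12-month peak

def demand_charge(month_peak_kw: float, rolling_max_kw: float) -> float:
    """Bill the greater of this month's peak or the ratchet minimum."""
    billed_kw = max(month_peak_kw, RATCHET * rolling_max_kw)
    return billed_kw * RATE_PER_KW

print(demand_charge(800, 800))  # December spike: 4000.0
print(demand_charge(150, 800))  # June, ratcheted to 480 kW: 2400.0
```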
Some utilities now offer split pricing between peak and off-peak demand. One Northwest utility, for example, charges $4.50 per kW during peak periods but only 63 cents per kW during off-peak hours for customers above 50 kW. That sevenfold price difference creates a strong incentive to shift energy use away from peak times.
Peak Shaving and Load Shifting
Because peak demand is so expensive, a whole set of strategies has emerged to flatten those spikes. The umbrella term is “peak shaving,” and it works in a few ways.
- Battery storage: A battery system charges during off-peak hours when electricity is cheap, then discharges during high-demand periods to reduce the load drawn from the grid. This works at every scale, from a home battery paired with solar panels to large centralized systems serving entire neighborhoods.
- Load shifting: Instead of reducing total energy use, you move consumption to different hours. Running industrial equipment overnight, pre-cooling a building before afternoon peaks, or charging electric vehicles late at night are all forms of load shifting. Research suggests flexible scheduling can shift 10% to 30% of power demand away from peak windows.
- Demand-side management: Utilities offer programs that let them briefly cycle off water heaters, pool pumps, or air conditioners during critical peaks in exchange for bill credits. The interruptions are short enough that most people don’t notice.
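The battery strategy above can be sketched as a toy dispatch loop: discharge whenever grid draw would exceed a target cap, and recharge with spare headroom when demand is below it. All numbers are illustrative, not sized for a real installation.

```python
def peak_shave(load_kw, cap_kw, battery_kwh, max_rate_kw, hours_per_step=1.0):
    """Return grid draw per step after the battery shaves the peaks."""
    soc = battery_kwh  # state of charge, kWh; start fully charged
    grid = []
    for load in load_kw:
        if load > cap_kw:   # discharge to hold grid draw at the cap
            discharge = min(load - cap_kw, max_rate_kw, soc / hours_per_step)
            soc -= discharge * hours_per_step
            grid.append(load - discharge)
        else:               # recharge with the headroom below the cap
            charge = min(cap_kw - load, max_rate_kw,
                         (battery_kwh - soc) / hours_per_step)
            soc += charge * hours_per_step
            grid.append(load + charge)
    return grid

demand = [40, 45, 90, 110, 95, 50]  # hourly load, kW
print(peak_shave(demand, cap_kw=80, battery_kwh=60, max_rate_kw=40))
# -> [40, 45, 80, 80, 80, 80]: the 110 kW spike never reaches the grid
```

With a 110 kW raw peak held to an 80 kW cap, the demand charge in the earlier example would be billed on 80 kW instead of 110, which is the entire economic case for the battery.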
Combining solar panels with battery storage has emerged as a particularly effective approach. Solar generates the most power during midday, but demand often peaks in the evening. Batteries bridge that gap, storing midday solar output and releasing it when the grid needs it most.
The Duck Curve Problem
As solar energy has grown, it has reshaped when peak load stress actually occurs. The “duck curve,” named for the shape it creates on a demand chart, illustrates the problem. During the middle of the day, solar panels flood the grid with cheap power, pushing net demand down to a trough. Then, as the sun sets and solar output drops, demand surges in the evening when people come home and turn everything on.
This creates a steep ramp-up that forces conventional power plants to increase output very quickly, sometimes over just two or three hours. California, where solar contributed nearly 40% of the state’s electricity generation on one day in March, has already felt this pressure acutely. With installed solar capacity expected to triple nationwide by 2030, the duck curve pattern is spreading well beyond California. Managing that evening ramp is now one of the central challenges of modern grid planning.
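The duck shape falls out of simple subtraction: net load is demand minus solar output, and the grid planner's problem is the steepest hourly increase in that net load. The hourly figures below are invented to show the shape, not real system data.

```python
# Hourly demand and solar output in GW, hours 0-23 (illustrative).
demand = [22, 21, 20, 20, 21, 23, 26, 28, 29, 29, 28, 28,
          28, 28, 28, 29, 30, 33, 36, 37, 35, 31, 27, 24]
solar  = [0, 0, 0, 0, 0, 0, 1, 3, 6, 9, 11, 12,
          12, 11, 9, 6, 3, 1, 0, 0, 0, 0, 0, 0]

# Net load is what conventional plants must actually serve.
net = [d - s for d, s in zip(demand, solar)]

# The evening ramp: the fastest hour-over-hour rise in net load.
ramp = max(net[h + 1] - net[h] for h in range(len(net) - 1))
print(f"Midday trough: {min(net)} GW, evening peak: {max(net)} GW")
print(f"Steepest hourly ramp: {ramp} GW/hour")
```

Even in this toy series, the midday trough (16 GW) to evening peak (37 GW) swing is larger than the swing in raw demand, which is exactly the "belly and neck" of the duck.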
Peak Load in Computing
The concept translates directly to servers and data centers. Average load is the demand on your system during normal daily operations. Peak load is the demand during specific events that cause traffic spikes, and what counts as a spike depends entirely on the application.
For most business software, peak load hits Monday morning when everyone logs in simultaneously and kicks off workflows. For online retailers, it’s seasonal: holiday shopping, Black Friday, flash sales. A news site might spike when a major story breaks. The key metric is whether systems can handle those surges without failing. Users generally tolerate some slowdown during peak events, but applications should degrade gracefully rather than throwing errors.
Capacity planning in computing follows a practical rule: if your servers are consistently running above 80% resource consumption, you need to plan for expansion because you have no headroom left for spikes. Rather than keeping expensive servers idle year-round to handle occasional peaks, many organizations use cloud-based auto-scaling that automatically adds capacity when traffic rises and releases it when traffic falls. This mirrors the energy grid’s approach of keeping quick-start resources available without running them constantly.
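A minimal reactive version of that rule might look like this: add a server when average utilization crosses the 80% threshold, remove one when there is comfortable slack. The scale-down threshold and capacity numbers are illustrative assumptions.

```python
SCALE_UP_AT = 0.80    # the 80% headroom rule from the text
SCALE_DOWN_AT = 0.40  # assumed slack threshold for releasing capacity

def next_server_count(servers: int, total_load: float,
                      per_server_capacity: float) -> int:
    """Decide the server count for the next scaling interval."""
    utilization = total_load / (servers * per_server_capacity)
    if utilization > SCALE_UP_AT:
        return servers + 1
    if utilization < SCALE_DOWN_AT and servers > 1:
        return servers - 1
    return servers

print(next_server_count(4, 3400, 1000))  # 85% utilized -> 5
print(next_server_count(5, 1500, 1000))  # 30% utilized -> 4
```

Keeping the scale-down threshold well below the scale-up threshold prevents the system from oscillating between two sizes, the same reason thermostats use a dead band.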
To plan effectively, engineers collect resource data (processor usage, memory, network traffic, disk activity) on both a normal day and a peak day, then plot user load against resource consumption. The correlation between user count and resource use is typically linear, making it possible to forecast when current infrastructure will hit its limits as the user base grows.
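Given that roughly linear relationship, the forecast is a least-squares fit solved for the load at which utilization hits the 80% threshold. The sampled user counts and CPU fractions below are hypothetical.

```python
users = [100, 200, 400, 800]      # concurrent users sampled
cpu   = [0.12, 0.20, 0.36, 0.68]  # average CPU fraction at each level

# Ordinary least-squares fit: cpu ~ slope * users + intercept.
n = len(users)
mean_u, mean_c = sum(users) / n, sum(cpu) / n
slope = (sum((u - mean_u) * (c - mean_c) for u, c in zip(users, cpu))
         / sum((u - mean_u) ** 2 for u in users))
intercept = mean_c - slope * mean_u

# Solve slope * users + intercept = 0.80 for the user count.
limit_users = (0.80 - intercept) / slope
print(f"~{limit_users:.0f} users before hitting the 80% threshold")
```

The nonzero intercept matters: it captures the fixed overhead the system consumes even with no users, so extrapolating from a single measurement would overestimate the headroom.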