GPU power refers to the rate at which a graphics card draws electrical energy during operation, measured in watts. A modern high-end GPU can draw anywhere from 200 to 450 watts under load, making it the single most power-hungry component in most desktop computers. Understanding GPU power matters for choosing the right power supply, keeping temperatures manageable, and knowing what performance to expect from a given card.
GPU Power, TGP, and TBP: What the Numbers Mean
GPU manufacturers use several overlapping terms to describe power consumption, and they don’t all measure the same thing. The three you’ll encounter most often are GPU Power, Total Graphics Power (TGP), and Total Board Power (TBP).
GPU Power is the narrowest measurement. It covers only the power delivered to the graphics processor chip itself to run computations. TGP is a step broader: it includes the GPU chip’s power plus the power consumed by the graphics memory and any electrical losses during power conversion. TGP is the standard metric for laptop GPUs, and it’s typically listed as a range rather than a single number. For example, Intel’s Arc A570M has a TGP range of 75 to 95 watts, meaning laptop manufacturers can configure it to run at different power levels depending on cooling capacity.
TBP is the broadest measurement and the one most relevant if you’re building a desktop PC. It represents the total power draw of the entire graphics card, including the GPU chip, memory, fans, LED lighting, and all supporting circuitry. When you see a spec sheet listing a desktop GPU at “300W,” that’s almost always TBP. The AMD Radeon RX 7900 XT, for instance, carries a TBP rating of 300 watts. NVIDIA’s RTX 4090 is rated at 450 watts.
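To make the relationship concrete, here is a minimal sketch of how the three metrics nest; the component wattages below are invented for illustration, not taken from any datasheet.

```python
# Illustrative breakdown (all wattages are assumptions, not datasheet values)
# showing how the three metrics nest: GPU Power < TGP < TBP.
gpu_core_power = 220   # watts delivered to the GPU chip itself ("GPU Power")
memory_power = 40      # graphics memory
conversion_loss = 15   # electrical losses during power conversion
board_overhead = 25    # fans, LED lighting, supporting circuitry

tgp = gpu_core_power + memory_power + conversion_loss  # Total Graphics Power
tbp = tgp + board_overhead                             # Total Board Power
print(f"GPU Power: {gpu_core_power} W, TGP: {tgp} W, TBP: {tbp} W")
```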
How Power Gets to Your GPU
Your power supply unit delivers 12 volts to the graphics card, but the GPU’s processor cores operate at much lower voltages, typically around 0.7 to 1.1 volts. Bridging that gap is the job of the voltage regulator modules (VRMs) built onto the graphics card itself. These small circuits step the voltage down and deliver extremely high currents to the chip. A GPU drawing 300 watts at under one volt needs hundreds of amps flowing through a very small space, which is why high-end cards have elaborate VRM designs with many “phases,” each sharing a portion of the total current load.
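The arithmetic behind those currents is simply I = P / V. Here is a rough sketch; the core voltage and phase count are assumptions, since both vary from card to card.

```python
# Ohm's-law arithmetic behind VRM design (core voltage and phase count assumed).
board_power = 300.0    # watts delivered to the GPU core
core_voltage = 0.9     # volts, within the typical 0.7-1.1 V range
input_voltage = 12.0   # volts on the PSU's 12 V rail

current_at_core = board_power / core_voltage    # ~333 A into the chip
current_from_psu = board_power / input_voltage  # ~25 A drawn from the PSU
phases = 16                                     # assumed VRM phase count
per_phase = current_at_core / phases            # ~21 A handled by each phase
print(f"{current_at_core:.0f} A at the core, {per_phase:.0f} A per phase")
```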
The physical connection between your power supply and the graphics card has evolved to keep pace with rising power demands. Older 6-pin PCIe power connectors max out at 75 watts. The 8-pin connector handles up to 150 watts. For today’s flagship cards, the 12VHPWR connector (also called 12+4 pin), introduced with the PCIe 5.0 and ATX 3.0 specifications and adopted by NVIDIA, supports up to 600 watts through a single cable. The PCIe slot on the motherboard itself contributes an additional 75 watts regardless of which cable connectors are used.
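Those ratings combine additively, which makes it easy to estimate what a given connector setup can deliver. A short sketch using the spec maximums; the combinations shown are hypothetical examples.

```python
# Rated maximums from the PCIe and 12VHPWR specs; combinations are examples.
CONNECTOR_WATTS = {"slot": 75, "6pin": 75, "8pin": 150, "12vhpwr": 600}

def max_delivery(*cables: str) -> int:
    """Motherboard slot plus the rated capacity of each attached cable."""
    return CONNECTOR_WATTS["slot"] + sum(CONNECTOR_WATTS[c] for c in cables)

print(max_delivery("8pin", "8pin"))  # 375 W: a typical high-end card
print(max_delivery("12vhpwr"))       # 675 W: a flagship card
```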
Why the Rated Number Isn’t the Whole Story
TBP ratings describe power consumption during a “typical” workload, not absolute maximum draw. In practice, GPUs experience brief transient power spikes that can significantly exceed their rated TBP. These spikes last only milliseconds but can trip overcurrent protections in a power supply that’s sized too close to the edge. This is one reason GPU manufacturers recommend power supplies that seem oversized relative to the card’s rated wattage.
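A back-of-the-envelope sketch shows why this matters; the spike multiplier below is an assumption for illustration, since real transients vary by card and power supply design.

```python
# Transient spikes briefly exceed rated TBP (multiplier is an assumption).
tbp = 450                # rated board power in watts
spike_multiplier = 1.8   # momentary draw relative to TBP, assumed
transient_peak = tbp * spike_multiplier  # ~810 W for a few milliseconds

psu_rating = 750         # a PSU sized close to the card's rating
if transient_peak > psu_rating:
    print("A tightly sized PSU may trip its overcurrent protection")
```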
Temperature also plays a role. Most modern GPUs use dynamic frequency scaling, boosting their clock speeds (and power draw) when temperatures allow and throttling back when they don’t. A card in a well-ventilated case with good airflow will sustain higher clocks and draw more power than the same card stuffed into a cramped enclosure. Overclocking pushes power draw further still, sometimes 10 to 20 percent above stock TBP.
Choosing the Right Power Supply
Start by adding up the power draw of your major components: GPU, CPU, storage drives, fans, and any other accessories. The GPU will typically account for 50 to 70 percent of the total in a gaming system. Once you have a rough total, Seasonic recommends adding 20 to 30 percent on top as a buffer. This headroom serves three purposes: it absorbs transient spikes without tripping protections, it keeps the PSU operating in its most efficient range (around 50 to 75 percent of rated capacity), and it accounts for capacitor aging, which gradually reduces a power supply’s effective output over the life of the unit.
As a bare minimum for modern builds, plan for at least 10 to 20 percent reserve above your calculated peak consumption. If you’re considering future upgrades or overclocking, aim for the full 20 to 30 percent. For a system with a 300-watt GPU and a 125-watt CPU, your base draw sits around 500 watts once you include other components. A 700-watt PSU would cover the minimum, but an 850-watt unit gives comfortable headroom.
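The same sizing arithmetic in sketch form; the wattage for the remaining components is an assumption to round out the example.

```python
# PSU sizing following the 20-30 percent buffer guideline described above.
components = {
    "gpu_tbp": 300,  # graphics card's rated TBP
    "cpu": 125,      # CPU package power
    "other": 75,     # storage, fans, motherboard, accessories (assumed)
}
base_draw = sum(components.values())  # 500 W

minimum_psu = base_draw * 1.20        # 600 W: bare-minimum reserve
comfortable_psu = base_draw * 1.30    # 650 W: full recommended buffer
print(f"Base draw {base_draw} W; shop for a PSU of at least "
      f"{minimum_psu:.0f} W, ideally {comfortable_psu:.0f} W or more")
```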
Laptop GPU Power Works Differently
In a laptop, the same GPU model can perform very differently depending on how much power the manufacturer allocates to it. A laptop GPU’s TGP is configured within a defined range, and two laptops using the same chip may have noticeably different performance if one runs it at the top of the range and the other at the bottom. A 20-watt difference in TGP can translate to a meaningful gap in frame rates.
This is why checking just the GPU model name isn’t enough when shopping for a gaming laptop. Look for the specific TGP or “maximum graphics power” listed in the specs. Thinner, lighter laptops tend to use lower TGP settings because their cooling systems can’t dissipate as much heat, while thicker gaming laptops push closer to the upper limit. The tradeoff is always the same: more power means more performance, but also more heat and shorter battery life.
Power Draw and Real-World Performance
More watts don’t automatically mean a faster GPU. Architectural efficiency varies enormously between generations and between AMD and NVIDIA designs. A newer GPU can often match or beat an older one while drawing significantly less power, because each generation of chip manufacturing allows more calculations per watt. The relationship between power and performance follows a curve of diminishing returns: the last 20 percent of clock speed might require 50 percent more power to sustain.
This efficiency curve is why “performance per watt” has become a key metric in GPU reviews. It captures how effectively a card converts electrical energy into useful work. For most users, a mid-range card operating at a moderate power level delivers the best balance of speed, noise, heat, and electricity cost. Flagship cards push into the steep end of the power curve, where you’re paying a real premium in watts (and in your electricity bill) for the last increment of performance.
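As a simple illustration of the metric, consider two hypothetical cards; every figure below is invented.

```python
# Performance-per-watt comparison; all numbers are invented for illustration.
cards = {
    "older_flagship": {"fps": 100, "watts": 450},
    "newer_midrange": {"fps": 95, "watts": 220},
}
for name, card in cards.items():
    print(f"{name}: {card['fps'] / card['watts']:.2f} fps per watt")
# The newer mid-range card does roughly twice the work per watt
# despite slightly lower absolute performance.
```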

