What Is Goodput? How It Differs From Throughput

Goodput is the rate at which useful data arrives at its destination, typically measured in bits per second. Unlike throughput, which counts every bit that crosses the network, goodput only counts the data your application actually wanted. If you download a 100 MB file and it takes 10 seconds, your goodput is 10 MB per second, regardless of how much extra data the network had to shuffle behind the scenes to make that transfer happen.

The term is a portmanteau of “good” and “throughput,” and it captures something throughput alone misses: actual, usable performance.

How Goodput Differs From Throughput

Throughput measures the total volume of data flowing through a network connection. That total includes protocol headers, control messages, and any packets that had to be sent a second (or third) time because they were lost along the way. Goodput strips all of that away and counts only the payload: the bits of your file, video stream, or web page that actually reach the application on the other end.

This distinction matters because high throughput doesn’t always mean high performance. Imagine a connection moving 100 Mbps of total data, but 20% of that is protocol overhead and retransmissions. Your throughput reads 100 Mbps, but your goodput is closer to 80 Mbps. That 80 Mbps figure is the one that reflects your real experience as a user. In congested or lossy networks, the gap between the two numbers can grow dramatically.

What Eats Into Your Goodput

Three main factors reduce goodput below the raw throughput number your connection reports.

  • Protocol overhead. Every packet your computer sends includes headers for TCP, IP, and often additional layers. These headers carry addressing and error-checking information the network needs, but they aren’t part of your actual data. This overhead is always present, even on a perfectly healthy connection.
  • Retransmissions. When packets are lost in transit, the sender has to retransmit them. Those retransmitted bytes count toward throughput but not toward goodput, since they’re just repeating data that should have arrived the first time. In one network study, retransmissions consumed about 3.5% of total bandwidth during a one-hour monitoring window, a modest hit. But in worse conditions, the cost compounds quickly.
  • Congestion control. TCP, the protocol behind most internet traffic, deliberately slows down when it detects packet loss or rising latency. This is a safety mechanism to prevent network collapse, but it directly throttles how fast useful data can flow.
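The first of those factors, header overhead, is easy to quantify. A back-of-the-envelope sketch, assuming a standard 1,500-byte Ethernet MTU with 20-byte IPv4 and 20-byte TCP headers (textbook defaults with no options, not figures from any specific measurement):

```python
# Best-case payload efficiency of a full-sized TCP/IPv4 segment on Ethernet.
MTU = 1500         # bytes per IP packet on standard Ethernet
IP_HEADER = 20     # IPv4 header, no options
TCP_HEADER = 20    # TCP header, no options

payload = MTU - IP_HEADER - TCP_HEADER   # bytes of useful data per packet
efficiency = payload / MTU
print(f"{payload} of {MTU} bytes are payload: {efficiency:.1%}")
# 1460 of 1500 bytes are payload: 97.3%
```

Even this best case loses a few percent to headers before any loss or congestion enters the picture, and applications that send small packets (VoIP, gaming) pay a proportionally larger header tax.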

Why Small Packet Loss Causes Big Problems

Packet loss has an outsized effect on goodput, far larger than the percentage alone would suggest. Testing by the network monitoring firm ThousandEyes found that just 1% packet loss caused a roughly 71% drop in throughput on a symmetric network connection. At 2% packet loss, throughput fell by about 78% compared to the baseline. On asymmetric connections (the type most home internet uses, with different upload and download speeds), the results were even worse: 1% loss caused a 74% drop, and 2% loss caused over 80%.

The reason is TCP’s congestion control behavior. When TCP detects lost packets, it interprets that as a sign of network congestion and aggressively reduces its sending rate. So you’re not just losing the 1% of packets that vanished. You’re also losing speed because TCP is pumping the brakes on everything else. The actual data that reaches your application, your goodput, takes a double hit: fewer packets arrive, and the ones that do arrive more slowly.
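One way to see the shape of this effect is the classic Mathis et al. steady-state model of TCP throughput, rate ≈ (MSS/RTT) · (C/√p), where p is the loss rate and C ≈ 1.22. This model is an outside reference, not part of the ThousandEyes tests, and it ignores timeouts, so treat it as a sketch of the trend rather than a prediction of those exact numbers. The MSS and RTT values below are illustrative assumptions:

```python
import math

def mathis_rate_mbps(loss, mss_bytes=1460, rtt_s=0.05, c=math.sqrt(3 / 2)):
    """Steady-state TCP throughput per the Mathis model: (MSS/RTT) * C/sqrt(p).

    mss_bytes and rtt_s are illustrative assumptions, not measured values.
    """
    rate_bps = (mss_bytes * 8 / rtt_s) * (c / math.sqrt(loss))
    return rate_bps / 1e6

# Throughput falls with the square root of loss: 100x more loss -> 10x slower.
for p in (0.0001, 0.01, 0.02):
    print(f"loss {p:.2%}: ~{mathis_rate_mbps(p):.1f} Mbps")
```

The 1/√p term is the key: multiplying the loss rate by 100 divides the model's throughput by 10, which is why even fractions of a percent of loss are visible in goodput.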

Goodput and Streaming Video

Goodput is the metric that determines whether your video plays smoothly or buffers constantly. A video stream has a fixed bitrate: the amount of data per second needed to keep the picture playing. As long as your TCP goodput meets or exceeds that bitrate, the video plays without interruption. The moment goodput drops below the video’s bitrate, the player’s buffer starts draining faster than it fills, and you get the familiar pause-and-reload of a rebuffering event.

Research from Hong Kong Polytechnic University formalized this relationship: rebuffering frequency is zero when average TCP goodput is at or above the video bitrate, and it climbs as goodput falls below that threshold. Users consistently rate rebuffering as one of the most annoying aspects of video quality. So if you’re watching a 1080p stream that requires 5 Mbps and your connection shows 10 Mbps of throughput but only delivers 4 Mbps of goodput after overhead and retransmissions, you’ll experience stalls despite what your speed test says.
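The 5 Mbps stream receiving only 4 Mbps of goodput, from the example above, can be sketched as a buffer that drains: playback consumes one second of video per second, while the network refills it at only goodput/bitrate seconds per second. The 10-second starting buffer is an illustrative assumption:

```python
# How long until rebuffering when goodput < video bitrate?
bitrate_mbps = 5.0   # what the 1080p stream needs (from the example)
goodput_mbps = 4.0   # what actually arrives (from the example)
buffer_s = 10.0      # seconds of video already buffered (assumed)

refill_rate = goodput_mbps / bitrate_mbps   # seconds of video gained per second
drain_rate = 1.0 - refill_rate              # net buffer loss per playback second

seconds_until_stall = buffer_s / drain_rate
print(f"Stall after ~{seconds_until_stall:.0f} s of playback")  # ~50 s
```

A 20% goodput shortfall does not mean 20% worse video; it means the stream plays for under a minute and then stalls, repeatedly.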

The same principle applies to video calls, online gaming, and any real-time application. The advertised bandwidth of your connection is a ceiling, not a guarantee. Goodput is what you’re actually getting.

How to Calculate Goodput

The formula is straightforward: divide the application payload successfully delivered (headers and retransmitted copies excluded) by the total transfer time in seconds. For a simple file transfer, that reduces to dividing the file size by the time it took to download.

If you transferred 500 MB and the download took 50 seconds, your goodput was 10 MB per second (80 Mbps). If the network also had to retransmit 25 MB of lost packets during that transfer, the throughput was higher (525 MB over the wire), but the goodput stayed at 10 MB/s, because only the 500 MB that made up your file counts as useful data.
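The 500 MB / 25 MB / 50 second example works out as follows (all figures from the paragraph above):

```python
# Goodput vs throughput for the transfer described above.
payload_mb = 500        # the file itself
retransmitted_mb = 25   # lost packets sent a second time
elapsed_s = 50

throughput_mb_s = (payload_mb + retransmitted_mb) / elapsed_s   # all bytes on the wire
goodput_mb_s = payload_mb / elapsed_s                           # useful bytes only

print(f"throughput {throughput_mb_s} MB/s, goodput {goodput_mb_s} MB/s "
      f"({goodput_mb_s * 8:.0f} Mbps)")
# throughput 10.5 MB/s, goodput 10.0 MB/s (80 Mbps)
```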

Measuring Goodput in Practice

The most widely used tool for testing network performance is iperf3, an open-source utility maintained by the Energy Sciences Network. It measures achievable bandwidth over TCP, UDP, and SCTP connections, reporting throughput, packet loss, and related parameters. While iperf3 reports throughput by default, comparing its results with actual file transfer speeds gives you a practical sense of the goodput gap on your connection.

For everyday users, the simplest goodput test is timing a large file download from a reliable server and dividing the file size by the elapsed time. If that number is significantly lower than what your ISP promises or what a speed test shows, the difference is being consumed by overhead, retransmissions, or congestion. Network engineers use more granular tools to isolate exactly where the loss is happening, but the file-transfer method gives you the number that actually reflects your experience.
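That file-transfer method can be scripted in a few lines. A minimal sketch using only Python's standard library (the URL in the comment is a placeholder, not a real test server):

```python
import time
import urllib.request

def measure_goodput(url, chunk_size=64 * 1024):
    """Download url and return (bytes_received, seconds, goodput_mbps).

    Counting bytes as they reach the application excludes headers and
    retransmitted packets by construction -- this measures goodput,
    not wire-level throughput.
    """
    start = time.monotonic()
    total = 0
    with urllib.request.urlopen(url) as resp:
        while chunk := resp.read(chunk_size):
            total += len(chunk)
    elapsed = time.monotonic() - start
    return total, elapsed, (total * 8) / (elapsed * 1e6)

# Hypothetical usage (placeholder URL, not a real server):
# size, secs, mbps = measure_goodput("https://example.com/largefile.bin")
# print(f"{size} bytes in {secs:.1f} s -> {mbps:.1f} Mbps goodput")
```

Pick a file large enough that the transfer runs for at least several seconds; very short downloads are dominated by connection setup rather than steady-state goodput.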