What Is Round Trip Time (RTT) and Why It Matters

Round trip time (RTT) is the total time it takes for a data packet to travel from your device to a destination server and back again. In practical terms, if you click a link and your browser sends a request, RTT measures how many milliseconds pass before that request reaches the server and the server’s response arrives back at your device. Typical RTT for a fixed broadband connection in North America is between 17 and 21 milliseconds.

How RTT Differs From Latency

People often use “latency” and “round trip time” interchangeably, but they measure slightly different things. Latency is a one-way measurement: the time it takes for data to travel from point A to point B. RTT covers the full journey, from point A to point B and back to point A. In most real-world scenarios, RTT is roughly double the one-way latency, though the return path isn’t always identical to the outgoing one. Network tools like ping report RTT by default because it’s easier to measure: you can time a packet’s departure and arrival back at the same device without needing synchronized clocks at both ends.

What Determines Your RTT

Several factors combine to produce the RTT you experience on any given connection.

Physical distance is the most straightforward factor. Data travels through fiber optic cables at roughly two-thirds the speed of light. A request traveling from New York to London and back covers about 11,000 kilometers, which alone adds around 55 milliseconds just from the physics of signal propagation.
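The distance arithmetic above can be sketched as a short calculation. This is a minimal model of the propagation floor only; real RTT adds hop, queuing, and server delays on top of it, and the distance figure is a rough one:

```python
# Sketch: the propagation component of RTT from distance alone.
# Assumes signals in fiber travel at roughly 2/3 the speed of light;
# real RTT adds hops, queuing, and server time on top of this floor.

SPEED_OF_LIGHT_KM_S = 300_000                    # approximate, in vacuum
FIBER_SPEED_KM_S = SPEED_OF_LIGHT_KM_S * 2 / 3   # ~200,000 km/s in glass

def propagation_rtt_ms(one_way_km: float) -> float:
    """Minimum RTT (ms) from signal propagation over a round trip."""
    round_trip_km = 2 * one_way_km
    return round_trip_km / FIBER_SPEED_KM_S * 1000

# New York to London is roughly 5,500 km one way.
print(round(propagation_rtt_ms(5_500), 1))  # 55.0 ms
```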

Number of network hops matters because your data rarely travels in a straight line. It passes through routers, switches, and other intermediary devices. Each hop adds a small amount of processing and queuing delay as the device reads the packet header and decides where to send it next.

Network congestion increases RTT during peak usage. When routers handle more traffic than they can immediately forward, packets sit in queues waiting their turn. This queuing delay can spike dramatically during high-traffic periods.

Transmission medium plays a role too. Fiber optic connections generally produce lower RTT than copper or wireless connections because light signals propagate faster and with less interference. Wireless connections add variability because radio signals contend with interference, signal strength fluctuations, and shared airwaves.

Server response time is the piece people often overlook. RTT includes however long the destination server takes to process your request and generate a reply. A heavily loaded server that takes 50 milliseconds to respond adds that directly to your total RTT.

Routing efficiency can quietly inflate RTT when your internet service provider sends packets along suboptimal paths. Data traveling between two cities 200 miles apart might route through a hub 500 miles away, adding unnecessary distance.

Typical RTT by Connection Type

RTT varies enormously depending on how you connect to the internet. According to FCC broadband data from 2023, fixed broadband connections in North America average between 17 and 21 milliseconds. The lowest national averages belong to countries like Denmark, Lithuania, and Chile, where fixed broadband RTT sits between 12 and 13 milliseconds.

Mobile connections are slower. 4G LTE connections in North America average between 33 and 63 milliseconds, and 5G generally falls below that range, depending on the deployment: low-band 5G isn’t dramatically faster than LTE, while millimeter-wave 5G can approach wired performance.

The starkest comparison is between satellite technologies. Traditional geostationary satellites orbit about 36,000 kilometers above Earth. That extreme distance produces RTT of 550 to 1,000 milliseconds under normal conditions, sometimes spiking to 3,000 milliseconds during heavy rain. Low-Earth orbit constellations like Starlink orbit much closer, producing RTT of 20 to 100 milliseconds. Under clear skies, Starlink typically stays within a tight 25 to 35 millisecond range, making it comparable to wired broadband for most uses.
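The orbital altitudes alone explain most of the gap between the two satellite technologies. A rough sketch, assuming each request crosses the satellite link four times (up and down to reach the ground station, up and down for the reply) and using approximate altitudes; this computes only the physical minimum, which is why measured geostationary RTT runs higher than the figure below:

```python
# Sketch: the propagation floor for satellite RTT, from altitude alone.
# Radio signals travel at ~300,000 km/s in free space. Real RTT adds
# ground routing and processing on top of this minimum.

SPEED_OF_LIGHT_KM_S = 300_000

def satellite_floor_ms(altitude_km: float) -> float:
    """Minimum RTT (ms): four crossings of the satellite link."""
    return 4 * altitude_km / SPEED_OF_LIGHT_KM_S * 1000

print(round(satellite_floor_ms(35_786)))  # geostationary: ~477 ms minimum
print(round(satellite_floor_ms(550)))     # low-Earth orbit: ~7 ms minimum
```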

How to Measure RTT

The simplest method is the ping command. Open a terminal or command prompt and type ping google.com. Your device sends a small test packet to the target and measures how long the reply takes. The output shows RTT for each packet in milliseconds, like time=10.2 ms. Running several pings gives you a sense of both your average RTT and how much it varies.
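If ICMP ping is unavailable (it often requires elevated privileges from a script), a similar measurement can be approximated by timing a TCP handshake, which completes in one round trip. A minimal sketch; the host and port in the example are placeholders, and any reachable TCP service would work:

```python
# Sketch: approximate RTT by timing the TCP three-way handshake,
# which takes one round trip to complete. Unlike raw ICMP ping,
# this needs no special privileges.
import socket
import time

def tcp_rtt_ms(host: str, port: int = 443, timeout: float = 2.0) -> float:
    """Time the TCP handshake to the given host, in milliseconds."""
    start = time.perf_counter()
    # create_connection() returns once the SYN / SYN-ACK exchange finishes
    with socket.create_connection((host, port), timeout=timeout):
        pass
    return (time.perf_counter() - start) * 1000

# Example usage (requires network access):
#   samples = [tcp_rtt_ms("google.com") for _ in range(5)]
#   print(f"min {min(samples):.1f} ms, avg {sum(samples)/len(samples):.1f} ms")
```

Taking several samples and looking at the minimum, as with ping, filters out transient queuing delay.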

Traceroute (or tracert on Windows) goes further by showing the RTT to every hop along the path between your device and the destination. This helps you identify where delays are occurring. If the first few hops show 5 milliseconds but a hop in the middle jumps to 80, you’ve found a bottleneck.

Web performance tools like Google’s Lighthouse simulate different RTT conditions when testing websites. Lighthouse adds 150 milliseconds of RTT when testing mobile performance and 40 milliseconds for desktop, reflecting the slower connections many real users experience.

Why RTT Matters for Web Performance

RTT has a compounding effect on web browsing because loading a single web page requires multiple round trips. Before your browser receives any page content, it needs to complete a DNS lookup, establish a TCP connection (which involves its own back-and-forth handshake), and often negotiate an encrypted HTTPS session. Each of those steps costs at least one full round trip.

Loading a resource from a server with 100 milliseconds of RTT takes at least 400 milliseconds before any data arrives, because of those sequential handshake steps. A comparison by DebugBear showed that the same resource took 1.13 seconds on a high-latency connection versus just 70 milliseconds on a fast one. That difference is entirely due to RTT, not download speed.

This directly affects metrics that search engines and web developers track. Time to First Byte (TTFB), a core measure of server responsiveness, is roughly the sum of any redirects, four times the connection RTT, and the server’s own processing time. Largest Contentful Paint, which measures when the main content of a page becomes visible, also increases with higher RTT because every resource the page needs requires its own round trips to fetch.
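The rough TTFB model above can be written out as arithmetic. This is a back-of-the-envelope sketch, not a precise formula: the four round trips stand for DNS lookup, TCP handshake, TLS negotiation, and the HTTP request itself, and all input figures are illustrative:

```python
# Sketch of the rough TTFB model: redirect time, plus four round trips
# (DNS + TCP + TLS + HTTP request), plus server processing time.
# All inputs are illustrative numbers, in milliseconds.

def estimated_ttfb_ms(rtt_ms: float, server_ms: float,
                      redirect_ms: float = 0) -> float:
    ROUND_TRIPS = 4  # DNS lookup, TCP handshake, TLS setup, HTTP request
    return redirect_ms + ROUND_TRIPS * rtt_ms + server_ms

print(estimated_ttfb_ms(rtt_ms=100, server_ms=50))  # 450.0
print(estimated_ttfb_ms(rtt_ms=20, server_ms=50))   # 130.0
```

The comparison in the output shows why the same server feels five times slower from a high-RTT connection even when its processing time is unchanged.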

How TCP Uses RTT

RTT is central to how your device manages data transfer. When your computer sends data using TCP (the protocol behind web browsing, email, and most internet traffic), it waits for an acknowledgment from the receiving server confirming the data arrived. The time between sending a data segment and receiving that acknowledgment is one RTT sample.

TCP uses these samples to set retransmission timeouts. If an acknowledgment doesn’t arrive within a calculated window, TCP assumes the packet was lost and resends it. The timeout is based on a running average of recent RTT measurements plus a margin for variation. If your RTT is normally 20 milliseconds but occasionally spikes to 50, TCP factors in that variability so it doesn’t prematurely resend packets that are simply delayed.
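The smoothing described above follows the standard TCP timeout calculation (RFC 6298): an exponentially weighted average of RTT samples plus four times the measured variation. A simplified sketch, omitting details a real implementation needs (the minimum-RTO floor, clock granularity, Karn's algorithm for retransmitted segments):

```python
# Sketch of TCP's retransmission timeout estimator (per RFC 6298):
# a smoothed RTT average plus a margin of four times the RTT variation.
# Simplified model, not a real TCP implementation.

class RtoEstimator:
    ALPHA = 1 / 8   # weight of new samples in the smoothed RTT
    BETA = 1 / 4    # weight of new samples in the variation estimate
    K = 4           # safety margin: 4x the RTT variation

    def __init__(self) -> None:
        self.srtt = None     # smoothed round trip time
        self.rttvar = None   # round trip time variation

    def sample(self, rtt_ms: float) -> float:
        """Fold in one RTT measurement and return the updated RTO."""
        if self.srtt is None:
            self.srtt = rtt_ms
            self.rttvar = rtt_ms / 2
        else:
            self.rttvar = ((1 - self.BETA) * self.rttvar
                           + self.BETA * abs(self.srtt - rtt_ms))
            self.srtt = (1 - self.ALPHA) * self.srtt + self.ALPHA * rtt_ms
        return self.rto_ms()

    def rto_ms(self) -> float:
        return self.srtt + self.K * self.rttvar

est = RtoEstimator()
for rtt in [20, 22, 19, 50, 21]:  # mostly 20 ms, with one 50 ms spike
    est.sample(rtt)
print(f"RTO after samples: {est.rto_ms():.1f} ms")
```

Note how the single 50 ms spike widens the variation term, so the timeout stays well above any recent sample instead of firing on a packet that is merely delayed.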

TCP also uses RTT to control how fast it sends data. On a connection with low RTT, TCP can ramp up its sending rate quickly because acknowledgments return fast, confirming that data is getting through. On high-RTT connections, this feedback loop is slower, so TCP takes longer to reach full speed. This is why a satellite connection with 600 milliseconds of RTT feels sluggish even if the raw bandwidth is high.
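The feedback loop above can be made concrete with a toy slow-start model. This sketch assumes the window doubles every round trip with no loss, using typical defaults (1460-byte segments, an initial window of 10 segments); real TCP behavior is more complicated, but the RTT dependence is the point:

```python
# Sketch: why high RTT slows transfers even when bandwidth is plentiful.
# In TCP slow start the sending window roughly doubles each round trip,
# so transfer time scales with RTT, not just with link capacity.
# Figures (1460-byte MSS, initial window of 10) are typical defaults.
import math

def slow_start_time_ms(file_bytes: int, rtt_ms: float,
                       mss: int = 1460, initial_window: int = 10) -> float:
    """Round trips needed to send a file, times the RTT (no-loss model)."""
    segments = math.ceil(file_bytes / mss)
    window, rounds, sent = initial_window, 0, 0
    while sent < segments:
        sent += window
        window *= 2   # exponential growth until loss or a threshold
        rounds += 1
    return rounds * rtt_ms

# A 1 MB file takes the same 7 round trips either way; only RTT differs.
print(slow_start_time_ms(1_000_000, rtt_ms=20))   # broadband: 140
print(slow_start_time_ms(1_000_000, rtt_ms=600))  # geostationary: 4200
```

The two calls send identical data, yet the satellite case is 30 times slower purely because each doubling of the window must wait one full round trip.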

How to Reduce RTT

Content delivery networks (CDNs) are the most common solution. A CDN caches copies of website content on servers distributed around the world. When you request a page, the CDN serves it from whichever server is geographically closest to you, cutting the physical distance your data travels. For a website with a global audience, this can reduce RTT from hundreds of milliseconds to single digits for cached content.

Edge computing takes a similar approach for dynamic content. Instead of sending every request to a central data center, compute resources near the end user process data locally. A video streaming service, for instance, can use edge servers to prepare and deliver content without routing requests across continents.

Protocol-level optimizations also help. Enabling TCP window scaling allows larger chunks of data to be sent before waiting for acknowledgments, reducing the total number of round trips needed to transfer a file. HTTP/2 and HTTP/3 further reduce round trips by allowing multiple requests to share a single connection and, in the case of HTTP/3, eliminating one round trip from the connection setup entirely.

On the user side, choosing a wired connection over Wi-Fi, selecting a closer DNS server, or using a VPN endpoint near the destination server can each shave milliseconds off your RTT. Individually small, these reductions compound across the dozens of round trips a typical web page requires.