Yes, packets can and do arrive out of order on the internet. When data travels across a network, it gets split into smaller pieces called packets, and each packet can take a different route to reach the destination. Because some routes are faster or less congested than others, packets that were sent in sequence may arrive jumbled. What happens next depends entirely on the protocol handling the connection.
Why Packets Take Different Paths
The internet is a mesh of interconnected routers, and each router makes its own forwarding decision based on current conditions. If one link becomes congested or a router goes down, traffic gets rerouted on the fly. Two packets from the same data stream might travel through completely different cities or even different continents before arriving at the same destination. The packet that left first doesn’t always arrive first.
Even within a single data center, modern network hardware can cause reordering. Network interface cards often distribute incoming traffic across multiple processor cores to keep up with high speeds. This technique, called receive-side scaling (RSS), hashes packets from the same connection to the same core precisely to preserve order. But misconfigurations, or load balancers that split one connection's traffic across multiple paths, can still shuffle the sequence.
How TCP Puts Packets Back in Order
TCP, the protocol behind web browsing, email, and file downloads, guarantees that data arrives complete and in the correct sequence. It does this by numbering the bytes in the stream: every packet carries the sequence number of the data it contains. The receiving end tracks the next sequence number it expects. A packet that arrives with exactly that number is in order and advances the expectation; one with a higher number signals a gap ahead and gets buffered; one with a number lower than data already received is a late arrival, the telltale sign of reordering.
For example, if packets numbered 1, 2, 4, 5, 3 arrive in that order, the receiver knows packet 3 is out of place. It holds packets 4 and 5 in a buffer, waits for 3 to show up, then reassembles everything in the correct order before passing the data to your application. From your perspective as a user, you never see the scramble. The web page loads normally, the file downloads intact.
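The buffer-and-reassemble behavior described above can be sketched in a few lines. This is a hypothetical `reassemble` helper for illustration, using whole-packet numbers rather than TCP's real byte-offset sequence numbers, not an actual TCP stack:

```python
def reassemble(packets):
    """Sketch of a TCP-style receive buffer. Packets are (seq, data)
    pairs; out-of-order arrivals are held back until the gap is filled,
    then delivered to the application in sequence."""
    expected = 1      # next sequence number the application should see
    buffered = {}     # out-of-order packets held back, keyed by seq
    delivered = []
    for seq, data in packets:
        buffered[seq] = data
        # Deliver every contiguous packet starting at the expected number.
        while expected in buffered:
            delivered.append(buffered.pop(expected))
            expected += 1
    return delivered

# The example from the text: 1, 2, 4, 5, 3 arrive in that order, but the
# application still receives a, b, c, d, e in sequence.
reassemble([(1, "a"), (2, "b"), (4, "d"), (5, "e"), (3, "c")])
```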
When the receiver notices a gap in sequence numbers, it sends duplicate acknowledgments back to the sender, essentially saying “I’m still waiting for packet X.” If the sender receives three of these duplicate acknowledgments, it assumes the missing packet was lost (not just delayed) and retransmits it immediately. This is called fast retransmit, and it avoids a much longer wait for a general timeout to expire.
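The duplicate-ACK accounting can be sketched as well. Here a made-up `acks_for` helper plays both sides: it generates the receiver's cumulative ACKs and counts them the way a sender would before triggering fast retransmit:

```python
def acks_for(arrivals):
    """Illustrative sketch of cumulative ACKs plus the fast-retransmit
    trigger. Returns the ACK numbers the receiver sends and the packets
    the sender retransmits after three duplicate ACKs."""
    received = set()
    next_expected = 1
    acks = []
    dup_count = {}
    retransmitted = []
    for seq in arrivals:
        received.add(seq)
        while next_expected in received:
            next_expected += 1
        acks.append(next_expected)   # cumulative: "send me this one next"
        # Sender side: count repeated ACKs asking for the same packet.
        dup_count[next_expected] = dup_count.get(next_expected, 0) + 1
        if dup_count[next_expected] == 4:   # original ACK + 3 duplicates
            retransmitted.append(next_expected)
    return acks, retransmitted
```

With packet 3 missing from the stream 1, 2, 4, 5, 6, 7, every arrival after the gap produces another ACK for 3; the fourth one (the third duplicate) triggers the retransmission without waiting for a timeout.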
Selective Acknowledgment Speeds Recovery
Older versions of TCP could only acknowledge data cumulatively, meaning “I’ve received everything up to byte X.” If multiple packets went missing, the sender had to discover each loss one at a time, waiting a full round trip between each retransmission. A newer mechanism called selective acknowledgment (SACK) lets the receiver report exactly which non-contiguous blocks of data it has received. The sender then retransmits only the data that’s actually missing, saving one or more round trips during recovery.
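The information a SACK report carries can be sketched like this, again assuming simple integer packet numbers rather than TCP's real byte ranges; both helpers are illustrative names:

```python
def sack_blocks(received_seqs):
    """Build SACK-style block reports: the contiguous ranges of data
    the receiver already holds, gaps and all."""
    blocks = []
    for seq in sorted(received_seqs):
        if blocks and seq == blocks[-1][1] + 1:
            blocks[-1] = (blocks[-1][0], seq)   # extend the current block
        else:
            blocks.append((seq, seq))           # start a new block
    return blocks

def missing(received_seqs, highest):
    """What the sender retransmits: everything up to `highest` that no
    SACK block covers -- and nothing else."""
    return [s for s in range(1, highest + 1) if s not in received_seqs]
```

With packets 1-3, 5-6, and 9 received, the receiver reports three blocks and the sender retransmits only 4, 7, and 8 in one round trip, instead of discovering each loss serially.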
Why UDP Doesn’t Reorder Anything
UDP takes a fundamentally different approach. It sends each packet as an independent datagram with no sequence numbers, no acknowledgments, and no retransmission. If packets arrive out of order, UDP delivers them to the application exactly as they came in. If packets are lost entirely, UDP doesn’t notice or care.
This sounds reckless, but it’s intentional. Applications like online gaming, live video calls, and DNS lookups need speed more than perfection. A video call that pauses to wait for a missing packet would feel worse than one that drops a single frame and moves on. These applications handle ordering and loss themselves, using their own lightweight logic tuned to their specific needs.
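The "drop it and move on" policy such an application might layer over UDP can be sketched as follows. The `play_out` helper is hypothetical, loosely modeled on how RTP-style receivers treat late packets:

```python
def play_out(datagrams):
    """Sketch of a real-time receiver's ordering logic: each datagram
    carries an application-level sequence number, and anything arriving
    older than what was already played is silently discarded."""
    highest_played = 0
    played = []
    for seq, frame in datagrams:
        if seq > highest_played:
            played.append(frame)   # play it, even if we skipped a gap
            highest_played = seq
        # else: a late packet -- the moment has passed, drop it
    return played
```

If frame 3 arrives after frame 4 has already played, it is simply dropped: a momentary glitch, but no pause, which is exactly the trade these applications want.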
How Reordering Affects Real-Time Audio and Video
Voice and video applications are particularly sensitive to packet reordering. These apps use a small buffer (typically measured in milliseconds) to smooth out variations in packet arrival times. If packets consistently arrive out of order, the buffer has to work harder to sort them, and the application may run out of buffer space or processing power to keep up. The result is choppy audio, frozen video frames, or dropped calls.
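A minimal sketch of such a buffer, measured in packets rather than milliseconds for simplicity (the fixed `depth` parameter is an assumption for illustration; real jitter buffers adapt their depth):

```python
import heapq

def jitter_buffer(arrivals, depth=3):
    """Illustrative fixed-depth jitter buffer: hold up to `depth`
    packets and always release the lowest sequence number first.
    Repairs mild reordering at the cost of `depth` packets of delay."""
    heap, out = [], []
    for seq, frame in arrivals:
        heapq.heappush(heap, (seq, frame))
        if len(heap) > depth:
            out.append(heapq.heappop(heap)[1])
    while heap:               # flush whatever remains at stream end
        out.append(heapq.heappop(heap)[1])
    return out
```

A packet displaced by more positions than the buffer's depth still comes out of order, which is why persistent reordering overwhelms these buffers and surfaces as choppy audio or frozen frames.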
Because real-time media is so sensitive, reordering problems in a network often show up in voice and video quality first, long before they affect web browsing or file transfers. Network engineers sometimes use call quality as an early warning signal that something in the routing or switching infrastructure needs attention.
Head-of-Line Blocking
One notable downside of TCP’s ordering guarantee is a problem called head-of-line blocking. Because TCP delivers data as a strict, ordered stream, a single missing or delayed packet stalls everything behind it. Even if the receiver already has the next ten packets sitting in its buffer, the application can’t access any of that data until the gap is filled.
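A sketch makes the stall concrete: with strict in-order delivery, a single gap hides everything buffered behind it. The `deliverable` helper is hypothetical:

```python
def deliverable(buffered, next_needed):
    """Return the packet numbers the application can actually read:
    only the contiguous run starting at the next needed number."""
    out = []
    while next_needed in buffered:
        out.append(next_needed)
        next_needed += 1
    return out

# Packets 4 through 13 are sitting in the buffer, but with packet 3
# missing the application can read none of them.
deliverable(set(range(4, 14)), 3)
```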
This became a real performance issue with HTTP/2, which multiplexes many web requests over a single TCP connection. If one TCP packet is lost, all the different requests sharing that connection freeze until the lost packet is retransmitted. A delay affecting one image on a page could stall the CSS, JavaScript, and every other resource waiting on the same connection. This was one of the main motivations for HTTP/3, which moved to a UDP-based transport that handles ordering per stream rather than per connection.
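The per-stream alternative can be sketched the same way; this is illustrative code, not the actual QUIC/HTTP/3 machinery. Each packet is tagged with a stream, and a gap on one stream blocks only that stream:

```python
def per_stream_delivery(packets):
    """Sketch of per-stream ordering: packets are (stream, seq, data),
    and each stream reassembles independently of the others."""
    expected = {}   # next seq needed, per stream
    buffered = {}   # held-back packets, per stream
    delivered = []
    for stream, seq, data in packets:
        buf = buffered.setdefault(stream, {})
        buf[seq] = data
        nxt = expected.setdefault(stream, 1)
        while nxt in buf:               # drain this stream's contiguous run
            delivered.append((stream, buf.pop(nxt)))
            nxt += 1
        expected[stream] = nxt
    return delivered
```

Here the CSS stream delivers both its packets immediately even though the image stream is still waiting on its first packet, which is precisely the head-of-line fix over a single TCP connection.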
How Often Reordering Happens
On the modern internet, only a small percentage of TCP traffic actually experiences packet reordering, and an even smaller fraction of those reordered packets are severe enough to trigger retransmission. For most connections, packets arrive in order the vast majority of the time. Reordering tends to spike on paths that cross many network hops, use load-balanced links, or traverse congested peering points between internet providers.
When reordering does happen in bursts, it can mimic packet loss from TCP’s perspective. Three duplicate acknowledgments look the same whether the packet is genuinely lost or just running late. If the network reorders packets frequently, TCP may retransmit data that wasn’t actually lost, wasting bandwidth and reducing throughput. Some TCP implementations use heuristics to distinguish reordering from true loss, raising the duplicate acknowledgment threshold on paths known to shuffle packets.
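That heuristic can be sketched as follows; this is a simplification of the adaptive reordering metric used by stacks such as Linux's, and `dupack_threshold` is a made-up name:

```python
def dupack_threshold(reorder_extents, default=3):
    """Sketch of an adaptive duplicate-ACK threshold: if the path has
    been observed to reorder packets by up to N positions, demand more
    than N duplicate ACKs before declaring a loss, so mere reordering
    no longer triggers spurious retransmissions."""
    observed = max(reorder_extents, default=0)
    return max(default, observed + 1)
```

On a clean path the threshold stays at the classic three duplicates; on a path that has shuffled packets by five positions, the sender waits for six before retransmitting.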