UDP exists because not every application needs the reliability guarantees that TCP provides, and paying for those guarantees costs speed. When milliseconds matter more than perfect delivery, UDP is the better tool. It skips the setup process, strips away tracking overhead, and lets applications send data immediately, making it the protocol of choice for gaming, video calls, live streaming, and increasingly, the modern web itself.
No Connection Setup Means No Waiting
TCP requires a three-way handshake before any data moves. The sender and receiver exchange messages to confirm they’re both ready, agree on sequence numbers, and establish the connection. Only then does actual data start flowing. This process is reliable, but it adds a round trip of delay before anything useful happens.
UDP skips all of that. It’s a connectionless protocol, meaning the sender fires off data without checking whether the receiver is ready. There’s no “are you there?” exchange, no negotiation. The first packet you send already contains real data. For applications that need to move information quickly, this elimination of startup delay is the single biggest reason to choose UDP.
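This is visible directly in socket code. The sketch below (illustrative names, loopback address for a self-contained demo) shows that a UDP sender never calls `connect()` or waits for any handshake; its very first `sendto()` carries application data:

```python
import socket

# Receiver: bind to a local address. Port 0 lets the OS pick a free port.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))
port = receiver.getsockname()[1]
receiver.settimeout(2)

# Sender: no connect(), no handshake — the first packet is real data.
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"hello", ("127.0.0.1", port))

data, addr = receiver.recvfrom(1024)
```

Contrast this with TCP, where the same exchange would require `listen()`, `accept()`, and `connect()` to complete before the first byte of payload could move.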
Smaller Headers, Less Overhead
Every packet sent over a network carries a header with protocol information. A TCP header with no options is 20 bytes, and in practice most TCP packets carry at least one option (timestamps are a common example), pushing headers to 24 bytes or more. A UDP header is fixed at 8 bytes. That’s less than half the size.
The difference goes beyond header size. TCP continuously manages sequence numbers, acknowledgments, window sizes, and retransmission timers. All of that tracking information rides along with every packet and requires processing at both ends. UDP cuts through this entirely: it sends a datagram with a source port, destination port, length, and an optional checksum. That’s it. When you’re pushing millions of packets per second, the cumulative savings in bandwidth and processing power are significant.
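Those four fields are literally the whole header. A sketch of packing one with Python’s `struct` module (the port numbers and payload here are illustrative):

```python
import struct

# The complete UDP header: four 16-bit fields, 8 bytes total.
src_port, dst_port = 49152, 53        # illustrative source and destination ports
payload = b"query"
length = 8 + len(payload)             # length field covers header plus data
checksum = 0                          # 0 means "no checksum" under IPv4

# "!HHHH" = four big-endian unsigned 16-bit integers.
header = struct.pack("!HHHH", src_port, dst_port, length, checksum)
```

There is no sequence number, no acknowledgment number, no window field: none of the per-packet state TCP carries exists in UDP at all.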
Real-Time Applications Can’t Wait for Retransmission
TCP’s reliability comes from retransmitting lost packets. If packet #47 in a sequence gets dropped, TCP holds packets #48 through #50 in a buffer and waits for #47 to be resent and received. This is called head-of-line blocking, and it’s the exact behavior that breaks real-time applications.
Picture a multiplayer shooter running at 60 frames per second. Each frame needs fresh position data for every player. If one packet is lost and TCP stalls the entire stream waiting for a retransmission, the game freezes for a visible moment. By the time the missing packet arrives, its data is already outdated. The player’s position has changed. Resending old information is worse than skipping it.
UDP solves this by simply not caring about lost packets at the protocol level. The game engine receives whatever arrives, uses the most recent data, and moves on. Developers build their own lightweight recovery on top when needed, like interpolating between known positions or sending redundant state updates, but the underlying transport never stalls waiting for old data.
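One common shape for that lightweight recovery is drop-stale logic: tag every update with a sequence number and discard anything older than what has already been applied. A minimal sketch (the function and field names are illustrative, not from any particular engine):

```python
# Drop-stale logic a game client might use: late or duplicate
# packets are ignored instead of stalling the stream.

latest_seq = -1
latest_state = None

def on_datagram(seq: int, state: dict) -> bool:
    """Apply an update only if it is newer than what we already have."""
    global latest_seq, latest_state
    if seq <= latest_seq:          # late or duplicate: ignore, never wait
        return False
    latest_seq, latest_state = seq, state
    return True

# Packets arrive out of order; update #2 shows up late and is dropped.
on_datagram(1, {"x": 10})
on_datagram(3, {"x": 30})
applied = on_datagram(2, {"x": 20})
```

The key property is that a late packet costs one comparison and nothing else; the stream never blocks.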
Voice and Video Calls Tolerate Small Losses
VoIP and video conferencing run on UDP for the same reason games do: a slightly degraded stream is better than a delayed one. In a phone call, a single lost packet might cause a tiny click or a few milliseconds of silence. Your brain fills in the gap without noticing. But if the protocol paused the entire audio stream to retransmit that packet, you’d hear a noticeable stutter or delay that makes conversation difficult.
The practical threshold for voice quality is about 1% packet loss at the receiving end. Below that, most people can’t tell anything is wrong. Certain audio compression methods duplicate information across successive packets, so even if one packet disappears, the next one carries enough data to reconstruct the missing audio. At 10% loss, though, enough audio vanishes to garble whole words, and most people will hang up. The key point is that UDP gives the application the freedom to handle these tradeoffs itself, rather than forcing TCP’s all-or-nothing reliability model onto a situation where it doesn’t fit.
Broadcasting and Multicast
TCP is a one-to-one protocol. Every connection links exactly two endpoints, and each one must complete a handshake, maintain state, and exchange acknowledgments. If you want to send the same data to a thousand devices, TCP requires a thousand separate connections.
UDP supports both broadcast (sending to every device on a network segment) and multicast (sending to a specific group of subscribers). A single sender transmits one copy of the data, and the network delivers it to all listening devices simultaneously. This is essential for things like IoT sensor networks, where a controller might push a configuration update to hundreds of devices at once, or for live video distribution within a corporate network. The low overhead of UDP makes this practical in a way that TCP’s connection-per-recipient model never could.
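In code, multicast is just a UDP socket plus a group membership. The sketch below keeps everything on the loopback interface so it is self-contained; the group address and port are arbitrary example values:

```python
import socket
import struct

GROUP = "224.0.0.251"   # example multicast group address
PORT = 5007             # example port

# Receiver: bind the port and join the group on the loopback interface.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
receiver.bind(("", PORT))
membership = struct.pack("4s4s", socket.inet_aton(GROUP),
                         socket.inet_aton("127.0.0.1"))
receiver.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, membership)
receiver.settimeout(2)

# Sender: one sendto() reaches every member of the group.
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_IF,
                  socket.inet_aton("127.0.0.1"))
sender.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_LOOP, 1)
sender.sendto(b"config-update", (GROUP, PORT))

data, addr = receiver.recvfrom(1024)
```

The sender transmits once regardless of whether one device or a thousand devices have joined the group; there is no per-recipient state anywhere in this code.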
The Modern Web Is Moving to UDP
HTTP/3, the latest version of the protocol that powers the web, runs on QUIC, which is built on top of UDP rather than TCP. This isn’t a niche experiment. Major browsers and services already support it.
The core motivation is solving TCP’s head-of-line blocking problem for web traffic. HTTP/2 allowed multiple requests to share a single TCP connection, but if one packet was lost, every request on that connection stalled. QUIC runs independent streams over UDP, so a lost packet only affects the single stream it belongs to. QUIC also enables faster handshakes, including 0-RTT connections where a client that has previously connected to a server can start sending data immediately with no round-trip delay at all. Connection migration is another benefit: if your phone switches from Wi-Fi to cellular, a QUIC connection can survive the switch because it’s identified by a connection ID rather than by IP address and port.
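The per-stream independence can be modeled in a few lines. This is an illustrative sketch of the idea, not the QUIC wire format: each stream keeps its own reassembly buffer, so a gap in one stream never delays data that is already complete in another.

```python
from collections import defaultdict

streams = defaultdict(dict)   # stream_id -> {byte offset: chunk}

def on_packet(stream_id: int, offset: int, chunk: bytes) -> None:
    streams[stream_id][offset] = chunk

def deliverable(stream_id: int) -> bytes:
    """Return the contiguous prefix of a stream, stopping at the first gap."""
    out, offset = b"", 0
    buffer = streams[stream_id]
    while offset in buffer:
        out += buffer[offset]
        offset += len(buffer[offset])
    return out

on_packet(1, 0, b"GET /a")      # stream 1 arrives complete
on_packet(2, 6, b"style.css")   # stream 2's first packet (bytes 0-5) was lost
```

Here stream 1 is fully deliverable even though stream 2 is stalled at its gap; under HTTP/2 on TCP, that single loss would have stalled both requests.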
The Tradeoff: No Built-In Safety Net
UDP’s lack of congestion control is both its strength and its risk. TCP automatically slows down when it detects network congestion, preventing any single connection from overwhelming the network. UDP has no such mechanism. It will send data as fast as the application tells it to, regardless of network conditions.
In practice, this means poorly designed UDP applications can flood a network. When too many devices blast UDP traffic without any rate limiting, the result is packet loss and high latency for everyone on the network, not just the UDP senders. This is why well-built UDP applications implement their own congestion control at the application layer. QUIC, for example, includes sophisticated congestion management despite running on UDP. Game engines typically send updates at a fixed rate tied to the game’s tick rate rather than flooding the connection.
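The fixed-rate pattern is simple to express. A sketch of a tick-paced send loop (the 60 Hz rate and function names are illustrative):

```python
import time

TICK_RATE = 60                 # updates per second (illustrative)
TICK = 1.0 / TICK_RATE

def run_ticks(n: int, send) -> float:
    """Call send() once per tick for n ticks; return elapsed seconds."""
    start = time.monotonic()
    next_tick = start
    for _ in range(n):
        send()                               # one update per tick, no bursts
        next_tick += TICK
        sleep_for = next_tick - time.monotonic()
        if sleep_for > 0:
            time.sleep(sleep_for)            # pace, instead of flooding
    return time.monotonic() - start

sent = []
elapsed = run_ticks(6, lambda: sent.append(time.monotonic()))
```

Scheduling against `next_tick` rather than sleeping a fixed interval after each send keeps the rate stable even when `send()` itself takes variable time.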
UDP also provides minimal error checking. In IPv4, the checksum field is technically optional (though recommended). In IPv6, the checksum is mandatory. Either way, UDP only detects corruption. It doesn’t fix it, reorder packets, or confirm delivery. Any of those features must be built by the application developer if they’re needed.
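That checksum is the standard Internet ones’-complement sum over 16-bit words. The core fold is sketched below (the real UDP checksum also covers a pseudo-header of IP addresses, which this simplified version omits):

```python
def internet_checksum(data: bytes) -> int:
    """Ones'-complement sum of 16-bit words, as used by UDP/TCP/IP."""
    if len(data) % 2:
        data += b"\x00"                            # pad to a whole word
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)   # fold carries back in
    return (~total) & 0xFFFF

# Detection only: a corrupted byte changes the checksum, and that is
# all UDP can tell you — nothing gets retransmitted or repaired.
good = internet_checksum(b"hello")
bad = internet_checksum(b"hellp")
```

A mismatch makes the receiver silently discard the datagram; recovering the lost data, if it matters, is the application’s job.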
When UDP Is the Right Choice
- Live audio and video: Calls, streams, and conferencing where freshness beats perfection.
- Online gaming: Fast-paced games where stale position data is useless and every millisecond of latency is felt.
- DNS lookups: Simple request-response queries where a TCP handshake alone would take longer than the actual data transfer.
- IoT and sensor networks: Lightweight devices sending small, frequent updates where the processing cost of TCP is disproportionate.
- Multicast and broadcast: One-to-many communication that TCP simply cannot do.
- Modern web protocols: HTTP/3 and QUIC, which layer reliability on top of UDP to get the best of both worlds.
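The DNS case is a good illustration of how little a UDP exchange needs: the entire query fits in one small datagram. A sketch that builds a minimal query packet (the transaction ID is an arbitrary example value; sending it to a resolver is then a single `sendto` call):

```python
import struct

def build_dns_query(name: str, txid: int = 0x1234) -> bytes:
    """Build a minimal DNS query: 12-byte header, QNAME, QTYPE/QCLASS."""
    # Header: id, flags (RD=1), 1 question, 0 answer/authority/additional.
    header = struct.pack("!HHHHHH", txid, 0x0100, 1, 0, 0, 0)
    # QNAME: each label prefixed by its length, terminated by a zero byte.
    qname = b"".join(bytes([len(label)]) + label.encode()
                     for label in name.split(".")) + b"\x00"
    question = qname + struct.pack("!HH", 1, 1)   # QTYPE=A, QCLASS=IN
    return header + question

query = build_dns_query("example.com")
# One datagram, no handshake: sock.sendto(query, (resolver_ip, 53))
```

Twenty-nine bytes of payload for a full lookup request is exactly the kind of exchange where TCP’s setup cost would dominate the transfer.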
The pattern across all of these is the same: UDP gives applications direct control over how they handle (or don’t handle) reliability, ordering, and congestion. For any situation where TCP’s built-in protections create more problems than they solve, UDP is the protocol that gets out of the way and lets the application decide what actually matters.