The transport layer is the part of the networking stack that manages communication between two specific applications running on different devices. It sits between the software you interact with (like a web browser or email client) and the lower-level network machinery that actually moves data across routers and cables. Its core job is making sure the right data gets to the right application, intact and in order.
Breaking Data Into Manageable Pieces
When you send a large file or load a complex webpage, the transport layer doesn’t shove all that data into the network at once. It splits it into smaller chunks called segments, each tagged with a sequence number. These segments travel independently across the network, potentially taking different routes and arriving out of order.
At the receiving end, the transport layer reads those sequence numbers and reassembles everything in the correct order before handing it up to the application. The segmentation process is also smart about sizing: it checks the limits of the network below it and creates segments small enough to travel without needing to be broken down further at lower layers. This avoids unnecessary extra splitting that would slow things down.
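The split-and-reassemble idea can be sketched in a few lines of Python. This is a toy model, not real protocol code: the segment size and the list-of-tuples representation are illustrative choices, and real transport layers track byte offsets rather than chunk indices.

```python
# Toy model: split data into numbered segments, then reassemble by
# sequence number even if the segments arrive out of order.
MAX_SEGMENT_SIZE = 4  # illustrative only; real sizes depend on the network's limits

def segment(data: bytes, size: int = MAX_SEGMENT_SIZE):
    """Split data into (sequence_number, chunk) pairs."""
    return [(i, data[i:i + size]) for i in range(0, len(data), size)]

def reassemble(segments):
    """Sort by sequence number to restore the original byte order."""
    return b"".join(chunk for _, chunk in sorted(segments))

packets = segment(b"hello, transport layer")
packets.reverse()  # simulate out-of-order arrival across different routes
assert reassemble(packets) == b"hello, transport layer"
```

The key point the sketch captures: because every chunk carries its own sequence number, the receiver never needs the network to preserve ordering.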
Directing Traffic With Port Numbers
Your computer runs dozens of networked applications at the same time. You might have a browser open, a video call running, and a file downloading in the background. All of that traffic arrives through the same network connection, so something needs to sort it. That’s where port numbers come in.
Every application creates a “socket,” essentially a doorway to the network, and each socket gets a unique port number. When data leaves your machine, the transport layer reads the socket’s port number, stamps it on the outgoing segment’s header, and passes it down to the network. This process is called multiplexing. When data arrives from the network, the transport layer reads the port number in the header and forwards the data to the correct application. That’s demultiplexing.
Port numbers fall into three ranges defined by the Internet Assigned Numbers Authority. System ports (0 through 1023) are reserved for well-known services: port 80 for standard web traffic, 443 for encrypted web traffic, 25 for email routing, 22 for secure shell access, and 53 for translating domain names into IP addresses. User ports (1024 through 49151) are available for registered applications. Dynamic ports (49152 through 65535) are assigned temporarily by your operating system when an application opens a new connection.
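You can watch dynamic port assignment happen with Python's standard socket API. Binding to port 0 asks the operating system to pick a free ephemeral port; note that the exact ephemeral range varies by OS (Linux, for example, defaults to a different range than the IANA-suggested 49152 through 65535).

```python
import socket

# Bind to port 0: the OS assigns a dynamic (ephemeral) port automatically,
# just as it does whenever an application opens a new outgoing connection.
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.bind(("127.0.0.1", 0))
host, port = s.getsockname()
print(f"OS assigned ephemeral port {port}")
s.close()
```

Run it twice and you will usually see two different port numbers, which is exactly how two instances of the same application can be demultiplexed correctly.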
Ensuring Reliable Delivery
Networks lose packets. Cables get noisy, routers get overloaded, and data gets corrupted in transit. The transport layer can catch and fix these problems through two mechanisms: checksums and retransmission.
A checksum is a small value computed from the contents of a segment, similar to a fingerprint. The sender calculates it and attaches it to the segment. The receiver recalculates it on arrival. If the two values don’t match, the segment was corrupted and gets discarded. To recover from lost or discarded segments, the transport layer uses acknowledgments. The receiver confirms each segment it gets by sending back an acknowledgment with a sequence number. If the sender doesn’t receive that confirmation within a calculated time window, it resends the segment. The sender estimates round-trip time dynamically, adjusting its retransmission timer based on actual network conditions.
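The checksum TCP and UDP actually use is a 16-bit one's-complement sum (specified in RFC 1071). A minimal pure-Python version shows how a single flipped byte changes the fingerprint:

```python
def internet_checksum(data: bytes) -> int:
    """16-bit one's-complement sum, the algorithm TCP and UDP use (RFC 1071)."""
    if len(data) % 2:
        data += b"\x00"                 # pad odd-length data to a full 16-bit word
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)  # fold the carry back in
    return ~total & 0xFFFF              # one's complement of the sum

payload = b"some payload"
stamped = internet_checksum(payload)

# Receiver recomputes on arrival; a mismatch means corruption, so discard.
assert internet_checksum(payload) == stamped
assert internet_checksum(b"some paylaod") != stamped  # two bytes swapped in transit
```

Real implementations also mix in a "pseudo-header" of IP addresses before summing, which this sketch omits for clarity.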
Preventing the Receiver From Being Overwhelmed
A fast sender can easily flood a slow receiver. Flow control prevents this. In practice, the receiver tells the sender exactly how much buffer space it has available by advertising a “window size” in every acknowledgment it sends back. The sender is then limited to having no more than that many bytes of unacknowledged data in flight at any time.
As the receiver processes data and frees up buffer space, the window grows. If the receiver falls behind, the window shrinks, and the sender slows down. This sliding window approach keeps data flowing as fast as the receiver can handle it, without ever overrunning its memory. It’s a continuous negotiation that happens automatically on every connection.
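The sender's side of this negotiation reduces to simple bookkeeping: never let unacknowledged bytes exceed the receiver's advertised window. A sketch (the class and method names are invented for illustration):

```python
class FlowControlledSender:
    """Toy sender enforcing the receiver's advertised window."""

    def __init__(self, advertised_window: int):
        self.window = advertised_window  # receiver's free buffer space, in bytes
        self.in_flight = 0               # bytes sent but not yet acknowledged

    def can_send(self, nbytes: int) -> bool:
        return self.in_flight + nbytes <= self.window

    def send(self, nbytes: int) -> None:
        assert self.can_send(nbytes), "would overrun the receiver's buffer"
        self.in_flight += nbytes

    def on_ack(self, acked: int, new_window: int) -> None:
        # Every acknowledgment both frees in-flight bytes and carries the
        # receiver's current window size, so the limit tracks reality.
        self.in_flight -= acked
        self.window = new_window

s = FlowControlledSender(advertised_window=1000)
s.send(800)
assert not s.can_send(300)          # 800 + 300 > 1000: sender must pause
s.on_ack(acked=800, new_window=1000)
assert s.can_send(300)              # window reopened, transmission resumes
```

When the receiver falls behind, it simply advertises a smaller `new_window`, and the sender's arithmetic throttles it automatically.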
Setting Up and Tearing Down Connections
Before two devices exchange any application data over a reliable connection, the transport layer establishes that connection through a three-step process called a three-way handshake. First, the client sends a synchronization message containing a random starting sequence number. The server responds with its own random sequence number and an acknowledgment equal to the client’s sequence number plus one. Finally, the client sends back an acknowledgment equal to the server’s sequence number plus one. At that point, both sides have confirmed they can send and receive, and data transfer begins.
This setup phase also lets both sides agree on parameters like window size. When the conversation is finished, a similar exchange gracefully closes the connection so neither side is left waiting for data that will never arrive.
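The sequence-number arithmetic of the handshake is easy to model. The dictionaries below are a toy representation of segment headers, not real packet formats:

```python
import random

# Step 1: client sends SYN with a random initial sequence number.
client_isn = random.randrange(2**32)
syn = {"flag": "SYN", "seq": client_isn}

# Step 2: server replies with its own random sequence number and
# acknowledges the client's number plus one.
server_isn = random.randrange(2**32)
syn_ack = {"flag": "SYN-ACK", "seq": server_isn, "ack": syn["seq"] + 1}

# Step 3: client acknowledges the server's number plus one.
ack = {"flag": "ACK", "seq": syn_ack["ack"], "ack": syn_ack["seq"] + 1}

assert syn_ack["ack"] == client_isn + 1
assert ack["ack"] == server_isn + 1
```

The random starting numbers matter: they make it hard for an attacker who can't see the traffic to forge segments that would be accepted into the connection.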
TCP vs. UDP: Reliability or Speed
The two main transport protocols offer fundamentally different tradeoffs. TCP is connection-based, meaning both sides establish and maintain a link for the entire conversation. It treats data as a continuous stream, handles segmentation and reassembly automatically, tracks every segment with sequence numbers, and retransmits anything that goes missing. Web browsing, email, file transfers, and secure shell sessions all use TCP because they can’t tolerate missing or jumbled data.
UDP is connectionless. It fires off packets without first checking whether the receiver is ready. If a packet is lost, neither side automatically knows or retransmits. UDP does include a checksum for detecting corruption, but it makes no guarantees about order or delivery. What it offers instead is speed and simplicity. Domain name lookups, time synchronization, and real-time applications like video calls and online gaming use UDP because a retransmitted packet arriving late is worse than no packet at all. In a voice call, a brief gap sounds better than a sentence that plays five seconds late.
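UDP's fire-and-forget nature is visible in code: there is no connect step, just a send. The example below exchanges one datagram over the loopback interface, where delivery is reliable in practice; over a real network, that `recvfrom` could simply wait forever if the packet were lost.

```python
import socket

# Receiver: bind a UDP socket; port 0 lets the OS pick a free port.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))
addr = receiver.getsockname()

# Sender: no handshake, no connection state; just send the datagram.
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"ping", addr)

data, _ = receiver.recvfrom(1024)
sender.close()
receiver.close()
```

Compare this with TCP, where the same exchange would require `connect()` and `accept()` calls (and a full handshake on the wire) before a single byte of application data could move.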
QUIC: A Modern Alternative
TCP has a practical limitation: it’s built into operating system kernels and network hardware worldwide, making it nearly impossible to update at scale. QUIC is a newer transport protocol built on top of UDP that addresses several long-standing TCP pain points. It was originally developed to improve web page load times and now powers HTTP/3, the latest version of the web’s core protocol.
QUIC’s most notable improvement is connection setup speed. Where TCP needs at least one full round trip for its handshake (plus additional round trips when TLS encryption is layered on top), QUIC combines transport and encryption setup, and can establish a connection in zero round trips when reconnecting to a server it has talked to before. It also solves a problem called head-of-line blocking: in TCP, if one segment is lost, all segments behind it must wait for the retransmission, even if they belong to completely different streams of data. QUIC multiplexes streams independently, so a lost segment in one stream doesn’t stall the others. It also supports connection migration, meaning your session can survive switching from Wi-Fi to cellular without dropping.