Flow control regulates the speed of data transmission between two devices so the sender doesn’t overwhelm the receiver. If one device can send data at 10 Gbps but the receiving device can only process 1 Gbps, the receiver’s memory buffer fills up almost instantly. Without flow control, that mismatch causes dropped data, corrupted transfers, and potentially crashes on the receiving end.
Flow control works by giving the receiver a way to signal the sender: slow down, pause, or keep going. It operates at multiple levels of a network, from physical cable connections between two devices to internet-scale data transfers across continents.
The Core Problem Flow Control Solves
Every device that receives data has a buffer, a temporary holding area in memory where incoming data waits to be processed. When data arrives faster than the device can handle it, that buffer fills up. Once it’s full, new incoming data gets discarded. This is called a buffer overflow, and it forces the sender to retransmit the lost data, wasting time and bandwidth. In severe cases, the receiving device crashes entirely.
Flow control prevents this by matching the sender’s transmission rate to what the receiver can actually absorb. Think of it like a conversation: if someone talks faster than you can take notes, you hold up a hand and ask them to pause. Flow control is that hand signal, built into the communication protocol itself.
How Stop-and-Wait Works
The simplest form of flow control is the stop-and-wait method. The sender transmits one unit of data (called a frame), then waits. It doesn’t send anything else until the receiver sends back an acknowledgment confirming the data arrived intact. If the receiver detects an error, it sends a negative acknowledgment instead, and the sender retransmits.
To handle situations where the frame never reaches the receiver at all (meaning no acknowledgment comes back either way), the sender runs a timer. If the timer expires before any response arrives, the sender assumes the data was lost and retransmits automatically. Frames are labeled alternately as 0 or 1, so both sides can tell the difference between a genuinely new frame and an accidental duplicate caused by a retransmission.
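The mechanism above can be sketched as a small simulation. This is an illustrative sketch, not a real network stack: the `stop_and_wait` function, the `lost_on_first_try` parameter, and the loss model are all invented for the example.

```python
# Minimal stop-and-wait simulation: frames carry an alternating
# sequence bit, and the sender "retransmits" whenever a frame is lost.

def stop_and_wait(frames, lost_on_first_try):
    """Deliver `frames` one at a time; indices in `lost_on_first_try`
    are dropped once, forcing a timeout and a retransmission."""
    delivered = []        # what the receiver has accepted
    expected_bit = 0      # sequence bit the receiver expects next
    attempts = 0          # total transmissions, including resends

    for i, payload in enumerate(frames):
        seq_bit = i % 2   # frames are labeled alternately 0 or 1
        acked = False
        first_try = True
        while not acked:
            attempts += 1
            if first_try and i in lost_on_first_try:
                first_try = False   # frame lost: timer expires, resend
                continue
            if seq_bit == expected_bit:   # genuinely new frame
                delivered.append(payload)
                expected_bit ^= 1
            # a duplicate (wrong bit) would be discarded here, but
            # still acknowledged so the sender can move on
            acked = True
    return delivered, attempts

data, tries = stop_and_wait(["a", "b", "c"], lost_on_first_try={1})
print(data, tries)   # frame "b" needed one retransmission
```

The alternating bit is what lets the receiver discard a duplicate without mistaking it for new data.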
Stop-and-wait is reliable but slow. The sender sits idle while waiting for each acknowledgment, which wastes a lot of potential throughput, especially on high-speed or long-distance connections where the round-trip time is significant.
Sliding Window: Sending Multiple Frames at Once
Sliding window protocols solve the efficiency problem by letting the sender transmit multiple frames before needing an acknowledgment. The “window” is the number of frames the sender is allowed to have in transit at any given time. As acknowledgments come back, the window slides forward, allowing new frames to be sent.
This keeps the connection busy. Instead of sending one frame and waiting, the sender might have 10 or 100 frames in flight simultaneously. The receiver controls the pace by adjusting the window size: a larger window means “send more,” and a smaller window means “slow down, I’m falling behind.” If the receiver’s buffer is nearly full, it can shrink the window to just a few frames or even zero, effectively pausing transmission.
Hardware Flow Control: Physical Signals
In serial communication (the kind used by older computer ports, industrial equipment, and many embedded systems), flow control can happen through dedicated electrical signals on the cable itself. This is called hardware flow control, sometimes referred to as handshaking.
The two signals involved are Request to Send (RTS) and Clear to Send (CTS). When a device wants to transmit, it activates its RTS line. The receiving device responds by activating CTS if it’s ready to accept data. If the receiver deactivates CTS, the sender must stop transmitting immediately. Because these are physical voltage signals on dedicated wires, they take effect almost immediately and consume no bandwidth on the data channel.
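In practice, hardware handshaking is usually just a configuration flag. With the widely used pyserial library, for example, it is enabled at port-open time (this is a configuration sketch; the port name is a placeholder, and it assumes pyserial is installed and hardware is attached):

```python
import serial  # pyserial: pip install pyserial

# Open a serial port with RTS/CTS hardware flow control enabled.
# "/dev/ttyUSB0" is a placeholder; substitute the port on your system.
port = serial.Serial("/dev/ttyUSB0", baudrate=115200, rtscts=True)

# With rtscts=True, write() blocks whenever the peer deasserts CTS,
# so the application cannot overrun the receiver's buffer.
port.write(b"hello")
port.close()
```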
Software Flow Control: XON and XOFF
When dedicated hardware lines aren’t available, flow control can happen within the data stream itself using two special characters. The XOFF character tells the sender to stop transmitting. The XON character tells it to resume. These are standard ASCII control characters: XON is Control-Q and XOFF is Control-S.
The tradeoff is that software flow control uses the same channel as the data, so it’s slightly slower to react than hardware flow control. It also means the data itself can’t contain XON or XOFF characters without causing confusion, which makes it unsuitable for transferring binary files unless additional encoding is applied.
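The two control characters are ordinary bytes, which makes the in-band mechanism easy to demonstrate. The `feed` helper below is a made-up sketch of the sender-side logic, not a real library call:

```python
# XON/XOFF are plain ASCII control characters embedded in the stream.
XON = b"\x11"   # DC1, Control-Q: "resume sending"
XOFF = b"\x13"  # DC3, Control-S: "stop sending"

def feed(stream, sending=True):
    """Sender-side sketch: scan bytes arriving from the receiver and
    track whether we are currently allowed to transmit."""
    for byte in stream:
        if byte == XOFF[0]:
            sending = False
        elif byte == XON[0]:
            sending = True
    return sending

print(feed(b"ready" + XOFF))         # paused after XOFF
print(feed(b"ready" + XOFF + XON))   # resumed after XON
```

This also makes the binary-data problem concrete: any file that happens to contain the byte 0x13 would silently pause the transfer.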
Flow Control in Ethernet Networks
Modern Ethernet networks use a mechanism defined by the IEEE 802.3x standard. When a network switch or device gets congested, it sends a special “pause frame” to its link partner. This frame contains a specific duration, measured in pause quanta (units of 512 bit times), telling the partner how long to stop sending. If conditions change, a new pause frame can extend the pause or cancel it entirely by setting the duration to zero.
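The pause frame itself is tiny and has a fixed layout, which can be sketched as raw bytes. The destination is the reserved multicast address that pause frames always use; the source MAC here is a made-up example address:

```python
import struct

PAUSE_DST = bytes.fromhex("0180c2000001")   # reserved multicast address
SRC_MAC   = bytes.fromhex("020000000001")   # hypothetical sender MAC

def pause_frame(quanta):
    """Build an 802.3x pause frame asking the link partner to stop
    sending for `quanta` pause quanta (0 cancels an earlier pause)."""
    frame = (
        PAUSE_DST
        + SRC_MAC
        + struct.pack("!H", 0x8808)   # MAC Control EtherType
        + struct.pack("!H", 0x0001)   # opcode: PAUSE
        + struct.pack("!H", quanta)   # pause duration
    )
    return frame + b"\x00" * (60 - len(frame))  # pad to minimum size

f = pause_frame(0xFFFF)   # request the maximum pause duration
print(len(f))             # 60 bytes, the Ethernet minimum (before FCS)
```

Canceling a pause is just `pause_frame(0)`: same frame, zero duration.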
This was designed to let network switches work with limited memory without having to drop frames during temporary traffic spikes. It only operates between two directly connected devices on a full-duplex link. It is not intended as an end-to-end solution for sustained congestion across an entire network path.
One practical consideration: enabling Ethernet pause frames can introduce noticeable delays. For real-time applications like voice calls or online gaming, those pauses can hurt performance more than the occasional dropped packet would. Many network administrators leave flow control disabled on ports that carry latency-sensitive traffic for this reason.
How TCP Handles Flow Control
On the internet, the most important flow control mechanism lives inside TCP, the protocol responsible for reliable data delivery. Every TCP connection includes a “receive window” that tells the sender how much data the receiver can currently accept.
The window size field in a TCP header is 16 bits, which originally limited it to 65,535 bytes. That was adequate for early internet speeds but far too small for modern connections. A scaling option, negotiated when the connection is first established during the three-way handshake, multiplies that base value by a power of two. With a scale factor of 3, for example, a 65,535-byte window becomes roughly 524,000 bytes. At the maximum scale factor of 14, the window can grow to just over 1 gigabyte.
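The scaling arithmetic is a simple left shift, which makes the numbers above easy to verify:

```python
# TCP window scaling: the advertised 16-bit window value is
# left-shifted by the scale factor negotiated during the handshake.
BASE = 65535                  # largest window expressible in 16 bits

def scaled_window(scale):
    return BASE << scale      # equivalent to BASE * 2**scale

print(scaled_window(0))    # 65535: the pre-scaling limit
print(scaled_window(3))    # 524280: roughly 524,000 bytes
print(scaled_window(14))   # 1073725440: just over 1 GB (decimal)
```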
As the receiver processes incoming data and frees up buffer space, it advertises a larger window in its acknowledgments, letting the sender speed up. If the receiver falls behind, it advertises a smaller window. If it sets the window to zero, the sender must stop entirely until a new advertisement arrives with available space. This creates a continuous, dynamic feedback loop that adjusts transmission speed in real time throughout the life of the connection.
Flow Control vs. Congestion Control
These two terms are easy to confuse, but they address different problems. Flow control protects the receiver. It ensures data doesn’t arrive faster than the destination device can process it. Congestion control protects the network itself. It prevents senders from collectively flooding routers and links along the path between source and destination.
Flow control is typically a conversation between two endpoints: “I can handle this much, no more.” Congestion control involves reading signals from the network, like packet loss or increasing delays, and backing off when the shared infrastructure is overloaded. TCP implements both simultaneously. The actual sending rate at any moment is governed by whichever limit is smaller: the receiver’s advertised window (flow control) or the sender’s estimate of what the network can carry (congestion control).
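The "whichever limit is smaller" rule reduces to a single `min`. The numbers below are arbitrary examples, not real defaults:

```python
# TCP's effective send limit at any moment is the tighter of the two
# constraints: flow control (receiver's advertised window) or
# congestion control (sender's congestion window).

def send_limit(receive_window, congestion_window):
    return min(receive_window, congestion_window)

print(send_limit(65535, 10 * 1460))   # network is the bottleneck: 14600
print(send_limit(4096, 10 * 1460))    # receiver is the bottleneck: 4096
```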
Where Flow Control Operates in a Network
Flow control isn’t confined to a single layer of the networking stack. At the data link layer (Layer 2), it manages traffic between directly connected devices on a local network. Ethernet pause frames and the RTS/CTS handshake both operate here. Layer 2 switches, the devices that connect computers within a local network, rely on flow control to keep individual port buffers from overflowing during traffic bursts.
At the transport layer (Layer 4), TCP’s receive window manages flow across the entire connection, regardless of how many switches, routers, and links sit between sender and receiver. This end-to-end scope makes it more flexible but also more complex, since the feedback signal has to travel the full round-trip distance before the sender can adjust.
Both layers work independently. A pause frame on one Ethernet link doesn’t directly affect the TCP window on the connection passing through it. But the cumulative effect of link-level flow control can influence TCP’s behavior indirectly by adding latency or changing the timing of acknowledgments.

