Which Factors Determine TCP Window Size?

The effective TCP window size is determined by whichever value is smaller: the receiver’s advertised window (rwnd) or the sender’s congestion window (cwnd). These two variables work together to control how much unacknowledged data can be “in flight” on the network at any moment. Understanding what shapes each one gives you the full picture.

The Two Windows That Matter

TCP uses two separate window mechanisms, and the actual amount of data a sender can transmit before waiting for an acknowledgment is the minimum of the two.

The receiver window (rwnd) is set by the receiving machine. It reflects how much free space exists in the receiver’s buffer. Every acknowledgment the receiver sends back includes the current rwnd value, telling the sender “this is how much more data I can accept right now.” When the receiving application reads data out of the buffer, rwnd grows. When new data arrives from the network faster than the application consumes it, rwnd shrinks.
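The fill-and-drain behavior described above can be sketched as a toy model (the names and fixed buffer size here are illustrative, not real kernel state):

```python
# Toy model of the receiver window: rwnd is simply the free
# space left in a fixed-size receive buffer. Illustrative only.

RECV_BUFFER = 64 * 1024  # hypothetical 64 KB receive buffer
buffered = 0             # bytes waiting for the application to read

def rwnd():
    """Advertised window = free space remaining in the buffer."""
    return RECV_BUFFER - buffered

def data_arrives(n):
    """Network delivers n bytes: the buffer fills, rwnd shrinks."""
    global buffered
    buffered = min(RECV_BUFFER, buffered + n)

def app_reads(n):
    """Application consumes n bytes: the buffer drains, rwnd grows."""
    global buffered
    buffered = max(0, buffered - n)

data_arrives(48 * 1024)
print(rwnd())   # 16384 — only 16 KB of window left to advertise
app_reads(32 * 1024)
print(rwnd())   # 49152 — reading freed up buffer space
```

When the application stops reading entirely, `buffered` reaches `RECV_BUFFER` and rwnd drops to zero, which is exactly the condition that stalls the sender.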

The congestion window (cwnd) is maintained by the sender. It starts small and grows as the sender successfully delivers data without signs of network congestion. If packets are lost or delayed, the sender assumes the network is congested and reduces cwnd. Algorithms like slow start, congestion avoidance, and fast recovery all adjust cwnd dynamically based on real-time network feedback.

Because the effective window is the smaller of rwnd and cwnd, a fast receiver with plenty of buffer space won’t help if the network path is congested. Likewise, a clear network path won’t help if the receiver’s buffer is nearly full.

What Controls the Receiver Window

The receiver window is ultimately a reflection of the operating system’s TCP receive buffer. When a connection is established, the OS allocates a receive buffer of a certain size. As data arrives, it fills that buffer. As the application (a web server, a database, a browser) reads from the buffer, space frees up. The rwnd value advertised in each acknowledgment is simply the amount of available space remaining.

On Linux, the receive buffer size is governed by kernel parameters. The setting net.ipv4.tcp_rmem defines minimum, default, and maximum values. Modern kernels also enable auto-tuning by default (controlled by net.ipv4.tcp_moderate_rcvbuf), which lets the OS grow or shrink the receive buffer dynamically based on how the connection is performing. Windows and macOS have their own equivalents, but the principle is the same: the OS manages the buffer, and the buffer determines rwnd.
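One portable way to observe the OS-managed buffer is to query a socket's `SO_RCVBUF` option; a minimal sketch (actual default and maximum sizes vary by OS and kernel settings, and Linux reports roughly double the requested value to account for bookkeeping overhead):

```python
import socket

# Create a TCP socket and ask the OS for its receive buffer size.
# This is the buffer whose free space ultimately becomes rwnd.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
default_rcvbuf = s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
print(f"default receive buffer: {default_rcvbuf} bytes")

# An application can request a larger buffer, though the kernel
# caps the result (on Linux, by net.core.rmem_max).
s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 1 << 20)  # request 1 MiB
tuned_rcvbuf = s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
print(f"after setsockopt: {tuned_rcvbuf} bytes")
s.close()
```

Note that manually setting `SO_RCVBUF` disables the kernel's receive-buffer auto-tuning for that socket on Linux, which is why most applications leave it alone.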

What Controls the Congestion Window

The congestion window starts at a small value when a connection opens, typically 10 TCP segments on modern systems. During slow start, cwnd doubles with each round trip as long as no packets are lost. Once cwnd crosses the slow-start threshold (ssthresh) or the sender detects loss, the growth strategy switches to congestion avoidance, where cwnd increases much more gradually, on the order of one segment per round trip.

Packet loss is the primary signal that reduces cwnd. When the sender detects a lost packet (either through a timeout or duplicate acknowledgments), it cuts cwnd significantly, sometimes by half, sometimes more depending on the specific congestion control algorithm in use. The sender then builds cwnd back up over subsequent round trips. This constant probing and backing off is how TCP shares network capacity fairly among competing connections.
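The grow-and-back-off cycle can be illustrated with a toy Reno-style simulation, counted in segments per round trip (idealized; real algorithms such as CUBIC shape the curve differently):

```python
# Toy Reno-style cwnd evolution. Slow start doubles cwnd each RTT
# (capped at ssthresh); congestion avoidance adds one segment per
# RTT; on loss, ssthresh drops to cwnd/2 and cwnd restarts there
# (a simplified fast recovery). All values are illustrative.

def simulate(rtts, loss_at, iw=10, ssthresh=64):
    cwnd = iw
    history = []
    for rtt in range(rtts):
        history.append(cwnd)
        if rtt in loss_at:                    # loss detected this RTT
            ssthresh = max(cwnd // 2, 2)      # multiplicative decrease
            cwnd = ssthresh                   # restart at the new threshold
        elif cwnd < ssthresh:                 # slow start: exponential growth
            cwnd = min(cwnd * 2, ssthresh)
        else:                                 # congestion avoidance: linear
            cwnd += 1
    return history

print(simulate(10, loss_at={5}))
# → [10, 20, 40, 64, 65, 66, 33, 34, 35, 36]
```

The printed trace shows the classic sawtooth: exponential ramp-up, a linear climb, a halving on loss, then a slow rebuild.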

The 64 KB Limit and Window Scaling

The window size field in the TCP header is only 16 bits wide, which caps the maximum advertised window at 65,535 bytes (roughly 64 KB). On early internet links this was more than enough, but on modern high-speed or long-distance connections it creates a serious bottleneck.

To solve this, TCP window scaling was introduced. During the initial handshake, both sides can negotiate a scale factor between 0 and 14. The receiver then right-shifts its true window value by that factor before placing it in the header, and the sender left-shifts it back when reading. This expands the effective window field to 30 bits, allowing advertised windows up to 1 GiB. Window scaling is enabled by default on virtually all modern operating systems.
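The shift arithmetic described above can be sketched directly (a toy illustration of the encoding, not a protocol implementation):

```python
# Toy illustration of TCP window scaling: the receiver advertises
# (true_window >> shift) in the 16-bit header field; the sender
# recovers the window with (field << shift).

def encode(true_window, shift):
    """Receiver side: fit the window into the 16-bit field."""
    field = true_window >> shift
    assert field <= 0xFFFF, "window too large for this scale factor"
    return field

def decode(field, shift):
    """Sender side: recover the (granularity-rounded) window."""
    return field << shift

# A 4 MiB window needs a scale factor of at least 7,
# since 4 MiB >> 6 would still exceed 65,535.
window = 4 * 1024 * 1024
field = encode(window, 7)
print(field, decode(field, 7))   # 32768 4194304

# Maximum reachable window with the largest scale factor (14):
print(0xFFFF << 14)              # 1073725440 — just under 1 GiB
```

The trade-off is granularity: with a scale factor of 7, the receiver can only advertise windows in 128-byte steps.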

Bandwidth-Delay Product: The Ideal Window Size

If you want to fully utilize a network link, the window size needs to be at least as large as the bandwidth-delay product (BDP). The formula is straightforward:

BDP = bandwidth × round-trip time

For example, a 10 Gbps link with a 50 ms round-trip time has a BDP of 62.5 MB. That means 62.5 MB of data needs to be in flight at all times to keep the pipe full. If your window is smaller than this, the sender will finish transmitting and sit idle waiting for acknowledgments, leaving bandwidth on the table.

On a shorter path, say the same 10 Gbps link with a 20 ms round-trip time, the BDP drops to 25 MB. Linux best practices suggest setting the TCP buffer to at least twice the BDP to give the kernel room for auto-tuning overhead. Without window scaling enabled, the 64 KB header limit would make it impossible to reach these buffer sizes, which is why scaling is essential on any high-performance link.
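Both worked examples fall out of the formula directly; a quick sketch of the arithmetic (the 2× multiplier is the tuning guideline mentioned above, not a hard rule):

```python
def bdp_bytes(bandwidth_bps, rtt_seconds):
    """Bandwidth-delay product: bits in flight, divided by 8 for bytes."""
    return bandwidth_bps * rtt_seconds / 8

# 10 Gbps link, 50 ms RTT → 62.5 MB must be in flight to fill the pipe.
print(bdp_bytes(10e9, 0.050) / 1e6)   # 62.5 (MB)

# Same link, 20 ms RTT → 25 MB.
bdp = bdp_bytes(10e9, 0.020)
print(bdp / 1e6)                      # 25.0 (MB)

# Common Linux guidance: buffer at least 2x the BDP
# to leave headroom for auto-tuning.
print(2 * bdp / 1e6)                  # 50.0 (MB)
```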

Putting It All Together

Several factors feed into the final effective window size, but they layer on top of each other in a clear hierarchy:

  • The TCP header field sets the hard upper bound: 64 KB without scaling, up to 1 GiB with scaling enabled.
  • The OS receive buffer determines the maximum rwnd the receiver can advertise. Auto-tuning adjusts this in real time on most systems.
  • The application’s read speed affects how quickly buffer space frees up, which directly controls the advertised rwnd.
  • Congestion control algorithms on the sender side set cwnd based on observed network conditions like packet loss and round-trip time.
  • The effective window at any given moment is whichever is smaller, rwnd or cwnd.
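The layered limits in the list above reduce to a few lines of arithmetic (all values here are illustrative, in bytes):

```python
# The hierarchy of limits as one computation. Illustrative values only.

def effective_window(recv_buffer_free, cwnd, scale_shift):
    # 1. Header hard cap: the 16-bit field, left-shifted by the
    #    negotiated scale factor (0 when scaling is disabled).
    header_cap = 0xFFFF << scale_shift
    # 2. rwnd can never exceed free receive-buffer space or the cap.
    rwnd = min(recv_buffer_free, header_cap)
    # 3. The effective window is the tighter of rwnd and cwnd.
    return min(rwnd, cwnd)

# Congested path: cwnd is the bottleneck despite a roomy receiver.
print(effective_window(8_000_000, 120_000, 7))    # 120000

# Slow application: a nearly full buffer pins the window down.
print(effective_window(30_000, 4_000_000, 7))     # 30000

# No window scaling: the 64 KB header cap dominates everything.
print(effective_window(8_000_000, 4_000_000, 0))  # 65535
```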

So when someone asks “which factor determines TCP window size,” the precise answer is that no single factor does. The receiver’s available buffer and the sender’s estimate of network capacity both impose limits, and the connection operates at the tighter of the two. On a fast, uncongested network with a well-configured receiver, the window can grow large enough to saturate the link. On a lossy network or one where the receiving application is slow to process data, the window stays small regardless of the other side’s capacity.