What Is Latency Variation? Jitter, Causes & Thresholds

Latency variation is the inconsistency in how long data packets take to travel across a network. Rather than measuring total delay, it captures how much that delay fluctuates from one packet to the next. In networking, this concept is almost always called “jitter,” and the two terms are used interchangeably. For real-time applications like video calls, online gaming, and voice-over-IP, latency variation matters just as much as raw speed.

How It Differs From Latency

Latency is the total time a data packet takes to travel from its source to its destination. If every packet took exactly 40 milliseconds, you’d have high latency but zero latency variation. The experience would be consistent, just slightly delayed. Latency variation describes what happens when one packet arrives in 20 milliseconds, the next in 80, and the one after that in 35. That unevenness is what disrupts the smooth flow of data.
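The distinction can be made concrete with a few lines of Python: a minimal sketch that separates average delay from its variation, using the millisecond figures from the example above (the function name is illustrative).

```python
from statistics import mean, pstdev

def summarize_delays(delays_ms):
    """Separate average latency from its variation.

    delays_ms: per-packet one-way delays in milliseconds.
    Returns (average latency, standard deviation of latency).
    """
    return mean(delays_ms), pstdev(delays_ms)

# A steady 40 ms path: high latency, zero variation.
steady = [40, 40, 40, 40]
avg_s, var_s = summarize_delays(steady)   # 40 ms average, 0.0 variation

# The uneven path from the example: 20, 80, then 35 ms.
uneven = [20, 80, 35]
avg_u, var_u = summarize_delays(uneven)   # 45 ms average, large variation
```

Standard deviation is just one way to quantify the spread; the formal jitter definition used by real-time protocols is covered later in the measurement section.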

Think of it like a commute. Latency is the average drive time. Latency variation is whether that drive takes 20 minutes one day and 45 the next. A predictably long commute is manageable. An unpredictable one makes planning impossible. Networks face the same problem: applications can adapt to a steady delay, but wild swings in timing cause audio glitches, video freezes, and laggy gameplay.

What Causes It

Several factors contribute to uneven packet delivery, and they often compound each other.

Network congestion is the most common cause. When too many devices send data through the same link, packets queue up at routers and switches. The time a packet spends waiting in that queue varies depending on how much traffic is ahead of it. During a quiet moment, a packet passes through almost instantly. During a burst of activity, it might wait significantly longer. That difference in queuing time is a direct source of latency variation.
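The queuing effect is easy to quantify: the wait for a packet is roughly the backlog ahead of it divided by the link rate. A minimal sketch (link speed and backlog figures are illustrative):

```python
def wait_ms(backlog_bytes, link_bps=100_000_000):
    """Queuing delay: time for the backlog ahead of a packet
    to drain on a 100 Mbit/s link."""
    return backlog_bytes * 8 / link_bps * 1000

# Bytes already queued at the router when each of five packets arrives.
backlogs = [1_500, 150_000, 0, 750_000, 15_000]
waits = [wait_ms(b) for b in backlogs]   # ≈ 0.12, 12, 0, 60, 1.2 ms
spread = max(waits) - min(waits)         # ≈ 60 ms of variation from queuing alone
```

The same link, with no packet loss at all, delivers some packets almost instantly and holds others for tens of milliseconds purely because of what happened to be queued ahead of them.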

Dynamic routing adds another layer of inconsistency. In large networks, especially the internet, packets from the same data stream don’t always follow the same path. Routing algorithms constantly recalculate the best route based on current conditions. When one packet takes three hops to reach its destination and the next takes seven, their arrival times will differ even if no individual link is congested. Multi-hop networks like mesh topologies are particularly prone to this because they offer many possible paths with different lengths.

Oversized network buffers create a subtler problem known as bufferbloat. Modern routers and switches come with large memory buffers designed to prevent packet loss. When traffic spikes, these buffers absorb the overflow instead of dropping packets. That sounds helpful, but it backfires: packets pile up in enormous queues that take a long time to drain. TCP, the protocol that governs most internet traffic, relies on occasional packet drops as a signal to slow down. When buffers swallow everything, that signal never arrives, congestion worsens, and latency becomes both higher and more erratic. The result is long, unpredictable delays that the end user experiences as sluggish, inconsistent performance. Because buffer sizes can’t be configured on most consumer routers, this problem is widespread and often invisible to the people affected by it.

Wireless interference and bandwidth limitations also play a role. Wi-Fi signals compete with neighboring networks, microwaves, and other devices on the same frequency. Each burst of interference can delay a packet by a few milliseconds, and those small, random delays add up to noticeable variation. On wired connections, the time it takes to push a packet’s bits onto a physical link (transmission delay) depends on the link’s bandwidth. Lower-bandwidth links take longer per packet, and when traffic fluctuates, the variation in transmission time becomes more pronounced.
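Transmission delay follows directly from packet size and link bandwidth, which shows why slower links amplify variation. A quick sketch (the link speeds are illustrative):

```python
def transmission_delay_ms(packet_bytes, bandwidth_bps):
    """Time to push one packet's bits onto the physical link."""
    return packet_bytes * 8 / bandwidth_bps * 1000

# The same 1500-byte packet on two links:
fast = transmission_delay_ms(1500, 1_000_000_000)  # gigabit: ≈ 0.012 ms
slow = transmission_delay_ms(1500, 10_000_000)     # 10 Mbit/s: ≈ 1.2 ms
```

On the gigabit link, per-packet transmission time is negligible, so fluctuating packet sizes barely register. On the 10 Mbit/s link, each packet occupies the wire a hundred times longer, so the same traffic fluctuations produce a hundred times more timing variation.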

How It’s Measured

Latency variation is measured in milliseconds, and the standard approach compares packet spacing at the receiver against packet spacing at the sender. If a sender releases two packets 20 milliseconds apart but the receiver gets them 35 milliseconds apart, that 15-millisecond difference represents the variation for that pair of packets.

The formal calculation, defined in RFC 3550, the internet standard for the Real-time Transport Protocol (RTP), works by tracking the “relative transit time” for each packet. For any two consecutive packets, the system calculates how much their transit times differ. It then feeds that difference into a running average that smooths out momentary spikes while still tracking the overall trend. This smoothed value is what gets reported as jitter in tools like network monitors and VoIP quality dashboards.
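The RFC 3550 estimator fits in a few lines. This is a simplified sketch using millisecond timestamps rather than RTP’s native timestamp units: for each consecutive pair, the transit-time difference D feeds a running average with a 1/16 gain, which is the smoothing the standard specifies.

```python
def rfc3550_jitter(send_times_ms, recv_times_ms):
    """Running interarrival-jitter estimate per RFC 3550.

    For consecutive packets i-1 and i:
        D = (R_i - R_{i-1}) - (S_i - S_{i-1})
        J = J + (|D| - J) / 16
    """
    jitter = 0.0
    for i in range(1, len(send_times_ms)):
        d = (recv_times_ms[i] - recv_times_ms[i - 1]) - \
            (send_times_ms[i] - send_times_ms[i - 1])
        jitter += (abs(d) - jitter) / 16
    return jitter

# Packets sent 20 ms apart but received 35 ms apart (the example above):
j = rfc3550_jitter([0, 20], [40, 75])  # first estimate: 15/16 ≈ 0.94 ms
```

The 1/16 gain means a single spike nudges the estimate only slightly; sustained variation is needed to move the reported jitter number, which is exactly the smoothing behavior described above.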

Most consumer-facing tools simplify this into a single number. When your internet speed test reports “jitter: 8 ms,” it’s telling you the average inconsistency in packet delivery during the test window.

Thresholds That Matter

For voice and video calls, latency variation below 30 milliseconds is generally considered acceptable. Above that, you’ll start hearing choppy audio, robotic-sounding voices, or gaps in conversation. The companion threshold for raw one-way latency is 150 milliseconds. Staying under both numbers keeps calls sounding natural.

Online gaming is less forgiving. Competitive players typically want jitter under 15 milliseconds. In fast-paced games where split-second timing matters, even small inconsistencies in packet delivery can mean the difference between a registered hit and a missed one. The game might show your character in one position while the server has already moved you somewhere else.

Video streaming is more tolerant because it uses buffering to smooth out delivery. Your player downloads chunks of video ahead of time, so a few packets arriving late won’t interrupt playback. But live streaming and low-latency streaming modes shrink that buffer, making them more sensitive to variation.
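The thresholds above can be collapsed into a small helper. The bands here are the rules of thumb from this section, not formal standards, and the function name is illustrative:

```python
def rate_jitter(jitter_ms):
    """Rough quality bands based on the thresholds discussed above."""
    if jitter_ms < 15:
        return "ok for competitive gaming, voice, and video"
    if jitter_ms < 30:
        return "ok for voice and video calls"
    return "likely glitches in real-time applications"

# A speed-test result of "jitter: 8 ms" would land in the best band.
verdict = rate_jitter(8)
```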

How Networks Compensate

The most common defense is a jitter buffer, a small holding area on the receiving end that collects incoming packets and releases them at a steady pace. Instead of playing audio the instant each packet arrives, a VoIP phone might wait 20 or 30 milliseconds to let a few packets accumulate, then play them in order at consistent intervals. This trades a tiny amount of added delay for a much smoother experience. Most conferencing and calling apps adjust their jitter buffer automatically based on current network conditions.
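The scheduling logic of a jitter buffer can be sketched simply: hold the first packet for the buffer duration, then play packets at fixed intervals, never before they actually arrive. This is a minimal model (a real buffer would typically drop or conceal very late packets rather than stretch playout):

```python
def playout_times(arrival_ms, buffer_ms=30, interval_ms=20):
    """Schedule steady playout despite uneven arrivals.

    Packet i is scheduled at first_arrival + buffer_ms + i * interval_ms,
    but never before it has actually arrived.
    """
    base = arrival_ms[0] + buffer_ms
    return [max(t, base + i * interval_ms) for i, t in enumerate(arrival_ms)]

# Packets sent every 20 ms but arriving unevenly (even out of order):
arrivals = [100, 135, 128, 185]
playout = playout_times(arrivals)   # [130, 150, 170, 190]: steady 20 ms spacing
```

The 30 ms of added delay is the price paid up front; in exchange, arrival swings of up to that amount disappear entirely from what the listener hears.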

Quality of Service (QoS) settings on routers let you prioritize certain types of traffic. By marking voice and video packets as high priority, the router processes them before less time-sensitive traffic like file downloads or software updates. This reduces queuing time for the packets that are most affected by variation.
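Conceptually, QoS turns the router’s queue into a priority queue: real-time classes drain first, with arrival order breaking ties. A minimal sketch (the class names and priority values are illustrative, not a real router configuration):

```python
import heapq

# Lower number = higher priority; voice and video marked ahead of bulk traffic.
PRIORITY = {"voice": 0, "video": 1, "bulk": 2}

def drain_in_priority_order(packets):
    """Process queued packets highest-priority first; FIFO within a class."""
    heap = [(PRIORITY[kind], i, kind) for i, kind in enumerate(packets)]
    heapq.heapify(heap)
    return [heapq.heappop(heap)[2] for _ in range(len(heap))]

queued = ["bulk", "voice", "bulk", "video", "voice"]
order = drain_in_priority_order(queued)
# → ["voice", "voice", "video", "bulk", "bulk"]
```

The voice packets skip past the bulk transfers entirely, so their queuing delay, and therefore their delay variation, no longer depends on how much background traffic happens to be queued.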

On the infrastructure side, newer queue management algorithms address bufferbloat by distinguishing between temporary traffic bursts (which genuinely need buffering) and sustained congestion (which needs packet drops to trigger TCP’s slowdown mechanism). These algorithms monitor how long packets sit in a queue: if they linger, congestion is building, and the router starts dropping packets early to keep queues short and latency consistent.
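The core decision rule can be sketched in a few lines. This is loosely modeled on the CoDel family of algorithms but heavily simplified; the parameter names and values are illustrative, not the algorithm’s actual constants or state machine.

```python
def should_drop(sojourn_ms_history, target_ms=5, interval_ms=100, tick_ms=10):
    """Drop only when queue delay has stayed above target for a full interval,
    i.e. sustained congestion rather than a momentary burst."""
    ticks_needed = interval_ms // tick_ms
    recent = sojourn_ms_history[-ticks_needed:]
    return len(recent) == ticks_needed and all(s > target_ms for s in recent)

# Queue sojourn times sampled every 10 ms:
burst     = [2, 3, 30, 2, 2, 2, 2, 2, 2, 2]            # brief spike: absorbed
sustained = [30, 25, 40, 35, 30, 28, 33, 31, 29, 34]   # congestion: drop early
```

The burst is buffered without penalty, while the sustained queue triggers a drop that signals TCP to slow down before the backlog, and the jitter it causes, can grow further.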

For individual users, switching from Wi-Fi to a wired Ethernet connection eliminates wireless interference as a variable. Closing bandwidth-heavy applications during calls or gaming sessions reduces competition for your connection. And if your router supports QoS configuration, enabling it for real-time traffic can meaningfully reduce the variation you experience.