Interlaced scanning is a video display technique that draws each frame of video in two passes instead of one. Rather than displaying every line of an image from top to bottom in a single sweep, it splits the frame into two “fields”: one containing all the odd-numbered lines and one containing all the even-numbered lines. These fields alternate rapidly, and your eyes blend them together into what looks like a complete, smooth picture.
This technique was the foundation of broadcast television for decades, and understanding how it works helps explain everything from old TV standards to why you sometimes see jagged edges in vintage footage.
How Fields Build a Frame
A standard interlaced video frame is made of two fields displayed one after the other. The first field draws lines 1, 3, 5, 7, and so on down the screen. The second field fills in lines 2, 4, 6, 8, and the rest. Each field contains exactly half the image’s total lines, and together they form one complete frame.
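To make the structure concrete, here is a minimal sketch in Python with NumPy (the frame dimensions are just example values) that slices a frame into its two fields:

```python
import numpy as np

# A hypothetical 480-line, 640-pixel-wide grayscale frame (values 0-255).
frame = np.random.randint(0, 256, size=(480, 640), dtype=np.uint8)

# Field 1 holds lines 1, 3, 5, ... (zero-based rows 0, 2, 4, ...).
# Field 2 holds lines 2, 4, 6, ... (zero-based rows 1, 3, 5, ...).
field_1 = frame[0::2, :]
field_2 = frame[1::2, :]

print(field_1.shape)  # (240, 640): exactly half the frame's lines
print(field_2.shape)  # (240, 640): the other half
```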
The key detail is that these two fields are captured at slightly different moments in time. In the American NTSC system, 60 fields are displayed every second, which produces 30 complete frames per second. The European PAL system runs at 50 fields per second, yielding 25 frames per second. This field-based delivery is what the “i” stands for in format labels like “480i” and “1080i”: interlaced.
Your brain doesn’t notice the alternation because of two things working together. Old CRT screens used phosphors that continued glowing briefly after being hit by the electron beam, so each field persisted on screen just long enough for the next field to arrive. Meanwhile, the natural persistence of human vision smooths out rapid flickering. The result: you perceive a single, continuous image rather than two alternating half-pictures.
Why Interlacing Was Invented
When television was being developed in the early 20th century, engineers faced a hard tradeoff between three things: picture sharpness, smooth motion, and the amount of signal bandwidth available. Transmitting a full, high-resolution image 60 times per second would have required twice the bandwidth that broadcasters had to work with. But dropping to 30 full frames per second introduced visible flicker, where the screen appeared to pulse or shimmer.
Interlacing was the clever workaround. By sending half the lines 60 times per second instead of all the lines 30 times per second, engineers doubled the perceived refresh rate without increasing bandwidth. The picture updated often enough to eliminate flicker for most content, while the actual data transmitted stayed within the limits of available broadcast channels. It was, in essence, a trick that made viewers see a better picture than the signal technically contained.
Interlaced vs. Progressive Scanning
Progressive scanning is the alternative approach, and it’s what most modern displays use. In progressive scan, every line of the frame is drawn sequentially from top to bottom in a single pass. There’s no splitting into fields. Formats like 1080p and 2160p (4K UHD) are progressive, with the “p” indicating the method.
The practical differences matter most during fast motion. Because the two fields of an interlaced frame are captured at slightly different times, objects that move quickly between those moments end up in slightly different positions on the odd and even lines. This creates a visual glitch called “combing,” where the edges of moving objects look like a fine-toothed comb or appear serrated. Progressive scan avoids this entirely because every line in the frame represents the same instant in time.
Progressive video also delivers full resolution in every frame. An interlaced frame technically has the same number of lines, but because each field only contains half of them, the effective resolution at any given moment is lower. This is why 1080i and 1080p, despite sharing the same line count, don’t look identical. The 1080p image is sharper and more detailed, especially during action scenes or camera pans.
The tradeoff is bandwidth. To match the 60-updates-per-second smoothness of interlaced video, progressive scan needs roughly twice the data, because each of those updates carries the full set of lines rather than half of them. For early broadcast systems with strict bandwidth limits, that cost was prohibitive. For modern digital streaming and disc formats, it’s manageable.
Common Visual Artifacts
Combing is the most recognizable interlaced artifact. Pause any interlaced video during a moment of fast motion, and you’ll likely see horizontal lines cutting through moving objects, making edges look jagged or split. This happens because the object was in one position when the odd field was captured and a slightly different position a fraction of a second later when the even field was captured.
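As a toy illustration (Python with NumPy; the block size and motion amount are invented for the example), the sketch below captures two fields of a bright block that shifts between field captures, weaves them into one frame, and prints the result. The alternating rows show the block’s edge in two different positions, which is exactly the comb pattern:

```python
import numpy as np

height, width = 8, 16

def capture(x_offset):
    """Capture a full image of a white block whose left edge sits at x_offset."""
    img = np.zeros((height, width), dtype=np.uint8)
    img[:, x_offset:x_offset + 6] = 255
    return img

# The block moves 4 pixels to the right between the two field captures.
field_a = capture(2)[0::2, :]   # odd-numbered lines, captured first
field_b = capture(6)[1::2, :]   # even-numbered lines, captured a moment later

# Weave the fields into a single frame.
frame = np.zeros((height, width), dtype=np.uint8)
frame[0::2, :] = field_a
frame[1::2, :] = field_b

# Alternating rows show the block's edge in two different places: combing.
for row in frame:
    print("".join("#" if v else "." for v in row))
```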
Interline flicker is another issue. Fine horizontal details, like thin text or narrow stripes in clothing, can appear to shimmer or buzz because those details only exist in one field and disappear in the alternate field. This was a well-known problem in broadcast television. News anchors were often advised to avoid wearing finely striped shirts precisely because of this effect.
How Modern Displays Handle Interlaced Video
Today’s flat-panel TVs and monitors are inherently progressive. Every refresh presents a complete frame, and each pixel holds its value until the next update, rather than being traced line by line by a fading beam the way a CRT image was. So when an interlaced signal arrives, the display has to convert it to progressive through a process called deinterlacing.
Deinterlacing algorithms vary in complexity. The simplest approach, called field insertion or “weaving,” just interleaves the two fields back together into one frame. This works well for still or slow-moving content but produces combing artifacts with fast motion, since it doesn’t account for the time difference between fields.
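Expressed in code, a bare-bones weave is just an interleave of the two fields. This sketch (Python/NumPy, same field layout as above) assumes field 1 holds the odd-numbered lines:

```python
import numpy as np

def weave(field_1: np.ndarray, field_2: np.ndarray) -> np.ndarray:
    """Interleave two half-height fields into one full-height frame."""
    height = field_1.shape[0] + field_2.shape[0]
    frame = np.empty((height, field_1.shape[1]), dtype=field_1.dtype)
    frame[0::2, :] = field_1   # odd-numbered lines (1, 3, 5, ...)
    frame[1::2, :] = field_2   # even-numbered lines (2, 4, 6, ...)
    return frame
```

For static content this reproduces the original frame exactly, which is why weaving preserves full detail when nothing is moving.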
Line averaging takes a different approach by using only one field and calculating the missing lines from their neighbors. This eliminates combing but reduces vertical resolution, making the image softer. More advanced methods use vertical-temporal filtering, which pulls data from both fields and applies a weighted blend to estimate missing information.
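A rough sketch of line averaging (Python/NumPy; the handling of the final line is one simple choice among several) keeps one field and fills each missing line with the mean of its neighbors:

```python
import numpy as np

def line_average(field_1: np.ndarray) -> np.ndarray:
    """Build a full frame from a single field by averaging neighboring lines."""
    h, w = field_1.shape
    frame = np.zeros((h * 2, w), dtype=field_1.dtype)
    frame[0::2, :] = field_1
    # Each missing line sits between two field lines; estimate it as their mean.
    above = field_1.astype(np.float32)
    below = np.roll(field_1, -1, axis=0).astype(np.float32)
    below[-1] = field_1[-1]          # the last missing line has no line below; repeat
    frame[1::2, :] = ((above + below) / 2).astype(field_1.dtype)
    return frame
```

The result has no combing, but every other line is an estimate rather than real picture data, which is where the softness comes from.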
The most sophisticated modern deinterlacers are motion-adaptive. They analyze each area of the frame independently, using weaving in static regions (where it preserves full detail) and interpolation in moving regions (where it prevents combing). This gives the best of both worlds and is the standard approach built into most TVs, Blu-ray players, and video editing software today.
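Building on the two sketches above, a toy motion-adaptive pass might look like the following (Python/NumPy; the threshold value and the same-parity field comparison are simplifying assumptions for illustration, not how any particular product works):

```python
import numpy as np

def motion_adaptive(field_1, field_2, prev_field_1, threshold=10):
    """Weave static areas, interpolate moving ones (toy per-pixel decision)."""
    # Relies on weave() and line_average() from the sketches above.
    woven = weave(field_1, field_2)          # full detail, but combs on motion
    interpolated = line_average(field_1)     # comb-free, but softer

    # Estimate motion by comparing this field with the previous same-parity field.
    diff = np.abs(field_1.astype(np.int16) - prev_field_1.astype(np.int16))
    moving = diff > threshold                # True where the scene changed

    # Expand the per-field motion map to full frame height, then pick per pixel.
    moving_full = np.repeat(moving, 2, axis=0)
    return np.where(moving_full, interpolated, woven)
```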
Where Interlaced Video Still Exists
Despite being largely superseded by progressive formats, interlaced scanning hasn’t disappeared entirely. Many broadcast TV stations still transmit in 1080i, particularly for live sports and news. Some security camera systems use interlaced formats. And enormous archives of television content, from decades of news footage to classic TV shows, exist in interlaced form.
If you’re working with older video or capturing from a source that outputs interlaced signals, the format label tells you what you’re dealing with. A “1080i” signal has 1,080 lines split into two 540-line fields alternating 60 times per second (or 50 in PAL regions). A “480i” signal, the standard for DVD and older standard-definition broadcasts, has 480 visible lines at 60 fields per second. Converting this content to progressive for editing or viewing on modern screens requires deinterlacing, and choosing the right method depends on whether the content has significant motion.
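If it helps to see the arithmetic, here is a tiny hypothetical helper (Python; the default of 60 fields per second is the NTSC-region case, with 50 for PAL regions) that turns a label like “1080i” into the numbers above:

```python
def describe(label: str, rate_hz: int = 60) -> str:
    """Summarize a format label like '1080i' or '1080p' (hypothetical helper)."""
    lines, scan = int(label[:-1]), label[-1]
    if scan == "i":
        return (f"{lines} lines as two {lines // 2}-line fields, "
                f"{rate_hz} fields/s = {rate_hz // 2} frames/s")
    return f"{lines} lines per frame, {rate_hz} frames/s"

print(describe("1080i"))   # 1080 lines as two 540-line fields, 60 fields/s = 30 frames/s
print(describe("480i"))    # 480 lines as two 240-line fields, 60 fields/s = 30 frames/s
print(describe("1080p"))   # 1080 lines per frame, 60 frames/s
```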

