Compressing a video reduces its file size by removing redundant or less noticeable visual information. A raw, uncompressed two-hour film at standard HD resolution can easily exceed 500 GB. After compression, that same film fits into a few gigabytes, or even less, depending on the settings you choose. The process works by exploiting patterns in the footage and limitations in human vision to strip away data you’d never miss.
How File Size Relates to Bitrate
A video’s file size is fundamentally determined by its bitrate: the amount of data used per second of footage. If you record at 24 megabits per second for two hours, you get roughly 21.6 gigabytes. Compression lowers that bitrate. Drop it to 8 megabits per second, the range YouTube recommends for standard 1080p video, and the same two hours shrinks to about 7.2 GB. The math is straightforward: bitrate multiplied by duration gives the total bits, and dividing by eight converts bits to bytes.
The frame size matters too. A 4K video has four times the pixels of 1080p, so it needs a higher bitrate to maintain the same quality. YouTube recommends 35 to 45 megabits per second for 4K at standard frame rates, compared to 8 Mbps for 1080p. When you compress a video, you’re deciding how many bits each second of footage gets to keep.
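That arithmetic can be sketched in a few lines of Python. The bitrates are the YouTube recommendations quoted above; the function name is purely illustrative:

```python
def file_size_gb(bitrate_mbps: float, duration_s: float) -> float:
    """Bitrate (megabits/s) x duration (s) = total megabits;
    divide by 8 for megabytes, then 1000 for gigabytes."""
    return bitrate_mbps * duration_s / 8 / 1000

two_hours = 2 * 60 * 60  # 7200 seconds

print(file_size_gb(24, two_hours))  # 24 Mbps recording: 21.6 GB
print(file_size_gb(8, two_hours))   # YouTube's 1080p recommendation: 7.2 GB
print(file_size_gb(40, two_hours))  # mid-range of YouTube's 4K recommendation: 36.0 GB
```

Note this uses decimal gigabytes (1 GB = 1000 MB), the convention bitrate discussions usually follow.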
What Actually Gets Removed
Most video compression is lossy, meaning it permanently discards some data. But the data it throws away is carefully chosen to be invisible, or nearly invisible, to the human eye. Your visual system has quirks that compression engineers exploit ruthlessly.
One major technique is color compression, known as chroma subsampling. Human eyes are far more sensitive to brightness than to color detail. You can distinguish fine differences in light and dark, but your ability to perceive fine color variation is much coarser. Compression takes advantage of this by storing full brightness information for every pixel but recording color at a fraction of the resolution. The most common scheme, called 4:2:0, is used by nearly all streaming services, including Netflix. It stores color at a quarter of the brightness resolution, cutting the raw frame data roughly in half, while producing differences that are practically invisible, especially at higher resolutions like 4K.
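As a rough illustration of what subsampling saves, here is a sketch of the raw per-frame byte counts for 8-bit YCbCr video. The function and its lookup table are for illustration only, not any codec’s actual API:

```python
def raw_frame_bytes(width: int, height: int, subsampling: str) -> int:
    """Bytes for one uncompressed 8-bit YCbCr frame.

    Luma (brightness) is always stored at one byte per pixel; the two
    chroma (color) planes are stored at reduced resolution depending
    on the subsampling scheme.
    """
    luma = width * height
    chroma_fraction = {"4:4:4": 1.0, "4:2:2": 0.5, "4:2:0": 0.25}[subsampling]
    chroma = 2 * width * height * chroma_fraction  # two color planes
    return int(luma + chroma)

full = raw_frame_bytes(1920, 1080, "4:4:4")  # 6,220,800 bytes per frame
sub = raw_frame_bytes(1920, 1080, "4:2:0")   # 3,110,400 bytes per frame
print(sub / full)  # 0.5 -- half the raw data before any other compression runs
```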
Beyond color, compression also discards fine visual detail in parts of the image where you’re unlikely to notice. A busy, fast-moving background doesn’t need the same level of precision as a face in the center of the frame. The encoder allocates more data to the areas that matter visually and less to those that don’t.
How Compression Handles Motion Between Frames
The biggest savings come from something unique to video: the fact that most frames look almost identical to the frame before them. If a camera is filming a person talking, the background doesn’t change at all, and only the person’s mouth and gestures shift slightly from frame to frame. Encoding every frame from scratch would be enormously wasteful.
Instead, compressors use a technique called motion compensation. The encoder divides each frame into small blocks of pixels, then searches for where those blocks appeared in a previous (or future) reference frame. Rather than storing the entire block again, it stores a motion vector: a small instruction that says “this block moved three pixels to the right.” Then it only needs to record the tiny difference between the predicted block and the actual one. Those difference values tend to be very small numbers, which compress efficiently.
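A toy version of that block search, using a one-dimensional row of pixels for simplicity. All names here are illustrative; real encoders search two-dimensionally over many block sizes and reference frames:

```python
def sad(a, b):
    """Sum of absolute differences between two equal-length pixel blocks."""
    return sum(abs(x - y) for x, y in zip(a, b))

def encode_block(current, reference, pos, width, search=4):
    """Return (motion_vector, residual) for the block of `current` at `pos`.

    The encoder tries nearby offsets in the reference frame and keeps the
    one with the lowest SAD; only that offset plus the (usually tiny)
    per-pixel differences need to be stored.
    """
    block = current[pos:pos + width]
    best_offset, best_cost = 0, float("inf")
    for offset in range(-search, search + 1):
        start = pos + offset
        if start < 0 or start + width > len(reference):
            continue
        cost = sad(block, reference[start:start + width])
        if cost < best_cost:
            best_offset, best_cost = offset, cost
    predicted = reference[pos + best_offset:pos + best_offset + width]
    residual = [b - p for b, p in zip(block, predicted)]
    return best_offset, residual

# A row of pixels whose content shifted three positions to the right:
reference = [10, 20, 30, 40, 50, 60, 70, 80, 90, 100]
current = [0, 0, 0] + reference[:-3]

mv, residual = encode_block(current, reference, pos=4, width=4)
print(mv, residual)  # -3 [0, 0, 0, 0]: one small vector, zero residual to store
```

The vector points back to where the block came from in the reference frame, and because the match is exact, the residual compresses to almost nothing.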
This is why video with lots of motion, fast cuts, or complex textures (confetti, rain, grass blowing in wind) produces larger files than a static talking-head shot. When frames change dramatically, the encoder can’t predict as much and has to store more actual data.
Lossy vs. Lossless Compression
Lossy compression, described above, is what you encounter in everyday video. It permanently removes information to achieve dramatic size reductions. Well-designed lossy compression can shrink a file enormously before you notice any degradation.
Lossless compression also exists, but it’s far less common for video. It reduces file sizes by finding mathematical patterns in the data without discarding anything. The original can be perfectly reconstructed. The tradeoff is that lossless compression only achieves modest size reductions, typically around 2:1 or 3:1, compared to the 50:1 or greater ratios that lossy methods can reach. Lossless formats are mainly used in professional video editing workflows where editors need to preserve every detail for further manipulation.
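You can see why lossless ratios depend on patterns rather than perception using Python’s general-purpose zlib compressor. It is only a stand-in here; dedicated lossless video codecs such as FFV1 rely on related entropy-coding ideas:

```python
import random
import zlib

# Highly repetitive "footage" -- a completely flat frame:
flat = bytes([128]) * 100_000

# Noise-like data with no pattern for the compressor to exploit:
random.seed(0)
noise = bytes(random.randrange(256) for _ in range(100_000))

for name, data in [("flat", flat), ("noise", noise)]:
    packed = zlib.compress(data, level=9)
    print(f"{name}: {len(data) / len(packed):.1f}:1")
# The flat frame compresses enormously; the noise barely compresses at all.
# Real footage sits in between, which is why lossless ratios stay modest.
```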
What Happens When You Compress Too Much
Push the bitrate too low and compression artifacts become visible. The most common ones include blockiness (where you can see the square blocks the encoder uses), banding (where smooth gradients break into visible steps of color), and smearing or blurriness in areas with fine detail. Dark scenes are especially vulnerable because subtle shadow detail gets flattened out at low bitrates.
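Banding is easy to reproduce numerically: quantizing a smooth ramp of tones to too few levels collapses it into discrete steps. This is a toy model of what coarse quantization does to a gradient, not any codec’s actual math:

```python
def quantize(pixels, step):
    """Snap each value to the nearest multiple of `step` -- roughly what
    heavy compression does to subtle tonal variation."""
    return [round(p / step) * step for p in pixels]

gradient = list(range(64))        # a smooth ramp of 64 shadow tones
banded = quantize(gradient, 16)   # aggressive quantization
print(sorted(set(banded)))        # [0, 16, 32, 48, 64]: 64 tones reduced to 5 bands
```

On screen, those five surviving levels appear as flat stripes where the gradient used to blend smoothly, which is exactly why dark, low-contrast scenes suffer first.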
Every time you re-compress a video that’s already been compressed, quality degrades further. The encoder treats the already-degraded version as its source material and introduces a new round of data loss on top of the first. This is called generation loss, and it’s why downloading a video from social media and re-uploading it produces noticeably worse quality each cycle.
How Modern Codecs Improve Efficiency
A codec is the algorithm that performs the compression. Newer codecs achieve better visual quality at the same file size, or the same quality at a smaller file size. The two most significant modern codecs are H.265 (also called HEVC) and AV1.
H.265 was a major leap over its predecessor H.264, roughly doubling compression efficiency. That meant a video could look the same at half the file size. AV1, developed by the Alliance for Open Media, a consortium including Google, Netflix, and Amazon, pushes efficiency further. Published comparisons generally show AV1 compressing HD and full HD video more efficiently than H.265, which is why it’s increasingly used for streaming. YouTube, Netflix, and many other platforms have adopted it.
The catch with more advanced codecs is processing time. More sophisticated analysis of each frame means the encoder needs more computing power and more time to compress the video. This is why live streams and video calls often use faster, less efficient compression. Shaving a few seconds of encoding delay matters more than squeezing out a slightly smaller file. For pre-recorded video that can be encoded overnight, platforms use the most aggressive compression available.
What This Means When You Export or Share Video
When you export a video from an editing app or choose a quality setting on your phone, you’re selecting a compression profile. Higher quality settings use a higher bitrate, preserve more detail, and produce larger files. Lower quality settings discard more data for a smaller, more shareable file.
If you’re uploading to a platform like YouTube or Instagram, the platform will re-compress your video regardless of what you upload. Starting with a higher-quality source file gives the platform’s encoder more data to work with, which generally produces a better final result. Uploading an already heavily compressed file means the platform’s encoder is working with degraded source material, compounding the quality loss.
For archiving footage you want to keep long-term, compress as little as possible or use a lossless format. Storage is cheap compared to the cost of losing detail you can never recover. For sharing on social media or messaging apps, aggressive compression is fine, since the file will be viewed on small screens where artifacts are hard to spot anyway.