What Is Circle of Confusion in Photography?

The circle of confusion is the tiny blur spot that forms on your camera’s sensor when a point of light isn’t perfectly in focus. A lens can only focus precisely at one distance at a time. Everything closer or farther than that distance gets recorded not as a sharp point but as a small disc of light. When that disc is small enough that your eye can’t tell it apart from a true point, the image still looks sharp. When it grows large enough to notice, things start looking blurry.

This concept is the foundation behind depth of field, hyperfocal distance, and most sharpness calculations in photography. Understanding it helps you predict how much of a scene will appear sharp at any given aperture and focus distance.

How a Blur Spot Becomes “Sharp Enough”

No lens produces a mathematically perfect point of light, even at its best. But human vision has limits. The widely accepted standard is that the eye can resolve detail at about 1 arcminute of angular resolution, the basis of 20/20 vision. If a blur spot on a print or screen is small enough that it falls below this threshold at a normal viewing distance, your brain registers it as a sharp point. The image looks focused even though, technically, only one precise plane is truly in focus.
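The angular-resolution threshold described above can be turned into a physical spot size on the print with simple trigonometry. A minimal sketch (the function name is illustrative; this gives the print-side spot size only — the on-sensor circle of confusion also depends on how much the capture is enlarged):

```python
import math

def resolvable_spot_mm(viewing_distance_mm, arcminutes=1.0):
    # Smallest detail the eye can resolve at a given viewing distance,
    # assuming the stated angular resolution (1 arcminute ~ 20/20 vision).
    return viewing_distance_mm * math.tan(math.radians(arcminutes / 60.0))

# At a roughly arm's-length viewing distance of 250 mm:
print(round(resolvable_spot_mm(250), 3))  # ~0.073 mm on the print
```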

This is why the circle of confusion isn’t a fixed physical property of a lens. It’s a judgment call: the maximum blur spot size you’re willing to accept as “sharp.” That acceptable size depends on three things: how sharp human vision is, how far the viewer stands from the image, and how much the image has been enlarged from the original sensor capture. A photo printed at 8×10 inches and viewed from arm’s length demands a smaller circle of confusion on the sensor than the same image displayed as a small thumbnail on a phone.

Standard Values for Common Sensor Sizes

Because photographers need a practical number to work with, the industry settled on standard circle of confusion values based on sensor format. These assume a fairly traditional scenario: an 8×10 inch print viewed from about 10 inches away.

  • Full frame (36×24 mm): 0.03 mm
  • APS-C (approximately 24×16 mm): 0.02 mm
  • Micro Four Thirds (17.3×13 mm): approximately 0.015 mm

The classic method for calculating these comes from what’s sometimes called the Zeiss formula: divide the sensor’s diagonal measurement by 1,730. For a full frame sensor with a 43.3 mm diagonal, that gives roughly 0.025 mm, slightly more conservative than the traditional 0.03 mm figure. Different sources use slightly different divisors (1,442 gives the more common 0.03 mm for full frame), which reflects different assumptions about viewing conditions. Neither is “wrong.” They just assume different levels of scrutiny.
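The diagonal-divisor calculation is easy to check directly. A small sketch (function name is mine, not a standard API):

```python
import math

def coc_from_diagonal(sensor_width_mm, sensor_height_mm, divisor=1730):
    # "Zeiss formula": circle of confusion = sensor diagonal / divisor.
    diagonal = math.hypot(sensor_width_mm, sensor_height_mm)
    return diagonal / divisor

print(round(coc_from_diagonal(36, 24), 4))        # d/1730 → ~0.025 mm
print(round(coc_from_diagonal(36, 24, 1442), 4))  # d/1442 → ~0.03 mm
```

Swapping in the APS-C or Micro Four Thirds dimensions reproduces the smaller standard values in the list above.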

Smaller sensors need tighter standards because the image must be enlarged more to reach the same print size. An APS-C image enlarged to an 8×10 print is magnified roughly 1.5× more than a full frame image at the same print size, so any blur recorded on the sensor is magnified more too.

Why It Controls Depth of Field

Depth of field is the range of distances in a scene that appear acceptably sharp. The way you calculate it is straightforward in concept: pick your maximum acceptable circle of confusion, then figure out how far in front of and behind your focus point objects can be before their blur spots exceed that limit. The distance between those near and far boundaries is your depth of field.

This is why depth of field isn’t a fixed property of a lens or aperture setting. It shifts depending on which circle of confusion value you use. Choose a stricter (smaller) value and your calculated depth of field shrinks. Choose a more lenient one and it expands. The physics haven’t changed, only your definition of “sharp enough.”

Aperture plays a direct role because a wider opening (lower f-number) creates a wider cone of light hitting the sensor. Points outside the focus plane produce larger blur spots at wide apertures than at narrow ones. This is why shooting at f/2.8 gives you a thin slice of sharpness while f/16 keeps most of a landscape looking crisp.
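The near/far boundary calculation described above can be sketched with the standard thin-lens depth of field formulas. This is a simplified model, and the function names are illustrative, but it shows how aperture stretches or compresses the sharp zone:

```python
def dof_limits(focal_mm, f_number, focus_mm, coc_mm):
    # Near and far limits of acceptable sharpness (thin-lens approximation).
    h = focal_mm ** 2 / (f_number * coc_mm) + focal_mm  # hyperfocal distance
    near = focus_mm * (h - focal_mm) / (h + focus_mm - 2 * focal_mm)
    far = (focus_mm * (h - focal_mm) / (h - focus_mm)
           if focus_mm < h else float("inf"))
    return near, far

# 50 mm lens focused at 3 m on full frame (CoC 0.03 mm):
for n in (2.8, 16):
    near, far = dof_limits(50, n, 3000, 0.03)
    print(n, round(near), round(far))
# f/2.8 → roughly 2.7–3.3 m; f/16 → roughly 1.9–6.9 m
```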

The Link to Hyperfocal Distance

Hyperfocal distance is the focus distance that maximizes depth of field for a given lens and aperture. When you focus at this distance, everything from half that distance to infinity falls within acceptable sharpness. Landscape photographers use it constantly.

The simplified formula is H = f² / (N × c): focal length squared, divided by the f-number multiplied by the circle of confusion. So for a 50 mm lens at f/11 with a full frame circle of confusion of 0.03 mm, the hyperfocal distance works out to about 7.6 meters. Focus there, and everything from roughly 3.8 meters to infinity should appear sharp in your final image.
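The worked example above is easy to verify, and running it with a stricter circle of confusion shows how the chosen value shifts the result. A minimal sketch using the simplified formula (the function name is mine):

```python
def hyperfocal_mm(focal_mm, f_number, coc_mm):
    # Simplified hyperfocal formula: H = f^2 / (N * c).
    return focal_mm ** 2 / (f_number * coc_mm)

for c in (0.03, 0.02):                      # 50 mm lens at f/11
    h = hyperfocal_mm(50, 11, c)
    print(c, round(h / 1000, 1), round(h / 2000, 1))
# 0.03 mm → H ~7.6 m, near limit ~3.8 m
# 0.02 mm → H ~11.4 m, so the sharp zone starts farther out
```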

The circle of confusion value you plug into this formula directly changes the result. Use a stricter value (say 0.02 mm instead of 0.03 mm) and the hyperfocal distance pushes farther out, meaning you need to focus farther away and your near-sharp boundary moves farther from the camera. This matters when you’re trying to include a close foreground element in a sharp landscape.

When Standard Values Fall Short

The traditional 0.03 mm standard for full frame was established during the film era, assuming a modest 8×10 inch print viewed at arm’s length. Modern photography regularly breaks those assumptions. If you’re printing large (20×30 inches or bigger), displaying on a high-resolution monitor where viewers can zoom to 100%, or cropping heavily, the standard values are too generous. Blur that would have been invisible in a small print becomes obvious.

High-resolution sensors compound this. A 60-megapixel full frame sensor captures so much detail that pixel-level viewing reveals blur the traditional 0.03 mm standard considers acceptable. Some photographers working with high-megapixel bodies use a circle of confusion closer to 0.02 mm or even smaller for their depth of field calculations, essentially applying the same rigor that APS-C standards would demand.

The key insight is that the circle of confusion value isn’t about the sensor’s pixel count. It’s about the combination of sensor size and how much you enlarge the final image. A 60-megapixel sensor printed at 8×10 inches and viewed at normal distance still looks sharp with the traditional 0.03 mm standard. But if you’re cropping to 50% of the frame and printing large, your effective enlargement has doubled, and the circle of confusion you should use drops accordingly.
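The enlargement reasoning above can be expressed as a small adjustment to the standard value. A sketch under the stated assumptions (the function and parameter names are mine; `crop_fraction` is the linear fraction of the frame kept, and `extra_enlargement` is any print-size increase beyond the 8×10 baseline):

```python
def effective_coc(base_coc_mm, crop_fraction=1.0, extra_enlargement=1.0):
    # Cropping to a fraction of the frame (linear) or printing larger than
    # the baseline both increase enlargement, shrinking the acceptable
    # on-sensor blur proportionally.
    return base_coc_mm * crop_fraction / extra_enlargement

print(effective_coc(0.03, crop_fraction=0.5))  # crop to half the frame → 0.015
```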

Practical Takeaways for Shooting

Most depth of field calculators and apps let you choose or adjust the circle of confusion value. If you mostly share images on social media or make modest prints, the default values for your sensor size work fine. If you regularly make large prints, crop aggressively, or pixel-peep your work, tighten the value by 30 to 50 percent. You’ll get more conservative (and more honest) depth of field estimates.

When shooting landscapes at the hyperfocal distance, remember that “sharp to infinity” depends entirely on which circle of confusion value you trust. Many experienced landscape photographers focus slightly beyond the calculated hyperfocal point as insurance, especially if they plan to print large. The math gives you a boundary, not a guarantee, because the boundary itself is based on a subjective definition of sharpness.

The circle of confusion also explains why the same lens and aperture setting produces different depth of field on different camera bodies. A 50 mm lens at f/8 on a full frame camera has a different depth of field than the same lens on an APS-C body, partly because the smaller sensor’s stricter circle of confusion standard narrows the acceptable range. This is separate from the crop factor’s effect on field of view, though both contribute to the different look you get from different sensor sizes.