What Is Depth Effect? How It Works in Photos and Screens

The depth effect is a visual technique that makes certain objects appear closer or farther away by separating them from their surroundings. It shows up in photography, smartphone features, visual design, and human vision itself. Whether you’ve seen it as the blurred-background look in a portrait photo or as an iPhone wallpaper setting, the core idea is the same: creating a sense of three-dimensional space on a two-dimensional surface.

How Depth Effect Works in Photography

In traditional photography, the depth effect comes from controlling what’s called depth of field. The depth of field is the zone in a photo where things appear sharp. Everything outside that zone gradually falls out of focus and becomes blurry. When a photographer narrows this zone (a “shallow” depth of field), the subject stays crisp while the background melts into a smooth blur. This makes the subject pop forward, creating a strong sense of depth.

Three things control how shallow or deep this zone is: the aperture (the opening in the lens), the focal length of the lens, and the distance between the camera and the subject. A wider aperture lets in more light and shrinks the focused zone. Longer lenses compress the scene and blur backgrounds more aggressively. Getting physically closer to your subject also narrows the depth of field. Professional portrait photographers combine all three to isolate a face against a creamy, out-of-focus background.
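For concreteness, here is a minimal sketch of that relationship using the standard thin-lens depth-of-field formulas. The specific lens, aperture, and circle-of-confusion values are illustrative assumptions, not measurements from any particular camera.

```python
# Approximate depth of field with the standard thin-lens formulas.
# All numbers are illustrative assumptions: an 85 mm portrait lens,
# a full-frame circle of confusion of 0.03 mm, subject 2 m away.

def depth_of_field_mm(focal_mm: float, f_number: float,
                      subject_mm: float, coc_mm: float = 0.03) -> float:
    """Total depth of field in millimeters (valid while subject < hyperfocal)."""
    # Hyperfocal distance: focusing here keeps everything to infinity acceptably sharp.
    hyperfocal = focal_mm ** 2 / (f_number * coc_mm) + focal_mm
    near = subject_mm * (hyperfocal - focal_mm) / (hyperfocal + subject_mm - 2 * focal_mm)
    far = subject_mm * (hyperfocal - focal_mm) / (hyperfocal - subject_mm)
    return far - near

# Wider aperture (smaller f-number), longer lens, closer subject -> thinner sharp zone.
print(depth_of_field_mm(85, 1.8, 2000))  # ~57 mm: only a sliver of the scene is sharp
print(depth_of_field_mm(85, 8.0, 2000))  # ~255 mm: stopping down deepens the zone
print(depth_of_field_mm(85, 1.8, 4000))  # stepping back to 4 m roughly quadruples it
```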

That pleasing blur in the out-of-focus areas has its own name: bokeh, from the Japanese word “boke” meaning blur or haze. Bokeh isn’t just “blurriness.” It describes the quality and character of the blur, including how point light sources render as soft circles or hexagons depending on the lens design. Photographers actively seek lenses that produce smooth, even bokeh because it makes the depth effect more visually appealing.

How Smartphones Create the Depth Effect

Smartphone cameras have tiny sensors and small lenses, which naturally produce a deep depth of field where nearly everything is in focus. To mimic the shallow depth-of-field look of larger cameras, phones rely on computational tricks. At the core of this process is depth mapping: the phone builds a pixel-by-pixel estimate of how far each part of the scene is from the camera, then applies artificial blur to everything it determines is “background.”

Different phones approach depth mapping in different ways. Dual-camera systems (popularized by the iPhone 7 Plus’s portrait mode) capture two slightly offset images simultaneously and calculate depth from the difference between them, similar to how human eyes work. Google’s Pixel phones initially took a different approach, using dual-pixel autofocus sensors, where each pixel is split into two photodiodes, to capture two very slightly offset views through a single lens and combining them with machine learning. More recent phones, like the iPhone 12 Pro and later Pro models, added LiDAR sensors that fire pulses of light and measure how long they take to bounce back. These sensors can map depth with accuracy within about 1 centimeter for small objects.
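The math behind both approaches reduces to simple geometry, sketched below in idealized form: stereo triangulation (depth equals focal length times lens spacing divided by disparity) and LiDAR time of flight (distance equals half the round-trip time multiplied by the speed of light). The focal length, baseline, and timing numbers are assumed for illustration.

```python
# Two idealized depth measurements with assumed, plausible numbers.

# 1) Stereo triangulation (dual-camera systems): a point's horizontal shift
#    between the two views (its disparity) is inversely proportional to depth.
def stereo_depth_m(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    return focal_px * baseline_m / disparity_px

# Assumed optics: ~2800 px focal length, 12 mm between the two lenses.
print(stereo_depth_m(2800, 0.012, 16.8))  # 2.0 m away
print(stereo_depth_m(2800, 0.012, 3.36))  # 10.0 m: disparity shrinks with distance

# 2) LiDAR time of flight: distance = speed of light * round-trip time / 2.
C_M_S = 299_792_458.0

def lidar_depth_m(round_trip_s: float) -> float:
    return C_M_S * round_trip_s / 2

print(lidar_depth_m(13.3e-9))  # a ~13-nanosecond echo puts the surface ~2 m away
```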

Once the phone has its depth map, it applies a digital blur that increases with distance from the subject. The blur is typically modeled as a graduated effect, so objects just behind the subject get a slight softening while distant backgrounds get heavily blurred. This is the “depth effect” or “portrait mode” you see in your camera app. The same depth-mapping technology powers the depth effect wallpaper feature on iPhones, where the wallpaper’s subject overlaps the lock screen clock to create a layered, three-dimensional look.
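A rough way to picture the blur step: precompute the image at a few blur strengths, then pick a strength per pixel based on how far its depth sits from the subject plane. The sketch below does this with NumPy and SciPy; real camera pipelines are far more sophisticated (handling subject edges, highlights, and bokeh shape), so treat this purely as a toy model.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def portrait_blur(image: np.ndarray, depth: np.ndarray,
                  subject_depth: float, max_sigma: float = 8.0) -> np.ndarray:
    """Synthetic shallow depth of field from a per-pixel depth map.

    image: (H, W) grayscale float array; depth: (H, W) distances in meters.
    Blur strength grows with each pixel's distance from the subject plane.
    """
    dist = np.abs(depth - subject_depth)
    sigma_map = max_sigma * dist / max(dist.max(), 1e-6)

    # Precompute the image at a few blur strengths, then snap each pixel
    # to its nearest precomputed level.
    levels = np.linspace(0.0, max_sigma, 5)
    stack = np.stack([gaussian_filter(image, s) if s > 0 else image
                      for s in levels])
    idx = np.argmin(np.abs(levels[:, None, None] - sigma_map[None]), axis=0)
    return np.take_along_axis(stack, idx[None], axis=0)[0]

# Toy scene: a subject plane at 2 m in the center, a wall at 8 m around it.
img = np.random.rand(64, 64)
depth = np.full((64, 64), 8.0)
depth[16:48, 16:48] = 2.0
result = portrait_blur(img, depth, subject_depth=2.0)  # sharp center, blurred edges
```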

How Your Brain Perceives Depth

The depth effect in photos and screens works because it taps into the same systems your brain uses to perceive real three-dimensional space. Your visual system relies on multiple overlapping cues to judge distance, and they fall into two broad categories.

Binocular cues come from having two eyes spaced about 6 centimeters apart. Each eye captures a slightly different view of the same scene. Your brain fuses these two images into one and uses the tiny differences between them to calculate depth with remarkable precision. This process, called stereopsis, is the reason 3D movies and VR headsets work: they feed each eye a slightly different image. About 7% of adults under 60 lack this ability entirely, often because of amblyopia (lazy eye) or strabismus (crossed eyes), which means they rely more heavily on other depth cues.
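A quick worked example shows why stereopsis is so much sharper up close: the disparity produced by a fixed depth step shrinks roughly with the square of viewing distance. The 6-centimeter eye spacing comes from above; the object distances are arbitrary.

```python
import math

EYE_SPACING_M = 0.06  # the ~6 cm separation mentioned above

def vergence_rad(distance_m: float) -> float:
    """Angle between the two eyes' lines of sight to a point straight ahead."""
    return 2 * math.atan(EYE_SPACING_M / (2 * distance_m))

def disparity_arcmin(near_m: float, far_m: float) -> float:
    """Retinal disparity between two points at different depths, in arcminutes."""
    return math.degrees(vergence_rad(near_m) - vergence_rad(far_m)) * 60

# The same 10 cm depth step produces wildly different disparities:
print(disparity_arcmin(1.0, 1.1))    # ~18.7 arcmin at arm's length: easy to detect
print(disparity_arcmin(10.0, 10.1))  # ~0.2 arcmin at 10 m: approaching the limit
```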

Monocular cues work with just one eye, and they’re the reason flat photos can still feel three-dimensional. These include relative size (faraway objects look smaller), occlusion (closer objects block the view of things behind them), linear perspective (parallel lines converging in the distance), texture gradients (surface details becoming finer and more tightly packed with distance), aerial perspective (distant objects appearing hazier and bluer), and height within the image (objects higher in your field of view tend to be farther away). Shading and cast shadows also provide powerful depth information by revealing the shape and position of objects relative to light sources.

Motion Parallax: Depth From Movement

One of the most powerful depth cues doesn’t appear in still photos at all. Motion parallax is the phenomenon where nearby objects appear to move quickly across your visual field while distant objects seem to drift slowly. You’ve experienced this looking out a car window: roadside fences streak past while distant mountains barely move.
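The geometry is simple enough to sanity-check: for sideways motion, an object’s apparent angular speed is approximately your speed divided by its distance. The car speed and distances below are assumptions chosen to match the example.

```python
import math

def angular_speed_deg_s(observer_speed_m_s: float, distance_m: float) -> float:
    """Apparent angular speed of an object directly abeam (small-angle: v / d)."""
    return math.degrees(observer_speed_m_s / distance_m)

V = 27.0  # ~100 km/h out the car window (an assumed speed)
print(angular_speed_deg_s(V, 5.0))     # roadside fence at 5 m:  ~309 deg/s, a streak
print(angular_speed_deg_s(V, 5000.0))  # mountains at 5 km:      ~0.3 deg/s, a crawl
```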

Your brain calculates depth from motion parallax using a combination of what’s happening on your retinas and signals about your own head and eye movements. Research in neuroscience has identified a specific brain region (in the temporal cortex) where over half of the neurons respond selectively to depth information from motion parallax. Interestingly, these neurons can only distinguish whether something is nearer or farther when they also receive signals about eye movements. Without that extra information, the motion on the retina is ambiguous: the brain can’t tell if an object is moving in front of or behind the point you’re looking at.

This is why some smartphone wallpapers and interface effects shift subtly when you tilt your phone. By moving their layers at different speeds, the screen mimics motion parallax and creates a convincing sense of depth even on a flat display.
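A minimal sketch of that trick, assuming a hypothetical three-layer wallpaper: each layer’s on-screen offset scales inversely with an assigned depth, so near layers slide farther than distant ones for the same tilt. The layer names and tuning constant are made up for illustration, not taken from any real API.

```python
def parallax_offset_px(tilt_deg: float, layer_depth: float,
                       gain_px: float = 8.0) -> float:
    """On-screen shift for one layer at a given device tilt.

    Nearer layers (smaller assigned depth) slide farther, mimicking motion
    parallax. gain_px is an arbitrary tuning constant, not a real API value.
    """
    return gain_px * tilt_deg / layer_depth

# Hypothetical wallpaper with three layers at increasing assigned depths.
for name, layer_depth in [("foreground subject", 1.0),
                          ("midground", 3.0),
                          ("background", 10.0)]:
    print(f"{name:18s} shifts {parallax_offset_px(5.0, layer_depth):5.1f} px at 5 deg tilt")
```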

Depth Effect in Design and Displays

Beyond photography, the depth effect is a deliberate design tool in user interfaces, gaming, and digital art. Apple’s iOS uses layered parallax effects on home screens and lock screens to make flat interfaces feel more spatial. Video games use depth-of-field rendering to focus your attention on key elements while blurring peripheral scenery, just like a movie camera would.

The underlying principle is always the same: selectively controlling focus, layering, and motion cues to trick your visual system into perceiving a flat surface as having three-dimensional space. Whether it’s a photographer opening their aperture, a smartphone building a depth map with LiDAR, or a game engine blurring distant terrain, the depth effect works because it speaks the language your brain already uses to navigate the real world.