What Is a Sensor in a Camera and How Does It Work?

A camera sensor is a flat chip inside every digital camera that captures light and converts it into the electrical signals that become your photograph. It replaces the role that film played in older cameras, sitting right behind the lens where light focuses into an image. The sensor’s size, type, and design are among the biggest factors determining your image quality.

How a Sensor Turns Light Into a Photo

The process starts with physics. When light passes through your lens, it hits the sensor’s surface, which is made of silicon. Photons (particles of light) strike the silicon and free electrons within it through a process called the photoelectric effect. Each tiny section of the sensor, called a pixel, collects these freed electrons and stores them.

The number of electrons each pixel accumulates depends on how much light hit that spot. A bright area in the scene releases many electrons; a dark area releases few. Once exposure ends, the camera reads out each pixel’s electron count and converts it into a voltage. An analog-to-digital converter then translates that voltage into a number, and the millions of numbers from all the pixels together form your digital image file.
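The readout chain described above can be sketched in a few lines. The full-well capacity and bit depth below are assumed example figures, not the specs of any real sensor:

```python
# Illustrative sketch of the readout chain: electron count -> voltage -> digital number.
# FULL_WELL and BIT_DEPTH are assumed example values, not specs of any particular sensor.

FULL_WELL = 50_000       # max electrons a pixel can hold before it saturates (assumed)
BIT_DEPTH = 14           # a common raw bit depth
MAX_DN = 2**BIT_DEPTH - 1

def read_pixel(electrons: int) -> int:
    """Convert an electron count into a digital number (DN)."""
    electrons = min(electrons, FULL_WELL)   # a full pixel clips: blown highlight
    voltage = electrons / FULL_WELL         # normalized analog voltage, 0..1
    return round(voltage * MAX_DN)          # analog-to-digital conversion

print(read_pixel(50_000))   # bright pixel -> 16383 (clipped highlight)
print(read_pixel(25_000))   # mid-tone -> 8192
print(read_pixel(100))      # deep shadow -> 33
```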

How Sensors Capture Color

Individual pixels are colorblind. Each one only measures the brightness of the light hitting it, not its color. To create a color image, manufacturers place a grid of tiny color filters over the sensor, usually in what’s called a Bayer pattern: alternating red, green, and blue filters arranged so that each pixel only sees one color. The pattern uses twice as many green filters as red or blue, because human eyes are most sensitive to green light.

Since each pixel captures only one color channel, the camera’s processor has to fill in the missing colors for every pixel by looking at what neighboring pixels recorded. This mathematical guessing game is called demosaicing. Modern cameras do this so well that you never notice it, though the process can occasionally produce small color artifacts along very fine edges or patterns.
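A toy version of bilinear demosaicing shows the idea: each missing color at a pixel is estimated by averaging the nearest pixels that did measure it. Real camera pipelines use far more sophisticated, edge-aware algorithms; the RGGB layout and simple averaging rule here are a simplified sketch:

```python
# Minimal bilinear demosaicing sketch for an RGGB Bayer mosaic.
# Real cameras use much more advanced, edge-aware algorithms.

def bayer_color(row, col):
    """Which filter covers this pixel in an RGGB pattern."""
    if row % 2 == 0:
        return "R" if col % 2 == 0 else "G"
    return "G" if col % 2 == 0 else "B"

def demosaic_pixel(mosaic, row, col, color):
    """Estimate one color channel at (row, col) by averaging same-color neighbors."""
    if bayer_color(row, col) == color:
        return mosaic[row][col]             # this pixel measured the color directly
    neighbors = []
    for dr in (-1, 0, 1):                   # scan the surrounding 3x3 window
        for dc in (-1, 0, 1):
            r, c = row + dr, col + dc
            if (dr, dc) != (0, 0) and 0 <= r < len(mosaic) \
                    and 0 <= c < len(mosaic[0]) and bayer_color(r, c) == color:
                neighbors.append(mosaic[r][c])
    return sum(neighbors) / len(neighbors)

mosaic = [
    [100, 80, 100, 80],
    [ 60, 40,  60, 40],
    [100, 80, 100, 80],
    [ 60, 40,  60, 40],
]
# (1, 1) is a blue-filtered pixel; its green value is the mean of 4 green neighbors:
print(demosaic_pixel(mosaic, 1, 1, "G"))   # 70.0
```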

CCD vs. CMOS Sensors

Two main sensor technologies have existed in digital photography: CCD (charge-coupled device) and CMOS (complementary metal-oxide semiconductor). Both convert light into electrons the same way, but they differ in how they read that information out.

A CCD sensor shifts the charge from each pixel row by row to a single output, like a bucket brigade passing water down a line. This limits how fast the sensor can be read. A CMOS sensor reads each pixel individually and directly, which allows significantly higher speeds. CMOS sensors also consume less power and are cheaper to manufacture. CCD sensors are no longer actively developed: CMOS technology has caught up on every front where CCDs once held an advantage, including noise performance, color accuracy, and light sensitivity, making CMOS the universal standard in cameras today.
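The difference in readout style can be modeled conceptually. This is not real driver code, just an illustration of serial shifting versus direct addressing:

```python
# Conceptual model of the two readout schemes, not real sensor driver code.

def ccd_readout(pixels):
    """CCD: charges shift one step at a time toward a single output node."""
    out = []
    while pixels:
        out.append(pixels.pop(0))   # only the pixel at the output node is measured
    return out                      # every pixel must pass through one bottleneck

def cmos_readout(pixels, wanted):
    """CMOS: any pixel can be addressed and read directly, in any order."""
    return [pixels[i] for i in wanted]

print(ccd_readout([1, 2, 3]))            # [1, 2, 3] — always the full serial chain
print(cmos_readout([10, 20, 30], [2, 0]))  # [30, 10] — direct, selective access
```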

Sensor Sizes and What They Mean

Camera sensors come in a range of physical sizes, and the size has a direct impact on image quality, depth of field, and how lenses behave. The most common formats, measured in width by height:

  • Full frame: roughly 36 × 24 mm, matching the dimensions of traditional 35mm film. Found in professional and high-end mirrorless cameras.
  • APS-C: roughly 23.6 × 15.6 mm. The most common size in enthusiast cameras from Sony, Fujifilm, Nikon, and others.
  • Micro Four Thirds: roughly 17.3 × 13 mm. Used by OM System (formerly Olympus) and Panasonic.
  • 1-inch: roughly 13.2 × 8.8 mm. Found in premium compact cameras like the Sony RX100 series.

A larger sensor collects more total light, which generally means cleaner images with less grain (noise), especially in dim conditions. It also produces shallower depth of field at the same framing, giving you more background blur in portraits.
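Using the dimensions listed above, a quick calculation shows how much total light-gathering area each format gives up relative to full frame:

```python
# Sensor areas from the (width, height) dimensions in mm listed above.
formats = {
    "Full frame":        (36.0, 24.0),
    "APS-C":             (23.6, 15.6),
    "Micro Four Thirds": (17.3, 13.0),
    "1-inch":            (13.2, 8.8),
}

full_frame_area = 36.0 * 24.0   # 864 mm²
for name, (w, h) in formats.items():
    area = w * h
    print(f"{name}: {area:.0f} mm², {full_frame_area / area:.1f}x smaller than full frame")
```

At the same exposure settings, total light captured scales with that area, which is where the larger formats’ noise advantage comes from.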

Crop Factor

Sensor size also changes how your lenses behave. A smaller sensor captures a narrower slice of the image projected by the lens, making it look like you’re zoomed in compared to full frame. This is described by a number called the crop factor. APS-C sensors have a crop factor around 1.5, Micro Four Thirds is 2.0, and 1-inch sensors sit at about 2.7.

To figure out the equivalent field of view, you multiply the lens’s focal length by the crop factor. A 50mm lens on an APS-C camera gives you the same framing as a 75mm lens on full frame. A 300mm lens on that same APS-C body gives you the reach of a 450mm lens. This is why wildlife and sports photographers sometimes appreciate smaller sensors: they get extra reach without buying longer, heavier lenses.
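The focal-length arithmetic is a single multiplication. A small sketch using the crop factors above:

```python
# Equivalent full-frame field of view = focal length x crop factor.
crop_factors = {
    "full frame": 1.0,
    "APS-C": 1.5,
    "Micro Four Thirds": 2.0,
    "1-inch": 2.7,
}

def equivalent_focal_length(focal_mm: float, sensor: str) -> float:
    return focal_mm * crop_factors[sensor]

print(equivalent_focal_length(50, "APS-C"))               # 75.0 mm equivalent
print(equivalent_focal_length(300, "APS-C"))              # 450.0 mm equivalent
print(equivalent_focal_length(300, "Micro Four Thirds"))  # 600.0 mm equivalent
```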

Why Pixel Size Matters More Than Pixel Count

Megapixel counts get the most attention in marketing, but the physical size of each individual pixel often matters more for image quality. Pixel size is measured in microns (millionths of a meter). Larger pixels have more surface area to collect light, which improves their signal-to-noise ratio. That means cleaner, more detailed images, particularly in low light or low-contrast situations.

This is why a 24-megapixel full-frame sensor typically produces cleaner photos than a 24-megapixel APS-C sensor. The full-frame sensor is physically larger, so its pixels are bigger even though both sensors have the same total count. And it’s why cramming 100+ megapixels onto a smartphone sensor involves real trade-offs: the pixels become so tiny that each one captures very little light, forcing the phone’s software to merge groups of pixels together to compensate.
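You can estimate pixel size from sensor dimensions and megapixel count. This back-of-the-envelope version ignores gaps between pixels and non-imaging border area, and the smartphone sensor dimensions are an assumed example:

```python
import math

def pixel_pitch_microns(width_mm, height_mm, megapixels):
    """Rough pixel side length, ignoring inter-pixel gaps and borders."""
    area_um2 = (width_mm * 1000) * (height_mm * 1000)   # sensor area in µm²
    return math.sqrt(area_um2 / (megapixels * 1e6))     # side of one square pixel

print(round(pixel_pitch_microns(36.0, 24.0, 24), 1))  # full frame, 24 MP -> 6.0 µm
print(round(pixel_pitch_microns(23.6, 15.6, 24), 1))  # APS-C, 24 MP -> 3.9 µm
print(round(pixel_pitch_microns(9.8, 7.3, 108), 1))   # 108 MP phone (assumed size) -> 0.8 µm
```

Same pixel count, very different pixel sizes: that gap in light-collecting area per pixel is the noise difference described above.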

Dynamic Range

Dynamic range describes how wide a span of brightness your sensor can capture in a single shot, from the deepest shadows to the brightest highlights. It’s measured in stops, where each stop represents a doubling of light. A sensor with 13 stops of dynamic range can record a brightness ratio of roughly 8,000:1 (2¹³ ≈ 8,192), about 256 times wider than the 32:1 range of a sensor with just 5 stops.
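Because each stop is a doubling, converting stops to a contrast ratio is just a power of two:

```python
# Each stop doubles the light, so dynamic range in stops maps to a 2**stops ratio.

def brightness_ratio(stops: int) -> int:
    """Ratio between the brightest and darkest recordable levels."""
    return 2 ** stops

print(brightness_ratio(13))   # 8192 -> roughly an 8,000:1 range
print(brightness_ratio(5))    # 32 -> a 32:1 range
print(brightness_ratio(13) // brightness_ratio(5))   # 256x wider
```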

Modern full-frame sensors typically capture 13 to 15 stops of dynamic range, up from 10 to 12 stops in older DSLRs. That extra range is why you can recover so much shadow detail from a modern raw file. High dynamic range is especially valuable for backlit scenes, sunsets, and any situation where bright and dark areas coexist in the same frame.

Rolling Shutter vs. Global Shutter

Most CMOS sensors use a rolling shutter, meaning they read out the image row by row from top to bottom. Each row is captured at a slightly different moment; on a typical non-stacked sensor, the bottom row may be read out tens of milliseconds after the top row. That delay is invisible for still subjects, but when something moves fast (or you pan the camera quickly), it can cause a skewing effect where vertical lines appear to lean. You’ve probably seen this in phone videos of helicopter blades or guitar strings looking rubbery.
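A rough estimate of the skew is just readout time multiplied by how fast the subject moves across the frame. The readout time and subject speed here are assumed example values, not measurements of any specific camera:

```python
# Rough rolling-shutter skew estimate. The 20 ms readout and 5,000 px/s speed
# are assumed example values, not measurements of a real camera.

def skew_pixels(readout_time_s, subject_speed_px_per_s):
    """Horizontal offset between top and bottom rows of a moving subject."""
    return readout_time_s * subject_speed_px_per_s

# A subject sweeping across the frame at 5,000 px/s during a 20 ms readout:
print(skew_pixels(0.020, 5000))   # 100.0 px of lean from top to bottom
```

This is also why faster readout (stacked sensors) or a global shutter shrinks or eliminates the effect: the multiplier on subject motion goes toward zero.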

A global shutter reads every pixel simultaneously, capturing the entire scene at one instant. This eliminates skew artifacts completely. Global shutter sensors have historically come with trade-offs like increased noise and lower frame rates, but the technology is improving rapidly and starting to appear in consumer mirrorless cameras.

Recent Sensor Advances

Two design innovations have pushed sensor performance forward in recent years. Back-side illumination (BSI) flips the sensor’s wiring behind the light-collecting layer instead of in front of it, so more photons actually reach the silicon. This improves low-light sensitivity without requiring larger pixels.

Stacked sensor designs go further by layering processing circuitry beneath the pixel array, enabling faster readout speeds and features that weren’t previously possible. A newer “partially stacked” approach adds more complex readout circuitry around the sensor’s edge, allowing it to capture two different gain settings simultaneously. One setting preserves bright highlights while the other keeps shadows clean, and the camera merges them into a single exposure with significantly boosted dynamic range at low ISO settings. This technique, seen in cameras like the Panasonic S1II and Sony a7 V, currently requires a mechanical shutter to work, since the dual readout takes longer than a standard single-mode scan. It represents a meaningful step forward without the higher cost of a fully stacked design, which makes it likely to spread to more affordable cameras over time.
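The dual-gain merge can be sketched conceptually. The gain ratio, clip level, and blend rule below are invented for illustration; actual cameras perform this merging in hardware with proprietary logic:

```python
# Conceptual sketch of merging two simultaneous gain readouts into one image.
# GAIN_RATIO, CLIP, and the blend rule are invented for illustration only.

GAIN_RATIO = 4      # high-gain readout amplifies the signal 4x (assumed)
CLIP = 16383        # 14-bit saturation level

def merge(low_gain, high_gain):
    """Prefer the cleaner high-gain value unless it clipped, then fall back."""
    merged = []
    for lo, hi in zip(low_gain, high_gain):
        if hi < CLIP:
            merged.append(hi / GAIN_RATIO)   # shadows/mid-tones: low-noise path
        else:
            merged.append(lo)                # clipped highlight: low-gain path
    return merged

# Shadows come from the less noisy high-gain pass; highlights survive via low-gain:
print(merge([100, 4000, 16000], [400, 16000, 16383]))   # [100.0, 4000.0, 16000]
```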