What Is Satellite Imagery and How Does It Work?

Satellite imagery is photography of Earth captured by sensors aboard orbiting satellites. These sensors detect energy reflected or emitted from the planet’s surface, converting it into digital images that reveal everything from crop health to urban sprawl to wildfire damage. What makes satellite imagery more powerful than a simple photograph is that it captures light beyond what the human eye can see, including infrared and microwave wavelengths that expose patterns invisible at ground level.

How Satellites Capture Images

Every surface on Earth reflects or emits electromagnetic radiation: visible light, infrared heat, microwaves. Satellite sensors measure this radiation from orbit and translate it into pixel-by-pixel data. Each pixel stores a value representing the intensity of energy detected at a specific wavelength, and when millions of these pixels are assembled together, you get a detailed image of a patch of Earth’s surface.

There are two fundamental types of sensors. Passive sensors work like a camera: they rely on an external energy source, usually the sun, and measure the sunlight bouncing off the ground. The energy they detect varies depending on what’s below, whether it’s soil, water, concrete, or forest canopy. Physical temperature, surface roughness, and material composition all change the signal. Active sensors generate their own energy. Radar satellites, for example, fire microwave pulses toward the ground and measure what bounces back. This makes them especially useful at night or through cloud cover, since they don’t depend on sunlight.

What Resolution Actually Means

Resolution isn’t just about how sharp an image looks. Satellite imagery has four distinct types of resolution, and each one determines what kind of questions the data can answer.

Spatial resolution is the one most people think of first. It describes the size of the smallest area represented by a single pixel. With 30-meter spatial resolution, each pixel covers a 30-by-30-meter patch of ground. Very-high-resolution imagery operates in the meter to sub-meter range, where individual cars, trees, and building features become distinguishable. Commercial providers like Maxar Technologies offer imagery down to 30 centimeters per pixel.
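The practical consequence of spatial resolution is whether a given feature spans enough pixels to be seen at all. A minimal sketch (the function name and example sizes are illustrative, not from any library):

```python
def pixels_for_feature(feature_size_m: float, resolution_m: float) -> float:
    """Approximate number of pixels a ground feature spans along one axis."""
    return feature_size_m / resolution_m

# A ~4.5 m car at 30 m Landsat resolution spans a small fraction of one
# pixel, so it is invisible; at 0.3 m commercial resolution it spans ~15
# pixels along its length, enough to recognize.
print(pixels_for_feature(4.5, 30.0))
print(pixels_for_feature(4.5, 0.3))
```

A rough rule of thumb is that a feature needs to span several pixels before it can be reliably identified rather than merely detected.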

Spectral resolution refers to how finely a sensor divides the electromagnetic spectrum. A standard multispectral sensor captures 3 to 10 bands of light, similar to how your eye sees red, green, and blue but with added infrared channels. Hyperspectral instruments push this much further, recording hundreds of narrow bands. At that level of detail, analysts can distinguish between different rock types, mineral compositions, and vegetation species.

Temporal resolution is how often a satellite revisits the same spot on Earth. Geostationary weather satellites hover over one location and capture images continuously. Polar-orbiting satellites circle the planet and may revisit a location every 1 to 16 days depending on their orbit and how wide a strip they photograph in each pass. NASA’s MODIS instrument revisits every 1 to 2 days, making it ideal for tracking fast-changing events like fires or storms. The Landsat satellites have a 16-day revisit cycle, better suited for monitoring gradual changes like seasonal vegetation shifts or urban expansion.

Radiometric resolution determines how many shades of brightness a sensor can distinguish. It’s measured in bits. An 8-bit sensor records 256 possible values per pixel, while a 12-bit sensor records 4,096. Higher radiometric resolution matters when you need to detect subtle differences, like slight variations in ocean color that indicate water quality changes.
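The bit-depth arithmetic behind radiometric resolution is simply powers of two:

```python
def gray_levels(bits: int) -> int:
    """Number of distinct brightness values an n-bit sensor can record."""
    return 2 ** bits

print(gray_levels(8))   # 256
print(gray_levels(12))  # 4096
```

Each additional bit doubles the number of distinguishable brightness levels, which is why going from 8-bit to 12-bit quantization is a sixteen-fold gain, not a 50 percent one.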

Turning Raw Data Into Usable Maps

A satellite image straight from the sensor isn’t ready for analysis. The raw data needs geometric and radiometric corrections before it accurately represents what’s on the ground. One critical step is orthorectification, a process that removes distortions caused by the satellite’s viewing angle, Earth’s curvature, and terrain elevation. Without it, a mountain ridge might appear shifted from its true location by hundreds of meters.

Geometric correction relies on two main approaches. One uses a physical model of how the sensor and satellite orbit work together to map each pixel to a ground location. The other uses mathematical functions called rational polynomial coefficients to achieve the same result with less information about the sensor itself. Both methods reference elevation data and known ground points to ensure the final image lines up accurately with real-world coordinates. After these corrections, the image becomes something you can overlay on a map, compare with images from different dates, or combine with data from other satellites.
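Once an image is georeferenced, mapping a pixel to a real-world coordinate is a small affine calculation. This sketch uses the six-coefficient geotransform convention popularized by GDAL; the scene origin and pixel size here are hypothetical:

```python
# GDAL-style geotransform:
# (origin_x, pixel_width, row_rotation, origin_y, col_rotation, pixel_height)
# pixel_height is negative for north-up images (rows run southward).
def pixel_to_map(gt, col, row):
    """Map a (col, row) pixel position to map coordinates (x, y)."""
    x = gt[0] + col * gt[1] + row * gt[2]
    y = gt[3] + col * gt[4] + row * gt[5]
    return x, y

# Hypothetical north-up 30 m scene, upper-left corner at UTM (300000, 4500000).
gt = (300000.0, 30.0, 0.0, 4500000.0, 0.0, -30.0)
print(pixel_to_map(gt, 100, 200))  # (303000.0, 4494000.0)
```

The rotation terms are zero for a north-up image; orthorectification is what makes this simple linear mapping valid despite terrain and viewing-angle distortions.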

Processed data comes in levels. Level-1 products are radiometrically calibrated, geometrically corrected images. Level-2 products go a step further, converting the raw signals into measurements of real physical properties like surface reflectance (how much sunlight a surface bounces back) or surface temperature.
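In practice, Level-2 pixel values are stored as scaled integers, and users apply a published linear conversion to recover physical units. A sketch using the scale and offset documented for Landsat Collection 2 Level-2 surface reflectance (verify the factors against the current product guide before relying on them):

```python
def surface_reflectance(dn: int) -> float:
    """Convert a Landsat Collection 2 Level-2 digital number to surface
    reflectance using the published scale factor and offset."""
    return dn * 0.0000275 - 0.2

# A stored value of 10000 corresponds to roughly 7.5% reflectance.
print(surface_reflectance(10000))
```

Reflectance is dimensionless and should fall between 0 and 1 for valid pixels; values outside that range usually flag saturated or fill data.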

Measuring Vegetation Health With Light

One of the most widely used techniques in satellite imagery analysis is the Normalized Difference Vegetation Index, or NDVI. It works on a simple principle: healthy green plants absorb red light for photosynthesis but strongly reflect near-infrared light. Stressed or sparse vegetation reflects more red and less near-infrared.

NDVI is calculated by subtracting the red band value from the near-infrared value, then dividing by their sum. The result falls on a scale from negative 1 to positive 1. Values near zero or below typically indicate water, bare soil, or rock. Values above 0.3 or so suggest vegetation, with higher numbers indicating denser, healthier plant cover. Farmers, ecologists, and forest managers all use NDVI to track changes in plant health over time, spot drought stress early, and estimate crop yields before harvest.
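The NDVI formula described above, (NIR − Red) / (NIR + Red), is a one-liner with NumPy. The reflectance values in the example are made up to illustrate the three typical cases:

```python
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """NDVI = (NIR - Red) / (NIR + Red); zero-sum pixels are mapped to 0."""
    nir = nir.astype(float)
    red = red.astype(float)
    denom = nir + red
    safe = np.where(denom == 0, 1.0, denom)  # avoid division by zero
    return np.where(denom == 0, 0.0, (nir - red) / safe)

# Hypothetical reflectances: dense vegetation, bare soil, open water.
nir = np.array([0.50, 0.30, 0.02])
red = np.array([0.08, 0.25, 0.05])
print(ndvi(nir, red))  # roughly [0.72, 0.09, -0.43]
```

The vegetated pixel scores well above 0.3, the soil pixel sits near zero, and the water pixel goes negative, matching the interpretation ranges above.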

Major Satellite Programs and Data Access

The Landsat program, jointly run by NASA and the U.S. Geological Survey, is the longest-running satellite imagery archive in existence. It has been continuously collecting Earth observation data since 1972. The current collection includes Level-1 data from all nine Landsat missions and science products going back to 1982. Since 2008, all Landsat data has been freely available to anyone, a policy shift that transformed the field by making decades of imagery accessible to researchers, governments, and private companies worldwide. Landsat 8 and 9 deliver new scenes within 4 to 6 hours of acquisition.

The European Space Agency’s Sentinel-2 satellites complement Landsat with higher spatial resolution (10 meters in visible bands) and a faster revisit time. Like Landsat, Sentinel data is free and open. Together, these two programs form the backbone of most non-commercial satellite analysis.

On the commercial side, companies like Maxar Technologies operate satellites that capture 8-band multispectral imagery at resolutions between 30 centimeters and 1.2 meters. That level of detail is sharp enough to assess structural damage to individual buildings after a disaster. Planet Labs takes a different approach, operating hundreds of small satellites that image nearly the entire Earth’s land surface every day, trading some spatial resolution for unmatched temporal coverage.

How Satellite Imagery Is Used

In agriculture, satellite data helps farmers monitor crop conditions across thousands of acres without physically scouting every field. Vegetation indices reveal which fields are thriving, which are water-stressed, and where fertilizer application might need adjusting. Over a growing season, repeated images build a picture of how crops develop, letting growers make decisions weeks earlier than they could from ground observation alone.

Disaster response teams rely on commercial and government imagery to assess damage rapidly after hurricanes, earthquakes, wildfires, and floods. During the January 2025 Eaton Fire in Los Angeles, Maxar’s high-resolution imagery was used for precise damage assessment, helping responders prioritize areas and allocate resources.

Urban planners use satellite data to study how cities grow and how that growth affects local temperatures. Researchers have used satellite-derived surface temperature measurements to evaluate how parks, green roofs, and tree canopy cool surrounding neighborhoods. A study in Zurich demonstrated that satellite imagery could quantify the cooling effect of different types of urban greenery and estimate how long new plantings take to reach their full cooling potential. This kind of analysis helps cities prioritize where to invest in green infrastructure as temperatures rise.

Climate scientists track ice sheet retreat, deforestation rates, sea level indicators, and atmospheric composition using satellite records that now span decades. The long, consistent archive from programs like Landsat makes it possible to detect slow-moving changes that would be invisible in any single year’s data.

AI and Automated Analysis

The volume of satellite imagery collected today far exceeds what humans can analyze manually. Machine learning models, particularly deep learning systems trained on labeled images, now automate tasks like identifying buildings, classifying land cover types, detecting changes between images, and even estimating socioeconomic conditions. Researchers have used object detection algorithms to count features in satellite images (rooftop types, road quality, vehicle density) and feed those counts into models that predict poverty levels at a local scale.
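The counts-to-prediction step can be as simple as a linear regression. This is a hypothetical sketch of the idea, not any published model: the feature counts and wealth index values are invented, and real studies use far more regions and richer features.

```python
import numpy as np

# Hypothetical per-region counts extracted from imagery by an object
# detector: [metal rooftops, paved-road pixels, vehicles].
counts = np.array([
    [120, 340, 15],
    [ 40, 120,  3],
    [200, 500, 40],
    [ 80, 210,  9],
])
wealth = np.array([0.6, 0.2, 0.9, 0.4])  # made-up wealth index per region

# Ordinary least squares with an intercept column.
X = np.column_stack([np.ones(len(counts)), counts])
coef, *_ = np.linalg.lstsq(X, wealth, rcond=None)
pred = X @ coef
print(pred.round(2))
```

The appeal of a linear model here is that its coefficients are directly inspectable, which connects to the interpretability problem discussed below: deep models trade that transparency for accuracy.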

The challenge with these systems is interpretability. A model might accurately predict which neighborhoods are poorest, but understanding which visual features drive that prediction is harder. Techniques like heat maps that highlight which parts of an image most influence the model’s output are helping researchers connect the algorithm’s decisions back to real-world factors, turning a black-box prediction into something analysts and policymakers can reason about.