A satellite map is an image of Earth’s surface captured by sensors on orbiting satellites, then processed and stitched together so you can view it like a traditional map. Unlike a drawn or illustrated map, a satellite map shows you actual photographs of the ground, including buildings, roads, forests, rivers, and terrain as they appear from space. The images you see on platforms like Google Earth are composites built from thousands of these satellite photographs.
How Satellites Capture Images of Earth
Satellite mapping is a form of remote sensing, which the U.S. Geological Survey defines as detecting and monitoring the physical characteristics of an area by measuring its reflected and emitted radiation at a distance. In simpler terms, satellites carry specialized cameras and sensors that record light bouncing off (or heat radiating from) the planet’s surface, then transmit that data back to ground stations.
There are two main types of sensors. Passive sensors pick up natural energy, mostly sunlight reflecting off the ground or heat the Earth emits on its own. These work similarly to your eyes or a digital camera. Active sensors, like radar instruments, send out their own signal and measure what bounces back. Radar can penetrate clouds and work at night, which makes it useful for mapping regions with heavy cloud cover or for tracking rainfall and terrain elevation.
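The core idea behind an active sensor can be shown with a few lines of arithmetic. This is an illustrative sketch, not any real instrument's interface: a radar pulse travels to the ground and back at the speed of light, so the distance to the surface falls out of the round-trip delay.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def range_from_echo(delay_seconds: float) -> float:
    """Distance to the target from the round-trip delay of a radar pulse."""
    # The pulse travels out and back, so the one-way distance is half the path.
    return C * delay_seconds / 2.0

# A satellite 700 km above the ground sees its pulse return after
# roughly 4.7 milliseconds.
delay = 2 * 700_000 / C
print(f"{range_from_echo(delay) / 1000:.0f} km")  # 700 km
```

Because the sensor supplies its own signal, this timing works equally well at night or through clouds, which is the practical advantage noted above.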
Orbit Determines Image Quality
Most Earth-imaging satellites fly in low Earth orbit, generally between 180 and 2,000 kilometers above the surface. Being this close allows them to capture high-resolution images where fine details like individual buildings or cars are visible. The tradeoff is that a low-orbiting satellite only sees a narrow strip of ground on each pass, so it takes many orbits to cover large areas.
Weather satellites, by contrast, often sit in geostationary orbit at about 35,786 kilometers up. At that altitude, a satellite matches Earth’s rotation and stays fixed over one spot, giving it a wide view of an entire hemisphere. That’s ideal for tracking weather patterns in real time, but the images are far less detailed than what a low-orbit satellite produces. This is why the satellite maps you browse online come from low-orbit missions, while weather maps come from geostationary ones.
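The altitudes quoted above are not arbitrary: they follow from Kepler's third law, which ties a circular orbit's period to its distance from Earth's center. The sketch below uses standard physical constants to recover both numbers, a roughly 99-minute revolution for a typical low Earth orbit and the ~35,786 km geostationary altitude.

```python
import math

MU = 3.986004418e14    # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6_378_137.0  # Earth's equatorial radius, m

def period_seconds(altitude_m: float) -> float:
    """Orbital period of a circular orbit at the given altitude (Kepler's third law)."""
    a = R_EARTH + altitude_m  # semi-major axis = Earth radius + altitude
    return 2 * math.pi * math.sqrt(a**3 / MU)

def geostationary_altitude_m() -> float:
    """Altitude where the period equals one sidereal day (86,164 s)."""
    t = 86_164.1
    a = (MU * t**2 / (4 * math.pi**2)) ** (1 / 3)
    return a - R_EARTH

print(f"{period_seconds(700_000) / 60:.0f} min per orbit at 700 km")   # ~99 min
print(f"geostationary altitude: {geostationary_altitude_m() / 1000:.0f} km")  # ~35,786 km
```

A satellite circling every hour and a half necessarily sweeps across the ground rather than hovering, which is why low-orbit imagers cover Earth strip by strip while a geostationary satellite stares at one hemisphere.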
What Resolution Actually Means
When people talk about satellite image quality, they’re usually referring to spatial resolution: the size of the smallest feature a sensor can detect. A spatial resolution of 250 meters means each pixel in the image represents a 250-by-250-meter patch of ground. Commercial satellites today can achieve resolutions well under one meter, meaning a single pixel covers less than a square meter of surface area.
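The pixel-to-ground relationship is simple arithmetic, and working through it shows why resolution matters so much for file sizes and detail. This is back-of-envelope illustration only:

```python
import math

def ground_area_per_pixel_m2(resolution_m: float) -> float:
    """Ground area represented by one pixel at a given spatial resolution."""
    return resolution_m ** 2

def pixels_to_cover(width_km: float, height_km: float, resolution_m: float) -> int:
    """How many pixels an image needs to span a rectangular ground area."""
    cols = math.ceil(width_km * 1000 / resolution_m)
    rows = math.ceil(height_km * 1000 / resolution_m)
    return cols * rows

print(ground_area_per_pixel_m2(250))   # 62500 m^2 per pixel at 250 m resolution
print(pixels_to_cover(10, 10, 250))    # a 10 km square at 250 m: 1600 pixels
print(pixels_to_cover(10, 10, 0.5))    # the same square at 0.5 m: 400,000,000 pixels
```

Going from 250-meter to half-meter resolution multiplies the pixel count for the same area by a factor of 250,000, which is why high-resolution imagery is captured in narrow strips rather than hemisphere-wide views.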
Two other types of resolution matter for mapping. Spectral resolution describes how many wavelengths of light a sensor can distinguish. Human eyes see red, green, and blue light. Satellite sensors can often detect dozens of wavelength bands, including infrared and ultraviolet, which reveal information invisible to us. Temporal resolution is simply how often a satellite revisits the same location. A satellite that passes over the same spot every few days has high temporal resolution, useful for monitoring fast-changing events like floods or wildfires.
True Color vs. False Color Images
The satellite maps you’re most familiar with are true-color images. These combine measurements of red, green, and blue light to produce a picture that looks the way Earth would appear to your eyes from space: green forests, brown deserts, blue oceans.
False-color images swap in wavelengths outside the visible spectrum, like infrared, and display them using visible colors. The result looks unnatural (vegetation might appear bright red or neon green), but it reveals things a normal photograph cannot. For example, a false-color combination using shortwave infrared, near infrared, and green light makes water appear black, flood-saturated soil appear blue, and burned land appear red. Scientists use these combinations to monitor wildfires, floods, deforestation, and crop health in ways that true-color images simply can’t show.
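Building a false-color composite amounts to routing measured bands into display channels. The sketch below uses made-up reflectance values for three surface types (real imagery supplies one measured value per band per pixel) and applies the shortwave-infrared/near-infrared/green combination described above.

```python
# Each "pixel" records reflectance (0..1) in three bands; the values here
# are invented for illustration, loosely mimicking typical surface responses.
pixels = {
    "open water":   {"swir": 0.02, "nir": 0.03, "green": 0.05},
    "healthy crop": {"swir": 0.15, "nir": 0.55, "green": 0.12},
    "burned land":  {"swir": 0.45, "nir": 0.10, "green": 0.06},
}

def false_color(p):
    """Map SWIR -> red channel, NIR -> green channel, green -> blue channel."""
    to_byte = lambda x: round(min(max(x, 0.0), 1.0) * 255)
    return (to_byte(p["swir"]), to_byte(p["nir"]), to_byte(p["green"]))

for name, bands in pixels.items():
    r, g, b = false_color(bands)
    print(f"{name:12s} -> RGB({r}, {g}, {b})")
# Water comes out nearly black (low in every band), vegetation bright green
# (strong NIR), and burned land reddish (strong SWIR), matching the
# combination described above.
```

The "false" part is only the channel assignment: the data is real measured radiation, displayed in colors our eyes can see.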
From Raw Image to Usable Map
A raw satellite photograph isn’t immediately useful as a map. The camera captures images at slight angles, and terrain features like mountains distort where things appear to be located. Before the image can be layered into a mapping application, it goes through a correction process called orthorectification. This adjusts every pixel so it represents an accurate ground position, as if the camera were looking straight down at every point simultaneously. The process requires a detailed terrain model and precise knowledge of the satellite’s position and sensor characteristics. Only after this correction can satellite images be overlaid with roads, borders, and other map data and line up accurately.
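A toy model makes the distortion concrete. Real orthorectification uses a full sensor model and a digital elevation model, but the core geometric effect it corrects, relief displacement, reduces to one formula: terrain at elevation h, viewed at an angle theta away from straight down, appears shifted horizontally by h times tan(theta).

```python
import math

def relief_displacement_m(elevation_m: float, off_nadir_deg: float) -> float:
    """Apparent horizontal shift of a raised point in an uncorrected image.

    Simplified flat-Earth model: shift = elevation * tan(viewing angle
    off nadir). Real pipelines model the full sensor geometry.
    """
    return elevation_m * math.tan(math.radians(off_nadir_deg))

# A 1,000 m mountain imaged 20 degrees off-nadir appears about 364 m away
# from its true ground position until orthorectification moves it back.
print(f"{relief_displacement_m(1000, 20):.0f} m")  # 364 m
```

At sub-meter resolutions, a shift of hundreds of meters would put a summit thousands of pixels from its true location, which is why uncorrected imagery cannot be layered under road and boundary data.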
On platforms like Google Earth, the satellite imagery you see is typically one to three years old. Some areas, particularly major cities and regions of strategic interest, get updated more frequently. Rural or remote areas may show imagery that's several years old. The images are mosaics, meaning they're assembled from photos taken on different dates and sometimes by different satellites, which is why you occasionally notice color shifts or seasonal mismatches when scrolling across a map.
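Mosaic assembly can be sketched as a per-tile selection problem. The rule below, prefer the clearest capture and break ties with the newest date, is an assumption for illustration, not any specific platform's actual logic:

```python
from datetime import date

# Hypothetical candidate captures for one map tile; field names are
# invented for this sketch.
candidates = {
    "tile_42": [
        {"captured": date(2022, 6, 1),  "cloud_pct": 40},
        {"captured": date(2023, 9, 15), "cloud_pct": 5},
        {"captured": date(2024, 1, 3),  "cloud_pct": 5},
    ],
}

def pick_best(images):
    """Prefer the least cloudy capture; break ties with the most recent date."""
    return min(images, key=lambda img: (img["cloud_pct"], -img["captured"].toordinal()))

best = pick_best(candidates["tile_42"])
print(best["captured"], best["cloud_pct"])  # 2024-01-03 5
```

Because neighboring tiles can win with captures from different dates and seasons, a rule like this naturally produces the color shifts and seasonal mismatches visible at mosaic seams.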
The Landsat Legacy
The modern era of satellite mapping began in 1972 with the launch of Landsat 1, a joint mission between NASA and the U.S. Geological Survey. The Landsat program now offers the longest continuous global record of Earth’s surface, spanning over five decades. Landsat 9, the most recent in the series, launched in September 2021. NASA Administrator James Fletcher once predicted that if one space-age development would save the world, it would be Landsat and its successors.
What made Landsat transformative was its consistency. Because the satellites captured calibrated images of the same locations over and over for decades, scientists could track changes in forests, ice sheets, cities, and coastlines year by year. In 2008, the USGS made the entire Landsat archive free and open to anyone, which dramatically expanded its use in research, urban planning, and environmental monitoring.
Practical Uses Beyond Navigation
Most people encounter satellite maps when looking up directions or exploring a location online, but the technology's biggest impact is in applications you never see directly. In agriculture, satellite data tracks crop health, soil moisture, and drought conditions across entire regions. NASA scientists lead an international effort called the GEOGLAM Crop Monitor, which uses satellite-derived indicators to assess crop conditions for the G20 nations. That information influences commodity prices and food security policies worldwide.
Environmental scientists use satellite maps to monitor deforestation, track glacier retreat, measure urban sprawl, and assess damage after natural disasters. City planners use high-resolution imagery to update land-use maps without sending survey crews into the field. Insurance companies assess flood and wildfire risk. Military and intelligence agencies rely on satellite imagery for reconnaissance. As of 2025, roughly 11,000 active satellites orbit Earth according to the European Space Agency, and a growing share of those are dedicated to Earth observation.
For everyday users, the key thing to understand is that a satellite map is not a single snapshot. It’s a carefully assembled product built from data collected by multiple satellites at different times, corrected for distortion, color-balanced, and stitched into a seamless view. The result feels like looking through a window, but it’s closer to a highly processed composite that represents the best available image of each location on Earth.

