A memory map is a table that assigns every piece of hardware, memory chip, and device in a computer system to a specific range of addresses. It tells the processor exactly what lives at each address, so when the CPU reads from or writes to a particular location, the system knows whether that request should go to RAM, a storage chip, a graphics card, or another peripheral. Think of it as a building directory: instead of listing office numbers and tenants, it lists address ranges and the hardware that responds to them.
The term shows up in several overlapping contexts, from chip design to operating systems to software debugging. All of them share the same core idea: organizing a flat range of numbers so that every address points somewhere meaningful.
How a Processor Uses a Memory Map
A processor communicates with the outside world by sending and receiving data at numbered addresses. The memory map is what gives those numbers meaning. When the CPU issues a read at address 0x20000000, for example, the memory map determines that this address belongs to the on-chip SRAM rather than flash storage or a USB controller. On an ARM Cortex-M processor, the standard layout reserves addresses 0x00000000 through 0x1FFFFFFF for program code (typically flash memory) and 0x20000000 through 0x3FFFFFFF for fast SRAM.
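The address-decoding step can be sketched in C. The region boundaries below are the standard Cortex-M ones just described; the function and its return strings are illustrative, not part of any vendor API.

```c
#include <stdint.h>

/* Decode an address against the standard ARM Cortex-M layout described
   above. Region names are illustrative. */
const char *region_for(uint32_t addr) {
    if (addr <= 0x1FFFFFFFu)
        return "code (flash)";
    if (addr >= 0x20000000u && addr <= 0x3FFFFFFFu)
        return "SRAM";
    return "other/unmapped";
}

/* region_for(0x08000000u) -> "code (flash)"
   region_for(0x20000000u) -> "SRAM" */
```

On real silicon this decision is made by the bus interconnect in hardware, not by software, but the logic is the same comparison against fixed ranges.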
If nothing is mapped at a given address, the access typically raises a fault (a bus error, on many architectures). If the address falls within a mapped range, the request gets forwarded to the correct destination. The device sitting at that address doesn’t need to know where it appears in the overall map. It just sees a local read or write and responds accordingly. This means the memory map can be reconfigured (remapping a device to a different address range, for instance) without changing the device itself.
Memory-Mapped I/O
One of the most practical consequences of a memory map is that hardware peripherals, not just RAM, get their own address ranges. This approach is called memory-mapped I/O. A temperature sensor, a display controller, or a network adapter each occupies a slice of the address space, and the CPU talks to them using the same read and write instructions it uses for ordinary memory. In a language like C, you can point directly at a hardware register’s address and interact with the device as if it were a variable in memory.
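Here is a minimal sketch of that idea in C. The base address, register layout, and names below are hypothetical, not taken from any real chip's datasheet.

```c
#include <stdint.h>

/* Hypothetical peripheral register block -- the base address and bit
   layout are illustrative, not from any real chip's datasheet. */
#define GPIO_BASE 0x48000000u

typedef struct {
    volatile uint32_t mode;  /* pin configuration bits */
    volatile uint32_t odr;   /* output data register */
} gpio_regs;

/* Drive one output pin high with an ordinary store; the bus fabric,
   not a special instruction, routes the write to the peripheral. */
void set_pin(gpio_regs *gpio, unsigned pin) {
    gpio->odr |= (1u << pin);
}

/* On real hardware you would call:  set_pin((gpio_regs *)GPIO_BASE, 5); */
```

The `volatile` qualifier matters here: it tells the compiler every access has a side effect on hardware and must not be cached or optimized away.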
The alternative, called port-mapped I/O, keeps device addresses in a completely separate space that requires special instructions (the “in” and “out” instructions on x86 processors). Memory-mapped I/O is simpler because it collapses everything into one unified address space, and most modern architectures rely on it heavily.
Virtual Memory Maps
Modern operating systems add another layer on top of the physical memory map. Every running program sees its own private set of addresses, called virtual addresses, that don’t correspond directly to physical RAM. A hardware component called the memory management unit (MMU) translates between the two on the fly.
The translation works through page tables. The system divides all memory into fixed-size pages, typically 4,096 bytes each. A virtual address gets split into two parts: the page number (the upper bits) and the offset within that page (the lower 12 bits for a 4,096-byte page). The MMU looks up the virtual page number in a table, finds the matching physical page number, and combines it with the offset to produce the real physical address. This happens automatically every time a program loads or stores data.
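The split-and-recombine arithmetic can be shown with a toy single-level table. Real page tables are multi-level structures with permission bits; this sketch keeps only the address math.

```c
#include <stdint.h>

#define PAGE_SHIFT 12                   /* 4,096-byte pages */
#define PAGE_SIZE  (1u << PAGE_SHIFT)

/* Toy single-level page table: index = virtual page number,
   value = physical page number. Real tables are multi-level. */
uint32_t translate(const uint32_t *page_table, uint32_t vaddr) {
    uint32_t vpn    = vaddr >> PAGE_SHIFT;      /* upper bits */
    uint32_t offset = vaddr & (PAGE_SIZE - 1);  /* lower 12 bits */
    uint32_t ppn    = page_table[vpn];          /* table lookup */
    return (ppn << PAGE_SHIFT) | offset;        /* recombine */
}
```

With a table mapping virtual page 1 to physical page 3, the virtual address 0x1ABC translates to 0x3ABC: same offset, different page.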
This layer of indirection is what lets your computer run dozens of programs simultaneously without them stepping on each other’s memory. Each process believes it has the entire address space to itself.
Segments Inside a Process
Within a single program’s virtual memory map, space is divided into well-defined segments:
- Text: the compiled machine code the program is actually executing.
- Data: global variables, split into initialized values and uninitialized values (the latter sometimes called BSS).
- Heap: memory the program allocates dynamically at runtime, growing upward from lower addresses.
- Stack: local variables and function call information, growing downward from higher addresses.
The text, data, and heap segments sit at lower addresses, while the stack occupies higher addresses. The gap between the heap and stack gives both room to grow. If they collide, the program has run out of memory.
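You can observe this layout from inside a program by printing the address of something in each segment. The exact numbers vary between runs because of address-space layout randomization, so no specific output is shown.

```c
#include <stdio.h>
#include <stdlib.h>

int initialized_global = 42;   /* data segment */
int uninitialized_global;      /* BSS */

/* Print a sample address from each segment of this process's virtual
   memory map. With address-space layout randomization the numbers
   change between runs, but the relative layout matches the list above. */
void print_segments(void) {
    int local = 0;                           /* stack */
    int *dynamic = malloc(sizeof *dynamic);  /* heap */

    printf("text (code): %p\n", (void *)print_segments);
    printf("data:        %p\n", (void *)&initialized_global);
    printf("bss:         %p\n", (void *)&uninitialized_global);
    printf("heap:        %p\n", (void *)dynamic);
    printf("stack:       %p\n", (void *)&local);

    free(dynamic);
}
```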
Security Through Page Permissions
Each entry in a page table doesn’t just store an address translation. It also carries permission bits that control what a program can do with that page of memory. Pages can be marked as readable, writable, executable, or any combination.
Most ordinary memory your program allocates (through “new” or “malloc”) is marked read-write but not executable. This is a deliberate security feature. If an attacker manages to inject malicious code into your program’s data, the processor will refuse to run it because the page isn’t marked as executable. This protection, sometimes called the NX (No-Execute) bit, only became standard on x86 processors during the transition to 64-bit architectures. Before that, read and execute permissions were merged, leaving a significant security gap.
Pages can even be marked with no access at all. Debugging tools use this trick to catch programs that accidentally read or write to memory they shouldn’t touch. Any access to a no-access page immediately triggers a fault, making the bug easy to find.
How the OS Discovers the Memory Map
On a desktop or server PC, the operating system doesn’t automatically know how much RAM is installed or which address ranges are usable. During startup, the BIOS or UEFI firmware provides this information through a system address map. On x86 systems, the classic mechanism is a firmware call (INT 15h, E820h) that the OS invokes repeatedly, each call returning a single entry describing one contiguous range of physical addresses. Each entry includes a base address, a length in bytes, and a type code indicating whether the range is usable RAM, reserved by the firmware, or something else entirely.
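Each E820 entry is a small fixed-layout record. A sketch of its classic 20-byte form, plus a helper that totals the usable RAM, might look like this (the helper function is illustrative, not part of any firmware interface):

```c
#include <stdint.h>

/* Layout of one classic E820 entry (20 bytes) as returned by the
   INT 15h, AX=E820h firmware call. Type 1 = usable RAM, 2 = reserved;
   other codes exist (e.g. ACPI-related ranges). */
struct e820_entry {
    uint64_t base;    /* start of the physical range */
    uint64_t length;  /* size of the range in bytes */
    uint32_t type;    /* what the range is */
} __attribute__((packed));

/* Total up all usable RAM reported by the firmware. */
uint64_t usable_bytes(const struct e820_entry *map, int count) {
    uint64_t total = 0;
    for (int i = 0; i < count; i++)
        if (map[i].type == 1)
            total += map[i].length;
    return total;
}
```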
The operating system collects all these entries to build its picture of physical memory before it starts managing pages and launching programs.
Memory Maps in Embedded Development
For embedded developers writing firmware for microcontrollers, the memory map is something you configure directly. A linker script defines exactly where in the address space each piece of your compiled code and data should live. You declare memory regions with explicit start addresses and sizes, then assign sections of your program to those regions.
A typical linker script might declare a flash region starting at address 0x08000000 with 512 KB of space, and an SRAM region starting at 0x20000000 with 128 KB. The compiled program code gets placed into flash, while variables go into SRAM. Getting this wrong means your firmware either won’t boot or will corrupt its own data, so the memory map is one of the first things an embedded engineer defines for a new chip.
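A GNU ld linker script expressing exactly that layout might look like the following sketch. The sizes and addresses mirror the example above; a real script for a specific chip would add more sections (vector table, stack placement, initialization symbols).

```ld
/* Sketch of a GNU ld memory layout; addresses and sizes are the
   example values above and are chip-specific in practice. */
MEMORY
{
  FLASH (rx)  : ORIGIN = 0x08000000, LENGTH = 512K
  SRAM  (rwx) : ORIGIN = 0x20000000, LENGTH = 128K
}

SECTIONS
{
  .text : { *(.text*) } > FLASH            /* code lives in flash */
  .data : { *(.data*) } > SRAM AT > FLASH  /* initialized variables */
  .bss  : { *(.bss*)  } > SRAM             /* zero-initialized variables */
}
```

The `AT > FLASH` on `.data` reflects a common embedded pattern: initialized values are stored in flash and copied into SRAM by startup code before `main` runs.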
Viewing a Memory Map on a Running System
On Linux, you can inspect the memory map of any running process. The file at /proc/PID/smaps (where PID is the process ID number) contains a detailed breakdown of every mapped region, including its address range, permissions, and how much physical memory it actually consumes. The command-line tool pmap reads this file and formats it into something more readable. Running “pmap -x” followed by a process ID gives you a detailed view of each segment, its size, and its access permissions.
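A program can even read its own map the same way. This Linux-specific sketch prints the first few lines of `/proc/self/maps`; each line shows an address range, its permissions, and the file (if any) backing it.

```c
#include <stdio.h>

/* Linux-specific: print the first max_lines lines of this process's
   own memory map from the /proc filesystem. Returns the number of
   lines printed, or -1 if /proc is unavailable. */
int print_own_map(int max_lines) {
    FILE *f = fopen("/proc/self/maps", "r");
    if (!f)
        return -1;

    char line[512];
    int printed = 0;
    while (printed < max_lines && fgets(line, sizeof line, f)) {
        fputs(line, stdout);   /* e.g. "...-... r-xp ... /usr/lib/..." */
        printed++;
    }
    fclose(f);
    return printed;
}
```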
This is useful when diagnosing memory leaks or understanding why a program is consuming more RAM than expected. You can see exactly which libraries are loaded, how large the heap has grown, and whether any regions have unusual permissions.