What Is Hex? The Number System Behind Computers

Hex, short for hexadecimal, is a base-16 number system used throughout computing to represent data in a compact, human-readable way. Instead of the 10 digits you’re used to in everyday math (0 through 9), hex uses 16 symbols: the digits 0 through 9 plus the letters A through F, where A represents 10, B represents 11, and so on up to F for 15. If you’ve ever seen a color code like #FF5733 on a website or a memory address starting with 0x in a programming context, you’ve already encountered hex.

Why Computers Use Hex

Computers process everything in binary, which is just long strings of 1s and 0s. Binary is great for machines but terrible for humans. A single byte of data looks like 11010110 in binary, which is hard to read, easy to mistype, and nearly impossible to memorize. Hex solves this by compressing binary into something more manageable.

The key reason hex works so well is that each hex digit maps perfectly to exactly four binary digits (called a nibble). That means one byte, which is 8 binary digits, can always be written as exactly two hex characters. The binary string 11010110 becomes D6 in hex. This clean mapping of four bits to one hex digit makes converting between hex and binary fast and error-free, which is why programmers, network engineers, and hardware designers rely on it constantly.
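To make the nibble mapping concrete, here is a short Python sketch that splits a byte into its two nibbles and converts each one to a hex digit (the variable names are illustrative):

```python
bits = "11010110"  # one byte, as a string of binary digits

# Split into 4-bit nibbles, then map each nibble to one hex digit.
nibbles = [bits[i:i + 4] for i in range(0, len(bits), 4)]
hex_digits = "".join(format(int(nibble, 2), "X") for nibble in nibbles)

print(nibbles)     # ['1101', '0110']
print(hex_digits)  # D6
```

Because each nibble converts independently, the same loop works for any number of bytes.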

How Hex Counting Works

In decimal (base 10), each position in a number is worth 10 times as much as the one to its right. The number 253 means (2 × 100) + (5 × 10) + (3 × 1). Hex works the same way, but each position is worth 16 times as much as the one to its right.

To convert a hex value to a regular decimal number, you multiply each digit by its positional power of 16, starting from the right at position zero. Take the hex number 1A3:

  • 3 is in position 0: 3 × 1 = 3
  • A (10) is in position 1: 10 × 16 = 160
  • 1 is in position 2: 1 × 256 = 256

Add those up and you get 419 in decimal. The system scales the same way for any length of hex number. Once you internalize that A=10, B=11, C=12, D=13, E=14, and F=15, reading hex becomes straightforward.
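The positional method above translates directly into code. This Python sketch walks the digits from right to left, multiplying each by its power of 16 (the function name is hypothetical):

```python
HEX_DIGITS = "0123456789ABCDEF"

def hex_to_decimal(hex_str):
    """Multiply each digit by its positional power of 16, rightmost first."""
    total = 0
    for position, digit in enumerate(reversed(hex_str.upper())):
        total += HEX_DIGITS.index(digit) * 16 ** position
    return total

print(hex_to_decimal("1A3"))  # 419
print(hex_to_decimal("FF"))   # 255
```

In practice you would just call Python's built-in `int("1A3", 16)`, but spelling out the loop shows exactly where the 3, 160, and 256 in the example come from.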

How to Recognize Hex in the Wild

Because hex digits include regular numbers, you need some way to tell hex apart from ordinary decimal values. Different contexts use different markers. In programming languages like C, C++, Java, and Python, hex numbers get the prefix 0x. So 0x200 means “200 in hexadecimal,” which is 512 in decimal. Some assembly languages use the suffix “h” instead, writing it as 0200h. In web design and CSS, hex values are preceded by a # sign, as in #FFFFFF for white.
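Python, for example, understands the 0x prefix directly, and its built-ins convert between hex strings and numbers:

```python
value = 0x200            # the 0x prefix marks a hex literal
print(value)             # 512
print(int("200", 16))    # 512 -- parse a bare hex string, base 16
print(hex(512))          # 0x200 -- convert a number back to hex notation
```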

Hex Color Codes

This is probably the most common place non-programmers encounter hex. Every color on a screen is a mix of red, green, and blue light, and hex color codes represent the intensity of each using a six-character string in the format #RRGGBB. Each pair ranges from 00 (none of that color) to FF (maximum intensity, which equals 255 in decimal).

The color code #FF0000 is pure red: red is maxed out at FF while green and blue are both 00. Flip it to #00FF00 and you get pure green. #000000 is black (all channels off), and #FFFFFF is white (all channels at full brightness). A color like #FF5733 gives you a strong red (FF), moderate green (57), and low blue (33), producing a warm orange-red. Designers and developers use these codes daily because they’re precise, compact, and universally supported in browsers.
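A #RRGGBB string is just three hex bytes in a row, so splitting it into channel intensities takes only a few lines. A minimal Python sketch (the function name is illustrative):

```python
def parse_hex_color(code):
    """Split a #RRGGBB string into (red, green, blue) intensities, 0-255."""
    code = code.lstrip("#")
    return tuple(int(code[i:i + 2], 16) for i in (0, 2, 4))

print(parse_hex_color("#FF5733"))  # (255, 87, 51)
print(parse_hex_color("#000000"))  # (0, 0, 0)
```

Running it on #FF5733 confirms the breakdown in the text: red maxed at 255, green at a moderate 87, blue at a low 51.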

Memory Addresses and Debugging

When programmers inspect what’s happening inside a running program, they see memory addresses and data values displayed in hex. A memory address like 0x7FFF5FBFF8AC tells you exactly where a piece of data lives in the computer’s memory. In decimal, that same address would be 140,734,799,804,588, which is far harder to scan or compare at a glance.

Debugging tools display raw memory contents as rows of hex pairs, with each pair representing one byte. This makes it possible to spot patterns, identify corrupted data, or trace program behavior without drowning in binary. A block of 512 bytes that would take 4,096 binary characters to display fits neatly into 1,024 hex characters, organized as pairs you can read byte by byte.
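The rows-of-hex-pairs display is easy to reproduce. Here is a minimal hex-dump sketch in Python, assuming the common layout of an offset column followed by up to 16 bytes per row:

```python
def hex_dump(data, width=16):
    """Format raw bytes as rows of two-character hex pairs with an offset column."""
    lines = []
    for offset in range(0, len(data), width):
        row = data[offset:offset + width]
        pairs = " ".join(f"{byte:02X}" for byte in row)
        lines.append(f"{offset:08X}  {pairs}")
    return lines

for line in hex_dump(b"Hello, hex world!"):
    print(line)
```

Note that the offsets themselves are printed in hex too, which is the convention in real debuggers and hex editors.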

Hex in Networking

Network hardware uses hex extensively. MAC addresses, the unique identifiers burned into every network device, are written as six pairs of hex digits separated by colons or hyphens, like 00:1A:2B:3C:4D:5E. Each pair represents one byte, giving a 48-bit address in a format compact enough to print on a label.
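Each colon-separated pair in a MAC address is one byte, which a short Python snippet can verify using the example address from the text:

```python
mac = "00:1A:2B:3C:4D:5E"

# Each colon-separated pair is one byte (two hex digits).
octets = [int(pair, 16) for pair in mac.split(":")]
print(octets)            # [0, 26, 43, 60, 77, 94]
print(len(octets) * 8)   # 48 -- six bytes give a 48-bit address
```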

IPv6 addresses, the newer version of internet addresses, are written entirely in hex. An IPv6 address is 128 bits long, split into eight groups of four hex characters separated by colons. A typical address looks like 2001:0db8:3c4d:0015:0000:0000:1a2f:1a2b. The first 48 bits identify the network site, the next 16 bits specify a subnet, and the final 64 bits identify the specific device. Without hex, these addresses would be unwieldy strings of decimal numbers or impossibly long binary sequences.
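Python's standard ipaddress module handles the hex grouping for IPv6, including the convention of collapsing runs of zero groups into a double colon:

```python
import ipaddress

addr = ipaddress.IPv6Address("2001:0db8:3c4d:0015:0000:0000:1a2f:1a2b")
print(addr.exploded)    # full form: eight groups of four hex characters
print(addr.compressed)  # zero groups collapsed: 2001:db8:3c4d:15::1a2f:1a2b
print(int(addr))        # the same address as a single 128-bit integer
```

The last line shows why hex notation exists at all: the underlying value is one enormous 128-bit number, which would be unreadable in decimal.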

Hex vs. Binary vs. Decimal

These three systems represent the same values in different ways, and each has a natural home. Decimal is what you use in daily life. Binary is what the hardware actually processes. Hex sits in between as a translation layer: close enough to binary for technical precision, readable enough for humans to work with efficiently.

A quick comparison makes this concrete. The decimal number 255 is 11111111 in binary and FF in hex. The decimal number 4,096 is 1000000000000 in binary (13 digits) and just 1000 in hex. As values get larger, the compression hex provides over binary becomes even more dramatic, which is exactly why it became the standard shorthand across computing, from low-level chip design to the colors on your screen.
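Python's format specifiers render the same value in all three bases, which makes the comparison above easy to check for any number:

```python
# Render the comparison values from the text in decimal, binary, and hex.
for value in (255, 4096):
    print(f"decimal {value}  binary {value:b}  hex {value:X}")
```

Each hex digit replaces four binary digits, so the gap in length between the binary and hex columns widens as the values grow.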