Two’s complement is the standard way computers represent positive and negative whole numbers in binary. Every modern processor uses it because it lets the same hardware circuit perform both addition and subtraction, which saves space, power, and cost on the chip. If you’ve ever wondered how a computer knows the difference between 5 and negative 5 when everything is just 1s and 0s, two’s complement is the answer.
How It Works
In regular binary, each bit position represents a positive power of two. The rightmost bit is worth 1, the next is worth 2, then 4, 8, and so on. Two’s complement keeps all of that the same except for one change: the leftmost bit (called the most significant bit) gets a negative weight instead of a positive one.
In an 8-bit system, for example, the leftmost bit is worth negative 128 instead of positive 128. The remaining seven bits still carry their normal positive values (64, 32, 16, 8, 4, 2, 1). To find the decimal value of any two’s complement number, you multiply each bit by its weight and add everything up.
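This weighting scheme is easy to check in code. Here's a small Python sketch (the function name `value_from_bits` is just illustrative) that decodes a bit string by giving the leftmost bit a negative weight and every other bit its usual positive weight:

```python
def value_from_bits(bits):
    """Interpret a bit string as an N-bit two's complement integer.

    The leftmost bit carries a negative weight of -2^(N-1); every
    other bit carries its normal positive power-of-two weight.
    """
    n = len(bits)
    total = 0
    for i, bit in enumerate(bits):
        weight = 2 ** (n - 1 - i)
        if i == 0:
            weight = -weight  # the most significant bit is negative
        total += int(bit) * weight
    return total

print(value_from_bits("00000101"))  # 5
print(value_from_bits("11111011"))  # -5
print(value_from_bits("10000000"))  # -128
```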
This means positive numbers always start with a 0 in the leftmost position, and negative numbers always start with a 1. That leftmost bit acts as a sign indicator, but it’s not just a flag. It actively contributes a negative value to the total, which is what makes the math work cleanly.
A Walkthrough With Real Numbers
Take the number 5 in an 8-bit system. In binary, that’s 00000101 (4 + 1 = 5). Simple enough.
Now suppose you want negative 5. The process has two steps: flip every bit, then add one. Flipping the bits of 00000101 gives you 11111010. Adding one produces 11111011. That’s negative 5 in two’s complement. You can verify it using the weights: the leftmost 1 contributes negative 128, and the remaining bits add up to 123 (64 + 32 + 16 + 8 + 2 + 1). Negative 128 plus 123 equals negative 5.
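The flip-and-add-one procedure maps directly onto code. A minimal Python sketch, using a bitmask to stay within 8 bits (the function name is illustrative):

```python
def negate_8bit(x):
    """Two's complement negation: flip every bit, then add one.
    The mask keeps the result within 8 bits."""
    return (~x + 1) & 0xFF

five = 0b00000101
neg_five = negate_8bit(five)
print(format(neg_five, '08b'))  # 11111011
```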
This “flip and add one” shortcut works because negating a number is mathematically the same as subtracting it from zero. When you subtract a binary number from a string of all 1s, you’re inverting every bit. Adding one after that completes the subtraction from the next power of two, which is exactly what two’s complement is defined as: the value subtracted from 2^N, where N is the number of bits.
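You can confirm this equivalence directly: flipping the bits of 5 and adding one yields the same 8-bit pattern as subtracting 5 from 2^8 = 256.

```python
x = 5
n = 8
flip_add_one = (~x + 1) & 0xFF        # invert the bits, add one
subtract_from_2n = (2 ** n - x) % (2 ** n)  # the textbook definition
print(flip_add_one, subtract_from_2n)  # 251 251, i.e. 0b11111011 both ways
```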
Why Not Just Use a Sign Bit?
The most intuitive approach to negative numbers would be reserving the leftmost bit as a positive/negative flag and leaving the rest alone. This is called sign-magnitude representation, and early computers actually used it. The problem is that it creates two versions of zero: 00000000 (positive zero) and 10000000 (negative zero). Two zeros means extra comparison logic and special cases in every arithmetic operation.
Another older approach, called one’s complement, negates a number by flipping all the bits without adding one. It also produces two zeros (all 0s and all 1s both represent zero), and addition requires an awkward correction step called an “end-around carry.”
Two’s complement eliminates both problems. Zero has exactly one representation: all bits set to 0. And addition works the same way regardless of whether the numbers are positive, negative, or mixed. The processor doesn’t need to check signs before doing arithmetic. It just adds.
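A short Python sketch illustrates this sign-blindness: the adder just adds bit patterns, and only the interpretation step afterward cares about signs (function names are illustrative):

```python
def add_8bit(a, b):
    """Add two 8-bit patterns the way the hardware adder would:
    no sign checks, just addition truncated to 8 bits."""
    return (a + b) & 0xFF

def to_signed(pattern):
    """Reinterpret an 8-bit pattern as a signed value."""
    return pattern - 256 if pattern >= 128 else pattern

# 5 + (-3): the patterns are 00000101 and 11111101
result = add_8bit(0b00000101, 0b11111101)
print(to_signed(result))  # 2
```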
The Range of Values
For any N-bit two’s complement system, the range of representable integers runs from negative 2^(N-1) to positive 2^(N-1) minus 1. The negative side reaches one number further than the positive side because zero takes up one of the positive-side slots.
Here’s what that looks like for common bit widths:
- 8-bit: negative 128 to positive 127
- 16-bit: negative 32,768 to positive 32,767
- 32-bit: negative 2,147,483,648 to positive 2,147,483,647
- 64-bit: roughly negative 9.2 quintillion to positive 9.2 quintillion
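The range formula is straightforward to verify in code (the helper name below is illustrative):

```python
def twos_complement_range(n):
    """Return the (min, max) integers representable in n-bit
    two's complement: -2^(n-1) through 2^(n-1) - 1."""
    return -(2 ** (n - 1)), 2 ** (n - 1) - 1

for n in (8, 16, 32, 64):
    lo, hi = twos_complement_range(n)
    print(f"{n}-bit: {lo:,} to {hi:,}")
```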
The asymmetry (128 vs. 127 in 8-bit, for instance) is a natural consequence of the system. The most negative value, like negative 128 in 8-bit, is represented as 10000000. If you try to negate it using the flip-and-add-one method, you get 10000000 again, because positive 128 doesn’t fit in 8 bits. This is a quirk worth knowing if you’re writing code that negates variables.
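You can see the quirk directly: applying flip-and-add-one to the 8-bit pattern for negative 128 hands back the same pattern. A minimal Python sketch:

```python
def negate_8bit(x):
    """Flip-and-add-one negation, truncated to 8 bits."""
    return (~x + 1) & 0xFF

most_negative = 0b10000000  # -128
print(format(negate_8bit(most_negative), '08b'))  # 10000000 again
```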
Why Processors Prefer It
The real elegance of two’s complement is at the hardware level. A processor can subtract two numbers using the exact same addition circuit it already has. To compute X minus Y, the chip flips the bits of Y, feeds both values into the adder, and sets the carry-in input to 1. That carry-in of 1 is the “add one” step from the negation process. The adder then computes X plus the two’s complement of Y, which equals X minus Y. No separate subtraction circuit needed.
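Here is that trick simulated in Python: subtraction performed using only addition, a bitwise inversion, and a carry-in of 1 (the function name is illustrative):

```python
def subtract_via_adder(x, y, bits=8):
    """Compute x - y using only addition: invert y's bits and
    feed a carry-in of 1, which supplies the '+1' of negation."""
    mask = (1 << bits) - 1
    inverted_y = ~y & mask
    carry_in = 1
    return (x + inverted_y + carry_in) & mask

print(subtract_via_adder(7, 5))                 # 2
print(format(subtract_via_adder(0, 5), '08b'))  # 11111011, i.e. -5
```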
This is why two’s complement dominates computing. Reducing gate count on a chip means lower manufacturing costs, less heat, and faster operations. Multiplication of signed numbers also benefits: the same binary multiplication hardware produces a correct truncated product for positive and negative inputs alike (with minor adjustments for sign extension when wider products are needed). Signed division is the main exception; it still requires explicit sign handling.
Overflow: When the Math Breaks
Because two’s complement has a fixed range, it’s possible for an arithmetic result to exceed what the bit width can hold. This is called overflow, and it follows predictable rules.
Overflow happens in exactly two situations: adding two positive numbers produces a negative result, or adding two negative numbers produces a positive result. If you’re adding a positive and a negative number together, overflow is impossible, because the result is always between the two inputs.
At the bit level, overflow occurs when a carry bit enters the sign position without a matching carry leaving it (or vice versa). For example, in 8-bit math, 100 + 50 should equal 150, but 150 exceeds the maximum of 127. The binary result wraps around to a negative number. Processors set an overflow flag when this happens, and programming languages handle it in various ways, from silently wrapping around to throwing an error.
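The sign-based overflow rule can be sketched in Python like so (the function name is illustrative; real processors detect the condition with carry bits, as described above):

```python
def add_with_overflow_check(a, b, bits=8):
    """Add two signed values and report overflow: two same-sign
    inputs producing a result of the opposite sign."""
    mask = (1 << bits) - 1
    sign = 1 << (bits - 1)
    raw = (a + b) & mask                       # truncate like hardware
    result = raw - (1 << bits) if raw & sign else raw  # reinterpret as signed
    overflow = (a >= 0) == (b >= 0) and (result >= 0) != (a >= 0)
    return result, overflow

print(add_with_overflow_check(100, 50))   # (-106, True): wrapped around
print(add_with_overflow_check(100, -50))  # (50, False): mixed signs are safe
```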
Sign Extension
Sometimes you need to take a number stored in a smaller bit width and move it into a larger one, say from 8 bits to 32 bits. For positive two’s complement numbers, you pad the left side with zeros. For negative numbers, you pad with ones. This is called sign extension: you copy the sign bit into all the new positions on the left.
Negative 5 in 8 bits is 11111011. In 16 bits, it becomes 1111111111111011. The value is identical because the extra 1s on the left don’t change the math. They preserve the negative contribution of the sign bit at the new, wider bit position. Sign extension is something compilers and processors handle automatically, but understanding it helps explain why casting between integer sizes in programming languages occasionally produces unexpected results.
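A small Python sketch of sign extension (the function name is illustrative):

```python
def sign_extend(value, from_bits, to_bits):
    """Widen a two's complement pattern by copying its sign bit
    into all the new high-order positions."""
    sign = 1 << (from_bits - 1)
    if value & sign:
        # Negative: fill the new upper bits with 1s.
        upper_ones = ((1 << to_bits) - 1) & ~((1 << from_bits) - 1)
        return value | upper_ones
    return value  # positive: the new upper bits stay 0

neg_five_8 = 0b11111011
print(format(sign_extend(neg_five_8, 8, 16), '016b'))  # 1111111111111011
```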