Numbers weren’t invented in a single moment by a single civilization. They emerged slowly over tens of thousands of years, starting as scratches on bone and evolving through clay tokens, pictographic symbols, and eventually the place-value system we use today. The story stretches from prehistoric Africa to ancient Mesopotamia, India, China, and the Americas, with different cultures independently solving the same fundamental problem: how to record and communicate quantity.
Tally Marks: The First 35,000 Years
The oldest known evidence of humans counting is the Lebombo Bone, found in the Lebombo Mountains between South Africa and Eswatini (formerly Swaziland) and dated to roughly 35,000 years ago. It’s a small baboon fibula with 29 notches carved into it, likely used as a tally stick. Ten thousand years later, someone in what is now the Democratic Republic of the Congo carved the Ishango Bone, a more complex artifact with a piece of quartz fixed at one end for engraving and groups of notches arranged in three distinct rows.
The Ishango Bone is more than a simple tally. Two of its rows each sum to 60. One contains the prime numbers between 10 and 20 (11, 13, 17, and 19). The other groups notches in patterns consistent with a base-10 system: 20 + 1, 20 – 1, 10 + 1, and 10 – 1. The middle row appears to illustrate multiplication by doubling (3 and 6, 4 and 8, 5 and 10), a technique later central to Egyptian arithmetic. Microscopic analysis has also revealed markings consistent with a six-month lunar calendar. Whether these patterns are intentional or coincidental remains debated, but they suggest that humans were doing more than simple counting at least 25,000 years ago.
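The arithmetic behind these claims is easy to verify. Here is a quick sketch using the notch groupings most commonly reported in the literature; the exact counts are an assumption drawn from published descriptions, not a transcription of the artifact itself.

```python
# Notch counts commonly reported for the Ishango Bone's three rows.
# These groupings are an assumption based on published descriptions.
primes_row   = [11, 13, 17, 19]            # claimed: the primes between 10 and 20
middle_row   = [3, 6, 4, 8, 10, 5, 5, 7]   # claimed: doubling (3->6, 4->8, 5->10)
base_ten_row = [21, 19, 11, 9]             # claimed: 20+1, 20-1, 10+1, 10-1

def is_prime(n: int) -> bool:
    return n > 1 and all(n % d for d in range(2, int(n**0.5) + 1))

assert sum(primes_row) == sum(base_ten_row) == 60
assert all(is_prime(n) for n in primes_row)
assert all(2 * a in middle_row for a in (3, 4, 5))  # each value's double appears
print("all three claimed patterns check out")
```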
These tally systems had an obvious limitation. Each mark stood for one thing. Three sheep meant three marks. Three hundred sheep meant three hundred marks. There was no shorthand, no abstraction. For small-scale life, tallying worked fine. But as societies grew, they needed something more efficient.
Clay Tokens and the Birth of Written Numbers
Around 10,000 years ago, early farmers in the ancient Near East came up with a new approach. Instead of scratching marks on bone, they shaped small clay tokens into different forms to represent different goods. A cone stood for a small measure of barley. A sphere meant a larger measure. A disc represented a sheep. Three small measures of barley were shown by three cones, maintaining the old one-to-one logic but now encoding what was being counted, not just how many.
This system worked for thousands of years. But it created a new problem: storage and verification. When someone owed a debt, the tokens representing what they owed were sealed inside a hollow clay ball called an envelope. To check what was inside without breaking it open, people began pressing the tokens into the wet clay surface before sealing them. A cone left a wedge-shaped impression. A disc left a circular one.
Then came the breakthrough. Someone realized the impressions on the outside made the tokens on the inside redundant. Why bother with three-dimensional objects when two-dimensional marks on a flat clay tablet carried the same information? This shift, which happened around 3100 BC in Mesopotamia, was the invention of writing. The first written documents in human history were not poems or prayers. They were accounting records for barley and sheep.
Full Number Systems Take Shape
Once cultures moved beyond tallying, they faced a design choice: how to organize symbols for larger quantities. Different civilizations solved this differently, and the differences matter.
Egyptian Hieroglyphic Numbers
The Egyptians built an additive decimal system with a unique symbol for each power of ten. A simple stroke meant 1. A hobble (a cattle restraint) meant 10. A coil of rope was 100, a lotus flower 1,000, a bent finger 10,000, a tadpole 100,000, and the god Heh represented a million. To write a number, you simply drew the right combination of symbols and grouped them together. Fifteen would be one hobble and five strokes. To add two numbers, you combined all the symbols and traded up whenever you accumulated ten of one kind for the next symbol up, much like carrying in modern addition.
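To make the mechanics concrete, here is a minimal sketch of the additive encoding, with plain-English symbol names standing in for the hieroglyphs:

```python
# A sketch of Egyptian-style additive notation. Symbol names stand in for
# the hieroglyphs: stroke=1, hobble=10, coil=100, lotus=1,000, and so on.
SYMBOLS = [
    (1_000_000, "heh"), (100_000, "tadpole"), (10_000, "finger"),
    (1_000, "lotus"), (100, "coil"), (10, "hobble"), (1, "stroke"),
]

def to_egyptian(n: int) -> dict[str, int]:
    """Greedily decompose n into counts of each power-of-ten symbol."""
    counts = {}
    for value, name in SYMBOLS:
        counts[name], n = divmod(n, value)
    return {name: c for name, c in counts.items() if c}

print(to_egyptian(15))       # {'hobble': 1, 'stroke': 5}
print(to_egyptian(999_999))  # nine of each of six symbols: 54 glyphs in all
```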
This system was intuitive and readable, but it scaled poorly. Writing large numbers required long strings of repeated symbols: 999,999 took fifty-four glyphs, nine of each of the six symbols from 1 up to 100,000.
Babylonian Base-60
The Babylonians inherited a base-60 (sexagesimal) system from the Sumerians, and scholars have proposed many theories for why 60 was chosen. The most practical explanation is that 60 is the smallest number divisible by every integer from 1 through 6, making it extremely convenient for dividing goods and measurements into equal parts without fractions. Another long-standing theory suggests that 60 arose from the merging of two cultures, one counting in base 12 and the other in base 5, with 60 as the natural common ground for trade between them.
There’s even a finger-counting explanation. Each of the four fingers on your left hand has three segments divided by the joints, giving you 12 positions. Point to each segment with one of the five fingers on your right hand, and you can count to 60 on two hands. Whatever the origin, base-60 survives today every time you read a clock (60 minutes, 60 seconds) or measure an angle (360 degrees in a circle).
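The divisibility argument, and the conversion we still perform every time we read a clock, both fit in a few lines of Python:

```python
# 60 divides cleanly by more small numbers than any smaller base:
print([d for d in range(1, 31) if 60 % d == 0])  # [1, 2, 3, 4, 5, 6, 10, 12, 15, 20, 30]

def to_base_60(n: int) -> list[int]:
    """Convert a non-negative integer to base-60 digits, most significant first."""
    digits = []
    while True:
        n, d = divmod(n, 60)
        digits.append(d)
        if n == 0:
            return digits[::-1]

print(to_base_60(3_599))  # [59, 59]   -> 59 minutes, 59 seconds
print(to_base_60(7_265))  # [2, 1, 5]  -> 2 hours, 1 minute, 5 seconds
```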
Chinese Counting Rods
Starting around the 4th century BC, Chinese mathematicians used bamboo or ivory rods on a checkerboard grid. The rightmost column held units, the next column to the left held tens, and so on. This was a natural place-value system: one rod in the units column meant 1, while one rod in the tens column meant 10. To multiply by 10, you simply moved all the rods one column to the left.
The system had an ambiguity problem, though. Three rods in a row could mean 3, or 21, or 12, or 111 if the rods shifted out of position. Chinese mathematicians solved this cleverly by alternating rod orientation between adjacent columns, with vertical rods in the units, hundreds, and ten-thousands places and horizontal rods in the tens and thousands places, making it immediately clear which column each rod belonged to. By the 5th century AD, Chinese texts show an understanding not just of positive powers of 10 but also of decimal fractions as negative powers of 10.
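A toy rendering shows how the alternating orientations resolve the ambiguity. This is a simplification (real rod numerals used compound forms for the digits 6 through 9), but it captures the idea:

```python
def rods(n: int) -> str:
    """Render each decimal digit as rods: '|' in even columns (units,
    hundreds, ...) and '-' in odd columns (tens, thousands, ...)."""
    digits = list(map(int, str(n)))
    out = []
    for i, d in enumerate(reversed(digits)):
        mark = "|" if i % 2 == 0 else "-"
        out.append(mark * d if d else ".")  # '.' marks an empty column
    return " ".join(reversed(out))

for n in (3, 12, 21, 111):
    print(n, "->", rods(n))
# 3   -> |||
# 12  -> - ||
# 21  -> -- |
# 111 -> | - |
```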
The Invention of Zero
Early positional systems, including the Babylonian and Mayan versions, initially had no symbol for zero. A blank space indicated a skipped place value, which created obvious confusion. The number 203 and the number 23 could look identical if the blank wasn’t wide enough.
Zero as a written symbol appeared independently in at least two places. In Mesoamerica, the oldest known zero appears on a stone monument at the Olmec site of Tres Zapotes in Veracruz, Mexico, bearing a date equivalent to 31 BC. The Maya developed the concept further within their base-20 (vigesimal) number system, representing zero as a shell, a seed, a flower, or a human head in profile depending on the context. The seed form was typically used in arithmetic, while the flower appeared in calendar inscriptions. For the Maya, zero carried philosophical weight as well, symbolizing both beginning and ending, absence and potential.
In India, the Bakhshali manuscript contains hundreds of zeros written as dots, used as placeholders to distinguish 10 from 100 from 1,000. Radiocarbon dating places parts of this manuscript in the 3rd or 4th century AD, roughly five centuries older than the 9th-century temple inscription in Gwalior that was long considered India’s earliest zero. But the truly revolutionary step happened in 628 AD, when the mathematician Brahmagupta wrote the Brahmasphutasiddhanta, the first known text treating zero as a number in its own right, with rules for adding, subtracting, and multiplying with it. This was the leap no other civilization had made: zero wasn’t just an empty placeholder but a quantity you could calculate with.
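In modern notation, the core of Brahmagupta’s rules for any quantity a reads:

$$a + 0 = a, \qquad a - 0 = a, \qquad 0 - a = -a, \qquad a \times 0 = 0$$

Division was the one operation he could not tame: he asserted that 0 ÷ 0 = 0, a rule later mathematicians rejected, and left a ÷ 0 essentially unresolved.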
Why Positional Notation Won
The fundamental difference between ancient number systems comes down to whether the position of a symbol changes its value. In additive systems like Roman numerals, V always means 5 no matter where it appears (with minor exceptions for subtractive pairs like IV). To write one million in Roman numerals, you would need one thousand M’s. There’s no compact notation for very large or very small numbers.
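A greedy encoder makes the scaling problem concrete. In this sketch, M simply repeats without limit, since classical notation has no standard symbol beyond 1,000:

```python
# Greedy Roman-numeral encoder, including the subtractive pairs (IV, IX, ...).
PAIRS = [(1000, "M"), (900, "CM"), (500, "D"), (400, "CD"), (100, "C"),
         (90, "XC"), (50, "L"), (40, "XL"), (10, "X"), (9, "IX"),
         (5, "V"), (4, "IV"), (1, "I")]

def to_roman(n: int) -> str:
    out = []
    for value, glyph in PAIRS:
        count, n = divmod(n, value)
        out.append(glyph * count)
    return "".join(out)

print(to_roman(1988))            # MCMLXXXVIII
print(len(to_roman(1_000_000)))  # 1000 -- one M per thousand
```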
Positional systems solve this completely. In our modern system, the digit 3 can mean 3, 30, 300, or 3,000,000 depending on where it sits. You only need ten symbols (0 through 9) to write any number of any size. This efficiency made arithmetic dramatically easier: addition, subtraction, multiplication, and long division all became mechanical procedures that anyone could learn, rather than specialized skills requiring years of training.
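The entire mechanism reduces to weighting digits by powers of the base, as a short sketch shows (the last line mimics Babylonian-style base-60):

```python
def value(digits: list[int], base: int = 10) -> int:
    """Evaluate a positional numeral: each digit is weighted by base**position."""
    total = 0
    for d in digits:  # most significant digit first
        total = total * base + d
    return total

print(value([3]))             # 3
print(value([3, 0]))          # 30   -- same digit, shifted one place
print(value([3, 0, 0]))       # 300
print(value([1, 30, 0], 60))  # 5400 -- 1;30;0 in base 60: an hour and a half in seconds
```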
The positional decimal system we use today consolidated in India, combining place value with the zero concept and a set of nine digit symbols. Arab scholars adopted and refined it, and in 1202 the Italian mathematician Leonardo of Pisa (later known as Fibonacci) published his book Liber Abaci after learning the system as a young man in the North African trading port of Bugia. He wrote it explicitly to introduce this knowledge to Italians, who “up to now are found without a minimum” of such mathematics. The book became a major force in persuading European merchants and scholars to abandon Roman numerals for the new system.
The transition wasn’t instant. European merchants and bookkeepers resisted for centuries, partly from habit and partly from suspicion that the unfamiliar symbols could be easily forged or altered. But the computational advantages were too large to ignore. By the 16th century, the Hindu-Arabic numeral system had become the standard across Europe, and from there it spread worldwide to become the universal language of quantity.

