Archaeology doesn’t have a single starting date. The impulse to dig up the past and make sense of it stretches back at least 2,500 years, but the recognizable scientific discipline only took shape in the 1800s. Between those two points lies a long, gradual transformation from royal curiosity to rigorous method.
The First Known Excavation: 542 BC
The earliest recorded archaeological dig belongs to Nabonidus, the last king of the Babylonian empire, who ruled from 556 to 539 BC. While restoring the Ebabbar temple of the sun god Shamash in Sippar (in modern-day central Iraq), Nabonidus ordered workers to dig down through the ruins in search of the original foundations. During that excavation, he uncovered an inscription left by Naram-Sin, king of the Akkadian empire, who had ruled roughly 1,700 years earlier. Nabonidus didn’t just collect the inscription as a trophy. He attempted to date it based on what he found in the dig, making this the oldest known example of someone intentionally excavating a site and then studying and dating the finds. The account survives on a cuneiform text called the Nabonidus Cylinder of Sippar.
Other rulers almost certainly dug into older structures before Nabonidus, but no written record of those efforts has survived. His cylinder is unique because it describes the process itself: the intentional search, the discovery, and the attempt at interpretation.
Antiquarians and the Pre-Scientific Era
For most of the centuries that followed, interest in ancient objects was driven by collectors, not scientists. During the Renaissance, roughly the 14th through 17th centuries, European scholars known as antiquarians gathered statues, coins, and inscriptions from Greek and Roman sites. Their work involved cross-referencing material objects with written texts to build a picture of the past, and it represented a real shift in historical thinking. But the approach was more about assembling impressive collections and confirming literary accounts than about understanding how people lived or how societies changed over time.
In England, figures like William Camden used physical evidence alongside written sources to challenge old mythological histories of Britain. This was a meaningful step: treating objects and monuments as evidence rather than mere curiosities. Still, antiquarians had no systematic way to determine when something was made if no written record existed. A Roman coin and a prehistoric stone tool were both just “old things” with no reliable way to tell which came first.
Thomas Jefferson’s Mound Excavation
One of the earliest excavations that resembles modern archaeology took place in Virginia around 1783, when Thomas Jefferson investigated a Native American burial mound on his property. Rather than simply digging a hole to see what was inside, Jefferson cut a trench straight through the mound to examine its internal structure layer by layer. He noted collections of human bones at different depths, from six inches to three feet below the surface, and observed that remains closer to the surface were less decayed than those deeper down. He estimated the mound contained roughly a thousand burials.
This matters because Jefferson was thinking stratigraphically, recognizing that deeper layers were older. He wasn’t just treasure hunting. He was reading the structure of the earth itself as a record of time, decades before that approach became standard practice.
The Three-Age System: 1836
The moment archaeology gained its first real chronological framework came in 1836, when Danish scholar Christian Jürgensen Thomsen published a guidebook for the National Museum of Denmark. Thomsen proposed that human prehistory could be divided into three successive ages based on the dominant material used for tools: Stone, Bronze, and Iron. He developed the idea while trying to organize a massive, chaotic collection of prehistoric artifacts from Denmark, and it gave scholars something they had never had before: a way to arrange objects in rough chronological order even without written records.
The three-age system was, as one historian put it, “only a plausible idea of the progress of human industrial development.” But it worked well enough to become the backbone of prehistoric archaeology across Europe, and versions of it are still used today.
Late 1800s: Archaeology Becomes a Science
The late 19th century is when archaeology truly separated from antiquarianism. Two figures were especially important in establishing the methods that define the field.
Augustus Pitt-Rivers, a British general, pioneered what he called “typology,” the practice of organizing artifacts by form and function rather than by where they were found. He arranged objects like spears, bows, fishing tools, and baskets into sequences that showed how designs changed over time. He was the first to construct realistic typologies of this kind, and later generations of archaeologists built directly on his work. His approach also emphasized careful recording and systematic excavation, setting a standard that moved the field away from the haphazard digging of earlier decades.
Around the same time, Flinders Petrie developed “sequence dating” while working in Egypt. Analyzing ceramics from the Upper Egyptian cemeteries of Naqada, Ballas, and Diospolis Parva, Petrie defined more than 700 specific types of funerary pottery based on shape, finish, and material. He then examined over 900 tombs, listing the pottery types found in each one on strips of card and physically rearranging them until he had a sequence that showed gradual, incremental changes in style. His key insight was that certain forms “degraded” in predictable ways: wavy-handled pots, for instance, evolved from globular shapes with functional handles to cylindrical forms with purely decorative wavy lines. This gave him a relative timeline for graves that had no inscriptions or written dates at all.
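To make the logic of sequence dating concrete, here is a minimal sketch in Python of the same underlying idea, what later archaeologists would call seriation. It is not Petrie’s actual card-shuffling procedure, and the tomb and pottery-type names are invented for illustration; it simply reorders tombs and types by each other’s average positions until the ordering settles, so that tombs sharing pottery styles end up adjacent in the sequence.

```python
# A toy seriation sketch (reciprocal-averaging style), not Petrie's own method.
# Tomb and pottery-type names below are hypothetical examples.

incidence = {
    "tomb_A": {"wavy_globular", "black_topped"},
    "tomb_B": {"wavy_globular", "wavy_cylindrical"},
    "tomb_C": {"wavy_cylindrical", "decorated"},
    "tomb_D": {"black_topped"},
}
types = sorted({t for ts in incidence.values() for t in ts})
tombs = list(incidence)

for _ in range(20):  # a few passes are enough for a toy example
    type_pos = {t: i for i, t in enumerate(types)}
    # order tombs by the mean position of the pottery types they contain
    tombs.sort(key=lambda tomb: sum(type_pos[t] for t in incidence[tomb]) / len(incidence[tomb]))
    tomb_pos = {tomb: i for i, tomb in enumerate(tombs)}
    # order types by the mean position of the tombs that contain them
    types.sort(key=lambda t: sum(tomb_pos[tomb] for tomb in tombs if t in incidence[tomb])
                              / sum(1 for tomb in tombs if t in incidence[tomb]))

print("relative tomb order:", tombs)
print("relative type order:", types)
```

The output is only a relative ordering, which is exactly what Petrie’s method delivered: a sequence, not calendar dates.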
Archaeology Enters the University
The professionalization of archaeology happened unevenly. By the mid-1800s, Germany already had ten university chairs of archaeology, and France had one. Britain lagged behind. In 1851, John Disney established the first British university chair in archaeology at Cambridge, though the proposal barely passed, winning approval by a vote of eight to seven. Oxford didn’t create its own chair of classical archaeology for another three decades. For most of the 19th century, British archaeology was driven by amateur societies and museums, particularly the British Museum, rather than by universities.
Radiocarbon Dating Changes Everything
Every dating method before 1950 was relative: this object is older than that one, but how old exactly? That changed when American chemist Willard Libby developed radiocarbon dating. Libby first proposed the underlying idea in 1946 and published his landmark monograph in 1952. The method measures the decay of carbon-14, a naturally occurring radioactive isotope of carbon, in organic materials like wood, bone, and charcoal, providing an absolute date rather than just a position in a sequence.
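The arithmetic behind a basic “conventional” radiocarbon age is simple exponential decay, as the sketch below shows. It uses Libby’s original half-life of 5,568 years, which is still used by convention when reporting conventional ages; real laboratories then apply calibration curves and corrections that this example ignores, and the sample value here is invented.

```python
import math

LIBBY_HALF_LIFE = 5568.0                     # years, conventional value
MEAN_LIFE = LIBBY_HALF_LIFE / math.log(2)    # ~8,033 years

def conventional_age(fraction_remaining: float) -> float:
    """Radiocarbon age in years BP from the measured fraction of C-14 remaining."""
    return -MEAN_LIFE * math.log(fraction_remaining)

# Hypothetical example: a charcoal sample retaining 55% of its original carbon-14
print(round(conventional_age(0.55)))  # prints roughly 4802 (years BP, uncalibrated)
```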
Libby validated the technique against tree rings of known age and independently dated Egyptian artifacts, demonstrating its accuracy over a span of nearly 5,000 years. The impact was immediate. Within a few years, more than a hundred radiocarbon dating laboratories had been established worldwide, and a dedicated journal, Radiocarbon, was launched. Entire prehistoric timelines that had been educated guesses were suddenly testable. Some held up. Others were dramatically wrong and had to be rewritten.
Theoretical Shifts in the 20th Century
By the 1970s, archaeology had developed enough confidence in its scientific tools that a major theoretical movement, known as processual archaeology or “New Archaeology,” took hold in the United States. Processual archaeologists treated human societies as systems that could be studied with the same scientific rigor applied to natural phenomena. Culture was viewed as an adaptive mechanism, and the goal was to identify general laws of human behavior from material remains.
Almost immediately, a counter-movement emerged. Post-processual archaeology, rooted in dissatisfaction with the direction the field was taking, argued that treating people as components of adaptive systems missed what mattered most. Post-processualists shifted the focus from behavior (what people did) to practice (what people did, understood in light of their knowledge, aims, and intentions). Early post-processual studies examined symbolism, the meaning of pottery decoration, house design, and burial practices. This was a cognitive archaeology interested in how ancient people thought, not just how they survived.
These two schools of thought continue to shape archaeological research, though most working archaeologists today draw freely from both traditions depending on the questions they’re asking.
A Timeline of Key Milestones
- ~542 BC: Nabonidus excavates the Ebabbar temple in Sippar, producing the earliest known written account of intentional excavation and artifact dating.
- 14th–17th centuries: Renaissance antiquarians begin systematically collecting and cross-referencing ancient objects with written sources.
- 1783: Thomas Jefferson cuts a stratigraphic trench through a Virginia burial mound, reading soil layers as a timeline.
- 1836: Christian Jürgensen Thomsen publishes the three-age system (Stone, Bronze, Iron), giving prehistory its first chronological framework.
- 1851: Cambridge establishes the first British university chair of archaeology.
- Late 1800s: Pitt-Rivers and Flinders Petrie develop typology and sequence dating, establishing archaeology’s core methods.
- 1952: Willard Libby publishes his radiocarbon dating method, enabling absolute dates for organic materials up to tens of thousands of years old.
- 1970s: Processual and post-processual archaeology emerge as competing theoretical frameworks for interpreting the past.

