The Euclidean algorithm is a method for finding the greatest common divisor (GCD) of two numbers. The GCD is the largest number that divides evenly into both. For example, the GCD of 12 and 8 is 4. What makes this algorithm remarkable is its speed and simplicity: instead of breaking numbers down into their prime factors, it uses repeated division to shrink the problem until the answer falls out.
How the Algorithm Works
The core idea relies on one key insight: the GCD of two numbers doesn’t change if you replace the larger number with the remainder after dividing it by the smaller one. In mathematical terms, gcd(a, b) = gcd(b, a mod b). This means you can keep swapping and dividing until one of the numbers hits zero, and the other number is your answer.
Here’s the process, step by step:
- Divide the larger number by the smaller one and note the remainder.
- Replace the larger number with the smaller one, and the smaller one with the remainder.
- Repeat until the remainder is zero. The last nonzero remainder is the GCD.
Let’s walk through an example with 270 and 192. Divide 270 by 192: you get 1 with a remainder of 78. Now find gcd(192, 78). Divide 192 by 78: you get 2 with a remainder of 36. Now find gcd(78, 36). Divide 78 by 36: you get 2 with a remainder of 6. Now find gcd(36, 6). Divide 36 by 6: you get 6 with a remainder of 0. The last nonzero remainder is 6, so gcd(270, 192) = 6.
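The walkthrough above can be traced mechanically. Here is a small sketch (the helper name `gcd_trace` is ours, purely for illustration) that prints each division step:

```python
def gcd_trace(a, b):
    """Run the Euclidean algorithm, printing each division step."""
    while b != 0:
        q, r = divmod(a, b)       # quotient and remainder in one call
        print(f"{a} = {q} * {b} + {r}")
        a, b = b, r               # replace (a, b) with (b, remainder)
    return a

print(gcd_trace(270, 192))        # prints the four steps, then 6
```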
Each step produces a strictly smaller remainder, which guarantees the process always terminates. You never get stuck in an infinite loop.
Why It Works
The algorithm rests on a simple property of divisors. If some number d divides both a and b, then d also divides any combination of a and b, including the remainder when you divide a by b. Writing that out: if a = qb + r, then any divisor shared by a and b is also a divisor shared by b and r. This means gcd(a, b) and gcd(b, r) are the same value. Each round of division preserves the GCD while making the numbers smaller, until one of them reaches zero. At that point, gcd(something, 0) is just “something,” and you have your answer.
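A quick numeric check of this invariant, using Python's built-in `math.gcd` and the numbers from the earlier example:

```python
from math import gcd

a, b = 270, 192
r = a % b  # 78
# Any common divisor of a and b also divides r = a - q*b, and vice versa,
# so replacing (a, b) with (b, r) leaves the GCD unchanged.
assert gcd(a, b) == gcd(b, r) == 6
```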
How Fast It Runs
One reason the Euclidean algorithm has survived for over two thousand years is its efficiency. A result known as Lamé’s theorem puts an upper bound on the number of division steps: it takes at most five times the number of decimal digits in the smaller number. So if the smaller number has 3 digits (anything up to 999), the algorithm finishes in 15 divisions or fewer, regardless of how large the other number is. Put another way, for a smaller number b, the algorithm completes within roughly 1.44 × log₂(b) steps in the worst case, so the work grows only logarithmically with the size of the input.
Compare this to the brute-force alternative of finding the GCD through prime factorization. Factoring a number into primes requires checking potential divisors up to its square root, and that square root grows exponentially with the number of digits. For large numbers (the kind used in cryptography, with hundreds of digits), prime factorization becomes impractical while the Euclidean algorithm remains fast.
The worst case for the Euclidean algorithm occurs when you feed it two consecutive Fibonacci numbers, like 89 and 55. These yield a quotient of 1 at every step, so the numbers shrink as slowly as possible, forcing the maximum number of divisions. Even then, the algorithm stays well within its logarithmic bound.
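You can verify the worst case directly. This sketch (the helper name `division_steps` is ours) counts the divisions for the Fibonacci pair 89 and 55 and compares against Lamé’s bound:

```python
def division_steps(a, b):
    """Count how many division steps the Euclidean algorithm takes."""
    steps = 0
    while b != 0:
        a, b = b, a % b
        steps += 1
    return steps

# Consecutive Fibonacci numbers force the quotient to be 1 at every step:
print(division_steps(89, 55))   # 9 steps for such small inputs
# Lamé's bound: at most 5 * (number of decimal digits of the smaller input)
print(5 * len(str(55)))         # 10
```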
The Original Version From Euclid
The algorithm appears in Book VII of Euclid’s “Elements,” written around 300 BCE, making it one of the oldest algorithms still in practical use. Euclid’s original version didn’t use division. Instead, it worked by repeated subtraction: you subtract the smaller number from the larger one over and over until the two numbers are equal, and that equal value is the GCD.
For example, starting with 48 and 18: subtract 18 from 48 to get 30, subtract 18 from 30 to get 12, then subtract 12 from 18 to get 6, then subtract 6 from 12 to get 6. Both numbers are now 6, so that’s the GCD. This subtraction-based approach is conceptually simpler but slower, since it might take many subtractions where a single division would do. The modern version using division (the “mod” operation) is a direct optimization of Euclid’s original idea.
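The subtraction form translates just as directly as the division form. A minimal sketch, assuming both inputs are positive integers (the function name is ours):

```python
def gcd_subtraction(a, b):
    """Euclid's original form: repeatedly subtract the smaller from the larger."""
    while a != b:
        if a > b:
            a -= b
        else:
            b -= a
    return a

print(gcd_subtraction(48, 18))  # 6
```

Note that each run of repeated subtractions of the same number is exactly what one `a % b` accomplishes in a single step, which is why the division version is faster.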
The Extended Euclidean Algorithm
A powerful extension of the basic algorithm doesn’t just find the GCD. It also finds two integers, x and y, such that ax + by = gcd(a, b). This is called Bézout’s identity, and it has deep practical importance in number theory and cryptography.
The method works by running the standard Euclidean algorithm forward, then tracing back through the steps in reverse to express each remainder as a combination of the original two numbers. Take 8 and 5 as an example:
Going forward: 8 = 1 × 5 + 3, then 5 = 1 × 3 + 2, then 3 = 1 × 2 + 1. The GCD is 1. Now work backward: that final equation says 1 = 3 − 1 × 2. The previous step told us 2 = 5 − 1 × 3, so substitute: 1 = 3 − 1 × (5 − 1 × 3) = 2 × 3 − 1 × 5. And since 3 = 8 − 1 × 5, substitute again: 1 = 2 × (8 − 1 × 5) − 1 × 5 = 2 × 8 − 3 × 5. So x = 2 and y = −3.
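The back-substitution above can be folded into the recursion itself. This is a sketch of the standard recursive formulation (the function name `extended_gcd` is ours):

```python
def extended_gcd(a, b):
    """Return (g, x, y) such that a*x + b*y == g == gcd(a, b)."""
    if b == 0:
        return a, 1, 0          # gcd(a, 0) = a = a*1 + 0*0
    g, x, y = extended_gcd(b, a % b)
    # The recursive call gives g = b*x + (a % b)*y.
    # Since a % b = a - (a // b)*b, regrouping in terms of a and b gives:
    return g, y, x - (a // b) * y

print(extended_gcd(8, 5))       # (1, 2, -3), matching the worked example
```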
This extended version is the backbone of modular arithmetic operations like finding multiplicative inverses, which are essential in RSA encryption and other public-key cryptography systems.
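For instance, the worked example showed 2 × 8 − 3 × 5 = 1, which means −3 (that is, 5 mod 8) is the multiplicative inverse of 5 modulo 8. Python 3.8+ exposes this directly through the three-argument form of the built-in `pow`:

```python
# Multiplicative inverse of 5 modulo 8: the y from Bezout's identity, taken mod 8.
inv = pow(5, -1, 8)
print(inv)                 # 5, since 5 * 5 = 25 = 3 * 8 + 1
assert (5 * inv) % 8 == 1
```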
Writing It as Code
The algorithm translates into code almost directly from its mathematical description. In a recursive form, the logic is just a few lines: if the second number is zero, return the first number. Otherwise, call the function again with the second number and the remainder of dividing the first by the second.
In Python, that looks like:
def gcd(a, b):
    if b == 0:
        return a
    return gcd(b, a % b)
An iterative version avoids recursion by using a loop:
def gcd(a, b):
    while b != 0:
        a, b = b, a % b
    return a
Both versions produce identical results. The iterative form is sometimes preferred for very large numbers because it avoids the overhead of recursive function calls. Most programming languages include a built-in GCD function (Python’s math.gcd, for instance), and under the hood, they typically use the Euclidean algorithm or a close variant.
Where It Gets Used
Beyond the obvious task of finding greatest common divisors, the Euclidean algorithm shows up in simplifying fractions (divide numerator and denominator by their GCD), computing modular inverses for cryptography, solving linear equations with integer constraints, and even in signal processing and computer graphics. Its combination of mathematical elegance, guaranteed correctness, and computational speed is why an algorithm first described in ancient Greece remains a foundational tool in modern computer science.
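As one concrete example of the fraction-simplification use case, here is a sketch using Python's built-in `math.gcd` (the helper name `simplify` is ours):

```python
from math import gcd

def simplify(numerator, denominator):
    """Reduce a fraction to lowest terms by dividing out the GCD."""
    g = gcd(numerator, denominator)
    return numerator // g, denominator // g

print(simplify(192, 270))  # (32, 45), since gcd(192, 270) = 6
```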