O(log n) describes an algorithm whose running time grows logarithmically as the input size (n) increases. In practical terms, every time the input doubles in size, the algorithm only needs one additional step. An algorithm that handles 1,000 items in about 10 steps can handle 1,000,000 items in about 20. That makes O(log n) one of the most efficient time complexities you’ll encounter.
What the Notation Actually Means
Big O notation is a way of expressing how an algorithm’s performance scales as its input grows. The “O” stands for “order of,” and the expression inside the parentheses tells you the growth rate. O(log n) means the number of operations grows proportionally to the logarithm of n.
A logarithm is the inverse of an exponent. If 2 raised to the 10th power equals 1,024, then the log base 2 of 1,024 is 10. So for an input of about a thousand elements, an O(log n) algorithm performs roughly 10 operations. For a billion elements (about 2 to the 30th), it performs roughly 30 operations. The input grew by a factor of a million, but the work only tripled.
You might wonder which base the logarithm uses. In Big O notation, the base doesn’t matter. Changing the base of a logarithm is the same as multiplying by a constant, and Big O ignores constant factors. Whether you think in base 2, base 10, or base e, O(log n) means the same thing. That said, most computer science examples use base 2 because the algorithms involved split things in half.
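The base-change rule is easy to verify numerically. This small sketch uses Python's standard math module to show that the ratio between log base 2 and log base 10 is the same constant for any n, which is exactly why Big O can ignore the base:

```python
import math

n = 1_000_000
print(math.log2(n))    # about 19.93
print(math.log10(n))   # exactly 6.0

# log2(n) / log10(n) equals log2(10) for every n -- a constant factor,
# which Big O notation discards.
print(math.log2(n) / math.log10(n))
print(math.log2(10))
```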
Why Halving Creates Logarithmic Time
The signature pattern behind O(log n) is an algorithm that cuts its remaining work in half at every step. Binary search is the classic example. You have a sorted list of n items and you’re looking for a specific value. You check the middle element. If it’s too high, you throw away the entire upper half. If it’s too low, you throw away the lower half. Either way, you’ve eliminated half the remaining possibilities in a single comparison.
The math works out cleanly. After the first step, you have n/2 items left. After the second, n/4. After the third, n/8. After k steps, you have n/2^k items left. The search ends when only one item remains, which means n/2^k = 1. Solving for k gives you k = log₂(n). A University of Maryland derivation shows the total iterations come to 1 + log₂(n), but Big O drops the constant, leaving O(log n).
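A minimal binary search that counts its own comparisons makes the derivation concrete. (The comparison counter is added here purely for illustration; a production implementation wouldn't carry it.)

```python
def binary_search(items, target):
    """Return (index, comparisons); index is -1 if target is absent."""
    lo, hi = 0, len(items) - 1
    comparisons = 0
    while lo <= hi:
        mid = (lo + hi) // 2
        comparisons += 1
        if items[mid] == target:
            return mid, comparisons
        elif items[mid] < target:
            lo = mid + 1   # discard the lower half
        else:
            hi = mid - 1   # discard the upper half
    return -1, comparisons

sorted_list = list(range(1_000_000))
index, steps = binary_search(sorted_list, 765_432)
# steps never exceeds about log2(1,000,000) + 1, i.e. roughly 20
```

Run against a million-element list, the search always terminates within about 20 comparisons, matching the 1 + log₂(n) bound above.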
This is why O(log n) algorithms feel almost magical at large scales. Searching a sorted list of 4 billion entries with binary search takes at most about 32 comparisons. A linear search, which is O(n), could take up to all 4 billion.
Where You’ll See O(log n) in Practice
Binary search is the textbook example, but O(log n) shows up across many data structures and algorithms.
- Balanced binary search trees. Structures like red-black trees and AVL trees keep themselves balanced so that searching, inserting, and deleting all take O(log n) time. A red-black tree with n nodes has a height of at most 2·log(n+1), so operations that walk from root to leaf stay logarithmic. An unbalanced tree, by contrast, can degrade to O(n) if it ends up looking like a straight line of nodes.
- Heap operations. Inserting into or extracting the minimum/maximum from a binary heap takes O(log n) because the heap is a complete binary tree. The element “bubbles” up or down through at most log n levels.
- Divide-and-conquer subroutines. Algorithms that split the problem in half and only recurse on one half produce O(log n) running times. Binary search fits this template: the recurrence is T(n) = T(n/2) + O(1), which solves to O(log n).
- Exponentiation by squaring. Computing x to the nth power doesn’t require n multiplications. By squaring the result and halving the exponent at each step, you get the answer in O(log n) multiplications.
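The last item on that list is short enough to sketch in full. This version squares the base and halves the exponent each pass, so the loop runs about log₂(n) times rather than n times:

```python
def fast_pow(x, n):
    """Compute x**n for non-negative integer n in O(log n) multiplications."""
    result = 1
    while n > 0:
        if n % 2 == 1:   # odd exponent: fold one factor of x into the result
            result *= x
        x *= x           # square the base
        n //= 2          # halve the exponent
    return result

fast_pow(3, 13)  # 1594323, using a handful of multiplications instead of 13
```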
How O(log n) Compares to Other Complexities
To get a feel for how good O(log n) is, consider how different complexities scale for n = 1,000,000:
- O(1) — constant time, always the same number of steps regardless of input. The fastest possible, but few algorithms achieve it for non-trivial work.
- O(log n) — about 20 steps.
- O(n) — 1,000,000 steps.
- O(n log n) — about 20,000,000 steps. This is the complexity of efficient sorting algorithms like merge sort.
- O(n²) — 1,000,000,000,000 steps.
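Treating each step as one unit of work, the counts above can be reproduced directly:

```python
import math

n = 1_000_000
print(f"O(log n):   {math.log2(n):.0f} steps")        # about 20
print(f"O(n):       {n:,} steps")                     # 1,000,000
print(f"O(n log n): {n * math.log2(n):,.0f} steps")   # about 20,000,000
print(f"O(n^2):     {n**2:,} steps")                  # 1,000,000,000,000
```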
O(log n) sits just above constant time, which makes it extremely desirable. The gap between O(log n) and O(n) is one of the biggest performance jumps in algorithm design. Choosing a data structure with O(log n) lookups over one with O(n) lookups can be the difference between a program running in milliseconds and one that takes minutes.
Recognizing O(log n) in Code
If you’re trying to determine whether a piece of code runs in O(log n) time, look for a specific pattern: a loop or recursive call where the remaining input shrinks by a constant fraction (usually half) on each iteration. The telltale signs are a variable being divided by 2 in a while loop, or a recursive call that passes n/2 as the new input size.
A simple example in pseudocode:
```
while n > 1:
    n = n / 2
    (do constant work)
```
That loop runs log₂(n) times. Each iteration does a fixed amount of work (one comparison, one division), so the total is O(log n).
If you see the input being divided by 3 instead of 2, it’s still O(log n). Dividing by 3 gives log₃(n) iterations, but since the base of the logarithm doesn’t affect Big O classification, it’s the same complexity class. The key requirement is that each step removes a constant fraction of the input, not a constant amount. Subtracting 1 each time gives O(n). Dividing by 2 each time gives O(log n). That distinction matters enormously.
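The fraction-versus-amount distinction can be seen by counting iterations of the two loops side by side (a throwaway sketch, with the counters added only for demonstration):

```python
def steps_halving(n):
    """Shrink n by a constant fraction each pass: O(log n) iterations."""
    count = 0
    while n > 1:
        n //= 2
        count += 1
    return count

def steps_subtracting(n):
    """Shrink n by a constant amount each pass: O(n) iterations."""
    count = 0
    while n > 1:
        n -= 1
        count += 1
    return count

print(steps_halving(1_000_000))      # 19
print(steps_subtracting(1_000_000))  # 999,999
```

Same starting input, same stopping condition, yet one loop finishes in under 20 passes and the other needs nearly a million.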