What Is Linear Time Complexity? O(n) Explained

Linear time complexity, written as O(n), means an algorithm’s running time grows in direct proportion to the size of its input. If you double the input, the work roughly doubles. If you multiply the input by ten, the work multiplies by roughly ten. This predictable, straight-line relationship between input size and execution time is what makes it “linear.”

How O(n) Works

The “n” in O(n) represents the size of the input, whether that’s the number of items in a list, rows in a database, or characters in a string. An algorithm runs in linear time when there’s a fixed upper bound on how much work it does per element. Formally, it means the total number of operations is at most some constant multiplied by n. The actual constant doesn’t matter for classification purposes: whether your algorithm does 2n or 50n operations, it’s still O(n) because the growth rate is the same.

A simple analogy: cleaning rooms. Cleaning two rooms takes about twice as long as cleaning one. Three rooms, three times as long. The time scales in lockstep with the size of the task. That’s linear behavior.

What Linear Time Looks Like in Code

The most common code pattern that produces O(n) complexity is a single loop that iterates through every element of the input once. For example, scanning an array to find the largest value requires checking each item exactly one time. The loop runs n times, and each pass through the loop does a fixed amount of work, so the total is n times a constant, which simplifies to O(n).
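This pattern can be sketched in Python (the function name and sample data are illustrative):

```python
def find_max(values):
    """Return the largest value in a non-empty list: O(n) time."""
    largest = values[0]
    for v in values:        # the loop body runs once per element: n passes
        if v > largest:     # a fixed amount of work per pass
            largest = v
    return largest

print(find_max([3, 41, 7, 12]))  # prints 41
```

Each iteration does constant work, and there are n iterations, so the total work is a constant times n.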

Here are everyday operations that run in linear time:

  • Searching an unsorted list. To find a specific value, you may need to check every element. In the best case you get lucky on the first check, but in the worst case you examine all n items.
  • Summing all elements in an array. You touch each number once to add it to a running total.
  • Counting characters in a string. You walk through the string from start to finish.
  • Removing an item from the middle of an array-backed list. After deletion, every item that came after it shifts over by one position, which in the worst case means moving nearly all n items.
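The first of these, searching an unsorted list, can be sketched as follows (the function name is illustrative):

```python
def linear_search(items, target):
    """Return the index of target, or -1 if it is absent: O(n) worst case."""
    for i, item in enumerate(items):
        if item == target:   # best case: found on the very first check
            return i
    return -1                # worst case: all n items were examined

print(linear_search(["a", "b", "c"], "c"))  # prints 2
print(linear_search(["a", "b", "c"], "z"))  # prints -1
```

The early return shows why the best case is fast, but Big O classification here describes the worst case: the loop may have to visit every element.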

The key tell is a single loop (or a fixed number of sequential loops) over the input with no nested looping. Two separate loops that each run n times give you 2n operations, which is still O(n). But a loop inside a loop, where both depend on n, jumps you to O(n²).
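The contrast between sequential loops and nested loops can be made concrete with a sketch (both functions are illustrative):

```python
def two_passes(nums):
    """Two separate loops over n items: 2n operations, still O(n)."""
    total = sum(nums)           # first pass: n additions
    count_even = 0
    for x in nums:              # second pass: n checks
        if x % 2 == 0:
            count_even += 1
    return total, count_even

def all_pairs(nums):
    """A loop inside a loop, both depending on n: O(n^2)."""
    pairs = []
    for a in nums:
        for b in nums:          # n inner iterations for each of n outer ones
            pairs.append((a, b))
    return pairs
```

For a four-element input, two_passes does on the order of 8 operations while all_pairs builds 16 pairs; that gap widens rapidly as n grows.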

Where O(n) Fits Among Other Complexities

Time complexities form a hierarchy from slowest-growing to fastest-growing. Here’s how the most common classes compare as input size increases:

  • O(1), constant time. The work stays the same no matter how large the input gets. Adding two fixed-width numbers is O(1), as is looking up an array element by its index.
  • O(log n), logarithmic time. The work grows slowly. Doubling the input adds only one extra step. Binary search on a sorted list works this way.
  • O(n), linear time. The work grows in direct proportion to input size.
  • O(n log n), linearithmic time. Grows slightly faster than linear but far more slowly than quadratic. Efficient sorting algorithms like merge sort fall here.
  • O(n²), quadratic time. Doubling the input quadruples the work. Triple the input and you need nine times as long.
  • O(2ⁿ), exponential time. Each additional input element doubles the total work. These algorithms become unusable very quickly.
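The growth rates above can be compared numerically; this sketch tabulates approximate operation counts for a few input sizes:

```python
import math

# Compare log n, n, n log n, and n^2 as n grows.
for n in (10, 1_000, 1_000_000):
    print(f"n={n:>9}:  log n ≈ {math.log2(n):7.1f}   "
          f"n log n ≈ {n * math.log2(n):16.0f}   "
          f"n² = {n**2:>14}")
```

At n = 1,000,000 the quadratic column is already a trillion, while n log n stays around twenty million, which is why the hierarchy matters so much at scale.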

On a graph, O(n) appears as a straight diagonal line. O(1) is a flat horizontal line beneath it. O(n²) curves sharply upward, and exponential growth shoots nearly vertical. For small inputs the differences are negligible, but as n grows into the thousands or millions, the gap between linear and quadratic becomes enormous.

Why Linear Time Matters at Scale

Linear time is often considered the practical threshold for “efficient enough” when dealing with very large datasets. Companies like Google and Facebook operate on billions of users and trillions of links, storing millions of terabytes of data. At that scale, even a quadratic-time algorithm (O(n²)) becomes impractical because doubling the data quadruples the processing time. A linear-time algorithm, by contrast, keeps pace: double the data, double the time, and nothing worse.

Researchers studying graph problems on extremely large networks have found that insisting on strict linear-time solutions is what makes processing datasets of that magnitude feasible at all. When n is in the billions, the difference between O(n) and O(n²) isn’t a minor optimization. It’s the difference between a computation that finishes in minutes and one that would take years.

That said, O(n) isn’t always fast enough in absolute terms. If your input has a billion elements and each operation takes a microsecond, a linear scan still takes about 17 minutes. For real-time applications, you might need O(log n) or O(1) access patterns, which is why data structures like hash tables and balanced search trees exist.
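The back-of-the-envelope arithmetic behind that figure is easy to check:

```python
n = 1_000_000_000        # a billion elements
per_op = 1e-6            # one microsecond per operation (assumed)
minutes = n * per_op / 60
print(f"{minutes:.1f} minutes")  # prints 16.7 minutes
```

A billion microseconds is 1,000 seconds, or roughly 17 minutes, even though every single element is touched only once.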

Linear Time vs. Linear Space

Time complexity and space complexity are separate measurements that both use Big O notation. Linear time complexity, O(n), means the number of operations scales with input size. Linear space complexity, also written O(n), means the amount of memory the algorithm uses scales with input size.

An algorithm can have one without the other. Scanning an array to find the maximum value is O(n) in time but O(1) in space, because you only need a single variable to track the current maximum regardless of how large the array is. On the other hand, creating a copy of the entire array is O(n) in both time and space, since you’re doing n operations and allocating memory for n new elements. When evaluating an algorithm’s efficiency, both dimensions matter.
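The distinction can be sketched with two variants of the same running-maximum idea (both function names are illustrative):

```python
def max_constant_space(values):
    """O(n) time, O(1) extra space: one tracking variable, no matter how big
    the input is."""
    best = values[0]
    for v in values:
        best = max(best, v)
    return best

def prefix_maxima(values):
    """O(n) time AND O(n) space: builds a new list of n running maxima."""
    result = []
    best = values[0]
    for v in values:
        best = max(best, v)
        result.append(best)   # memory grows with the input
    return result

print(max_constant_space([3, 1, 4, 1, 5]))  # prints 5
print(prefix_maxima([3, 1, 4, 1, 5]))       # prints [3, 3, 4, 4, 5]
```

Both functions do the same amount of work, but only the second one allocates memory proportional to n.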

How to Recognize O(n) in Practice

When you’re reading or writing code and want to identify linear time complexity, look for these patterns. A single for-loop or while-loop that processes each input element once is the classic indicator. Functions that use a pointer or cursor to walk through a data structure from beginning to end, like traversing a linked list, are typically O(n). Built-in operations like checking whether an item exists in an unsorted list, concatenating strings in some languages, or copying a collection are all linear under the hood even if they look like single statements in your code.

Watch out for hidden non-linear behavior. Calling a linear operation inside a loop creates O(n²). Sorting inside a loop is even worse. If you’re aiming for O(n) overall, every operation inside your main loop needs to run in constant time.
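The hidden-quadratic trap can be illustrated with membership checks in Python, where `item in some_list` is itself a linear operation:

```python
def common_items_quadratic(a, b):
    """`item in b` scans list b (O(n)), and it runs inside a loop over a,
    so the whole function is O(n^2)."""
    return [item for item in a if item in b]

def common_items_linear(a, b):
    """Converting b to a set first makes each membership check O(1) on
    average, so the whole function is O(n)."""
    b_set = set(b)
    return [item for item in a if item in b_set]

print(common_items_linear([1, 2, 3], [2, 3, 4]))  # prints [2, 3]
```

Both versions produce the same answer; the difference only shows up in how the running time grows as the lists get large.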