Is Quicksort Faster Than Merge Sort?

Quicksort is generally faster than merge sort in practice, even though both algorithms share the same average time complexity of O(n log n). The difference comes down to how each algorithm uses memory and interacts with your computer’s hardware. But “faster” depends heavily on the situation: the type of data, how much of it there is, and whether you need guarantees about worst-case performance.

Why Quicksort Is Faster on Average

Quicksort actually makes somewhat more comparisons than merge sort on average (around 39% more with random pivots), yet it still wins in practice. The speed gap comes from something less obvious: cache performance. Quicksort sorts data in place, meaning it works within the same block of memory rather than copying elements to a separate array. Modern processors are built to work fastest when they access memory locations that are close together (this is called cache locality). Because quicksort swaps elements within the original array, it takes full advantage of this hardware optimization.
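
To make "in place" concrete, here is a minimal quicksort sketch in Python (the function names are mine, not from any library). Every step is a swap inside the one array; no second buffer is ever allocated.

```python
import random

def quicksort(a, lo=0, hi=None):
    """In-place quicksort: all work happens via swaps inside `a` itself."""
    if hi is None:
        hi = len(a) - 1
    if lo >= hi:
        return
    # Lomuto partition around a randomly chosen pivot.
    p = random.randint(lo, hi)
    a[p], a[hi] = a[hi], a[p]
    pivot, i = a[hi], lo
    for j in range(lo, hi):
        if a[j] < pivot:
            a[i], a[j] = a[j], a[i]
            i += 1
    a[i], a[hi] = a[hi], a[i]
    quicksort(a, lo, i - 1)   # recurse on the left partition
    quicksort(a, i + 1, hi)   # and the right, still in the original array

data = [5, 2, 9, 1, 5, 6]
quicksort(data)
# data is now sorted in place; no auxiliary array was created
```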

Merge sort, by contrast, needs to allocate a separate chunk of memory to merge sorted halves back together. For an array of n elements, that extra memory is typically O(n). This means the processor has to jump between two different memory locations repeatedly, which slows things down. The extra memory allocation itself also takes time, especially when sorting large datasets.
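
A minimal top-down merge sort sketch (again my own, for illustration) makes that auxiliary buffer visible: the merge step builds a brand-new list of up to n elements on every level of recursion.

```python
def merge_sort(a):
    """Top-down merge sort; the merge step copies into a new O(n) list."""
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
    merged, i, j = [], 0, 0          # the extra O(n) buffer
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:      # <= keeps equal elements in order (stable)
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])          # append whichever half has leftovers
    merged.extend(right[j:])
    return merged
```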

In benchmarks on random data, quicksort typically runs 2 to 3 times faster than merge sort for in-memory sorting of arrays. The constant factors (the hidden overhead behind the O(n log n) label) are simply smaller for quicksort.

When Quicksort Falls Apart

Quicksort has a serious vulnerability: its worst-case performance is O(n²), which is dramatically slower than merge sort’s guaranteed O(n log n). This worst case happens when the pivot element chosen at each step is always the largest or smallest value in the subarray. Instead of splitting the data roughly in half, the algorithm peels off one element at a time, degrading the divide-and-conquer strategy into something closer to selection sort. The total work becomes n + (n-1) + (n-2) + … + 1, which equals n×(n+1)/2, i.e. O(n²).
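
That arithmetic is easy to verify empirically. The sketch below (a hypothetical helper with a naive first-element pivot) counts one element-versus-pivot comparison per element examined during partitioning. On already-sorted input of 100 elements it performs exactly 99 + 98 + … + 1 = 4950 comparisons.

```python
def quicksort_comparisons(a):
    """Count comparisons for quicksort with a naive first-element pivot."""
    if len(a) <= 1:
        return 0
    pivot = a[0]
    rest = a[1:]
    less    = [x for x in rest if x < pivot]
    greater = [x for x in rest if x >= pivot]
    # len(rest) comparisons to partition, plus the two recursive calls
    return len(rest) + quicksort_comparisons(less) + quicksort_comparisons(greater)

n = 100
print(quicksort_comparisons(list(range(n))))   # sorted input: n*(n-1)/2 = 4950
```

On shuffled input of the same size the count lands near n log n instead, a small fraction of that figure.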

Already-sorted or nearly-sorted data triggers this worst case when the implementation picks the first or last element as the pivot. Choosing the median of three elements, or picking a random pivot, mostly avoids the problem but can’t eliminate it entirely. For adversarial inputs (data specifically crafted to exploit your pivot selection), quicksort with any deterministic pivot rule can be forced into O(n²) territory; randomized pivots make such an attack merely improbable rather than impossible.
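
A median-of-three selector is only a few lines. This sketch (illustrative, not from any particular library) returns the index of the median of the first, middle, and last elements, which defuses the already-sorted case: on sorted data the median of the three is the middle element, giving a balanced split.

```python
def median_of_three(a, lo, hi):
    """Return the index of the median of a[lo], a[mid], a[hi]."""
    mid = (lo + hi) // 2
    # Sort the three (value, index) candidates; the middle one is the median.
    trio = sorted([(a[lo], lo), (a[mid], mid), (a[hi], hi)])
    return trio[1][1]

print(median_of_three(list(range(7)), 0, 6))   # sorted input: picks index 3, the middle
```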

Merge sort never has this problem. It always divides the array in half (to within one element) regardless of the data, so its performance is O(n log n) in every case: best, average, and worst.

Stability: A Key Difference

A stable sorting algorithm preserves the original order of elements that have equal values. If you sort a list of employees by salary, and two people earn the same amount, a stable sort keeps them in whatever order they were in before. Merge sort is stable. Quicksort is not.

This matters more than it might sound. When sorting complex data, you often sort by one field, then by another. Stability ensures that your first sort isn’t scrambled by the second. If you need stability and choose quicksort, you’d have to carry extra information to track original positions, which adds overhead and complexity.
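
Python’s built-in sort is Timsort and therefore stable, which makes the two-pass, multi-key pattern work exactly as described (the employee data here is made up):

```python
employees = [("dana", 90), ("bob", 70), ("alice", 90), ("carol", 70)]

# Sort by name first, then by salary. Because Python's sort (Timsort) is
# stable, employees with equal salaries keep their alphabetical ordering.
by_name = sorted(employees, key=lambda e: e[0])
by_salary_then_name = sorted(by_name, key=lambda e: e[1])
print(by_salary_then_name)
# [('bob', 70), ('carol', 70), ('alice', 90), ('dana', 90)]
```

With an unstable sort, the second pass could leave bob/carol and alice/dana in either order.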

What Real-World Languages Actually Use

No major programming language uses pure quicksort or pure merge sort in its standard library anymore. Instead, they use hybrids that combine the strengths of multiple algorithms.

  • C++ (std::sort) uses Introsort, which starts with quicksort but switches to heapsort if the recursion depth gets too deep. This gives you quicksort’s fast average performance with a guaranteed O(n log n) worst case. For small partitions, it switches to insertion sort, which is faster on tiny arrays due to lower overhead.
  • Rust uses pattern-defeating quicksort (pdqsort) for slice::sort_unstable, detecting adversarial patterns and falling back to heapsort when needed; the stable slice::sort is built on merge sort.
  • Python (sorted / list.sort) uses Timsort, which is built on merge sort but optimized to detect and exploit runs of already-sorted data. It’s stable by design.
  • Java uses a dual-pivot quicksort variant for primitive types and Timsort for objects (where stability matters).
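
The introsort idea behind the first bullet can be sketched in a few lines of Python: quicksort with a depth budget, switching to heapsort when the budget runs out. This is a toy illustration of the control flow only; unlike real introsort it allocates new lists rather than partitioning in place, and it omits the small-partition insertion sort.

```python
import heapq
import math

def introsort(a):
    """Sketch of introsort's core idea: quicksort until the recursion gets
    suspiciously deep, then fall back to heapsort for that subarray."""
    def heapsort(sub):
        heapq.heapify(sub)                  # O(n log n) regardless of input
        return [heapq.heappop(sub) for _ in range(len(sub))]

    def sort(sub, depth):
        if len(sub) <= 1:
            return sub
        if depth == 0:                      # recursion too deep: bail out
            return heapsort(sub)
        pivot = sub[len(sub) // 2]
        less  = [x for x in sub if x < pivot]
        equal = [x for x in sub if x == pivot]
        more  = [x for x in sub if x > pivot]
        return sort(less, depth - 1) + equal + sort(more, depth - 1)

    # Typical budget: a small multiple of log2(n), as in libstdc++.
    depth_limit = 2 * max(1, int(math.log2(max(len(a), 2))))
    return sort(list(a), depth_limit)
```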

The pattern is telling. Languages that don’t need stability lean toward quicksort-based hybrids for raw speed. Languages that do need stability lean toward merge sort-based hybrids. Before a standards update (LWG713), C++ actually allowed its sort to be implemented with pure quicksort, accepting the O(n²) worst case. That’s no longer permitted.

When Merge Sort Wins

Merge sort pulls ahead in several specific scenarios. For linked lists, merge sort is the clear winner because it can merge nodes by simply reassigning pointers, with no extra memory needed. Quicksort’s advantage of cache-friendly array access disappears entirely with linked lists.
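
The pointer-reassignment merge looks like this (a minimal Node class of my own, not a library type). Nodes are relinked, never copied, so merging two sorted lists needs only a constant amount of extra space.

```python
class Node:
    def __init__(self, value, nxt=None):
        self.value, self.next = value, nxt

def merge(a, b):
    """Merge two sorted linked lists by relinking nodes — no buffer needed."""
    dummy = tail = Node(None)
    while a and b:
        if a.value <= b.value:     # <= preserves stability
            tail.next, a = a, a.next
        else:
            tail.next, b = b, b.next
        tail = tail.next
    tail.next = a or b             # splice on whichever list has nodes left
    return dummy.next
```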

External sorting, where data is too large to fit in memory and lives on disk, also favors merge sort. Reading and writing large sequential blocks from disk plays to merge sort’s strengths, and the extra memory buffer it needs is trivial compared to the dataset size.
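
A sketch of that idea, assuming the records are lines of text and using Python’s tempfile and heapq.merge: sort chunk-sized runs into temporary files, then do a single sequential k-way merge over all the runs.

```python
import heapq
import tempfile

def external_sort(lines, chunk_size=1000):
    """Sketch of external merge sort: sort fixed-size chunks to temp files,
    then k-way merge the sorted runs with heapq.merge."""
    def flush(chunk):
        f = tempfile.TemporaryFile(mode="w+")
        f.writelines(s + "\n" for s in sorted(chunk))   # one sorted run per file
        f.seek(0)
        return f

    runs, chunk = [], []
    for line in lines:
        chunk.append(line)
        if len(chunk) >= chunk_size:    # chunk_size stands in for "what fits in RAM"
            runs.append(flush(chunk))
            chunk = []
    if chunk:
        runs.append(flush(chunk))
    # heapq.merge lazily merges the sorted runs in one sequential pass.
    return [line.rstrip("\n") for line in heapq.merge(*runs)]
```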

If your data is already partially sorted, merge sort (especially Timsort-style variants) can finish in nearly O(n) time by recognizing and preserving existing sorted runs. Quicksort gets no such benefit and may actually perform worse on sorted data depending on pivot selection.
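
Counting maximal non-decreasing runs shows why (hypothetical helper): fully sorted data collapses to a single run, leaving a Timsort-style merge phase with almost nothing to do.

```python
def count_runs(a):
    """Count maximal non-decreasing runs — Timsort-style sorts exploit these."""
    if not a:
        return 0
    runs = 1
    for prev, cur in zip(a, a[1:]):
        if cur < prev:          # a descent starts a new run
            runs += 1
    return runs

print(count_runs([1, 2, 3, 4]))   # 1: already sorted, a single run
print(count_runs([1, 2, 1, 2]))   # 2: one descent splits it into two runs
```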

Choosing Between Them

For sorting arrays of numbers or simple values in memory, quicksort (or a quicksort-based hybrid) is the faster choice in the vast majority of cases. The cache performance advantage is real and significant.

Pick merge sort when you need guaranteed worst-case performance, stable ordering, or you’re working with linked lists or data that doesn’t fit in memory. If you’re writing code in a high-level language, your standard library’s built-in sort has already made this decision for you, using a carefully tuned hybrid that handles edge cases you’d otherwise have to worry about yourself.