Quick sort is the algorithm behind many standard library sort routines — and for good reason. It's fast, memory-efficient, and elegant — a masterclass in divide-and-conquer problem solving.
Let's break down exactly how it works.
The Core Idea: Divide and Conquer
Quick sort follows a three-step strategy:
- Choose a pivot — pick one element from the array
- Partition — rearrange elements so everything smaller than the pivot is on the left, everything larger is on the right
- Recurse — apply quick sort to the left and right sub-arrays
After partitioning, the pivot is in its final sorted position. We then recursively sort the elements to its left and right. Eventually, every element has been a pivot, and the array is sorted.
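As a first pass, the three steps translate almost directly into code. Here's a simplified sketch (not the in-place version used in practice, which appears later in the article):

```python
def quick_sort_simple(arr):
    """Readable but not in-place: builds new lists at every level."""
    if len(arr) <= 1:           # Base case: nothing left to sort
        return arr
    pivot = arr[-1]             # Step 1: choose a pivot (last element)
    left = [x for x in arr[:-1] if x <= pivot]   # Step 2: partition
    right = [x for x in arr[:-1] if x > pivot]
    # Step 3: recurse on each side; the pivot sits between them
    return quick_sort_simple(left) + [pivot] + quick_sort_simple(right)

print(quick_sort_simple([8, 3, 1, 7, 5, 6, 2, 4]))  # → [1, 2, 3, 4, 5, 6, 7, 8]
```

This version allocates new lists at every level, so it trades the memory efficiency away for readability — useful for understanding the recursion before moving to the in-place version.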
A Concrete Example
Let's sort [8, 3, 1, 7, 5, 6, 2, 4] using the last element as the pivot:
Step 1: Pivot = 4
Partition the array around 4:
- Elements ≤ 4: [3, 1, 2]
- Pivot: [4]
- Elements > 4: [8, 7, 5, 6]
Result: [3, 1, 2, **4**, 8, 7, 5, 6] — 4 is in its final position (index 3).
Step 2: Sort left sub-array [3, 1, 2]
Pivot = 2. Partition:
- Elements ≤ 2: [1]
- Pivot: [2]
- Elements > 2: [3]
Result: [1, **2**, 3]
Step 3: Sort right sub-array [8, 7, 5, 6]
Pivot = 6. Partition:
- Elements ≤ 6: [5]
- Pivot: [6]
- Elements > 6: [8, 7]
Result: [5, **6**, 8, 7]
Continue recursively on [8, 7]:
Pivot = 7. Result: [**7**, 8]
Final result: [1, 2, 3, 4, 5, 6, 7, 8]
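The conceptual partition in Step 1 above can be checked with a couple of list comprehensions (illustration only — the in-place partition below proceeds differently and produces a different order within each side):

```python
a = [8, 3, 1, 7, 5, 6, 2, 4]
pivot = a[-1]                               # Pivot = 4 (last element)
left = [x for x in a[:-1] if x <= pivot]    # Everything ≤ 4
right = [x for x in a[:-1] if x > pivot]    # Everything > 4

print(left + [pivot] + right)  # → [3, 1, 2, 4, 8, 7, 5, 6]
```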
The Partitioning Algorithm
The partition step is where the real work happens. Here's the Lomuto partition scheme:
```python
def partition(arr, low, high):
    pivot = arr[high]  # Choose last element as pivot
    i = low - 1        # Index of smaller-element boundary
    for j in range(low, high):
        if arr[j] <= pivot:
            i += 1
            arr[i], arr[j] = arr[j], arr[i]  # Swap
    arr[i + 1], arr[high] = arr[high], arr[i + 1]  # Place pivot
    return i + 1  # Return pivot's final position
```

The variable `i` tracks the boundary between "elements ≤ pivot" and "elements > pivot." As `j` scans through the array, whenever we find an element ≤ pivot, we swap it into the left partition.
The Full Algorithm
```python
def quick_sort(arr, low=0, high=None):
    if high is None:
        high = len(arr) - 1
    if low < high:
        pivot_idx = partition(arr, low, high)
        quick_sort(arr, low, pivot_idx - 1)   # Sort left
        quick_sort(arr, pivot_idx + 1, high)  # Sort right
    return arr
```

The elegance is in its simplicity — just partition and recurse. No merging step needed (unlike merge sort) because the partition puts elements in the right relative position.
Time Complexity
| Case | Complexity | When |
|---|---|---|
| Best | O(n log n) | Pivot always splits evenly |
| Average | O(n log n) | Random data |
| Worst | O(n²) | Already sorted + bad pivot choice |
Why O(n log n) on average?
Each partition step does O(n) work — it scans every element once. If the pivot splits the array roughly in half each time, we get log n levels of recursion. Total: n × log n.
When does O(n²) happen?
If the pivot is always the smallest or largest element, one partition is empty and the other has n-1 elements. Instead of log n levels, we get n levels. This happens when:
- The array is already sorted and you always pick the first/last element as pivot
- All elements are identical (with Lomuto's `<=` comparison, every element lands on one side)
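You can watch both cases happen by counting comparisons. The sketch below instruments the Lomuto partition from earlier and runs it on shuffled versus already-sorted input (n = 300, seeded for repeatability):

```python
import random

def partition_counting(arr, low, high, counter):
    # Lomuto partition (as in the article), counting comparisons
    pivot = arr[high]
    i = low - 1
    for j in range(low, high):
        counter[0] += 1
        if arr[j] <= pivot:
            i += 1
            arr[i], arr[j] = arr[j], arr[i]
    arr[i + 1], arr[high] = arr[high], arr[i + 1]
    return i + 1

def quick_sort_counting(arr, low, high, counter):
    if low < high:
        p = partition_counting(arr, low, high, counter)
        quick_sort_counting(arr, low, p - 1, counter)
        quick_sort_counting(arr, p + 1, high, counter)

n = 300
random.seed(42)

shuffled = random.sample(range(n), n)
c_random = [0]
quick_sort_counting(shuffled, 0, n - 1, c_random)

already_sorted = list(range(n))
c_sorted = [0]
quick_sort_counting(already_sorted, 0, n - 1, c_sorted)

print(c_random[0])  # roughly proportional to n·log n (a few thousand)
print(c_sorted[0])  # exactly n(n-1)/2 = 44850: the O(n²) worst case
```

On the shuffled input the comparison count stays near n·log n; on the sorted input, each partition peels off a single element, and the count hits exactly n(n−1)/2.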
Preventing worst-case
- Randomized pivot: Pick a random element instead of always using the last
- Median-of-three: Use the median of the first, middle, and last elements
- Introsort: Switch to heap sort if recursion gets too deep (used by C++'s `std::sort`, among others)
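The randomized pivot takes only a couple of extra lines. Here's a sketch (`randomized_partition` and `randomized_quick_sort` are illustrative names) that swaps a random element into the pivot slot before running the same Lomuto partition:

```python
import random

def partition(arr, low, high):
    # Lomuto partition, as defined earlier in the article
    pivot = arr[high]
    i = low - 1
    for j in range(low, high):
        if arr[j] <= pivot:
            i += 1
            arr[i], arr[j] = arr[j], arr[i]
    arr[i + 1], arr[high] = arr[high], arr[i + 1]
    return i + 1

def randomized_partition(arr, low, high):
    k = random.randint(low, high)            # Pick a random index
    arr[k], arr[high] = arr[high], arr[k]    # Move it into the pivot slot
    return partition(arr, low, high)

def randomized_quick_sort(arr, low=0, high=None):
    if high is None:
        high = len(arr) - 1
    if low < high:
        p = randomized_partition(arr, low, high)
        randomized_quick_sort(arr, low, p - 1)
        randomized_quick_sort(arr, p + 1, high)
    return arr
```

With a random pivot, sorted input is no longer a special case — no particular input can reliably trigger the O(n²) behavior.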
Space Complexity
Quick sort is in-place — it doesn't create copies of the array. But it uses the call stack for recursion:
- Average: O(log n) stack space
- Worst: O(n) stack space
This is better than merge sort's O(n) auxiliary space for the merge buffer.
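One refinement worth knowing: if you recurse only into the smaller partition and loop on the larger one, the stack depth stays O(log n) even in the worst case. A sketch of this technique (assuming the same Lomuto partition as earlier):

```python
def partition(arr, low, high):
    # Lomuto partition, as defined earlier in the article
    pivot = arr[high]
    i = low - 1
    for j in range(low, high):
        if arr[j] <= pivot:
            i += 1
            arr[i], arr[j] = arr[j], arr[i]
    arr[i + 1], arr[high] = arr[high], arr[i + 1]
    return i + 1

def quick_sort_bounded(arr, low=0, high=None):
    if high is None:
        high = len(arr) - 1
    while low < high:
        p = partition(arr, low, high)
        # Recurse into the smaller half; loop on the larger half.
        # Each recursive call handles at most half the range, so the
        # stack can never grow past O(log n) frames.
        if p - low < high - p:
            quick_sort_bounded(arr, low, p - 1)
            low = p + 1
        else:
            quick_sort_bounded(arr, p + 1, high)
            high = p - 1
    return arr
```

Even a reverse-sorted 1000-element input sorts without deep recursion, because the degenerate side is always the one handled by the loop.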
Quick Sort vs Merge Sort
These two O(n log n) algorithms are often compared:
| Aspect | Quick Sort | Merge Sort |
|---|---|---|
| Average time | O(n log n) | O(n log n) |
| Worst time | O(n²) | O(n log n) |
| Space | O(log n) | O(n) |
| Stable? | No | Yes |
| Cache-friendly? | Yes | Less so |
| In practice | Usually faster | Guaranteed consistent |
Quick sort wins in practice because:
- Its inner loop is simple (just comparisons and swaps)
- It's cache-friendly — elements are accessed sequentially
- The O(n²) worst case is easily avoided with good pivot selection
- No extra memory allocation needed
Merge sort wins when:
- You need a guaranteed O(n log n) worst case
- Stability matters (preserving order of equal elements)
- You're sorting linked lists (merge sort is naturally suited)
Pivot Selection Strategies
The pivot choice dramatically affects performance:
Last element (Lomuto): Simple but worst-case on sorted data.
Random element: Excellent average case. Makes worst-case astronomically unlikely.
Median-of-three: Take the median of the first, middle, and last elements. Good balance of simplicity and performance.
Ninther: Median of three medians-of-three. Used in some high-performance implementations.
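A median-of-three selector takes only a few lines. In this sketch, `median_of_three` is an illustrative helper — you would swap the element it picks into the pivot slot before running the usual partition:

```python
def median_of_three(arr, low, high):
    """Return the index of the median of arr[low], arr[mid], arr[high]."""
    mid = (low + high) // 2
    a, b, c = arr[low], arr[mid], arr[high]
    if a <= b <= c or c <= b <= a:
        return mid
    if b <= a <= c or c <= a <= b:
        return low
    return high

# On sorted input, median-of-three picks the middle element,
# which splits the array evenly instead of degenerating:
print(median_of_three([1, 2, 3, 4, 5, 6, 7], 0, 6))  # → 3
```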
Why Quick Sort is King in Practice
Despite merge sort's better worst-case guarantee, quick sort is the default choice for most standard library sort implementations:
- C's `qsort()` — implementation-defined, traditionally quick sort
- C++'s `std::sort()` — typically introsort (quick sort + heap sort fallback)
- Java's `Arrays.sort()` for primitives — dual-pivot quick sort
- Python's `sorted()` — uses Timsort (based on merge sort, actually)
The constant factors in quick sort are small, cache performance is excellent, and the worst case is easily avoided.
Practice These Concepts
Apply partitioning and sorting to real problems:
- Kth Largest Element — Quickselect uses the same partition logic as quick sort
- 3Sum — Sorting enables the two-pointer technique
- Top K Frequent Elements — Compare sorting vs heap vs bucket sort
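As a taste of the first problem: quickselect reuses the partition routine but descends into only one side, giving O(n) average time for the k-th smallest element. A sketch (`quickselect` is an illustrative name; the partition is the same Lomuto routine as earlier):

```python
def partition(arr, low, high):
    # Lomuto partition, as defined earlier in the article
    pivot = arr[high]
    i = low - 1
    for j in range(low, high):
        if arr[j] <= pivot:
            i += 1
            arr[i], arr[j] = arr[j], arr[i]
    arr[i + 1], arr[high] = arr[high], arr[i + 1]
    return i + 1

def quickselect(arr, k):
    """Return the k-th smallest element (k is 0-indexed). Mutates arr."""
    low, high = 0, len(arr) - 1
    while True:
        p = partition(arr, low, high)
        if p == k:              # Pivot landed exactly at position k
            return arr[p]
        elif p < k:             # Answer is in the right partition
            low = p + 1
        else:                   # Answer is in the left partition
            high = p - 1

print(quickselect([8, 3, 1, 7, 5, 6, 2, 4], 3))  # → 4 (the 4th smallest)
```

Because only one side is explored after each partition, the expected work is n + n/2 + n/4 + … = O(n), versus quick sort's O(n log n).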
Related Articles
- Merge Sort: Divide and Conquer Made Simple — Another O(n log n) algorithm with guaranteed worst-case performance
- Bubble Sort Explained with Visualizations — The simplest sorting algorithm and a great starting point
- Big O Notation: A Practical Guide — Understand what O(n log n) vs O(n²) really means
- What is an Algorithm? The Complete Beginner's Guide — Start here if you're new to algorithms
See It In Action
Quick sort's recursive partitioning is much easier to understand when you can see it happening. Our interactive visualization shows:
- The pivot being selected and highlighted
- Elements being compared and swapped during partitioning
- The recursive breakdown into smaller and smaller sub-arrays
- Each pivot landing in its final sorted position
Step through it at your own pace, or watch it run at full speed to appreciate its efficiency.