Big-O Complexity Guide
Big-O notation is the language programmers use to describe how an algorithm's time or space requirements grow as the input size grows. It lets you compare algorithms independently of hardware and implementation details.
What Is Big-O?
Big-O describes the upper bound of an algorithm's growth rate. It captures the worst-case behavior and ignores constants and lower-order terms, because those become irrelevant at large scale.
Formally: f(n) is O(g(n)) if there exist constants c > 0 and n₀ such that f(n) ≤ c·g(n) for all n ≥ n₀.
In practice: ask yourself "if I double the input size, what happens to the running time?"
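The formal definition can be checked mechanically. Here is a quick sanity check in Python; the function f and the witness constants c = 4 and n₀ = 10 are illustrative choices for this example, not canonical values:

```python
def f(n):
    # Example cost function: f(n) = 3n + 10, which we claim is O(n)
    return 3 * n + 10

# Witness constants for the definition: c = 4, n0 = 10 (one valid choice of many)
c, n0 = 4, 10
assert all(f(n) <= c * n for n in range(n0, 10_000))

# Doubling the input roughly doubles f(n) at large n, the hallmark of O(n)
print(f(1_000_000) / f(500_000))  # ≈ 2
```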
Complexity Classes
O(1) — Constant Time
The operation takes the same time regardless of input size. Array index access, hash table lookup (average), and pushing to a stack are all O(1).
arr = [1, 2, 3, 4, 5]
first = arr[0] # O(1) — always one operation regardless of array size
cache = {}
cache["key"] = "value" # O(1) amortized
val = cache.get("key") # O(1) average
O(log n) — Logarithmic Time
Each step halves the remaining problem. Binary search on a sorted array is O(log n) — searching 1 million elements takes ~20 steps. Balanced BST operations (insert, search, delete) are O(log n).
def binary_search(arr, target):
    lo, hi = 0, len(arr) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            lo = mid + 1  # discard left half
        else:
            hi = mid - 1  # discard right half
    return -1
# O(log n) — halves the search space each iteration
O(n) — Linear Time
Work scales proportionally with input size. Iterating through an array, finding max/min, and linear search are O(n).
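As a minimal sketch, a hand-rolled max scan makes the proportionality concrete: one pass, constant work per element (find_max is an illustrative helper, not a library function).

```python
def find_max(arr):
    """Return the largest element. O(n): one pass, constant work per item."""
    best = arr[0]
    for x in arr[1:]:
        if x > best:
            best = x
    return best

find_max([3, 1, 4, 1, 5])  # returns 5
```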
O(n log n) — Linearithmic Time
The best achievable growth rate for comparison-based sorting: no comparison sort can beat Ω(n log n) in the worst case. Merge sort, heap sort, and quicksort (average case) are O(n log n).
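A compact merge sort sketch shows where the n log n comes from: the recursion splits the input about log n times, and each level does O(n) merge work.

```python
def merge_sort(arr):
    """O(n log n): log n levels of splitting, O(n) merge work per level."""
    if len(arr) <= 1:
        return arr
    mid = len(arr) // 2
    left, right = merge_sort(arr[:mid]), merge_sort(arr[mid:])
    # Merge the two sorted halves in linear time
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    return merged + left[i:] + right[j:]
```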
O(n²) — Quadratic Time
Typically appears with nested loops over the same input. Bubble sort, insertion sort, and selection sort are O(n²). Checking all pairs in an array is O(n²).
# O(n²) — nested loop checks every pair
def has_duplicate(arr):
    for i in range(len(arr)):
        for j in range(i + 1, len(arr)):
            if arr[i] == arr[j]:
                return True
    return False

# O(n) average time (at the cost of O(n) extra space) using a set
def has_duplicate_fast(arr):
    return len(arr) != len(set(arr))
O(2ⁿ) — Exponential Time
Running time doubles with each additional input element. Generating all subsets of a set is O(2ⁿ). Naive recursive Fibonacci is exponential due to repeated subproblems.
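The repeated-subproblem blowup, and the standard memoization fix, can be seen side by side (fib_naive and fib_memo are illustrative names):

```python
from functools import lru_cache

def fib_naive(n):
    """Exponential time: recomputes the same subproblems over and over."""
    if n < 2:
        return n
    return fib_naive(n - 1) + fib_naive(n - 2)

@lru_cache(maxsize=None)
def fib_memo(n):
    """O(n): each subproblem is computed once, then served from the cache."""
    if n < 2:
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)

fib_naive(10)  # returns 55, already thousands of calls
fib_memo(50)   # instant; fib_naive(50) would take hours
```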
Visual Comparison
| Complexity | Approx. operations at n = 1,000 |
|---|---|
| O(1) | 1 |
| O(log n) | ~10 |
| O(n) | 1,000 |
| O(n log n) | ~10,000 |
| O(n²) | 1,000,000 |
| O(2ⁿ) | ~10³⁰¹ (intractable) |
Space Complexity
Space complexity measures the auxiliary memory an algorithm uses, not counting the input itself.
- O(1) space: In-place algorithms. Swapping two elements, bubble sort.
- O(log n) space: Recursive algorithms with depth proportional to log n. Recursive binary search.
- O(n) space: Creating a copy of the input. Merge sort's merge step.
- O(n) stack space: Recursive algorithms with depth proportional to n. Recursive traversal of a degenerate (linked-list-shaped) tree.
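The O(1)-versus-O(n) space distinction shows up in two ways to reverse a list (illustrative helpers, not library functions):

```python
def reverse_in_place(arr):
    """O(1) auxiliary space: swaps elements within the input buffer."""
    lo, hi = 0, len(arr) - 1
    while lo < hi:
        arr[lo], arr[hi] = arr[hi], arr[lo]
        lo += 1
        hi -= 1
    return arr

def reverse_copy(arr):
    """O(n) auxiliary space: allocates a whole new list of the same size."""
    return arr[::-1]
```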
Common Patterns
| Code Pattern | Complexity | Example |
|---|---|---|
| Single loop over input | O(n) | Array sum, linear search |
| Nested loops, same input | O(n²) | Bubble sort, all pairs |
| Divide in half each step | O(log n) | Binary search |
| Loop + halving inside | O(n log n) | Merge sort |
| Hash table lookup | O(1) avg | Cache, set membership |
| Recursion on all subsets | O(2ⁿ) | Power set, backtracking |
| Recursion on all permutations | O(n!) | TSP brute force |
Amortized Analysis
Amortized analysis looks at the average cost per operation over a sequence, not the worst-case cost of a single operation.
A dynamic array (like Python's list or Java's ArrayList) grows its internal buffer geometrically when full; in the classic analysis, it doubles. A single append can therefore trigger an O(n) copy, but copies become exponentially rarer as the array grows: after a resize to capacity 2n, the next n appends are cheap. Over n appends, the total work is O(n), so the amortized cost per append is O(1).
# Each append is O(1) amortized, even though occasionally O(n)
arr = []
for i in range(1_000_000):
    arr.append(i)  # amortized O(1) — the list grows capacity geometrically
# Total work: n + n/2 + n/4 + ... ≤ 2n = O(n) for n appends
# Per operation: O(n) / n = O(1) amortized
Other amortized O(1) operations: dict.__setitem__, set.add, stack push. All may occasionally trigger a resize but are O(1) amortized.
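The doubling strategy can be sketched as a toy dynamic array. This is a simplified model; real implementations such as CPython's list use a smaller growth factor, but the amortized argument is the same.

```python
class DynamicArray:
    """Toy dynamic array: doubles capacity when full, so append is O(1) amortized."""

    def __init__(self):
        self.capacity = 1
        self.size = 0
        self.buffer = [None] * self.capacity
        self.copies = 0  # total elements moved by resizes, for inspection

    def append(self, item):
        if self.size == self.capacity:
            self._resize(2 * self.capacity)  # the occasional O(n) step
        self.buffer[self.size] = item
        self.size += 1

    def _resize(self, new_capacity):
        new_buffer = [None] * new_capacity
        for i in range(self.size):  # copy every existing element
            new_buffer[i] = self.buffer[i]
        self.copies += self.size
        self.buffer = new_buffer
        self.capacity = new_capacity

arr = DynamicArray()
for i in range(1024):
    arr.append(i)
# Resizes copied 1 + 2 + 4 + ... + 512 = 1023 elements total,
# i.e. less than 2n work for n = 1024 appends: O(1) amortized per append.
```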