Algorithmic Complexity

Algorithmic complexity is the foundation of computer science and systems programming. Understanding how algorithms scale with input size is crucial for designing efficient systems, especially in performance-critical domains like high-frequency trading where every microsecond counts.

Let's explore the concepts of algorithmic complexity and understand how to analyze and compare algorithms.

What is Algorithmic Complexity?

Algorithmic complexity measures how the performance of an algorithm changes as the input size grows. It helps us answer questions like:

  • How fast will this algorithm run on larger inputs?
  • How much memory will it use?
  • Is this the best algorithm for my problem?

Big O Notation: The Language of Complexity

Big O notation describes the upper bound of an algorithm's growth rate. It tells us how the algorithm's performance scales as input size increases.

Understanding the Notation

O(f(n)) means the algorithm's performance grows no faster than f(n) as n (input size) increases. Key insight: We care about the growth rate, not the exact number of operations.

O(1) - Constant Time

The algorithm takes the same amount of time regardless of input size.

cpp
int getFirstElement(const std::vector<int>& arr) {
    return arr[0];  // Always one operation (assumes arr is non-empty)
}

Real-world example: Array access, hash table lookup (average case)
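
For illustration, an average-case O(1) hash-table lookup with std::unordered_map might look like this (the names lookupPrice and the stock-symbol data are illustrative, not from any particular API):

```cpp
#include <string>
#include <unordered_map>

// Average case O(1): the key is hashed straight to a bucket,
// so the lookup cost does not grow with the number of entries.
int lookupPrice(const std::unordered_map<std::string, int>& prices,
                const std::string& symbol) {
    auto it = prices.find(symbol);
    return it != prices.end() ? it->second : -1;
}
```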

O(log n) - Logarithmic Time

The algorithm's time grows logarithmically with input size.

cpp
int binarySearch(const std::vector<int>& arr, int target) {
    int left = 0, right = arr.size() - 1;

    while (left <= right) {
        int mid = left + (right - left) / 2;
        if (arr[mid] == target) return mid;
        if (arr[mid] < target) left = mid + 1;
        else right = mid - 1;
    }
    return -1;
}

Why log n? Each iteration halves the search space.

Real-world example: Binary search, balanced tree operations
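
As a rough sanity check of the log n claim, you can count the halvings directly. This sketch (the helper name halvings is ours) counts how many times a search space of size n can be cut in half before only one element remains, which tracks binary search's worst-case iteration count to within one:

```cpp
// Count how many times n can be halved before reaching one element.
// This equals floor(log2(n)) - e.g. a million elements need about 20 halvings.
int halvings(long long n) {
    int steps = 0;
    while (n > 1) {
        n /= 2;
        ++steps;
    }
    return steps;
}
```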

O(n) - Linear Time

The algorithm's time grows linearly with input size.

cpp
int findMax(const std::vector<int>& arr) {
    int max = arr[0];  // Assumes arr is non-empty
    for (std::size_t i = 1; i < arr.size(); i++) {
        if (arr[i] > max) max = arr[i];
    }
    return max;
}

Real-world example: Linear search, array traversal, most single loops

O(n log n) - Linearithmic Time

Common in efficient sorting and divide-and-conquer algorithms.

cpp
// Merge sort complexity
void mergeSort(std::vector<int>& arr, int left, int right) {
    if (left < right) {
        int mid = left + (right - left) / 2;
        mergeSort(arr, left, mid);      // T(n/2)
        mergeSort(arr, mid + 1, right); // T(n/2)
        merge(arr, left, mid, right);   // O(n)
    }
}
// Total: T(n) = 2T(n/2) + O(n) = O(n log n)

Real-world example: Merge sort, quick sort (average case), heap sort

O(n²) - Quadratic Time

The algorithm's time grows quadratically with input size.

cpp
void bubbleSort(std::vector<int>& arr) {
    for (int i = 0; i < arr.size(); i++) {
        for (int j = 0; j < arr.size() - i - 1; j++) {
            if (arr[j] > arr[j + 1]) {
                std::swap(arr[j], arr[j + 1]);
            }
        }
    }
}

Real-world example: Bubble sort, selection sort, insertion sort, nested loops

O(2ⁿ) - Exponential Time

The algorithm's time grows exponentially with input size.

cpp
int fibonacci(int n) {
    if (n <= 1) return n;
    return fibonacci(n - 1) + fibonacci(n - 2);
}

Real-world example: Recursive Fibonacci, traveling salesman problem (naive), subset generation

Space Complexity: Memory Usage Analysis

Space complexity measures how much memory an algorithm uses relative to input size.

Examples

cpp
// O(1) space - constant memory usage
int sumArray(const std::vector<int>& arr) {
    int sum = 0;
    for (int num : arr) {
        sum += num;
    }
    return sum;
}

// O(n) space - linear memory usage
std::vector<int> reverseArray(const std::vector<int>& arr) {
    std::vector<int> result(arr.size());
    for (int i = 0; i < arr.size(); i++) {
        result[arr.size() - 1 - i] = arr[i];
    }
    return result;
}

// O(n) space - recursive call stack
int factorial(int n) {
    if (n <= 1) return 1;
    return n * factorial(n - 1);  // n recursive calls = O(n) stack space
}
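
For contrast with the recursive version above, the same computation written as a loop needs no call stack and therefore only O(1) extra space (the name factorialIterative is ours):

```cpp
// O(1) space - an iterative factorial avoids the recursive call stack.
// Uses long long since factorials overflow int quickly.
long long factorialIterative(int n) {
    long long result = 1;
    for (int i = 2; i <= n; ++i) {
        result *= i;
    }
    return result;
}
```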

Practical Complexity Analysis

1. Identify the Basic Operations

What operations are most expensive or most frequent?

cpp
int findElement(const std::vector<int>& arr, int target) {
    for (int i = 0; i < arr.size(); i++) {  // Loop: n iterations
        if (arr[i] == target) {             // Comparison: O(1)
            return i;                       // Return: O(1)
        }
    }
    return -1;                              // Return: O(1)
}

Basic operation: Comparison (arr[i] == target)

2. Count Operations in Terms of Input Size

cpp
// Best case: target is first element
// Operations: 1 comparison = O(1)

// Worst case: target is not found
// Operations: n comparisons = O(n)

// Average case: target is equally likely at any position
// Operations: about n/2 comparisons on average = O(n)
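
The best/worst cases above can be made directly observable by instrumenting the search to report how many comparisons it performed (countComparisons is an illustrative name):

```cpp
#include <vector>

// Instrumented linear search: returns the number of comparisons made,
// so the best case (1) and worst case (n) can be checked empirically.
int countComparisons(const std::vector<int>& arr, int target) {
    int comparisons = 0;
    for (std::size_t i = 0; i < arr.size(); ++i) {
        ++comparisons;
        if (arr[i] == target) break;
    }
    return comparisons;
}
```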

3. Determine the Dominant Term

When analyzing complexity, focus on the fastest-growing term:

cpp
void algorithm(int n) {
    for (int i = 0; i < n; i++) {           // O(n)
        for (int j = 0; j < n; j++) {       // O(n)
            // Some operation
        }
    }

    for (int k = 0; k < n; k++) {           // O(n)
        // Some operation
    }
}
// Total: O(n²) + O(n) = O(n²)

Rule: Drop constants and lower-order terms.

Common Algorithm Complexities

Sorting Algorithms Comparison

| Algorithm      | Best Case  | Average Case | Worst Case | Space    |
|----------------|------------|--------------|------------|----------|
| Bubble Sort    | O(n)       | O(n²)        | O(n²)      | O(1)     |
| Selection Sort | O(n²)      | O(n²)        | O(n²)      | O(1)     |
| Insertion Sort | O(n)       | O(n²)        | O(n²)      | O(1)     |
| Merge Sort     | O(n log n) | O(n log n)   | O(n log n) | O(n)     |
| Quick Sort     | O(n log n) | O(n log n)   | O(n²)      | O(log n) |
| Heap Sort      | O(n log n) | O(n log n)   | O(n log n) | O(1)     |

Search Algorithms Comparison

| Algorithm          | Time Complexity  | Space Complexity | When to Use                      |
|--------------------|------------------|------------------|----------------------------------|
| Linear Search      | O(n)             | O(1)             | Small arrays, unsorted data      |
| Binary Search      | O(log n)         | O(1)             | Sorted arrays                    |
| Hash Table         | O(1) average     | O(n)             | Fast lookups, no ordering needed |
| Binary Search Tree | O(log n) average | O(n)             | Ordered data, range queries      |
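
The standard library already provides the O(log n) option from the table: for a sorted std::vector, std::lower_bound performs a binary search. A minimal sketch (the name containsSorted is ours):

```cpp
#include <algorithm>
#include <vector>

// O(log n) membership test on a sorted vector via std::lower_bound,
// which returns the first element not less than target.
bool containsSorted(const std::vector<int>& sorted, int target) {
    auto it = std::lower_bound(sorted.begin(), sorted.end(), target);
    return it != sorted.end() && *it == target;
}
```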

When Complexity Matters

High-frequency trading systems:

cpp
// Bad: O(n²) algorithm for order matching
void matchOrders(std::vector<Order>& orders) {
    for (int i = 0; i < orders.size(); i++) {
        for (int j = i + 1; j < orders.size(); j++) {
            if (canMatch(orders[i], orders[j])) {
                executeMatch(orders[i], orders[j]);
            }
        }
    }
}

// Better: O(n log n) using sorting
void matchOrders(std::vector<Order>& orders) {
    std::sort(orders.begin(), orders.end(), compareByPrice);
    // Process sorted orders in O(n)
}

Hidden Complexity

String operations:

cpp
// O(n²) - rebuilding the string each iteration
std::string result = "";
for (int i = 0; i < n; i++) {
    result = result + "x";  // operator+ copies the whole string: O(n) per step
}
// Note: result += "x" would instead append in place, amortized O(1) per call

// O(n) - pre-allocate space
std::string result(n, 'x');  // Direct construction

Container operations:

cpp
// O(n) - vector insertion at beginning
std::vector<int> vec;
for (int i = 0; i < n; i++) {
    vec.insert(vec.begin(), i);  // Shifts all elements
}

// O(1) - vector insertion at end
std::vector<int> vec;
for (int i = 0; i < n; i++) {
    vec.push_back(i);  // Amortized O(1)
}
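
If front insertion is genuinely what the problem needs, std::deque supports constant-time push_front, so the quadratic shifting above disappears. A sketch (the name buildFrontLoaded is ours):

```cpp
#include <deque>

// O(1) per insertion at the front - a deque never shifts existing elements.
std::deque<int> buildFrontLoaded(int n) {
    std::deque<int> dq;
    for (int i = 0; i < n; ++i) {
        dq.push_front(i);  // constant time, unlike vector::insert at begin()
    }
    return dq;
}
```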

Amortized Analysis

Some operations have varying costs, but average out over time.

Dynamic array (std::vector):

cpp
std::vector<int> vec;
for (int i = 0; i < 1000; i++) {
    vec.push_back(i);  // Most operations: O(1)
    // Occasionally: O(n) when resizing
    // Amortized: O(1) per operation
}
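
You can observe the occasional expensive resizes by watching capacity(): it jumps only at reallocations, and because growth is geometric there are only O(log n) of them. A sketch (countReallocations is our name; the growth factor itself is implementation-defined):

```cpp
#include <cstddef>
#include <vector>

// Count how many reallocations occur while pushing n elements.
// Each reallocation costs O(current size), but with geometric growth
// there are only O(log n) of them - hence amortized O(1) per push_back.
int countReallocations(int n) {
    std::vector<int> vec;
    std::size_t lastCapacity = vec.capacity();
    int reallocations = 0;
    for (int i = 0; i < n; ++i) {
        vec.push_back(i);
        if (vec.capacity() != lastCapacity) {
            ++reallocations;
            lastCapacity = vec.capacity();
        }
    }
    return reallocations;
}
```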

Why amortized analysis matters:

  • Individual operations may be expensive
  • Average cost over many operations is often much better
  • Important for understanding real-world performance

Complexity vs Performance

Important distinction:

  • Complexity: How algorithm scales with input size
  • Performance: Actual runtime on specific hardware

Example:

cpp
// O(n²) but fast for small inputs
void bubbleSort(std::vector<int>& arr) {
    // Simple, cache-friendly, good for small arrays
}

// O(n log n) but slower for small inputs
void quickSort(std::vector<int>& arr) {
    // Complex, function call overhead, better for large arrays
}

When to consider actual performance:

  • Small input sizes (n < 100)
  • Cache effects and memory access patterns
  • Hardware-specific optimizations
  • Constant factors matter

Algorithm Design Strategies

Divide and Conquer

Break problem into smaller subproblems, solve recursively, combine results.

cpp
// Merge sort: O(n log n)
std::vector<int> mergeSort(const std::vector<int>& arr) {
    if (arr.size() <= 1) return arr;

    // Divide
    int mid = arr.size() / 2;
    auto left = mergeSort({arr.begin(), arr.begin() + mid});
    auto right = mergeSort({arr.begin() + mid, arr.end()});

    // Combine
    return merge(left, right);
}
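
The merge step is left undefined above; a straightforward implementation compatible with that call, O(n) time and O(n) extra space, could look like this:

```cpp
#include <cstddef>
#include <vector>

// Merge two sorted vectors into one sorted vector in a single O(n) pass.
std::vector<int> merge(const std::vector<int>& left,
                       const std::vector<int>& right) {
    std::vector<int> result;
    result.reserve(left.size() + right.size());
    std::size_t i = 0, j = 0;
    while (i < left.size() && j < right.size()) {
        if (left[i] <= right[j]) result.push_back(left[i++]);
        else                     result.push_back(right[j++]);
    }
    // Copy whichever side has elements remaining.
    while (i < left.size())  result.push_back(left[i++]);
    while (j < right.size()) result.push_back(right[j++]);
    return result;
}
```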

Dynamic Programming

Store solutions to subproblems to avoid recomputation.

cpp
// Fibonacci with memoization: O(n) time, O(n) space
int fibonacci(int n, std::vector<int>& memo) {
    if (n <= 1) return n;
    if (memo[n] != -1) return memo[n];

    memo[n] = fibonacci(n - 1, memo) + fibonacci(n - 2, memo);
    return memo[n];
}
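
A caller needs to size the memo table to n + 1 and fill it with a sentinel (-1 here) before the first call. This self-contained sketch reproduces the function above and adds a convenience wrapper (fib is our name):

```cpp
#include <vector>

// Memoized Fibonacci, as above: O(n) time, O(n) space.
int fibonacci(int n, std::vector<int>& memo) {
    if (n <= 1) return n;
    if (memo[n] != -1) return memo[n];
    memo[n] = fibonacci(n - 1, memo) + fibonacci(n - 2, memo);
    return memo[n];
}

// Wrapper that prepares the memo table: -1 marks "not computed yet".
int fib(int n) {
    std::vector<int> memo(n + 1, -1);
    return fibonacci(n, memo);
}
```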

Greedy Algorithms

Make locally optimal choice at each step.

cpp
// Activity selection: O(n log n)
std::vector<Activity> selectActivities(std::vector<Activity>& activities) {
    std::sort(activities.begin(), activities.end(),
              [](const Activity& a, const Activity& b) {
                  return a.end < b.end;
              });

    std::vector<Activity> selected;
    int lastEnd = 0;

    for (const auto& activity : activities) {
        if (activity.start >= lastEnd) {
            selected.push_back(activity);
            lastEnd = activity.end;
        }
    }

    return selected;
}

The Bottom Line

Algorithmic complexity is essential for:

  • Algorithm selection: Choosing the right algorithm for your problem
  • Performance prediction: Understanding how algorithms will scale
  • System design: Designing systems that can handle expected loads
  • Optimization: Identifying bottlenecks and improvement opportunities

Key principles:

  • Focus on growth rate, not exact operation counts
  • Consider both time and space complexity
  • Understand when complexity matters vs when it doesn't
  • Use complexity analysis to guide algorithm selection

Remember: The best algorithm depends on your specific constraints - input size, performance requirements, memory limitations, and implementation complexity. Complexity analysis helps you make informed decisions, but it's just one tool in the systems programmer's toolkit.

Questions

Q: What does O(n) time complexity mean?

O(n) means the algorithm's runtime grows linearly with the input size.

Q: Which sorting algorithm has the best average-case time complexity?

Quick sort has O(n log n) average-case time complexity, which is optimal for comparison-based sorting.

Q: What is the time complexity of binary search?

Binary search has O(log n) time complexity because it halves the search space in each iteration.

Q: What does space complexity measure?

Space complexity measures the amount of memory an algorithm uses relative to the input size.

Q: Which complexity class represents the fastest growing algorithms?

O(2ⁿ) represents exponential growth, the fastest-growing of the complexity classes covered here.