LanRaccoon

Practical tech advice


What is computational complexity?

Complexity analysis leans toward the more theoretical side of computer science. Ironic, I know, given this site’s motto, but bear with me: it might just be worth it in the end.

Allow me to answer the first question you should have: what exactly is computational complexity? I’m glad you asked! Computational complexity is one of the measuring sticks we use to compare different solutions, in an attempt to decide which one is the better choice.

What are we measuring?

Our goal is to decide which solution is better. Usually, that means how fast the algorithm does its job. The problem with this approach is that computational speed is influenced by both the algorithm’s design and the hardware it runs on. It wouldn’t be any fun to always decide that the best algorithm is the one that runs on the fastest computer, right?! The good news is that a better algorithm will always run faster, provided the input is large enough. In classic fable terms: any slow turtle can beat the hare if they run on different tracks (in this analogy, the animal is the computer and the race track is the solution).

This sounds hard – how do we measure it?

Counting all the operations an algorithm makes is borderline impossible, so instead we try to get a feel for how many operations we need for a given input. We settle for an approximation, and the upper bound of that approximation is the famous big O notation.

To get a simple form of the big O notation, you can do this: if you have an input of n elements and your program does, let’s say, 45 * n + 55 operations, you ignore the constants and keep only the biggest term involving n. So the big O notation for our algorithm would be O(n). This means that if you double the input, your program will run for about twice as long.
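
Here’s a minimal Python sketch of a function with that 45 * n + 55 shape (the exact constants are made up and don’t matter):

    def process(items):
        total = 0            # setup: a constant number of steps
        for x in items:      # the loop body runs n times
            total += x       # constant amount of work per element
        return total         # roughly c1 * n + c2 operations -> O(n)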

More so: when you add two solutions in big O notation (as in, your program solves two problems one after the other), the bigger one wins. That is, O(n^2) + O(n) = O(n^2). If the alternatives are equal, you can pick either one. This is a quick and dirty way of getting to a solution, and it tends to work in practice when dealing with algorithms. There are times when the bigger term is not obvious; you then have to go back to basics and find a formula that serves as an upper bound for both terms.
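
A small sketch of that addition rule: two phases run one after the other, and the slower phase dominates. The function and its job are made up for illustration:

    def solve_both(items):
        n = len(items)
        # Phase 1: compare every pair -> about n * n steps, O(n^2).
        duplicates = 0
        for i in range(n):
            for j in range(i + 1, n):
                if items[i] == items[j]:
                    duplicates += 1
        # Phase 2: a single pass over the list -> O(n).
        total = sum(items)
        # O(n^2) + O(n) = O(n^2): the bigger term wins.
        return duplicates, total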

Multiplication happens when you solve another problem on every step of your algorithm. It works pretty much as expected, as in O(n) * O(n^2) = O(n^3).
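
In code, the multiplication rule can look like this hypothetical sketch: an O(n) loop that solves an O(n^2) sub-problem on every step, giving O(n^3) overall:

    def count_increasing_triples(items):
        n = len(items)
        count = 0
        for i in range(n):                  # O(n) steps...
            for j in range(i + 1, n):       # ...each solving an O(n^2)
                for k in range(j + 1, n):   # sub-problem (checking pairs)
                    if items[i] < items[j] < items[k]:
                        count += 1
        return count                        # O(n) * O(n^2) = O(n^3)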

If you want to read more about the subject, you can check out the Wikipedia article on big O notation.

Time for some examples

Let’s take some examples and see what their computational complexity is.

O(1)

The execution time remains the same, regardless of the input size. Some examples (a short sketch in code follows the list):

  • Basic arithmetic operations (addition, subtraction, etc.)
  • Accessing an element in an array by index
  • Searching for an element in a hash-map / dictionary object
  • Searching for a record in a database using an indexed column
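
A few of these in Python (note that dictionary lookups are O(1) on average, not in the absolute worst case):

    data = [10, 20, 30, 40]
    ages = {"alice": 31, "bob": 27}

    data[2]          # access by index: O(1)
    ages["bob"]      # hash-map / dictionary lookup: O(1) on average
    data.append(50)  # appending to a dynamic array: O(1) amortized
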
O(log(n))

If you can’t design anything that works in O(1), this is usually the next best thing. You can double the input and the running time only increases by one unit. Pretty sweet deal, I’d say. Some common algorithms are:

  • Searching for an element in a sorted list (binary search; see the sketch after this list)
  • Insert / delete operations on a binary tree / heap
  • Raising to a power (exponentiation by squaring)
    • Strictly speaking, this is not O(log(n)), because the size of the result can grow linearly with the size of the input. However, I still think it’s a neat algorithm and worth mentioning. Furthermore, it behaves quite well in practice.
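
Here is the classic example from the first bullet: a minimal binary search sketch that halves the search range on every step, hence O(log(n)):

    def binary_search(sorted_items, target):
        """Return the index of target in sorted_items, or -1 if absent."""
        lo, hi = 0, len(sorted_items) - 1
        while lo <= hi:
            mid = (lo + hi) // 2              # halve the range each step
            if sorted_items[mid] == target:
                return mid
            elif sorted_items[mid] < target:
                lo = mid + 1
            else:
                hi = mid - 1
        return -1

    binary_search([2, 3, 5, 7, 11, 13], 7)    # -> 3
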
O(n)

The execution time grows linearly with the input size: doubling the input size doubles the execution time. Not a great deal, but it could work.

  • Searching for an element in an unordered list (see the sketch after this list)
  • Searching through a database table using a column that is not indexed (a full table scan)
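
The unordered-list case in code: with no ordering to exploit, you have to scan every element, so the work grows linearly:

    def linear_search(items, target):
        """Worst case we look at every element: O(n)."""
        for i, x in enumerate(items):
            if x == target:
                return i
        return -1

    linear_search([8, 3, 5, 1], 5)   # -> 2
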
O(n*log(n))

These are slightly slower than O(n), but they still offer pretty good performance.

  • Sorting – most general-purpose sorting algorithms (e.g. merge sort, sketched below)
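
A minimal merge sort sketch: the list is split in half about log(n) times, and each level does O(n) merging work, for O(n*log(n)) overall:

    def merge_sort(items):
        if len(items) <= 1:
            return items
        mid = len(items) // 2
        left = merge_sort(items[:mid])            # sort each half...
        right = merge_sort(items[mid:])
        merged, i, j = [], 0, 0
        while i < len(left) and j < len(right):   # ...then merge them
            if left[i] <= right[j]:
                merged.append(left[i])
                i += 1
            else:
                merged.append(right[j])
                j += 1
        merged.extend(left[i:])
        merged.extend(right[j:])
        return merged
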
O(n^2)

The execution time grows with the square of the input size. Therefore, with double the input, the program takes four times as long to finish.

I’m adding all the polynomial algorithms to this bucket (O(n^x), regardless of x). They are pretty similar, so it’s not exactly cheating. Just go with it!

  • Generating pairs of elements (see the sketch after this list)
  • Inefficient sorting (bubble sort)
  • Joining tables
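
Generating pairs, sketched in Python: every element is paired with every later one, which gives n*(n-1)/2 pairs, hence O(n^2):

    def all_pairs(items):
        pairs = []
        for i in range(len(items)):
            for j in range(i + 1, len(items)):
                pairs.append((items[i], items[j]))
        return pairs

    all_pairs([1, 2, 3])   # -> [(1, 2), (1, 3), (2, 3)]
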
O(n!)

These are exploration algorithms: you take the input and look at all possible solutions to see which ones work.

I’m also including exponential time like O(2^n) here. The execution time grows so quickly that on an average computer you can usually handle an input size only in the low tens (around 30 or 40). Avoid using these if you can.

  • Backtracking
  • The traveling salesman problem (brute force; see the sketch after this list)
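
And here is a brute-force sketch of the traveling salesman problem: it tries every possible ordering of the cities, which is exactly what makes it factorial. The dist matrix and the function name are just illustrative:

    from itertools import permutations

    def shortest_tour(dist):
        """dist is an n x n matrix of distances between cities."""
        n = len(dist)
        best_cost, best_tour = float("inf"), None
        for perm in permutations(range(1, n)):   # (n-1)! orderings
            tour = (0,) + perm + (0,)            # start and end at city 0
            cost = sum(dist[tour[i]][tour[i + 1]] for i in range(n))
            if cost < best_cost:
                best_cost, best_tour = cost, tour
        return best_cost, best_tour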

Disclaimer and other notations

The big O notation is a simple way to provide an upper bound on the growth rate of the execution time. In most cases that is enough; however, there are other notations that can be used if the situation demands it.

  • Ω – big omega – the execution time grows at least as fast as the given function. It’s a lower bound on the execution time.
  • Θ – big theta – the algorithm is bounded by both big O and big omega. This translates into: the execution time grows exactly as fast as the given function.

Each of these notations also has a ‘little’ variant (little o, little omega). I’m not going to get into the details, but I just wanted to let you know they exist.
