Computer latency – What is the cost of latency

Usually, when we think about improving the performance of a program, our mind wanders towards the Big O notation. As good as that model may be, there are more aspects of performance we have to consider. One of the important assumptions we make when we use the Big O notation is that all operations are created equal and take the same time to execute. I’m sorry to burst your bubble, but the operations we use are not the same. The time they take to execute varies widely; therefore, if you are aware of the different operation types, you can truly improve the performance of your code.

CPUs have gotten faster

Millions of operations per second of different CPUs through the years

This is the evolution of the speed of the average CPU, in millions of operations per second. Source here. You may have seen it, or you may be familiar with Moore’s law. Getting back to the graph above, you might notice that CPU speed has been increasing steadily for a while now. Unfortunately, programs need more than CPUs to run. They also need RAM, storage and, most of the time, a network connection.

For example: while CPUs on average performed 12 times more operations per second in 2017 compared to 2007, the average internet speed in the USA increased merely 6 times (source here). The point I am trying to make is that CPUs have gotten faster, and the rest of the computer has a hard time keeping up.

Overview of computer latency

Here is an overview of the time it takes to perform different types of tasks. To make things easier to grasp, I will turn those times into dollars, so you have a better sense of the resources you are spending.

  • 1 CPU cycle – 0.4 ns – $1 – a cup of coffee
  • Level 1 cache – 0.9 ns – $2 – a cheap menu at McDonald’s
  • Level 2 cache – 2.8 ns – $7 – your average stapler
  • Level 3 cache – 28 ns – $70 – a Netflix subscription for 6 months
  • Referencing main memory – 100 ns – $250 – a cheap smartphone
  • Reading 1 MB from an SSD – 1 ms – $2,500,000 – a really nice house
  • Reading 1 MB from an HDD – 10 ms – $25,000,000 – a Fortune 500 company’s yearly profits
  • Sending a packet from San Francisco to Hong Kong – 141 ms – $352,500,000 – a bit more than the GDP of Micronesia in 2017
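To make the dollar analogy concrete, here is a tiny Python sketch of the scaling used above (one 0.4 ns CPU cycle = $1, so each nanosecond costs $2.50):

```python
# One CPU cycle (0.4 ns) costs $1, so each nanosecond costs $2.50.
DOLLARS_PER_NS = 2.5

def latency_to_dollars(nanoseconds):
    return nanoseconds * DOLLARS_PER_NS

print(latency_to_dollars(100))        # main-memory reference: 250.0
print(latency_to_dollars(1_000_000))  # 1 ms SSD read: 2500000.0
```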

Another point worth mentioning here is that the further we get from the CPU, the higher the risk that something will go wrong. And so many things can go wrong with a computer program.

What to do with this information

The purpose of all of this is to give you the broader picture when you design and/or implement your programs. Sometimes, mischievous traps are hidden behind the most benign database call. Be mindful of where you spend your resources, or you might find yourself running out of budget too soon.

What is computational complexity?

Fractal - computational complexity

Complexity analysis leans towards the more theoretical side of computer science. Ironic, I know, given this site’s motto; however, bear with me, it might just be worth it in the end.

Allow me to answer the first question you should have: what exactly is computational complexity? I’m glad you asked! Computational complexity is one of the measuring sticks we use to compare different solutions, in an attempt to decide which one is the better choice.

What are we measuring?

Our goal is to decide which solution is better. That usually means: how fast does the algorithm do its job? The problem with this approach is that computational speed is influenced by both the algorithm’s design and the hardware it runs on. It wouldn’t be any fun to always decide that the best algorithm is the one that runs on the fastest computer, right? The good news is that a better algorithm will always run faster if the input is large enough. In classic fable terms: any slow turtle can beat the hare if they run on different tracks (in this analogy, the animal is the computer and the race track is the solution).

This sounds hard – How do we measure this?

Counting all the operations an algorithm makes is borderline impossible, so instead we try to get a feel for how many operations we need for a given input. We have to settle for an approximation; the upper bound of that approximation is the famous big O notation.

To get a simple form of the big O notation, you can do this: if you have an input of n elements and your program does, let’s say, 45 * n + 55 operations, you ignore the constants and keep only the biggest term involving n. So the big O notation for our algorithm would be O(n). This means that if you double the input, your program will run for about twice as long.
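A quick Python sketch of that simplification, using the hypothetical 45 * n + 55 operation count from above:

```python
def operations(n):
    # Hypothetical operation count from the article: 45*n + 55.
    return 45 * n + 55

# For large n the constants stop mattering: doubling the input
# roughly doubles the work, which is the signature of O(n) growth.
ratio = operations(2_000_000) / operations(1_000_000)
print(round(ratio, 4))  # ≈ 2.0
```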

More so: when you add two solutions expressed in big O notation (as in solving two problems one after the other in your program), the bigger one wins. That is, O(n^2) + O(n) = O(n^2). If the two are equal, you can pick either one. This is a quick and dirty way of getting to a solution, and it tends to work in practice when dealing with algorithms. There are times when the bigger term is not obvious; you then have to go back to basics and find a formula that serves as an upper bound for both terms.

Multiplication happens when you solve a different problem on every step of your algorithm. It works pretty much as expected: O(n) * O(n^2) = O(n^3).
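A minimal sketch of both rules in Python: a linear pass followed by a pair of nested loops. The addition rule says the whole thing is O(n) + O(n^2) = O(n^2), and the nested loops themselves illustrate the multiplication rule, O(n) * O(n) = O(n^2):

```python
def count_ops(n):
    ops = 0
    # O(n) pass; added to the O(n^2) part below, the bigger term wins.
    for i in range(n):
        ops += 1
    # Nested loops: an O(n) body executed n times gives O(n) * O(n) = O(n^2).
    for i in range(n):
        for j in range(n):
            ops += 1
    return ops

print(count_ops(10))   # 10 + 100 = 110
print(count_ops(100))  # 100 + 10000 = 10100, growth dominated by n^2
```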

If you want to read more about the subject, you can check out the Wikipedia article on big O notation.

Time for some examples

Let’s take some examples and see what their computational complexity is.


O(1) – constant time

The execution time remains the same, regardless of the input size.

  • Basic arithmetic operations (addition, subtraction, etc.)
  • Accessing elements in an array by index
  • Searching for elements in a hash map / dictionary
  • Searching for an object in a database using an indexed column
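As a minimal Python illustration (the container contents are made up), an index access and a dictionary lookup both take constant (amortized) time, no matter how large the container is:

```python
prices = {"coffee": 1, "stapler": 7}
values = [10, 20, 30]

# Both lookups take (amortized) constant time regardless of container size.
print(values[1])          # 20
print(prices["stapler"])  # 7
```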

O(log(n)) – logarithmic time

If you can’t design anything that works in O(1), this is usually the next best thing. Doubling the input increases the run time by just one unit. A pretty sweet deal, I’d say. Some common algorithms are:

  • Searching an element in a sorted list
  • Insert / delete operations on a binary tree / heap
  • Raising to a power (exponentiation by squaring)
    • Theoretically this is not O(log(n)), because the size of the result can grow linearly with the input size. However, I still think it’s a neat algorithm worth mentioning, and it behaves quite well in practice.
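Binary search over a sorted list is the textbook example; here is a minimal Python sketch:

```python
def binary_search(sorted_items, target):
    # Each iteration halves the search range, so a list of n elements
    # is resolved in about log2(n) steps.
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid
        if sorted_items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1  # not found

print(binary_search([1, 3, 5, 7, 9, 11], 7))  # 3
```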

O(n) – linear time

The execution time grows with the input size: doubling the input size doubles the execution time. Not a great deal, but it could work.

  • Searching for an element in an unordered list
  • Searching through a database table on a non-indexed column
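A minimal Python sketch of a linear search through an unordered list:

```python
def linear_search(items, target):
    # Worst case inspects every element exactly once: O(n).
    for index, value in enumerate(items):
        if value == target:
            return index
    return -1  # not found

print(linear_search([4, 8, 15, 16, 23, 42], 23))  # 4
```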

O(n*log(n)) – linearithmic time

These are slightly less efficient than O(n), but they still offer pretty good performance.

  • Sorting – most efficient general-purpose sorting algorithms
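Merge sort is a classic example; here is a minimal Python sketch:

```python
def merge_sort(items):
    # Splitting halves the problem (~log2(n) levels) and merging at each
    # level touches every element once, hence O(n*log(n)) overall.
    if len(items) <= 1:
        return items
    mid = len(items) // 2
    left = merge_sort(items[:mid])
    right = merge_sort(items[mid:])
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    return merged + left[i:] + right[j:]

print(merge_sort([5, 2, 9, 1, 5, 6]))  # [1, 2, 5, 5, 6, 9]
```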

O(n^2) – quadratic time

The execution time grows quadratically with the input size. Therefore, with double the input, the program takes four times as long to finish.

I’m adding all the polynomial algorithms to this bucket (O(n^x), regardless of x). They are pretty similar, so it’s not exactly cheating. Just go with it!

  • Generating pairs of elements
  • Inefficient sorting (bubble sort)
  • Joining tables
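Generating all pairs is a handy mental model for O(n^2); a minimal Python sketch:

```python
def all_pairs(items):
    # Two nested loops over n items produce about n^2 / 2 pairs: O(n^2).
    pairs = []
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            pairs.append((items[i], items[j]))
    return pairs

print(all_pairs(["a", "b", "c"]))  # [('a', 'b'), ('a', 'c'), ('b', 'c')]
```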

O(n!) – factorial time

These are exploration algorithms: you get the input and look at all possible solutions to see which ones work.

I’m also including exponential time, like O(2^n), here. The execution time grows so fast that on an average computer you can usually handle an input size only in the low tens (i.e. 30–40). Avoid these if you can.

  • Backtracking
  • Traveling salesman problem (brute force)
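A brute-force traveling salesman sketch in Python (the distance matrix is hypothetical); with (n-1)! candidate tours, even small inputs get expensive fast:

```python
from itertools import permutations

def shortest_tour(distances):
    # Brute force: try every ordering of the cities after city 0.
    # n cities give (n-1)! tours, so the work explodes factorially.
    cities = range(1, len(distances))
    best = float("inf")
    for order in permutations(cities):
        tour = (0,) + order + (0,)  # start and end at city 0
        length = sum(distances[a][b] for a, b in zip(tour, tour[1:]))
        best = min(best, length)
    return best

# Hypothetical symmetric distance matrix for 4 cities.
d = [
    [0, 2, 9, 10],
    [2, 0, 6, 4],
    [9, 6, 0, 3],
    [10, 4, 3, 0],
]
print(shortest_tour(d))  # 18
```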

Disclaimer and other notations

The big O notation is a simple way to provide an upper bound on the growth rate of the execution time. In most cases that is enough; however, there are other notations you can use if the situation demands it.

  • Ω – big omega – the execution time grows at least as fast as the notation; it’s a lower bound on the execution time.
  • Θ – big theta – the algorithm is bounded by both big O and big omega. This translates into: the execution time grows exactly as fast as the notation.

We also have “little” variants of all these notations (little o, little omega). I’m not going to get into any details, but I just wanted to let you know they exist.