Scheduling – Come one step closer to full productivity

Productivity plan


The unicorn of office days: having a productive day, each day, every day. It’s an admirable goal. It’s a goal we all strive to achieve. A step in that direction is having an efficient schedule. I’m here to help. Let’s get cracking!

Get the data

Step one in solving any problem is defining it. Try to keep a list of the tasks that need to be done within a given period; I usually find a week works best. Use whatever you want to collect that list of tasks. Fair warning, this might take some time and some trial and error to get used to. Some apps that I find useful when collecting tasks are:

  • Todoist – I use this to keep track of small tasks, repetitive tasks, and reminders. I also use it as an idea bank, to park ideas until I get back to them and turn them into something more useful.
  • Trello – I use this to keep track of bigger projects. I find Trello boards offer a nice way to keep on top of multiple ongoing projects. If you’re anything like me, you have your work and then a couple of ongoing personal projects you need to deal with.

Disclaimer: These are the tools I’m using, and they seem to work fine for me. However, I must warn you, finding the right tool is a process. You will most likely need to try a couple of alternatives to find the one that works best for you.

A couple of alternatives might be: Google Tasks, Google Keep, or your favorite calendar app.

Arrange the data

Do not underestimate the power of a well-thought-out schedule. In computer science there is a concept called context switching. It’s the set of operations the CPU needs to perform to move from one task to another. Those operations don’t process the task itself; they’re the overhead of switching. Doing that too often can be a problem: you can end up spending more time switching between tasks than doing actual work. Our brains are not that different. Switching between tasks takes a long time, and it costs a lot of mental effort. Why not try to minimize that?! Group similar tasks together! Have a bunch of meetings you need to schedule? Bundle them all together so they don’t interrupt your other work! Need to do some paperwork for the week? Do all the paperwork in one day. It will probably be a boring day, but once you get that out of the way, you can focus on your other tasks. As a bonus, you no longer have to think about paperwork.

Remember, it’s your schedule and it should be custom tailored to you. You know yourself best. I find that certain times of the day work best for certain types of tasks. For me, morning works best for intense, focused work, so I try to reserve big blocks of time to code early. In the afternoon, I don’t seem to have that much energy left, so I try to do my more mundane tasks: reports, meetings, planning, etc. Try out different things to find out what works best for you!

Give yourself some wiggle room

Things happen. More often than not, those things cause delays in your schedule. This is normal; unknowns cause delays. Keep this in mind when designing your schedule. Meetings run late. Tasks take longer than expected. So many things can go wrong. Part of the purpose of the schedule is to isolate those issues so that they don’t affect other tasks. Some takeaways here: don’t put meetings that can run late right before meetings that cannot start late, and find out what the important tasks are and deal with them early (so that you have a bit of slack if they run late).

Tinker with the schedule

A schedule is not a one-time job. It’s a living creature, it grows, it evolves. It’s aliveee! Every once in a while, review your schedule. Who knows, maybe there is something you can improve. Try out some different ways of scheduling, see which one works best for you. It’s your little world. You can do anything with it.

Do you have any more tips on how to build a nice schedule?

What is computational complexity?

Fractal - computational complexity

Complexity analysis leans on the more theoretical side of computer science. Ironic, I know, given this site’s motto. However, bear with me, it might just be worth it in the end.

Allow me to answer the first question you should have: what exactly is computational complexity? I’m glad you asked! Computational complexity is one of the measuring sticks we use to compare different solutions, in an attempt to decide which one is the better choice.

What are we measuring?

The goal is to decide which solution is better. Usually that means: how fast does the algorithm do its job? The problem with this approach is that computational speed is influenced by both the algorithm’s design and the hardware it runs on. It wouldn’t be any fun to always decide that the best algorithm is the one that runs on the fastest computer, right?! The good news is that a better algorithm will always run faster once the input is large enough. In classic fable terms: any slow turtle can beat the hare if they run on different tracks (in this analogy, the animal is the computer and the race track is the solution).

This sounds hard – How do we measure this?

Counting all the operations the algorithm makes is borderline impossible, so instead we try to get a feel for how many operations we need to do for a given input. You have to settle for an approximation, and the upper bound of that approximation is the famous big O notation.

To get a simple form of the big O notation you can do this: if you have n elements as input and your program does, let’s say, 45 * n + 55 operations, to get the big O value you ignore the constants and keep only the biggest term involving n. So the big O notation for our algorithm would be O(n). This means that if you double the input, your program will run for about twice as long.

More so: when you add two solutions with big O notations (as in, solve two problems one after the other in your program), the bigger one wins. That is, O(n^2) + O(n) = O(n^2). If the alternatives are equal, you can pick either one. This is a quick and dirty way of getting to a solution, which tends to work in practice when dealing with algorithms. There are times when the bigger term is not obvious; you then have to go back to basics and find a formula that serves as an upper bound for both terms.

Multiplication happens when you solve a different problem on every step of your algorithm. It works pretty much as expected, as in O(n) * O(n^2) = O(n^3).
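To make the two rules a bit more concrete, here is a minimal Python sketch (the function names are placeholders I made up for illustration):

    def addition_rule(items):
        # Two problems solved one after the other: O(n) + O(n^2) = O(n^2).
        for x in items:              # first pass: O(n)
            print(x)
        for x in items:              # second pass over all pairs: O(n^2)
            for y in items:
                print(x, y)

    def multiplication_rule(items):
        # A sub-problem solved at every step: O(n) * O(n^2) = O(n^3).
        for x in items:              # outer loop runs n times...
            for y in items:          # ...and each step does O(n^2) work
                for z in items:
                    print(x, y, z)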

If you want to read more about the subject, you can check out the Wikipedia article for big O notation.

Time for some examples

Let’s take some examples and see what their computational complexity is.

O(1)

The execution time remains the same, regardless of the input size.

  • Basic arithmetic operations (addition, subtraction, etc.)
  • Accessing elements in an array by index
  • Searching for elements in a hash map / dictionary
  • Searching for an object in a database using an indexed column
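A few of these in Python, just to make them concrete (the variable names are made up for the example):

    numbers = [10, 20, 30, 40]          # an array / list
    ages = {"alice": 30, "bob": 25}     # a hash map / dictionary

    total = 3 + 4                       # basic arithmetic
    third_element = numbers[2]          # access by index
    bobs_age = ages["bob"]              # dictionary lookup by key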
O(log(n))

If you can’t design anything that works in O(1), this is usually the next best thing. You can double the input and the run time only increases by one unit. Pretty sweet deal, I’d say. Some common algorithms are:

  • Searching for an element in a sorted list (binary search – see the sketch after this list)
  • Insert / delete operations on a binary tree / heap
  • Raising to a power (exponentiation by squaring)
    • Theoretically this is not O(log(n)) because the result size can grow linearly with the input size. However, I still think it’s a neat algorithm and worth mentioning. Furthermore, it behaves quite well in practice.
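As promised, a minimal binary search sketch in Python, assuming the input list is already sorted:

    def binary_search(sorted_items, target):
        # Halve the search range on every step: O(log(n)) comparisons.
        low, high = 0, len(sorted_items) - 1
        while low <= high:
            mid = (low + high) // 2
            if sorted_items[mid] == target:
                return mid               # found it
            if sorted_items[mid] < target:
                low = mid + 1            # target is in the right half
            else:
                high = mid - 1           # target is in the left half
        return -1                        # not in the list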
O(n)

The execution time grows with the input size. Doubling the input size doubles the execution time. Not a great deal, but it could work.

  • Searching for an element in an unordered list (see the sketch below)
  • Searching through a table using a non-indexed column (a full table scan)
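Searching an unordered list really does mean looking at everything; a minimal sketch:

    def linear_search(items, target):
        # Scan the list front to back: O(n), every element may need a check.
        for index, value in enumerate(items):
            if value == target:
                return index
        return -1  # not found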
O(n*log(n))

These are slightly less efficient than O(n), but they still offer pretty good performance.

  • Sorting – most of the efficient general-purpose sorting algorithms (see the merge sort sketch below)
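Merge sort is the textbook example; a minimal sketch (not the fastest possible implementation, just the idea):

    def merge_sort(items):
        # Split in half, sort each half, merge: O(n*log(n)).
        if len(items) <= 1:
            return items
        mid = len(items) // 2
        left = merge_sort(items[:mid])
        right = merge_sort(items[mid:])
        # Merge the two sorted halves.
        merged, i, j = [], 0, 0
        while i < len(left) and j < len(right):
            if left[i] <= right[j]:
                merged.append(left[i])
                i += 1
            else:
                merged.append(right[j])
                j += 1
        merged.extend(left[i:])
        merged.extend(right[j:])
        return merged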
O(n^2)

The execution time grows quadratically with the input size. Therefore, with double the input, the program takes four times as long to finish.

I’m adding to this bucket all the polynomial algorithms (O(n^x), regardless of x). They are pretty similar, so it’s not exactly cheating. Just go with it!

  • Generating pairs of elements (see the sketch after this list)
  • Inefficient sorting (bubble sort)
  • Joining tables
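The pair-generation case is just two nested loops; a minimal sketch:

    def all_pairs(items):
        # Every pair of distinct elements: roughly n*(n-1)/2 of them, so O(n^2).
        pairs = []
        for i in range(len(items)):
            for j in range(i + 1, len(items)):
                pairs.append((items[i], items[j]))
        return pairs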
O(n!)

These are exploration algorithms. You get the input and you look at all possible solutions to see which ones work.

I’m also including here exponential time like O(2^n). The execution time grows so fast that, on an average computer, you can usually only handle an input size in the low tens (i.e. 30-40). Avoid using these if you can.

  • Backtracking
  • Traveling salesman problem (brute force – see the sketch below)
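To get a feel for how fast n! blows up, here is a minimal brute-force sketch of the traveling salesman problem; the distance-matrix layout is an assumption I’m making for the example:

    from itertools import permutations

    def shortest_tour(distance):
        # Try every ordering of the cities: n! tours for n cities.
        # distance[i][j] is assumed to be the cost of going from city i to city j.
        cities = range(len(distance))
        best_tour, best_cost = None, float("inf")
        for tour in permutations(cities):
            cost = sum(distance[tour[i]][tour[i + 1]] for i in range(len(tour) - 1))
            cost += distance[tour[-1]][tour[0]]   # close the loop back to the start
            if cost < best_cost:
                best_tour, best_cost = tour, cost
        return best_tour, best_cost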

Disclaimer and other notations

The big O notation is a simple way to provide an upper bound on the growth rate of the execution time. In most cases that is enough; however, there are other notations that can be used if the situation demands it.

  • Ω – big omega – the execution time grows at least as fast as the given function. It’s a lower bound on the execution time.
  • Θ – big theta – the algorithm is bounded by both big O and big omega. This translates to: the execution time grows exactly as fast as the given function.

There is also a “little” variant (little o, little omega) for each of these notations. I’m not going to get into any details, but I just wanted to let you know it exists.