coding === function(){
  return _joy;
}

Big O - Part 1: Why do we care?

This is a planned seven-part series taking a journey into Big O.

In this series, we will be presenting the ideas we've learned about Big O. These are our own interpretations and our attempt at humanizing Big O.

We encourage you to correct us. Let us know where we’ve gone wrong. Letting us know what we got right would be an added bonus.

Before we started this journey to become better software developers, we gave zero fucks about Big O. We've never even had to discuss it with anyone. About the only times we can recall anyone mentioning Big O is when someone was showing off on Stack Overflow, or the occasional academic mathematician name-dropping it. Obviously, there are rumors all over the internet of big tech firms pulling out the Big O question to weed out candidates. But in the real world of day-to-day development, we cannot recall anyone ever mentioning Big O.

What we have heard about Big O is that it has to do with the running-time performance of our functions. Yet if we were concerned about a function's performance, we would use a performance profiler to identify the hot spots. The problem with this approach is that once we'd identified the hot spots, we didn't understand Big O well enough to determine whether our data structures were the bottleneck, our algorithm was the bottleneck, or both. In the end, we resolved our performance issues through trial and error.

And let's be honest: whenever we did read about Big O, we felt like we had just walked into the middle of a foreign film festival, where the props made no sense and the language was a dialect from another planet.

Why should we care now?

We should care, not because the big tech firms will inevitably ask us about Big O during an interview, but because the big tech firms use the concepts behind Big O in production. They are successful because they understand Big O and the purpose it serves. If we are to be successful, we need to understand the concepts that helped others become successful. And Big O is one of those fundamental concepts.

To begin answering the question of why we should care, we first needed to derive a plain-English definition of Big O. The Wikipedia definition of Big O is:

“...a mathematical notation that describes the limiting behavior of a function when the argument tends towards a particular value or infinity.”

We're far from mathematicians, so at first glance this wasn't very helpful. However, as we read and re-read parts of the article, we were able to derive a less math-term-heavy definition:

“Big O is a notation describing the upper bound of a function's (or method's) run time as its inputs (i.e., method parameters) increase in size.”

So if we have a function whose input can be represented as a set of data, then Big O allows us to graph a forecast of “worst-case” run times as the input data set grows.
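As a minimal sketch of that idea, consider a linear search in TypeScript. The `containsWithCount` helper below is hypothetical (it's not from any library); it counts comparisons so we can see the worst case grow in lockstep with the input size, which is exactly what an O(n) upper bound predicts.

```typescript
// Hypothetical helper: linear-searches an array and reports how
// many comparisons it made along the way.
function containsWithCount(
  items: number[],
  target: number
): { found: boolean; steps: number } {
  let steps = 0;
  for (const item of items) {
    steps++;
    if (item === target) return { found: true, steps };
  }
  // Worst case: the target is absent, so every element was checked.
  return { found: false, steps };
}

// The worst case scales linearly with n — this search is O(n).
const small = Array.from({ length: 1_000 }, (_, i) => i);
const large = Array.from({ length: 2_000 }, (_, i) => i);
console.log(containsWithCount(small, -1).steps); // prints 1000
console.log(containsWithCount(large, -1).steps); // prints 2000
```

Double the data, double the worst-case work. No production data set was needed to forecast that; the notation alone told us how the function would scale.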

Whoa, wait! Looking at this definition, we can measurably determine the scalability of our function. Better yet, we can compare the scalability of several functions and select the correct one for the situation without actually needing tons of data! And since we can essentially graph the input size out to infinity, we can feel more comfortable with our final solution, regardless of whether our customer's data doubles, triples, or quadruples in size. We can say, “Bring it on!”1

Big O == Common Language

Another aspect is that Big O gives software developers a common set of terms and notations for describing an algorithm's efficiency to each other. Think of it as a subset of human language that lets us discuss abstract concepts without a concrete implementation. In other words:

Big O will help us understand whether the algorithm we use to search for data in our application tier will scale and remain efficient as our application's data grows.

In the real world: when we're tasked with keeping track of duplicate items, we could store the items in an Array or a HashSet. Both structures have mechanisms to check whether an item exists, but one will be more scalable as the size of the data increases (the HashSet), while the other will be faster with small amounts of data (the Array).
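That trade-off can be sketched in TypeScript, where the built-in `Set` plays the role of the HashSet. A `Set` membership check is an amortized O(1) hash lookup, while an Array's `includes` scans element by element, O(n); the names below are ours, for illustration.

```typescript
// Duplicate detection using a Set: one O(1) membership check per
// item, so the whole pass is O(n) for n items.
function hasDuplicates(items: string[]): boolean {
  const seen = new Set<string>();
  for (const item of items) {
    if (seen.has(item)) return true; // amortized O(1) hash lookup
    seen.add(item);
  }
  return false;
}

// The Array-based alternative would replace seen.has(item) with
// seenArray.includes(item) — an O(n) scan per item, making the
// whole duplicate check O(n^2) as the data grows.
console.log(hasDuplicates(["a", "b", "c"])); // prints false
console.log(hasDuplicates(["a", "b", "a"])); // prints true
```

With a handful of items the Array version may well win (no hashing overhead), but the Big O comparison tells us which one survives the customer's data doubling and tripling.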

The common language of Big O gives us the vocabulary to have an educated discussion about which structure's algorithm [for determining duplicates] will be better given the amount of data we'll be dealing with over time.

Conclusion - Why we care about Big O

We may never need to formally prove the characteristics of an algorithm with Big O. But we do need to understand the abstract concepts of Big O to make informed decisions about the trade-offs in our design choices.

With Joy!

1 This refers only to the algorithms in place; hardware, data storage, CPU, etc. are another aspect, but our customers have people to manage hardware. We're software.