Big-O is at the heart of the optimization process: it lets you reason about the time and space complexity of your code, meaning how complex an algorithm is and how long it will take to execute as its input grows.
Please note that I'm not a professional engineer, but a humble indie game programming student, so I'll only be scratching the surface of Big-O. It's something I'll be studying more as the need grows with larger games like RPGs, and especially online multiplayer RPGs, where data and variables can potentially be calculated by the millions.
Big-O helps you figure out the efficiency of different problem-solving approaches. Here are five common Big-O classes:
- O(1): Constant time complexity, the best case scenario.
- O(n): Linear time complexity; time grows with the length of your list (linear search).
- O(log n): Logarithmic time complexity; divide & conquer (binary search).
- O(n²): Quadratic time complexity (bubble sort — don't do this!).
- O(n log n): Linearithmic time complexity; divide & conquer plus a linear pass (merge sort).
O(1) indicates that an algorithm takes a constant length of time to run, regardless of the size of the input. Indexing directly into an array is the classic example: grabbing the first element takes the same time whether the array holds ten items or ten million.
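As a quick sketch (the `fingers` list is just made up for illustration), an index lookup in Python is O(1):

```python
# O(1): indexing into a list takes the same time no matter how big the list is.
fingers = ["thumb", "index", "middle", "ring", "pinky"]

def first_item(items):
    """Constant time: a single index operation, regardless of len(items)."""
    return items[0]

print(first_item(fingers))  # thumb
```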
For example, a foreach loop performs a linear search, which may be fine for counting fingers but may not be the best choice for finding one specific intestinal villus.
numberOfIntestinalVilli is an O(n) case, where the algorithm's execution time is proportional to the amount of data. It will take some time to go through several hundred thousand villi one by one!
The best case scenario is when whatever you're searching for is the first item in the array; that would be O(1). The worst case scenario is the very last villus; that would be O(n), where n is the length of the numberOfIntestinalVilli array.
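A minimal sketch of that linear search (the villi list here is a made-up stand-in for the data described above):

```python
# O(n): linear search walks the list one element at a time.
def linear_search(items, target):
    """Return the index of target, or -1 if it isn't found.
    Best case O(1) (target is first); worst case O(n) (target is last)."""
    for i, item in enumerate(items):
        if item == target:
            return i
    return -1

villi = list(range(500_000))  # stand-in for numberOfIntestinalVilli
print(linear_search(villi, 4000))  # 4000
```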
O(log n)
Binary searching may be a better option for large data sets. For example, what if you’re looking for villi #4000 of 500,000?
A binary search algorithm checks the value at the midpoint of the array. If that value equals 4,000, we're done. If 4,000 is smaller than the midpoint value, the algorithm discards the upper half; if it's larger, it discards the lower half. It then repeats the check on the midpoint of the ever-shrinking remaining half until it finds the number.
Binary search is much faster: where a linear search might have needed to check each and every number in an array half a million entries long, binary search finds the target in at most about 19 steps (log₂ 500,000 ≈ 19). This is what O(log n) means.
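The halving process above can be sketched like this (again using a made-up sorted villi list as the data):

```python
def binary_search(sorted_items, target):
    """O(log n): repeatedly halve the search range of a sorted list."""
    low, high = 0, len(sorted_items) - 1
    while low <= high:
        mid = (low + high) // 2
        if sorted_items[mid] == target:
            return mid
        elif sorted_items[mid] < target:
            low = mid + 1   # target must be in the upper half
        else:
            high = mid - 1  # target must be in the lower half
    return -1  # not present

villi = list(range(500_000))  # sorted stand-in data
print(binary_search(villi, 4000))  # 4000
```

Note that this only works because the list is sorted, which is exactly the catch the next section deals with.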
The previous algorithms work great if the elements in the array are sorted, but what if they're not?
One way of sorting data is a bubble sort, which is O(n²) in Big-O notation: it uses nested loops, making roughly n passes of n comparisons each, which is why it's highly inadvisable for large inputs.
Bubble sorting starts at the first element of an array and compares it to the next; if the first is greater, the two get swapped, and we move on to the next pair. When a pass reaches the final element, the largest value has "bubbled" to the end, and we do it all again from the start until no more swaps are needed. It's a bit of a brute-force approach, but it gets the job done.
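Here's a small sketch of those passes in code:

```python
def bubble_sort(items):
    """O(n^2): nested passes; each pass bubbles the largest remaining
    element to the end of the unsorted portion."""
    items = list(items)  # sort a copy, leave the input alone
    n = len(items)
    for i in range(n - 1):
        swapped = False
        for j in range(n - 1 - i):
            if items[j] > items[j + 1]:
                items[j], items[j + 1] = items[j + 1], items[j]
                swapped = True
        if not swapped:  # no swaps this pass: already sorted, stop early
            break
    return items

print(bubble_sort([5, 1, 4, 2, 3]))  # [1, 2, 3, 4, 5]
```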
O(n log n)
O(n log n), you'll recognize, is the linear cost multiplied by the divide-and-conquer cost. Merge sort works this way: we halve arrays recursively and then stitch them back together in order. While not as quick as O(log n), if you have unsorted data it's much better than bubble sorting at O(n²).
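The "halve, then stitch back together" idea looks like this in code:

```python
def merge_sort(items):
    """O(n log n): split in half (log n levels of splitting),
    then merge each level back together in O(n)."""
    if len(items) <= 1:
        return list(items)
    mid = len(items) // 2
    left = merge_sort(items[:mid])
    right = merge_sort(items[mid:])
    # Stitch the two sorted halves back together in order.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])   # whatever remains is already sorted
    merged.extend(right[j:])
    return merged

print(merge_sort([5, 1, 4, 2, 3]))  # [1, 2, 3, 4, 5]
```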
Finally, if we were to sort these methods by performance and speed, they would be:
- O(1): Best case scenario, where the item is the first in the array or list.
- O(log n): Faster than O(n log n) because there's no linear pass.
- O(n): A straight linear search.
- O(n log n): Faster than bubble sorting because of divide & conquer.
- O(n²): Bubble sorting; the work grows quadratically with input size.
While this is an extremely simple introduction to Big-O, I hope it's given you an idea of why it's important, and I encourage you to investigate it further, as I will!