The series post, which covers more of the things formally trained programmers know, can be found here.
Big O Notation
This one always confused me and always seemed to be out of my reach, yet it really is simple once I actually sat down and worked through it. Let's start with the syntax:
O(n)
The "O" just is a indicator that we using big O notation and the n is the cost. Cost could mean a variety of things, memory, cpu cycles but mostly people think of it as the number of times the code will execute. The best cost would be code that never runs (i.e. `O(0)`) but that likely has no value.
To help explain it, let's look at a simple example:
```csharp
Console.WriteLine("Hello 1");
```
The cost for that is 1, so we could write `O(1)`. If we put that in a for loop like this:
```csharp
for (var counter = 0; counter < 10; counter++)
{
    Console.WriteLine("Hello " + counter);
}
```
The cost would be 10, so we could write `O(10)`.
n
Rather than having to be explicit with a number (like 10 above) we can use a shorthand notation. The common one is `n`, which means the code will run once per item. For our for loop example above, that means it could be written as `O(n)`, so regardless of whether we loop 10 times or 100 times the relative cost is the same and can be referenced the same way. From this point on it really is just about adding math to it.
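To make the "once per item" idea concrete, here is a minimal sketch of my own (the `PrintAll` method and the list contents are made up for illustration): the work grows in step with the number of items, so it is `O(n)` whether the list holds 3 entries or 3 million.

```csharp
using System;
using System.Collections.Generic;

class Program
{
    // One WriteLine per item, so the cost is O(n) where n = names.Count:
    // 10 items cost 10 writes, 10,000 items cost 10,000 writes.
    static void PrintAll(List<string> names)
    {
        foreach (var name in names)
        {
            Console.WriteLine("Hello " + name);
        }
    }

    static void Main()
    {
        PrintAll(new List<string> { "Ada", "Grace", "Linus" });
    }
}
```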
If we were to have a loop inside a loop as follows, which will run 100 times (10 × 10), we could write this as `O(n<sup>2</sup>)`.
```csharp
var n = 10;
for (var outerCounter = 0; outerCounter < n; outerCounter++)
{
    for (var counter = 0; counter < n; counter++)
    {
        Console.WriteLine("Hello " + counter);
    }
}
```
The other common one used with big O notation is `log`, i.e. logarithm, which could be written like this: `O(log n)`. In this case the cost per item gets smaller (relative to earlier items) as we add more items.
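A classic `O(log n)` case is binary search over a sorted array: each comparison halves the remaining range, so doubling the number of items only adds one extra step. This example is my own sketch rather than anything tied to a specific library.

```csharp
using System;

class BinarySearchDemo
{
    // Each loop iteration halves the remaining search range, so the number
    // of iterations is roughly log2(n) for an array of n elements.
    static int IndexOf(int[] sorted, int target)
    {
        var low = 0;
        var high = sorted.Length - 1;
        while (low <= high)
        {
            var mid = low + (high - low) / 2;
            if (sorted[mid] == target) return mid;
            if (sorted[mid] < target) low = mid + 1;
            else high = mid - 1;
        }
        return -1; // not found
    }

    static void Main()
    {
        var numbers = new[] { 1, 3, 5, 7, 9, 11, 13, 15 };
        Console.WriteLine(IndexOf(numbers, 9)); // prints 4
    }
}
```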
Further reading
The best guide I found was from Rob Bell.