Big O Notation: Understanding the Efficiency of Algorithms

Preview

In the old days, computers did not have much memory, so programs needed to be efficient and fast: they could not consume too many resources, whether memory or time, or the system would grind to a halt.

Today, most computers and devices have far more memory than they used to, but memory is still a limited resource.

Knowing that memory is a limited resource, programmers must always think about how to write code efficiently and avoid memory overflow. But that raises a question: how do we know whether a program runs fast, manages memory well, or is efficient at all?

That is why Big O notation exists.

What is Big O Notation?

Big O is a way to measure the efficiency of a program or algorithm that does not depend on hardware, such as faster processors or more memory.

Example :

Imagine that you have to write a program that searches for an item in a very large array, say 1,000,000 elements. A computer with high specifications may handle this easily, but a low-specification machine may not be capable of it. The deciding factor is how fast the algorithm runs to find that item in the large array.
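To make this concrete, here is a minimal sketch in Python (the function name and data are hypothetical, not from the original article) of a linear search over a large list. The number of comparisons grows with the size of the input, no matter how fast the machine is:

```python
def linear_search(items, target):
    """Scan the list from the front; the worst case checks every element."""
    for index, value in enumerate(items):
        if value == target:
            return index
    return -1  # target not found

# Searching 1,000,000 elements may require up to 1,000,000 comparisons,
# regardless of the computer's specifications.
big_list = list(range(1_000_000))
print(linear_search(big_list, 999_999))  # worst case: target is the last element
```

A faster CPU finishes each comparison sooner, but the *number* of comparisons is a property of the algorithm itself, which is exactly what Big O describes.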

Big O measures the efficiency of a program or algorithm independent of the computer's available resources. This is also known as running time analysis.

Alright, let's look at the Big O complexity chart.

You can see the article about the chart here: https://biercoff.com/big-o-complexity-cool-cheat-sheet/

The idea here is to measure how efficiently a program runs. The chart labels the complexity classes as horrible, bad, fair, good, and excellent. On the X-axis we have the number of elements, and on the Y-axis we have the number of operations.

What is Time Complexity?

The main thing to remember here is that running time analysis, or time complexity, is an approximation of how long an algorithm will take to process some input, just like what I said above. Time complexity describes the efficiency of the algorithm by the magnitude of its operations. Put simply: as the input gets bigger, how does that growth translate into operations?
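One way to see this translation is to count operations directly as the input grows. The sketch below (hypothetical Python, using a nested loop as the example algorithm) counts one operation per inner iteration; for an input of size n it performs n * n operations, so doubling the input quadruples the work:

```python
def count_operations(n):
    """Count the operations performed by a nested loop over n elements (O(n^2))."""
    ops = 0
    for i in range(n):
        for j in range(n):
            ops += 1  # one unit of work per inner iteration
    return ops

# Watch the operation count grow much faster than the input size.
for n in (10, 100, 1000):
    print(n, count_operations(n))
```

This is the relationship the complexity chart plots: elements on one axis, operations on the other.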

The conclusion is that the fewer operations an algorithm performs, the faster it will run. In other words, if we have a lot of elements as input and the algorithm still needs only a few operations to get the result, that is very good.
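As a closing illustration (a sketch with hypothetical names, not from the original article), here are two Python functions that compute the same result, the sum 1 + 2 + ... + n, with very different operation counts: a loop that does one addition per element, and a closed-form formula that does a constant amount of work no matter how large n is:

```python
def sum_loop(n):
    """O(n): performs one addition per element."""
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

def sum_formula(n):
    """O(1): a constant number of operations, regardless of n."""
    return n * (n + 1) // 2

# Both give the same answer, but the formula takes the same tiny amount
# of work whether n is 10 or 10,000,000.
print(sum_loop(1000), sum_formula(1000))
```

Choosing the approach with fewer operations is exactly what Big O analysis helps us do.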