Time Complexity of Algorithms


The time complexity of an algorithm describes how long the algorithm takes to run to completion as a function of its input size. We express the time complexity of algorithms using big O notation.

The time complexity of an algorithm is determined by counting the basic operations it performs. The running time can vary because those operations may behave differently for different inputs; the slowest case over all inputs of a given size is called the worst-case time complexity.


Calculating time complexity:

The time complexity is calculated using big O notation. When we use big O notation, all constants and constant factors are dropped, so we can relate the running time directly to N. For example, consider the following statement:

statement;

This single statement has constant time complexity: it runs in a fixed amount of time regardless of the value of N, so it is O(1). Now consider the following loop:
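As a concrete sketch (a hypothetical example, not taken from the text above): indexing into an array is a single operation whose cost does not depend on how many elements the array holds.

```c
#include <stddef.h>

/* Returns the element at a given index. This is a single operation
   whose cost does not depend on the size of the array -- O(1). */
int element_at(const int *arr, size_t index)
{
    return arr[index];
}
```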

for (j = 1; j <= N; j++)
{
    statements;
}

In the above example the time complexity depends on the value of N: the number of times the loop runs is determined by N, so the running time is directly proportional to N. When the value of N doubles, the running time also doubles. In other words, the time complexity of the above loop is linear, or O(N). Now consider the following nested loops:
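A loop like this can be made runnable; the following sketch (a hypothetical example, with names of our choosing) sums an array, doing a constant amount of work per element, so its running time is proportional to n.

```c
#include <stddef.h>

/* Sums the elements of an array. The loop body runs once per element,
   so the running time grows in direct proportion to n -- O(N). */
long sum_array(const int *arr, size_t n)
{
    long total = 0;
    for (size_t j = 0; j < n; j++)
    {
        total += arr[j];
    }
    return total;
}
```

Doubling the length of the array doubles the number of additions performed, which is exactly what "linear" means here.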

for (j = 1; j <= N; j++)
{
    for (x = 1; x <= N; x++)
    {
        statements;
    }
}

In the above code the running time depends on two loops. The inner loop runs N times for each of the N iterations of the outer loop, so the running time is directly proportional to N * N. When the value of N doubles, the running time increases by a factor of four. Therefore, the time complexity of the above code is quadratic, or O(N^2).
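The nested loops above can be sketched as a runnable function (a hypothetical example with names of our choosing) that simply counts how many times the inner body executes.

```c
#include <stddef.h>

/* Runs one loop inside another. The inner loop body executes N times
   for each of the N outer iterations, so it runs N * N times in
   total -- O(N^2). Returns that count. */
long count_pairs(size_t n)
{
    long count = 0;
    for (size_t j = 0; j < n; j++)
    {
        for (size_t x = 0; x < n; x++)
        {
            count++;
        }
    }
    return count;
}
```

Note that count_pairs(6) performs four times as many inner-body executions as count_pairs(3), illustrating the quadratic growth described above.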

An algorithm has logarithmic time complexity if it repeatedly divides its input in half, for example when searching for a particular value in a sorted collection. Such an algorithm has a running time proportional to the number of times N can be halved before reaching 1, which is log2(N).
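The classic example of this pattern is binary search. The following sketch (a hypothetical implementation, assuming a sorted array) halves the search range on every iteration, so it performs at most about log2(N) comparisons.

```c
#include <stddef.h>

/* Binary search over a sorted array. Each comparison halves the
   remaining range, so at most about log2(N) iterations run -- O(log N).
   Returns the index of key, or -1 if it is not present. */
long binary_search(const int *arr, size_t n, int key)
{
    size_t lo = 0, hi = n;              /* search range is [lo, hi) */
    while (lo < hi)
    {
        size_t mid = lo + (hi - lo) / 2;
        if (arr[mid] == key)
            return (long)mid;
        else if (arr[mid] < key)
            lo = mid + 1;               /* discard the lower half */
        else
            hi = mid;                   /* discard the upper half */
    }
    return -1;
}
```

Doubling the size of the array adds only one extra halving step, which is why logarithmic algorithms scale so well.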

In other words, an algorithm is linear if it makes a single pass over the input, doing a constant amount of work per item. If the algorithm processes the items in two dimensions, comparing every item against every other item, it is quadratic, and if it repeatedly divides the input into two equal parts, its running time is logarithmic.


Types of Notations for Time Complexity:

The following are some of the notations that are used for time complexity:

Big Oh:

It denotes an upper bound: the function grows slower than or at the same rate as the given expression.

Big Omega:

It denotes a lower bound: the function grows faster than or at the same rate as the given expression.

Big Theta:

It denotes a tight bound: the function grows at the same rate as the given expression.

Little Oh:

It denotes a strict upper bound: the function grows strictly slower than the given expression.

Little Omega:

It denotes a strict lower bound: the function grows strictly faster than the given expression.


Understanding Notations of Time Complexity with Example:

A function is O of an expression if it grows slower than or at the same rate as that expression. A function is Omega of an expression if it grows faster than or at the same rate as that expression. A function is Theta of an expression if it satisfies both the O bound and the Omega bound, that is, it grows at exactly the same rate.
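These informal descriptions correspond to the standard formal definitions, which can be written as follows:

```latex
f(n) = O(g(n))      \iff \exists\, c > 0,\ n_0 : 0 \le f(n) \le c \cdot g(n) \ \text{for all } n \ge n_0

f(n) = \Omega(g(n)) \iff \exists\, c > 0,\ n_0 : 0 \le c \cdot g(n) \le f(n) \ \text{for all } n \ge n_0

f(n) = \Theta(g(n)) \iff f(n) = O(g(n)) \ \text{and} \ f(n) = \Omega(g(n))
```

Here c is a constant factor and n_0 is the point beyond which the bound must hold; big O ignores small inputs and constant multipliers, which is why we drop constants when calculating time complexity.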