Complexity of Algorithms

Abhi Ram
4 min read · Jan 13, 2022


What is an Algorithm?

A set of instructions designed to perform a certain task is called an algorithm.

Major Factors in Algorithms Design

  1. Correctness
  2. Algorithm Efficiency

Nowadays it is very important to write code that is both correct and computationally efficient. There are usually multiple ways to solve a problem or perform a certain operation in software, so one should learn how to compare different algorithms and choose the best one, and that choice depends on efficiency.

Highly efficient algorithms use fewer system resources and run as fast as possible. The efficiency of an algorithm depends on its complexity.

What is Complexity?

The complexity of an algorithm is a function that describes the running time and memory the algorithm requires in terms of the size of its input.

Types of Complexities

  1. Time Complexity
  2. Space Complexity

1. Time Complexity

Time Complexity of an algorithm is the amount of time taken by an algorithm to run as a function of the length of the input.

2. Space Complexity

Space Complexity of an algorithm is the amount of space or memory taken by an algorithm to run as a function of the length of the input.

Time and space complexity also depend on factors such as hardware, operating system, and processor, but these factors are not considered when analyzing an algorithm.

Complexity Analysis

  1. Best Case
  2. Average Case
  3. Worst Case

Best, Worst and Average Case

The complexity of an algorithm is usually evaluated for the worst case (the most unfavorable scenario). This means that in the average case it may run faster, but in the worst case it runs with the evaluated complexity and no slower.

Let’s take an example: searching in array.

In the worst case the key is at the last position or missing altogether, so we have to check all the elements in the array; hence the worst-case complexity is linear. In the best case we are lucky and find the element at the first position. In the average case we can expect to check about half of the elements before finding the one we are looking for.

The average case is also linear, because constant factors (here, the factor of one half) are not taken into account when evaluating complexity. The best case is constant, because we make only one comparison and immediately find the element.
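To make this concrete, here is a minimal linear search in Python (a sketch written for this discussion, not part of the original article); the comments mark where the best and worst cases occur.

def linear_search(arr, key):
    # Return the index of key in arr, or -1 if it is absent.
    for i, value in enumerate(arr):
        if value == key:
            return i        # best case: key is at index 0, only one step
    return -1               # worst case: key is absent, all N elements were checked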

The complexity of an algorithm can take many forms: constant, logarithmic, linear, quadratic, cubic, exponential, and so on. It is simply the order of growth of the number of steps needed to complete the algorithm as the input grows. For this reason, the complexity of an algorithm is often referred to as its "running time".

Typical Complexities of an Algorithm

  1. Constant Complexity
  2. Logarithmic Complexity
  3. Linear Complexity
  4. Quadratic Complexity
  5. Cubic Complexity
  6. Exponential Complexity

1. Constant Complexity

It imposes a complexity of O(1). If an algorithm’s time complexity is constant, it means that it will always run in the same amount of time, no matter the input size.
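As a small illustrative sketch (my example, not from the original article), both operations below take the same amount of time no matter how large the list or dictionary is:

def first_element(arr):
    # O(1): a single index access, independent of len(arr)
    return arr[0]

def lookup(prices, name):
    # O(1) on average: a dictionary lookup does not scan all entries
    return prices.get(name)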

2. Logarithmic Complexity

It imposes a complexity of O(log N). The algorithm executes on the order of log N steps, typically by halving the remaining input at every step.
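Binary search on a sorted array is the classic logarithmic example; here is a minimal Python sketch (my own helper, not from the original article):

def binary_search(sorted_arr, key):
    # O(log N): the search range is halved on every iteration
    lo, hi = 0, len(sorted_arr) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_arr[mid] == key:
            return mid
        elif sorted_arr[mid] < key:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1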

3. Linear Complexity

It imposes a complexity of O(N). The algorithm takes a number of steps proportional to the input size, performing a constant amount of work for each of the N elements.
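Summing a list is a simple linear-time example (an illustrative sketch, not from the original article):

def total(arr):
    # O(N): each of the N elements is visited exactly once
    s = 0
    for value in arr:
        s += value
    return s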

4. Quadratic Complexity

It imposes a complexity of O(N^2). For an input of size N, the algorithm performs on the order of N^2 operations, typically because of two nested loops over the elements.
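A typical quadratic pattern is two nested loops, for example checking whether a list contains a duplicate by comparing every pair of elements (an illustrative sketch):

def has_duplicate(arr):
    # O(N^2): the nested loops compare roughly N*(N-1)/2 pairs
    n = len(arr)
    for i in range(n):
        for j in range(i + 1, n):
            if arr[i] == arr[j]:
                return True
    return False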

5. Cubic Complexity

It imposes a complexity of O(N^3). For an input of size N, the algorithm executes on the order of N^3 steps, typically because of three nested loops.
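Naive multiplication of two N x N matrices is the standard cubic example, with three nested loops (an illustrative sketch, assuming square matrices of the same size):

def matrix_multiply(a, b):
    # O(N^3): three nested loops over i, j and k
    n = len(a)
    result = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for k in range(n):
                result[i][j] += a[i][k] * b[k][j]
    return result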

6. Exponential Complexity

It imposes a complexity such as O(2^N) or O(N!). For N elements, the number of operations grows exponentially (or even faster) with the input size, so even modest inputs quickly become intractable.
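The naive recursive Fibonacci function is a common exponential example (an illustrative sketch): each call spawns two further calls, so the total number of calls grows roughly like 2^N.

def fib(n):
    # roughly O(2^N): fib(n-1) and fib(n-2) are recomputed over and over
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)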

Notations for complexities

  1. Big O notation(O)
  2. Theta notation(θ)
  3. Omega notation(Ω)

1. Big O notation(O)

[Figure: comparison of the growth rates of different algorithms in Big O notation]

For more complex algorithms, finding the exact complexity can be difficult. However, it is often easier to show that the running time will never exceed a certain bound. Instead of determining exactly how fast the algorithm runs, we only need to find such a bound. Big O notation is used to express an upper bound on the running time of an algorithm, typically for its worst case.
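For reference, the standard textbook definition (not stated in the original article) can be written as:

f(n) = O(g(n)) \iff \exists\, c > 0,\ n_0 \ge 0 \ \text{such that}\ 0 \le f(n) \le c \cdot g(n) \ \text{for all}\ n \ge n_0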

2. Theta notation(θ)

[Figure: graphical representation of Theta notation]

Theta notation encloses the function from above and below. Because it represents both an upper and a lower bound on the running time, it describes a tight bound: the running time grows at the same rate as the given function, up to constant factors.
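The corresponding standard definition (again, not stated in the original article):

f(n) = \Theta(g(n)) \iff \exists\, c_1, c_2 > 0,\ n_0 \ge 0 \ \text{such that}\ c_1 \cdot g(n) \le f(n) \le c_2 \cdot g(n) \ \text{for all}\ n \ge n_0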

3. Omega notation(Ω)

[Figure: graphical representation of Omega notation]

Omega notation represents a lower bound on the running time of an algorithm: the algorithm takes at least that much time. It is therefore often associated with the best-case running time.
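And the matching standard definition:

f(n) = \Omega(g(n)) \iff \exists\, c > 0,\ n_0 \ge 0 \ \text{such that}\ 0 \le c \cdot g(n) \le f(n) \ \text{for all}\ n \ge n_0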
