If you work on algorithm design and implementation, you have probably come across the term Computational Complexity. Computational complexity theory is a branch of the theory of computation, itself a sub-field of theoretical computer science. It focuses on classifying computational problems, such as those arising in algorithm design, according to their inherent difficulty, and on relating these classes of problems to one another. Before going further, we should define what a computational problem actually is: a computational problem is any problem that we want to solve using a computer and that can be solved by mechanical or mathematical means, such as an algorithm.
In computational complexity, a problem is considered inherently difficult if solving it requires significant resources regardless of the algorithm used. This is established by studying the problem with mathematical models of computation and quantifying the resources, such as time and storage, needed to solve it. Although time and storage are the most common criteria for classifying a problem as inherently difficult, other complexity measures are also used, such as the amount of communication, the number of gates in a circuit, and the number of processors in parallel computing. One of the main goals of computational complexity theory is to classify which problems current computers can and cannot solve in practice.
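As a hedged illustration (not taken from the original text), one simple way to "quantify" the time resource is to count basic operations as a function of the input size n. The sketch below counts the comparisons made by linear search and by binary search on a sorted list, showing how the same problem can demand very different amounts of a resource depending on the algorithm chosen.

```python
def linear_search_comparisons(items, target):
    """Count comparisons made by a straightforward linear scan."""
    comparisons = 0
    for item in items:
        comparisons += 1
        if item == target:
            break
    return comparisons


def binary_search_comparisons(items, target):
    """Count comparisons made by binary search on a sorted list."""
    comparisons = 0
    lo, hi = 0, len(items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        comparisons += 1
        if items[mid] == target:
            break
        if items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return comparisons


n = 1_000_000
data = list(range(n))
print(linear_search_comparisons(data, n - 1))   # roughly n comparisons
print(binary_search_comparisons(data, n - 1))   # roughly log2(n) comparisons
```

Counting operations like this, rather than measuring wall-clock time on one machine, is what lets complexity theory make statements that hold irrespective of the hardware used.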
Take, for example, two different algorithms for generating a Sudoku board. One uses a brute-force approach, checking every possible combination of values for the individual cells until it finds one that validly fills the whole board; the other uses recursion to build the board cell by cell. Since the brute-force algorithm will, on average, take far more time than the recursive one, it is the more expensive way to solve this problem. Note that this tells us about the cost of a particular algorithm rather than the inherent difficulty of the problem itself, which depends on the best algorithm available.
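The following is a minimal sketch, not the exact algorithms the comparison above refers to: it fills a 9x9 grid by recursive backtracking. A pure brute-force generator would instead enumerate up to 9**81 complete grids and validate each one, which is why it is so much slower than the recursive approach.

```python
import random


def is_valid(grid, row, col, value):
    """Check whether placing `value` at (row, col) keeps the grid consistent."""
    if value in grid[row]:
        return False
    if any(grid[r][col] == value for r in range(9)):
        return False
    br, bc = 3 * (row // 3), 3 * (col // 3)
    return all(grid[br + r][bc + c] != value
               for r in range(3) for c in range(3))


def fill(grid, cell=0):
    """Recursively fill the grid cell by cell, backtracking on dead ends."""
    if cell == 81:
        return True
    row, col = divmod(cell, 9)
    for value in random.sample(range(1, 10), 9):  # random order -> varied boards
        if is_valid(grid, row, col, value):
            grid[row][col] = value
            if fill(grid, cell + 1):
                return True
            grid[row][col] = 0  # undo the placement and try the next candidate
    return False


board = [[0] * 9 for _ in range(9)]
fill(board)
for row in board:
    print(row)
```

Because backtracking abandons a partial board as soon as a rule is violated, it explores only a tiny fraction of the search space that the brute-force approach would have to examine.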