Algorithmic information theory sits at the intersection of information theory and computer science. It rests on the idea that every object can be represented as a string, and that the information content of a string is the length of its shortest compressed description. In algorithmic terms, that compressed description is a program: when the program runs, it reproduces the string in its entirety.
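The point can be illustrated with a short Python sketch. The true shortest description is not computable, so this example uses `zlib` compression as a crude, computable stand-in (an assumption of this sketch, not part of the theory itself): a highly regular string admits a description far shorter than the string.

```python
import zlib

# A highly regular string: "ab" repeated 5,000 times (10,000 characters).
# Its shortest description is essentially the short program: print("ab" * 5000).
regular = "ab" * 5000

# zlib gives a computable upper bound on the description length.
compressed = zlib.compress(regular.encode())

print(len(regular))     # 10000 characters
print(len(compressed))  # only a few dozen bytes: the string is highly compressible

# Decompressing the short description reproduces the whole string,
# just as running the program would.
assert zlib.decompress(compressed).decode() == regular
```

Here the compressed form plays the role of the program: it is much shorter than the string, yet the full string can be recovered from it.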
The subfield was founded by Ray Solomonoff. While investigating the ideas behind algorithmic probability, he arrived at the fundamental notions on which algorithmic information theory would later be based. In 1960, however, it was not yet a subfield of its own; Solomonoff presented its main ideas only informally in his report on a General Theory of Inductive Inference.
The theory took shape in 1965, when Andrey Kolmogorov independently developed Solomonoff's ideas further. Indeed, algorithmic information theory is best understood through Kolmogorov complexity. Since the theory specifies what makes strings and infinite sequences random, it is worth stating that definition in terms of Kolmogorov complexity. Information is measured as the length of an object's shortest description. More precisely, a string is random when its Kolmogorov complexity, defined as the length of the shortest program that produces the string on a fixed reference universal computer (a Turing machine), is at least as large as the length of the string itself.
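This randomness criterion can be made concrete with a hedged Python sketch. Kolmogorov complexity itself is uncomputable, so the example below again uses `zlib` as a rough upper bound (an assumption for illustration only): a structured string compresses far below its length, while random bytes do not compress at all, matching the criterion that random strings have complexity close to their own length.

```python
import os
import zlib

def compressed_length(data: bytes) -> int:
    """Length of the zlib-compressed form of `data`: a computable
    upper bound on its (uncomputable) Kolmogorov complexity."""
    return len(zlib.compress(data, 9))

n = 10_000
regular = b"ab" * (n // 2)   # highly structured: complexity far below n
random_bytes = os.urandom(n) # incompressible with high probability

print(compressed_length(regular))       # far smaller than n
print(compressed_length(random_bytes))  # close to n (slightly above, due to overhead)
```

By the definition above, `random_bytes` behaves like a random string: no program (approximated here by the compressor) describes it more briefly than simply writing it out.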
Taken together, these ideas show that algorithmic information theory uses the tools of computer science to study the information inherent in objects, addressing fundamental questions about information, computation, and randomness.