I often derive more pleasure from introspecting about computing than from physically interacting with it, and I thought I would attempt to share that pleasure in a subject that otherwise tends to appear painfully dry, without resorting to terms such as uncountable sets, mapping reductions, diagonalization, the pumping lemma, and decidability. For a slightly more in-depth, yet still relatively light read, see this short guide.
The only formal component I really wish to emphasize is the Turing Machine, mainly because I find it elegant, minimalist, and necessary as the most basic yet sufficient computational model. The TM is probably my favorite contraption of mostly theoretical utility. It also carries a cool name.
A Turing Machine models what’s algorithmically possible, although it’s not the only such model. Alonzo Church developed the lambda calculus, which can simulate a TM and vice versa. In fact, as is typical in the evolution of a field, a myriad of other such models were developed by other mathematicians in the same period. I did not find the original publication introducing the Turing Machine, On Computable Numbers by Alan Turing, to be a particularly entertaining read. I never even finished it. Fortunately, the work has undergone multiple clarifications and revisions, accessible in more digestible sources such as Introduction to the Theory of Computation by Michael Sipser.
Many publications on Turing Machines can distance a reader not otherwise theoretically inclined, especially upon encountering some variant of a formal TM tuple definition: M = (Q, Σ, Γ, δ, q0, qa, qr). Actually, the fact that a TM can be defined by a tuple of seven parameters speaks volumes about its sheer simplicity. Imagine a tuple definition of a modern multi-core, multi-level-cache processor with registers, a program counter, redundant secondary storage, RAM, a network interface, and a myriad of other components I’m omitting. Incidentally, there’s a simplified abstract TM-variant model somewhat resembling a modern architecture. Anyway…
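To show just how small those seven parameters are, here is a minimal sketch in Python. The machine itself (a toy that accepts any input starting with ‘1’) is a hypothetical example of mine, not from any particular text, but the seven fields mirror the tuple M = (Q, Σ, Γ, δ, q0, qa, qr) exactly.

```python
from collections import namedtuple

# The seven fields mirror M = (Q, Sigma, Gamma, delta, q0, qa, qr).
TM = namedtuple("TM", ["states", "input_alphabet", "tape_alphabet",
                       "delta", "start", "accept", "reject"])

# A toy machine (hypothetical example): accept any input starting with '1'.
# delta maps (state, symbol) -> (next_state, symbol_to_write, head_move).
toy = TM(
    states={"q0", "qa", "qr"},
    input_alphabet={"0", "1"},
    tape_alphabet={"0", "1", "_"},        # '_' is the blank symbol
    delta={("q0", "1"): ("qa", "1", "R"),
           ("q0", "0"): ("qr", "0", "R"),
           ("q0", "_"): ("qr", "_", "R")},
    start="q0", accept="qa", reject="qr",
)
print(len(toy))  # seven components, and that is the whole machine
```

That one small tuple is the entire machine; everything else is just the tape.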
The simplest TM consists of an infinite tape of cells, each of which can be marked with one of a predefined set of symbols, with the current tape position indicated by a tape head. What makes a TM a sufficient computational model are the states, exactly one of which a TM occupies at any moment, and a transition function, most simply visualized as a finite state machine, that dictates, based on 1) the current tape cell content and 2) the current state: a) what the next state shall be, b) what the new cell value shall be, if the cell is to be modified, and c) whether to move the tape head right or left.
The TM receives its initial input written on the tape, and the computation proceeds one transition at a time until an accept or reject state, also part of the TM definition, is encountered.
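The whole execution loop above fits in a few lines of code. The following is a minimal sketch of my own (the function name, the sparse-tape trick, and the example machine are all my inventions for illustration): a single-tape simulator plus a toy machine that accepts exactly the inputs containing at least one ‘1’.

```python
def run_tm(delta, start, accept, reject, tape_input, blank="_", max_steps=10_000):
    """Step a single-tape Turing Machine until it accepts or rejects.

    delta maps (state, symbol) -> (next_state, write_symbol, move),
    where move is 'L' or 'R'. A missing entry sends the machine to reject.
    """
    tape = dict(enumerate(tape_input))   # sparse tape: unwritten cells are blank
    state, head = start, 0
    for _ in range(max_steps):
        if state == accept:
            return True
        if state == reject:
            return False
        symbol = tape.get(head, blank)
        state, tape[head], move = delta.get((state, symbol), (reject, symbol, "R"))
        head += 1 if move == "R" else -1
    raise RuntimeError("no verdict within the step budget")

# Hypothetical example: does the input contain at least one '1'?
delta = {
    ("scan", "0"): ("scan", "0", "R"),   # keep scanning over zeros
    ("scan", "1"): ("acc", "1", "R"),    # found a one: accept
    ("scan", "_"): ("rej", "_", "R"),    # hit the blank past the input: reject
}
print(run_tm(delta, "scan", "acc", "rej", "0001"))  # True
print(run_tm(delta, "scan", "acc", "rej", "0000"))  # False
```

The step budget is a practical concession of the simulation, not of the model: a real TM is perfectly entitled to run forever.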
The beauty of even this simple TM model (versus variants of more elaborate construction) is its expressiveness: it can represent any algorithm, any problem algorithmically solvable by, let’s presume, a non-Quantum computer.
Now, while the TM is capable of modelling any algorithm, it may require an insurmountable time frame to carry out the computation. Imagine representing the states and transitions of a modern operating system with a graphical frontend by means of a TM!
Thankfully, by virtue of being a theoretical model, such concerns are superfluous. We are not bound by any clock cycle or CPU frequency in determining the TM transition speed.
Some questions a TM helps resolve are the following.
Can a problem be computed?
This can be demonstrated in one of two ways. Either the problem is sufficiently simple that we can describe a TM that, on any input, reaches an accept or reject state as relevant to the problem. Alternatively, we can map the problem to another whose computability is known. We carry out this mapping by means of (you guessed it) another TM. The computability of the latter problem then implies the computability of the former. As a contrapositive, the non-computability of the former problem implies the non-computability of the latter. As a consequence of such a mapping, we can demonstrate that the Halting Problem cannot be computed, which, more practically, dictates that an infallible infinite-loop checker cannot exist.
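The heart of that Halting Problem argument can be sketched in a few lines of Python. This is only the familiar self-reference argument rendered as code: the oracle `halts` is hypothetical (its body is deliberately left empty), and the contradiction shows why no real implementation of it can exist.

```python
# Suppose, for contradiction, a perfect infinite-loop checker existed:
def halts(program, argument):
    """Hypothetical oracle: return True iff program(argument) halts."""
    ...

def troublemaker(program):
    # Do the opposite of whatever the oracle predicts about
    # the program running on its own source.
    if halts(program, program):
        while True:       # predicted to halt, so loop forever
            pass
    return "halted"       # predicted to loop, so halt

# troublemaker(troublemaker) now defeats any answer halts could give:
# if halts says it halts, it loops; if halts says it loops, it halts.
```

Whichever verdict the oracle returns about `troublemaker(troublemaker)`, the program does the opposite, so no such oracle can be written.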
The problem’s time complexity. This is measured not in absolute units dependent on some arbitrary CPU clock, but by means of worst-case asymptotic run time. In the theory of algorithms, the asymptotic run time describes how the run time grows with increasing input length and/or size. Specifically, does the run time increase linearly, logarithmically, or along a more demanding trajectory: polynomially, or, in the worst case, exponentially? This too can sometimes be demonstrated directly by means of a TM, but more often by mapping to a problem of known time complexity.
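To make those growth trajectories tangible, here is a small sketch of my own (the function names and step-counting conventions are hypothetical) that counts abstract steps, rather than wall-clock time, for three common growth rates.

```python
def linear_steps(n):
    """Steps for a single left-to-right scan over n cells."""
    return n

def log_steps(n):
    """Steps for repeated halving, e.g. binary search over n cells."""
    steps = 0
    while n > 1:
        n //= 2
        steps += 1
    return steps

def quadratic_steps(n):
    """Steps for comparing every cell with every other cell (polynomial)."""
    return n * n

# Watch how each column grows as the input length n grows:
for n in (8, 64, 512):
    print(n, linear_steps(n), log_steps(n), quadratic_steps(n))
```

Doubling the input barely nudges the logarithmic column, moves the linear one proportionally, and makes the quadratic one explode; that shape, not any clock, is what asymptotic analysis captures.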
The problem’s space complexity, in worst-case asymptotic growth. That is, as the input size increases, what is the asymptotic increase in the number of cells explored or written by a TM? In practical terms, space complexity corresponds to a modern program’s memory and disk requirements. Similar demonstration strategies apply.
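A minimal, hypothetical sketch (the function names and the cell-counting convention are my own) can contrast two solutions to the same problem with different space behavior: checking balanced parentheses with only a counter on the work tape, versus first copying the whole input to the work tape.

```python
def counter_space(s):
    """Check balanced parens keeping only a counter: constant working cells."""
    depth = 0
    for ch in s:
        depth += 1 if ch == "(" else -1
        if depth < 0:
            return False, 1          # one working 'cell' (the counter)
    return depth == 0, 1

def scratch_space(s):
    """Same check via a scratch copy of the input: linearly many working cells."""
    scratch = list(s)                # every input cell copied to the work tape
    depth = 0
    for ch in scratch:
        depth += 1 if ch == "(" else -1
        if depth < 0:
            return False, len(scratch)
    return depth == 0, len(scratch)

print(counter_space("(())"))   # (True, 1)
print(scratch_space("(())"))   # (True, 4)
```

Both give the same verdict, but the second one’s working-cell count grows with the input, which is exactly the distinction space complexity draws.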
A Turing Machine doesn’t have to compute strictly one problem. Similarly to a modern computer that stores a myriad of programs on disk, loads them into RAM, and executes them on a CPU, a TM can receive the encoded version of another TM, decode it, and execute it accordingly. This is possible even on the simplest single-tape TM, albeit in a convoluted fashion. Multi-tape TM variants exist to more cleanly segregate the input, the output, and a working tape.
I find the Turing Machine an intriguing, simple, yet powerful tool. Hobbyists today construct them for demonstration purposes, either from mechanical parts or, less excitingly, as software simulations. I find a TM sufficiently fascinating on paper.