The Turing machine - one of my favorite theoretical tools in Computer Science. It represents the most basic of computing models: a model of the simplest computational unit still sufficient to execute any algorithmically conceivable task.
TMs are primarily used in computability and computational complexity studies. However, I adore the TM for its own elegant sake. Some tools are pleasurable to wield irrespective of purpose.
Many Turing Machine variants exist, but here I present a slightly simplified definition of the traditional model to avoid unnecessary complications. A TM consists of:
An infinite tape of cells, either preset or entirely blank.
Allowable symbols for writing to the tape, aka the alphabet. (Input and output alphabets are often separately defined, but we can here disregard such trifles.)
The tape head, indicating our present tape position.
The states. Each TM can occupy a fixed number of states. However, for it to be a valid model, it must define a start state, an accept state, and a reject state. Understand primarily that there must be states.
The transition logic. This is the ‘programming’ aspect. The input comprises:
- the current state
- the tape symbol at the current position.
And the output:
- the next state
- the new symbol to write to the cell
- the head movement - left or right.
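The transition logic above maps naturally onto a lookup table. Here is a minimal sketch of a TM simulator in Python; the machine, state names, and step cap are invented for illustration, not part of any standard definition:

```python
# A minimal Turing machine simulator (illustrative sketch).
BLANK = "_"

def run(transitions, start, accept, reject, tape_input, max_steps=10_000):
    """Step a TM until it reaches the accept/reject state (or the step cap)."""
    tape = dict(enumerate(tape_input))  # sparse tape: cell index -> symbol
    head, state = 0, start
    for _ in range(max_steps):
        if state in (accept, reject):
            return state, tape
        # Transition input: (current state, symbol under the head).
        state, write, move = transitions[(state, tape.get(head, BLANK))]
        # Transition output: next state, symbol to write, head movement.
        tape[head] = write
        head += 1 if move == "R" else -1
    raise RuntimeError("step limit reached")

# Example machine: flip every bit of a binary string, then accept.
flip = {
    ("A", "0"): ("A", "1", "R"),
    ("A", "1"): ("A", "0", "R"),
    ("A", BLANK): ("accept", BLANK, "R"),
}

final, tape = run(flip, "A", "accept", "reject", "1011")
print("".join(tape.get(i, BLANK) for i in range(4)))  # -> 0100
```

The infinite tape is modeled lazily as a dictionary: any cell never written to simply reads back as blank.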
So far, setting aside the state/transition logic, an example TM configuration may resemble the following:
The state set could be as simple as A, B, C, D. You may also have noticed that the transition logic strongly resembles a Finite State Machine (FSM). It does.
In fact, the sole but fundamental difference between a TM and an FSM lies in the tape. A TM has a tape to serve as a memory unit. Infinite memory, in fact.
The only bit of memory contained in an FSM, on the other hand, is the state. Were this an infinite state machine, the states could effectively encode the entire TM transition logic, but alas.
Drawing a parallel with a modern, physical computational unit, the TM tape encompasses the memory (RAM), the hard disk, the cache, and the registers, all inclusive.
In fact, if we were to encode the entire TM definition (more on that shortly), the tape even includes the application code.
This simple, yet fully expressive design highlights my fascination with the TM. It does away with all the modern hardware abstractions, encompassing all read-write storage in a single unit.
With a Turing Machine, we concern ourselves not with performance and hardware costs, but with computability and tractability in the theoretical sense, which, ultimately, bears impact on the practical side.
Anything representable and computable by any computing machine in existence is also representable and computable by a TM. And vice versa.
One could simulate the internet on the simplest of TM models, with an alphabet of strictly ones and zeros, though it would be a painfully nauseating chore. Were we to entertain the notion, better to compile a higher-level representation into a TM, the way we compile application code into assembly.
But the above houses a powerful invariant indeed. The simplest TM variant can compute anything computable by a more complex TM variant (or a multi-core modern machine).
Beyond the varying sizes of tape alphabets and state sets (characteristics more integral to the TM logic than to the model itself), there exist other TM models.
Some feature multiple independent tapes (with correspondingly augmented transition logic).
Some variants involve a combination of a read-only input tape, a read-write working tape, and a write-only output tape.
Some TMs feature more complex head movement (i.e., the ability to stay put rather than move the head left or right). We could likewise impose a start-point barrier, restricting any head movement beyond it.
Yet any of these variants can simulate any other, of any degree of complexity. The only difference lies in the number of computational steps (the simulation's computational cost), provided an infinite tape at its disposal.
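As a taste of how such simulations go, the richer variant's ‘stay’ move can be rewritten away entirely in the strict left/right model: step right into a fresh helper state, then immediately bounce back left. A hedged sketch (state names and the `S`/`L`/`R` encoding are my own invention):

```python
# Rewrite every 'stay' (S) transition into the strict left/right model:
# move right into a fresh helper state that, whatever symbol it then
# sees, leaves it untouched and moves back left into the real target.

def eliminate_stay(transitions, alphabet):
    out = {}
    for (state, sym), (nxt, write, move) in transitions.items():
        if move != "S":
            out[(state, sym)] = (nxt, write, move)
            continue
        helper = f"{nxt}__return"           # fresh intermediate state
        out[(state, sym)] = (helper, write, "R")
        for a in alphabet:                  # bounce back left, unchanged
            out[(helper, a)] = (nxt, a, "L")
    return out

rules = {("A", "x"): ("B", "y", "S")}
print(eliminate_stay(rules, {"x", "y", "_"}))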
The necessity for states
It’s worth also mentioning that with an infinite tape, a TM need not strictly feature states. We can easily demonstrate this by introducing an additional tape to simulate the state definition.
The tape need merely store our current “state”. We also augment the transition logic to consider one extra tape.
Further yet, since all models are algorithmically omnipotent, we can always compact multiple tapes into one. So a TM need have but a single tape. Now, with nothing more than a tape, demonstrating anything on the TM would prove almost as cumbersome as simulating the entirety of the internet.
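The tape-compaction trick amounts to widening the alphabet: each cell of the combined tape holds a pair of symbols, one per original tape. A minimal sketch of the cell-pairing step (head positions would likewise be tracked with marked symbols, which I omit here):

```python
# Fold two tapes into one by pairing their cells. The shorter tape is
# padded with blanks so every combined cell is a well-formed pair.
from itertools import zip_longest

def merge_tapes(tape_a, tape_b, blank="_"):
    return [tuple(pair) for pair in zip_longest(tape_a, tape_b, fillvalue=blank)]

print(merge_tapes("101", "ab"))  # -> [('1', 'a'), ('0', 'b'), ('1', '_')]
```

The pairs are just a larger alphabet: two tapes over alphabets of sizes m and n become one tape over an alphabet of size m times n.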
The physical representation
What would a TM resemble if it were indeed a physical machine? You can likely imagine:
- a tape; hopefully infinite
- a head, as one imagines the head of a vinyl record player, only with more discrete movements between cells.
- the configuration settings burned into the machinery to include the alphabet, the states, and the transition logic.
- circuitry to scan the tape, reference the next transition, write back, and transition the head appropriately.
Hardware vs software (TM universality)
Need a TM necessarily be a physical machine?
No. As I alluded to earlier, we could encode the entire TM definition as data; data to pass to another TM to execute.
Let us imagine a parent TM (PTM), whose burned logic consists entirely of receiving, as input, an encoded definition of another, child TM (CTM), along with any appropriate data (parameters) to pass to the child.
PTM, the physical unit, would decode the CTM definition, simulating it as if it were an independent machine.
CTM could, in turn, be a host of yet another encoded TM (or multiple). The invariant is quite recursive.
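The parent/child idea can be sketched concretely: the child TM is nothing but text, and the parent is a fixed interpreter that decodes that text back into rules and steps them. The encoding format below is invented purely for illustration:

```python
# The child TM serialized as plain text: one rule per line, in the
# (made-up) form 'state,symbol->state,symbol,move'.

def decode(encoded):
    """Parse the serialized rules back into a transition table."""
    rules = {}
    for line in encoded.strip().splitlines():
        lhs, rhs = line.split("->")
        state, sym = lhs.split(",")
        nstate, nsym, move = rhs.split(",")
        rules[(state, sym)] = (nstate, nsym, move)
    return rules

def parent(encoded_child, start, accept, tape_input, blank="_"):
    """The 'PTM': fixed logic that decodes and simulates the child."""
    rules = decode(encoded_child)
    tape = dict(enumerate(tape_input))
    head, state = 0, start
    while state != accept:
        state, write, move = rules[(state, tape.get(head, blank))]
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape.get(i, blank) for i in range(len(tape_input)))

# Child machine as data: overwrite every symbol with 'x', then halt.
child = """
go,1->go,x,R
go,0->go,x,R
go,_->halt,_,R
"""
print(parent(child, "go", "halt", "10"))  # -> xx
```

Note that nothing stops the child's tape contents from themselves being another serialized machine, which is exactly the recursion described above.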
This abstraction entirely reflects the way in which a modern computer knows to load the operating system. The OS, in turn, loads and interacts with user applications. The applications load additional functions, modules, etc.
Logic abstracts and simulates other logic, and so on; abstractions built upon abstractions. In formal terminology, this is the notion we call TM universality.
A modern computer, beyond performing computations, interfaces with other hardware. It doesn’t just output ones and zeros. Or does it, in a sense?
Ultimately, the hardware transmits these interactions as a series of electrical impulses of ones and zeros indeed. Accordingly, a TM could interface with other hardware by feeding it the appropriate electrical impulses.
It follows, a Turing Machine is computationally equivalent to any computer! And it even equates tractably in the strictly theoretical sense; just not the practical. :)
Questions, comments? Connect.