I want to explore a set of paradoxes that arise from self-referential problem formulations. I will then present the paradox in the more formal setting of computability theory. While I shall refrain from resorting to raw set theory or the pleasantries of Russell’s paradox (“Does the set of all sets that don’t contain themselves contain itself?”), a few subtle references may appear.
Heterological words
Let’s take the basic example for the Grelling-Nelson paradox. A heterological word (an adjective) is one that does not describe itself. Example heterological words include ‘long’, ‘French’, and ‘monosyllabic’, none of which describes itself. The opposite is an autological word, one that does describe itself. Examples include ‘short’, ‘English’, ‘polysyllabic’.
What about the very word ‘heterological’? Read slowly: if it is heterological, then ‘heterological’ does not describe itself, implying ‘heterological’ is not heterological. If it is not heterological, then ‘heterological’ is autological, describing itself, implying that ‘heterological’ is heterological. Either case leads to a paradox.
What if we try to eliminate the paradox? Let’s modify the formulation of heterological to exclude the word itself, that is, a heterological word is any non-autological word other than ‘heterological’. We quickly realize that ‘non-autological’ yields the same paradox. We can continue to refine this definition by appealing to other synonyms, only to be met with the same paradox each time.
Notice that only the negatively-framed self-reference produces the paradox. ‘Autological’, for example, yields no contradiction. If we declare ‘autological’ to be autological, the claim holds: the word describes itself. If we instead declare ‘autological’ to be heterological, that also holds: we are simply saying the word does not describe itself, which is perfectly consistent. Either assignment works for ‘autological’, whereas neither works for ‘heterological’.
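To make the regress tangible, here is a minimal Python sketch (a toy of my own, not part of the original formulation) that models each adjective as a predicate on strings and treats a word as heterological when its own predicate rejects it:

```python
# A toy model: each adjective maps to a predicate on strings.
meanings = {
    "long":  lambda w: len(w) >= 10,   # arbitrary cutoff for 'long'
    "short": lambda w: len(w) < 10,
}

def heterological(word):
    """A word is heterological iff its own meaning does not describe it."""
    return not meanings[word](word)

# Register 'heterological' itself, so the word can be asked about itself.
meanings["heterological"] = heterological

print(heterological("long"))    # True: 'long' is only 4 letters
print(heterological("short"))   # False: 'short' is indeed short
heterological("heterological")  # RecursionError: the paradox, rendered as infinite regress
```

The last call never settles on True or False; it simply keeps asking itself the question, which is as close as running code gets to the paradox.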
The barber paradox
How about the village barber B, who shaves those and only those villagers who don’t shave themselves? The arrangement sounds solid. But does B shave B? If yes, then he shouldn’t. If no, then he should.
What if B asks another barber, A, to shave him? That would work, except that in letting A shave him, B, by definition, does not shave himself and must, consequently, shave himself. And on it continues.
The paradox arises from a self-referential formulation. However, a slightly more practical formulation, dictating that everyone either shaves himself or seeks a barber, yields a satisfiable arrangement for all, barbers included. Similarly, changing the formulation so that a barber from village V shaves those and only those villagers from village W who don’t shave themselves also resolves the paradox, this no longer being a self-referential version.
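For the formally inclined, the contradiction fits in a single instantiation (writing shaves(x, y) for “x shaves y”; the predicate name is my own shorthand):

```latex
\forall x \,\bigl(\mathrm{shaves}(B, x) \leftrightarrow \neg\,\mathrm{shaves}(x, x)\bigr)
\;\Longrightarrow\;
\mathrm{shaves}(B, B) \leftrightarrow \neg\,\mathrm{shaves}(B, B)
```

Instantiating x with B yields the right-hand biconditional, which no assignment of truth values satisfies; hence no such barber can exist.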
Liar paradoxes
These all bear relation to the basic liar paradox: “This statement is false.” If true, then the statement’s content contradicts the hypothesis. If false, then the statement must be true, again contradicting the hypothesis.
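In propositional shorthand (the letter L is my own notation for the statement), the liar defines itself as its own negation:

```latex
L \;\equiv\; \neg L
\qquad\Longrightarrow\qquad
L \leftrightarrow \neg L \quad \text{(no truth value satisfies this)}
```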
Derivatives of the liar paradox frequently appear in common speech. “I know that I know nothing”, “nothing is true”, or “this statement is meaningless” are the immediate candidates. When feeling particularly impatient, I could utter a statement to the effect of
“Everything is a theatre.”
What I really mean to say is “don’t take anything seriously.” If true, this implies don’t take even what I say seriously
→ not everything is a theatre
→ the statement could be true
→ maybe you should take me seriously
→ everything might be a theatre
→ not everything might be a theatre
If false, then not everything is a theatre
→ this statement may have certain truth
→ everything might be a theatre
In both cases, the chain of implications ends up contradicting the hypothesis. You might suggest relaxing such a categorical and contrived clause to
“Mostly everything is a theatre.” If true,
→ most likely I’m not to take seriously
→ most likely mostly everything is not a theatre
which contradicts the underlying content of the statement, albeit in a strange probabilistic sense. The false hypothesis leads to a similar contradiction.
We’re abusing set theory now, appealing to probability. What if I were to state something even more pragmatic?
“Some things are a theatre”
You may believe in the safety of such a hypothesis, but regrettably, if this statement is itself one of those things, it leaves open the possibility that nothing is a theatre, again contradicting the hypothesis.
As long as the statement is an element of the set whose property the statement describes, we fall into a self-referential paradox.
How often do we hear the following in spy or political intrigue genres?
“Don’t trust anyone”
→ Don’t trust me
→ Some people you can trust. Contradiction.
Now, the following variation I find a weak case for a self-referential paradox, and beyond that, it serves little purpose.
“Maybe you shouldn’t trust anyone”
→ Maybe you should trust someone
How often do we encounter these ‘maybe’ statements in common speech? They are meant to inspire a sense of doubt and pragmatism, but if interpreted logically, hold no water, implying that maybe the underlying message is true, and maybe it isn’t, with no element of probability.
Bonus: Catch-22
While not a self-referential paradox, it’s one of my favorites.
The poor WW2 bombardier Yossarian simply could not get out of flying more missions. Each time he approached the required number of missions, the number would increase. Alternatively, to be grounded from combat duty, one had to request a medical evaluation and be deemed crazy. The very act of pleading lunacy to Doc Daneeka, however, immediately marks him as sane, and fit to fly more combat missions.
To avoid flying more combat missions, Yossarian would have to be crazy, and thus not want to avoid them; he would have to be authentically crazy. But then he wouldn’t request an evaluation, and would still fly more missions. One has to not want the thing one wants in order to obtain it. And yet one also has to ask for it.
Decidability and Undecidability
Let’s continue with a little bit of computability theory. A decidable language L is one for which there exists a Turing Machine M that always halts when computing L. That is to say, upon receiving input w, M accepts if w ∈ L, and rejects if w ∉ L. M never loops indefinitely on any w.
Stated less formally, we can replace decidable with computable, and say simply that a computable language/problem is one for which there exists a computational unit (a Turing Machine or any equivalent computational model) that always terminates the computation (always halts). Even less formally, a problem is computable if it can be stated algorithmically in a way that always terminates. Otherwise, the problem is not computable, and the respective language (in the sense of computability) is considered undecidable.
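As a loose illustration (ordinary Python functions standing in for Turing Machines, which is of course a simplification), compare a procedure that halts on every input with one that does not:

```python
# A decider-like procedure for the language of even-length binary strings:
# it halts on every input, accepting strings in the language and rejecting the rest.
def decide_even_length(w: str) -> bool:
    return len(w) % 2 == 0

# By contrast, this procedure halts only if it eventually finds a '1'; on an input
# containing no '1' it searches forever. A decider must never behave like this.
def search_for_a_one(w: str) -> bool:
    i = 0
    while True:
        if i < len(w) and w[i] == "1":
            return True
        i += 1  # runs past the end of w and never stops when no '1' exists
```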
Halting Problem and Diagonalization
Can we determine whether a Turing Machine M actually halts? Stated less theoretically, is it possible to determine whether a computer algorithm always terminates, rather than entering an infinite loop? I emphasize that, for M to halt in this sense, it must not only accept the valid inputs but also reject the invalid ones. Equivalently, an algorithm must terminate irrespective of what input we provide it. Is it possible to build such a detector?
Let’s explore the classic example of a halting machine, or rather, a Turing Machine HTM that, upon receiving as input a description of another Turing Machine M and some input w, determines whether M halts on w. (The universality of Turing Machines enables one TM to accept the description of another as input, similar to a modern OS, itself a program, storing and running a number of other compiled programs.)
We’ll construct HTM in a particular, contrarian way: given the description of M, HTM determines whether M halts on its own description, and then does the opposite, looping forever if M halts and halting if M does not. Now, observe that if detecting a halting/non-halting computation is possible, then such an HTM construction is also possible. Can we agree on that?
What behavior should we expect if we feed HTM its own description? This looks uncannily familiar. If HTM halts, then it doesn’t, and if HTM loops forever, then it halts. Another self-referential paradox. We clearly cannot construct an HTM that follows this behavior. (Paradoxes, I point out, are not allowable behaviors in computational models.)
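The same argument, sketched in Python: here I split HTM into the assumed detector halts(prog, inp) and the contrarian built on top of it. Both are hypothetical, the whole point being that no correct halts can ever be supplied:

```python
# `halts` stands in for the hypothetical detector: halts(prog, inp) == True iff
# prog(inp) terminates. We take it as a parameter precisely because none exists.
def make_contrarian(halts):
    def contrarian(prog):
        if halts(prog, prog):   # if prog halts when fed its own source...
            while True:         # ...loop forever
                pass
        return                  # ...otherwise halt immediately
    return contrarian

# Feeding the contrarian its own source:
#   contrarian(contrarian) halts  <=>  halts(contrarian, contrarian) is False,
# i.e. it halts exactly when it doesn't, so no correct `halts` can exist.
```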
Does the above sound absurd? Absolutely, as do all attempts to analyze the aforementioned self-referential paradoxes. Yet this proof method leverages the so-called diagonalization argument, and it’s the same one that all the previous paradoxes appealed to.
Intuitively, we can see that detecting a computation that does halt isn’t the challenge. We know when an algorithm halts. But this isn’t sufficient for our formal definition of a decidable language (HTM must be a decider for the language consisting of all pairs of a valid TM M and an input w such that M halts on w). For a language to be decidable, the respective decider must also halt on non-accepting input. Accordingly, we must also detect when M loops forever. But we cannot, since, intuitively, a program that seems to loop indefinitely may at some point, 3 million years later, halt.
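The one-sided half really is easy, as a sketch (again with plain Python callables standing in for TMs):

```python
# Simulating M on w can confirm halting, but can never report non-halting:
def report_if_halts(prog, inp):
    prog(inp)     # if this call ever returns, the computation halts...
    return True   # ...but if it loops, we are stuck looping right along with it
```

There is no symmetric trick for the looping side, and that is precisely what decidability demands.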
We’ve demonstrated the undecidability of the halting problem and the impossibility of a halting machine. Simple corollaries (detailed by Alan Turing) demonstrate that any variation of a halting machine is undecidable, and, respectively, that no possible program, regardless of computational model, can answer whether another arbitrary program will halt or loop forever.
Slight caveat
We could construct HTM to consider all possible configurations of M, such that if the same configuration ever repeats, we know that M loops. But just how many configurations are we speaking of, and what defines a configuration of a TM? Let’s assume a finite TM tape (rather than an infinite one), which more accurately represents a practical computer with limited memory. A configuration then comprises:
- The current TM state out of S possible states.
- The tape contents, for an n-length tape and a tape alphabet consisting of A characters.
- The head position in the tape.
The total number of possible configurations thus equates to S · n · Aⁿ, exponential with respect to the tape length, i.e. the amount of memory in a computational unit. Even for a relatively small amount of memory characteristic of the 1980s, the number of configurations quickly surpasses the number of atoms in the universe. HTM would also have to keep track of all visited configurations of M, cross-referencing them at each step.
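A back-of-the-envelope computation (the particular numbers, 100 states and roughly 1 KB of binary tape, are my own illustrative choices):

```python
S = 100        # machine states
A = 2          # tape alphabet size
n = 8 * 1024   # tape cells, roughly 1 KB of binary memory

configs = S * n * A**n    # S * n * A^n possible configurations
print(len(str(configs)))  # ~2472 decimal digits, roughly 10^2472
# For comparison, the observable universe holds on the order of 10^80 atoms.
```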
In exploring the decidability of a language and the viability of a respective decider TM, we are interested in constructions that could at least conceivably be carried out, not in something astronomically intractable.
Diagonalization demonstrated
Given all Turing Machines in the infinity of the universe, {M1, M2, M3, …}, we wish to detect whether a TM does not accept its own description as input. Let’s build a decider Turing Machine Ml precisely for this problem: it takes as input an arbitrary TM description M and accepts if and only if M rejects its own description as input. Bear in mind that Ml itself belongs to the list of all TMs in our infinite set.
Here’s a table demonstrating whether some TM Mi accepts the description of another TM Mj. The diagonal cells, therefore, correspond to the output of a TM being passed its own description, and each cell in the Ml row contains the inverse of the diagonal entry in its column.
|    | M1 | M2 | M3 | … | Ml | … |
|----|----|----|----|---|----|---|
| M1 | 0  | 0  | 1  | … | 1  | … |
| M2 | 1  | 0  | 0  | … | 0  | … |
| M3 | 0  | 1  | 1  | … | 1  | … |
| …  | …  | …  | …  | … | …  | … |
| Ml | 1  | 1  | 0  | … | ?  | … |
In this example, M1 rejects its own description (hence Ml accepts, returning 1), M3 accepts its own description (hence Ml rejects), and so on. If we follow the diagonal, we eventually arrive at the Ml vs Ml cell. What should Ml return if passed its own description? If it accepts, it rejects, and if it rejects, it accepts. We stumble upon the same paradox we’ve observed all along.
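The diagonal flip is mechanical enough to write down; here is a finite toy using made-up acceptance data that matches the visible corner of the table above:

```python
# accepts[i][j] == 1 iff machine Mi accepts the description of machine Mj
# (the 3x3 corner of the table above).
accepts = [
    [0, 0, 1],  # M1
    [1, 0, 0],  # M2
    [0, 1, 1],  # M3
]

# Ml's row flips the diagonal: entry j is 1 iff Mj does NOT accept its own description.
ml_row = [1 - accepts[j][j] for j in range(len(accepts))]
print(ml_row)  # [1, 1, 0], matching the Ml row of the table

# Were Ml itself to occupy some row l of the table, its diagonal cell would have to
# equal 1 minus itself, a value that cannot exist.
```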
Here is the same diagonalization matrix for the heterological-word paradox; each row is an adjective applied to the quoted word at the head of each column:
|               | ‘long’ | ‘French’ | ‘monosyllabic’ | ‘polysyllabic’ | ‘heterological’ |
|---------------|--------|----------|----------------|----------------|-----------------|
| long          | 0      | 0        | 1              | 1              | 1               |
| French        | 0      | 0        | 0              | 0              | 0               |
| monosyllabic  | 1      | 1        | 0              | 0              | 0               |
| polysyllabic  | 0      | 0        | 1              | 1              | 1               |
| heterological | 1      | 1        | 1              | 0              | ?               |
Notice, ‘long’ is not a long word, thus is heterological. ‘Polysyllabic’ is a polysyllabic word, thus is not heterological. The entire heterological row, in fact, takes the inverse of the diagonal entry in the respective column. Is ‘heterological’ a heterological word? The respective cell takes the inverse of its own value - a self-referential paradox.
Questions, comments? Connect.