# Lecture 12: Church–Turing Thesis

## Church–Turing Thesis

> Intuitive notion of computation
> equals
> Turing-machine model of computation.

The thesis is not a mathematical statement and therefore cannot be proved (in the usual mathematical sense). Instead, we should view it as a scientific hypothesis. Nevertheless, the thesis makes interesting mathematical predictions that we can prove or disprove rigorously. (A disproof of a prediction of the Church–Turing thesis would falsify the hypothesis.)

## Computing on Configurations

We will discuss examples of other computational models (variants of Turing machines) and verify that these models are no more powerful than ordinary Turing machines. (In this way, we verify predictions of the Church–Turing thesis.)

In order to simulate another computational model by Turing machines, we will follow a simple strategy:

• Identify a notion of configuration for the competing computational model.
• Encode these configurations as strings.
• Argue that Turing machines can manipulate these strings in a way that simulates the computation of the competing model.

## Multi-tape Turing machines

Suppose we augment Turing machines with multiple tapes and multiple independent heads. As before, the finite-state control determines the behavior of the heads depending on what the heads read in the current step. Formally, we have a transition function $\delta{\colon}{} Q\times \Gamma^k \to Q\times \Gamma^k \times \{{\mathtt L},{\mathtt R}\}^k$ to control $k$ independent heads.

Theorem: Single-tape Turing machines can simulate multi-tape Turing machines.

Proof Sketch: For simplicity we consider the case $k=2$. We can encode the configuration of a $2$-tape machine $M$ as a string $C$ of the form $C = u_1 q a_1 v_1 \# u_2 q a_2 v_2.$ Here, $q\in Q$ is the state of the control, $u_i\in \Gamma^\ast$ is the content of tape $i$ before the head, $v_i\in \Gamma^\ast$ is the content of tape $i$ after the head, and $a_i\in\Gamma$ is the symbol that head $i$ is reading in the current step. (The hash symbol $\#\notin \Gamma$ delimits the contents of the two tapes.)

The following single-tape Turing machine $M'$ simulates the computation of the $2$-tape machine $M$. For simplicity, we can choose the tape alphabet of $M'$ such that it contains all elements of $\Gamma \cup Q \cup \{\# \}$ (but we could also choose to encode everything as binary strings).

Operation of $M'$ on input $w$:

• Write the start configuration $C_0 = q_0 w \# q_0 { {\scriptstyle \square} }$ of $M$ on the tape of $M'$.
• Repeat the following steps:
• Read the configuration $C$ of $M$ on the tape of $M'$.
• Accept if $C$ is an accept-configuration for $M$ and reject if $C$ is a reject-configuration for $M$.
• Manipulate the tape content such that it becomes the subsequent configuration $C'$ of $M$.

Each of the steps can be implemented by a constant number of passes over the tape. After $i$ full iterations of the loop, the tape of $M'$ contains the configuration of $M$ after $i$ computation steps. Therefore, $M'$ accepts or rejects on input $w$ if and only if $M$ accepts or rejects on input $w$. $\square$
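To make the string manipulation concrete, here is a small Python sketch (not part of the lecture) of one simulation step. Configurations are stored exactly in the proof's form $u_1 q a_1 v_1 \# u_2 q a_2 v_2$, using single-character states and tape symbols as a toy encoding; the concrete 2-tape machine at the end (a copier with states `q` and `A`) and all helper names are illustrative assumptions.

```python
STATES = {'q', 'A'}          # 'q' = copying state, 'A' = accept (toy machine)

def parse(track):
    """Split a track string  u q a v  into its four parts."""
    i = next(j for j, c in enumerate(track) if c in STATES)
    u, q, rest = track[:i], track[i], track[i + 1:]
    a, v = (rest[0], rest[1:]) if rest else ('_', '')   # '_' = blank symbol
    return u, q, a, v

def write_and_move(u, b, v, q2, d):
    """Write b under the head, move in direction d, reinsert the state marker."""
    if d == 'R':
        return u + b + q2 + (v or '_')        # pad with a blank at the right end
    if u:                                     # move left inside the tape
        return u[:-1] + q2 + u[-1] + b + v
    return q2 + b + v                         # at the left edge the head stays

def step(config, delta):
    """Compute the successor configuration C' of C in one pass over the string."""
    t1, t2 = config.split('#')
    u1, q, a1, v1 = parse(t1)
    u2, _, a2, v2 = parse(t2)
    q2, (b1, d1), (b2, d2) = delta[(q, a1, a2)]
    return write_and_move(u1, b1, v1, q2, d1) + '#' + write_and_move(u2, b2, v2, q2, d2)

# Toy 2-tape machine: copy tape 1 to tape 2, accept on reaching a blank.
delta = {
    ('q', '0', '_'): ('q', ('0', 'R'), ('0', 'R')),
    ('q', '1', '_'): ('q', ('1', 'R'), ('1', 'R')),
    ('q', '_', '_'): ('A', ('_', 'R'), ('_', 'R')),
}

config = 'q01#q_'            # start configuration  q0 w # q0 _  for input w = 01
while 'A' not in config:
    config = step(config, delta)
print(config)                # both tapes now hold 01, state is 'A'
```

A real single-tape simulator would perform `parse` and `write_and_move` by sweeping a head over the tape; the Python string operations stand in for those passes.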

## Non-deterministic Turing machines

Suppose we allow non-deterministic transitions in the finite-state control of a Turing machine $M$. Formally, we have a transition function $\delta$ such that $\delta(q,a)\subseteq Q\times \Gamma \times \{{\mathtt L},{\mathtt R}\}$ describes the possible transitions if the control is in state $q$ and the head reads $a$. Then, a configuration $C$ of $M$ can yield multiple configurations $C'$ in one step, denoted $C\vdash_\delta C'$.

We say that a non-deterministic Turing machine accepts on input $w$ if there exists a sequence of configurations $C_0,\ldots,C_m$ such that $C_0$ is the start configuration, $C_{i}\vdash_\delta C_{i+1}$ for all $i\in\{0,\ldots,m-1\}$, and $C_m$ is an accept-configuration. The language $L(M)$ of a non-deterministic Turing machine $M$ is the set of strings $w\in \Sigma^\ast$ such that $M$ accepts on input $w$. (Unlike for deterministic machines, we don’t define what it means for a non-deterministic machine to reject.)

Theorem: A language $L$ is recognizable by non-deterministic Turing machines if and only if it is recognizable by deterministic Turing machines.

Proof: Let $M$ be a non-deterministic Turing machine. We are to construct a deterministic Turing machine $M'$ with the same language $L(M')=L(M)$. The idea is that $M'$ enumerates all possible computation paths of $M$.

Operation of $M'$ on input $w$:

• Enumerate all binary strings $x$ (e.g., in order of increasing length) and perform the following operations:
• Check that $x$ encodes a sequence of configurations $C_0,\ldots,C_m$.
• Check that $C_0= q_0 w$ is the start configuration.
• Check that $C_i \vdash_\delta C_{i+1}$ for all $i\in \{0,\ldots, m-1\}$.
• Check that $C_m$ is an accept configuration.
• Accept if all checks are passed.

The machine $M'$ can perform each step of the loop by doing a finite number of passes over the tape content. If $M'$ accepts, then there exists an accepting computation path for $M$ and therefore $M$ accepts. On the other hand, if $M$ accepts, then there exists an accepting computation path and an encoding of the path as a binary string $x$. The machine $M'$ will accept when the enumeration reaches the string $x$. $\square$
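The search over computation paths can be sketched in Python as follows (an illustration, not from the lecture). Instead of decoding candidate paths from raw binary strings as in the proof, this sketch enumerates successor configurations directly by breadth-first search, which is equivalent bookkeeping; the toy non-deterministic machine and its states `q`/`A` are assumptions made for the example.

```python
from collections import deque

def successors(config, delta, states):
    """All configurations C' with C |- C', for a single-tape NTM."""
    i = next(j for j, c in enumerate(config) if c in states)
    u, q, rest = config[:i], config[i], config[i + 1:]
    a, v = (rest[0], rest[1:]) if rest else ('_', '')
    out = []
    for q2, b, d in delta.get((q, a), []):    # several choices per step
        if d == 'R':
            out.append(u + b + q2 + (v or '_'))
        elif u:
            out.append(u[:-1] + q2 + u[-1] + b + v)
        else:
            out.append(q2 + b + v)
    return out

def nd_accepts(w, delta, states, accept='A', limit=10_000):
    """Accept iff some computation path reaches the accept state."""
    queue, seen = deque(['q' + (w or '_')]), set()
    while queue and len(seen) < limit:        # budget guard: M may loop
        c = queue.popleft()
        if c in seen:
            continue
        seen.add(c)
        if accept in c:
            return True
        queue.extend(successors(c, delta, states))
    return False

# Toy NTM over {0,1}: non-deterministically guess a position holding a 1.
delta = {
    ('q', '0'): [('q', '0', 'R')],
    ('q', '1'): [('q', '1', 'R'), ('A', '1', 'R')],   # two possible transitions
}
states = {'q', 'A'}
print(nd_accepts('0010', delta, states))   # True
print(nd_accepts('0000', delta, states))   # False
```

Note that, like $M'$ in the proof, this search only recognizes $L(M)$: on inputs outside the language it may run forever (here cut off by the `limit` budget, which the real construction does not have).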

## Computing on Turing Machines

Going beyond the idea of encoding configurations as strings and computing on them, we can encode whole machines as strings and compute on them. For a Turing machine $M$, we write $\langle M \rangle$ to denote an encoding of it as a binary string. We assume that the encoding is formatted such that it is easy for another Turing machine to manipulate. For convenience, we also assume that every binary string encodes some Turing machine. (For example, we can stipulate that a binary string that doesn’t respect the usual formatting decodes to a dummy machine $M_0$.)

Another property of such encodings that is sometimes useful is that for every machine $M$, there are infinitely many binary strings that decode to $M$ or to a machine that behaves exactly in the same way as $M$. We can assume this property because by adding unreachable dummy states to $M$, we can get infinitely many machines that behave exactly like $M$.
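One concrete way to realize such an encoding (a toy choice for illustration; the lecture leaves the exact format open) is to serialize each transition $(q,a)\mapsto(q',b,d)$ as a fixed-width binary record, with malformed strings decoding to a dummy machine $M_0$ that rejects immediately. The 2-bit codes and the state/symbol names below are assumptions of this sketch.

```python
def encode(delta):
    """Encode a transition table over states q,p,A,R and symbols 0,1,_ ."""
    S = {'q': '00', 'p': '01', 'A': '10', 'R': '11'}   # 2-bit state codes
    G = {'0': '00', '1': '01', '_': '10'}              # 2-bit symbol codes
    D = {'L': '0', 'R': '1'}
    return ''.join(S[q] + G[a] + S[q2] + G[b] + D[d]
                   for (q, a), (q2, b, d) in sorted(delta.items()))

def decode(bits):
    """Inverse of encode; malformed strings decode to the dummy machine M0."""
    S = {'00': 'q', '01': 'p', '10': 'A', '11': 'R'}
    G = {'00': '0', '01': '1', '10': '_'}
    D = {'0': 'L', '1': 'R'}
    M0 = {('q', '_'): ('R', '_', 'R')}                 # rejects immediately
    if len(bits) % 9 or not set(bits) <= {'0', '1'}:
        return M0
    delta = {}
    try:
        for i in range(0, len(bits), 9):               # one 9-bit record each
            r = bits[i:i + 9]
            delta[(S[r[0:2]], G[r[2:4]])] = (S[r[4:6]], G[r[6:8]], D[r[8]])
    except KeyError:                                   # e.g. unused symbol code 11
        return M0
    return delta

delta = {('q', '1'): ('A', '1', 'R'), ('q', '0'): ('q', '0', 'R')}
print(decode(encode(delta)) == delta)   # True: the round trip recovers M
print(decode('111'))                    # malformed length, decodes to M0
```

The padding argument from above is visible here: appending records for fresh, unreachable states changes $\langle M \rangle$ without changing the machine's behavior, which yields infinitely many encodings of equivalent machines.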

In the following, we will use the notation $\langle \cdot \rangle$ to encode arbitrary mathematical objects as binary strings.

## Universal Turing machines

We will show that there exists a Turing machine $U$ that can simulate arbitrary Turing machines. Here, it is interesting that the number of states of $U$ and its tape alphabet are fixed, but it can still simulate machines with many more states and many more tape alphabet symbols. A Turing machine with this property is called universal. (In contrast, the machines in our previous constructions were allowed to depend on the machine they are simulating.)

The fact that modern computers are multi-purpose devices and can, in principle, execute arbitrary programs stems from this formal idea of universality.

Theorem: There exists a Turing machine $U$ that on input $\langle M, w \rangle$ simulates the operation of the Turing machine $M$ on input $w\in \{0,1\}^\ast$.

Proof: It’s convenient to first build a two-tape machine $U_0$ that achieves this task. Then, we can simulate $U_0$ by a single-tape machine $U$ using our previous construction for multi-tape machines.

Operation of $U_0$ on input $\langle M, w\rangle$:

• Write the binary encoding of $M$’s start configuration $C_0=q_0 w$ on tape $1$.
• Write $M$’s description $\langle M \rangle$ as a binary string on tape $2$.
• Repeat the following steps:
• Invariant: tape $1$ holds the binary encoding of a configuration $C$ of $M$.
• Locate on tape $1$ the state of $M$ (encoded as a binary string).
• Accept or reject if this state is the accept state or the reject state of $M$.
• Locate on tape $1$ the tape symbol under $M$’s head (encoded as a binary string).
• Look up the corresponding entry in the transition table of $M$ on tape $2$.
• Update the content of tape $1$ according to this transition.

It’s important to note that the finite control of $U_0$ cannot “remember” the current state of $M$ or the tape symbol that $M$ is currently reading. Nevertheless, $U_0$ can find the positions of these items on the tapes. (We can implement this process by moving markers across the tape until we find a match.)

After $i$ full iterations of the loop, tape $1$ holds the configuration of $M$ after $i$ steps on input $w$. Therefore, the machine $U_0$ on $\langle M,w \rangle$ accepts or rejects if and only if $M$ on $w$ accepts or rejects. $\square$
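The operation of $U_0$ can be sketched in Python (an illustration under toy assumptions): the description $\langle M\rangle$ is kept as data standing in for tape $2$, the current configuration of $M$ as a string $u\,q\,a\,v$ standing in for tape $1$, and single characters play the role of binary-encoded states and symbols. The example machine at the end, with accept/reject states `A`/`R`, is an assumption of this sketch, as is the step budget that replaces "simulate forever".

```python
def universal(encoded_M, w, max_steps=10_000):
    """Simulate M on w, one transition-table lookup per step (mirrors U0)."""
    delta, states = encoded_M             # stand-in for the string <M> on tape 2
    config = 'q' + (w or '_')             # start configuration  q0 w  on tape 1
    for _ in range(max_steps):
        # locate the state marker of M inside the configuration string
        i = next(j for j, c in enumerate(config) if c in states)
        u, q, rest = config[:i], config[i], config[i + 1:]
        if q == 'A':
            return 'accept'
        if q == 'R':
            return 'reject'
        a, v = (rest[0], rest[1:]) if rest else ('_', '')
        q2, b, d = delta[(q, a)]          # look up M's transition table
        if d == 'R':
            config = u + b + q2 + (v or '_')
        elif u:
            config = u[:-1] + q2 + u[-1] + b + v
        else:
            config = q2 + b + v           # head stays at the left edge
    raise RuntimeError('step budget exceeded (M may loop)')

# <M> for a machine that accepts exactly the inputs ending in 1:
M = ({('q', '0'): ('q', '0', 'R'),
      ('q', '1'): ('p', '1', 'R'),
      ('p', '0'): ('q', '0', 'R'),
      ('p', '1'): ('p', '1', 'R'),
      ('p', '_'): ('A', '_', 'R'),
      ('q', '_'): ('R', '_', 'R')},
     {'q', 'p', 'A', 'R'})
print(universal(M, '0101'))    # accept
print(universal(M, '0110'))    # reject
```

The point of the theorem survives the toy encoding: `universal` itself is fixed, yet it simulates machines with arbitrarily many states, because the state set and transition table are read from the input rather than built into the control.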

## Acceptance problem for Turing machines

Theorem: The acceptance problem for Turing machines, ${A_{\mathrm{TM}}}= \{ \langle M, w \rangle \mid \text{$M$ accepts on input $w$} \},$ is recognizable.

Proof: The universal Turing machine $U$ from the previous theorem recognizes this language, because $U$ accepts on input $\langle M, w\rangle$ if and only if $M$ accepts on input $w$. $\square$