\documentclass[11pt]{article}
\usepackage{amsmath}
\usepackage{amssymb}
\usepackage{graphicx}
\usepackage{amscd}
\usepackage{amsfonts}
\usepackage{epsfig}
\setlength{\topmargin}{-.55in} \setlength{\oddsidemargin}{0in}
\setlength{\evensidemargin}{0in} \setlength{\textwidth}{6.5in}
\setlength{\textheight}{9.1in}
\begin{document}
\title{Church's Thesis}
\author{Guram Bezhanishvili\footnote{Department of Mathematical Sciences,
New Mexico State University, Las Cruces, NM 88003;
\texttt{gbezhani@nmsu.edu}.}\thinspace\footnote{With thanks to Joel
Lucero-Bryan.}}
\date{}
\maketitle
\subsection*{Introduction}
In this project we will learn about both primitive recursive and
general recursive functions. We will also learn about Turing
computable functions, and will discuss why the class of general
recursive functions coincides with the class of Turing computable
functions. We will introduce the effectively calculable functions,
and the ideas behind Alonzo Church's (1903--1995) proposal to
identify the class of effectively calculable functions with the
class of general recursive functions, known as ``Church's thesis."
We will analyze Kurt G\"{o}del's (1906--1978) initial rejection of
Church's thesis, together with the work of Alan Turing (1912--1954)
that finally convinced G\"{o}del of the validity of Church's thesis.
We will learn much of this by studying and working with primary
historical sources by G\"{o}del, Stephen Cole Kleene (1909--1994),
and Turing.

We begin by asking the following question: What does it mean for a
function $f$ to be \emph{effectively calculable}?
Obviously if we can find an algorithm to calculate $f$, then $f$ is
effectively calculable. For example, the famous Euclidean algorithm tells us
that the binary function producing the greatest common divisor of two integers
is effectively calculable. But what if we cannot find an algorithm that
calculates $f$? The reason could be that there is no algorithm calculating
$f$; or it could be that $f$ is effectively calculable but we were not
successful in finding an algorithm. Thus, it is evident that we need better
means to identify effectively calculable functions.

The problem of identifying the effectively calculable functions (of natural
numbers) took center stage in mathematical research in the 1920s and 1930s.
In the early 1930s at Princeton, Church
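As a concrete illustration (a modern aside in Python, not part of the historical discussion), the Euclidean algorithm can be written out in a few lines:

```python
def gcd(a: int, b: int) -> int:
    """Euclidean algorithm: repeatedly replace (a, b) by (b, a mod b)."""
    while b != 0:
        a, b = b, a % b
    return abs(a)

# gcd(12, 18) -> 6 ; gcd(35, 64) -> 1
```

That such a short, mechanical procedure exists is precisely what makes the greatest common divisor function effectively calculable.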
and his two gifted students Kleene and John Barkley Rosser (1907--1989) were
developing the theory of \emph{$\lambda$-definable} functions. Church proposed
to identify the effectively calculable functions with the $\lambda$-definable
functions. Here is Kleene's description of these events, taken from page 59 of
\cite{GBKleene81}:
\begin{quote}
{\sf The concept of $\lambda$-definability existed full-fledged by
the fall of 1933 and was circulating among the logicians at
Princeton. Church had been speculating, and finally definitely
proposed, that the $\lambda$-definable functions are all the
effectively calculable functions---what he published in
\cite{GBChurch36}, and which I in \cite{GBKleene52} Chapter XII (or
almost in \cite{GBKleene43}) called ``Church's thesis''.
When Church proposed this thesis, I sat down to disprove it by
diagonalizing out of the class of the $\lambda$-definable
functions. But, quickly realizing that the diagonalization cannot
be done effectively, I became overnight a supporter of the
thesis.}
\end{quote}
Though Kleene became an \textquotedblleft overnight\textquotedblright%
\ supporter of the thesis, it was a different story with G\"{o}del.
G\"{o}del arrived at Princeton in the fall of 1933. Church proposed
his thesis to G\"{o}del early in 1934, but, according to a November
29, 1935, letter from Church to Kleene, G\"{o}del regarded it ``as
thoroughly unsatisfactory." Instead, in his lectures during the
spring of 1934 at Princeton \cite{GBGoedel34}, G\"{o}del generalized
the notion of \emph{primitive recursive} functions, which was
introduced by him in his epoch-making paper on undecidable
propositions \cite{GBGodel31}.\footnote{It has to be noted that what
we now call ``primitive recursive" functions G\"{o}del simply called
``recursive." The term ``primitive recursive" was introduced by
Kleene in \cite{GBKleene36b}.} He did this by modifying a suggestion
made by Jacques Herbrand (1908--1931) in a 1931 letter, to obtain
the notion of general recursive functions (also known as the
Herbrand-G\"{o}del general recursive functions). Below we give an
excerpt from the abovementioned letter from Church to Kleene (taken
from \cite{GBDavis82}) that gives an account of his discussion of
effective calculability with G\"{o}del:
\begin{quote}
\textsf{In regard to G\"{o}del and the notions of recursiveness and effective
calculability, the history is the following. In discussion [sic] with him the
notion of lambda-definability, it developed that there was no good definition
of effective calculability. My proposal that lambda-definability be taken as a
definition of it he regarded as thoroughly unsatisfactory. I replied that if
he would propose any definition of effective calculability which seemed even
partially satisfactory I would undertake to prove that it was included in
lambda-definability. His only idea at the time was that it might be possible,
in terms of effective calculability as an undefined notion, to state a set of
axioms which would embody the generally accepted properties of this notion,
and to do something on that basis. Evidently it occurred to him later that
Herbrand's definition of recursiveness, which has no regard to effective
calculability, could be modified in the direction of effective calculability,
and he made this proposal in his lectures. At that time he did specifically
raise the question of the connection between recursiveness in this new sense
and effective calculability, but said he did not think that the two ideas
could be satisfactorily identified ``except heuristically.''}
\end{quote}
It appears that G\"{o}del's rejection of $\lambda$-definability as a possible
\textquotedblleft definition\textquotedblright\ of effective calculability was
the main reason behind Church's announcement of his thesis in terms of general
recursive functions \cite{GBChurch35}. Church made his announcement at a meeting
of the American Mathematical Society in New York City on April 19, 1935. Below
is an excerpt from his abstract:
\begin{quote}
\textsf{Following a suggestion of Herbrand, but modifying it in an important
respect, G\"{o}del has proposed (in a set of lectures at Princeton, N. J.,
1934) a definition of the term \emph{recursive function}, in a very general
sense. In this paper a definition of \emph{recursive function of positive
integers} which is essentially G\"{o}del's is adopted. And it is maintained
that the notion of an effectively calculable function of positive integers
should be identified with that of a recursive function, since other plausible
definitions of effective calculability turn out to yield notions which are
either equivalent to or weaker than recursiveness.}
\end{quote}
Note that in the abstract Church relegated $\lambda$-definability to
\textquotedblleft other plausible definitions of effective
calculability\textquotedblright\ that were \textquotedblleft either equivalent
to or weaker than recursiveness,\textquotedblright\ which indicates that, at
the time, Church was not yet certain whether $\lambda$-definability was
equivalent to general recursiveness. Kleene filled in this gap in
\cite{GBKleene36a} by showing that these two notions were indeed equivalent.
Thus, in the full version of his paper \cite{GBChurch36}, Church was already
fully aware that the two notions of general recursiveness and
$\lambda$-definability coincide.

Kleene's theorem identifying the general recursive and $\lambda$-definable
functions, together with Kleene's famous Normal Form Theorem,\footnote{The
Normal Form Theorem appeared in \cite{GBKleene36b} and considerably simplified
the notion of general recursive functions.} began to convince
G\"{o}del of the validity of Church's thesis. However, it was not until the
work of Turing that he finally accepted Church's thesis.

Turing's famous paper \cite{GBTuring36} appeared in 1936 (a correction to it was
published in 1937). Turing introduced what we now call \emph{Turing machines},
and defined a function to be \emph{computable} if it can be computed on a
Turing machine. His work was entirely independent of the related research
being done in Princeton. According to \cite{GBKleene81}, page 61:
\begin{quote}
\textsf{Turing learned of the work at Princeton on $\lambda$-definability and
general recursiveness just as he was ready to send off his manuscript, to
which he then added an appendix outlining a proof of the equivalence of his
computability to $\lambda$-definability. In \cite{GBTuring37} he gave a proof of
the equivalence in detail.}
\end{quote}
Thus, Turing introduced his notion of computability in 1936--1937 and, using
some of the results of Kleene, showed that the three notions of Turing
computable, general recursive, and $\lambda$-definable functions coincide.

On page 72 of G\"{o}del's ``postscriptum'' to his 1934 lecture notes, which he
prepared in 1964 for \cite{GBDavis65}, G\"{o}del states:
\begin{quote}
\textsf{Turing's work gives an analysis of the concept of ``mechanical
procedure'' (alias ``algorithm'' or ``computation procedure'' or ``finite
combinatorial procedure''). This concept is shown to be equivalent with that
of a ``Turing machine''.}
\end{quote}
Thus, G\"{o}del made it clear that, in his view, Turing's work was of
fundamental importance in establishing the validity of Church's thesis. In
particular, it influenced G\"{o}del to accept it.

Our account of effective calculability would be incomplete if we did not
mention that around the same time and independently of Turing, but not of the
work in Princeton, Emil Leon Post (1897--1954) formulated yet another
equivalent version of computability \cite{GBPost36}. However, his work was less detailed
than Turing's.

Lastly we mention that \emph{partial} recursive functions were
introduced by Kleene in \cite{GBKleene38}. In \cite{GBKleene52} he
also generalized the notion of Turing computable functions to
partial functions and showed that a partial function is Turing
computable if, and only if, it is partial recursive. The importance
of his work is underlined in footnote 20 of \cite{GBDavis82} quoted
below:
\begin{quote}
\textsf{It is difficult for those who have learned about recursive
functions via a treatment that emphasized partial functions from the
outset to realize just how important Kleene's contribution was. Thus
Rogers' excellent and influential treatise \cite{GBRogers67}, p.\
12, contains an historical account which gives the impression that
the subject had been formulated in terms of partial functions from
the beginning.}
\end{quote}
To summarize, in the mid-thirties several definitions were proposed to
formalize the intuitive concept of effective calculability. These were
$\lambda$-definability (Church), general recursiveness (G\"{o}del), Turing
computability (Turing), and Post computability (Post). All these concepts were
seen to be equivalent to each other, thus producing evidence for general
acceptance of Church's thesis.

This project will only scratch the surface of the subject. Instead
of working with partial functions, we will restrict our attention to
total functions. We will learn about primitive recursive and general
recursive functions from the work of G\"{o}del and Kleene. We will
also learn about Turing machines, Turing computable sequences (of
natural numbers), and Turing computable real numbers (in the
interval $[0,1]$) from the work of Turing and Kleene. We will
examine Turing's and Kleene's definitions of computable functions
(of natural numbers), and show that every general recursive function
is computable on a Turing machine. The fact that every Turing
computable function is general recursive requires the relatively
advanced technique of G\"{o}del numbers and will not be addressed in
this project. Instead we refer the interested reader to
\cite{GBGodel31}, \cite{GBGoedel34} or \cite{GBKleene52} for the
definition of G\"{o}del numbers, and \cite{GBTuring37} or
\cite{GBKleene52} for the proof that every Turing computable
function is general recursive.

The four sources by G\"{o}del, Turing, and Kleene that will be used
in the project are \cite{GBGoedel34}, \cite{GBTuring36},
\cite{GBKleene43}, and \cite{GBKleene52}. Edited versions of the
first three with forewords have been reprinted in \cite{GBDavis65}.
\subsection*{Part one. Primitive recursive functions}
Note: For the reading in part one we will use excerpts of G\"{o}del's
1934 lecture notes \cite{GBGoedel34} reprinted in Davis
\cite{GBDavis65} and edited by G\"{o}del himself. We decided to
choose the reprinted version over the original since its definition
of rule (1) is more convenient for our purposes.
\vspace{2mm}
\noindent 1.(a) Read carefully the following excerpt of G\"{o}del's
1934 lecture notes \cite{GBGoedel34} reprinted in Davis
\cite{GBDavis65}.
\begin{quote}
{\sf The function $\phi(x_{1},\dots,x_{n})$ shall be
\emph{compound} with respect to $\psi(x_{1},\dots,x_{m})$ and
$\chi_{i}(x_{1},\dots,x_{n})$ ($i=1,\dots,m$) if, for all natural
numbers $x_{1},\dots,x_{n}$,
\[
(1) \ \ \phi(x_{1},\dots,x_{n})=\psi(\chi_{1}(x_{1},\dots,x_{n}),\dots
,\chi_{m}(x_{1},\dots,x_{n})).
\]
$\phi(x_{1},\dots,x_{n})$ shall be said to be \emph{recursive} with respect to
$\psi(x_{1},\dots,x_{n-1})$ and $\chi(x_{1},\dots,x_{n+1})$ if, for all
natural numbers $k,x_{2},\dots,x_{n}$,
\[
(2)\quad%
\begin{tabular}
[c]{l}%
$\phi(0,x_{2},\dots,x_{n})=\psi(x_{2},\dots,x_{n})$\\
$\phi(k+1,x_{2},\dots,x_{n})=\chi(k,\phi(k,x_{2},\dots,x_{n}),x_{2}%
,\dots,x_{n}). $%
\end{tabular}
\]
In both (1) and (2), we allow the omission of each of the variables in any (or
all) of its occurrences on the right side (e.g. $\phi(x,y)=\psi(\chi
_{1}(x),\chi_{2}(x,y))$ is permitted under (1))\footnote{\textsf{[This
sentence could have been] omitted, since the removal of any of the occurrences
of variables on the right may be effected by means of the function $U_{j}^{n}%
$.} This footnote occurs in the original source.}. We define the class of
\emph{recursive} functions to be the totality of functions which can be generated
by substitution, according to the scheme (1), and recursion, according to the
scheme (2), from the successor function $x+1$, constant functions
$f(x_{1},\dots,x_{n})=c$, and identity functions $U_{j}%
^{n}(x_{1},\dots,x_{n})=x_{j}$ ($1\leq j\leq n$). In other words, a function
$\phi$ shall be recursive if there exists a finite sequence of functions
$\phi_{1},\dots,\phi_{n}$ which terminates with $\phi$ such that each function
of the sequence is either the successor function $x+1 $ or a constant function
$f(x_{1},\dots,x_{n})=c$, or an identity function $U_{j}^{n}(x_{1},\dots
,x_{n})=x_{j}$, or is compound with respect to preceding functions, or is
recursive with respect to preceding functions.}
\end{quote}
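To get a concrete feel for schemes (1) and (2), here is a small sketch in Python (ours, not G\"{o}del's; all function names are our own) of generating functions from the successor, constant, and identity functions by substitution and recursion, with addition built as an example:

```python
def successor(x):
    return x + 1

def constant(c):
    return lambda *xs: c

def identity(n, j):
    # U_j^n(x_1, ..., x_n) = x_j  (1-indexed)
    return lambda *xs: xs[j - 1]

def compose(psi, *chis):
    # Scheme (1): phi(xs) = psi(chi_1(xs), ..., chi_m(xs))
    return lambda *xs: psi(*(chi(*xs) for chi in chis))

def prim_rec(psi, chi):
    # Scheme (2): phi(0, xs) = psi(xs); phi(k+1, xs) = chi(k, phi(k, xs), xs)
    def phi(k, *xs):
        acc = psi(*xs)
        for i in range(k):
            acc = chi(i, acc, *xs)
        return acc
    return phi

# Addition: add(0, y) = y; add(k+1, y) = successor(add(k, y)).
add = prim_rec(identity(1, 1),
               compose(successor, identity(3, 2)))
```

Every function built by finitely many applications of `compose` and `prim_rec` to these initial functions corresponds to a finite sequence $\phi_{1},\dots,\phi_{n}$ in G\"{o}del's sense.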
\noindent 1.(b) Rewrite rule (1) for $n=1$ and $m=2$. Also
rewrite rule (2) for $n=1$. Explain in your own words what the
rules express.
\vspace{2mm}
\noindent 1.(c) Give a definition of primitive recursive
functions in your own words.
\vspace{2mm}
\noindent 1.(d) On page 44 of \cite{GBDavis65} G\"{o}del states:
\begin{quote}
\textsf{The functions $x+y$, $xy$, $x^{y}$ and $x!$ are clearly [primitive]
recursive.}
\end{quote}
\noindent Give an argument for why this is so. Hint: Review your
answer to 1.(c) and make sure that you have a good grasp of G\"{o}del's
definition of primitive recursive functions.
\vspace{2mm}
\noindent 1.(e) Is every function of natural numbers
primitive recursive? Hint: Use a cardinality argument.
\vspace{2mm}
\noindent 1.(f) Read carefully the excerpts from Church's 1935
letter to Kleene and Church's 1935 abstract appearing above. Also
read carefully the following excerpt of G\"{o}del's 1934 lecture
notes \cite{GBGoedel34} reprinted in Davis \cite{GBDavis65}.
\begin{quote}
\textsf{Recursive functions have the important property that, for each given
set of values of the arguments, the value of the function can be computed by a
finite procedure\footnote{\textsf{The converse seems to be true, if, besides
recursions according to the scheme (2), recursions of other forms (e.g., with
respect to two variables simultaneously) are admitted. This cannot be proved,
since the notion of finite computation is not defined, but it serves as a
heuristic principle.} This footnote occurs in the original source.}.}
\end{quote}
\noindent Do you see any connection between G\"{o}del's writing and Church's
thesis? Explain your answer.
\subsection*{Part two. General recursive functions}
\noindent 2.(a) Read carefully the following excerpt of section 1 of
Kleene \cite{GBKleene43}.
\begin{quote}
{\sf We consider the following schemata as operations for the
definition of a function $\phi$ from given functions appearing in
the right members of the equations ($c$ is any constant natural
number):
\begin{tabular}[c]{ll}%
(I) & $\phi(x)=x^{\prime}$,\\
(II) & $\phi(x_{1},\dots,x_{n})=c$,\\
(III) & $\phi(x_{1},\dots,x_{n})=x_{i}$,\\
(IV) & $\phi(x_{1},\dots,x_{n})=\theta(\chi_{1}(x_{1},\dots,x_{n}),\dots,\chi
_{m}(x_{1},\dots,x_{n}))$,\\
(Va) & $%
\begin{cases}%
\begin{array}[c]{l}%
\phi(0)=c\\
\phi(y^{\prime})=\chi(y,\phi(y)),
\end{array}
\end{cases}
$\\
(Vb) & $%
\begin{cases}%
\begin{array}[c]{l}%
\phi(0,x_{1},\dots,x_{n})=\psi(x_{1},\dots,x_{n})\\
\phi(y^{\prime},x_{1},\dots,x_{n})=\chi(y,\phi(y,x_{1},\dots,x_{n}%
),x_{1},\dots,x_{n}),
\end{array}
\end{cases}
$\\
&
\end{tabular}
Schema (I) introduces the successor function, Schema (II) the
constant functions, and Schema (III) the identity functions.
Sche\-ma (IV) is the schema of definition by substitution, and
Schema (V) the schema of primitive recursion. Together we may call
them (and more generally, schemata reducible to a series of
applications of them) the \emph{primitive recursive} schemata.
A function $\phi$ that can be defined from given functions $\psi
_{1},\dots,\psi_{k}$ by a series of applications of these schemata
we call \emph{primitive recursive} in the given functions; and in
particular, a function $\phi$ definable ab initio\footnote{From the beginning.} by these means,
\emph{primitive recursive}.}
\end{quote}
\noindent Is your answer to 1.(c) equivalent to Kleene's definition of
primitive recursive functions? Explain why.
\vspace{2mm}
\noindent 2.(b) Recall the definition of \emph{total} and
\emph{partial} functions. Discuss briefly the difference between the
two. Read carefully the following definition of the $\mu$-operator
taken from the beginning of section 3 of Kleene \cite{GBKleene43}.
\begin{quote}
{\sf Consider the operator: $\mu y$ (the least $y$ such that). If
this operator is applied to a predicate $R(x_{1},\dots,x_{n},y)$
of the $n+1$ variables $x_{1},\dots,x_{n},y$, and if this
predicate satisfies the condition
\[
(2) \qquad(\forall x_{1})\cdots(\forall x_{n})(\exists y)R(x_{1},\dots
,x_{n},y),
\]
we obtain a function $\mu yR(x_{1},\dots,x_{n},y)$ of the
remaining $n$ free variables $x_{1},\dots,x_{n}$.
Thence we have a new schema,
\[
(\mbox{VI}_{1}) \qquad\phi(x_{1},\dots,x_{n})=\mu y[\rho(x_{1},\dots
,x_{n},y)=0],
\]
for the definition of a function $\phi$ from a given function $\rho$ which
satisfies the condition
\[
(3) \qquad(\forall x_{1})\cdots(\forall x_{n})(\exists y)[\rho(x_{1}%
,\dots,x_{n},y)=0].
\]
}
\end{quote}
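The $\mu$-operator amounts to an unbounded search for the least $y$. The following Python sketch (ours, not Kleene's) makes this explicit and shows why condition (3) matters: without it the search may never halt, so the defined function is only partial.

```python
from itertools import count

def mu(rho):
    """Schema (VI_1): return phi with phi(xs) = least y such that rho(xs, y) == 0.

    If condition (3) fails for some arguments, the loop below never halts:
    the defined function is then only partial.
    """
    def phi(*xs):
        for y in count():  # y = 0, 1, 2, ... without bound
            if rho(*xs, y) == 0:
                return y
    return phi

# Example: least y with y*y >= x, i.e. the ceiling of the square root of x.
isqrt_ceil = mu(lambda x, y: 0 if y * y >= x else 1)
```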
\noindent Give an example of a function obtainable by the application of the
$\mu$-operator that is not total. What does condition (3) guarantee about the
function obtained by using schema (VI$_{1}$)?
\vspace{2mm}
\noindent 2.(c) Read carefully excerpts of theorem III and the
corollary to it taken from page 51 of Kleene \cite{GBKleene43}.
\begin{quote}
{\sf THEOREM III. The class of general recursive functions is
closed under applications of Schemata (I)--(VI) with (3) holding
for applications of (VI).
\vspace{2mm}
COROLLARY. Every function obtainable by applications of Schemata
(I)--(VI) with (3) holding for applications of (VI) is general
recursive.}
\end{quote}
\noindent Based on the corollary, give a definition of general recursive
functions. Explain whether or not every primitive recursive function is
general recursive.
\vspace{2mm}
\noindent 2.(d) Is every function of natural numbers
general recursive? Explain your answer.
\vspace{2mm}
\noindent 2.(e) (Extra Credit) Give a reasonable argument
for why the class of general recursive functions is
\emph{strictly} larger than the class of primitive recursive
functions.
\subsection*{Part three. Turing machines}
\noindent 3.(a) Read carefully the following excerpt of section 1 of
Turing \cite{GBTuring36}.
\begin{quote}
\textsf{We may compare a man in the process of computing a real number to a
machine which is only capable of a finite number of conditions $q_{1}%
,q_{2},\dots,q_{R}$ which will be called ``$m$-configura\-tions". The machine
is supplied with a ``tape" (the analogue of paper) running through it, and
divided into sections (called ``squares") each capable of bearing a ``symbol".
At any moment there is just one square, say the $r$-th, bearing the symbol
$\mathfrak{S}(r)$ which is ``in the machine". We may call this square the
``scanned square". The symbol on the scanned square may be called the
``scanned symbol". The ``scanned symbol" is the only one of which the machine
is, so to speak, ``directly aware". However, by altering its $m$-configuration
the machine can effectively remember some of the symbols which it has ``seen"
(scanned) previously. The possible behaviour of the machine at any moment is
determined by the $m$-configuration $q_{n}$ and the scanned symbol
$\mathfrak{S}(r)$. This pair $q_{n},\mathfrak{S}(r)$ will be called the
``configuration": thus the configuration determines the possible bahaviour of
the machine. In some of the configurations in which the scanned square is
blank (i.e.\ bears no symbol) the machine writes down a new symbol on the
scanned square: in other configurations it erases the scanned symbol. The
machine may also change the square which is being scanned, but only by
shifting it one place to right or left. In addition to any of these operations
the $m$-configuration may be changed. Some of the symbols written down will
form the sequence of figures which is the decimal of the real number which is
being computed. The others are just rough notes to ``assist the memory". It
will only be these rough notes which will be liable to erasure.}
\end{quote}
\noindent Also read carefully the following excerpt of section 2 of
Turing \cite{GBTuring36}.
\begin{quote}
{\sf
\emph{Automatic machines.}
If at each stage the motion of a machine (in the sense of \S 1) is
\emph{completely} determined by the configuration, we shall call
the machine an ``automatic machine" (or a-machine).
For some purposes we might use machines (choice machines or
c-machines) whose motion is only partially determined by the
configuration (hence the use of the word ``possible" in \S 1).
When such a machine reaches one of these ambiguous configurations,
it cannot go on until some arbitrary choice has been made by an
external operator. This would be the case if we were using
machines to deal with axiomatic systems. In this paper I deal only
with automatic machines, and will therefore often omit the prefix
a-.
\emph{Computing machines.}
If an a-machine prints two kinds of symbols, of which the first
kind (called figures) consists entirely of $0$ and $1$ (the others
being called symbols of the second kind), then the machine will be
called a computing machine. If the machine is supplied with a
blank tape and set in motion, starting from the correct initial
$m$-configuration, the subsequence of the symbols printed by it
which are of the first kind will be called the \emph{sequence
computed by the machine}. The real number whose expression as a
binary decimal is obtained by prefacing this sequence by a decimal
point is called the \emph{number computed by the machine}.
At any stage of the motion of the machine, the number of the
scanned square, the complete sequence of all symbols on the tape,
and the $m$-configuration will be said to describe the
\emph{complete configuration} at that stage. The changes of the
machine and tape between successive complete configurations will
be called the \emph{moves} of the machine.
\emph{Circular and circle-free machines.}
If a computing machine never writes down more than a finite number
of symbols of the first kind, it will be called \emph{circular}.
Otherwise it is said to be \emph{circle-free}.
A machine will be circular if it reaches a configuration from
which there is no possible move, or if it goes on moving, and
possibly printing symbols of the second kind, but cannot print any
more symbols of the first kind...
\emph{Computable sequences and numbers.}
A sequence is said to be computable if it can be computed by a
circle-free machine. A number is computable if it differs by an
integer from the number computed by a circle-free machine.}
\end{quote}
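A computed sequence of figures determines a real number in $[0,1]$ by reading the figures as a binary expansion after the point; a small illustration (ours, not Turing's), evaluating a finite prefix of the sequence:

```python
def number_computed(figures):
    """Value of the binary expansion 0.f1 f2 f3 ... for a finite prefix."""
    return sum(int(bit) / 2 ** (i + 1) for i, bit in enumerate(figures))

# The prefix "0101..." approximates 1/3, whose binary expansion is 0.010101...
```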
\noindent Note: The machines that Turing designs will be called Turing machines.
\vspace{2mm}
%\item[3.2]
\noindent 3.(b) What are the primary components of a Turing machine?
Describe $m$-configurations, configurations, and complete configurations of a
Turing machine, and their differences.
\vspace{2mm}
\noindent 3.(c) Formulate in your own words Turing's
definition of a computing machine, a computable sequence (of
natural numbers), and a computable real number (in the interval
$[0,1]$).
\vspace{2mm}
\noindent 3.(d) Read carefully the following excerpt of section 3 of
Turing \cite{GBTuring36} where it is shown that the sequence
$010101\dots$ is computable.
\begin{quote}
{\sf A machine can be constructed to compute the sequence
$010101\dots$. The machine is to have the four configurations
``$\mathfrak{b}$", ``$\mathfrak{c} $", ``$\mathfrak{f}$",
``$\mathfrak{e}$" and is capable of printing ``$0$" and ``$1$".
The behaviour of the machine is described in the following table
in which ``$R$" means ``the machine moves so that it scans the
square immediately on the right of the one it was scanning
previously". Similarly for ``$L$". ``$E$" means ``the scanned
symbol is erased" and ``$P$" stands for ``prints". This table (and
all succeeding tables of the same kind) is to be understood to
mean that for a configuration described in the first two columns
the operations in the third column are carried out successively,
and the machine then goes over into the $m$-configuration
described in the last column. When the second column is left
blank, it is understood that the behaviour of the third and fourth
columns applies for any symbol and for no symbol. The machine
starts in the $m$-configuration $\mathfrak{b}$ with a blank tape.
\[%
\begin{tabular}
[c]{cc}%
\emph{Configuration} & \emph{Behaviour}\\
\ & \\%
\begin{tabular}
[c]{cc}%
$m$\emph{-config.} & \emph{symbol}\\
\ & \\
$\mathfrak{b}$ & None\\
$\mathfrak{c}$ & None\\
$\mathfrak{e}$ & None\\
$\mathfrak{f}$ & None
\end{tabular}
&
\begin{tabular}
[c]{cc}%
\emph{operations} & \emph{final }$m$\emph{-config.}\\
\ & \\
$P0,R$ & $\mathfrak{c}$\\
$R$ & $\mathfrak{e}$\\
$P1,R$ & $\mathfrak{f}$\\
$R$ & $\mathfrak{b}$%
\end{tabular}
\end{tabular}
\]
If (contrary to the description in \S 1) we allow the letters
$L,R$ to appear more than once in the operations column we can
simplify the table considerably.
\[%
\begin{tabular}
[c]{cccc}%
$m$-\emph{config.} & \emph{symbol} & \emph{operations} & \emph{final
$m$-config.}\\
\ & & & \\
$\mathfrak{b}$ & $\left\{
\begin{array}
[c]{l}%
\mbox{None}\\
0\\
1
\end{array}
\right. $ &
\begin{tabular}
[c]{c}%
$P0$\\
$R,R,P1$\\
$R,R,P0$%
\end{tabular}
&
\begin{tabular}
[c]{c}%
$\mathfrak{b}$\\
$\mathfrak{b}$\\
$\mathfrak{b}$%
\end{tabular}
\end{tabular}
\]
}
\end{quote}
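Turing's first table can be simulated directly. The sketch below (in Python, with representation choices of our own) steps the four-configuration machine on a blank tape and collects the figures it prints:

```python
def run_alternating_machine(steps):
    """Simulate Turing's four-configuration machine printing 0 1 0 1 ...

    Table: b: (None) -> P0, R, goto c;  c: (None) -> R, goto e;
           e: (None) -> P1, R, goto f;  f: (None) -> R, goto b.
    """
    table = {
        "b": ("0", "c"),  # symbol to print (or None), next m-configuration
        "c": (None, "e"),
        "e": ("1", "f"),
        "f": (None, "b"),
    }
    tape = {}  # square index -> symbol; absent squares are blank
    pos, config = 0, "b"
    for _ in range(steps):
        symbol, config = table[config]
        if symbol is not None:
            tape[pos] = symbol
        pos += 1  # every row of this table moves one square to the right
    # the sequence computed by the machine is the figures in tape order
    return "".join(tape[i] for i in sorted(tape))

# run_alternating_machine(8) -> "0101"
```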
\noindent What can you conclude about the real number $\frac{1}{3}$? Design a
Turing machine that computes the real number $\frac{1}{7}$. Give an argument
for why the machine you just designed does what it is supposed to do.
\vspace{2mm}
\noindent 3.(e) (Extra Credit) Read carefully the following excerpt
of section 3 of Turing \cite{GBTuring36} where it is shown that the
sequence $001011011101111011111\dots$ is computable.
\begin{quote}
{\sf As a slightly more difficult example we can construct a
machine to compute the sequence $001011011101111011111\dots$. The
machine is to be capable of five $m$-configurations, viz.
\textquotedblleft$\mathfrak{o}$",
\textquotedblleft$\mathfrak{q}$",
\textquotedblleft$\mathfrak{p}$",
\textquotedblleft$\mathfrak{f}$", \textquotedblleft$\mathfrak{b}$"
and of printing \textquotedblleft$\partial$",
\textquotedblleft$x$", \textquotedblleft$0$",
\textquotedblleft$1$". The first three symbols on the tape will be
\textquotedblleft$\partial\partial0$"; the other figures follow on
alternate squares. On the intermediate squares we never print
anything but \textquotedblleft$x$". These letters serve to
\textquotedblleft keep the place" for us and are erased when we
have finished with them. We also arrange that in the sequence of
figures on alternate squares there shall be no blanks.
\begin{center}
\begin{tabular}
[c]{cccc}%
\multicolumn{2}{c}{\emph{Configuration}} & \multicolumn{2}{c}{\emph{Behaviour}%
}\\
$m$\emph{-config.} & \emph{symbol} & \emph{operations} & $%
\genfrac{}{}{0pt}{0}{\text{\emph{final}}}{m\text{\emph{-config.}}}%
$\\
$\mathfrak{b}$ & \multicolumn{1}{l}{} &
\begin{tabular}
[c]{l}%
$P\partial,R,P\partial,R,P0,R,R,P0,L,L$%
\end{tabular}
&
\begin{tabular}
[c]{l}%
$\mathfrak{o}$%
\end{tabular}
\\
$\mathfrak{o}$ & \multicolumn{1}{l}{$\left\{
\begin{tabular}
[c]{l}%
$1$\\
$0$%
\end{tabular}
\ \ \ \ \right. $} &
\begin{tabular}
[c]{l}%
$R,Px,L,L,L$\\
\quad
\end{tabular}
& $%
\begin{tabular}
[c]{l}%
$\mathfrak{o}$\\
$\mathfrak{q}$%
\end{tabular}
$\\
$\mathfrak{q}$ & \multicolumn{1}{l}{$\left\{
\begin{tabular}
[c]{l}%
Any ($0$ or $1$)\\
None
\end{tabular}
\ \ \ \ \right. $} &
\begin{tabular}
[c]{l}%
$R,R$\\
$P1,L$%
\end{tabular}
& $%
\begin{tabular}
[c]{l}%
$\mathfrak{q}$\\
$\mathfrak{p}$%
\end{tabular}
$\\
$\mathfrak{p}$ & \multicolumn{1}{l}{$\left\{
\begin{array}
[c]{l}%
x\\
\partial\\
\text{None}%
\end{array}
\right. $} &
\begin{tabular}
[c]{l}%
$E,R$\\
$R$\\
$L,L$%
\end{tabular}
& $%
\begin{array}
[c]{c}%
\mathfrak{q}\\
\mathfrak{f}\\
\mathfrak{p}%
\end{array}
$\\
$\mathfrak{f}$ & \multicolumn{1}{l}{$\left\{
\begin{tabular}
[c]{l}%
Any\\
None
\end{tabular}
\ \ \ \ \right. $} &
\begin{tabular}
[c]{l}%
$R,R$\\
$P0,L,L$%
\end{tabular}
& $%
\begin{array}
[c]{c}%
\mathfrak{f}\\
\mathfrak{o}%
\end{array}
$%
\end{tabular}
\end{center}
To illustrate the working of this machine a table is given below
of the first few complete configurations. These complete
configurations are described by writing down the sequence of
symbols which are on the tape, with the $m$-configuration written
below the scanned symbol. The successive complete configurations
are separated by colons.
\[%
\begin{tabular}
[c]{l}%
\ \ \ $:\partial\partial0$ $\ 0:\partial\partial0$ $\ 0:\partial\partial0$
$\ 0:\partial\partial0$ $\ 0$ \ \ \ \ \ $:\partial\partial0$ $\ 0$ \ $\ 1$\\
$\mathfrak{b}$ \ \ \ \ $\ \mathfrak{o}$ \ \ \ \ \ \ \ $\ \ \mathfrak{q}$
\ \ \ \ \ \ \ \ \ \ \ \ $\mathfrak{q}$
\ \ \ \ \ \ \ \ \ \ \ \ \ \ $\mathfrak{q}$ \ \ \ \ \ \ \ \ \ \ \ $\mathfrak{p}%
$\ \ \\
$\partial\partial0$ \ \ $0$ \ \ $1:\partial\partial0$ \ $0$ \ $1:\partial
\partial0$ \ $0$ \ $1:\partial\partial0$ \ $0$ \ $1:\partial\partial0$ \ $0$
\ $1:$\\
\ \ \ \ \ \ $\mathfrak{p}$ \ \ \ \ \ \ \ \ \ \ $\mathfrak{p}$
\ \ \ \ \ \ \ \ \ \ \ \ \ \ $\mathfrak{f}$
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ $\mathfrak{f}$\\
$\partial\partial0$ \ $0$ \ $1:\partial\partial0$ \ $0$ \ $1$
\ \ \ \ \ $:\partial\partial0$ \ $0$ \ $1$ \ $0:$\\
\ \ \ \ \ \ \ \ \ \ \ $\mathfrak{f}$
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ $\mathfrak{f}$
\ \ \ \ \ \ \ \ \ \ \ $\ \mathfrak{o}$\\
$\partial\partial0$ \ $0$ \ $1x0:\ldots.$\\
\ \ \ \ \ \ \ $\mathfrak{o}$%
\end{tabular}
\]
}
\end{quote}
\noindent In this example Turing uses the symbols \textquotedblleft$\partial
$\textquotedblright\ and \textquotedblleft$x$\textquotedblright, which are
symbols of the second kind. Explain why these symbols are needed, and how they
are used in this particular Turing machine. Give an argument for why the machine
described above does what it is supposed to do.
\subsection*{Part four. Turing computable functions}
\noindent 4.(a) Read carefully Turing's definition of computable
functions of natural numbers taken from page 254 of
\cite{GBTuring36}.
\begin{quote}
\textsf{If $\gamma$ is a computable sequence in which $0$ appears
infinitely\footnote{\textsf{If $\mathcal{M}$ computes $\gamma$, then the
problem whether $\mathcal{M}$ prints $0$ infinitely often is of the same
character as the problem whether $\mathcal{M}$ is circle-free.} This footnote
occurs in the original source.} often, and $n$ is an integer, then let us
define $\xi(\gamma,n)$ to be the number of figures $1$ between the $n$-th
and the $(n+1)$-th figure $0$ in $\gamma$. Then $\phi(n)$ is computable if,
for all $n$ and some $\gamma$, $\phi(n)=\xi(\gamma,n)$.}
\end{quote}
\noindent Note: We will call these functions Turing computable.
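To make Turing's definition concrete, the function $\xi(\gamma,n)$ can be sketched in a few lines of Python. This is our reading aid, not from the source; it assumes the convention that the zeros of $\gamma$ are numbered $1,2,\dots$, and it works on a finite prefix of $\gamma$ long enough to contain the $(n+1)$-th zero.

```python
def xi(gamma, n):
    """Count the 1s between the n-th and (n+1)-th 0 in gamma.

    gamma: a (prefix of a) sequence of 0s and 1s in which 0 appears
    infinitely often; zeros are numbered 1, 2, ... (our convention).
    """
    zeros_seen = 0
    count = 0
    counting = False
    for figure in gamma:
        if figure == 0:
            zeros_seen += 1
            if zeros_seen == n:
                counting = True        # start counting after the n-th 0
            elif zeros_seen == n + 1:
                return count           # stop at the (n+1)-th 0
        elif counting:
            count += 1
    raise ValueError("prefix too short: fewer than n+1 zeros")

# The sequence 0 1 0 1 1 0 1 1 1 0 ... encodes phi(n) = n:
gamma = [0, 1, 0, 1, 1, 0, 1, 1, 1, 0]
print([xi(gamma, n) for n in range(1, 4)])  # [1, 2, 3]
```

Thus a computable sequence $\gamma$ serves as a table of values for $\phi$, with the blocks of $1$s between consecutive $0$s recording $\phi(1),\phi(2),\dots$.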
\vspace{2mm}
\noindent 4.(b) In your own words, explain what it means for
a function of natural numbers of one variable to be Turing
computable. Extra Credit: Generalize the concept of
Turing computable functions to functions of two variables.
Hint: Can you code every function of two variables by a
function of one variable? Then generalize the concept of Turing
computable functions to functions of multiple variables.
\vspace{2mm}
\noindent 4.(c) In your own words explain Kleene's definition of a
Turing machine by reading carefully the following excerpt from
section 67 of Kleene \cite{GBKleene52}.
\begin{quote}
{\sf The machine is supplied with a linear \emph{tape},
(potentially) infinite in both directions (say to the \emph{left}
and \emph{right}). The tape is divided into \emph{squares}. Each
square is capable of being \emph{blank}, or of having
\emph{printed} upon it any one of a finite list
$s_{1},\dots,s_{j}$ ($j\geq1$) of \emph{symbols}, fixed for a
particular machine. If we write ``$s_{0}$" to stand for ``blank",
a given square can thus have any one of $j+1$ \emph{conditions}
$s_{0},\dots,s_{j}$. The tape will be so employed that in any
``situation" only a finite number ($\geq0$) of squares will be
printed.
The tape will pass through the machine so that in a given
``situation" the machine \emph{scans} just one square (the
\emph{scanned square}). The symbol on this square, or $s_{0}$ if
it is blank, we call the \emph{scanned symbol} (even though
$s_{0}$ is not properly a symbol).
The machine is capable of being in any one of a finite list
$q_{0},\dots,q_{k}$ ($k\geq1$) of (\emph{machine}) \emph{states}
(called by Turing ``machine configurations" or
``$m$-configurations"). We call $q_{0}$ the \emph{passive} (or
\emph{terminal}) \emph{state}; and $q_{1},\dots,q_{k}$ we call
\emph{active states}. The list $q_{0},\dots,q_{k}$ is fixed for a
particular machine.
A (\emph{tape vs. machine}) \emph{situation} (called by Turing
``complete configuration") consists in a particular printing on
the tape (i.e.\ which squares are printed, and each with which of
the $j$ symbols), a particular position of the tape in the machine
(i.e.\ which square is scanned), and a particular state (i.e.\ which
of the $k+1$ states the machine is in). If the state is active, we
call the situation \emph{active}; otherwise, \emph{passive}.
Given an active situation, the machine performs an (\emph{atomic})
\emph{act} (called a ``move" by Turing). The act performed is
determined by the scanned symbol $s_{a}$ and the machine state
$q_{c}$ in the given situation. This pair $(s_{a},q_{c})$ we call
the \emph{configuration}. (It is \emph{active} in the present case
that $q_{c}$ is active; otherwise \emph{passive}.) The act alters
the three parts of the situation to produce a resulting situation,
thus. First, the scanned symbol $s_{a}$ is changed to $s_{b}$.
(But $a=b$ is permitted, in which case the ``change" is
identical.) Second, the tape is shifted in the machine (or the
machine shifts along the tape) so that the square scanned in the
resulting situation is either one square to the left of, or the
same square as, or one square to the right of, the square scanned
in the given situation. Third, the machine state $q_{c}$ is
changed to $q_{d}$. (But $c=d$ is permitted.)
No act is performed, if the given situation is passive.
The machine is used in the following way. We choose some active
situation in which to start the machine. We call this the
\emph{initial situation} or \emph{input}. Our notation will be
chosen so that the state in this situation (the \emph{initial
state}) is $q_{1}$. The machine then performs an atomic act. If
the situation resulting from this act is active, the machine acts
again. The machine continues in this manner, clicking off
successive acts, as long and only as long as active situations
result. If eventually a passive situation is reached, the machine
is said then to \emph{stop}. The situation in which it stops we
call the \emph{terminal situation} or \emph{output}.
The change from the initial situation to the terminal situation
(when there is one) may be called the \emph{operation} performed
by the machine.
To describe an atomic act, we use an expression of one of the
three following forms:
\[
s_{b}Lq_{d}, \hspace{1cm} s_{b}Cq_{d}, \hspace{1cm} s_{b}Rq_{d}.
\]
The ``$L$", ``$C$", ``$R$", indicate that the resulting scanned
square is to the left of, the same as (``center"), or to the right
of, respectively, the given scanned square.
The first part of the act (i.e.\ the change of $s_{a}$ to $s_{b}$)
falls into four cases: when $a=0$ and $b>0$, it is ``prints
$s_{b}$"; when $a>0$ and $b=0$, ``erases $s_{a}$"; when $a,b>0$
and $a\neq b$, ``erases $s_{a}$ and prints $s_{b} $" or briefly
``overprints $s_{b}$"; when $a=b$, ``no change". We often describe
this part of the act as ``prints $s_{b}$" without regard to the
case.
To define a particular machine, we must list the symbols $s_{1}%
,\dots,s_{j}$ and the active states $q_{1},\dots,q_{k}$, and for
each active configuration $(s_{a},q_{c})$ we must specify the
atomic act to be performed. These specifications may be given by
displaying the descriptions of the required acts in the form of a
(\emph{machine}) \emph{table} with $k$ rows for the active states
and $j+1$ columns for the square conditions.
EXAMPLE I. The following table defines a machine
(\textquotedblleft Machine $\mathfrak{A}$") having only one symbol
$s_{1}$ and only one active state $q_{1}$.
\[%
\begin{tabular}
[c]{ccc}%
Name of & Machine & Scanned \ symbol\\
machine & state & $s_{0}$ \ \ \ \ \ \ \ \ $s_{1}$\\
\ & & \\
$\mathfrak{A}$ & $q_{1}$ & $s_{1}Cq_{0}$ \ \ $s_{1}Rq_{1}$%
\end{tabular}
\]
Suppose the symbol $s_{1}$ is actually a tally mark \textquotedblleft$|$". Let
us see what the machine does, if a tape of the following appearance is placed
initially in the machine so that the square which we identify by writing the
machine state $q_{1}$ over it is the scanned square. The conditions of all
squares not shown will be immaterial, and will not be changed during the
action.
\[%
\begin{tabular}
[c]{cccccc}
& $q_{1}$ & & & & \\\hline
& \multicolumn{1}{|c}{$|$} & \multicolumn{1}{|c}{$\ | \ $} &
\multicolumn{1}{|c}{$\ | \ $} & \multicolumn{1}{|c}{\ \ \ } &
\multicolumn{1}{|c}{}\\\hline
\end{tabular}
\]
The machine is in the state $q_{1}$, and is scanning a square on
which the symbol $s_{1}$ is printed. In this configuration, the
atomic act ordered by the table is $s_{1}Rq_{1}$; i.e.\ no change
is made in the condition of the scanned square, the machine shifts
right, and again assumes state $q_{1}$. The resulting situation
appears as follows.
\[%
\begin{tabular}
[c]{cccccc}
& & $q_{1}$ & & & \\\hline
& \multicolumn{1}{|c}{$\ | \ $} & \multicolumn{1}{|c}{$|$} &
\multicolumn{1}{|c}{$\ | \ $} & \multicolumn{1}{|c}{\ \ \ } &
\multicolumn{1}{|c}{}\\\hline
\end{tabular}
\]
The next three acts lead successively to the following situations,
in the last of which the machine stops.
\[%
\begin{tabular}
[c]{cccccc}
& & & $q_{1}$ & & \\\hline
& \multicolumn{1}{|c}{$\ | \ $} & \multicolumn{1}{|c}{$\ | \ $} &
\multicolumn{1}{|c}{$|$} & \multicolumn{1}{|c}{\ \ \ } & \multicolumn{1}{|c}{}%
\\\hline
\end{tabular}
\]%
\[%
\begin{tabular}
[c]{cccccc}
& & & & $q_{1}$ & \\\hline
& \multicolumn{1}{|c}{$\ | \ $} & \multicolumn{1}{|c}{$\ | \ $} &
\multicolumn{1}{|c}{$\ | \ $} & \multicolumn{1}{|c}{} & \multicolumn{1}{|c}{}%
\\\hline
\end{tabular}
\]%
\[%
\begin{tabular}
[c]{cccccc}
& & & & $q_{0}$ & \\\hline
& \multicolumn{1}{|c}{$\ | \ $} & \multicolumn{1}{|c}{$\ | \ $} &
\multicolumn{1}{|c}{$\ | \ $} & \multicolumn{1}{|c}{$|$} &
\multicolumn{1}{|c}{}\\\hline
\end{tabular}
\]
Machine $\mathfrak{A}$ performs the following operation: It seeks
the first blank square at or to the right of the scanned square,
prints a $|$ there, and stops scanning that square.}
\end{quote}
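Kleene's Machine $\mathfrak{A}$ is small enough to simulate directly. The Python sketch below is our illustration, not from the source; it assumes the tape is stored as a dictionary from square positions to conditions, with blank $=0$ and tally $=1$.

```python
# Machine A: one symbol s1 (the tally "|"), one active state q1,
# passive state q0.  Its table orders:
#   (s0, q1) -> s1 C q0   and   (s1, q1) -> s1 R q1.
def run_machine_A(tape, pos):
    """tape: dict square -> condition (0 = blank, 1 = tally);
    pos: the scanned square.  Returns the terminal situation."""
    state = "q1"
    while state == "q1":              # act only while the situation is active
        if tape.get(pos, 0) == 0:     # (s0, q1) -> s1 C q0
            tape[pos] = 1             # print the tally, stay, pass to q0
            state = "q0"
        else:                         # (s1, q1) -> s1 R q1
            pos += 1                  # leave the tally, shift right
    return tape, pos

# Replay Kleene's example: start on the first of three tallies.
tape, pos = run_machine_A({0: 1, 1: 1, 2: 1}, 0)
print(sorted(k for k, v in tape.items() if v == 1), pos)  # [0, 1, 2, 3] 3
```

As in Kleene's worked example, the machine runs right over the three tallies, prints a fourth on the first blank square, and stops scanning it.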
\vspace{2mm}
\noindent 4.(d) Explain the similarities and differences of
Turing's and Kleene's definitions. Which definition of Turing
machines do you prefer? Explain why.
\vspace{2mm}
\noindent 4.(e) The following is Kleene's definition of Turing
computable functions of multiple variables (see \cite{GBKleene52},
p.\ 359).
\begin{quote}
{\sf Now we define how a machine shall `compute' a partial
number-theoretic function $\phi$ of $n$ variables (cf.\ \S 63). The
definition for an ordinary (i.e.\ completely defined)
number-theoretic function is obtained by omitting the reference to
the possibility that $\phi(x_{1},\dots,x_{n})$ may be undefined.
We begin by agreeing to represent the natural numbers
$0,1,2,\dots$ by the sequence of tallies $|,||,|||,\dots$,
respectively, the tally ``$|$" being the symbol $s_{1}$. There are
$y+1$ tallies in the representation of the natural number $y$.
Then to represent an $m$-tuple $y_{1},\dots,y_{m}$ ($m\geq1$) of
natural numbers on the tape, we print the corresponding numbers of
tallies, leaving a single blank between each two groups of tallies
and before the first and after the last.
EXAMPLE 2. The triple $3,0,2$ is represented thus:
\[%
\begin{tabular}
[c]{c|c|c|c|c|c|c|c|c|c|c|c|c|c}\hline
& \ \ \ & $\ | \ $ & $\ | \ $ & $\ | \ $ & $\ | \ $ & \ \ \ & $\ | \ $ &
\ \ \ & $\ | \ $ & $\ | \ $ & $\ | \ $ & \ \ \ & \\\hline
\end{tabular}
\]
We say that (the representation of) a number $y$ (or of any
$m$-tuple $y_{1},\dots,y_{m}$) on the tape is (\emph{scanned})
\emph{in the standard position}, when the scanned square is the
one bearing the last tally in the representation of $y$ (or of
$y_{m}$).
Now we say that a given machine $\mathfrak{A}$ \emph{computes} a
given partial function $\phi$ of $n$ variables ($n\geq1$), if the
following holds for each $n$-tuple $x_{1},\dots,x_{n}$ of natural
numbers. (For the case $n=0$, cf.\ Remark 1 below.) Let
$x_{1},\dots,x_{n}$ be represented on the tape, with the tape
blank elsewhere, i.e.\ outside of the $x_{1}+\dots +x_{n}+2n+1$
squares required for the representation. Let $\mathfrak{A}$ be
started scanning the representation of $x_{1},\dots,x_{n}$ in
standard position. Then $\mathfrak{A}$ will eventually stop with
the $(n+1)$-tuple $x_{1},\dots,x_{n},x$ represented on the tape and
scanned in standard position, if and only if
$\phi(x_{1},\dots,x_{n})$ is defined and $\phi
(x_{1},\dots,x_{n})=x$. (If $\phi(x_{1},\dots,x_{n})$ is
undefined, $\mathfrak{A}$ may fail to stop. It may stop but
without an $(n+1)$-tuple $x_{1},\dots,x_{n},x$ scanned in standard
position.)
Example 2 (concluded). If $\phi(3,0,2)=1$ and $\mathfrak{A}$
computes $\phi$, then when $\mathfrak{A}$ is started in the
situation
\[%
\begin{tabular}
[c]{ccccccccccccccccc}
& & & & & & & & & & & $q_{1}$ & & & & & \\\hline
& \multicolumn{1}{|c}{\ \ \ } & \multicolumn{1}{|c}{$\ | \ $} &
\multicolumn{1}{|c}{$\ | \ $} & \multicolumn{1}{|c}{$\ | \ $} &
\multicolumn{1}{|c}{$\ | \ $} & \multicolumn{1}{|c}{\ \ \ } &
\multicolumn{1}{|c}{$\ | \ $} & \multicolumn{1}{|c}{\ \ \ } &
\multicolumn{1}{|c}{$\ | \ $} & \multicolumn{1}{|c}{$\ | \ $} &
\multicolumn{1}{|c}{$\ | \ $} & \multicolumn{1}{|c}{\ \ \ } &
\multicolumn{1}{|c}{\ \ \ } & \multicolumn{1}{|c}{\ \ \ } &
\multicolumn{1}{|c}{\ \ \ } & \multicolumn{1}{|c}{}\\\hline
\end{tabular}
\]
with all squares other than those shown blank, it must eventually
stop in the situation
\[%
\begin{tabular}
[c]{ccccccccccccccccc}
& & & & & & & & & & & & & & $q_{0}$ & & \\\hline
& \multicolumn{1}{|c}{\ \ \ } & \multicolumn{1}{|c}{$\ | \ $} &
\multicolumn{1}{|c}{$\ | \ $} & \multicolumn{1}{|c}{$\ | \ $} &
\multicolumn{1}{|c}{$\ | \ $} & \multicolumn{1}{|c}{\ \ \ } &
\multicolumn{1}{|c}{$\ | \ $} & \multicolumn{1}{|c}{\ \ \ } &
\multicolumn{1}{|c}{$\ | \ $} & \multicolumn{1}{|c}{$\ | \ $} &
\multicolumn{1}{|c}{$\ | \ $} & \multicolumn{1}{|c}{\ \ \ } &
\multicolumn{1}{|c}{$\ | \ $} & \multicolumn{1}{|c}{$\ | \ $} &
\multicolumn{1}{|c}{\ \ \ } & \multicolumn{1}{|c}{}\\\hline
\end{tabular}
\]
where the condition of the squares other than those shown is
immaterial.
Although only one symbol $s_{1}$ or ``$|$" is used in stating the
arguments and in receiving the function value, others may be used
in the progress of the computation. For each $n\geq1$, each
machine (with its first symbol $s_{1} $ serving as the tally)
computes a certain partial function of $n$ variables.
A partial function $\phi$ is \emph{computable}, if there is a
machine $\mathfrak{A}$ which computes it.}
\end{quote}
\noindent The Remark 1 to which Kleene refers is on page 363 of Kleene
\cite{GBKleene52}. Below we give an excerpt from it.
\begin{quote}
\textsf{REMARK 1. In this chapter, outside the present remark and
passages referring to it, we shall understand that we are dealing
with functions of $n\geq1$ variables. Since we have not provided
for representing $n$-tuples of natural numbers on the tape for
$n=0$, we say a machine computes a function $\phi$ of $0$
variables, if it computes the function $\phi(x)$ of $1$ variable
such that $\phi(x)\simeq\phi$.
}
\end{quote}
\noindent State in your own words Kleene's definition of Turing
computable functions of multiple variables. Kleene mentions that the
representation of the tuple $x_{1},\dots,x_{n}$ on a tape requires
$x_{1}+\dots+x_{n}+2n+1$ squares. Explain why.
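One can experiment with Kleene's representation in a few lines of Python (a sketch of ours; blank $=0$, tally $=1$). Counting the squares it produces is a useful check on the formula above.

```python
def represent(*xs):
    """Kleene's tape representation of a tuple of natural numbers:
    the number x is written as x + 1 tallies, with a single blank
    between consecutive groups, one before the first group, and one
    after the last."""
    squares = [0]                      # blank before the first group
    for x in xs:
        squares += [1] * (x + 1)       # x + 1 tallies represent x
        squares.append(0)              # blank closing this group
    return squares

tape = represent(3, 0, 2)              # Kleene's Example 2
print(len(tape))                       # 3 + 0 + 2 + 2*3 + 1 = 12
```

Each $x_i$ contributes $x_i+1$ tallies, and there are $n+1$ blanks in all, which accounts for the total of $x_{1}+\dots+x_{n}+2n+1$ squares.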
\vspace{2mm}
\noindent 4.(f) Are Turing's and Kleene's definitions of
computable functions of one variable equivalent? Explain why.
\vspace{2mm}
\noindent 4.(g) (Extra Credit) Is your generalization of
Turing computability to functions of multiple variables equivalent
to Kleene's definition of computable functions of multiple
variables? Explain why.
\subsection*{Part five. Turing computable and general recursive functions}
\noindent 5.(a) Explain why the successor function
$s(x)=x+1$ is Turing computable.
\vspace{2mm}
\noindent 5.(b) Explain why the constant functions
$f(x_{1},\dots,x_{n})=c$ are Turing computable.
\vspace{2mm}
\noindent 5.(c) Explain why the identity functions
$U_{j}^{n}(x_{1},\dots,x_{n})=x_{j}$ are Turing computable.
\vspace{2mm}
\noindent 5.(d) Explain why whenever
$\psi(x_{1},\dots,x_{m})$ and
$\chi_{1}(x_{1},\dots,x_{n}),\dots,\chi_{m}(x_{1},\dots,x_{n})$
are Turing computable, then $\phi(x_{1},\dots,x_{n})$ defined by
\[
\phi(x_{1},\dots,x_{n})=\psi(\chi_{1}(x_{1},\dots,x_{n}),\dots,\chi_{m}%
(x_{1},\dots,x_{n}))
\]
is also Turing computable. In other words, explain why the schema of
substitution preserves Turing computability.
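The schema itself, apart from any Turing-machine construction, can be stated as a short Python combinator. This is our illustration of what the schema produces; it says nothing yet about why Turing computability is preserved, which is the point of the exercise.

```python
def substitute(psi, *chis):
    """The schema of substitution: from psi of m variables and
    chi_1, ..., chi_m of n variables, build
    phi(x_1,...,x_n) = psi(chi_1(x_1,...,x_n), ..., chi_m(x_1,...,x_n))."""
    def phi(*xs):
        return psi(*(chi(*xs) for chi in chis))
    return phi

# Example: psi(u, v) = u + v and chi_1(x) = chi_2(x) = x^2
# give phi(x) = x^2 + x^2.
add = lambda u, v: u + v
square = lambda x: x * x
phi = substitute(add, square, square)
print(phi(3))  # 18
```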
\vspace{2mm}
\noindent 5.(e) Explain why whenever
$\psi(x_{1},\dots,x_{n-1})$ and
$\chi(x_{1},\dots,x_{n+1})$ are Turing computable, then $\phi(x_{1}%
,\dots,x_{n}) $ defined by
\[
\phi(0,x_{2},\dots,x_{n})=\psi(x_{2},\dots,x_{n})
\]
and
\[
\phi(k+1,x_{2},\dots,x_{n})=\chi(k,\phi(k,x_{2},\dots,x_{n}),x_{2},\dots
,x_{n})
\]
is also Turing computable. In other words, explain why the schema of primitive
recursion preserves Turing computability.
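Likewise, the schema of primitive recursion can be written out in plain Python (our illustration of the schema itself, not of the Turing-machine construction): the two defining equations unwind into a loop that builds $\phi(k,x_{2},\dots,x_{n})$ from $\phi(0,x_{2},\dots,x_{n})$ upward.

```python
def prim_rec(psi, chi):
    """The schema of primitive recursion:
    phi(0, x_2,...,x_n)   = psi(x_2,...,x_n),
    phi(k+1, x_2,...,x_n) = chi(k, phi(k, x_2,...,x_n), x_2,...,x_n)."""
    def phi(k, *xs):
        value = psi(*xs)                 # the base case phi(0, ...)
        for i in range(k):
            value = chi(i, value, *xs)   # phi(i+1, ...) from phi(i, ...)
        return value
    return phi

# Addition by primitive recursion: add(0, y) = y,
# add(k+1, y) = s(add(k, y)).
add = prim_rec(lambda y: y, lambda k, prev, y: prev + 1)
print(add(4, 3))  # 7
```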
\vspace{2mm}
\noindent 5.(f) Explain why whenever
$\rho(x_{1},\dots,x_{n},y)$ is Turing computable and
\[
(\forall x_{1})\dots(\forall x_{n})(\exists y)[\rho(x_{1},\dots,x_{n},y)=0],
\]
then $\phi(x_{1},\dots,x_{n})$ defined by
\[
\phi(x_{1},\dots,x_{n})=\mu y[\rho(x_{1},\dots,x_{n},y)=0]
\]
is also Turing computable. In other words, explain why Kleene's $\mu$-operator
preserves Turing computability.
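Kleene's $\mu$-operator is an unbounded search, and the hypothesis $(\forall x_{1})\dots(\forall x_{n})(\exists y)[\rho(x_{1},\dots,x_{n},y)=0]$ is exactly what guarantees that the search terminates. A Python sketch (ours, for illustration):

```python
def mu(rho, *xs):
    """Kleene's mu-operator: the least y with rho(x_1,...,x_n,y) = 0.
    The loop terminates because, by hypothesis, such a y exists for
    every tuple of arguments."""
    y = 0
    while rho(*xs, y) != 0:
        y += 1
    return y

# Example: the integer square root of x is the least y with
# (y + 1)^2 > x, i.e. mu y [rho(x, y) = 0] for the rho below.
rho = lambda x, y: 0 if (y + 1) ** 2 > x else 1
print(mu(rho, 10))  # 3
```

Without the hypothesis the search may run forever, which is why dropping it leads from general recursive functions to partial recursive functions.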
\vspace{2mm}
\noindent 5.(g) Using Exercises 5.(a)--5.(f), draw a conclusion
concerning primitive recursive and general recursive
functions. How is your conclusion related to Church's thesis?
\section*{Notes to the Instructor}
This project is designed for an advanced undergraduate or graduate
course in mathematical logic. It is also well suited for an advanced
undergraduate or graduate course in computability theory. It could
be assigned as a three- to four-week project on primitive recursive,
general recursive, and Turing computable functions. If there is any
extra time available, the instructor may wish to elaborate on why
every Turing computable function is general recursive.
\begin{thebibliography}{50}
\bibitem{GBChurch35} Church, A.,
``An Unsolvable Problem of Elementary Number Theory, Preliminary
Report (abstract),'' \emph{Bull. Amer. Math. Soc.}, {\bf{41}} (1935),
332--333.
\bibitem{GBChurch36} Church, A.,
``An Unsolvable Problem of Elementary Number Theory,'' \emph{Amer.
Journal Math.}, {\bf{58}} (1936), 345--363. This paper with a short
foreword by Davis was reprinted on pages 88--107 of
\cite{GBDavis65}.
\bibitem{GBDavis65} Davis, M.,
\emph{The Undecidable. Basic Papers on Undecidable Propositions,
Unsolvable Problems and Computable Functions}, Martin Davis (editor),
Raven Press, Hewlett, N.Y., 1965.
\bibitem{GBDavis82} Davis, M.,
``Why G\"odel Didn't Have Church's Thesis,''
\emph{Inform. and Control}, {\bf{54}} (1982), no. 1-2, 3--24.
\bibitem{GBGodel31} G\"{o}del, K.,
``\"{U}ber Formal Unentscheidbare {S}\"{a}tze der {P}rincipia
{M}athematica und Verwandter {S}ysteme {I},'' \emph{Monatsh. Math.
Phys.}, {\bf{38}} (1931), 173--198. The English translation of this
paper by Mendelson with foreword by Davis was reprinted on pages
4--38 of \cite{GBDavis65}.
\bibitem{GBGoedel34} G\"{o}del, K.,
\emph{On Undecidable Propositions of Formal Mathematical Systems},
Mimeographed lecture notes by S. C. Kleene and J. B. Rosser,
Institute for Advanced Study, Princeton, N.J., 1934. This lecture
note with foreword by Davis and postscriptum by G\"{o}del were
reprinted in \cite{GBDavis65}, pages 39--74.
\bibitem{GBKleene36b} Kleene, S. C.,
``General Recursive Functions of Natural Numbers,'' \emph{Math.
Ann.}, {\bf{112}} (1936), 727--742. This paper with a short foreword
by Davis was reprinted on pages 236--253 of \cite{GBDavis65}.
\bibitem{GBKleene36a} Kleene, S. C.,
``$\lambda$-Definability and Recursiveness,''
\emph{Duke Math. Journal}, {\bf{2}} (1936), 340--353.
\bibitem{GBKleene38} Kleene, S. C.,
``On Notation for Ordinal Numbers,''
\emph{Journal of Symbolic Logic}, {\bf{3}} (1938), 150--155.
\bibitem{GBKleene43} Kleene, S. C.,
``Recursive Predicates and Quantifiers,'' \emph{Trans. Amer. Math.
Soc.}, {\bf{53}} (1943), 41--73. This paper with foreword by Davis
was reprinted on pages 254--287 of \cite{GBDavis65}.
\bibitem{GBKleene52} Kleene, S. C.,
\emph{Introduction to Metamathematics},
D. Van Nostrand Co., Inc., New York, 1952.
\bibitem{GBKleene81} Kleene, S. C.,
``Origins of Recursive Function Theory,''
\emph{Ann. Hist. Comput.}, {\bf{3}} (1981),
no. 1, 52--67.
\bibitem{GBPost36} Post, E. L.,
``Finite Combinatory Processes, Formulation {I},'' \emph{Journal of
Symbolic Logic}, {\bf{1}} (1936), 103--105. This paper with a short
foreword by Davis was reprinted on pages 288--291 of
\cite{GBDavis65}.
\bibitem{GBRogers67} Rogers, H., Jr.,
\emph{Theory of Recursive Functions and Effective Computability},
McGraw-Hill Book Co., New York, 1967.
\bibitem{GBTuring36} Turing, A. M.,
``On Computable Numbers, with an Application to the
Entscheidungsproblem,'' \emph{Proceedings of the London Mathematical
Society} {\bf{42}} (1936), 230--265. A correction, {\bf{43}} (1937),
544--546. This paper with a short foreword by Davis was reprinted on
pages 115--154 of \cite{GBDavis65}.
\bibitem{GBTuring37} Turing, A. M.,
``Computability and $\lambda$-Definability,''
\emph{Journal of Symbolic Logic}, {\bf{2}} (1937), 153--163.
\end{thebibliography}
\end{document}