%\documentstyle[a4,proof]{article}
\documentclass{article}
\usepackage{a4}
\usepackage{proof}
\title{The Mathematical Vernacular}
\author{Freek Wiedijk\\
Nijmegen University\\
{\tt }}
\date{}
\begin{document}
\maketitle
\begin{abstract}\noindent
A `mathematical vernacular' is a formal language for writing
mathematical proofs that resembles the natural language of mathematical texts.
Several systems (Hyperproof, Mizar, Isabelle/Isar) share essentially
the same proof language: the combination of natural
deduction with first order inference steps. In this note we
compare these three languages and present a simplified common version.
\end{abstract}
\section{Mathematical Vernaculars}
The term `mathematical vernacular' (`wiskundige omgangstaal' or {\sc wot} in
Dutch) was introduced by de Bruijn \cite{bru:87} in the eighties. With this term he
didn't mean the language that mathematicians actually use
to communicate their work in practice
(which is a somewhat stylized variant of natural language interspersed with
formulas.) Instead he meant
a formal language: a system that he had developed to represent mathematics.
Supposedly, it was
close in style to the actual way that mathematicians communicate, hence
the name.
`Vernacular' doesn't seem to be a word that readily admits a plural:
each natural language (English, Dutch, etc.)\ only has {\em one\/} vernacular.
However, many people turned out to have different ideas about
the `best' way to have a formal language resemble ordinary mathematics,
so one started talking about `mathematical vernaculars' in the plural:
the term became the name of a species of formal
language.
This paper doesn't promote a variant of its own of the mathematical
vernacular concept. Instead, it just makes an observation about already
existing formal languages. It turns out that in a significant
number of systems (`proof assistants') one encounters
languages that look almost the same. Apparently there is a canonical style
of presenting mathematics that people discover independently: something
like a {\em natural\/} mathematical vernacular. Because this language
apparently is something that people arrive at independently, we might call
it {\em the\/} mathematical vernacular.
Now this `natural' way of representing mathematics basically is the combination of
just a few ideas:
\begin{itemize}
\item Natural deduction, written down in `Fitch style': this kind of proof
has a natural block structure (just like
procedural programming languages from the Algol tradition), corresponding
to the `flag' concept of de Bruijn.
\item Big inference steps: a way to state that
some forward deduction of the shape:
$$A_1,\ldots,A_n\vdash A$$
is valid in first order logic (and that a checker of the language
should ask a first order prover to fill in the proof.)
\item A choice for an input language instead of a presentation language: the language is not
written like natural language, but instead resembles something like a
programming language (so for instance it has a `\verb|thus|' statement, instead of
saying something like `{\em therefore we have proven that\dots\/}'.)
\end{itemize}
Abstract mathematics consists of various different activities: defining
concepts, stating
propositions, giving proofs. In this paper we exclusively focus on the last
activity: proving. However, when discussing the mathematical
vernacular concept, it turns out that many think that the proof fragment of
such a language is not the interesting one. Instead they think that the
interesting questions are about concept representation. This is mysterious for various
reasons:
\begin{itemize}
\item The word `vernacular' suggests {\em language\/}, and hence is about
form. But the way concepts are modelled seems to be a question of {\em content.}
(For instance, the question of the best way to formalize the real numbers is not a
matter of how to write it down, but of how to shape the mathematics
itself.)
\item When one looks at the bulk of a mathematical text, most of it
is proof. The definitions and statements of propositions are a fraction of the text
compared to the proofs (which run on for pages.) Similarly, when looking
at formal mathematics, the majority of the `code' is proof steps.
\item There clearly is {\em not\/} a consensus about the best way
to represent a proof in a formal system.
For instance, both the input language (a sequence
of tactic invocations) and
output language (a proof term in some typed lambda calculus) of the
popular type theoretical proof assistants don't resemble the language
that is discussed in this paper at all.
\end{itemize}
This note started from the observation that the Mizar \cite{muz:93,wie:99}
and Isar \cite{wen:99} systems
have languages that look very much the same, but that the {\em words\/}
that they use for the various constructions are completely
different. So this note contains a {\em translation table\/} between the
two languages, in section \ref{table}. That section is present in this
paper to show that while the
mathematical vernacular discussed here apparently is canonical, its vocabulary certainly is not.
\section{Three Systems}
Let's consider a very simple mathematical proof. If $A$, $B$, $C$, $D$
are logical formulas and we are given the lemmas:
$$A\to C$$
$$B\to D$$
then from those lemmas we can prove that:
$$\forall x\,(A\wedge B\to C\wedge D)$$
(We chose here not to have $x$ occur inside $A$, $B$, $C$ and $D$.
This is unnatural, but makes the examples simpler.)
Consider the following three formal versions of a proof.
First, in the Hyperproof system of Barwise and Etchemendy \cite{bar:etc:95}, we get:
$$
\begin{tabular}{rrll}
&\footnotesize 1&\sf A $\to$ C&\sf Given\\
&\footnotesize 2&\sf B $\to$ D&\sf Given\medskip\\
&\footnotesize 3&\quad\fbox{\sf a}&\sf Assume\smallskip\\
&\footnotesize 4&\quad\quad\sf A $\wedge$ B&\sf Assume\\
\footnotesize 1, 4 $\Rightarrow$&\footnotesize 5&\quad\quad\sf C&\sf Log Con\\
\footnotesize 2, 4 $\Rightarrow$&\footnotesize 6&\quad\quad\sf D&\sf Log Con\\
\footnotesize 5, 6 $\Rightarrow$&\footnotesize 7&\quad\quad\sf C $\wedge$ D&\sf $\wedge$ Intro\medskip\\
\footnotesize $4\ldots 7$ $\Rightarrow$&\footnotesize 8&\quad\sf (A $\wedge$ B) $\to$ (C $\wedge$ D)&\sf $\to$ Intro\medskip\\
\footnotesize $3\ldots 8$ $\Rightarrow$&\footnotesize 9&\sf $\forall$x ((A $\wedge$ B) $\to$ (C $\wedge$ D))&\sf $\forall$ Intro
\end{tabular}
$$
Second, in the Mizar system of Trybulec \cite{muz:93,wie:99} with:
\begin{verse}
\verb|L1: A implies C|\\
\verb|L2: B implies D|
\end{verse}
we have:
\begin{verse}
\verb|theorem|\\
\verb| for x holds (A & B implies C & D)|\\
\verb|proof|\\
\verb| let a;|\\
\verb| assume L4: A & B;|\\
\verb| thus C by L1,L4;|\\
\verb| thus D by L2,L4;|\\
\verb|end;|
\end{verse}
(Generally in Mizar a combination of \verb|for| and \verb|implies| like this is written
`{\tt for x st A \& B holds C \& D}',
but we use the more explicit syntax here, to make it easier to compare the systems.)
Finally, in the Isabelle/Isar system of Wenzel \cite{wen:99}, with:
\begin{quote}
\begin{verbatim}
L1: "A --> C"
L2: "B --> D"
\end{verbatim}
\end{quote}
this proof is written like\footnote{The {\tt fix} step really needn't be present
(Isar is not {\em that\/} similar to Mizar).
It's only there to make this proof structurally equal to the other two.
Also, this is not the only way to write down this proof with Isar: for instance
the {\tt (intro)} method is not necessary if one structures the proof differently.}:
\begin{quote}
\begin{verbatim}
theorem
"ALL x. A & B --> C & D";
proof (intro);
fix a;
assume L4: "A & B";
from L1 L4; show "C"; by blast;
from L2 L4; show "D"; by blast;
qed;
\end{verbatim}
\end{quote}
(Note that all three systems are powerful enough to prove this statement
in one single step: these example proofs are just to show the mechanisms available for proving.)
Clearly, this is three times the same proof, with just the syntax differing.
The structure of these proofs has a number of notable features:
\begin{itemize}
\item A proof consists of a number of lines, each giving a {\em step\/} in the proof.
Essentially, this proof consists of four steps:
\begin{itemize}\sf
\item introduce the variable $a$
\item introduce the assumption $A\wedge B$
\item deduce the conclusion $C$
\item deduce the conclusion $D$
\end{itemize}
\item There are the basic {\em natural deduction\/} ways to reduce statements that have
to be proved. For instance the \verb|assume| step is used to prove an
implication. It corresponds to $\to$-introduction, reducing a proof obligation
of an implication to that of its conclusion.
\item There is a general purpose {\em first order prover\/} to prove steps
from earlier statements automatically. It is called respectively {\sf Log Con}, \verb|by|
and \verb|blast|.
Actually, the Hyperproof system has a number of variants
of various strengths ({\sf Taut Con}, {\sf Ana Con}) and likewise the Isabelle/Isar
system has a whole spectrum of automated provers
(\verb|simp|, \verb|fast|, \verb|force|, \verb|auto|.) Also, while {\sf Log Con}
and \verb|blast| give full first order derivability, Mizar's \verb|by| is a
weaker version of inference (this makes checking decidable and much faster; in the other
two systems, when the inference does not hold, the check of correctness
may not terminate.)
\end{itemize}
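As an aside, the same structure also appears in more recent proof
assistants. For instance, a sketch of this proof in the Lean system
(not one of the three systems compared in this note; the rendering,
in ASCII Lean notation, is ours) might read:
\begin{quote}
\begin{verbatim}
example (X : Type) (A B C D : Prop)
    (L1 : A -> C) (L2 : B -> D) :
    X -> A /\ B -> C /\ D := by
  intro a   -- let a;
  intro L4  -- assume L4: A & B;
  exact And.intro (L1 L4.1) (L2 L4.2)
            -- thus C by L1,L4; thus D by L2,L4;
\end{verbatim}
\end{quote}
Here too the proof is a sequence of steps that introduce a variable,
introduce an assumption and then discharge the conclusions.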
\section{Isar versus Mizar}\label{table}
Both Mizar and Isabelle/Isar make use of `natural language' keywords like
\verb|show|, \verb|thus|, \verb|hence|, \verb|then|, etc. However,
these words apparently don't have a `natural' meaning, because they mean
quite different things in the two systems
\def\keyw#1#2{\mbox{\tt #1}_{\mbox{\footnotesize\em #2\/}}}
($\keyw{show}{Isar} = \keyw{thus}{Mizar}$,
$\keyw{thus}{Isar} = \keyw{hence}{Mizar}$,
$\keyw{hence}{Isar} = \keyw{then}{Mizar}$, etc.)
Here is a translation table between some of the keywords of the two systems:
\def\ex#1{{\footnotesize #1}}
$$
\begin{tabular}{lllclll}
\em Isar&\em Mizar&&\strut\hspace{1cm}\strut&\em Isar&\em Mizar&\\
\cline{1-3}\cline{5-7}
\verb|.|&&\ex{1}&&\verb|hence|&\verb|then|&\\
\verb|..|&&\ex{1}&&&\verb|hereby|&\ex{5}\\
\verb|...|&&\ex{2}&&\verb|let|&\verb|set|&\\
&\verb|.=|&\ex{2}&&$\verb|next|+\verb|assume|$&\verb|suppose|&\\
&\verb|@proof|&\ex{3}&&\verb|note|&&\ex{1}\\
\verb|also|&&\ex{2}&&\verb|obtain|&\verb|consider|&\\
\verb|assume|&\verb|assume|&&&\verb|presume|&&\ex{5}\\
\verb|by|&&\ex{1}&&\verb|proof|&\verb|proof|&\\
\verb|def|&&\ex{4}&&\verb|qed|&\verb|end|&\\
&\verb|deffunc|&\ex{4}&&\verb|show|&\verb|thus|&\\
&\verb|defpred|&\ex{4}&&\verb|sorry|&&\ex{3}\\
\verb|finally|&&\ex{2}&&\verb|then|&&\ex{1}\\
\verb|fix|&\verb|let|&&&\verb|?thesis|&\verb|thesis|&\\
\verb|from|&\verb|by|&&&\verb|thus|&\verb|hence|&\\
&\verb|from|&\ex{1}&&\verb|with|&$\verb|then|+\verb|by|$&\\
&\verb|given|&\ex{5}&&\verb|{{|&\verb|now|&\\
\verb|have|&\em no prefix&&&\verb|}}|&\verb|end|&\\
\cline{1-3}\cline{5-7}
\end{tabular}
$$
Quite a number of keywords don't have an exact match on the other
side. The various categories of such mismatches (indicated in the third column of
the table) are:
\begin{enumerate}
\item {\em Forward inference.}
Isar's \verb|from|, which corresponds to Mizar's \verb|by|, is a
combination of Isar's \verb|note| (reference to previous statements)
and \verb|then| (`forward chaining'). In Mizar these two things
can't be separated. Also, in Mizar there's only
one inference engine, so Mizar doesn't need keywords
(like Isar's \verb|by|, \verb|.| and \verb|..|)
to indicate which prover should handle the inference.
\item {\em Equational reasoning.} Both Isar and Mizar have support
for chaining equations together, but their mechanisms work a bit differently
and don't map cleanly to each other. Mizar only does this for equalities,
and has something special at the start of the chain, while Isar does this
for all kinds of relations and has something special at the end (\verb|finally|).
\item {\em Unchecked proofs.} These constructions are for getting around
the obligation of giving a full proof. Isar's \verb|sorry| is used to
get rid of a subgoal without proving it, while Mizar's \verb|@proof| is
for speeding up the reprocessing of a file by skipping over already correct proofs.
\item {\em Local definitions.} It's very useful to be able to locally define
a name to mean some expression. The mechanisms in Isar and Mizar for this are
slightly different and don't map exactly to each other (Isar's \verb|let|
is like Mizar's \verb|set|, but it is more powerful because it also can
take expressions apart; Isar's \verb|def| doesn't do automatic expansion, so
isn't like the local definitions of Mizar.)
\item {\em Goal shuffling.} When proving some step in the course of a proof,
there is a certain amount of freedom of moving the location of the statement around in the proof.
The keywords in this category are related to that.
\end{enumerate}
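To make category 2 concrete: a chain of equalities, as built with
Isar's \verb|also|/\verb|finally| or Mizar's \verb|.=|, might be sketched
in a Lean-style \verb|calc| notation (our hypothetical rendering, not the
actual syntax of either system) as:
\begin{quote}
\begin{verbatim}
example (a b c d : Nat)
    (h1 : a = b) (h2 : b = c) (h3 : c = d) : a = d :=
  calc a = b := h1
       _ = c := h2
       _ = d := h3
\end{verbatim}
\end{quote}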
\section{A Simplified Vernacular}
We will now present a bare-bones version of the `vernacular' that we find
in the three systems Hyperproof, Mizar and Isabelle/Isar. We will use
Mizar syntax with a slight modification: instead of using \verb|then| we will
have the special label \verb|-| for referring back to the most recent
{\em proposition\/} in scope. Also the meaning of the `\verb|by|' construction
will be different from that in Mizar: here we have it mean {\em full\/} first-order
derivability.
Because we focus here on the deductive style of the language, we won't
fix the syntax for first order formulas, but just use the ordinary
mathematical notation for
that. So the language will be a bit `fuzzy', because it mixes
mathematical formulas and computer text. Hopefully that will make
it easier to see what is part of the formulas and what is part of the deductive
structure.
So suppose we already have syntactic categories {\em label,} {\em variable,}
{\em term\/} and {\em formula.} Then we can give the following grammar%
\footnote{This is of course an extremely simple grammar: the {\em yacc\/} version has only 55 states.}:
\begin{quote}
\begin{flushleft}
{\em statement\/} = {\em proposition\/} {\em justification\/} \verb|;|

{\em proposition\/} = [ {\em label\/} \verb|:| ] {\em formula\/}

{\em justification\/} =\\*
\quad {\em empty\/}\\*
\strut\rlap{$|$}\quad \verb|by| {\em reference\/} \{\verb|,| {\em reference\/}\}\\*
\strut\rlap{$|$}\quad \verb|proof| \{{\em step\/}\} [ {\em cases\/} ] \verb|end|

{\em reference\/} =\\*
\quad {\em label\/}\\*
\strut\rlap{$|$}\quad \verb|-|

{\em step\/} =\\*
\quad {\em statement\/}\\*
\strut\rlap{$|$}\quad \verb|thus| {\em statement\/}\\
\strut\rlap{$|$}\quad \verb|let| {\em variable\/} \{\verb|,| {\em variable\/}\} \verb|;|\\
\strut\rlap{$|$}\quad \verb|assume| {\em proposition\/} \verb|;|\\
\strut\rlap{$|$}\quad \verb|consider| {\em variable\/} \{\verb|,| {\em variable\/}\}
\verb|such| \verb|that| {\em proposition\/}
{\em justification\/} \verb|;|\\*
\strut\rlap{$|$}\quad \verb|take| {\em term\/} \{\verb|,| {\em term\/}\} \verb|;|

{\em cases\/} = \verb|per| \verb|cases| {\em justification\/} \verb|;|
\{\verb|suppose| {\em proposition\/} \verb|;| \{{\em step\/}\}\}

{\em empty\/} =
\end{flushleft}
\end{quote}
Using the language given by this grammar, the example from the first section becomes:
\begin{verse}
$\forall x\,(A\wedge B\to C\wedge D)$\\
\verb|proof|\\
\verb| let |$a$\verb|;|\\
\verb| assume L4: |$A\wedge B$\verb|;|\\
\verb| thus |$C$\verb| by L1,L4;|\\
\verb| thus |$D$\verb| by L2,L4;|\\
\verb|end;|
\end{verse}
For a more complex example, here is an {\em extremely\/} explicit proof of the Drinker's principle:
\begin{verse}
$\exists x\,(P(x)\to\forall y\,P(y))$\\*
\verb|proof|\\*
\verb| |$\neg\neg\exists x\,(P(x)\to\forall y\,P(y))$\\*
\verb| proof|\\*
\verb| assume H1: |$\neg\exists x\,(P(x)\to\forall y\,P(y))$\verb|;|\\
\verb| H2: |$\forall x\,P(x)$\\*
\verb| proof|\\*
\verb| let |$a$\verb|;|\\
\verb| |$\neg\neg P(a)$\\*
\verb| proof|\\*
\verb| assume H3: |$\neg P(a)$\verb|;|\\
\verb| |$\exists x\,(P(x)\to\forall y\,P(y))$\\*
\verb| proof|\\*
\verb| take |$a$\verb|;|\\
\verb| assume |$P(a)$\verb|;|\\
\verb| |$\bot$\verb| by -,H3;|\\
\verb| thus |$\forall y\,P(y)$\verb| by -;|\\*
\verb| end;|\\
\verb| thus |$\bot$\verb| by -,H1;|\\*
\verb| end;|\\
\verb| thus |$P(a)$\verb| by -;|\\*
\verb| end;|\\
\verb| |$\exists x\,(P(x)\to\forall y\,P(y))$\\*
\verb| proof|\\*
\verb| consider |$a$\verb| such that |$\top$\verb|;|\\
\verb| take |$a$\verb|;|\\
\verb| assume |$P(a)$\verb|;|\\
\verb| thus |$\forall y\,P(y)$\verb| by H2;|\\*
\verb| end;|\\
\verb| thus |$\bot$\verb| by -,H1;|\\*
\verb| end;|\\
\verb| thus |$\exists x\,(P(x)\to\forall y\,P(y))$\verb| by -;|\\*
\verb|end;|
\end{verse}
And here is a more `human' proof:
\begin{verse}
$\exists x\,(P(x)\to\forall y\,P(y))$\\*
\verb|proof|\\*
\verb| per cases;|\\*
\verb| suppose H1: |$\forall x\,P(x)$\verb|;|\\*
\verb| consider |$a$\verb| such that |$\top$\verb|;|\\
\verb| take |$a$\verb|;|\\
\verb| thus |$P(a)\to\forall x\,P(x)$\verb| by H1;|\\
\verb| suppose |$\exists x\,\neg P(x)$\verb|;|\\*
\verb| consider |$a$\verb| such that H2: |$\neg P(a)$\verb| by -;|\\
\verb| take |$a$\verb|;|\\
\verb| assume |$P(a)$\verb|;|\\
\verb| |$\bot$\verb| by -,H2;|\\
\verb| thus |$\forall y\,P(y)$\verb| by -;|\\*
\verb|end;|
\end{verse}
Of course, because the statement is provable, all that is really needed is
the empty justification:
\begin{verse}
$\exists x\,(P(x)\to\forall y\,P(y))$\verb|;|
\end{verse}
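For comparison, the `human' proof by cases might be rendered in the Lean
system (again a sketch of ours, not one of the systems under discussion;
\verb|Inhabited X| supplies the arbitrary element used in the first case) as:
\begin{quote}
\begin{verbatim}
example (X : Type) [Inhabited X] (P : X -> Prop) :
    Exists fun x => P x -> (y : X) -> P y := by
  by_cases h : Exists fun x => Not (P x)  -- per cases;
  case pos =>
    exact h.elim fun a ha =>     -- consider a such that not P(a);
      Exists.intro a fun hpa =>  -- take a; assume P(a);
        absurd hpa ha            -- contradiction with not P(a)
  case neg =>
    refine Exists.intro default fun _ y => ?_  -- take default;
    by_cases hy : P y
    case pos => exact hy
    case neg => exact absurd (Exists.intro y hy) h
\end{verbatim}
\end{quote}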
\section{Proof Steps}
We will now describe the language constructions
that implement various natural deduction rules:
\begin{itemize}
\item \verb|let| or {\em $\forall$-introduction\/}
When the statement that has to be proved is of the form $\forall x\,P(x)$, then after
the step `\verb|let| $y$\verb|;|'
the statement to be proved will be $P(y)$. So we have that:
\begin{verse}
$\langle${\em proof of \/}$\forall x\,P(x)\rangle \equiv$\\*
%\smallskip
\qquad $\langle${\em preliminary steps\/}$\rangle$\\*
\qquad \verb|let |$y$\verb|;|\\*
\qquad $\langle${\em proof of \/}$P(y)\rangle$\\*
\end{verse}
Of course the name of the variable in the \verb|let| step usually will be the same
as the name of the variable under the $\forall$ quantifier.
This step corresponds to the natural deduction rule of $\forall$-introduction:
$$\infer[\mbox{\strut{\tt let} }y]
{\strut\Gamma\,\vdash\,\forall x\,P(x)}
{\strut\Gamma,\,y\,\vdash\,P(y)}$$
(The reason that a rule for {\em introduction\/} causes the relevant quantifier
to be {\em omitted\/} from the goal is that the natural direction of reasoning
in such a proof is backwards.)
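Sketched in Lean notation (our hypothetical comparison; there \verb|intro|
plays the role of \verb|let|):
\begin{quote}
\begin{verbatim}
example (X : Type) (P : X -> Prop)
    (h : (x : X) -> P x) : (x : X) -> P x := by
  intro y    -- let y;
  exact h y  -- the goal is now P(y)
\end{verbatim}
\end{quote}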
\item \verb|assume| or {\em $\to$-introduction,} {\em $\neg$-introduction\/} and {\em reductio ad absurdum\/}
The \verb|assume| step can be used in three ways:
\begin{itemize}
\item
First, it is for implication what the \verb|let| step is for
universal quantification. So, when the statement to be proved is of the form
$A\to B$, then after the step `\verb|assume| $A$\verb|;|' the statement to
be proved will be $B$. This means that:
\begin{verse}
$\langle${\em proof of \/}$A\to B\rangle \equiv$\\*
%\smallskip
\qquad $\langle${\em preliminary steps\/}$\rangle$\\*
\qquad \verb|assume |$A$\verb|;|\\*
\qquad $\langle${\em proof of \/}$B\rangle$\\*
\end{verse}
In natural deduction style this is $\to$-introduction:
$$\infer[\mbox{\strut{\tt assume} }A]
{\strut\Gamma\,\vdash\,A\to B}
{\strut\Gamma,\,A\,\vdash\,B}$$
\item
Second, because $\neg A$ is equivalent to $A\to\bot$, the \verb|assume| step also
can be used to prove negation:
\begin{verse}
$\langle${\em proof of \/}$\neg A\rangle \equiv$\\*
%\smallskip
\qquad $\langle${\em preliminary steps\/}$\rangle$\\*
\qquad \verb|assume |$A$\verb|;|\\*
\qquad $\langle${\em proof of \/}$\bot\rangle$\\*
\end{verse}
This is $\neg$-introduction:
$$\infer[\mbox{\strut{\tt assume} }A]
{\strut\Gamma\,\vdash\,\neg A}
{\strut\Gamma,\,A\,\vdash\,\bot}$$
\item
Finally, if deducing a contradiction from $A$ proves $\neg A$, deducing a
contradiction from $\neg A$ proves $A$:
\begin{verse}
$\langle${\em proof of \/}$A\rangle \equiv$\\*
%\smallskip
\qquad $\langle${\em preliminary steps\/}$\rangle$\\*
\qquad \verb|assume |$\neg A$\verb|;|\\*
\qquad $\langle${\em proof of \/}$\bot\rangle$\\*
\end{verse}
This is the principle of `reductio ad absurdum', reasoning from a contradiction:
$$\infer[\mbox{\strut{\tt assume} }\neg A]
{\strut\Gamma\,\vdash\,A}
{\strut\Gamma,\,\neg A\,\vdash\,\bot}$$
\end{itemize}
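The three uses of \verb|assume| can be sketched in Lean notation (our
comparison; note that the reductio case needs classical logic):
\begin{quote}
\begin{verbatim}
example (A B : Prop) (h : A -> B) : A -> B := by
  intro ha    -- assume A;
  exact h ha  -- the goal is now B

example (A : Prop) (h : A -> False) : Not A := by
  intro ha    -- assume A;
  exact h ha  -- the goal is now False

example (A : Prop) (h : Not A -> False) : A :=
  Classical.byContradiction h  -- assume Not A; prove False
\end{verbatim}
\end{quote}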
\item \verb|thus| or {\em $\wedge$-introduction\/}
The converse of the \verb|assume| step (which eliminates a hypothesis
from the statement to be proved) is the \verb|thus| step (which eliminates
a conclusion from the statement to be proved.) If the statement to
be proved is of the form $A\wedge B$, then
after a `\verb|thus| $A$ \verb|by| $\ldots$\verb|;|' (where the
\verb|by|-justification proves $A$) the statement
left to be proved will be $B$. So:
\begin{verse}
$\langle${\em proof of \/}$A\wedge B\rangle \equiv$\\*
%\smallskip
\qquad $\langle${\em preliminary steps\/}$\rangle$\\*
\qquad \verb|thus |$A$\verb| by |$\ldots$\verb|;|\\*
\qquad $\langle${\em proof of \/}$B\rangle$\\*
\end{verse}
This step works modulo associativity of $\wedge$: both $A$ and
$B$ can have multiple conjuncts.
In natural deduction style this is the $\wedge$-introduction rule:
$$\infer[\mbox{\strut{\tt thus} }A]
{\strut\Gamma\,\vdash\,A\wedge B}
{\strut\Gamma\,\vdash\,A
&\strut\Gamma\,\vdash\,B}$$
Note that there are two proof obligations above the line in this rule. The justification
of the step corresponds to the left-most one.
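In Lean notation this step might be sketched as follows (our comparison;
\verb|constructor| splits the conjunction into the two proof obligations):
\begin{quote}
\begin{verbatim}
example (A B : Prop) (hA : A) (hB : B) : A /\ B := by
  constructor
  case left  => exact hA  -- thus A by hA;
  case right => exact hB  -- the remaining goal B
\end{verbatim}
\end{quote}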
\item \verb|per| \verb|cases| or {\em $\vee$-elimination\/}
This construction is more complicated than the previous one.
Suppose we have to prove $B$ and are able to justify the disjunction
$A_1\vee\ldots\vee A_n$. Then we can prove by cases:
\begin{verse}
$\langle${\em proof of \/}$B\rangle \equiv$\\*
%\smallskip
\qquad $\langle${\em preliminary steps\/}$\rangle$\\*
\qquad \verb|per cases by |$\ldots$\verb|;|\\*
\qquad \verb|suppose |$A_1$\verb|;|\\*
\qquad\quad $\langle${\em proof of \/}$B$ {\em from \/}$A_1\rangle$\\*
\qquad $\ldots$\\*
\qquad \verb|suppose |$A_n$\verb|;|\\*
\qquad\quad $\langle${\em proof of \/}$B$ {\em from \/}$A_n\rangle$\\*
\end{verse}
where the `\verb|by| $\ldots$' references justify $A_1\vee\ldots\vee A_n$.
Often the disjunction is provable without conditions (for instance because
it has the shape $A\vee\neg A$) and there just is a `{\tt per cases;}'.
In natural deduction this is $\vee$-elimination:
$$\infer[\mbox{\strut{\tt per} {\tt cases}}]
{\strut\Gamma\,\vdash\,B}
{\strut\Gamma\,\vdash\,A_1\vee\ldots\vee A_n
&\strut\Gamma,\,A_1\,\vdash\,B
&\ldots
&\strut\Gamma,\,A_n\,\vdash\,B}$$
Again, the left-most of the top subgoals is the justification of
the construction.
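In Lean notation this might be sketched as (our comparison; \verb|cases|
on a disjunction gives one subproof per disjunct):
\begin{quote}
\begin{verbatim}
example (A1 A2 B : Prop) (h : A1 \/ A2)
    (h1 : A1 -> B) (h2 : A2 -> B) : B := by
  cases h with           -- per cases by h;
  | inl a => exact h1 a  -- suppose A1;
  | inr a => exact h2 a  -- suppose A2;
\end{verbatim}
\end{quote}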
\item \verb|consider| or {\em $\exists$-elimination\/}
The \verb|consider| step is used to apply an existential fact. If one is able to
justify $\exists x\,P(x)$ one may reason about such an $x$:
\begin{verse}
$\langle${\em proof of \/}$A\rangle \equiv$\\*
%\smallskip
\qquad $\langle${\em preliminary steps\/}$\rangle$\\*
\qquad \verb|consider |$x$\verb| such that |$P(x)$\verb| by |$\ldots$\verb|;|\\*
\qquad $\langle${\em proof of \/}$A\rangle$\\*
\end{verse}
So the `\verb|by| $\ldots$' justifies $\exists x\,P(x)$.
In natural deduction, this is $\exists$-elimination:
$$\infer[\mbox{\strut{\tt consider} }x\mbox{ {\tt such} {\tt that} }P(x)]
{\strut\Gamma\,\vdash\,A}
{\strut\Gamma\,\vdash\,\exists x\,P(x)
&\strut\Gamma,\,x,\,P(x)\,\vdash\,A}$$
Once more, the top-left subgoal is the justification needed for the step.
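In Lean notation this might be sketched as (our comparison; destructing
the existential statement yields the witness and its property):
\begin{quote}
\begin{verbatim}
example (X : Type) (P : X -> Prop) (A : Prop)
    (h : Exists fun x => P x)
    (hA : (x : X) -> P x -> A) : A := by
  cases h with   -- consider x such that P(x) by h;
  | intro x hx => exact hA x hx
\end{verbatim}
\end{quote}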
\item \verb|take| or {\em $\exists$-introduction\/}
Finally, to prove an existential statement there's the \verb|take| step.
If the statement to be proved has the form $\exists x\,P(x)$, then after
a step `\verb|take| $a$\verb|;|' (for some term $a$), the statement to be
proved will be $P(a)$:
\begin{verse}
$\langle${\em proof of \/}$\exists x\,P(x)\rangle \equiv$\\*
%\smallskip
\qquad $\langle${\em preliminary steps\/}$\rangle$\\*
\qquad \verb|take |$a$\verb|;|\\*
\qquad $\langle${\em proof of \/}$P(a)\rangle$\\*
\end{verse}
In natural deduction this is $\exists$-introduction:
$$\infer[\mbox{\strut{\tt take} }a]
{\strut\Gamma\,\vdash\,\exists x\,P(x)}
{\strut\Gamma\,\vdash\,P(a)}$$
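In Lean notation this might be sketched as (our comparison; one simply
supplies the witness term):
\begin{quote}
\begin{verbatim}
example (P : Nat -> Prop) (h : P 0) :
    Exists fun x => P x := by
  exact Exists.intro 0 h  -- take 0; the goal P(0) is closed by h
\end{verbatim}
\end{quote}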
\end{itemize}
All the other natural deduction rules
don't need a special step of their own, but are just
instances of the \verb|by| construction.
If we use the correspondences that we just gave
to build the natural deduction tree corresponding
to the example proof, we get:
$$
\infer[\mbox{\strut{\tt let} }a]{\strut \vdash\,\forall x\,(A\wedge B\to C\wedge D)}
{\infer[\mbox{\strut{\tt assume L4:} }A\wedge B]{\strut a\,\vdash\,A\wedge B\to C\wedge D}
{\infer[\mbox{\strut{\tt thus} }C]{\strut a,\,A\wedge B\,\vdash\,C\wedge D}
{\infer[\mbox{\strut{\tt by L1,L4}}]{\strut a,\,A\wedge B\,\vdash\,C}
{\strut\ldots}
&\infer[\strut\mbox{{\tt thus} }D]{\strut a,\,A\wedge B\,\vdash\,D}
{\infer[\strut\mbox{{\tt by L2,L4}}]{\strut a,\,A\wedge B\,\vdash\,D}
{\strut\ldots}
&\infer[\strut\mbox{{\tt end}}]{\strut a,\,A\wedge B\,\vdash}{}
}}}}
$$
\section{Loose Deduction}
The language that we described in the previous sections follows Mizar
in the requirement that steps like \verb|assume| and \verb|thus| have to follow the
structure of the statement to be proved exactly: so to prove
$A_1\to\ldots\to A_m\to B_1\wedge\ldots\wedge B_n$ one has to
have the lines:
\begin{verse}
\verb|assume |$A_1$\verb|;|\\*
$\ldots$\\*
\verb|assume |$A_m$\verb|;|\\*
\verb|thus |$B_1$\verb| by |$\ldots$\verb|;|\\*
$\ldots$\\*
\verb|thus |$B_n$\verb| by |$\ldots$\verb|;|
\end{verse}
in exactly that order.
This can be loosened considerably, in the following way:
\begin{itemize}
\item In order to determine whether an \verb|assume|
step is allowed, bring the statement to be proved into negative disjunctive normal form.
Then a step:
$$\mbox{{\tt assume} $A_{i_1}\wedge\ldots\wedge A_{i_m}${\tt ;}}$$
is allowed if the normal form is $\neg A_1\vee \neg A_2\vee\ldots\vee \neg A_n$.
After the step, of course, the new statement is the disjunction of the
`remaining' $\neg A_i$.
\item In order to determine whether a \verb|thus|
step is allowed, bring the statement into conjunctive normal form. Then a step:
$$\mbox{{\tt thus} $B_{i_1}\wedge\ldots\wedge B_{i_m}$ {\tt by} $\ldots${\tt ;}}$$
is allowed if the conjunctive normal form is
$B_1\wedge B_2\wedge\ldots\wedge B_n$. Again, after the step the new statement
is the conjunction of the remaining $B_i$.
\end{itemize}
When using this looser style of natural deduction, it doesn't
matter for the proof whether one phrases a proposition in the common
mathematical style like:
$$A_1\wedge\ldots\wedge A_n\to B$$
or whether one uses the Curried implication that's common in type theory:
$$A_1\to\ldots\to A_n\to B$$
Both propositions will then behave in the same way.
\paragraph{Acknowledgements} Thanks to Gertrud Bauer and Markus Wenzel
for explanations about Isar and to Herman Geuvers and Jan Zwanenburg
for valuable discussion.
%\bibliographystyle{plain}
%\bibliography{mv}
\begin{thebibliography}{1}
\bibitem{bar:etc:95}
J.~Barwise and J.~Etchemendy.
\newblock {\em Hyperproof}.
\newblock Number~42 in CSLI Lecture Notes. CSLI Publications, Stanford, 1995.
\bibitem{bru:87}
N.G.~de Bruijn.
\newblock The mathematical vernacular, a language for mathematics with typed
sets.
\newblock In P.~Dybjer et~al., editors, {\em Proceedings of the Workshop on
Programming Languages}, Marstrand, Sweden, 1987.
\bibitem{muz:93}
M.~Muzalewski.
\newblock {\em An Outline of PC Mizar}.
\newblock Fondation Philippe le Hodey, Brussels, 1993.
\newblock URL: \verb||.
\bibitem{wen:99}
M.~Wenzel.
\newblock {\em The Isabelle/Isar Reference Manual}.
\newblock TU M{\"u}nchen, M{\"u}nchen, 1999.
\newblock URL:
\verb||.
\bibitem{wie:99}
F.~Wiedijk.
\newblock Mizar: An impression.
\newblock Unpublished,\hfill\break URL:
\verb||, 1999.
\end{thebibliography}
\end{document}