\documentstyle{article}
\tolerance=10000
\def\intermezzo{$$*\quad*\quad*$$}
\def\infer#1#2{\frac{\displaystyle\strut #1}{\displaystyle\strut #2}}
\def\iff{\leftrightarrow}
\def\ANALYZER{{\sc analyzer}}
\def\CHECKER{{\sc checker}}
\def\PRECHECKER{{\sc prechecker}}
\def\EQUALIZER{{\sc equalizer}}
\def\UNIFIER{{\sc unifier}}
\def\Subst{\sigma}
\def\abs{\mathop{\rm abs}\nolimits}
\def\EqList{{\it EqList\/}}
\title{CHECKER}
\author{written by Andrzej Trybulec \\
compiled by Freek Wiedijk}
\date{}
\begin{document}
\maketitle
\begin{abstract}
\noindent
This is a compilation of
a series of e-mail messages by Andrzej Trybulec.
The basic inference step (\verb|by|) of the Mizar
system is described.
\end{abstract}
\section{}
Let us start with two things:
We have to check an inference\footnote{%
`inference' is basically an uncountable word in English, but when I used
`an inference' nobody protested, so I think we may use it.
}
of the form
$$\infer{b_1\quad\cdots\quad b_k}{a}$$
(the $b_i$ are the referenced premises, possibly referenced with `\verb|then|'; \verb|then|/\verb|hence| is only
syntactic sugar.)
The {\CHECKER} is a disprover, i.e.~it negates the conclusion, puts it among the
premises and tries to get a contradiction. So it has to check:
$$\infer{b_1\quad\cdots\quad b_k\quad\neg a}{\bot}$$
%
The second important point: Mizar works with a system of semantic
correlates. The concept of semantic correlates was introduced by
Roman Suszko in his works on non-Fregean logic. He does it syntactically,
but on the semantic level one can present them in the following way:
Let $F$ be the algebra of formulae; let it be a free universal algebra with the
usual set of connectives: $\land$, $\lor$, $\to$, $\iff$, $\neg$.
I would also like to have the nullary connectives $\top$ and $\bot$
(or in the Mizar notation maybe `\verb|contradiction|' for $\bot$; we have no
notation for $\top$). And we need the quantifiers $\forall$ and $\exists$;
I would prefer to think about them as families of operations, two
operations for every variable. Of course one must explain what a variable
is; let us postpone this.
Let $L$ be the corresponding Lindenbaum algebra (it should be called the
Tarski-Lindenbaum algebra, but usually it is called just Lindenbaum),
i.e.~the algebra of classes of logically equivalent formulae, and let
$l : F \to L$ be the canonical mapping.
Then by an algebra of semantic correlates we mean a factorization of
the mapping $l$
$$
\begin{picture}(100,70)
\put(10,50){\makebox(0,0){$F$}}
\put(50,10){\makebox(0,0){$K$}}
\put(90,50){\makebox(0,0){$L$}}
\put(20,50){\vector(1,0){60}}
\put(17,43){\vector(1,-1){26}}
\put(58,18){\vector(1,1){24}}
\put(50,58){\makebox(0,0){$l$}}
\put(23,23){\makebox(0,0){$k$}}
\end{picture}
$$
Of course, as special cases we get $F$ and $L$ as algebras of semantic
correlates (with $k$ equal to the identity and to $l$, respectively). Now, for practical
applications we want $k$ to be a computable function with low complexity.
In Mizar, as yet, the system of semantic correlates is close to abstract
syntax, but:
All propositional connectives except $\land$, $\neg$ and $\top$ are
eliminated using well known rules (de Morgan laws etc.); the only
explanation needed is that the equivalence $a \iff b$ is taken to be
$$(a \to b) \land (b \to a)$$
i.e.
$$\neg(a \land \neg b) \land \neg (b \land \neg a)$$
%
The other possibility:
$$(a \land b) \lor (\neg a \land \neg b)$$
i.e.
$$\neg(\neg(a \land b) \land \neg(\neg a \land \neg b))$$
seems to be less useful.
There are additional laws.
\begin{enumerate}
\item The conjunction is associative (but not commutative).
\item $\top$ is the unity of conjunction.
\item The negation is an involution: $\neg \neg a = a$.
\end{enumerate}
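These elimination rules can be sketched in a few lines (a toy model in Python; the tuple representation is invented for illustration and is not Mizar's internal one):

```python
# A toy model of mapping a formula to its semantic correlate in the
# {and, not, TOP} fragment.  A formula is 'TOP', ('atom', name), or a
# tuple headed by a connective: 'not', 'and', 'or', 'imp', 'iff'.

def neg(f):
    """Negation as an involution: not not a = a."""
    return f[1] if isinstance(f, tuple) and f[0] == 'not' else ('not', f)

def conj(a, b):
    """Conjunction with TOP as its unity."""
    if a == 'TOP':
        return b
    if b == 'TOP':
        return a
    return ('and', a, b)

def correlate(f):
    if f == 'TOP' or f[0] == 'atom':
        return f
    if f[0] == 'not':
        return neg(correlate(f[1]))
    a, b = correlate(f[1]), correlate(f[2])
    if f[0] == 'and':
        return conj(a, b)
    if f[0] == 'or':        # de Morgan: a or b = not(not a and not b)
        return neg(conj(neg(a), neg(b)))
    if f[0] == 'imp':       # a -> b = not(a and not b)
        return neg(conj(a, neg(b)))
    if f[0] == 'iff':       # (a -> b) and (b -> a)
        return conj(neg(conj(a, neg(b))), neg(conj(b, neg(a))))
```

The flattening of the associative conjunction is omitted from this sketch.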
We just started to experiment with the distributivity of the universal
quantifier with respect to conjunction.
I think that it is enough for the beginning; just observe that some
logical laws are obvious, because we work with semantic correlates.
Anyway, because the {\CHECKER} is a disprover, and because of the semantic
correlates used, the inference
$$\infer{\forall x .\, a(x)}{a(c)}$$
is basically the same as
$$\infer{a(c)}{\exists x .\, a(x)}$$
\intermezzo
\begin{quote}\it
It is not very clear to me why it makes a difference
that conjunction is not commutative
(does it make a difference for what {\CHECKER} accepts?)
\end{quote}
Yes, the inference
$$\infer{\forall x \exists y .\, P[x,y] \land Q[x,y]}{\exists y .\, P[x_0,y] \land Q[x_0,y]}$$
is accepted, while
$$\infer{\forall x \exists y .\, P[x,y] \land Q[x,y]}{\exists y .\, Q[x_0,y] \land P[x_0,y]}$$
is not. Some people, at least one (Pauline N. Kawamoto), believe that
it is a serious shortcoming.
\begin{quote}\it
Apparently, there is some relation between $\forall$ and $\exists$ on
the level of semantic correlates as well: apparently $\forall$
and $\neg\exists\neg$ in your $F$ map to the same element of your
$K$. So do you eliminate all the $\exists$'s as well, like the
disjunctions?
\end{quote}
Yes, I forgot to write about it.
\begin{quote}\it
Another question: do you eliminate propositional connectives
only on the outer level (so outside the quantifiers) or inside
the quantifiers as well?
\end{quote}
All of them, in the scope of a quantifier, too.
\section{}
The next step is:
Given
$$\infer{p_1\quad\cdots\quad p_k}{\bot}$$
the {\CHECKER} conjoins all the premises ($p_i$ -- an arbitrary formula) and creates
a disjunctive normal form of the conjunction.
Two remarks:
\begin{enumerate}
\item It is a normal form created from atomic (all kinds of them) and
universal sentences; because we needed a more general concept of normal
form than is usually used, it prompted me to define it in Mizar (\verb|NORMFORM|),
and I proved that it is a Heyting algebra (\verb|HEYTING1|); recently we
have been developing the theory of such lattices -- we may claim that some part of
the {\CHECKER} is already described in Mizar \verb|:-)|
\item Among all possible normal forms, the {\CHECKER} chooses what we call the
standard normal form -- just the normal form that we get by using the
distributivity of conjunction with respect to disjunction as many times as necessary.
We experimented earlier with the canonical normal form (all disjuncts
of the same length), which gives a slightly stronger concept of obviousness,
and with the regular normal form (an element of a Heyting algebra), but
they complicate things and the gain is comparatively small.
\end{enumerate}
%
Now we get something like
$$\infer{(q_{1,1} \land \ldots \land q_{1,k_1}) \lor \ldots \lor (q_{n,1} \land \ldots \land q_{n,k_n})}{\bot}$$
and of course {\CHECKER} tries to refute the consecutive disjuncts:
$$\infer{q_{1,1} \land \ldots \land q_{1,k_1}}{\bot} \quad\cdots\quad \infer{q_{n,1} \land \ldots \land q_{n,k_n}}{\bot}$$
In this way we get rid of the propositional calculus: if the original
inference is obvious on the propositional level, the disjunctive normal
form is empty, there is nothing to refute, and the inference is accepted.
Of course we get more: different cases are now separated. They are checked
separately and errors are reported independently: one sometimes gets many
$$\mbox{\verb|*4|}$$
or rather
$$\mbox{\verb|*4,4,4,4,4,4,4|}$$
This is a shortcoming; we should report e.g.~which disjuncts are not refuted, but
it is not easy to compute the index of a disjunct, and we need a \verb|#.LOG|
file anyway.
Of course in most cases we have just one disjunct.
\intermezzo
\begin{quote}\it
You say this like it doesn't matter for the result in
which order you apply the distributivity laws. That probably
is the case then, although it isn't completely obvious to
me.
\end{quote}
It is a consequence of the associativity. However, we do more:
the absorption law is used to remove disjuncts that are too long. And because
the disjunctive normal form (in the {\CHECKER}) is in fact a set
(an ordered collection, not a list) of disjuncts, which are boolean
valued partial functions, on this level conjunction is commutative.
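As an illustration, here is a toy sketch (in Python, with invented data structures) of a normal form kept as a set of disjuncts, with contradictory disjuncts dropped and the absorption law applied:

```python
# A toy sketch of the standard normal form: a DNF is a set of disjuncts,
# each disjunct a frozenset of literals (atom, polarity).

def literal(atom, positive=True):
    return {frozenset([(atom, positive)])}

def absorb(disjuncts):
    """Absorption: a disjunct subsumed by a shorter one is removed."""
    return {d for d in disjuncts if not any(e < d for e in disjuncts)}

def dnf_or(d1, d2):
    return absorb(d1 | d2)

def dnf_and(d1, d2):
    """Distribute conjunction over disjunction."""
    out = set()
    for a in d1:
        for b in d2:
            c = a | b
            # drop the disjunct if it contains p together with not p
            if not any((atom, not pol) in c for atom, pol in c):
                out.add(c)
    return absorb(out)
```

An empty result corresponds to an inference that is obvious on the propositional level: there is nothing left to refute.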
\begin{quote}\it
\begin{quote}
We experimented earlier with the canonical normal form (all
disjuncts of the same length)
\end{quote}
You mean with all atoms in all the disjuncts the same, but
depending on which disjunct it is, with or without a
negation? Some subset of the $2^n$ disjuncts?
\end{quote}
Yes, perfect.
\section{}
The {\CHECKER} in fact consists of three passes. So far we have talked about the first
pass, which we call the {\PRECHECKER}; the next is the {\EQUALIZER} (because its main
role is to cope with the equality calculus); the last we call the
{\UNIFIER}, which is a bad name, because we do not allow substituting a
variable for a variable, but we have no better one. Let us move to the {\UNIFIER}.
In a disjunct the {\CHECKER} chooses a universal sentence (if any; if there is no
universal sentence, the inference is not accepted), which we call the `main
premise', removes its external universal quantifiers, and changes the bound
variables to free ones.
It is convenient to think about the resulting formula as a $\land$/$\lor$ tree;
because the de Morgan laws hold for semantic correlates, we may push the negations
down to the leaves.
The remaining premises are called auxiliary premises.
In the main premise we have at the leaves (possibly negated) basic formulae
(atomic or universal) with free variables. For every leaf the {\CHECKER} looks
for auxiliary premises with the opposite sign (i.e.~not negated if the
formula at the leaf is negated, and negated if it is not) and with the same
structure, i.e.~such auxiliary premises that differ only at the places in
which a free variable occurs in the formula at the leaf.
(I hope I got the syntax correct \verb|:-)|)
So at every leaf we get a list of substitutions (partial maps from free
variables to ground terms) under which the formula at the leaf is
contradictory with an auxiliary premise.
Then we use an algebra of substitutions:
%
if $\Subst(f)$ is the list of substitutions of the formula $f$, then
\begin{eqnarray*}
\Subst(f \land g) &=& \Subst(f) \cup \Subst(g)\\
\Subst(f \lor g) &=& \{ s_1 \cup s_2 : \mbox{$s_1 \in \Subst(f)$, $s_2 \in \Subst(g)$ and
$s_1 \cup s_2$ is a function} \}
\end{eqnarray*}
The last condition means that it does not try to substitute two different
ground terms for the same variable.
Similarly as in the case of the propositional calculus, the lists of
substitutions are standardized: bigger substitutions are removed.
I believe I forgot to write that when we create the disjunctive normal form
we remove contradictory disjuncts; that corresponds to the condition that
$s_1 \cup s_2$ is a function.
Then, if $\Subst(m)$, where $m$ is the main formula, is non-empty, the set of
premises is contradictory and the inference is accepted.
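The calculus above can be sketched as follows (a hypothetical Python model; a substitution is represented as a dict from free variables to ground terms):

```python
# A hypothetical model of the calculus of substitutions: each leaf
# carries a list of substitutions under which it contradicts some
# auxiliary premise.

def standardize(substs):
    """Remove substitutions that properly extend a smaller one."""
    return [s for s in substs
            if not any(t.items() < s.items() for t in substs)]

def subst_and(s1, s2):
    """To contradict f & g it suffices to contradict f or g."""
    return standardize(s1 + s2)

def subst_or(s1, s2):
    """To contradict f | g both parts must be contradicted, and the
    union of the two substitutions must still be a function."""
    out = []
    for a in s1:
        for b in s2:
            if all(a[x] == b[x] for x in a.keys() & b.keys()):
                out.append({**a, **b})
    return standardize(out)
```

The inference is accepted when the list computed for the root of the main premise is non-empty.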
The lattice of lists of substitutions is described in \verb|SUBSTLAT| and the
proof that it is a Heyting algebra in \verb|HEYTING2|. In fact the lattices of
normal forms are special cases (up to isomorphism) of lattices of
substitutions.
\intermezzo
\begin{quote}\it
Something I wondered about: is {\CHECKER} a big part of Mizar's
source code?
\end{quote}
The unit {\CHECKER} is $15\%$ of the code (about $150$ kBytes), but some procedures
needed in the {\CHECKER}, or shared with other units, are in other units.
\begin{quote}\it
So I guess {\UNIFIER} just tries all universal sentences, one by
one? Is there a heuristic which one to start with, or does
{\UNIFIER} just try them in order from left to right? (I
understand that this doesn't matter for the semantics of
`\/\verb|by|', but I wondered about this.)
\end{quote}
Usually there is one universal premise, but if there are more, the
order is accidental.
\section{}
Going back to the {\CHECKER}: maybe you observed that the rules for the main
premise are not precise. I wrote, purposefully, `external universal
quantifiers', but the phrase has at least two meanings:
\begin{itemize}
\item the universal quantifiers at the beginning of the proposition
\item the universal quantifiers that are not in the scope of an existential
quantifier (here we should presume that negations are pushed to the leaves,
but this time the leaves are nodes labelled by atomic sentences rather
than universal ones)
\end{itemize}
%
In some old {\CHECKER}s we used the first approach; now the second is used.
But then (if we do nothing else) the inference
$$\infer{\forall x y .\, P[x,y]}{\forall y .\, P[a,y]}$$
is not accepted. It is in fact
$$\infer{\forall x y .\, P[x,y]\quad \exists y .\, \neg P[a,y]}{\bot}$$
Because of that we have an additional rule: the {\PRECHECKER} extracts
existential sentences from the disjuncts in the DNF and does existential
elimination (if I recall the name correctly); we rather say that it uses the choice rule
automatically. And after introducing new constants, it creates the DNF again.
Then the inference above is accepted.
The second point: the {\CHECKER} standardizes premises, i.e.~it uses the
distributivity of the universal quantifier with respect to conjunction (and, what is the
same, the distributivity of the existential quantifier with respect to disjunction). It is a
first step towards changing the semantic correlates in Mizar (I believe I already
wrote about that). Of course in the implementation we use a sort of normalized
form, which means that quantifiers are pushed down. I mean that instead
of
$$\forall x .\, P[x] \land Q[x]$$
we have in fact
$$(\forall x .\, P[x]) \land (\forall x .\, Q[x])$$
%
After introducing this (it is comparatively new) we revised MML, but the
gain was very small. I do not remember the numbers, but it was negligible.
However some steps like
$$\infer{\forall x .\, P[x] \iff Q[x]}{\forall x .\, P[x] \to Q[x]}$$
had been eliminated. It also caused the need to eliminate fictitious
variables. This I do not like, but because of
$$\forall x y .\, P[x] \land Q[y]$$
it was necessary.
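This pushing down of the universal quantifier, together with the elimination of fictitious variables, can be sketched in Python (invented representation: an atom carries the tuple of variables occurring in it):

```python
# A toy sketch of distributing a universal quantifier over conjunction
# and dropping it over conjuncts in which the variable is fictitious.

def occurs(x, f):
    """Does the variable x occur (free) in the formula f?"""
    if f[0] == 'atom':
        return x in f[2]
    if f[0] == 'and':
        return occurs(x, f[1]) or occurs(x, f[2])
    if f[0] == 'forall':
        return f[1] != x and occurs(x, f[2])

def push_forall(x, f):
    """forall x (f & g)  becomes  (forall x f) & (forall x g)."""
    if f[0] == 'and':
        return ('and', push_forall(x, f[1]), push_forall(x, f[2]))
    if not occurs(x, f):
        return f     # x is fictitious here, so the quantifier is dropped
    return ('forall', x, f)
```

Applied to $\forall x\, y .\, P[x] \land Q[y]$ one variable at a time, the quantifiers end up only over the conjuncts in which their variables actually occur.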
\intermezzo
\medskip\noindent
This strengthening of the {\CHECKER} caused some funny phenomena. Somebody,
I believe Markus Wenzel (the author of Isar),
asked about the use of Mizar for `pure logic'. I prepared some axiomatic
files for it and wanted to prove
$$((\forall x .\, \mbox{\it Works\/}[x]) \to \mbox{\it WFS\/})
\to \exists x .\, (\mbox{\it Works\/}[x] \to \mbox{\it WFS\/})$$
(the last pair of parentheses is not necessary; I put them in to avoid
misunderstanding). The simplest proof is just to separate cases:
either everybody works, and then from the assumption we have the Welfare State;
or somebody does not work, and then he is the bastard for which it is true
that if he works we have the Welfare State.
Students usually propose the prime minister as the candidate, but that is
wrong -- the tautology is not intuitionistic, and a constructive proof
is impossible \verb|:-)|.
I am lazy, so before proving it I just put a semicolon after the sentence, and
I observed with some horror that it had been accepted. What happened:
after distributing quantifiers the {\CHECKER} got
$$\infer{\neg( ((\forall x .\, \mbox{\it Works\/}[x]) \to \mbox{\it WFS\/})
\to \exists x .\, (\mbox{\it Works\/}[x] \to \mbox{\it WFS\/}) )}{\bot}$$
i.e.
$$\infer{(\neg \forall x .\, \mbox{\it Works\/}[x]) \lor \mbox{\it WFS\/} \quad
\neg \exists x .\, (\neg \mbox{\it Works\/}[x] \lor \mbox{\it WFS\/})}{\bot}$$
i.e.
$$\infer{\neg \mbox{\it Works\/}[x_0] \lor \mbox{\it WFS\/} \quad
\neg (\exists x .\, \neg \mbox{\it Works\/}[x]) \lor \neg \mbox{\it WFS\/}}{\bot}$$
Sorry, I was convinced that we have to distribute quantifiers to get it
accepted. Now, writing down this example, I am not certain. What do you
think?
\intermezzo
\begin{quote}\it
I understand what\/ {\PRECHECKER} does (it brings things in
some canonical form, and leads to a set of conjunctions from
each of which falsity should be derived.)
\end{quote}
It is OK. However, it does a bit more: e.g.~it uses the choice rule by
default, i.e.~it introduces local constants if an existential sentence
is among the premises.
\begin{quote}\it
I also think I understand\/ {\UNIFIER}: it skolemizes ($=$
introduces new function symbols for existentials) and then
takes one universally quantified formula from the conjunction
which it tries to instantiate to get a contradiction.
\end{quote}
No, we do not use skolemization, so processing basically `stops' on
existential quantifiers. To be more precise: an existential sentence is
treated as a whole, and only a contradiction between it (after
instantiation) and a universal sentence is checked.
\begin{quote}\it
So that leaves\/ {\EQUALIZER}. That pass does `congruence
closure' on the equalities in the conjunction. But
how does it interact with\/ {\UNIFIER}? And what happens to
equalities under the universal quantifier that's being
instantiated (are those added to the congruence too?)
\end{quote}
It is true. The {\EQUALIZER} introduces what we call `aggregated constants'
or `equating classes' (we should work on the terminology) and uses them
for instantiation. An aggregated constant is characterized by the list
of its possible forms and their types (all expressed using
aggregated constants).
\begin{quote}
\begin{quote}\it
But how does this interfere with the `calculus of
substitutions' that you told me about?
\end{quote}
\smallskip
Just these aggregated constants are substituted.
\end{quote}
\noindent
Actually it does not add the equalities (from universal sentences), but
it does forward reasoning: it adds inequalities that are consequences
of the premises, e.g., if among the premises occurs
$$(a + b) + c \ne (b + a) + c$$
it adds
$$a + b \ne b + a$$
To avoid calling the {\PRECHECKER} again, we have a `one difference rule', which
says that the inequality is added if it is the only inequality between
two unequal terms (or the only difference between two atomic sentences
with opposite valuation).
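A minimal sketch of the one difference rule, under the assumption that terms are represented as tuples (functor, argument, \dots, argument):

```python
# A sketch of the `one difference rule'.  A term is an atom (a string)
# or a tuple (functor, arg1, ..., argn); the representation is invented.

def one_difference(lhs, rhs):
    """Given the inequality lhs != rhs, return the derived inequality
    (a pair of argument terms), or None if the rule does not apply."""
    if not (isinstance(lhs, tuple) and isinstance(rhs, tuple)):
        return None
    if lhs[0] != rhs[0] or len(lhs) != len(rhs):
        return None
    diffs = [(a, b) for a, b in zip(lhs[1:], rhs[1:]) if a != b]
    if len(diffs) == 1:     # the only difference between the two terms
        return diffs[0]
    return None
```

Applied to $(a+b)+c \ne (b+a)+c$ it returns the pair $(a+b,\; b+a)$, i.e.~the new inequality $a+b \ne b+a$.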
\begin{quote}
\begin{quote}\it
Now I don't understand any more. If you add\/ {\em in\/}equalities,
then you can't represent them as list of possible forms? Or
can you?
\end{quote}
\smallskip
Now I do not understand. Maybe \dots{}
Sentences in Mizar are not terms; we have three classes of (small)
constructions: formulae, terms and types, and they are processed in
different ways. We just keep a list of sentences, and we may add their
consequences to the list.
\end{quote}
\noindent
The {\EQUALIZER} (maybe it is its main role) is also a gate through which
we smuggle requirements and similar `additional but practical' rules.
\begin{quote}
\begin{quote}\it
I suppose you mean things like `symmetry' and so on?
\end{quote}
\smallskip
Yes.
\end{quote}
\intermezzo
\begin{quote}\it
So terms and types are represented by aggregated constants,
and formulae are not? (I mean: equivalent formulae are not
represented by one `aggregated formula'?)
\end{quote}
Only terms. There is no concept of equality of types
(well, you may say that the types $\theta_1$ and $\theta_2$ are equal
if the sentence
$$\forall x : \mbox{\it set}\, .\, \left(x : \theta_1 \iff x : \theta_2\right)$$
holds, but it does not seem useful).
The equivalence calculus was discussed, but we think that the gain of
building it in is minimal.
\begin{quote}\it
So what happens to formulae?
\end{quote}
All terms in formulae and types (and of course in terms \verb|:-)|)
are substituted by their aggregated constants (basically, by a constant I
mean here a ground term).
\begin{quote}\it
Also: how are the clusters integrated into this? Are the
type aggregates extended automatically by the clusters?
\end{quote}
Basically, clusters in types are rounded up in the {\ANALYZER},
but the {\EQUALIZER} tries to round them up again, using the fact that
an aggregated constant has many types (inherited from its members).
\section{}
What I never wrote as yet:
we keep formulae on two lists: $P$ -- positively valued, and $N$ -- negatively
valued. On these lists we keep (all kinds of) atomic formulae and
universal formulae.
But:
\begin{itemize}
\item equalities are removed from the $P$ list; negatively valued ones\footnote{i.e.~%
`{\em un\/}equalities'? inequalities? the same problem exists in Polish, where inequality
means something like `$1 < k$'} are still on the $N$ list
\item negatively valued universal sentences (i.e.~existential sentences) are
removed from the $N$ list (they had been used by the {\PRECHECKER} for the choice
rule).
\end{itemize}
%
After creating the aggregated constants the {\EQUALIZER} checks if it can get a
contradiction:
\begin{enumerate}
\item\label{item:5a} maybe two differently valued formulae become equal
\item\label{item:5b} maybe we got $E \ne E$
\end{enumerate}
\paragraph{ad \ref{item:5a}}
$$\infer{P[a]\quad a=b}{P[b]}$$
after aggregation we get $E = \{a,b\}$:
\begin{center}
\begin{tabular}{lcl}
the $P$ list & $P[E]$ & \\
the $N$ list & $P[E]$ & (actually $\neg P[E]$)
\end{tabular}
\end{center}
so the inference is accepted.
\paragraph{ad \ref{item:5b}}
let us check
$$(a=b \lor c=d) \land (a=c \lor b=d) \to a=d \lor b=c$$
so we have to refute
$$(a=b \lor c=d) \land (a=c \lor b=d) \land
\neg(a=d \lor b=c)$$
The DNF is
$$\begin{array}{l}
a=b \land a=c \land a\ne d \land b\ne c \lor {}\\
a=b \land b=d \land a\ne d \land b\ne c \lor {}\\
c=d \land a=c \land a\ne d \land b\ne c \lor {}\\
c=d \land b=d \land a\ne d \land b\ne c
\end{array}$$
after aggregation:
$$\begin{array}{l}
E_1=E_1 \land E_1=E_1 \land E_1\ne E_2 \land E_1\ne E_1,\\
E_1=E_1 \land E_1=E_1 \land E_1\ne E_1 \land E_1\ne E_2,\\
E_1=E_1 \land E_1=E_1 \land E_1\ne E_1 \land E_2\ne E_1,\\
E_1=E_1 \land E_1=E_1 \land E_2\ne E_1 \land E_1\ne E_1
\end{array}$$
so we get a contradiction in every disjunct, and the inference is accepted.
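The aggregation check of this example can be replayed with a textbook union-find (a sketch, not Mizar's implementation):

```python
# Replaying the aggregation check with a small union-find.

def refutes(equalities, inequalities):
    """True if, after aggregating the equated constants, some
    inequality collapses to E != E (contradiction of kind 2)."""
    parent = {}
    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            x = parent[x]
        return x
    for a, b in equalities:
        parent[find(a)] = find(b)       # aggregate the two classes
    return any(find(a) == find(b) for a, b in inequalities)

# the four disjuncts of the DNF above
disjuncts = [
    ([('a', 'b'), ('a', 'c')], [('a', 'd'), ('b', 'c')]),
    ([('a', 'b'), ('b', 'd')], [('a', 'd'), ('b', 'c')]),
    ([('c', 'd'), ('a', 'c')], [('a', 'd'), ('b', 'c')]),
    ([('c', 'd'), ('b', 'd')], [('a', 'd'), ('b', 'c')]),
]
assert all(refutes(eqs, neqs) for eqs, neqs in disjuncts)
```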
\intermezzo
\begin{quote}\it
The commutativity keyword is taken into account when doing
the aggregation? Are there other properties like that that
affect what is aggregated?
\end{quote}
Actually it happens before the aggregation. The first step of the {\EQUALIZER} (or maybe
the zeroth step) is collecting terms. It is like an initialization of the
aggregation. We want terms to be kept in one copy, so it browses the
formulae, the types, and of course the terms, to prepare a list of the terms that
occur in the inference. The terms themselves are substituted by
(trivial) aggregated constants. But we must remember their structure,
so the object representing a collected term has a field that is the
list of its possible forms.
Collecting the terms of
$$(a+b)+c = a+(b+c)$$
we get
$$\begin{array}{lcl}
E_1 = a && \EqList = \{ a \} \\
E_2 = b && \EqList = \{ b \} \\
E_3 = a+b && \EqList = \{ E_1+E_2, E_2+E_1 \} \\
E_4 = c && \EqList = \{ c \} \\
E_5 = (a+b)+c && \EqList = \{ E_3+E_4, E_4+E_3 \} \\
E_6 = b+c && \EqList = \{ E_2+E_4, E_4+E_2 \} \\
E_7 = a+(b+c) && \EqList = \{ E_1+E_6, E_6+E_1 \}
\end{array}$$
%
It is a trick; it can hardly be used for associativity, and we are
in great need of that.
Also we want to have
\begin{itemize}
\item
involutiveness,
like $$-(-x) = x$$
% and
\item
projectivity,
like $$\abs(\abs(x)) = \abs(x)$$
\end{itemize}
but when we started to implement them, we tried to do it as part of the
aggregation (the trick used for the commutativity cannot be applied, because
equating $-(- E)$ to $E$
must deal with situations like:
$$\begin{array}{l}
a = -b \\
b = -c
\end{array}$$
and to equate $a$ and $c$ we need the aggregation.)
Well, we tried but we failed; the data structures would have to be changed to
do it.
What affects the aggregation:
\begin{enumerate}
\item explicit equalities
\item `default equalities'
(`\verb|take|' and `\verb|reconsider|' introduce a new constant that is equal by
default to the term that is taken or reconsidered)
\item processing structures (if two structures are equal, the corresponding
fields are equal)
\item requirements; it is quite irregular, but typical is equating
$x-0$ and $x$ (requirement \verb|REAL|);
not so typical is
equating a set that is empty with the empty set,
e.g.~$\emptyset\;A$ (\verb|SUBSET_1:def 3|) with $\emptyset$ (\verb|BOOLE:def 1|)
\end{enumerate}
\smallskip
\begin{quote}\it
So formulae are added to these lists by (1) `propagating
inequalities', and (2) by the properties of the predicates
(symmetry, antonym, etc.), and (3) by the requirements?
\end{quote}
Only by propagating inequalities; in the other cases the formula, if
constructed at all, is virtual, and usually it is not constructed.
\smallskip
\begin{quote}\it
For a specific example: if I have a step $3+4=7$ in my proof,
how does {\EQUALIZER} prove that?
\end{quote}
In the object representing a collected term there are two fields (I do
not remember the names):
\begin{itemize}
\item a boolean field `term has an integer value', initialized of course as
false, except for numerals
\item an integer field `the integer value of it' (undefined if it does not have
an integer value, and initialized to the value for (small) integers;
`small' means `\verb|longint|', and we doubt if we should use `\verb|int64|')
\end{itemize}
%
The terms having the same integer value are equated. Then to refute
$$3+4 \ne 7$$
is to refute
$$E_k \ne E_k$$
$E_k$ being the aggregated constant of $\{ 3+4, 4+3, 7 \}$.
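A sketch of this integer-value mechanism (assumed representation: numerals are Python ints, applications are tuples):

```python
# Terms with the same integer value are equated; a term either has an
# integer value or not (None, i.e. the boolean field is false).

def int_value(term):
    """The integer value of a ground arithmetic term, or None."""
    if isinstance(term, int):                # a numeral
        return term
    if not isinstance(term, tuple):
        return None                          # a non-numeric constant
    f, a, b = term
    x, y = int_value(a), int_value(b)
    if x is None or y is None:
        return None                          # `has an integer value' is false
    return {'+': x + y, '-': x - y, '*': x * y}[f]

def same_aggregate(s, t):
    """Would s and t end up in the same aggregated constant?"""
    v, w = int_value(s), int_value(t)
    return v is not None and v == w

assert same_aggregate(('+', 3, 4), 7)        # 3+4 = 7 reduces to E_k = E_k
```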
\smallskip
\begin{quote}\it
\begin{quote}
Basically clusters in types are rounded up in {\ANALYZER},
but {\EQUALIZER} tries to round up them again
\end{quote}
Do both passes use the same `rounding up code' in the
implementation? Just curious.
\end{quote}
I wrote `tries' because there are shortcomings in the implementation.
I should rather have written that the {\PRECHECKER} rounds up the types of the ground
terms introduced by the Choice Rule. Even this seems not to work properly;
there are rare cases in which
\begin{verse}
\dots\\
\verb|proof|\\
\verb| thus |\dots\verb| by |{\footnotesize ++++}\verb|;|\\
\verb|end;|
\end{verse}
cannot be changed to
\begin{verse}
\dots\verb| by |{\footnotesize ++++}\verb|;|
\end{verse}
and I suspect that the rounding up of these terms might be the problem. I
hope to fix it when back in Bialystok. To be precise, I do not know if
the problem is with the rounding up; I just suspect it.
The {\EQUALIZER} just `adjusts' the clusters. It means that it moves
adjectives between the types of the same aggregated constant.
The real rounding up is not done, and it needs a different implementation. The
problem is that the concept of the `type(s)' of an aggregated constant is a
bit more complicated: if it is a functorial term, not only does its aggregated
constant have a `multiple type' (I mean that there may be more than one;
this correspondence is very good for me, but I have to develop the terminology),
but also the arguments are aggregated classes and have multiple types.
Funny that you asked about it.
\intermezzo
\begin{quote}\it
\begin{quote}
{\dots}
equating a set that is empty with the empty set.
e.g.~$\emptyset\;A$ (\/\verb|SUBSET_1:def 3|) with $\emptyset$ (\/\verb|BOOLE:def 1|)
\end{quote}
Which requirement gives the last equality?
\end{quote}
I believe \verb|BOOLE| and \verb|SUBSET| (it was built in, but now it should require
\verb|BOOLE| and \verb|SUBSET|). The `\verb|empty|' attribute is now introduced in \verb|SUBSET_1|.
\begin{quote}\it
So how are things like symmetry, antonym, etc.\ handled? They
operate on predicates, so I guess they don't affect the
aggregation of terms, right? And if they don't add formulas
to the lists, how are they implemented then?
\end{quote}
`\verb|antonym|' and `\verb|synonym|' are only syntactic sugar; the {\ANALYZER} eliminates
them.
Yes, they don't affect the aggregation of terms.
`\verb|symmetry|' and so on (`\verb|reflexivity|', `\verb|irreflexivity|', `\verb|connectedness|')
are processed by both the {\EQUALIZER} and the {\UNIFIER}. It is done in such a way
that all procedures (I hope all) that compare two formulae
take into account that the order of the arguments may be different.
\end{document}