ProofCheck Manual

  1. Scripts
  2. Associated TeX Files
  3. Python Data Files
  4. ProofCheck Directives
  5. Proof Structuring Notation
  6. Syntax of Mathematical Expressions
  7. Unification
  8. Multi-Line Note Processing

A. Scripts

The most important scripts in ProofCheck are parse and check. parse operates on TeX files containing ordinary text and mathematics, presented using TeX and LaTeX macros, and creates a file of parsed proofs which is used by check.

1. audit <source-filename> <theorem-number>

scans a ".pfs" file for both dependencies upon, and dependencies of, a proposition in that file whose number is given as the second parameter. The theorem-number has the form num.num.

2. check <source-filename> <theorem-number or pattern>

takes as a second parameter the number of a theorem whose proof is to be checked. The second parameter may also be a Python regular expression.

Example

check book [2-4]\\.

checks all theorems in a file "book.tex" with numbers beginning "2.", "3.", or "4.". Because a dot is used in regular expression patterns as a wild card, the backslash is necessary. Escaping the dot prevents matches with theorem numbers such as "29.2" or "45.6". If the pattern is just "." then all proofs in the file are checked.

The first parameter is the name of the file containing the proof or proofs to be checked.

For each step of the proof of the specified theorem, check searches the rules file for a rule of inference which justifies the step. If it succeeds it prints the line number of the first such rule found; otherwise it reports that the step does not check.

-c Display cases for debugging purposes.
-i Display full text of each inference rule found.
-l linenum Use only the rule with the specified line number.
-m Report rules used in multi-line transitivity checks.
-r rulesfile Use rules from the specified file.
-t Trace premises for relevant logic.
-v Display detailed output from the unifier.
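For example, a hypothetical invocation checking the proof of Theorem 3.1 in "book.tex" against an alternate rules file might be:

check -r myrules.tex book 3.1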

3. lsdfs <source-filename>

displays the information contained in the ".dfs" file associated with the file named by <source-filename>.

4. lspfs <source-filename>

displays the information contained in the ".pfs" file associated with the file named by <source-filename>.

5. lstrc <source-filename>

displays the information contained in the ".trc" file associated with the file named by <source-filename>.

6. parse <source-filename>

searches for TeX math expressions, propositions and proofs. Any math expression numbered using the "\prop" macro is taken as a proposition. A proof is either a simple justification or a sequence of math expressions marked as notes, using the "\note" macro, together with justifications marked using the "\By" macro, and terminated by a QED, indicated with the "\Bye" macro. If a proposition is followed by a proof then parse attempts to parse the proof as well as the proposition. parse halts when it encounters a math expression or a proof which it cannot parse. Any new definitions encountered are reported as well as the number of proofs, old definitions, and unproved propositions. If no errors are found the parsed propositions and proofs are stored in a ".pfs" file for the use of check. Because new definitions are parsed differently from old definitions, it is sometimes necessary to run parse two times before check can succeed.

In math-only mode parse ignores proof structuring and just tries to parse each TeX math expression. It halts when it encounters an expression that it cannot parse. It uses syntax information stored in the ".dfs" file to do the parse. Any new definitions encountered are reported and the updated information is stored in the ".dfs" file.

-e expression Parse the command line expression using the definitions in the source-file's ".dfs" file.
-m Use math-only mode. Ignore proof structuring information.
-q Use quiet mode. Suppress output.
-r Rebuild the ".dfs" file. Expunges old definitions.

7. renote <source-filename>

renumbers all the notes in each proof so as to make them consecutive. This allows an author to insert notes with a haphazard or arbitrary numbering while a proof is in development.

8. renum <source-filename>

renumbers all the propositions so that they are consistent with the chapter numbering (or section numbering in LaTeX). The renumbering applies to all references anywhere in the document. To avoid catching references to external documents in the renumbering, the dot in these external references may be enclosed in braces: {.}. renum also creates a time-stamped ".rnm" file with a map of the renumbering.
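For example, an external reference written as "3{.}4a" is left untouched by renum, while an internal reference such as "3.4" is subject to renumbering.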

B. Associated TeX Files

Besides the source file, other TeX files are needed in order for parse and check to work. Except for the TeX definition files, they should all carry a ".tex" extension.

1. TeX Definition Files (.tdf or .ldf)

The primary purpose of the TeX definition files (or LaTeX definition files) is to contain TeX \def macros for the mathematical constants the author needs to define. These macros define the typesetting of the constants introduced by the author in the source file. There are typically many of these macros and it avoids clutter to put them in a file separate from the TeX source file. Each TeX source file should have an associated ".tdf" or ".ldf" file.

A TeX definition file is itself a TeX file and should be invoked with an \input statement near the beginning of the source file. This file should also be used to store any ProofCheck Directives the author needs in order to customize the operation of ProofCheck.

When definitions contained in another source file are implicitly relied upon, the associated TeX definition file should be input as well.

One essential TeX definition file is utility.tdf (or utility.ldf for LaTeX). The line

\input utility.tdf

needs to be one of the first statements in a source file containing proofs. This file contains the Proof Structuring Notation macros needed to allow check to run.

A source file will typically therefore have at least a few TeX definition files input at the beginning. Such files are therefore a kind of header file.
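The beginning of a source file "mytext.tex" might therefore look, hypothetically, like this:

\input utility.tdf
\input mytext.tdf
\input common.tdf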

2. Rules of Inference Files

Rules of Inference files are identified with the rules_file: directive.

Example

%rules_file: rules.tex

In a rules of inference file any line having the format of a rule of inference is taken as a rule of inference. The format of a rule of inference is:

<mathematical formulas and reference punctuators> \C <goal>

where the reference punctuators come from the following list:

( ) + , - : ; | U G H S

and the goal and other mathematical formulas are dollar delimited and \C is the turnstile. Other lines are effectively comment lines. The modus ponens rule, for example, can be stated as follows:

$(\pvar \c \qvar)$;$\pvar$ \C $\qvar$

which is TeX for

(p --> q) ; p \C q

The sole reference punctuator used in this rule is the semi-colon. A note can be justified by the modus ponens rule if it carries a justification such as:

\By 1.2;.3

and the expression formed by the note followed by Theorem 1.2, the semi-colon, and note 3 can be unified with the modus ponens rule. In order for the unification to succeed, Theorem 1.2 must have the form (p --> q), where note 3 is p and the note being justified is q.

The punctuation in a justification must exactly match that of the rule of inference it invokes. The goal and the propositions referred to in the justification are matched against the corresponding formulas in the rule of inference, and the unifier looks for a match. A different rule of inference file may be used if its name is supplied using the -r option of check. Variables near the beginning of the alphabet, those in the list fixed_variables, are fixed when given to the unifier, and other variables must be replaced to match them (and not vice versa).

3. External Reference Files

Numerical references in proofs to theorems in external files are distinguished from references internal to the document in that they end with lower case letters or begin with a zero.

Example

To set up the file common.tex as the file referred to by references beginning with zero the following directive is used:

%external_ref: 0 common.tex

Given that references from this file are used in proofs, its notation needs to be understood by check. Consequently its TeX definition file needs to be input at the top. When a ".dfs" file is initialized, it is merged from the ".dfs" files associated with all the included ".tdf" (or ".ldf") files.

4. Logic Development Files

In logic development files, theorems are proved and rules of inference are derived in the same file. Derivations of rules of inference are sequences of steps which form a template for replacing the use of a derived rule in a proof with steps not using the rule. Proofs and derivations may depend only on previously established theorems and rules of inference, both primitive and derived. Primitive rules, rules which cannot be derived, include not only rules like modus ponens but also deduction theorems which can be used as rules of inference. External references are not used.

The purpose of these files is to close the gap introduced by using rules of inference files containing a large number (over a thousand) of rules of inference. When these are introduced manually, errors inevitably intrude. Logic development files are also useful in developing non-standard logics, such as constructive logic or relevant logic, which can be counter-intuitive.

C. Python Data Files

These files are Python pickle files; their contents can be read by scrolling the output of the corresponding ProofCheck ls script. Pickle files should not, of course, be edited directly by the user. Their names are the same as that of the source file except that the ".tex" extension is replaced by ".dfs", ".pfs", or ".trc".

1. .dfs File

This file records syntax information from the mathematical definitions contained in the source file as well as other information. It is initialized by running parse on the source file. It is a Python pickle file. The information in it is stored cumulatively. To "undo" a definition stored in the ".dfs" file parse should be run using the "-r" option.

2. .pfs File

This is a large file containing a parse of each proposition and proof found in the TeX source file. It is produced by parse and used by check.

3. .trc File

This file contains tracing information for rules of inference checked using the tracing option. It is produced and used by check, but only when the "-t" option is invoked.

D. ProofCheck Directives

The percent sign begins a TeX comment. ProofCheck directives use this feature to avoid being displayed. The mathematical symbols or expressions occurring in ProofCheck directives must not be enclosed in TeX dollar signs.

A ProofCheck directive may be located in a source file or in the associated ".tdf" or ".ldf" file.

1. %associative: <symbol> [<line-number>]

instructs the unifier to treat the (infix) symbol as an associative operator. The optional line number specifies the line at which the use of the associative assumption by the unifier begins.
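Example (hypothetical, assuming the associativity of \cup has been established)

%associative: \cup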

2. %def_precedence: <number> <handler> [<expander>]

associates to each numerical precedence value a handler, a Python function which determines whether an infix expression whose operators all have the given precedence is valid. The optional expander is a re-write function.

3. %def_symbol: <user-symbol> <support-file-symbol>

allows an author to set up a symbol which will function as the equivalent of a symbol used in ProofCheck's default supporting files, such as the rules of inference or common notions files.

Example: The inverted letter 'A', written in TeX as '\forall', is generally used as the universal quantifier. The default rules of inference file uses an inverted 'V', written as '\Each'. To use '\forall' in documents an author may define:

%def_symbol: \forall \Each

4. %def_tracer: <line-number> <tracer-expression>

In relevant logic, each rule of inference has an ancestor tracing rule. In ProofCheck these are stated using Python set operators. For derived rules of inference they are computed, but for primitive rules of inference they must be given.

Example: if modus ponens occurs on line 43 of a logic development file, then the associated ".tdf" file should contain the directive

%def_tracer: 43 (0 | 2)

These directives are used by check to initialize the ".trc" file.

5. %external_ref: <suffix> <external-filename>

defines a short suffix to be appended to numerical theorem references which refer to theorems in files other than the one whose proof is being checked. The short suffix must consist of lower case letters.

Example: Suppose that an author needs to use Theorem 3.4 from a file "limits.tex." To set up the letter 'a' as an abbreviation for this external file the following definition could be used:

%external_ref: a limits.tex

This would enable the author to invoke Theorem 3.4 in a proof using the reference "3.4a".

6. %flanker: <symbol>

creates a symbol which is parsed like the absolute value symbol.
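Example (with a hypothetical symbol \norm to be parsed like the absolute value bars)

%flanker: \norm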

7. %formula_definor: <symbol>

establishes the symbol to be used to define formulas.

8. %major_unit: <latex-sectioning-unit>

is used with LaTeX to determine which LaTeX sectioning unit the major number of ProofCheck major.minor theorem number pairs should keep in step with. The default is section. Other possibilities are chapter, subsection, etc.
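Example

%major_unit: chapter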

9. %notarian_formula: <math expression>

These expressions, codified by Morse, must begin with a unique constant and end with "x;" and two schematic expressions:

Example

%notarian_formula: \Some x ; \qbar x \pbar x

10. %notarian_term: <math expression>

These expressions are similar to the notarian formulas. Summation, for example, is a bound variable term which can be handled using notarians:

Example

%notarian_term: \sum x ; \qbar x \pbar x

11. %reductions_file: <reductions-file-name>

defines a file containing inference rules which are to be used as reductions. The directive may be used more than once, thereby specifying more than one file.

12. %rules_file: <rules-file-name>

specifies a file whose contents are to be used as inference rules. The directive may be used more than once, thereby specifying more than one file.

13. %set_precedence: <infix-symbol> <number>

is used to assign a precedence value to an infix math-symbol.

Example: Suppose an author wanted to set up an infix operator \lessthan with a precedence value of 6. The following assignment could be made:

%set_precedence: \lessthan 6

This would allow for the infix usage of \lessthan.

A concatenation of infix symbols used as one is called a nexus. To use \lessthan = as a nexus of precedence 6 the following assignment suffices:

%set_precedence: \lessthan = 6

Symbols joined to form a nexus must each have their own precedence values.

14. %speechpart: <infix-symbol-or-precedence-value> <kind>

assigns a kind to infix operators, where <kind> may be verb, conjunction, or reset.
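Example (hypothetical, using the \lessthan operator from above)

%speechpart: \lessthan verb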

15. %term_definor: <symbol>

establishes the symbol to be used in definitions of terms as well as in assignment statements used in proofs.

16. %transitive: <rules-filename> [<line-number>-<line-number>]

establishes a file of rules of inference, or an optional range of lines in a file, as rules to be searched when transitivity steps need justification.
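Example (with a hypothetical line range)

%transitive: properties.tex 100-200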

17. %undefined_formula: <math-expression>

is used when a formula is to be taken as primitive, for whatever reason.

18. %undefined_term: <math-expression>

is used when a term is to be taken as primitive, for whatever reason.

19. %variable: <symbol>

makes <symbol> into a mathematical variable. This can only be done if TeX treats it as a token.
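Example (using a symbol which TeX treats as a single token)

%variable: \ell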

E. Proof Structuring Notation

ProofCheck needs only a few TeX macros to define a proof structuring syntax. The following macros are defined in the utility.tdf file.

1. \By <reference-expressions and reference-punctuators>

is the general form of most justifications. For more on justifications see the section on Justifications.

2. \Bye <reference-expressions and reference-punctuators>

signals the end of a proof. It is rendered with a QED.

3. \chap {<num>. <chapter-title>}

is used for chapter titles in a TeX file. In a LaTeX file, LaTeX section headers are used instead. Proposition numbers are arranged according to the chapter (or section numbers). The author need not enter these numbers consecutively since renum will renumber chapters and propositions so as to make them consecutive.

4. \linea, \lineb, \linec,..., \linez

These are the line indentation macros. The first seven of these 26 macros produce a line break followed by a prescribed indentation, 20 points for \linea, 40 points for \lineb, 60 points for \linec, etc. The others may be defined as needed by the author.
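A hypothetical definition following the same pattern, giving \lineh an indentation of 160 points, would be:

\def\lineh{\hfil\break\hskip 160pt}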

TeX takes expressions enclosed in dollar signs as mathematical expressions. Syntactically complete terms or formulas are sometimes too long to be conveniently handled as single TeX math expressions. ProofCheck allows complete terms or formulas to be broken into pieces provided these pieces are separated only by white space, these line indentation macros, and in the case of multi-line notes, justifications.

5. \noparse

turns off the parser for the remainder of the line in which it occurs. This is useful since it is sometimes necessary to use mathematical symbols or notation out of context in a way that cannot be parsed.

6. \note <note-number> <note> [<justification>]

A note-number is any positive integer of no more than four digits. It is rendered with an initial dot. The note is some mathematical assertion used in the proof. A justification begins with a \By macro.
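Example (a hypothetical note with a justification)

\note 3 $(x \in A)$ \By 1.2;.2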

7. \prop <proposition-number> <proposition> [<justification>]

Every definition, axiom, or theorem which needs to be referred to later must be introduced in this way. A theorem may be followed by a proof.

The proposition-number must take the form num1.num2, where num1 is the chapter number (or section number in LaTeX).

The proposition must be enclosed in TeX dollar signs. It may be broken into more than one math expression so as to span multiple lines, but only if these expressions are separated by line indentation macros.
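Example (a hypothetical proposition numbered within chapter 4)

\prop 4.2 $(x + y = y + x)$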

F. Syntax of Mathematical Expressions

1. Types of Symbols

The parser recognizes two kinds of symbols: constants and variables. There are in turn four types of variables: term variables and formula variables, each of the first order and of the second order, with first order term variables, also called object variables, by far the most common. The other three types are sentential variables, which are formula variables of the first order, term schemators, and formula schemators. When variables are referred to without qualification, object variables are meant.

a. Object Variables

At present the parser recognizes single Latin letters, LaTeX's Greek letters (except omicron), and primed, subscripted, and superscripted versions of these.

Examples of Variables:

'x', 'x_1', 'y\p', '\alpha^0_b'


Other forms of decoration such as stars and overbars are not currently supported.

b. Sentential Variables

Sentential variables are variables which range over sentences. They are first order formula variables and are used to state theorems of logic and rules of inference.

Examples of Sentential Variables:

'\pvar', '\qvar', '\rvar'.

They are rendered italicized and underlined.

The modus ponens rule of inference, for example, is stated using sentential variables as follows:

$\qvar$ <= $(\pvar \c \qvar)$; $\pvar$

c. Predicate Variables

A predicate variable is a variable which is used to initiate a predicate expression.

For example, the expression '\pbar x' represents a formula which at least potentially depends on x. The use of explicitly second order expressions of this nature allows predicate logic to be stated without the use of schemes. Symbols such as '\pbar' are not primed and have arity 1, meaning that they are followed by a single term. Symbols such as '\pbarp', which appear singly primed, have arity 2, meaning that they are followed by two terms. Symbols such as '\pbarpp' are doubly primed, etc. The basic form for universal quantification is:

\Each x \pbar x

d. Function Variables

A function variable, or schemator, is a second order variable which is not quantified over. These are used in rules of inference and in definitions of terms or formulas containing bound variables.

Examples of Function Variables:

'\ubar', '\ubarp', '\ubarpp', '\vbar'.

The number of primes is one less than the arity of the schemator. The arity is the number of terms which must follow it.

They are rendered underlined but not italicized.

Term schemators occur in basic forms that contain bound variables. The basic form for indexed union, for example, is:

\bigcup x; \pbar x \ubar x

It denotes the union of all the \ubar x such that \pbar x is true, provided that these all exist. '\ubar x' represents a term which at least potentially depends on x. As with formula schemators '\ubar' has arity 1, '\ubarp' has arity 2, etc.

e. Constants

A constant is any symbol which is not a variable.

Although some constants are existing characters such as '+' and '=', most constants must be constructed using macros. TeX, and especially LaTeX and AMS-TeX, already have a large number of symbols defined by means of such macros. Examples of these are '\lt' for '<' and '\sin' for 'sin'. The macro for the sine function in TeX's file plain.tex (except for a \nolimits qualifier) is

\def\sin{\mathop{\rm sin}}

Without this definition TeX will italicize these letters. Worse yet, ProofCheck's parser might then interpret these three letters as a product of three variables! So symbols of this kind need to be established with TeX macros like the one shown for the sine function. An author will undoubtedly find the need for constants of this nature. The required TeX macros may be added to the source document itself or to an associated .tdf file.
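For example, an author needing a constant '\Point' might, following the pattern of the sine macro above, hypothetically write:

\def\Point{\mathop{\rm Point}}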

2. Basic Forms

The basic forms are the elementary terms and formulas from which the language is generated by substitution. Basic forms result either from definitions -- the left side of any definition is automatically a basic form -- or they may be simply declared.

Because definitions are used as propositions internal to the inferential system along with axioms and theorems, the substitutions which generate the complex terms and formulas from the basic terms and formulas must necessarily be those which are inferentially valid.

ProofCheck's grammar does not accommodate contextual definition. For example one cannot define addition of classes [x] by defining ([x] + [y]). Here the expression ([x] + [y]) is a composite involving the distinct basic forms (A + B) and [x]. These need to be defined separately. Similarly the commonly used definition of limit, ((limit x f(x) = L) if and only if for every epsilon, etc.), is contextual and not admissible. The left side contains previously defined basic forms for equality, (A = B), and functional evaluation, f(x). To define the limit as the number L having such and such a property one can use a definite description.

a. Origin of Basic Forms

It is required that definitions take the form

(p equiv_sym q)

or

(x ident_sym y)

We agree that a basic formula is either the left side of a definition whose definor is 'equiv_sym' or an expression explicitly declared to be an undefined formula, and that a basic term is either the left side of a definition whose definor is 'ident_sym' or an expression explicitly declared to be an undefined term. A basic form is either a basic term or a basic formula.

b. Undefined Form Declarations

When a basic form is needed and not defined, either because it is truly primitive, such as '(x \in y)', or just a form whose definition is inconvenient, it may be declared using a ProofCheck directive:

%undefined_term: <new-term>

Undefined basic formulas are added using

%undefined_formula: <new-formula>

For example the file geom.tex contains the declaration:

%undefined_term: \Point

c. Schematic Expressions

A schematic expression is a schemator of arity n followed by n not necessarily distinct (object) variables.

d. Rules for Basic Forms

Rules on what constitutes an acceptable basic form are needed to prevent ambiguities and contradictions.

Occurrences of Variables in a basic form:

  1. No sentential variable occurs more than once.
  2. No schemator appears more than once.
  3. Every occurrence of a schemator is within a schematic expression (as the initial symbol).
  4. Any variable which occurs more than once occurs in some schematic expression and at least once not within a schematic expression.

The index variables of a basic form are those occurring more than once. The schematic expressions in which an index variable occurs constitute the scope of that variable. The most common basic forms contain no index variables at all. Basic forms with more than one index variable are uncommon.

There are three more rules for basic forms. These rules apply to the set of basic forms as a whole. Their statement depends on some additional terminology.

e. Signatures

We agree that the signature of a schematic expression is obtained by replacing each variable by T. If its initial symbol is a term schemator it is called a schematic term signature and if its initial symbol is a formula schemator it is called a schematic formula signature.

We agree that the signature of a basic form is the expression obtained by replacing:

  1. Each sentential variable by F,
  2. Each variable occurring just once by T,
  3. Each schematic expression by F or T according as its initial symbol is a formula schemator or a term schemator,
  4. Each remaining variable by V.

We note that the signature of a basic form contains no variables, only constants and the three symbols T, F, and V which will be used as grammatical non-terminal symbols.
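As a worked illustration, take the indexed union basic form shown earlier, \bigcup x; \pbar x \ubar x. The schematic expression \pbar x is replaced by F, the schematic expression \ubar x by T, and the remaining occurrence of x, a variable occurring more than once, by V, giving the signature:

\bigcup V ; F T

Similarly the signature of the basic formula (\pvar \c \qvar) is (F \c F), since each sentential variable is replaced by F.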

f. Terms and Formulas

The terms and formulas can be specified using formal grammars or by using elementary recursion.

i) Using context free grammars:

Three non-terminal symbols are needed: T, F, and V.

Let P be the set of production rules:

T -> b for all signatures b of basic terms and all schematic term signatures b

F -> b for all signatures b of basic formulas and all schematic formula signatures b

T -> V

F -> v for all sentential variables v

V -> v for all object variables v

Let S, the set of terminal symbols, be the set of variables of all four kinds and all constants occurring in these production rules.

We then have the context free grammars:

({T,F,V}, S, P, T)

which generates the terms and:

({T,F,V}, S, P, F)

which generates the formulas.

ii) Alternate formulation

Without using the terminology of formal grammars and signatures we can define the sets of terms and formulas as follows:

The expression A is a variant of the expression B if and only if A can be obtained from B by replacing variables by variables and vice versa.

  1. Object variables, basic terms and schematic expressions whose initial symbol is a term schemator are terms.
  2. Sentential variables, basic formulas and schematic formula expressions whose initial symbol is a formula schemator are formulas.
  3. If B is a basic term and b is an object variable occurring in B just once, or B is a schematic expression whose initial symbol is a term schemator, and b is replaced in B by a term, then the result is a term.
  4. If B is a basic formula and b is an object variable occurring in B just once, or B is a schematic expression whose initial symbol is a formula schemator, and b is replaced in B by a term, then the result is a formula.
  5. If B is a basic term, b is a sentential variable occurring in B, and b is replaced in B by a formula, then the result is a term.
  6. If B is a basic formula, b is a sentential variable occurring in B, and b is replaced in B by a formula, then the result is a formula.
  7. If B is a basic term and b is a schematic expression occurring in B, then the result of replacing b by a term or formula, a term if the initial symbol of b is a term schemator, a formula if the initial symbol of b is a formula schemator, is a term.
  8. If B is a basic formula and b is a schematic expression occurring in B, then the result of replacing b by a term or formula, a term if the initial symbol of b is a term schemator, a formula if the initial symbol of b is a formula schemator, is a formula.
  9. Any variant of a term is a term.
  10. Any variant of a formula is a formula.
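For example, starting from the hypothetical basic term (x + y), rule 3 permits replacing the variable x, which occurs just once, by the term (u + v), yielding the term

((u + v) + y)

and rule 9 then yields variants such as ((a + b) + c).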

iii) Unambiguity and prefix-free properties

We seek to require that the set of terms and formulas be unambiguous.

We also seek to impose the requirement that no term or formula can be an initial segment of any other term or formula.

Suppose we consider the production rules in the set P above whose right sides contain no sentential variables, no schemators, and no object variables. We then require that none of the terms or formulas resulting from the application of these rules alone be an initial segment of any other. We state this requirement as rule 7 below.

iv) Prefix Unification

Let P' be the set of production rules:

T -> b for all signatures b of basic terms

F -> b for all signatures b of basic formulas

T -> V

If A and B are signatures of basic forms then we say that A prefix unifies with B if there are expressions A' and B' such that, using the rules of P', A' is derived from A, B' is derived from B, and A' is an initial segment of B'.
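For example, if the expressions '\op T' and '\op T + T' were both, hypothetically, signatures of basic terms, then '\op T' would prefix unify with '\op T + T': taking A' to be the first and B' to be the second, A' is already an initial segment of B'. Rule 7 below excludes such pairs.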

g. Rules for the Set of Basic Forms

A basic form A subsumes a basic form B if A is B or B can be obtained from A by successively replacing index variables of A by index variables of A.

On the set of basic forms we require that

  5. If A and B are basic forms with the same signature then there is a basic form C with distinct index variables such that C subsumes A and C subsumes B.
  6. If x and y are index variables of a basic form B and A is obtained by replacing x by y in B then there is a basic form C which is a variant of A.
  7. No signature of a basic form prefix unifies with the signature of another.

This last requirement is imposed for the sake of guaranteeing that the resulting language be unambiguous and that no term or formula begin with another.

Although the statement of these last three rules has been deferred until after the specification of terms and formulas, it should be understood that these statements in no way depend on the notions of term and formula.

h. Typical Formats

The simplest way to satisfy rule 7 is to begin every basic form with a unique constant. The result is so-called Polish notation. Although this notation is not widely adopted, many forms with a unique initial constant do exist.

Prefix Operators

In common.tex many prefix operators are also used. It should be noted that these operators are typically not followed by parentheses. For example the minimum of a set A is written '\Min A', not '\Min(A)'.

Probably the most unconventional notation here is the notation for functional evaluation: '\.fx'. Morse's prefixed dot notation is used because attempting to formalize the more conventional "infix" notation f(x) would entail the use of outer parentheses (f(x)) as well as non-standard definitions for the forms (fx) and (x).

Output character replacement

Another non-standard notation used in common.tex is 'sb x' for the set of subsets of x. To get the symbol represented by the more conventional {\mathcal{P}} to appear in the DVI file it suffices to replace the macro

\def\sb{\mathop{\rm sb}}

in the file common.tdf with the macro

\def\sb{\mathop{\mathcal {P}}}

Term Constants

A term is any mathematical expression whose role, like that of a noun in English, is to denote an object. A term constant is a term consisting of a single constant. An example of a term constant defined in common.tex is the symbol '\e' for the empty set. There are many other term constants already defined in common.tex.

Using Your own Macros

Since there is another symbol already in LaTeX for the empty set, '\emptyset', an author may prefer to use it rather than ProofCheck's '\e'. Since stored theorems relating to the empty set, at least those in the file common.tex, use '\e' it may be important that the text seen by ProofCheck use this '\e'. To set up a replacement of one symbol by another, so that an author may use a symbol which is translated to another which is seen by ProofCheck, use the ProofCheck macro:

%def_symbol: \emptyset \e

This line, ignored by TeX because it is a TeX comment, may be stored either in the document file itself or in a ".tdf" or ".ldf" include file. This is a file of TeX and ProofCheck macros which is implemented with a statement such as

\input mytext.tdf

early on in a TeX file such as mytext.tex.

Miscellaneous Constants

Forms whose constants do not fall into any of the above categories do occur in common.tex. The following are examples:

{x}, {x,y}, {x,y,z}; The singleton, the doubleton, etc.

|x|; The absolute value.

\lim x \to y \ubar x; The limit.

3. Free and Bound Variables

A variable is free in a basic form if it occurs no more than once. It is free in other words if it occurs in none of the schematic expressions of the basic form.

A variable is free in a term or formula if every occurrence is a free occurrence. This is the case if it never occurs in a location occupied by a V in the signature of a basic form. In particular a variable that does not occur at all is free.

A basic fact noted by Morse is the following: If a variable v is free in a term or formula F and v is replaced by a term or formula G then the result is a term or formula in which any variable x is free if and only if it is free in both F and G.

A variable is bound in a basic form if it does not occur exactly once.

A variable is bound in a term or formula if every occurrence is a bound occurrence.
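For example, in the basic form \Each x \pbar x the variable x occurs twice and is therefore bound, while in the basic form (x + y) each of x and y occurs exactly once and is therefore free.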

If s is a schematic expression occurring in a basic form B, and a term or formula is obtained by replacing s by a term or formula F, then a variable x is bound in the result if and only if x is bound in B and, if x is not bound in F, x occurs in s. (It is assumed that F is a term if and only if the initial symbol of s is a term schemator.)

4. Rules for Definitions

If L and R are the left and right sides of some definition, then:

  1. Every object variable is bound in L if and only if it is bound in R.
  2. Every sentential variable occurs in L if and only if it occurs in R.
  3. Every schemator occurs in L if and only if it occurs in R.

On the set of definitions we require that:

  1. If L is the left side of definitions D and D' then D is D'.

5. The Abbreviation Layer

The grammar also has features to allow automatic removal of some parentheses, conventional abbreviations used in connection with the scope of quantifiers, and some other abbreviations. These abbreviational conventions can be enabled using definitions generated by schemes.

a. Infix Operator Conventions

When the left side of a definition is enclosed in parentheses it is assumed that infix operator notation is being defined. Operators occurring in such a definition must have a precedence assigned.

i) Precedence Declarations

The precedence level of an infix operator may be set using the following ProofCheck declaration:

%set_precedence: <infix operator> <number>

This declaration must occur in either the same file as the definition or in an included ".tdf" file.

ii) Common Infix Operators

The expression set defined in common.tex includes several infix operators used in elementary logic and set theory. Most of the plain TeX macros for set theoretical expressions can be used, for example: \in, \cap, \cup, the plain TeX macros for elementhood, intersection, and union respectively. In the precedence table a numerical precedence value is assigned to each of these operators.

It is important to keep in mind that these infix operators will parse correctly only when they occur inside at least one set of parentheses. For example the expressions

(x + y = z)

(x \in y \cup z)

parse without difficulty, but neither

(x + y) = z

nor

x \in (y \cup z)

parse because of the lack of outer parentheses.

One apparent omission from this table is mention of left vs. right associativity. The parser however does not automatically group operations of equal precedence either to the left or to the right. Repeated and mixed operations of equal precedence are in general left to the user to define.

iii) Operators with a Precedence of 6

The special precedence value 6 is given to the comma and to verbs such as '<'.

Implied Conjunctions

When more than one verb is used, as in the expression

(x < y = z),

it is parsed as a conjunction:

(x < y) and (y = z).

Pluralizing Verbs

When a verb is preceded by a comma which is in turn preceded by comma-separated terms, as in the expression

(x,y,z,< A),

it is parsed as:

(x < A) and (y < A) and (z < A).

Triangular Modifier Expansion

When a pluralized verb is preceded by a comma and another verb, as in the expression

(x,y,z,<,in, A),

it is parsed as:

(x < y) and (x < z) and (y < z) and (x in A) and (y in A) and (z in A).

b. Bound Variable Scope Conventions

Quantifiers are the archetypal bound variable operators. In the Polish tradition other bound variable operators are given syntactically similar treatment. This means that the set forming operator, the summation operator, indexed unions and intersections, etc., all become syntactic analogs of the quantifiers. Bound variable operators that are handled in this way are, following Morse, called notarians. It is not required, however, that all bound variables occur in forms introduced by a notarian. Other bound variable forms do exist. But the syntactic variation made available by implementing a definition using a notarian is almost always a decisive argument in its favor.

Whether a form containing an index variable is a notarian form or not depends on its definition.

Syntactic variation for notarians can be illustrated in the case of the universal quantifier.

So just as we can write

\Each x (x < B)

we can also write:

\Each x < A (x < B)

\Each x < A;(x \ne y) (x < B)

or:

\Each x,y,< A (x + y < B)

or:

\Each x,y,< A; (y > x) (x + y < B)

New bound variable operators may be introduced as notarians, and thereby get automatically generated variants.

i) Formula Indexing Notarians

If a bound variable operator indexes a formula, as for example a quantifier, a description, or the set-forming operator, then that operator will be treated by ProofCheck as a notarian if and only if its basic form has the following structure:

<bound-variable-operator> x \pbar x

ii) Term Indexing Notarians

If a bound variable operator indexes a term, as for example in summation or indexed union then that operator will be treated by ProofCheck as a notarian if and only if its basic form has the following structure:

<bound-variable-operator> x; \pbar x \ubar x

Basic forms containing bound variables are by no means restricted to these very specific forms. However, not to use them means laborious definition of minor syntactic variations. Suppose for example that summation were defined using the form:

\sum x \ubar x

In that case another definition would be needed for

\sum x \in A \ubar x

and another for

\sum x \in A \cup B \ubar x

etc.

For notarians which index terms (not formulas) there is also a deferred scope form as in:

\bigcup (x \cap y)\ls x \in A \rs

where \ls and \rs stand for left scope and right scope brackets.

6. Summary

Lists of the symbols occurring in the common notions file are available separately for each of the following categories: infix operators, prefix operators, bound variable operators, constants, and variables.


G. Unification

Almost all of the work of ProofCheck is done by the unifier.

1. Second order unification

The unifier handles first and second order term and formula variables. First order term variables, ordinary object variables in other words, may occur bound in expressions submitted to the unifier.

2. Variables as Constants

Variables in expressions sent to the unifier must be distinguished according to whether they are subject to substitution in order to achieve unification or whether they are to be treated as constants. Generally speaking the variables in a goal being checked are treated as constants, while those in rules of inference and theorems are not.

3. AC-unification

The unifier is an AC-unifier: infix operators whose associative and commutative properties have been stated in theorems stored in the properties file are matched with these properties allowed for in the unification.

Example

(p \And q \And r) matches (r \And p \And q)

In matches between such conjunctions, a sentential variable in one may match with a subset of the conjuncts in the other.

4. Searches Refused

If a rule of inference contains a conjunction (p \And q) where p and q are not in any way distinguished, then a search for a match of this conjunction with n conjuncts would require searching over all subsets of these n conjuncts. These searches are refused by the unifier. No match will be found. Rules of inference which entail such searches are effectively ignored. The unifier is therefore incomplete as a matter of policy.

Searches in analogous situations are also refused.

Rules of inference need to be stated without the use of unnecessary variables.

H. Multi-Line Note Processing

A multi-line note is a note which occupies more than one line in the source file and at least one of its continuation lines begins with a transitive operator.

1. Transitive Operators

An operator is a transitive operator if there is a theorem stating its transitive property stored in the properties file. By default the properties file is properties.tex. According to this file the implication sign, for example, is a transitive operator.

Only infix operators can be recognized as transitive.

2. Parsing

A multi-line note is broken into subgoals called delta goals which are combined to form sigma goals.

Example

(p \And q \c x < y

< z

< w)

The first line is parsed as:

(p \And q \c x < y)

This goal is referred to as a delta goal. The second line parses into two goals, a delta goal:

(p \And q \c y < z)

and a sigma goal:

(p \And q \c x < z)

The third line also parses into a delta goal:

(p \And q \c z < w)

and a sigma goal:

(p \And q \c x < w)

When a multi-line note is invoked later in a proof it is the last sigma goal which is referred to.

3. Checking

Steps in a multi-line proof require on average two goals to be checked instead of just one. A multi-line note with n steps produces (2n - 1) goals.
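In the example above, for instance, n = 3: the three delta goals and the two sigma goals together make 2(3) - 1 = 5 goals.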

Justifications on each line of a multi-line proof are attached to the delta goals. More abstract transitivity theorems, in particular those in the properties file, are invoked in order to check the sigma goals. The search for these is an exception to the general rule that ProofCheck does not search for theorems.