2. Propositional logic

2.1. What is a proposition?

A proposition is a definitive statement which we may be able to prove. In Lean we write P : Prop to express that P is a proposition.

We will later introduce ways to construct interesting propositions, i.e. mathematical statements or statements about programs, but for the moment we will use propositional variables instead. We declare in Lean:

variables P Q R : Prop

This means that P Q R are propositional variables which may be substituted by any concrete propositions. For the moment it is helpful to think of them as statements like “The sun is shining” or “We go to the zoo.”

We introduce a number of connectives and logical constants to construct propositions:

  • Implication (→), read P → Q as if P then Q.

  • Conjunction (∧), read P ∧ Q as P and Q.

  • Disjunction (∨), read P ∨ Q as P or Q.

    Note that we understand or here as inclusive: it is ok if both are true.

  • false, read false as Pigs can fly.

  • true, read true as It sometimes rains in England.

  • Negation (¬), read ¬ P as not P.

    We define ¬ P as P → false.

  • Equivalence (↔), read P ↔ Q as P is equivalent to Q.

    We define P ↔ Q as (P → Q) ∧ (Q → P).

As in algebra we use parentheses to group logical expressions. To save parentheses there are a number of conventions:

  • Implication and equivalence bind weaker than conjunction and disjunction.

    E.g. we read P ∧ Q → R as (P ∧ Q) → R.

  • Implication binds stronger than equivalence.

    E.g. we read P → Q ↔ R as (P → Q) ↔ R.

  • Conjunction binds stronger than disjunction.

    E.g. we read P ∧ Q ∨ R as (P ∧ Q) ∨ R.

  • Negation binds stronger than all the other connectives.

    E.g. we read ¬ P ∧ Q as (¬ P) ∧ Q.

  • Implication is right associative.

    E.g. we read P → Q → R as P → (Q → R).

This is not a complete specification. If in doubt use parentheses.
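
We can ask Lean to check such expressions with #check. Here is a small sketch (assuming the variables P Q R : Prop declared above); #check only confirms that these are well-formed propositions, and the comments record how they are parsed according to the conventions:

#check P ∧ Q → R      -- read as (P ∧ Q) → R
#check ¬ P ∧ Q        -- read as (¬ P) ∧ Q
#check P → Q → R      -- read as P → (Q → R)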

We will now discuss how to prove propositions in Lean. If we are proving a statement containing propositional variables then this means that the statement is true for all replacements of the variables with actual propositions. We say it is a tautology.

Tautologies are sort of useless in everyday conversations because they contain no information. However, for our study of logic they are important because they exhibit the basic figures of reasoning.

2.2. Our first proof

In Lean we write p : P for p proves the proposition P. For our purposes a proof is a sequence of tactics affecting the current proof state, which consists of the assumptions we have made and the current goal. A proof begins with begin and ends with end, and every tactic is terminated with ,.

We start with a very simple tautology P → P: If P then P. We can illustrate this with the statement if the sun shines then the sun shines. Clearly, this sentence contains no information about the weather, it is vacuously true, indeed it is a tautology.

Here is how we prove it in Lean:

theorem I : P → P :=
begin
  assume h,
  exact h,
end

We tell Lean that we want to prove a theorem (maybe a bit too grandiose a name for this) named I (for identity). The actual proof is just two lines, which invoke tactics:

  • assume h means that we are going to prove an implication by assuming the premise (the left hand side) and using this assumption to prove the conclusion (the right hand side). If you look at the HTML version of this document you can click on Try it to open Lean in a separate window. When you move the cursor before assume h you see that the proof state is:

    P : Prop
    ⊢ P → P
    

    This means we assume that P is a proposition and want to prove P → P. The symbol ⊢ (pronounced turnstile) separates the assumptions from the goal. After assume h, the proof state is:

    P : Prop,
    h : P
    ⊢ P
    

    This means our goal is now P but we have an additional assumption h : P.

  • exact h, completes the proof by telling Lean that there is an assumption that exactly matches the current goal. If you move the cursor after the , you see no goals. We are done.

2.3. Using assumptions

Next we are going to prove another tautology: (P → Q) → (Q → R) → P → R. Here is a translation into English:

If if the sun shines then we go to the zoo then if if we go to the zoo then we are happy then if the sun shines then we are happy

Maybe this already shows why it is better to use formulas to write propositions.

Here is the proof in Lean (I call it C for compose).

theorem C : (P → Q) → (Q → R) → P → R :=
begin
  assume p2q,
  assume q2r,
  assume p,
  apply q2r,
  apply p2q,
  exact p,
end

First of all it is useful to remember that → associates to the right; putting in the invisible brackets, this corresponds to:

(P → Q) → ((Q → R) → (P → R))

After the three assume we are in the following state:

P Q R : Prop,
p2q : P → Q,
q2r : Q → R,
p : P
⊢ R

Now we have to use an implication. Clearly it is q2r which can be of help here, because it promises to show R given Q. Hence, once we say apply q2r, we are left to show Q:

P Q R : Prop,
p2q : P → Q,
q2r : Q → R,
p : P
⊢ Q

The next step is to use apply p2q to reduce the goal to P, which can be shown using exact p.

Note that there are two kinds of steps in these proofs:

  • assume h means that we are going to prove an implication P → Q by assuming P (and we call this assumption h) and then proving Q with this assumption.
  • apply h: if we have assumed an implication h : P → Q and our current goal matches the right hand side, we can use this assumption to reduce the problem to showing P (whether this is indeed a good idea depends on whether it is actually easier to show P).

The apply tactic is a bit more general: it can also be used to use repeated implications. Here is an example:

theorem swap : (P → Q → R) → (Q → P → R) :=
begin
  assume f q p,
  apply f,
  exact p,
  exact q,
end

After assume f q p (which is a shortcut for writing assume three times) we are in the following state:

P Q R : Prop,
f : P → Q → R,
q : Q,
p : P
⊢ R

Now we can use f because its conclusion matches our goal, but we are left with two goals:

2 goals
P Q R : Prop,
f : P → Q → R,
q : Q,
p : P
⊢ P

P Q R : Prop,
f : P → Q → R,
q : Q,
p : P
⊢ Q

After completing the first goal with exact p it disappears and only one goal is left:

P Q R : Prop,
f : P → Q → R,
q : Q,
p : P
⊢ Q

which we can quickly eliminate using exact q,.

2.4. Proof terms

What is a proof? It looks like a proof in Lean is a sequence of tactics. But this is only the surface: the tactics are actually more like editor commands which generate the real proof, which is a program. This also explains the syntax p : P, which is reminiscent of the notation for typing 3 :: Int in Haskell (that Haskell uses :: instead of : is a regrettable historic accident).

We can have a look at the programs generated from proofs by using the #print operation in Lean:

#print I

#print C

The proof term associated to I is:

theorem I : ∀ (P : Prop), P → P :=
λ (P : Prop) (h : P), h

and the one for C is:

theorem C : ∀ (P Q R : Prop), (P → Q) → (Q → R) → P → R :=
λ (P Q R : Prop) (p2q : P → Q) (q2r : Q → R) (p : P), q2r (p2q p)

If you have studied functional programming (e.g. Haskell) you should have a déjà vu: indeed proofs are functional programs. Lean exploits the propositions-as-types translation (aka the Curry-Howard equivalence) and associates to every proposition the type of evidence for this proposition. This means that to see that a proposition holds, all we need to do is find a program of the type associated to it.
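
In fact we can skip the tactics altogether and write the proof term ourselves. Here is a sketch (the names I' and C' are mine, chosen so as not to clash with the theorems above):

theorem I' : P → P :=
λ h, h

theorem C' : (P → Q) → (Q → R) → P → R :=
λ p2q q2r p, q2r (p2q p)

These are exactly the programs which #print displayed, minus the quantification over the propositional variables, which Lean adds automatically.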

Not all Haskell programs correspond to proofs; in particular, general recursion is not permitted in proofs, only a limited form of recursion that always terminates. Also the Haskell type system isn’t expressive enough to be used in a system like Lean: it is ok for propositional logic but it doesn’t cover predicate logic, which we will introduce soon. The functional language on which Lean relies is called dependent type theory, or more specifically the Calculus of Inductive Constructions.

Type Theory is an interesting subject but I won’t be able to say much about it in this course. If you want to learn more about this you can attend Proofs, Programs and Types (COMP4074), which can also be done in year 3.

2.5. Conjunction

How to prove a conjunction? Easy! To prove P ∧ Q we need to prove both P and Q. This is achieved via the constructor tactic. Here is a simple example (and since I don’t want to give it a name, I am using example instead of theorem).

example : P → Q → P ∧ Q :=
begin
  assume p q,
  constructor,
  exact p,
  exact q,
end

constructor turns the goal:

P Q : Prop,
p : P,
q : Q
⊢ P ∧ Q

into two goals:

2 goals
P Q : Prop,
p : P,
q : Q
⊢ P

P Q : Prop,
p : P,
q : Q
⊢ Q

Now what is your next question? Exactly! How do we use a conjunction in an assumption? Here is an example: we show that ∧ is commutative.

theorem comAnd : P ∧ Q → Q ∧ P :=
begin
  assume pq,
  cases pq with p q,
    constructor,
    exact q,
    exact p
end

After assume pq we are in the following state:

P Q : Prop,
pq : P ∧ Q
⊢ Q ∧ P

Assuming P ∧ Q is the same as assuming both P and Q. This is facilitated via the cases tactic, which needs to know which assumption we are going to use (here pq) and how we want to name the assumptions which replace it (here p and q). Hence after cases pq with p q, the state is:

P Q : Prop,
p : P,
q : Q
⊢ Q ∧ P

The name cases seems to be a bit misleading since there is only one case to consider here. However, as we will see, cases is applicable more generally in situations where the name is better justified.

I hope you notice that the same symmetry of tactics (how to prove and how to use) which we have seen for implication also shows up for conjunction. This pattern is going to continue.

It is good to know that Lean always abstracts the propositional variables we have declared. We can actually use comAnd with different instantiations to prove the following:

theorem comAndIff : P ∧ Q ↔ Q ∧ P :=
begin
  constructor,
  apply comAnd,
  apply comAnd,
end

In the second use of comAnd we instantiate P with Q and Q with P. Lean will find these instances automatically, but in some more complicated examples it may need some help.
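
If Lean does need help, we can also give the instances explicitly, since the propositional variables are explicit arguments of comAnd. Here is a sketch (the name comAndIff' is mine, to avoid a clash with the theorem above):

theorem comAndIff' : P ∧ Q ↔ Q ∧ P :=
begin
  constructor,
  apply comAnd P Q,    -- the instance P ∧ Q → Q ∧ P
  apply comAnd Q P,    -- the instance Q ∧ P → P ∧ Q
end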

2.6. The Currying Equivalence

Maybe you have already noticed that a statement like P → Q → R basically means that R can be proved from assuming both P and Q. Indeed, it is equivalent to P ∧ Q → R. We can show this formally by using ↔. You just need to remember that P ↔ Q is the same as (P → Q) ∧ (Q → P). Hence a goal of the form P ↔ Q can be turned into two goals P → Q and Q → P using the constructor tactic.

We have already explained all the steps, so I won’t comment on them. It is a good idea to step through the proof using Lean (just use Try it if you are reading this in a browser).

theorem curry : P ∧ Q → R ↔ P → Q → R :=
begin
  constructor,
  assume pqr p q,
  apply pqr,
  constructor,
  exact p,
  exact q,
  assume pqr pq,
  cases pq with p q,
  apply pqr,
  exact p,
  exact q,
end

I call this the currying equivalence, because this is the logical counterpart of currying in functional programming: i.e. that a function with several parameters can be reduced to a function which returns a function. E.g. in Haskell addition has the type Int -> Int -> Int instead of (Int, Int) -> Int.
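
The same phenomenon can be expressed in Lean’s own functional language. The following sketch is mine (the names add_curried and add_uncurried are not from the notes); it defines addition on the natural numbers in a curried and an uncurried form:

-- curried: takes its arguments one at a time
def add_curried : ℕ → ℕ → ℕ := λ m n, m + n

-- uncurried: takes a single pair as its argument
def add_uncurried : ℕ × ℕ → ℕ := λ p, p.1 + p.2

#eval add_curried 2 3        -- 5
#eval add_uncurried (2, 3)   -- 5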

2.7. Disjunction

To prove a disjunction P ∨ Q we can either prove P or we can prove Q. This is achieved via the appropriately named tactics left and right. Here are some simple examples:

example : P → P ∨ Q :=
begin
  assume p,
  left,
  exact p,
end

example : Q → P ∨ Q :=
begin
  assume q,
  right,
  exact q,
end

To use a disjunction P ∨ Q we have to show that the current goal follows both from assuming P and from assuming Q. To achieve this we are using cases again, but this time the name actually makes sense. Here is an example:

theorem case_lem : (P → R) → (Q → R) → P ∨ Q → R :=
begin
  assume p2r q2r pq,
  cases pq with p q,
  apply p2r,
  exact p,
  apply q2r,
  exact q,
end

After assume we are in the following state:

P Q R : Prop,
p2r : P → R,
q2r : Q → R,
pq : P ∨ Q
⊢ R

We use cases pq with p q, which means that we are going to use P ∨ Q. There are two cases to consider, one for P and one for Q, resulting in two subgoals. We indicate which names we want to use for the assumption in each case, namely p and q:

2 goals
case or.inl
P Q R : Prop,
p2r : P → R,
q2r : Q → R,
p : P
⊢ R

case or.inr
P Q R : Prop,
p2r : P → R,
q2r : Q → R,
q : Q
⊢ R

To summarise: there are two ways to prove a disjunction, using left and right. To use a disjunction in an assumption we use cases to perform a case analysis and show that the current goal follows from each of the two components.
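
To see both aspects in one proof, here is a sketch showing that ∨ is commutative (the name comOr is mine, mirroring comAnd above):

theorem comOr : P ∨ Q → Q ∨ P :=
begin
  assume pq,
  cases pq with p q,
  right,              -- in the case p : P we prove the right component of Q ∨ P
  exact p,
  left,               -- in the case q : Q we prove the left component of Q ∨ P
  exact q,
end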

2.8. Logic and algebra

As an example which involves both conjunction and disjunction we prove distributivity. In algebra we know the law \(x(y+z) = xy+xz\); a similar law holds in propositional logic:

example : P ∧ (Q ∨ R) ↔ (P ∧ Q) ∨ (P ∧ R) :=
begin
  sorry
end

Sorry, but I was too lazy to do the proof so I left it to you. In Lean you can say sorry and omit the proof. This is an easy way out if you cannot complete your Lean homework: you just type sorry and it is fine. However, then I will say Sorry but you don’t get any points for this.

The correspondence with algebra goes further: the counterpart to implication is exponentiation but you have to read it backwards, that is P → Q becomes Q^P. Then the translation of the law \(x^{yz} = (x^y)^z\) corresponds to the currying equivalence P ∧ Q → R ↔ P → Q → R.

Maybe you remember that there is another law of exponentiation \(x^{y+z} = x^y x^z\). And indeed its translation is also a law of logic:

theorem case_thm : P ∨ Q → R ↔ (P → R) ∧ (Q → R) :=
begin
  sorry
end

Actually, the right-to-left direction is an uncurried version of the case_lem we have already shown, so this is yet another logical equivalence. It shouldn’t be too difficult to complete the proof.

Intrigued? I can only warn you not to study category theory to find out more because it is highly addictive.

2.9. True, false and negation

There are two logical constants, true and false. I always find it difficult to translate them into English because we don’t really use them in conversation to state a proposition. Hence the best we can do is to approximate true by something which is obviously true, like It sometimes rains in England, and to translate false by something obviously false, like Pigs can fly.

As far as logic and proofs are concerned, true is like an empty conjunction and false is like an empty disjunction. Hence it is easy to prove true:

example : true :=
begin
  constructor,
end

Alternatively we can use the tactic trivial.
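
That is, a sketch of the same example using trivial instead:

example : true :=
begin
  trivial,
end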

While in the case of ∧ we were left with two subgoals, now we are left with none, i.e. we are already done.

Symmetrically, there is no way to prove false because we have neither left nor right, and this is a good thing. On the other hand, doing cases on false as an assumption makes the current goal go away as if by magic and leaves no goals to be proven.

theorem efq : false → P :=
begin
  assume pigs_can_fly,
  cases pigs_can_fly,
end

efq is short for the Latin phrase Ex falso quodlibet, which means from false follows everything. This confuses some people because it isn’t really a principle of everyday logic, where there are different levels of impossibility. However, in logic the reasoning If pigs can fly then I am the president of America is valid.

We define ¬ P as P → false, which means that P is impossible. Dear gentlemen, if you ask a girl to marry you and she replies If we get married then pigs can fly, this means no.

As an example we can prove the law of contradiction: it cannot be that both P and ¬ P hold.

theorem contr : ¬ (P ∧ ¬ P) :=
begin
  assume pnp,
  cases pnp with p np,
  apply np,
  exact p,
end

2.10. Summary of tactics

Below is a table summarising the tactics we have seen so far:

        How to prove?     How to use?
→       assume h          apply h
∧       constructor       cases h with p q
∨       left, right       cases h with p q
true    constructor
false                     cases h

They are related to introduction and elimination rules in natural deduction, a system devised by the German logician Gerhard Gentzen.

The syntax for using conjunction and disjunction is the same, cases h with p q, but the effect is quite different. In particular, both assumptions are added to the context in the case of ∧, but only one of them in each of the subproofs in the case of ∨.

You can omit the with clause and just write cases h in those situations and Lean will generate names for you. However, this is not acceptable for solutions submitted as homework.
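
For example, here is a sketch of comAnd without the with clause (the name comAnd' is mine; the generated names pq_left and pq_right are what Lean produces here, derived from the name of the hypothesis being destructed):

theorem comAnd' : P ∧ Q → Q ∧ P :=
begin
  assume pq,
  cases pq,            -- generates pq_left : P and pq_right : Q
  constructor,
  exact pq_right,
  exact pq_left,
end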

We also have exact h, which is a structural tactic that doesn’t fit into the scheme above. Actually, h could be any proof term, but since we have not properly introduced writing proof terms we will use exact only to refer to assumptions. There is also an alternative: the tactic assumption checks whether any assumption matches the current goal. Hence we could have written the first proof as:

theorem I : P → P :=
begin
  assume h,
  assumption,
end

Important! There are many more tactics available in Lean, some with a higher degree of automation. Also, some of the tactics I have introduced are applicable in ways I haven’t explained. When solving exercises, please use only the tactics I have introduced, and only in the way I have introduced them.