# Alfred's Fixed Points

## A bit of formal order theory using Clojure and the LaTTe proof assistant

This document is copyright (C) 2019 Frederic Peschanski - Creative Commons CC-BY-SA 4.0

## Introduction

A **fixed point** of a function $f$ is simply a point $x$ such that $f(x)=x$.

A function may have many, one, or no fixed points.

All the points in the domain of an *identity* function are fixed points, i.e. for any $x$, $f(x)=x$. In Clojure, the `identity` function thus has *every* value as a fixed point:

```clojure
(identity 42) ; a simple (and fundamental) fixed point

(identity {:firstname "Alfred"
           :lastname "Tarski"
           :born 1901}) ; a more complex fixed point
```

The unary minus function on integers has exactly one fixed point: the equation $-x=x$ holds only when $x=0$.

And the `inc` (increment) function on integers has no fixed point at all: there is no $x$ such that $x=x+1$.
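To make these three cases concrete, here is a small plain-Clojure sketch (outside of any proof assistant) that searches for fixed points over a finite domain; the `fixed-points` helper is ours, purely for illustration:

```clojure
;; Brute-force search for fixed points over a finite domain
;; (illustrative helper, not part of the formal development).
(defn fixed-points
  "Returns the set of elements x of `domain` such that (f x) = x."
  [f domain]
  (set (filter #(= (f %) %) domain)))

(fixed-points identity (range 5))  ;; => #{0 1 2 3 4} : every point is fixed
(fixed-points - (range -5 5))      ;; => #{0} : the only solution of -x = x
(fixed-points inc (range -5 5))    ;; => #{} : no x satisfies x = x + 1
```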

We might wonder whether finding fixed points of functions, or at least proving their existence using a so-called fixed point theorem, is actually useful. In fact, this question is central in (abstract) mathematics, which makes fixed point theory an important branch of study with dedicated journals and conferences.

Quoting this page: *The existence of a fixed point is therefore of paramount importance in several areas of mathematics and other sciences*.

In computer science, fixed point theorems are used in various areas, e.g. when reasoning about programs or verifying properties (e.g. by abstract interpretation or model checking), also in game theory (e.g. Nash equilibrium *is* a fixed point theorem).

One elementary albeit pervasively useful fixed point theorem is the famous **Knaster–Tarski theorem**. The statement of this theorem on Wikipedia is as follows:

*Let L be a **complete lattice** and let f : L → L be an **order-preserving function**. Then the **set** of **fixed points** of f in L is also a complete lattice.*

This is a very short but also very dense formulation!

A little further down the article, we can read:

*Since complete lattices cannot be **empty** (they must contain the supremum of the empty set), the theorem in particular guarantees the existence of at least one fixed point of f, and even the existence of a least (or greatest) fixed point. In many practical cases, this is the most important implication of the theorem.*

In this notebook our objective is to *dissect* this particular implication of the theorem, and establish (almost) all the foundations so that it can be stated formally (albeit less concisely). Ultimately, our goal is to obtain a formal proof of the theorem, mechanized in the LaTTe proof assistant... and of course in a *reproducible* *way* thanks to nextjournal!

**Intended audience**: you should be familiar with functional programming languages, possibly with Clojure (but in fact we will not write *classical* Clojure code).

**What you will get**:

- an illustration of what you can do using a proof assistant such as LaTTe, and perhaps the feeling that it's not so remote from "normal" mathematics
- a glimpse of the Lisp superpowers (Clojure inside!)

If you're curious about how this is done "inside", then the following talk might prove useful: *Live Coding Mathematics: Your First Clojure Proof* (EuroClojure 2016). Note that LaTTe has evolved a little bit since the talk.

## Basic order theory in LaTTe

LaTTe is a proof assistant developed in the realm of the Clojure programming environment. It is better to know a little bit of Clojure to fully grasp what follows, but the formal statements we make are often not so far from "classical" mathematics (in Lisp forms), so this is not even a strong requirement.

We rely on a few libraries, especially `latte-prelude` (the basic library with propositions, quantifiers, etc.) and `latte-sets` (typed set theory).

```clojure
(ns fixed-points.alfred-fixpoints
  (:refer-clojure :exclude [and or not set =])
  (:require [clojure.repl :refer [source doc]]
            [latte.core :as latte
             :refer [definition defthm deflemma
                     proof assume have pose qed
                     type-of forall]]
            [latte-sets.core :as sets :refer [set elem set-equal forall-in]]
            [latte-sets.rel :as rel :refer [rel]]
            [latte-prelude.equal :as eq :refer [equal]]
            [latte-prelude.prop :as p :refer [not and and* or]]
            [latte-prelude.quant :as q :refer [exists]]))
```

The `latte-sets.rel` namespace defines the notion of a **relation**.

```clojure
(source rel)
```

For given types `T` and `U`, the type `(rel T U)` means a relation between type `T` and type `U`. In classical set theory, this would be a set of pairs in the Cartesian product $T \times U$, but in type theory it is in fact more like a "function" of two arguments. If `R` is a relation of type `(rel T U)`, in order to express that `x` of type `T` and `y` of type `U` are related in `R`, we simply write `(R x y)`. If they are *not* related, then we would write `(not (R x y))` instead. Personally, I find these more "natural" than e.g. $(x,y)\in R$ or $(x,y)\notin R$.
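To see the "relation as a two-argument function" view concretely, here is a plain-Clojure analogue (an illustrative sketch, not LaTTe code): a relation on numbers is just a binary predicate.

```clojure
;; A relation modeled as a binary predicate (plain Clojure sketch,
;; outside the formal development).
(def lte (fn [x y] (<= x y)))   ;; the usual order on numbers

(lte 1 2)        ;; => true  : 1 and 2 are related
(not (lte 2 1))  ;; => true  : 2 and 1 are not related
```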

### Ordering relations

In order theory we are interested in **ordering relations** (or simply *orders*), which are *endo* relations (between a type `T` and itself) satisfying three properties.

First, an ordering relation must be **reflexive**. In day-to-day mathematics we would write $\forall x\in T,~(x,x)\in R$ or $x~R~x$, but in LaTTe it is defined as follows.

```clojure
(definition reflexive
  [?T :type, R (rel T T)]
  (forall [x T] (R x x)))
```

The question mark in `?T :type` means that `T` is an *implicit type parameter*. To state the reflexivity property, we do not want to write `(reflexive T R)` (for a relation `R` of type `(rel T T)`) but simply `(reflexive R)`. The support for such implicit type parameters is a recent addition to LaTTe, maybe not 100% finalized, but we will take advantage of it to avoid a lot of boilerplate code.

The second property is that of **transitivity**, which is stated as follows:

```clojure
(definition transitive
  [?T :type, R (rel T T)]
  (forall [x y z T]
    (==> (R x y)
         (R y z)
         (R x z))))
```

In traditional notation, we would write $\forall x,y,z \in T,~x~R~y \land y~R~z \implies x~R~z$.

In type theory, an implication `(==> H1 H2 ... Hn G)` must be understood as the statement that from the hypotheses `H1`, `H2`, ..., `Hn` we can deduce the goal `G`.

The final property of an ordering relation is that of **antisymmetry**.

```clojure
(definition antisymmetric
  [?T :type, R (rel T T)]
  (forall [x y T]
    (==> (R x y)
         (R y x)
         (equal x y))))
```

Ultimately, here is the definition of an ordering relation:

```clojure
(definition order
  [?T :type, R (rel T T)]
  (and* (reflexive R)
        (transitive R)
        (antisymmetric R)))
```

### The identity relation

Throughout this article we will mostly remain in the abstract, so we will not use specific relations as examples (e.g. involving natural numbers). However, one basic relation can be shown to be an ordering: the **identity relation** on a type `T`.

```clojure
(definition ridentity
  [T :type]
  (lambda [x T] (lambda [y T] (equal x y))))
```

We can check that `(ridentity T)` is a relation.

```clojure
(type-of [T :type]
         (ridentity T))
```

The returned type is `(==> T T ✳)`, which is the same as `(==> T T :type)` and thus `(rel T T)`.

Let's first demonstrate, as a lemma, that `(ridentity T)` is reflexive.

```clojure
(defthm rid-refl
  [T :type]
  (reflexive (ridentity T)))
```

```clojure
(proof rid-refl
  "First, we take an arbitrary `x` of type `T`."
  (assume [x T]
    "A basic fact is that `x` is equal to itself."
    (have <a> (equal x x) :by (eq/eq-refl x))
    "Said differently, `x` is related to itself in the identity..."
    (have <b> ((ridentity T) x x) :by <a>))
  "Given our assumption that `x` is arbitrary, we reached our goal."
  (qed <b>))
```

Since this is our first LaTTe proof in this document (which is not a LaTTe tutorial), let's add some comments. First, beyond the Lispy syntax, this formal proof is not so far from a "standard" mathematical proof. To demonstrate this, let's translate it, removing all LaTTe-specific details and writing the identity relation as $rid_T$.

**Proof** of Theorem `rid-refl`. First we take an arbitrary $x$ of type `T`. A basic fact (step `<a>` in the LaTTe proof) is that $x$ is equal to itself, by reflexivity of equality (i.e. $\forall x\in T,~x=x$). Said differently (step `<b>` in the proof), $x$ is related to itself in the identity, i.e. $x~rid_T~x$. Given our assumption that $x$ is arbitrary, we reached our goal (QED).

There is a certain distance between an informal (textbook-like) mathematical proof and what we have to write in LaTTe. But this distance is often not as huge as we might think. This is because we can introduce abstractions and proof methods, which is exactly what we are doing in this document, in the realm of order theory.

From now on, we will not "translate" the proofs; we will rely on LaTTe itself to explain things, rather than on conventional, highly informal mathematical notation.

Let's now prove that the identity relation is transitive.

```clojure
(defthm rid-trans
  [T :type]
  (transitive (ridentity T)))
```

```clojure
(proof rid-trans
  "First we make a local definition."
  (pose R := (ridentity T))
  "Now we make the assumptions of `transitive`."
  (assume [x T y T z T
           Hxy (R x y)
           Hyz (R y z)]
    "We have to prove that `x` and `z` are also related in `R`."
    "We first relate our assumptions to equality."
    (have <a> (equal x y) :by Hxy)
    (have <b> (equal y z) :by Hyz)
    "And by transitivity of equality we obtain the following:"
    (have <c> (equal x z) :by (eq/eq-trans <a> <b>))
    "Which can be restated as follows:"
    (have <d> (R x z) :by <c>))
  "Since `x`, `y` and `z` were taken arbitrarily, we reach our goal."
  (qed <d>))
```

And finally antisymmetry.

```clojure
(deflemma rid-antisym
  [T :type]
  (antisymmetric (ridentity T)))
```

The proof is trivial.

```clojure
(proof rid-antisym
  (pose R := (ridentity T))
  (assume [x T y T
           Hxy (R x y)
           Hyx (R y x)]
    (have <a> (equal x y) :by Hxy))
  (qed <a>))
```

Hence identity is an ordering relation.

```clojure
(defthm rid-order
  [T :type]
  (order (ridentity T)))
```

```clojure
(proof rid-order
  (qed (p/and-intro* (rid-refl T)
                     (rid-trans T)
                     (rid-antisym T))))
```

### The inverse relation

For any relation $R$, the inverse relation $R^{-1}$ is such that $x~R~y$ iff $y~R^{-1}~x$. In LaTTe this can be defined as follows:

```clojure
(definition rinverse
  [?T :type, R (rel T T)]
  (lambda [x T] (lambda [y T] (R y x))))
```
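In the same plain-Clojure spirit as before (an illustrative sketch, not the LaTTe definition), inverting a relation-as-predicate just swaps the arguments:

```clojure
;; Inverse of a relation modeled as a binary predicate
;; (plain Clojure sketch mirroring the formal definition).
(defn rinv [R] (fn [x y] (R y x)))

(def gte (rinv <=))  ;; the inverse of <= behaves like >=
(gte 3 1)            ;; => true, since (<= 1 3)
(gte 1 3)            ;; => false
```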

Many results about an ordering relation can be propagated, by a so-called *principle of duality*, to its inverse (and *vice versa*). This is an important source of what are called *free theorems*.

One important fact is that the inverse of an ordering relation is itself an ordering relation.

```clojure
(defthm rinv-order
  [?T :type, R (rel T T)]
  (==> (order R)
       (order (rinverse R))))
```

The proof is a little bit long because we need to handle the three properties of orders individually, but it is not difficult at all (it could be decomposed into lemmas as we did previously, but that would not really help readability here).

```clojure
(proof rinv-order-thm
  (pose iR := (rinverse R))
  (assume [HR (order R)]
    "First part: reflexivity."
    (have <r> (reflexive R) :by (p/and-elim* 1 HR))
    "This is the same as the following."
    (have iR-refl (reflexive iR) :by <r>)
    "Second part: transitivity."
    (have R-trans (transitive R) :by (p/and-elim* 2 HR))
    "The assumptions for transitivity of `iR`."
    (assume [x T y T z T
             Hxy (iR x y)
             Hyz (iR y z)]
      "The two hypotheses can be rewritten in terms of `R`."
      (have <t1> (R y x) :by Hxy)
      (have <t2> (R z y) :by Hyz)
      "And we can exploit the fact that `R` is transitive."
      (have <t3> (R z x) :by (R-trans z y x <t2> <t1>))
      "Which is the same as the following:"
      (have <t> (iR x z) :by <t3>))
    (have iR-trans (transitive iR) :by <t>)
    "Third part: antisymmetry."
    (have R-anti (antisymmetric R) :by (p/and-elim* 3 HR))
    "Under the assumptions for antisymmetry of `iR`."
    (assume [x T y T
             Hxy (iR x y)
             Hyx (iR y x)]
      "We simply exploit the fact that e.g. `(iR x y)` is `(R y x)`."
      (have <s> (equal x y) :by (R-anti x y Hyx Hxy)))
    (have iR-anti (antisymmetric iR) :by <s>)
    "Thus the inverse is an ordering relation."
    (have iR-order (order iR) :by (p/and-intro* iR-refl iR-trans iR-anti)))
  (qed iR-order))
```

## Bounds

Now that we know what an ordering relation is, we can begin to discuss the supplementary properties that can be associated with it.

### Lower and upper bounds

Among important extra properties of ordering relations, **lower and upper bounds** play a fundamental role.

```clojure
(definition lower-bound
  "`l` is a lower bound for `S` in the ordering relation `R`."
  [?T :type, R (rel T T), S (set T), l T]
  (forall-in [e S] (R l e)))
```

In informal mathematics we would write something like $\forall e\in S,~l~R~e$. Note that the notion is defined for an arbitrary (*endo*) relation, not necessarily an ordering one. The definition of an upper bound is symmetric.

```clojure
(definition upper-bound
  "`u` is an upper bound for set `S` in the ordering relation `R`."
  [?T :type, R (rel T T), S (set T), u T]
  (forall-in [e S] (R e u)))
```
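As a quick sanity check on the two definitions, here is a plain-Clojure analogue for finite sets (the `lower-bound?` and `upper-bound?` helpers are ours, for illustration only, not part of the LaTTe development):

```clojure
;; Bounds of a finite collection wrt. a relation-as-predicate
;; (plain Clojure sketch, not LaTTe).
(defn lower-bound? [R S l] (every? #(R l %) S))
(defn upper-bound? [R S u] (every? #(R % u) S))

(lower-bound? <= #{2 3 5} 1)  ;; => true  : 1 is below every element
(lower-bound? <= #{2 3 5} 3)  ;; => false : 3 is not below 2
(upper-bound? <= #{2 3 5} 7)  ;; => true  : 7 is above every element
```

Note that a bound need not belong to the set itself, which is exactly the distinction made with least and greatest elements below.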

The two notions are of course dual, as illustrated by the following theorem.

```clojure
(defthm low-up-dual
  [?T :type, R (rel T T), S (set T), x T]
  (==> (lower-bound R S x)
       (upper-bound (rinverse R) S x)))
```

```clojure
(proof low-up-dual-thm
  (pose iR := (rinverse R))
  "The hypothesis is that `x` is a lower bound."
  (assume [Hlow (lower-bound R S x)]
    "Now we take an arbitrary element `e` of the set `S`."
    (assume [e T
             He (elem e S)]
      "By hypothesis `x` is lower than `e` in `R`."
      (have <a> (R x e) :by (Hlow e He))
      "Hence it is greater in `iR`."
      (have <b> (iR e x) :by <a>))
    "And we're done."
    (have up (upper-bound iR S x) :by <b>))
  (qed up))
```

**Exercise**: prove similarly the theorem `up-low-dual` that states exactly the converse, i.e. that an upper bound in `R` is a lower bound in its inverse.

### Least and greatest elements

Lower and upper bounds of a given set may not be (and often are not) elements of that set. When they are, we obtain the stronger notions of **least and greatest elements**.

```clojure
(definition least-element
  [?T :type, R (rel T T), S (set T), l T]
  (and (elem l S)
       (lower-bound R S l)))
```

```clojure
(definition greatest-element
  [?T :type, R (rel T T), S (set T), u T]
  (and (elem u S)
       (upper-bound R S u)))
```

**Exercise**: define and prove a theorem stating that the least and greatest elements are dual notions.

### Greatest lower bound (glb) and Least upper bound (lub)

Some ordering relations admit special bounds:

- the **greatest lower bound** (nicknamed **glb**), which is the greatest among all possible lower bounds
- the **least upper bound** (nicknamed **lub**), which is the least among all possible upper bounds

The LaTTe definitions follow.

```clojure
(definition glb
  "The greatest lower bound `l` of a subset `S` in relation `R`."
  [?T :type, R (rel T T), S (set T), l T]
  (and (lower-bound R S l)
       (forall [x T]
         (==> (lower-bound R S x)
              (R x l)))))
```

```clojure
(definition lub
  "The least upper bound `u` of a subset `S` in relation `R`."
  [?T :type, R (rel T T), S (set T), u T]
  (and (upper-bound R S u)
       (forall [x T]
         (==> (upper-bound R S x)
              (R u x)))))
```
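For a finite set of numbers under `<=`, the glb is simply the minimum. The little checker below (our own illustrative helper, restricted to a finite "universe" of candidate elements standing in for the whole type) makes the two-part structure of the definition explicit:

```clojure
;; Checking the glb property over a finite universe of candidates
;; (plain Clojure sketch; `universe` stands in for the whole type T).
(defn glb? [R S l universe]
  (and (every? #(R l %) S)                      ;; l is a lower bound of S
       (every? (fn [x]
                 (or (not (every? #(R x %) S))  ;; x not a lower bound, or
                     (R x l)))                  ;; x is below l
               universe)))

(glb? <= #{2 3 5} 2 (range -10 11))  ;; => true  : 2 = (min 2 3 5)
(glb? <= #{2 3 5} 1 (range -10 11))  ;; => false : 1 is a lower bound but not greatest
```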

Of course, the two notions are dual.

```clojure
(defthm glb-lub-dual
  [?T :type, R (rel T T)]
  (forall [S (set T)]
    (forall [l T]
      (==> (glb R S l)
           (lub (rinverse R) S l)))))
```

```clojure
(proof glb-lub-dual-thm
  (pose iR := (rinverse R))
  "First let's state the main hypotheses."
  (assume [S (set T)
           l T
           Hl (glb R S l)]
    "From now on the inverse will be noted `iR`."
    "A `glb` is a lower bound..."
    (have <a1> (lower-bound R S l) :by (p/and-elim-left Hl))
    "... and thus it is an upper bound for the inverse,
     which is the left part of the [[lub]] conjunction we need."
    (have <a> (upper-bound iR S l) :by ((low-up-dual R S l) <a1>))
    "Now let's assume we have `u` an upper bound in `iR`."
    (assume [u T
             Hu (upper-bound iR S u)]
      "We trivially have that `l` is lower than `u` in `iR`
       (since it is greater in `R`)."
      (have <b> (iR l u) :by ((p/and-elim-right Hl) u Hu)))
    "Thus we have the right part of the [[lub]] conjunction."
    (have <c> _ :by (p/and-intro <a> <b>)))
  "Hence we can conclude."
  (qed <c>))
```

**Exercise**: prove similarly the theorem `lub-glb-dual` that states exactly the converse, i.e. that a least upper bound in `R` is a greatest lower bound in its inverse.

An important property of the lub and the glb is that, if they exist in a relation `R`, then they are unique. First let's see how LaTTe defines the notion of uniqueness:

```clojure
(source q/unique)
```

This means uniqueness is the *existence* of some element (of type `T`) satisfying some property `P`, i.e. `(ex P)`, together with *"singleness"*, i.e. `(single P)`. Let's begin with the latter:

```clojure
(source q/single)
```

So for the greatest lower bound we want to prove the following:

```clojure
(deflemma glb-single
  "Singleness of greatest lower bounds."
  [?T :type, R (rel T T), X (set T)]
  (==> (antisymmetric R)
       (q/single (lambda [l T] (glb R X l)))))
```

Note that our only assumption is that the relation is antisymmetric; we do not need the other ordering properties (reflexivity, transitivity).

```clojure
(proof glb-single-lemma
  (pose P := (lambda [l T] (glb R X l)))
  "These are all our assumptions:"
  (assume [Hanti (antisymmetric R)
           l1 T, l2 T
           H1 (P l1) H2 (P l2)]
    "We must show that `l1` is equal to `l2`."
    "First, these are two lower bounds for `R`."
    (have <a> (lower-bound R X l1) :by (p/and-elim-left H1))
    (have <b> (lower-bound R X l2) :by (p/and-elim-left H2))
    "Now let's apply the *greatest* constraints."
    "First, `l1` is greater than `l2`."
    (have <c> (R l2 l1) :by ((p/and-elim-right H1) l2 <b>))
    "Second, `l2` is greater than `l1`."
    (have <d> (R l1 l2) :by ((p/and-elim-right H2) l1 <a>))
    "Thus, by antisymmetry (our hypothesis `Hanti`), `l1` and `l2` must be equal."
    (have <e> (equal l1 l2) :by (Hanti l1 l2 <d> <c>)))
  "And this is enough to conclude."
  (qed <e>))
```

We may thus conclude with the uniqueness theorem:

```clojure
(defthm glb-unique
  "Uniqueness of greatest lower bounds."
  [?T :type, R (rel T T), X (set T)]
  (==> (antisymmetric R)
       (exists [l T] (glb R X l))
       (q/unique (lambda [l T] (glb R X l)))))
```

```clojure
(proof glb-unique-thm
  (assume [Hanti _, Hex _]
    (have <a> _ :by (p/and-intro Hex ((glb-single R X) Hanti))))
  (qed <a>))
```

**Exercise** : state and prove a similar theorem `lub-unique`

for least upper bound. Use the direct approach as above first. Then propose an alternative proof exploiting duality.

## Complete Lattices

**Complete lattices** are ordering relations that are particularly well behaved wrt. least upper and greatest lower bounds. They are defined by the fact that all their subsets have `glb`'s and `lub`'s. However, as we'll see, it is enough to have "only" `glb`'s or `lub`'s for the definition to hold. Because least fixed points are somewhat more common in practice, we take the `glb` road (but this is a somewhat arbitrary choice, as we will make clear later on).

```clojure
(definition complete-lattice
  "The definition of a relation `R` on `T` being a complete lattice."
  [?T :type, R (rel T T)]
  (and (order R)
       (forall [X (set T)]
         (exists [l T] (glb R X l)))))
```

To emphasize that choosing glb's instead of lub's in the definition is immaterial, we state the following property:

```clojure
(defthm all-glb-all-lub
  [?T :type, R (rel T T)]
  (==> (antisymmetric R)
       (forall [X (set T)]
         (exists [l T] (glb R X l)))
       (forall [X (set T)]
         (exists [u T] (lub R X u)))))
```

This says that if an antisymmetric relation has all `glb`'s then it also has all `lub`'s. The proof is a little bit more technical, but there is no trick: we only have to dig into the definitions.

```clojure
(proof all-glb-all-lub-thm
  "First we introduce our two assumptions."
  (assume [H1 _ H2 _]
    "Now let's assume an arbitrary set `X`."
    (assume [X (set T)]
      "We define `Y` as the set of upper bounds of `X`."
      (pose Y := (lambda [u T] (upper-bound R X u)))
      "By hypothesis there is a unique greatest lower bound for `Y`."
      (have <a> (exists [l T] (glb R Y l)) :by (H2 Y))
      (have glbY-unique _ :by ((glb-unique R Y) H1 <a>))
      "Let's call this `glbY`."
      (pose glbY := (q/the (lambda [l T] (glb R Y l)) glbY-unique))
      "And we keep the fact that it is indeed *the* greatest lower bound."
      (have <b> (glb R Y glbY) :by (q/the-prop (lambda [l T] (glb R Y l)) glbY-unique))
      "Our objective now is to show that `glbY` is *also* an upper bound for the set `X`."
      "For this we assume an arbitrary element `x` of the set `X`."
      (assume [x T
               Hx (elem x X)]
        "In the first step we'll show that `x` is a lower bound for `Y`."
        (assume [y T
                 Hy (elem y Y)]
          (have <c1> (upper-bound R X y) :by Hy)
          (have <c2> (R x y) :by (<c1> x Hx)))
        (have <c> (lower-bound R Y x) :by <c2>)
        "Hence, since `glbY` is *greatest*, we can show that `(R x glbY)`."
        (have <d1> (forall [z T]
                     (==> (lower-bound R Y z)
                          (R z glbY))) :by (p/and-elim-right <b>))
        (have <d2> (==> (lower-bound R Y x)
                        (R x glbY)) :by (<d1> x))
        (have <d3> (R x glbY) :by (<d2> <c>)))
      "Thus `glbY` is an upper bound for `X`."
      (have <d> (upper-bound R X glbY) :by <d3>)
      "The second step consists in showing that `glbY` is the *least* upper bound."
      (assume [x T
               Hx (upper-bound R X x)]
        "Taking an arbitrary upper bound `x`, we can show `(R glbY x)`,
         since `glbY` is a lower bound for `Y` and the assumed `x` is
         by hypothesis a member of `Y`."
        (have <e1> (elem x Y) :by Hx)
        (have <e2> (lower-bound R Y glbY) :by (p/and-elim-left <b>))
        (have <e3> (R glbY x) :by (<e2> x <e1>)))
      (have <e> (forall [x T]
                  (==> (upper-bound R X x)
                       (R glbY x))) :by <e3>)
      "Hence we have our main result, which is that `glbY` is a `lub` for `X`."
      (have <f> (lub R X glbY) :by (p/and-intro <d> <e>))
      "Hence we've just shown that there exists a lub for `X`, namely `glbY`."
      (have <g> (exists [l T] (lub R X l))
            :by ((q/ex-intro (lambda [l T] (lub R X l)) glbY) <f>))))
  (qed <g>))
```

Let's try an alternative definition for complete lattices.

```clojure
(definition complete-lattice-alt
  [?T :type, R (rel T T)]
  (and (order R)
       (forall [X (set T)]
         (exists [l T] (lub R X l)))))
```

And thanks to the `all-glb-all-lub` theorem we have the following:

```clojure
(defthm complete-lattice-glb-lub
  [?T :type, R (rel T T)]
  (==> (complete-lattice R)
       (complete-lattice-alt R)))
```

And the proof is trivial.

```clojure
(proof complete-lattice-glb-lub-thm
  (assume [H _]
    "Let's dig out the antisymmetry property..."
    (have <a> (antisymmetric R)
          :by (p/and-elim* 3 (p/and-elim-left H)))
    "... as well as the *all glb's* property."
    (pose glb-prop := (p/and-elim-right H))
    (have <b> (forall [X (set T)]
                (exists [l T] (lub R X l)))
          :by ((all-glb-all-lub R) <a> glb-prop))
    "And we're done."
    (have <c> (complete-lattice-alt R)
          :by (p/and-intro (p/and-elim-left H) <b>)))
  (qed <c>))
```

**Exercise**: prove the converse, i.e. that `(complete-lattice-alt R)` implies `(complete-lattice R)`. This can be done by exploiting duality.

**Exercise**: prove that if `R` is a complete lattice, then `(rinverse R)` is also a complete lattice.

## Monotonic functions

A (total) function from a domain `T` to a codomain `U` can be represented in LaTTe as a value of type `(==> T U)`. In order theory, most functions of concern are *endo* functions of type `(==> T T)`, and given an ordering relation `R` (with elements in `T`), the important question is how such a function behaves wrt. `R`. The most interesting property one can expect from such a function is *monotonicity*. The definition of a **monotonic function** is as follows:

```clojure
(definition monotonic
  "The endo function `f` on `T` is monotonic wrt. `R`."
  [?T :type, R (rel T T), f (==> T T)]
  (forall [x y T]
    (==> (R x y) (R (f x) (f y)))))
```

A monotonic (or *monotone*) function is thus a function that "respects" a given ordering; it is said to be *order-preserving*. As an example, take the real numbers: addition of a constant is monotonic. For the function $f_K(x) = x + K$ we have $\forall x,y \in \mathbb{R},~x \leq y \implies f_K(x) \leq f_K(y)$. The function $g(x)=x^2$ is, however, not monotonic (e.g. $-2 \leq 1$ but $g(-2)=4 \nleq 1=g(1)$).
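The two running examples can be checked by (finite) sampling in plain Clojure; this is only a sanity check on sample points, of course, not a proof of monotonicity, and the `monotonic-on?` helper is ours:

```clojure
;; Sampling check of monotonicity for an endo function on numbers
;; (plain Clojure sketch, not a proof).
(defn monotonic-on? [R f xs]
  (every? (fn [[x y]]
            (or (not (R x y))       ;; only check related pairs
                (R (f x) (f y))))
          (for [x xs, y xs] [x y])))

(monotonic-on? <= #(+ % 10) (range -5 6))  ;; => true  : translation preserves <=
(monotonic-on? <= #(* % %) (range -5 6))   ;; => false : -2 <= 1 but 4 > 1
```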

To familiarize a little bit with the definition, let's prove the following theorem.

```clojure
(defthm inv-monotonic
  [?T :type, R (rel T T), f (==> T T)]
  (==> (monotonic R f)
       (monotonic (rinverse R) f)))
```

The proof is left uncommented, given its simplicity.

```clojure
(proof inv-monotonic-thm
  (pose R' := (rinverse R))
  (assume [H (monotonic R f)]
    (assume [x T
             y T
             H' (R' x y)]
      (have <a> (R y x) :by H')
      (have <b> (R (f y) (f x)) :by ((H y x) <a>))
      (have <c> (R' (f x) (f y)) :by <b>))
    (have <d> (monotonic R' f) :by <c>))
  (qed <d>))
```

## Fixed points

We are not far from being able to state Alfred's fixed point theorem.

The final building block of the definition is that of a **fixed point**.

```clojure
(definition fixed-point
  "A fixed point `x` of an endo function `f` on `T`."
  [?T :type, f (==> T T), x T]
  (equal x (f x)))
```

There are derived notions of *pre- and post-fixed points*.

```clojure
(definition pre-fixed-point
  [?T :type, R (rel T T), f (==> T T), x T]
  (R x (f x)))
```

```clojure
(definition post-fixed-point
  [?T :type, R (rel T T), f (==> T T), x T]
  (R (f x) x))
```

Of course, if an element `x` is both a pre and a post fixed point, then it is indeed a fixed point (by antisymmetry).

```clojure
(defthm pre-post-fixed-point
  [?T :type, R (rel T T), f (==> T T), x T]
  (==> (antisymmetric R)
       (pre-fixed-point R f x)
       (post-fixed-point R f x)
       (fixed-point f x)))
```

```clojure
(proof pre-post-fixed-point-thm
  (assume [H1 (antisymmetric R)
           H2 (pre-fixed-point R f x)
           H3 (post-fixed-point R f x)]
    (have <a> (fixed-point f x)
          :by (H1 x (f x) H2 H3)))
  (qed <a>))
```

Moreover, a fixed point is also a post fixed point (thanks to reflexivity), which we state as follows:

```clojure
(defthm fixed-point-post
  [?T :type, R (rel T T), f (==> T T), x T]
  (==> (reflexive R)
       (fixed-point f x)
       (post-fixed-point R f x)))
```

The proof is straightforward.

```clojure
(proof fixed-point-post-thm
  (pose P := (lambda [y T] (R y x)))
  (assume [H1 (reflexive R)
           H2 (fixed-point f x)]
    "Reflexivity means `(R x x)`, hence `(P x)`."
    (have <a> (P x) :by (H1 x))
    "Since `(equal x (f x))` we can substitute in the `(P x)` we just obtained."
    (have <b> (P (f x)) :by (eq/eq-subst P H2 <a>)))
  "And we're done."
  (qed <b>))
```

**Exercise**: show that a fixed point is also a pre fixed point.

## The least fixed point theorem

We are now at the core of Alfred's fixed point theorem. A function can have zero, one or many fixed points. In the latter case, similarly to bounds, we would like to know whether some fixed point is *less than* (resp. *greater than*) all the others. The notion of a **least fixed point** is defined as follows:

```clojure
(definition lfp
  "The least fixed point of a function `f` wrt. a relation `R`."
  [?T :type, R (rel T T), f (==> T T), mu T]
  (and (fixed-point f mu)
       (forall [y T]
         (==> (fixed-point f y)
              (R mu y)))))
```

So a least fixed point is an element `mu` such that `(fixed-point f mu)` and, moreover, for any other fixed point `y` (i.e. such that `(fixed-point f y)`), `mu` is less than `y` in `R`, i.e. `(R mu y)`.

Now we can state the core result of the least fixed point theorem.

```clojure
(defthm lfp-theorem
  "The least fixed-point theorem."
  [?T :type, R (rel T T), f (==> T T)]
  (==> (complete-lattice R)
       (monotonic R f)
       (q/unique (lambda [mu T] (lfp R f mu)))))
```

The theorem states that given a complete lattice `R` and a monotonic function `f`, there exists a unique least fixed point `mu` of `f` in `R`. To demonstrate this theorem we have to proceed in two steps: first, we have to show the *existence* of a least fixed point `mu`. This is stated as follows:

```clojure
(deflemma lfp-ex
  "The least fixed-point theorem, existential part."
  [?T :type, R (rel T T), f (==> T T)]
  (==> (complete-lattice R)
       (monotonic R f)
       (exists [mu T] (lfp R f mu))))
```

This is the trickiest part of the demonstration, and it is where much of the "mathematical creativity" is required... There is some artistic beauty in the proof that follows.

```clojure
(proof lfp-ex-lemma
  (assume [H1 _ H2 _]
    "We first define the set of post-fixed points of `f`.
     This is in fact where most of the creativity of the proof
     is concentrated. The artistic creativity is all here!"
    (pose Y := (lambda [y T] (post-fixed-point R f y)))
    "This set has a glb since `R` is a complete lattice."
    (have glbY-ex (exists [l T] (glb R Y l))
          :by ((p/and-elim-right H1) Y))
    (pose glbY-unique := ((glb-unique R Y)
                          (p/and-elim-right (p/and-elim-left H1))
                          glbY-ex))
    "We name it `glbY`."
    (pose glbY := (q/the (lambda [l T] (glb R Y l)) glbY-unique))
    "And we recall its main property."
    (have <a> (glb R Y glbY)
          :by (q/the-prop (lambda [l T] (glb R Y l)) glbY-unique))
    "In the first part we will show that `glbY` is a fixed point of `f`."
    "For this, we take an arbitrary element `y` of `Y`
     (hence an arbitrary post-fixed point)."
    (assume [y T
             Hy (elem y Y)]
      "It is greater than any lower bound, e.g. `glbY`."
      (have <b1> (R glbY y) :by ((p/and-elim-left <a>) y Hy))
      "And thus by monotonicity:"
      (have <b2> (R (f glbY) (f y)) :by (H2 glbY y <b1>))
      "And it is also a post-fixed point, hence:"
      (have <b3> (R (f y) y) :by Hy)
      "Now, by transitivity:"
      (have <b4> (R (f glbY) y) :by
            ((p/and-elim-right (p/and-elim-left (p/and-elim-left H1)))
             (f glbY) (f y) y <b2> <b3>))
      "Thus `(f glbY)` is a lower bound of `Y`.")
    (have <b> (lower-bound R Y (f glbY)) :by <b4>)
    "Moreover, because `glbY` is a greatest lower bound:"
    (have <c> (R (f glbY) glbY) :by ((p/and-elim-right <a>) (f glbY) <b>))
    "And since `f` is monotonic:"
    (have <d1> (R (f (f glbY)) (f glbY)) :by (H2 (f glbY) glbY <c>))
    (have <d> (elem (f glbY) Y) :by <d1>)
    "Now `glbY` is still a lower bound, thus:"
    (have <e> (R glbY (f glbY)) :by ((p/and-elim-left <a>) (f glbY) <d>))
    "Thus `glbY` is a fixed point of `f`, since it is both a pre and a post-fixed point."
    (have <f> (fixed-point f glbY)
          :by ((pre-post-fixed-point R f glbY)
               (p/and-elim-right (p/and-elim-left H1))
               <e> <c>))
    "For the second part, we need to show that `glbY` is the *least* fixed point."
    (assume [z T
             Hz (fixed-point f z)]
      "A fixed point `z` is also a post-fixed point."
      (have <g1> (post-fixed-point R f z)
            :by ((fixed-point-post R f z)
                 (p/and-elim* 1 (p/and-elim-left H1))
                 Hz))
      "Hence `z` is a member of the set `Y`..."
      (have <g2> (elem z Y) :by <g1>)
      "... of which `glbY` (our fixed point) is a (greatest) lower bound, hence the following:"
      (have <g3> (R glbY z) :by ((p/and-elim-left <a>) z <g2>)))
    "Hence we have the *least* property."
    (have <g> (forall [z T]
                (==> (fixed-point f z)
                     (R glbY z))) :by <g3>)
    "Now we have obtained the `lfp` property for `glbY`."
    (have <h> (lfp R f glbY)
          :by (p/and-intro <f> <g>))
    "Which leads to the existential conclusion."
    (have <lfp> _ :by ((q/ex-intro (lambda [mu T] (lfp R f mu)) glbY) <h>)))
  (qed <lfp>))
```

**Remark**: this proof is a little bit "CPU intensive". One way to largely accelerate the process would be to decompose it into a few auxiliary lemmas. Alternatively, we may wait for a future optimized version of the LaTTe kernel.

The second part is to show *"singleness"*, i.e. that two least fixed points are unavoidably *"the same"*.

```clojure
(deflemma lfp-single
  [?T :type, R (rel T T), f (==> T T)]
  (==> (antisymmetric R)
       (q/single (lambda [mu T] (lfp R f mu)))))
```

Note that the only property we need is that of antisymmetry.

Unlike the previous one, this proof doesn't require much creativity: we just have to follow the definitions.

```clojure
(proof lfp-single-lemma
  (assume [H (antisymmetric R)]
    "We consider two least fixed points `mu1` and `mu2`."
    (assume [mu1 T
             mu2 T
             Hmu1 (lfp R f mu1)
             Hmu2 (lfp R f mu2)]
      "They are by definition of `lfp` interrelated in `R`."
      (have <a> (R mu1 mu2) :by ((p/and-elim-right Hmu1) mu2 (p/and-elim-left Hmu2)))
      (have <b> (R mu2 mu1) :by ((p/and-elim-right Hmu2) mu1 (p/and-elim-left Hmu1)))
      "Hence by antisymmetry they are equal."
      (have <c> (equal mu1 mu2) :by (H mu1 mu2 <a> <b>))))
  (qed <c>))
```

Hence we have a complete proof for the least fixed point theorem.

```clojure
(proof lfp-theorem-thm
  (assume [H1 _ H2 _]
    (have <a> _ :by (p/and-intro
                     ;; existence
                     ((lfp-ex R f) H1 H2)
                     ;; uniqueness
                     ((lfp-single R f) (p/and-elim* 3 (p/and-elim-left H1))))))
  (qed <a>))
```

**Exercise**: state and prove the dual *greatest fixed point theorem*, given below.

```clojure
(definition gfp
  "The greatest fixed point of a function `f` wrt. a relation `R`."
  [?T :type, R (rel T T), f (==> T T), nu T]
  (and (fixed-point f nu)
       (forall [y T]
         (==> (fixed-point f y)
              (R y nu)))))
```

```clojure
(defthm gfp-theorem
  "The greatest fixed-point theorem."
  [?T :type, R (rel T T), f (==> T T)]
  (==> (complete-lattice R)
       (monotonic R f)
       (q/unique (lambda [nu T] (gfp R f nu)))))
```

**Hint**: the simplest, shortest, and most interesting way to demonstrate this theorem is to exploit the duality of the notions.

## Conclusion

We started with an informal mathematical statement, which can be (re)formulated as follows: given a complete lattice `R` and a function `f` monotonic on `R`, then `f` has a least fixed point `mu` in `R`. The exercises (if done correctly) should lead to the dual fact that `f` also has a greatest fixed point `nu` in `R`.

In this (next)article, we defined formally all the involved notions using the LaTTe proof assistant. This comprises:

- the notion of a **complete lattice**, cf. `complete-lattice`
- the notion of a **monotonic**, or order-preserving, function, cf. `monotonic`
- the notion of a **fixed point**, cf. `fixed-point`

These, in turn, required a few other definitions: ordering relations, least upper bounds, etc.

Our assessment of this little exercise is, first, that order theory can be fully formalized, and that with some care the abstraction level of the involved definitions, statements, and proofs *almost* matches the classical practice of *pencil-and-paper mathematics*.

Last but not least, the presentation of formal mathematical developments as nextjournal articles appears to the author as *optimal*: this really is Live Coding (abstract) Mathematics!