# Bisimulation: Relating reduction systems

Wen Kokke, Jeremy Siek, Philip Wadler

```agda
module plfa.part2.Bisimulation where
```

Some constructs can be defined in terms of other constructs. In the previous chapter, we saw how let terms can be rewritten as an application of an abstraction, and how two alternative formulations of products — one with projections and one with case — can be formulated in terms of each other. In this chapter, we look at how to formalise such claims.

Given two different systems, with different terms and reduction rules, we define what it means to claim that one simulates the other. Let's call our two systems source and target. Let M, N range over terms of the source, and M†, N† range over terms of the target. We define a relation

```
M ~ M†
```

between corresponding terms of the two systems. We have a simulation of the source by the target if every reduction in the source has a corresponding reduction sequence in the target:

Simulation: For every M, M†, and N: If M ~ M† and M —→ N then M† —↠ N† and N ~ N† for some N†.

Or, in a diagram:

```
M  --- —→ --- N
|             |
|             |
~             ~
|             |
|             |
M† --- —↠ --- N†
```
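For finite systems, the simulation condition can even be given a small executable reading. The sketch below is illustrative only and not part of the Agda development: the function name `check_simulation`, the encoding of the relation as a set of pairs, and the step functions are all our own inventions.

```python
# Illustrative sketch (not PLFA's code): check the simulation condition
# on a finite relation.  step_src / step_tgt map a term to the set of
# its one-step reducts.

def check_simulation(pairs, step_src, step_tgt, max_len=5):
    """For every related pair (M, M†) and every source step M —→ N,
    require a target sequence M† —↠ N† (length ≤ max_len) with N ~ N†."""
    def reachable(t, bound):
        # all terms reachable from t in at most `bound` target steps;
        # —↠ includes the empty sequence, so t itself is included
        found, frontier = {t}, {t}
        for _ in range(bound):
            frontier = {u for s in frontier for u in step_tgt(s)}
            found |= frontier
        return found
    return all(
        any((n, n_dag) in pairs for n_dag in reachable(m_dag, max_len))
        for (m, m_dag) in pairs
        for n in step_src(m)
    )

# Toy example: both systems count down by 1, related by m ~ 2m,
# so one source step needs a two-step target sequence.
pairs = {(m, 2 * m) for m in range(4)}
step = lambda k: {k - 1} if k > 0 else set()
print(check_simulation(pairs, step, step, max_len=2))   # → True
print(check_simulation(pairs, step, step, max_len=1))   # → False
```

With `max_len=1` the check fails, illustrating why the target must be allowed a reduction *sequence* rather than a single step.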

Sometimes we will have a stronger condition, where each reduction in the source corresponds to a reduction (rather than a reduction sequence) in the target:

```
M  --- —→ --- N
|             |
|             |
~             ~
|             |
|             |
M† --- —→ --- N†
```

This stronger condition is known as lock-step or on the nose simulation.

We are particularly interested in the situation where there is also a simulation from the target to the source: every reduction in the target has a corresponding reduction sequence in the source. This situation is called a bisimulation.

Simulation is established by case analysis over all possible reductions and all possible terms to which they are related. For each reduction step in the source we must show a corresponding reduction sequence in the target.

For instance, the source might be lambda calculus with let added, and the target the same system with let translated out. The key rule defining our relation will be:

```
M ~ M†
N ~ N†
--------------------------------
let x = M in N ~ (ƛ x ⇒ N†) · M†
```

All the other rules are congruences: variables relate to themselves, and abstractions and applications relate if their components relate:

```
-----
x ~ x

N ~ N†
------------------
ƛ x ⇒ N ~ ƛ x ⇒ N†

L ~ L†
M ~ M†
---------------
L · M ~ L† · M†
```

Covering the other constructs of our language — naturals, fixpoints, products, and so on — would add little save length.

In this case, our relation can be specified by a function from source to target:

```
(x) †               =  x
(ƛ x ⇒ N) †         =  ƛ x ⇒ (N †)
(L · M) †           =  (L †) · (M †)
(let x = M in N) †  =  (ƛ x ⇒ (N †)) · (M †)
```
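As a cross-check, the translation can be sketched executably. The following Python fragment is illustrative only and not part of the Agda development: the tuple encoding of terms and the name `dagger` are our own inventions.

```python
# Illustrative sketch (not PLFA's code): the dagger translation on a
# tiny named-variable lambda calculus with let, encoded as tuples.

def dagger(t):
    """Translate let into an application of an abstraction;
    act homomorphically on all other constructs."""
    tag = t[0]
    if tag == "var":                      # (x)† = x
        return t
    if tag == "lam":                      # (ƛ x ⇒ N)† = ƛ x ⇒ (N†)
        return ("lam", t[1], dagger(t[2]))
    if tag == "app":                      # (L · M)† = (L†) · (M†)
        return ("app", dagger(t[1]), dagger(t[2]))
    if tag == "let":                      # (let x = M in N)† = (ƛ x ⇒ N†) · M†
        return ("app", ("lam", t[1], dagger(t[3])), dagger(t[2]))
    raise ValueError(tag)

# Example: let x = y in x  translates to  (ƛ x ⇒ x) · y
source = ("let", "x", ("var", "y"), ("var", "x"))
print(dagger(source))
# → ('app', ('lam', 'x', ('var', 'x')), ('var', 'y'))
```

Note that `dagger` is total and structural, matching the claim below that the relation can be specified by a function from source to target.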

And we have

```
M † ≡ N
-------
M ~ N
```

and conversely. But in general we may have a relation without any corresponding function.

This chapter formalises establishing that ~ as defined above is a simulation from source to target. We leave establishing it in the reverse direction as an exercise. Another exercise is to show the alternative formulations of products in Chapter More are in bisimulation.

### Imports

We import our source language from Chapter More:

```agda
open import plfa.part2.More
```

### Simulation

The simulation is a straightforward formalisation of the rules in the introduction:

```agda
infix  4 _~_
infix  5 ~ƛ_
infix  7 _~·_

data _~_ : ∀ {Γ A} → (Γ ⊢ A) → (Γ ⊢ A) → Set where

  ~` : ∀ {Γ A} {x : Γ ∋ A}
     ---------
   → ` x ~ ` x

  ~ƛ_ : ∀ {Γ A B} {N N† : Γ , A ⊢ B}
    → N ~ N†
      ----------
    → ƛ N ~ ƛ N†

  _~·_ : ∀ {Γ A B} {L L† : Γ ⊢ A ⇒ B} {M M† : Γ ⊢ A}
    → L ~ L†
    → M ~ M†
      ---------------
    → L · M ~ L† · M†

  ~let : ∀ {Γ A B} {M M† : Γ ⊢ A} {N N† : Γ , A ⊢ B}
    → M ~ M†
    → N ~ N†
      ----------------------
    → `let M N ~ (ƛ N†) · M†
```

The language in Chapter More has more constructs, which we could easily add. However, keeping the simulation small lets us focus on the essence. It's a handy technical trick that we can have a large source language, but only bother to include in the simulation the terms of interest.

##### Exercise _† (practice)

Formalise the translation from source to target given in the introduction. Show that M † ≡ N implies M ~ N, and conversely.

```agda
-- Your code goes here
```

### Simulation commutes with values

We need a number of technical results. The first is that simulation commutes with values. That is, if M ~ M† and M is a value then M† is also a value:

```agda
~val : ∀ {Γ A} {M M† : Γ ⊢ A}
  → M ~ M†
  → Value M
    --------
  → Value M†
~val ~`           ()
~val (~ƛ ~N)      V-ƛ  =  V-ƛ
~val (~L ~· ~M)   ()
~val (~let ~M ~N) ()
```

It is a straightforward case analysis, where here the only value of interest is a lambda abstraction.

##### Exercise ~val⁻¹ (practice)

Show that this also holds in the reverse direction: if M ~ M† and Value M† then Value M.

```agda
-- Your code goes here
```

### Simulation commutes with renaming

The next technical result is that simulation commutes with renaming. That is, if ρ maps any judgment Γ ∋ A to a judgment Δ ∋ A, and if M ~ M† then rename ρ M ~ rename ρ M†:

```agda
~rename : ∀ {Γ Δ}
  → (ρ : ∀ {A} → Γ ∋ A → Δ ∋ A)
    ----------------------------------------------------------
  → (∀ {A} {M M† : Γ ⊢ A} → M ~ M† → rename ρ M ~ rename ρ M†)
~rename ρ (~`)          =  ~`
~rename ρ (~ƛ ~N)       =  ~ƛ (~rename (ext ρ) ~N)
~rename ρ (~L ~· ~M)    =  (~rename ρ ~L) ~· (~rename ρ ~M)
~rename ρ (~let ~M ~N)  =  ~let (~rename ρ ~M) (~rename (ext ρ) ~N)
```

The structure of the proof is similar to the structure of renaming itself: reconstruct each term with recursive invocation, extending the environment where appropriate (in this case, only for the body of an abstraction).

### Simulation commutes with substitution

The third technical result is that simulation commutes with substitution. It is more complex than renaming, because where we had one renaming map ρ here we need two substitution maps, σ and σ†.

The proof first requires we establish an analogue of extension. If σ and σ† both map any judgment Γ ∋ A to a judgment Δ ⊢ A, such that for every x in Γ ∋ A we have σ x ~ σ† x, then for any x in Γ , B ∋ A we have exts σ x ~ exts σ† x:

```agda
~exts : ∀ {Γ Δ}
  → {σ  : ∀ {A} → Γ ∋ A → Δ ⊢ A}
  → {σ† : ∀ {A} → Γ ∋ A → Δ ⊢ A}
  → (∀ {A} → (x : Γ ∋ A) → σ x ~ σ† x)
    --------------------------------------------------
  → (∀ {A B} → (x : Γ , B ∋ A) → exts σ x ~ exts σ† x)
~exts ~σ Z      =  ~`
~exts ~σ (S x)  =  ~rename S_ (~σ x)
```

The structure of the proof is similar to the structure of extension itself. The newly introduced variable trivially relates to itself, and otherwise we apply renaming to the hypothesis.

With extension under our belts, it is straightforward to show substitution commutes. If σ and σ† both map any judgment Γ ∋ A to a judgment Δ ⊢ A, such that for every x in Γ ∋ A we have σ x ~ σ† x, and if M ~ M†, then subst σ M ~ subst σ† M†:

```agda
~subst : ∀ {Γ Δ}
  → {σ  : ∀ {A} → Γ ∋ A → Δ ⊢ A}
  → {σ† : ∀ {A} → Γ ∋ A → Δ ⊢ A}
  → (∀ {A} → (x : Γ ∋ A) → σ x ~ σ† x)
    ---------------------------------------------------------
  → (∀ {A} {M M† : Γ ⊢ A} → M ~ M† → subst σ M ~ subst σ† M†)
~subst ~σ (~` {x = x})  =  ~σ x
~subst ~σ (~ƛ ~N)       =  ~ƛ (~subst (~exts ~σ) ~N)
~subst ~σ (~L ~· ~M)    =  (~subst ~σ ~L) ~· (~subst ~σ ~M)
~subst ~σ (~let ~M ~N)  =  ~let (~subst ~σ ~M) (~subst (~exts ~σ) ~N)
```

Again, the structure of the proof is similar to the structure of substitution itself: reconstruct each term with recursive invocation, extending the environment where appropriate (in this case, only for the body of an abstraction).

From the general case of substitution, it is also easy to derive the required special case. If N ~ N† and M ~ M†, then N [ M ] ~ N† [ M† ]:

```agda
~sub : ∀ {Γ A B} {N N† : Γ , B ⊢ A} {M M† : Γ ⊢ B}
  → N ~ N†
  → M ~ M†
    -----------------------
  → (N [ M ]) ~ (N† [ M† ])
~sub {Γ} {A} {B} ~N ~M = ~subst {Γ , B} {Γ} ~σ {A} ~N
  where
  ~σ : ∀ {A} → (x : Γ , B ∋ A) → _ ~ _
  ~σ Z      =  ~M
  ~σ (S x)  =  ~`
```

Once more, the structure of the proof resembles the original.

### The relation is a simulation

Finally, we can show that the relation actually is a simulation. In fact, we will show the stronger condition of a lock-step simulation. What we wish to show is:

Lock-step simulation: For every M, M†, and N: If M ~ M† and M —→ N then M† —→ N† and N ~ N† for some N†.

Or, in a diagram:

```
M  --- —→ --- N
|             |
|             |
~             ~
|             |
|             |
M† --- —→ --- N†
```

We first formulate a concept corresponding to the lower leg of the diagram, that is, its right and bottom edges:

```agda
data Leg {Γ A} (M† N : Γ ⊢ A) : Set where

  leg : ∀ {N† : Γ ⊢ A}
    → N ~ N†
    → M† —→ N†
      --------
    → Leg M† N
```

For our formalisation, in this case, we can use a stronger relation than —↠, replacing it by —→.

We can now state and prove that the relation is a simulation. Again, in this case, we can use a stronger relation than —↠, replacing it by —→:

```agda
sim : ∀ {Γ A} {M M† N : Γ ⊢ A}
  → M ~ M†
  → M —→ N
    ---------
  → Leg  M† N
sim ~`              ()
sim (~ƛ ~N)         ()
sim (~L ~· ~M)      (ξ-·₁ L—→)
  with sim ~L L—→
...  | leg ~L′ L†—→                 =  leg (~L′ ~· ~M)   (ξ-·₁ L†—→)
sim (~V ~· ~M)      (ξ-·₂ VV M—→)
  with sim ~M M—→
...  | leg ~M′ M†—→                 =  leg (~V ~· ~M′)   (ξ-·₂ (~val ~V VV) M†—→)
sim ((~ƛ ~N) ~· ~V) (β-ƛ VV)        =  leg (~sub ~N ~V)  (β-ƛ (~val ~V VV))
sim (~let ~M ~N)    (ξ-let M—→)
  with sim ~M M—→
...  | leg ~M′ M†—→                 =  leg (~let ~M′ ~N) (ξ-·₂ V-ƛ M†—→)
sim (~let ~V ~N)    (β-let VV)      =  leg (~sub ~N ~V)  (β-ƛ (~val ~V VV))
```

The proof is by case analysis, examining each possible instance of M ~ M† and each possible instance of M —→ N, using recursive invocation whenever the reduction is by a ξ rule, and hence contains another reduction. In its structure, it looks a little bit like a proof of progress:

• If the related terms are variables, no reduction applies.

• If the related terms are abstractions, no reduction applies.

• If the related terms are applications, there are three subcases:

• The source term reduces via ξ-·₁, in which case the target term does as well. Recursive invocation gives us

```
L  --- —→ ---  L′
|              |
|              |
~              ~
|              |
|              |
L† --- —→ --- L′†
```

from which follows:

```
 L · M  --- —→ ---  L′ · M
   |                   |
   |                   |
   ~                   ~
   |                   |
   |                   |
L† · M† --- —→ --- L′† · M†
```

• The source term reduces via ξ-·₂, in which case the target term does as well. Recursive invocation gives us

```
M  --- —→ ---  M′
|              |
|              |
~              ~
|              |
|              |
M† --- —→ --- M′†
```

from which follows:

```
 V · M  --- —→ ---  V · M′
   |                  |
   |                  |
   ~                  ~
   |                  |
   |                  |
V† · M† --- —→ --- V† · M′†
```

Since simulation commutes with values and V is a value, V† is also a value.

• The source term reduces via β-ƛ, in which case the target term does as well:

```
 (ƛ x ⇒ N) · V  --- —→ ---  N [ x := V ]
      |                           |
      |                           |
      ~                           ~
      |                           |
      |                           |
(ƛ x ⇒ N†) · V† --- —→ --- N† [ x := V† ]
```

Since simulation commutes with values and V is a value, V† is also a value. Since simulation commutes with substitution and N ~ N† and V ~ V†, we have N [ x := V ] ~ N† [ x := V† ].

• If the related terms are a let and an application of an abstraction, there are two subcases:

• The source term reduces via ξ-let, in which case the target term reduces via ξ-·₂. Recursive invocation gives us

```
M  --- —→ ---  M′
|              |
|              |
~              ~
|              |
|              |
M† --- —→ --- M′†
```

from which follows:

```
let x = M in N  --- —→ --- let x = M′ in N
      |                          |
      |                          |
      ~                          ~
      |                          |
      |                          |
(ƛ x ⇒ N†) · M† --- —→ --- (ƛ x ⇒ N†) · M′†
```

• The source term reduces via β-let, in which case the target term reduces via β-ƛ:

```
let x = V in N  --- —→ ---  N [ x := V ]
      |                          |
      |                          |
      ~                          ~
      |                          |
      |                          |
(ƛ x ⇒ N†) · V† --- —→ --- N† [ x := V† ]
```

Since simulation commutes with values and V is a value, V† is also a value. Since simulation commutes with substitution and N ~ N† and V ~ V†, we have N [ x := V ] ~ N† [ x := V† ].
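The β-let square can be checked on a concrete instance. The sketch below is illustrative only and not part of the Agda development: the tuple encoding of terms and the helpers `dagger` and `subst` are our own inventions. It verifies that translating the result of the source step yields exactly the result of the target step.

```python
# Illustrative sketch (not PLFA's code): one concrete β-let square.

def dagger(t):
    """let x = M in N  ↦  (ƛ x ⇒ N†) · M†; homomorphic elsewhere."""
    tag = t[0]
    if tag == "var":
        return t
    if tag == "lam":
        return ("lam", t[1], dagger(t[2]))
    if tag == "app":
        return ("app", dagger(t[1]), dagger(t[2]))
    if tag == "let":
        return ("app", ("lam", t[1], dagger(t[3])), dagger(t[2]))

def subst(t, x, v):
    """Substitute v for variable x; naive, but fine here since v is closed."""
    tag = t[0]
    if tag == "var":
        return v if t[1] == x else t
    if tag == "lam":
        return t if t[1] == x else ("lam", t[1], subst(t[2], x, v))
    if tag == "app":
        return ("app", subst(t[1], x, v), subst(t[2], x, v))
    if tag == "let":
        body = t[3] if t[1] == x else subst(t[3], x, v)
        return ("let", t[1], subst(t[2], x, v), body)

# Source term: let x = V in (x · x), with V = ƛ y ⇒ y (a value).
V = ("lam", "y", ("var", "y"))
N = ("app", ("var", "x"), ("var", "x"))

# Source step (β-let):  let x = V in N  —→  N [ x := V ]
src_result = subst(N, "x", V)

# Target step (β-ƛ):    (ƛ x ⇒ N†) · V†  —→  N† [ x := V† ]
tgt_result = subst(dagger(N), "x", dagger(V))

# The lower corners of the square are related by the translation:
assert dagger(src_result) == tgt_result
print("β-let square commutes on this example")
```

This is precisely the content of the `~sub` lemma, specialised to one term: substitution commutes with the translation.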

##### Exercise sim⁻¹ (practice)

Show that we also have a simulation in the other direction, and hence that we have a bisimulation.

```agda
-- Your code goes here
```

##### Exercise products (practice)

Show that the two formulations of products in Chapter More are in bisimulation. The only constructs you need to include are variables, and those connected to functions and products. In this case, the simulation is not lock-step.

```agda
-- Your code goes here
```

### Unicode

This chapter uses the following unicode:

```
†  U+2020  DAGGER (\dag)
⁻  U+207B  SUPERSCRIPT MINUS (\^-)
¹  U+00B9  SUPERSCRIPT ONE (\^1)
```