-- Normalization for Intuitionistic Propositional Logic
-- using hereditary substitutions.

-- Andreas Abel, 21 May 2018

module NormalizationIPL where

-- We assume a set of propositional variables aka atomic propositions.

postulate
  Base : Set

-- Propositional formulas

data Form : Set where
  Atom  : (P : Base) → Form     -- Atomic proposition
  True  : Form                  -- Truth
  False : Form                  -- Absurdity
  _∧_   : (A B : Form) → Form   -- Conjunction   -- \ wedge
  _∨_   : (A B : Form) → Form   -- Disjunction   -- \ vee
  _⇒_   : (A B : Form) → Form   -- Implication   -- \ r 2

-- Binding strength of connective in decreasing order

infixl 8 _∧_
infixl 7 _∨_
infixr 6 _⇒_

-- The propositional calculus defines how we can derive the truth
-- of a proposition C from a list of hypotheses Γ.
-- C is called the conclusion and Γ is also called the context
-- or the list of assumptions.
-- The propositional calculus allows us to make judgments
--   Γ ⊢ C
-- meaning "if all the assumptions in Γ are true, then C must be true as well".
-- To define this judgement and formal derivations of it, we first
-- formalize hypotheses.

-- Hypotheses are snoc-lists of formulas: most recent assumption is last.
-- We can also consider hypotheses as stacks of formulas.

data Cxt : Set where
  ε   : Cxt                           -- Empty list  -- \ G e  ALT: \ epsilon
  _∙_ : (Γ : Cxt) (A : Form) → Cxt    -- Extension   -- \ .   and  \ G G for Γ

infixl 4 _∙_

-- The indexed type Hyp Γ A gives evidence that formula A is element
-- of the list of hypotheses Γ

data Hyp : (Γ : Cxt) (A : Form) → Set where
  top : ∀{Γ A}                  → Hyp (Γ ∙ A) A  -- Found A as the top element.
  pop : ∀{Γ A B} (x : Hyp Γ A)  → Hyp (Γ ∙ B) A  -- Remove the top element, search on.
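
-- As a small sanity check (an example of mine, not part of the original
-- development): in the context ε ∙ A ∙ B, the hypothesis A is found
-- one position below the top.

ex-hyp : ∀{A B} → Hyp (ε ∙ A ∙ B) A
ex-hyp = pop top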

-- Derivations for the judgement Γ ⊢ C

-- These are the proof rules of intuitionistic propositional logic.
-- In type theory, we can define Γ ⊢ C as a set whose inhabitants
-- are the derivations of Γ ⊢ C.

-- Each element of the following list of constructors of Γ ⊢ C
-- represents one proof rule for IPL.  Each rule has a number of premises
-- (in our case, between 0 and 3), and a consequence.
-- For instance, the rule
--   Γ ⊢ A    Γ ⊢ B
--   -------------- andI
--   Γ ⊢ A ∧ B
-- has two premises, namely that A and that B are derivable from Γ,
-- and allows us to conclude that A ∧ B is derivable as well.

-- We distinguish introduction rules, postfixed I, from elimination rules, postfixed E.
-- Introduction rules introduce a logical connective in the consequence, while
-- elimination rules eliminate a logical connective in one of the premises.
-- Above rule "andI" is an instance of an introduction rule, it constructs
-- evidence for a conjunction from evidence for each of the conjuncts.
-- An example for an elimination rule is:
--   Γ ⊢ A ∧ B
--   --------- andE₁
--   Γ ⊢ A
-- It allows us to conclude the derivability of the first conjunct, A, from
-- the derivability of the conjunction A ∧ B.
-- If an elimination rule has several premises, we call the one with
-- the eliminated connective the principal premise or main premise,
-- and its formula the principal or main formula, and the connective
-- the principal connective.
-- For example:
--   Γ ⊢ A ⇒ B    Γ ⊢ A
--   ------------------- impE
--   Γ ⊢ B
-- This rule, traditionally called "modus ponens", has principal connective ⇒,
-- principal formula A ⇒ B, and principal premise Γ ⊢ A ⇒ B.
-- The other premise, here, Γ ⊢ A, is called "side premise".
-- Some rules add a hypothesis to the context of a premise, for example,
-- the implication introduction rule:
--   Γ ∙ A ⊢ B
--   ---------- impI
--   Γ ⊢ A ⇒ B
-- This rule states that an implication A ⇒ B is derivable from hypotheses Γ,
-- if its conclusion B is derivable from the extended context Γ ∙ A.
-- Let's look at the Agda representation, and get explanations of
-- the remaining rules then.

infix 2 _⊢_  -- \ vdash

data _⊢_ (Γ : Cxt) : (A : Form) → Set where
  hyp    : ∀{A}     (x : Hyp Γ A)                 → Γ ⊢ A
  trueI  : Γ ⊢ True
  andI   : ∀{A B}   (t : Γ ⊢ A)     (u : Γ ⊢ B)   → Γ ⊢ A ∧ B
  andE₁  : ∀{A B}   (t : Γ ⊢ A ∧ B)               → Γ ⊢ A
  andE₂  : ∀{A B}   (t : Γ ⊢ A ∧ B)               → Γ ⊢ B
  impI   : ∀{A B}   (t : Γ ∙ A ⊢ B)               → Γ ⊢ A ⇒ B
  impE   : ∀{A B}   (t : Γ ⊢ A ⇒ B) (u : Γ ⊢ A)   → Γ ⊢ B
  orI₁   : ∀{A B}   (t : Γ ⊢ A)                   → Γ ⊢ A ∨ B
  orI₂   : ∀{A B}   (t : Γ ⊢ B)                   → Γ ⊢ A ∨ B
  orE    : ∀{A B C} (t : Γ ⊢ A ∨ B) (u : Γ ∙ A ⊢ C) (v : Γ ∙ B ⊢ C) → Γ ⊢ C
  falseE : ∀{C}     (t : Γ ⊢ False)               → Γ ⊢ C

-- Explanation:
-- hyp:    Γ ⊢ A is derivable since A ∈ Γ.
-- trueI:  Truth is trivially derivable (empty conjunction).
-- orI:    If one of the disjuncts is derivable, so is the disjunction.
-- orE:    If a disjunction A ∨ B is derivable, we can use it to derive
--         an arbitrary formula C if we can derive C both from A and from B.
-- falseE: The empty disjunction.  Once we have derived False, everything
--         is true (ex falso quodlibet).
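
-- As an illustration (my own example, not part of the original
-- development), here is a derivation of the commutativity of
-- conjunction, combining impI, andI, both conjunction eliminations,
-- and hyp:

ex-and-comm : ∀{A B} → ε ⊢ A ∧ B ⇒ B ∧ A
ex-and-comm = impI (andI (andE₂ (hyp top)) (andE₁ (hyp top)))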

-- In the remainder of this file, we define normal forms
-- and prove the normalization theorem.

-- The following Flag indicates whether we deal with a
--   lin)    a straight-line elimination using only impE and andE
--   branch) an elimination that ends in an orE or a falseE

data Flag : Set where
  lin    : Flag  -- Just impE and andE
  branch : Flag  -- lin followed by a single orE or falseE

-- The following grammar defines normal derivations (Nf),
-- which are constructed from introductions and then
-- possibly an elimination from a hypothesis.


mutual

  -- Normal derivations (Nf = normal form).
  -- We can recursively apply introduction rules
  -- and then switch to elimination from a hypothesis.

  data Nf (Γ : Cxt) : (A : Form) → Set where
    trueI  : Nf Γ True
    impI   : ∀{A B}   (t : Nf (Γ ∙ A) B)          → Nf Γ (A ⇒ B)
    andI   : ∀{A B}   (t : Nf Γ A) (u : Nf Γ B)   → Nf Γ (A ∧ B)
    orI₁   : ∀{A B}   (t : Nf Γ A)                → Nf Γ (A ∨ B)
    orI₂   : ∀{A B}   (t : Nf Γ B)                → Nf Γ (A ∨ B)
    hyp    : ∀{A C f} (x : Hyp Γ A) (es : Elims Γ A C f) → Nf Γ C

  -- Elims Γ A C f eliminates proposition A into C, using
  -- possibly several impE and andE steps first, and then
  -- possibly a final falseE or orE step.
  -- All side derivations are in Nf.

  data Elims (Γ : Cxt) : (A C : Form) (f : Flag) → Set where
    []     : ∀{A}                                               → Elims Γ A A lin
    _∷_    : ∀{A B C f} (e : Elim Γ A B) (es : Elims Γ B C f)   → Elims Γ A C f
    falseE : ∀{C}                                               → Elims Γ False C branch
    orE    : ∀{A B C} (t₁ : Nf (Γ ∙ A) C) (t₂ : Nf (Γ ∙ B) C)  → Elims Γ (A ∨ B) C branch

  -- Elim Γ A C contains a single impE or andE elimination.

  data Elim (Γ : Cxt) : (A C : Form) → Set where
    impE  : ∀{A B} (t : Nf Γ A) → Elim Γ (A ⇒ B) B
    andE₁ : ∀{A B}              → Elim Γ (A ∧ B) A
    andE₂ : ∀{A B}              → Elim Γ (A ∧ B) B
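
-- For illustration (my own example): a normal derivation of
-- A ∧ B ⇒ B ∧ A.  The hypothesis A ∧ B is eliminated by the
-- straight-line (lin) eliminations andE₂ ∷ [] and andE₁ ∷ [].

nf-and-comm : ∀{A B} → Nf ε (A ∧ B ⇒ B ∧ A)
nf-and-comm = impI (andI (hyp top (andE₂ ∷ [])) (hyp top (andE₁ ∷ [])))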

-- Normal derivations are trivially consistent:
-- There is no normal proof of False in the empty context.

nf-consistent : Nf ε False → ∀{A : Set} → A
nf-consistent (hyp () _)

-- The challenge is to show that Nf is still a complete proof system for IPL.
-- That means that whenever t : Γ ⊢ C, there exists also a normal derivation v : Nf Γ C.
-- This theorem is also called normalization, because it takes a proof t
-- and returns its normal form.

-- In the remainder of this file, we prove the normalization theorem.
-- Proving the normalization theorem amounts to supplying the "missing rules" for Nf.
-- For instance, there is a rule impI on normal derivations, but no rule
--   impE : (t : Nf Γ (A ⇒ B)) (u : Nf Γ A) → Nf Γ B
-- We will show that this rule is admissible.  It can be shown by
-- lexicographic induction on (A, t).  However, the inductive argument
-- is not trivial: it requires us to define a normalizing substitution.
-- First, we show the admissibility of conjunction and absurdity elimination,
-- which are straightforward.

-- Conjunction elimination (andE₁!).

-- For admissibility of andE₁ on normal derivations, consider the cases
-- for t : Nf Γ (A ∧ B).
-- Either t is (andI t₁ t₂) with t₁ : Nf Γ A and t₂ : Nf Γ B.
-- In this case we just return t₁.
-- Otherwise t is (hyp x es) which means that the conjunction is
-- proven from hypothesis (x : Hyp Γ X) by eliminations
-- es : Elims Γ X (A ∧ B) f.
-- In this case, we would simply wish to append an andE₁ elimination
-- to es.  This is possible if f = lin.  In case f = branch, we
-- have to distribute andE₁ to the branches.
-- For falseE, there are zero branches, and for orE, there
-- are two branches of type Nf (Γ ∙ _) (A ∧ B).
-- We turn these into Nf (Γ ∙ _) A by recursively applying andE₁!.
-- Since the branches are subterms of the original term t,
-- the recursion is valid (i.e., does not lead to infinite regress).


mutual

  andE₁! : ∀{Γ A B} → Nf Γ (A ∧ B) → Nf Γ A
  andE₁! (andI t₁ _) = t₁
  andE₁! (hyp x es)  = hyp x (andElims₁ es)

  andElims₁ : ∀ {Γ X A B f} (es : Elims Γ X (A ∧ B) f) → Elims Γ X A f
  andElims₁ (e ∷ es)    = e ∷ andElims₁ es
  andElims₁ []          = andE₁ ∷ []
  andElims₁ falseE      = falseE
  andElims₁ (orE t₁ t₂) = orE (andE₁! t₁) (andE₁! t₂)

-- Admissibility of andE₂ is analogous.


mutual

  andE₂! : ∀{Γ A B} → Nf Γ (A ∧ B) → Nf Γ B
  andE₂! (andI _ t₂) = t₂
  andE₂! (hyp x es)  = hyp x (andElims₂ es)

  andElims₂ : ∀ {Γ X A B f} (es : Elims Γ X (A ∧ B) f) → Elims Γ X B f
  andElims₂ (e ∷ es)    = e ∷ andElims₂ es
  andElims₂ []          = andE₂ ∷ []
  andElims₂ falseE      = falseE
  andElims₂ (orE t₁ t₂) = orE (andE₂! t₁) (andE₂! t₂)

-- Absurdity elimination (falseE!).

-- The only normal derivation of False is from a hypothesis X
-- using eliminations es : Elims Γ X False f.
-- To eliminate False into a proposition A, we append falseE
-- to es if f = lin, or distribute it over the branches if
-- f = branch.


mutual

  falseE! : ∀ {Γ A} → Nf Γ False → Nf Γ A
  falseE! (hyp x es) = hyp x (falseElims es)

  falseElims : ∀ {Γ X A f} → Elims Γ X False f → Elims Γ X A branch
  falseElims (e ∷ es)    = e ∷ falseElims es
  falseElims []          = falseE
  falseElims falseE      = falseE
  falseElims (orE t₁ t₂) = orE (falseE! t₁) (falseE! t₂)

-- Before we can tackle implication and disjunction elimination,
-- we have to prove some basic properties, for instance that normal
-- derivations are stable under context extensions and permutations.

-- Renaming

-- If we can prove C from a context Δ we can also prove it
-- from Γ if every hypothesis in Δ can also be found in Γ.
-- Thus, Γ can be a permutation of Δ or extension of Δ or both.

-- A "renaming" r : Ren Γ Δ is a map from the hypotheses of Δ
-- to the hypotheses of Γ.
-- If we apply r to an address x : Hyp Δ A of hypothesis A in Δ,
-- we get the address r x : Hyp Γ A of this hypothesis in Γ.

Ren : (Γ Δ : Cxt) → Set
Ren Γ Δ = ∀{A} → Hyp Δ A → Hyp Γ A

-- A renaming r from Δ to Γ can be lifted to a renaming
-- liftr r from Δ ∙ A to Γ ∙ A which leaves the position
-- of the new hypothesis A unchanged on the top.

liftr : ∀{Γ Δ A} (r : Ren Γ Δ) → Ren (Γ ∙ A) (Δ ∙ A)
liftr r top     = top
liftr r (pop x) = pop (r x)
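
-- As an example (mine, not part of the original development), here is
-- the renaming that exchanges the two topmost hypotheses:

swap : ∀{Γ A B} → Ren (Γ ∙ A ∙ B) (Γ ∙ B ∙ A)
swap top           = pop top
swap (pop top)     = top
swap (pop (pop x)) = pop (pop x)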

-- The following mutual functions prove that Nf, Elim, and Elims
-- are stable under renaming.

-- This is a routine induction.  Whenever we go under a binder,
-- like in impI or orE, we need to lift the renaming
-- to account for the new hypothesis.


mutual

  ren : ∀{Γ Δ A} (r : Ren Γ Δ) (t : Nf Δ A) → Nf Γ A
  ren r (hyp x es) = hyp (r x) (renElims r es)
  ren r trueI      = trueI
  ren r (impI t)   = impI (ren (liftr r) t)
  ren r (andI t u) = andI (ren r t) (ren r u)
  ren r (orI₁ t)   = orI₁ (ren r t)
  ren r (orI₂ t)   = orI₂ (ren r t)

  renElims : ∀{Γ Δ A B f} (r : Ren Γ Δ) (es : Elims Δ A B f) → Elims Γ A B f
  renElims r []          = []
  renElims r (e ∷ es)    = renElim r e ∷ renElims r es
  renElims r falseE      = falseE
  renElims r (orE t₁ t₂) = orE (ren (liftr r) t₁) (ren (liftr r) t₂)

  renElim : ∀{Γ Δ A B} (r : Ren Γ Δ) (e : Elim Δ A B) → Elim Γ A B
  renElim r (impE t) = impE (ren r t)
  renElim r andE₁    = andE₁
  renElim r andE₂    = andE₂

-- We will need the following three instances of renaming,
-- called weakenings.  These add an unused hypothesis on top
-- (weak and weakElim) or just below the top (weak').

weak : ∀{Γ A C} (t : Nf Γ C) → Nf (Γ ∙ A) C
weak = ren pop

weakElim : ∀{Γ A B C} (e : Elim Γ B C) → Elim (Γ ∙ A) B C
weakElim = renElim pop

weak' : ∀{Γ A B C} (t : Nf (Γ ∙ B) C) → Nf (Γ ∙ A ∙ B) C
weak' = ren (liftr pop)
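
-- A quick sanity check (my own example): weakening moves the normal
-- form that proves A from hypothesis A into an extended context.

ex-weak : ∀{A B} → Nf (ε ∙ A ∙ B) A
ex-weak = weak (hyp top [])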

-- Hereditary substitution.

-- When showing the admissibility of
--   impE : (t : Nf Γ (A ⇒ B)) (u : Nf Γ A) → Nf Γ B
-- we have to treat the case that t = impI t' with
-- t' : Nf (Γ ∙ A) B.   How do we get a Nf Γ B?
-- We substitute all uses of the top hypothesis in t' by u.
-- However, if A is not an atomic proposition,
-- this hypothesis might be under eliminations es,
-- thus, we have to normalize the elimination of u by es.
-- This might in turn trigger a substitution, for instance,
-- if u is itself an impI u' and the first elimination of es
-- is an impE v.
-- Thus, a substitution might trigger new substitutions,
-- and this process is called "hereditary substitution(s)".
-- How can we be sure this process ever terminates?
-- It turns out that all subsequent substitutions triggered
-- by a substitution of u : Nf Γ A for a hypothesis x : Hyp Γ A
-- are substitutions of some u' : Nf Γ' A' for some x' : Hyp Γ' A',
-- where A' is a subproposition of A.
-- Thus, we can define substitution of u : Nf Γ A into t' : Nf (Γ ∙ A) B
-- by an outer recursion on A and an inner recursion on t'.

-- As a first step, we define the type of substitutions
-- Sub A Γ Δ of a Nf Γ A into something in context Δ.
-- Such a substitution replaces each hypothesis Hyp Δ C
-- by a "result" Res A Γ C, where the result can be a
-- Nf Γ A if A = C and must be a hypothesis otherwise.

data Res (A : Form) (Γ : Cxt) : (C : Form) → Set where
  rnf  : (t : Nf Γ A)        → Res A Γ A
  rhyp : ∀{C} (x : Hyp Γ C)  → Res A Γ C

Sub : (A : Form) (Γ Δ : Cxt) → Set
Sub A Γ Δ = ∀{C} → Hyp Δ C → Res A Γ C

-- Routinely, we prove that results can be weakened.

weakRes : ∀{Γ A B C} (t : Res A Γ C) → Res A (Γ ∙ B) C
weakRes (rnf  t) = rnf (weak t)
weakRes (rhyp x) = rhyp (pop x)

-- We construct substitutions by the following two methods.

-- A singleton substitution sg t is constructed from a single normal form t.
-- It replaces the top hypothesis by t and all other hypotheses by themselves.

sg : ∀ {Γ A} (t : Nf Γ A) → Sub A Γ (Γ ∙ A)
sg t top     = rnf t
sg t (pop x) = rhyp x

-- We can lift a substitution to one more hypothesis which remains fixed.

lift : ∀{A Γ Δ B} (s : Sub A Γ Δ) → Sub A (Γ ∙ B) (Δ ∙ B)
lift s top     = rhyp top
lift s (pop x) = weakRes (s x)
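
-- A small example of mine illustrating sg: applied to the top
-- hypothesis it returns the substituted normal form (rnf t);
-- on all other hypotheses it acts as the identity.

ex-sg : ∀{Γ A} (t : Nf Γ A) → Res A Γ A
ex-sg t = sg t top   -- computes to rnf t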

-- Substitution of (s : Sub A Γ Δ) into normal forms (t : Nf Δ C)
-- is now defined by lexicographic induction on (A, t),
-- mutually with many auxiliary functions like substitution
-- into eliminations, elimination of a normal form,
-- and admissibility of impE and orE for normal forms.

-- Thanks to the dependent typing, most cases are forced.
-- And Agda's termination checker scrutinizes our definition,
-- to ensure correct use of the induction hypotheses.


mutual

  -- Substitution is homeomorphic except for the case (hyp x es)
  -- in which we have to apply the substitution s to hypothesis x
  -- and eliminate the result by es.

  subst : ∀ A {Γ Δ} (s : Sub A Γ Δ) {C} (t : Nf Δ C) → Nf Γ C
  subst A s trueI        = trueI
  subst A s (impI t)     = impI (subst A (lift s) t)
  subst A s (andI t₁ t₂) = andI (subst A s t₁) (subst A s t₂)
  subst A s (orI₁ t)     = orI₁ (subst A s t)
  subst A s (orI₂ t)     = orI₂ (subst A s t)
  subst A s (hyp x es)   = elimsRes A (s x) (substElims A s es)

  substElims : ∀ A {Γ Δ} (s : Sub A Γ Δ) {B C f} (es : Elims Δ B C f) → Elims Γ B C f
  substElims A s []          = []
  substElims A s (e ∷ es)    = substElim A s e ∷ substElims A s es
  substElims A s falseE      = falseE
  substElims A s (orE t₁ t₂) = orE (subst A (lift s) t₁) (subst A (lift s) t₂)

  substElim : ∀ A {Γ Δ} (s : Sub A Γ Δ) {B C} (e : Elim Δ B C) → Elim Γ B C
  substElim A s (impE t) = impE (subst A s t)
  substElim A s andE₁    = andE₁
  substElim A s andE₂    = andE₂

  -- If the result of substituting x is just a hypothesis y,
  -- we can just put the eliminations es back.
  -- Otherwise, if the result is a normal form t, we have
  -- to eliminate it recursively by es.
  -- This might trigger new substitutions in cases impE or orE.

  elimsRes : ∀ A {Γ B C f} (r : Res A Γ B) (es : Elims Γ B C f) → Nf Γ C
  elimsRes A (rhyp y) es = hyp y es
  elimsRes A (rnf t)  es = elims A t es

  -- "elims" is the place where types are actually getting smaller
  -- but terms might get bigger by invocation of impE! and orE!.

  elims : ∀ A {Γ C f} (t : Nf Γ A) (es : Elims Γ A C f) → Nf Γ C
  elims A        t []                     = t
  elims .(A ⇒ B) t (impE {A} {B} u ∷ es) = elims B (impE! A t u) es
  elims .(A ∧ B) t (andE₁ {A} {B}  ∷ es) = elims A (andE₁! t) es
  elims .(A ∧ B) t (andE₂ {A} {B}  ∷ es) = elims B (andE₂! t) es
  elims .False   t falseE                 = falseE! t
  elims .(A ∨ B) t (orE {A} {B} t₁ t₂)    = orE! A B t t₁ t₂

  -- The admissible eliminations of implication and disjunction,
  -- impE! and orE!, are invoked by "elims" and in turn invoke
  -- substitution.  It is crucial that they are called with
  -- a strictly smaller proposition than the original call to subst had.

  -- Implication elimination

  -- Triggers new substitutions in case of (impI t).  In the other case,
  -- we append the (impE u) to the eliminations es, distributing it to the
  -- branches in case of falseE or orE.

  impE! : ∀ A {Γ B} (t : Nf Γ (A ⇒ B)) (u : Nf Γ A) → Nf Γ B
  impE! A (impI t)   u = subst A (sg u) t
  impE! A (hyp x es) u = hyp x (impElims A es u)

  impElims : ∀ A {Γ B X f} (es : Elims Γ X (A ⇒ B) f) (u : Nf Γ A) → Elims Γ X B f
  impElims A (e ∷ es)    u = e ∷ impElims A es u
  impElims A []          u = impE u ∷ []
  impElims A falseE      u = falseE
  impElims A (orE t₁ t₂) u = orE (impE! A t₁ (weak u)) (impE! A t₂ (weak u))

  -- Disjunction elimination

  -- Triggers new substitutions in case of orI.  Otherwise, append
  -- an orE to the eliminations, possibly distributing it to the branches.

  orE! : ∀ A B {Γ C} (t : Nf Γ (A ∨ B)) (u : Nf (Γ ∙ A) C) (v : Nf (Γ ∙ B) C) → Nf Γ C
  orE! A B (orI₁ t)   u v = subst A (sg t) u
  orE! A B (orI₂ t)   u v = subst B (sg t) v
  orE! A B (hyp x es) u v = hyp x (orElims A B es u v)

  orElims : ∀ A B {Γ C X f} (es : Elims Γ X (A ∨ B) f)
    (u : Nf (Γ ∙ A) C)
    (v : Nf (Γ ∙ B) C) → Elims Γ X C branch
  orElims A B (e ∷ es)    u v = e ∷ orElims A B es u v
  orElims A B []          u v = orE u v
  orElims A B falseE      u v = falseE
  orElims A B (orE t₁ t₂) u v = orE (orE! A B t₁ (weak' u) (weak' v))
                                    (orE! A B t₂ (weak' u) (weak' v))

-- Normalization.

-- Finally, we can show that each derivation of Γ ⊢ A
-- also has a normal derivation.
-- The proof is by induction on the derivation, using
-- the admissible elimination rules to compose the
-- results of the induction hypotheses.

nf : ∀{Γ A} (t : Γ ⊢ A) → Nf Γ A
nf (hyp x)       = hyp x []
nf trueI         = trueI
nf (andI t u)    = andI (nf t) (nf u)
nf (andE₁ t)     = andE₁! (nf t)
nf (andE₂ t)     = andE₂! (nf t)
nf (impI t)      = impI (nf t)
nf (impE t u)    = impE! _ (nf t) (nf u)
nf (orI₁ t)      = orI₁ (nf t)
nf (orI₂ t)      = orI₂ (nf t)
nf (orE t t₁ t₂) = orE! _ _ (nf t) (nf t₁) (nf t₂)
nf (falseE t)    = falseE! (nf t)
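
-- A final sanity check (my own example): the derivation below proves
-- True via a detour through an implication; nf eliminates the detour.
-- Unfolding the definitions, nf ex-detour reduces to trueI.

ex-detour : ε ⊢ True
ex-detour = impE (impI (hyp top)) trueI

ex-detour-nf : Nf ε True
ex-detour-nf = nf ex-detour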

-- QED

-- Acknowledgements.
-- I learned the trick of doing hereditary substitutions by just
-- /structural/ recursion on types from Conor McBride.
-- He sent me some Agda code that accomplished this for the STLC using
-- the left-to-right elimination vectors, which we extended here to sum types.
-- I could have learned it from Felix Joachimski and Ralph Matthes
-- as well: the normalization algorithm presented here should be
-- the computational content of their strong normalization proof
-- for lambda-calculus with permutative conversions (AML 2003).
-- However, back then I did not have the formalization skills I have now
-- to translate their generous vector notation into
-- the statically well-typed Elims.

-- Literature.

-- Dag Prawitz:
--   Natural Deduction: A Proof-Theoretical Study
--   Dover Publications (1965)

-- Felix Joachimski, Ralph Matthes:
--   Short proofs of normalization for the simply-typed lambda-calculus,
--   permutative conversions and Gödel's T.
--   Arch. Math. Log. 42(1): 59-87 (2003)

-- Kevin Watkins, Iliano Cervesato, Frank Pfenning, David Walker:
--   A concurrent logical framework I: Judgements and properties.
--   Technical report CMU-CS-02-101,
--   School of Computer Science, Carnegie Mellon University, Pittsburgh (2003)

-- Andreas Abel:
--   Implementing a normalizer using sized heterogeneous types.
--   J. Funct. Program. 19(3-4): 287-310 (2009)

-- Chantal Keller, Thorsten Altenkirch:
--   Hereditary Substitutions for Simple Types, Formalized.
--   MSFP@ICFP 2010: 3-10

-- Harley D. Eades III, Aaron Stump:
--   Exploring the Reach of Hereditary Substitution.
--   International Conference on Types for Proofs and Programs, TYPES 2011, Bergen, Norway.
--   Abstract: https://sites.google.com/site/types2011/program/types2011-talkswabstracts#Eades
--   Talk:     http://metatheorem.org/includes/talks/2012-TYPES.pdf
--   Paper:    http://metatheorem.org/includes/pubs/TYPES-2012.pdf