MiniAgda Home Page
MiniAgda is a toy implementation of a dependently typed
functional programming language. Its purpose is to try out
extensions to Type Theory, such as coinductive types, sized types for
termination, extensionality for record types, erasure, and subtyping.
Read

Wellfounded Recursion with Copatterns:
A Unified Approach to Termination and Productivity

Andreas Abel, Brigitte Pientka (2013)
The 18th ACM SIGPLAN International Conference on Functional Programming (ICFP 2013), Boston, MA, USA, 25-27 September 2013.

Type-Based Termination, Inflationary Fixed-Points,
and Mixed Inductive-Coinductive Types

Andreas Abel (2012)
Invited talk at the Workshop on Fixed Points in Computer Science (FICS 2012), March 24, 2012, Tallinn, Estonia. Satellite workshop to ETAPS 2012.
.pdf
.ps.gz
.dvi
.bib

Termination Checking: Comparing Structural
Recursion and Sized Types by Examples

David Thibodeau (2011)
Student Research Report, McGill University.

MiniAgda: Integrating Sized and Dependent Types

Andreas Abel (2010)
Workshop on Partiality and Recursion in Interactive Theorem Provers (PAR 2010), 15 July 2010, Federated Logic Conference (FLoC 2010), Edinburgh, UK.
.pdf
.ps.gz
.dvi
.bib

MiniAgda on the Haskell Communities and Activities Report

Termination Checking for a Dependently Typed Language

Karl Mehltretter's diploma thesis (2007)
Slides (in German).
Features and bugs

Sized inductive and coinductive types.

Termination check combining sized types and size-change termination à la Neil Jones.

Coinductive definitional equality from Conor McBride’s Ripley cabinet. Known bug: Equality is not transitive.

Erased arguments in the style of the Implicit Calculus of
Constructions by Alexandre Miquel and Bruno Barras / Bruno Bernardo.

Eta-expansion for tuple types.
- 2009-09-20: Extended to non-recursive constructors of non-overlapping pattern inductive families. Examples: the identity type, the vectors of length 0.
- 2009-07-03: Case for inductive types (not families) which adds rewrite rules in the branches.
Report a bug!
Missing features
- Coverage check for pattern matching.
- Hidden arguments and unification.
- Module system.
- Holes and user interaction.
Download
New: As of 2014-01-09, get the latest release of
MiniAgda on Hackage:
cabal install MiniAgda
Old instructions: MiniAgda is available as a source distribution and
has been tested on x86-Linux and Mac OS X. It requires a recent GHC; I am using 6.12.1, 7.0.1, and 7.4.1.
Download one of the versions below, unpack, and then make or,
thanks to Nils Anders Danielsson, cabal install.
Have fun!
Blog with examples
Families need not store their indices
2013-04-08: I implemented Edwin Brady's TYPES 2003 proposal that
one can erase index arguments in constructors if the family indices
are patterns. Concretely, I now allow a parameter to
be instantiated by a pattern, similar to Idris-style constructors,
which do not know the distinction between data parameters and data indices
in the first place.
Here is what the hello world of dependent types,
vectors and append function, may look like now:
data Vec ++(A : Set) (n : Nat)
{ vnil : Vec A zero
; vcons (vhead : A) (vtail : Vec A n) : Vec A (suc n)
} fields vhead, vtail
fun append : [A : Set] [n : Nat] [m : Nat] -> Vec A n -> Vec A m -> Vec A (plus n m)
{ append A .zero m vnil ys = ys
; append A (.suc n) m (vcons x xs) ys = vcons x (append A n m xs ys)
}

Since the constructor vcons
no longer quantifies over n
, it must be
computed from the type Vec A (suc n)
of the fully applied
constructor. This is possible thanks to bidirectional
typechecking.
On the lhs of append
, we need to get hold
of n
somehow. Since it is absent from the
constructed vcons x xs
, we need to get it from the
erased argument [n : Nat]
. However, we cannot match
on this argument with suc n
, since matching on erased
things is impossible (they are not there at runtime). Instead, we
use the new feature of dotted constructor
pattern (.suc n)
which says that the only
possibility here is a successor so we can project its
argument n
out without needing to match. Since
the n
is only used at an erased position on the rhs,
this is fine.
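For readers more at home in Haskell, the same erasure of the length index can be sketched with GADTs, where the index lives only at the type level and is likewise absent at runtime. This is my comparison, not part of MiniAgda; the names N, Plus, and toList are mine:

```haskell
{-# LANGUAGE DataKinds, GADTs, TypeFamilies, KindSignatures #-}

-- Type-level naturals serving as length indices.
data N = Z | S N

-- Length-indexed vectors; the index n exists only in types,
-- just like the erased [n : Nat] argument in the MiniAgda version.
data Vec (n :: N) a where
  VNil  :: Vec 'Z a
  VCons :: a -> Vec n a -> Vec ('S n) a

-- Type-level addition, mirroring plus on the indices.
type family Plus (n :: N) (m :: N) :: N where
  Plus 'Z     m = m
  Plus ('S n) m = 'S (Plus n m)

-- append never touches the indices at runtime.
append :: Vec n a -> Vec m a -> Vec (Plus n m) a
append VNil         ys = ys
append (VCons x xs) ys = VCons x (append xs ys)

-- Forget the index to obtain an ordinary list.
toList :: Vec n a -> [a]
toList VNil         = []
toList (VCons x xs) = x : toList xs
```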
We continue on the hello world! track:
data Fin (n : Nat)
{ fzero : Fin (suc n)
; fsuc (i : Fin n) : Fin (suc n)
}
fun lookup : [A : Set] [n : Nat] (i : Fin n) (xs : Vec A n) -> A
{ lookup A .zero () vnil
; lookup A (.suc n) fzero (.vcons x xs) = x
; lookup A (.suc n) (fsuc i) (.vcons x xs) = lookup A n i xs
}

After we matched on the i : Fin n
, the only case for
the vector is a vcons x xs
, thus, the pattern can be dotted.
A nice example is given by simply-typed lambda terms (see below). Note
the non-linear patterns in the type of
vzero : Var (cons a cxt) a
.
data Ty { nat ; arr (a, b : Ty) }
let Cxt = List Ty
data Var (cxt : Cxt) (a : Ty)
{ vzero : Var (cons a cxt) a
; vsuc (x : Var cxt b) : Var (cons a cxt) b
}
data Tm (cxt : Cxt) (a : Ty)
{ var (x : Var cxt a) : Tm cxt a
; app [a : Ty]
(r : Tm cxt (arr a b))
(s : Tm cxt a) : Tm cxt b
; abs (t : Tm (cons a cxt) b) : Tm cxt (arr a b)
}

Evaluation looks like this:
fun Sem : Ty -> Set
{ Sem nat = Nat
; Sem (arr a b) = Sem a -> Sem b
}
fun Env : Cxt -> Set
{ Env nil = Unit
; Env (cons a as) = Sem a & Env as
}
fun val : [cxt : Cxt] [a : Ty] -> Var cxt a -> Env cxt -> Sem a
{ val (.cons a cxt) .a vzero (v, vs) = v
; val (.cons a cxt) b (vsuc x) (v, vs) = val cxt b x vs
}
fun sem : [cxt : Cxt] [a : Ty] -> Tm cxt a -> Env cxt -> Sem a
{ sem cxt a (var x) rho = val cxt a x rho
; sem cxt b (app a r s) rho = (sem cxt (arr a b) r rho) (sem cxt a s rho)
; sem cxt (.arr a b) (abs t) rho v = sem (cons a cxt) b t (v, rho)
}

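The same evaluator can be sketched in Haskell using GADTs for well-typed terms and nested pairs for environments. This is only a rough analogue of the MiniAgda sem function; the constructor names V, A, L and the helper lookupVar are my choices, and a C constructor for constants is added so that closed terms have something to compute with:

```haskell
{-# LANGUAGE GADTs #-}

-- Well-typed de Bruijn variables: Var env a witnesses that the
-- environment type env contains a value of type a.
data Var env a where
  VZ :: Var (a, env) a
  VS :: Var env b -> Var (a, env) b

-- Well-typed terms, mirroring the MiniAgda Tm family.
data Tm env a where
  V :: Var env a -> Tm env a
  A :: Tm env (a -> b) -> Tm env a -> Tm env b   -- application
  L :: Tm (a, env) b -> Tm env (a -> b)          -- abstraction
  C :: a -> Tm env a                             -- constants (my addition)

-- The analogue of val: project a value out of the environment.
lookupVar :: env -> Var env a -> a
lookupVar (v, _)  VZ     = v
lookupVar (_, vs) (VS x) = lookupVar vs x

-- The analogue of sem: evaluation into Haskell values.
eval :: env -> Tm env a -> a
eval env (V x)   = lookupVar env x
eval env (A r s) = (eval env r) (eval env s)
eval env (L t)   = \v -> eval (v, env) t
eval _   (C c)   = c
```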
Instantiation to infinity
2013-04-05: A bounded existential quantifier [i < #] & F i
can now be
instantiated to #
if F
is lower
semicontinuous, for instance, if it is antitone or an inductive type
of size i
.
data Maybe ++(A : Set) { nothing ; just (a : A) }
cofun Nat : +Size -> Set
{ Nat i = [j < i] & Maybe (Nat j)
}
pattern pzero i = i , nothing
pattern psucc i n = i , just n
let zeroInf : Nat # = pzero #
let succInf (n : Nat #) : Nat # = psucc # n

Analogously, a bounded universal quantifier [i < #] -> F i
can now be
instantiated to #
if F
is upper
semicontinuous, for instance, if it is monotone or a coinductive type
of size i
.
record Prod ++(A, B : Set) { pair (fst : A) (snd : B) } fields fst, snd
cofun Stream : ++(A : Set) -> (i : Size) -> Set
{ Stream A i = [j < i] -> Prod A (Stream A j)
}
pattern cons x xs = pair x xs
let head [A : Set] (s : Stream A #) : A = s # .fst
let tail [A : Set] (s : Stream A #) : Stream A # = s # .snd

A paradox connected to careless handling of infinite size
2013-04-02: We can encode all of untyped lambda calculus,
exploiting MiniAgda's careless handling of infinity.
cofun D : Size -> Set
{ D i = [j < i] -> i < $$j -> D j -> D j
}

The constraints effectively express that i = $j
.
However, they also exclude i = #
.
We will exploit the fact that MiniAgda does not require
i < #
to infer i < $i
to work around the constraints and instantiate
i
to
#
nevertheless.
fun app_ : [i : Size] -> D $i -> D i -> D i
{ app_ i f = f i
}

Here, f of type
[j < $i] -> $i < $$j -> D j -> D j
is instantiated with i < $i
,
which is invalid for i = #
under # = $#
, to
$i < $$i -> D i -> D i
which is just D i -> D i
.
Thus app_
should only have type [i < #] -> D $i -> D i -> D i
.
With that type, we could not make the following instantiation i = #
.
let app : D # -> D # -> D # = app_ #

Now, towards abstraction...
fun abs_ : [i : Size] -> (D i -> D i) -> D $i
{ abs_ i f j = f
}

This typechecks because the constraints j < $i
and $i < $$j
guarantee i = j
.
However, MiniAgda should not accept the constraints on
j
unless
i < #
.
The rest is easy:
let abs : (D # -> D #) -> D # = abs_ #
let delta : D # = abs (\ x -> app x x)
let Omega : D # = app delta delta

There are two ways out: The first, already implemented but
deactivated, distinguishes size variables strictly below infinity from
those that can also be instantiated with infinity. However, I am
tending more towards giving up on $# = #
, this might be
clearer to the user. Of course, we need to identify situations in
which we can still treat # + n
as #
.
2013-04-05: A third alternative is to forbid type D
because it is discontinuous. Precisely, D #
is
distinct from D i
for all ordinals i < #
.
Such types should probably be rejected by MiniAgda.
Harmful bounded lambda abstractions
2013-03-30: Here is a use of a bounded lambda-abstraction that
destroys strong normalization.
data Unit { unit }
fun sabotage : [i : Size] -> ([j < i] -> Unit) -> Unit
{ sabotage i f = unit
}
fun notSN : [i : Size] -> |i| -> Unit
{ notSN i = sabotage i (\ j -> notSN j)
}

Apparently, notSN
is not strongly normalizing.
The lambda on the r.h.s. introduces a size variable j < i; however,
it is not guaranteed that such a j always exists, e.g., when i = 0.
Such lambdas are
henceforth forbidden, unless you use cofun
which does not
aim at strong normalization.
Mixed inductivecoinductive types
2012-01-28: Before diving into today's topic, let us give an
example of how to define a data type instead of declaring it.
-- * Booleans
data Bool { true ; false }
fun If : Bool -> ++(A, B : Set) -> Set
{ If true A B = A
; If false A B = B
}
-- * disjoint sum
let Either ++(A, B : Set) = (b : Bool) & If b A B
pattern left a = (true , a)
pattern right b = (false , b)

Like in
PiSigma,
we can bootstrap data types from function type (->
), pair
type (&
), labels or tags
(like true
and false
), and definition by
cases (If
) and recursion (see below).
pattern
declarations are a convenience to make code
easier to read and write; they are nothing but macros, albeit ones
consisting only of constructors.
The type of streams is defined by recursion:
cofun Stream : ++(A : Set) -> (i : Size) -> Set
{ Stream A i = [j < i] -> A & (Stream A j)
}
pattern cons x xs = x , xs

The definition says that when you apply a stream
s : Stream A i
to a size j
which is
below i
, then you get a pair consisting of the head of
type A
and the tail of type Stream A j
.
The guardedness level j
of the tail is strictly below the
guardedness level i
of the whole stream.
An s : Stream A #
is a stream which can be unrolled
infinitely often (#
being concrete syntax for ∞),
this is the beast we are interested in. We can
always take the head of such a stream:
let head [A : Set] (s : Stream A #) : A
= case (s 0) { (cons x xs) -> x }

Note that by applying s
to 0 we get an unguarded stream
xs : Stream A 0
, but we do not care, since it is thrown away.
We can always take the tail of an ∞stream as well.
However, this needs a little justification.
Basically, we need to convince the type checker that
Stream A #
is the greatest fixed-point of
F X = A & X
. Such a fixed-point exists because the operator
F
is strictly positive.
However, MiniAgda never checked this, since the definition
of Stream
was by recursion, and the only
requirement for recursive definition is termination.
(Termination holds since the recursive call Stream A j
has decreased the second argument compared to the l.h.s.
Stream A i
.)
To define the tail operation, we use the fact that j < ∞
implies j+1 < ∞.
fun tail : [A : Set] -> Stream A # -> Stream A #
{ tail A s (j < #) = case (s $j) { (cons x xs) -> xs j }
}

Unrolling the stream tail A s : Stream A #
at guardedness level j
is performed by first
unrolling stream s
at guardedness level
j+1
. This yields a pair cons x xs
whose second component xs : Stream A $j
can be unrolled at level j
to produce the desired
result, the unrolled tail.
Now to our topic, mixed inductive-coinductive data types.
Stream processors
SP A B
are a representation of continuous functions from streams over A
to streams over B
.
Here is a Haskell representation:
data SP a b = Get (a -> SP a b) | Put b (SP a b)
run :: SP a b -> [a] -> [b]
run (Get f) as = run (f (head as)) (tail as)
run (Put b sp) as = b : run sp as

Function run
, called eat in the linked publication,
executes a stream processor, i.e., turns it into an actual
function from streams to streams.
In general, run sp
might not be productive, i.e., consume
stream as
forever without outputting a single
b
. To prevent this, we require that stream processors
only have finitely many Get
s between
two Put
s. This can be achieved by nesting an inner,
inductive fixed-point, which handles the gets, into an outer,
coinductive fixed-point, which accounts for the puts.
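As an illustration (my example, not from the linked paper), here is a stream processor in the Haskell encoding that has exactly one Get between any two Puts, so productivity of run is evident; the definitions are repeated so the snippet is self-contained:

```haskell
-- Repeating the definitions so the example stands alone.
data SP a b = Get (a -> SP a b) | Put b (SP a b)

run :: SP a b -> [a] -> [b]
run (Get f)    as = run (f (head as)) (tail as)
run (Put b sp) as = b : run sp as

-- A processor with one Get between two Puts:
-- it reads a number and emits its double, forever.
doubler :: SP Int Int
doubler = Get (\a -> Put (2 * a) doubler)
```

Running it on the naturals, take 3 (run doubler [1..]) yields [2,4,6].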
In MiniAgda, we can define a mixed coinductive-inductive type
by using both bounded existential (induction) and bounded universal
quantification (coinduction).
cofun SP : (A : Set) -> ++(B : Set) -> (i : Size) -> +(j : Size) -> Set
{ SP A B i j = Either ([j' < j] & (A -> SP A B i j'))  -- get
(B & ([i' < i] -> SP A B i' #))  -- put
}
pattern get j f = left (j , f)
pattern put b sp = right (b , sp)

The first constructor, get, is inductive, while the second, put, is
coinductive. The two size indices i and j form a lexicographic
product, where the index i for coinduction is the primary and the
index j for induction the secondary component. The termination
checker infers this order from the recursive calls, but we can also
explicitly specify it:
cofun SP : (A : Set) -> ++(B : Set) -> (i : Size) -> +(j : Size) -> |i,j| -> Set

The run function terminates by the same order. This time, we have to
specify it explicitly.
cofun run : [A, B : Set] -> [i, j : Size] -> |i,j| -> SP A B i j -> Stream A # -> Stream B i
{ run A B i j (get j' f) as = run A B i j' (f (head A as)) (tail A as)
; run A B i j (put b sp) as (i' < i) = cons b (run A B i' # (sp i') as)
}

MiniAgda answers that it understood the following:
term run : .[A : Set] -> .[B : Set] -> .[i : Size] -> .[j : Size] -> SP A B i j -> Stream A # -> Stream B i
{ run [A] [B] [i] [j] (true, ([j'], f)) as = run [A] [B] [i] [j'] (f (head [A] as)) (tail [A] as)
; run [A] [B] [i] [j] (false, (b, sp)) as [i' < i] = (b , run [A] [B] [i'] [#] (sp [i']) as)
}

If you drop everything enclosed in [brackets], you get the Haskell program
(modulo encoded constructors)! Great, isn't it?
term run : SP A B i j -> Stream A # -> Stream B i
{ run (true, f) as = run (f (head as)) (tail as)
; run (false, (b, sp)) as = (b , run sp as)
}

New telescope syntax for let
2012-01-27: Telescopes are now allowed in let definitions. Also, the type of a let can
be left out if it is inferable (in the sense of bidirectional checking). Examples:
-- Telescopes in let-declarations

-- instead of
let two : [A : Set] -> (f : A -> A) -> (a : A) -> A
= \ A f a -> f (f a)
-- one can now write
let two1 [A : Set] (f : A -> A) (a : A) : A
= f (f a)
-- since the type A of the let-body f (f a) is inferable
-- we can omit it
let two2 [A : Set] (f : A -> A) (a : A)
= f (f a)
-- Untyped local let

-- inferable types of local let declarations can also be omitted
let twice [F : Set -> Set] (f : [A : Set] -> A -> F A)
[A : Set] (a : A) : F (F A)
= let [FA] = F A in
let fa = f A a in f FA fa
-- local lets can also use telescopes
let localLetTel : Size =
let two1 [A : Set] (f : A -> A) (a : A)
= f (f a)
in 0
-- and can still be made irrelevant
let localLetIrr [A : Set] (f : [A -> A] -> Size) [a : A] : Size =
let [g] (x : A) = a
in f g
-- alternative with . instead of brackets
let localLetIrr1 [A : Set] (f : .(A -> A) -> Size) .(a : A) : Size =
let .g (x : A) = a
in f g

New constructor syntax II
2012-01-27: Telescopes are now allowed in constructor declarations, and the target can
be omitted if it is not a family (GADT). Thus, the Maybe type can be declared as:
data Maybe (A : Set)
{ nothing
; just (a : A)
}

New constructor syntax
2012-01-23: I decided to remove parameter arguments from constructors, as it is in Agda. Now, there is a bit less boilerplate to type. New syntax:
-- data declarations as before
data Maybe (A : Set)
{ nothing : Maybe A
; just : A -> Maybe A
}
-- parameter A no longer needed in constructor patterns and terms
fun mapMaybe : [A, B : Set] -> (A -> B) -> Maybe A -> Maybe B
{ mapMaybe A B f nothing = nothing
; mapMaybe A B f (just a) = just (f a)
}

Greatest common divisor (example)
2011-12-16: Here is a useful example with bounded quantification in an
auxiliary function:
sized data Nat : Size -> Set
{ zero : [i : Size] -> Nat ($ i)
; suc : [i : Size] -> Nat i -> Nat ($ i)
}
-- subtracting two numbers with minus yields the difference
-- plus a bit indicating the bigger number of the two
data Either : +Size -> +Size -> Set
{ left : [i,j : Size] -> Nat i -> Either i j
; right : [i,j : Size] -> Nat j -> Either i j
}
fun minus : [i,j : Size] -> Nat i -> Nat j -> Either i j
{ minus i j (zero (i > i')) m = right i j m
; minus i j (suc (i > i') n) (zero (j > j')) = left i j (suc i' n)
; minus i j (suc (i > i') n) (suc (j > j') m) = minus i' j' n m
}
-- greatest common divisor (gcd) in "Agda with" style
-- uses an auxiliary function distinguishing over the result of subtraction
-- note how bounded quantification is used to maintain size relations
mutual {
fun gcd : [i,j : Size] -> Nat i -> Nat j -> Nat #
{ gcd i j (zero (i > i')) m = m
; gcd i j (suc (i > i') n) (zero (j > j')) = suc i' n
; gcd i j (suc (i > i') n) (suc (j > j') m) =
gcd_aux i j i' j' n m (minus i' j' n m)
}
fun gcd_aux : [i,j : Size] -> [i' < i] -> [j' < j] -> Nat i' -> Nat j' ->
Either i' j' -> Nat #
{ gcd_aux i j i' j' n m (left .i' .j' n') = gcd i' j n' (suc j' m)
; gcd_aux i j i' j' n m (right .i' .j' m') = gcd i j' (suc i' n) m'
}
}

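Stripped of all size information, the same algorithm can be written in Haskell on machine integers. This is my sketch, not part of MiniAgda; minus returns Left d when the first argument exceeds the second by d > 0, and Right d otherwise, mirroring the clause structure of the MiniAgda code:

```haskell
-- minus n m computes the difference and tells which argument was bigger,
-- mirroring the MiniAgda minus on unary numbers.
minus :: Int -> Int -> Either Int Int
minus 0 m = Right m
minus n 0 = Left n
minus n m = minus (n - 1) (m - 1)

-- gcd by repeated subtraction, following the clauses of gcd/gcd_aux.
gcdSub :: Int -> Int -> Int
gcdSub 0 m = m
gcdSub n 0 = n
gcdSub n m = case minus (n - 1) (m - 1) of
  Left d  -> gcdSub d m   -- n > m: continue with the difference and m
  Right d -> gcdSub n d   -- m >= n: continue with n and the difference
```

The bounded quantifiers of the MiniAgda version record exactly why these recursive calls are on smaller arguments.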
Bounded size quantification
2010-11-12: Bounded size quantification is now experimentally supported.
Intended use is
[j : Size] -> |j| <= |i| ->
or
[j : Size] -> |j| < |i| ->
for i
a size
variable. If you tread beyond these paths, weird and buggy stuff might
happen.
Here are some useless examples:
sized data Nat : Size -> Set
{ zero : [i : Size] -> Nat $i
; succ : [i : Size] -> Nat i -> Nat $i
}
let mySucc : [i : Size] -> [j : Size] -> |j| < |i| -> Nat j -> Nat i
= \ i j n -> succ j n
let boundedId : [i : Size] -> [j : Size] -> |j| <= |i| -> Nat j -> Nat j
= \ i j n -> n
fun explicitCast : [i : Size] -> [j : Size] -> |j| <= |i| -> Nat j -> Nat i
{ explicitCast i j n = n
}

3 Function spaces: irrelevant, parametric, recursive
2010-11-09: Working towards a representation of parametric functions
in MiniAgda. This is sometimes called the LF function space. I'd
call it parametric function space. It contains the functions that do
not analyze their argument but may store it in some structure. (In
contrast, irrelevant functions discard their argument.) Functions
that do analyze their argument I call recursive. The prototypical
parametric functions are the pure (typed) lambdacalculus and data
constructors. Anything that involves (non-irrefutable)
pattern matching is computational (recursive) and not parametric.
The characterization of function spaces is done via polarities. In
general, the function space is written p(x : A) -> B
. The following polarities are available.
. Constant/irrelevant function
++ Strictly positive function (types only)
+ Monotone function (types only)
- Antitone function (types only)
^ Parametric (lambda) function
* Recursive function

If no polarity is given, MiniAgda tries to put some reasonable default
(work in progress). Constructor types are parametric by default (the
only other sensible polarity here is "irrelevant"). Data
type telescopes are parametric in the data type and irrelevant in the
constructor types. Sometimes a data type parameter needs to be
declared as recursive, e.g.
data AccPar (A : Set) (Lt : A -> A -> Set) *(b : A) : Set
{ accPar : (accParOut : (a : A) -> Lt a b -> AccPar A Lt a) -> AccPar A Lt b
}

Since Lt : *A -> *A -> Set
(default is recursive!)
is a predicate which may analyse its arguments, AccPar
needs to be
declared recursive in b
, otherwise the call Lt a b
would be rejected.
Here, an example where all polarities are given explicitly.
let Const : ++Set -> .Set -> Set
= \ A -> \ X -> A
let DNeg : ^ Set -> +Set -> Set
= \ B -> \ A -> * (* A -> B) -> B
data Empty : Set {}
data Cont +(A : Set) : Set
{ cont : (uncont : DNeg Empty A) -> Cont A
}
sized data Nat : + Size -> Set
{ zero : [i : Size] -> Nat ($ i)
; succ : [i : Size] -> Nat i -> Nat ($ i)
}
let cast : [i : Size] -> ^ Cont (Nat i) -> Cont (Nat #)
= \ i -> \ x -> x
fun Id : * Nat # -> ++Set -> Set
{ Id (zero .#) A = A
; Id (succ .# n) A = A
}
let kast : [i : Size] -> [n : Nat i] -> ^ Id n (Nat i) -> Id n (Nat #)
= \ i -> \ n -> \ x -> x

Now, [x : A] -> B
is identified with .(x : A) -> B
.
MiniAgda currently prints it as .[x : A] -> B
.
Size addition
2010-11-01: Size addition has been added to MiniAgda, but is not
systematically supported yet. In particular, size constraint solving is
more difficult with addition, becoming a max-plus constraint problem.
A few simple examples work already, for instance the evens
function suggested to me by Florent Balestrini:
sized codata Stream ++(A : Set) : Size -> Set
{ cons : [i : Size] -> (head : A) -> (tail : Stream A i) -> Stream A $i
}
cofun evens : [A : Set] -> [i : Size] -> Stream A (i + i) -> Stream A i
{ evens A ($i) (cons .A .(i + i + 1) a (cons .A .(i + i) b as)) =
cons A i a (evens A i as)
}
cofun map2 : [A, B : Set] -> (A -> B) ->
[i : Size] -> Stream A (i + i) -> Stream B (i + i)
{ map2 A B f ($ i) (cons .A .$(i + i) a1 (cons .A .(i + i) a2 as)) =
cons B $(i + i) (f a1) (cons B (i + i) (f a2) (map2 A B f i as))
}

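On Haskell's lazy lists, where all the size bookkeeping is implicit, the same function is a two-liner (my comparison, not from the page):

```haskell
-- Take every other element of an (infinite) list; the MiniAgda type
-- Stream A (i + i) -> Stream A i records that the output is only
-- "half as deep" as the input.
evens :: [a] -> [a]
evens (a : _ : as) = a : evens as
evens as           = as   -- finite leftovers; unreachable on infinite lists
```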
Heterogeneous Algorithmic Equality
2010-10-01: When you combine dependent irrelevant quantification with
large eliminations, the equality test needs to be heterogeneous in
some cases, i.e., the compared expressions need to come with their own
type in their own context. The following example was presented to me by Ulf Norell during AIM 11:
fun T : Bool -> Set
{ T true = Nat
; T false = Bool
}
fun bad :
[F : Nat -> Set] ->
[f : [x : Bool] -> T x -> Nat] ->
(g : (n : Nat) -> F (f true n)) ->
(h : F (f false false) -> Bool) ->
Bool
{ bad F f g h = h (g zero)
}

h expects an argument of type F (f false false) but
gets one of type F (f true zero). Comparing the spines
of f we skip the first argument of type Bool, since
it is declared irrelevant. The second argument is now false :
Bool on the one hand and zero : Nat on the other hand.
Thus, we come to compare expressions at different types. MiniAgda
thus has a heterogeneous algorithmic equality, which fails as soon as
it has to compare expressions at types of a different shape. For
instance, comparing a function to a number will fail because the types
cannot be reconciled. In the above example, Bool
and Nat are two different data types which are not
compatible.
This is not terribly inefficient:
To check shapes for compatibility, the weak head normal
form of the involved types is sufficient.
Proof Monad
2010-09-22: Today's improvement of deep matching in irrelevant contexts
lets me type check the multiplication (join) of the
proof monad (also called Squash)
without trustMe.
data Prf ++(A : Set) : Set
{ prf : [A] -> Prf A
}
fun elimPrf : [A : Set] -> [P : Prf A -> Set] ->
(f : [a : A] -> P (prf A a)) ->
[x : Prf A] -> P x
{ elimPrf A P f (prf .A a) = f a
}
-- monad operations for Prf
fun mapPrf : [A, B : Set] -> (A -> B) -> Prf A -> Prf B
{ mapPrf A B f (prf .A a) = prf B (f a)
}
fun joinPrf : [A : Set] -> Prf (Prf A) -> Prf A
{ joinPrf A (prf .(Prf A) (prf .A a)) = prf A a
}
let bindPrf : [A, B : Set] -> Prf A -> (A -> Prf B) -> Prf B
= \ A B pa f -> joinPrf B (mapPrf A (Prf B) f pa)

Expression classification
2010-09-20: MiniAgda now classifies expressions into terms, type
(constructor)s, and universe (constructor)s. This is an important
step towards program extraction, but it also improves the analysis
which arguments of a function can contribute to the computation of the
result. A simple example:
f : (A : Set) -> (x : A) -> A

No matter how f is defined, the type argument A
cannot contribute to the result, since types cannot be matched on (no
typecase!), and A cannot be returned as part of the result
since A is not an element of A.
B : Set
g : (A : Set) -> (A -> B) -> B

Here, the first argument of g is again irrelevant, since
result type B can hold terms, not types like A.
MiniAgda also marks index arguments in constructors irrelevant, as
described in Edwin Brady's thesis (and his TYPES 2003 paper).
data Vec (A : Set) : Nat -> Set
{ nil : Vec A zero
; cons : (n : Nat) -> A -> Vec A n -> Vec A (succ n)
}
{- MiniAgda output
type Vec : (A : Set) -> Nat -> Set
term nil : [A : Set] -> Vec A zero
term cons : [A : Set] -> [n : Nat] -> A -> Vec A n -> Vec A (succ n)
-}

MiniAgda classifies Vec as type (this is not hard, it lives
in Set). Constructors are always classified as terms. Not
only the parameter A : Set is marked as irrelevant in the
constructors, but also the index n : Nat in cons.
This is sound, since if you want to know the length of a vector, the
type system requires you to already have it, so you do not need to
look inside of cons.
fun length : (A : Set) -> (n : Nat) -> Vec A n -> Nat
{ length A n v = n
}
{- MiniAgda output
term length : [A : Set] -> (n : Nat) -> Vec A n -> Nat
{ length [A] n v = n
}
-}

MiniAgda's erasure could be more aggressive here and also dump the
vector v : Vec A n. Future work...
In the destructors, you can see erasure in action.
fun head : (A : Set) -> (n : Nat) -> Vec A (succ n) -> A
{ head .A .m (cons A m x xl) = x
}
{- MiniAgda output
term head : [A : Set] -> (n : Nat) -> Vec A (succ n) -> A
{ head [.?0] .?1 (cons [A] [m] x xl) = x
}
-}

Currently, you have to be careful where you place the dots in the patterns.
fun append : (A : Set) -> (n, m : Nat) -> Vec A n -> Vec A m -> Vec A (add n m)
{ append .A .zero m (nil A) v = v
; append .A .(succ n) m (cons A n x xl) v = cons A (add n m) x (append A n m xl v)
}
{- MiniAgda output
term append : [A : Set] -> (n : Nat) -> (m : Nat) -> Vec A n -> Vec A m -> Vec A (_add n m)
error during typechecking:
append
/// clause 2
/// right hand side
... variable n : Nat may not occur
/// , because it is marked as erased
-}

Since MiniAgda is not smart enough to see that n, m : Nat are
irrelevant, it complains that we extracted n from an
irrelevant argument of cons and tried to use it on the
r.h.s. in a computationally meaningful way (or so it thinks). The
following dotting works.
fun append : (A : Set) -> (n, m : Nat) -> Vec A n -> Vec A m -> Vec A (add n m)
{ append A .zero m (nil .A) v = v
; append A (succ n) m (cons .A .n x xl) v = cons A (add n m) x (append A n m xl v)
}
{- MiniAgda output
term append : [A : Set] -> (n : Nat) -> (m : Nat) -> Vec A n -> Vec A m -> Vec A (_add n m)
{ append [A] .?1 m (nil [.?3]) v = v
; append [A] (succ n) m (cons [.?3] [.?4] x xl) v = cons [A] [_add n m] x (append [A] n m xl v)
}
-}

Classification and irrelevance inference do not yet work for
universe-polymorphic things.
Mutual data types
2010-08-28: MiniAgda now supports mutual data types.
mutual {
data Even : Set
{ ev0 : Even
; evS : Odd -> Even
}
data Odd : Set
{ oddS : Even -> Odd
}
}
data Nat : Set
{ zero : Nat
; suc : Nat -> Nat
}
mutual {
fun evenToNat : Even -> Nat
{ evenToNat ev0 = zero
; evenToNat (evS o) = suc (oddToNat o)
}
fun oddToNat : Odd -> Nat
{ oddToNat (oddS e) = suc (evenToNat e)
}
}

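For comparison, the same mutual structure in Haskell, with Int standing in for Nat (my sketch; the names are direct transliterations):

```haskell
-- Mutually recursive data types for even and odd unary numbers.
data Even = Ev0 | EvS Odd
data Odd  = OddS Even

-- Mutually recursive conversion functions, as in the MiniAgda example.
evenToNat :: Even -> Int
evenToNat Ev0     = 0
evenToNat (EvS o) = 1 + oddToNat o

oddToNat :: Odd -> Int
oddToNat (OddS e) = 1 + evenToNat e
```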
Measures
2010-07-27: Following ideas of Hongwei Xi (LICS 2001), one can now explicitly
state the termination argument using measures |s1,...,sn|
which are compared lexicographically.
data Bool : Set
{ true : Bool
; false : Bool
}
sized data Nat : Size -> Set
{ zero : [i : Size] -> Nat ($ i)
; succ : [i : Size] -> Nat i -> Nat ($ i)
}
mutual {
fun even : [i : Size] -> |i,$0| -> Nat i -> Bool
{ even i n = even' i n
}
fun even' : [i : Size] -> |i,0| -> Nat i -> Bool
{ even' i (zero (i > j)) = true
; even' i (succ (i > j) n) = odd' j n
}
fun odd' : [i : Size] -> |i,0| -> Nat i -> Bool
{ odd' i (zero (i > j)) = false
; odd' i (succ (i > j) n) = even j n
}
}

Sometimes we need bounds to explain termination.
Below we use bounded size quantification
[j : Size] -> |j| < |i| -> Stack A j -> A -> Stack A j
in function pushFunc to show that pushFix is
productive. Note that the recursive call
pushFix A on the right-hand side has the above-mentioned bounded type.
data Maybe (A : Set) : Set
{ nothing : Maybe A
; just : A -> Maybe A
}
-- stack object
sized codata Stack (A : Set) : Size -> Set
{ stack : [i : Size] ->
(top : Maybe A) ->
(pop : Stack A i) ->
(push : A -> Stack A i) -> Stack A $i
}
-- functional to construct push action
cofun pushFunc : [A : Set] -> [i : Size] -> |i| ->
([j : Size] -> |j| < |i| -> Stack A j -> A -> Stack A j) ->
Stack A i -> A -> Stack A i
{ pushFunc A ($ i) f s a = stack A i (just A a) s (f i (pushFunc A i f s a))
}
-- tying the knot
cofun pushFix : [A : Set] -> [i : Size] -> |i| -> Stack A i -> A -> Stack A i
{ pushFix A ($ i) = pushFunc A ($ i) (pushFix A)
}
-- constructing the empty stack
cofun empty : [A : Set] -> [i : Size] -> |i| -> Stack A i
{ empty A ($ i) = stack A i (nothing A) (empty A i) (pushFix A i (empty A i))
}

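For intuition, here is the same stack object in Haskell, where laziness plays the role of the size-based productivity argument (my sketch; pushOn corresponds to the tied-knot pushFix over pushFunc):

```haskell
-- A stack object: a record of observations, like the codata Stack.
data Stack a = Stack
  { top  :: Maybe a
  , pop  :: Stack a
  , push :: a -> Stack a
  }

-- pushOn s a is the stack s with a on top; its own push field is
-- tied back to pushOn, just as pushFix ties the knot over pushFunc.
pushOn :: Stack a -> a -> Stack a
pushOn s a = s' where s' = Stack (Just a) s (pushOn s')

-- The empty stack observes Nothing on top and is its own pop.
empty :: Stack a
empty = Stack Nothing empty (pushOn empty)
```

For example, top (push (push empty 1) 2) observes Just 2, and popping once gets back to Just 1.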
Universe polymorphism
Set : Set has been replaced by a predicative hierarchy Set 0 <= Set 1 <= ...
data List [i : Size](A : Set i) : Set i
{ nil : List i A
; cons : A -> List i A -> List i A
}

Polarity annotations
2010-06-20: Polarities are now first-class. Functions can carry a
polarity in front of their domain to characterize their monotonicity
behavior.
This is how they are used:
p       meaning of polarity
++      strictly positive
+       positive (covariant, monotone)
-       negative (contravariant, antitone)
/       constant (invariant)
(none)  arbitrary (mixed variant)
-- A-labelled B-branching trees
data Tree (B : Set) ++(A : Set) : Set
{ leaf : Tree B A
; node : A -> (B -> Tree B A) -> Tree B A
}
-- sized inductive types
sized data Mu ++(F : ++Set -> Set) : +Size -> Set
{ inn : [i : Size] -> (out : F (Mu F i)) -> Mu F ($ i)
}
-- iteration (universal property of the initial F-algebra Mu)
fun iter : [F : ++Set -> Set] ->
(mapF : [A : Set] -> [B : Set] -> (A -> B) -> F A -> F B) ->
[G : Set] -> (step : F G -> G) ->
[i : Size] -> Mu F i -> G
{ iter F mapF G step i (inn .F (i > j) t) =
step (mapF (Mu F j) G (iter F mapF G step j) t)
}
-- natural numbers
data Unit : Set
{ unit : Unit
}
data Sum ++(A : Set) ++(B : Set) : Set
{ inl : A -> Sum A B
; inr : B -> Sum A B
}
let NatF : ++Set -> Set = \ X -> Sum Unit X
let Nat : +Size -> Set = Mu NatF
let zero : [i : Size] -> Nat ($ i)
= \ i -> inn NatF i (inl Unit (Nat i) unit)
let succ : [i : Size] -> Nat i -> Nat ($ i)
= \ i -> \ n -> inn NatF i (inr Unit (Nat i) n)

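The same construction goes through in Haskell, where Functor instances play the role of mapF (my sketch; zero', succ', and toInt are names I chose):

```haskell
-- Least fixed-point of a (strictly positive) functor f.
newtype Mu f = Inn (f (Mu f))

-- Iteration: the universal property of the initial f-algebra.
iter :: Functor f => (f g -> g) -> Mu f -> g
iter step (Inn t) = step (fmap (iter step) t)

-- Natural numbers as Mu NatF with NatF X = Either () X.
type NatF = Either ()

zero' :: Mu NatF
zero' = Inn (Left ())

succ' :: Mu NatF -> Mu NatF
succ' n = Inn (Right n)

-- Interpreting such a natural number back into Int via iteration.
toInt :: Mu NatF -> Int
toInt = iter (either (const 0) (+ 1))
```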
Corecursive Functions Returning a Tuple
2010-06-08: Jones and Gibbons (1993) present a circular program that
relabels a possibly infinite binary tree in breadth-first order, taking
labels from an infinite stream. Conor McBride showed me this example
on a flight back from Tallinn in 2007 (if I remember correctly), and
we conjectured that it can be handled with sized types. Nils Anders
Danielsson treats it in his paper Beating the Productivity
Checker presented at the Workshop on Partiality and Recursion
in Interactive Theorem Provers (PAR '10). Now I have modified
MiniAgda to accept this example.
Let us first define the types of Cartesian product, streams and
possibly infinite trees. Prod is a record with two
fields fst and snd.
data Prod (+ A : Set)(+ B : Set) : Set
{ pair : (fst : A) -> (snd : B) -> Prod A B
}
sized codata Stream (+ A : Set) : Size -> Set
{ cons : [i : Size] -> (head : A) -> (tail : Stream A i) -> Stream A ($ i)
}
sized codata Tree (+ A : Set) : Size -> Set
{ leaf : [i : Size] -> Tree A ($ i)
; node : [i : Size] -> A -> Tree A i -> Tree A i -> Tree A ($ i)
}

The heart of the example is the corecursive function lab
which returns a pair of the relabelled tree and the remaining
stream-stream of labels. If we split it into two functions, one
returning the relabelled tree and one returning the remaining labels,
we have two "proper" corecursive functions, returning
something of coinductive type, but then all efficiency is lost.
However, as verified in my TLCA 2003 paper, a corecursive function can
also return a Cartesian product of coinductive types. It is crucial
that both types are indexed by the same size variable i.
cofun lab : [i : Size] -> [A : Set] -> [B : Set] ->
Tree A i -> Stream (Stream B #) i ->
Prod (Tree B i) (Stream (Stream B #) i)
{
lab ($ i) A B (leaf .A .i) bss =
pair (Tree B ($ i)) (Stream (Stream B #) ($ i)) (leaf B i) bss
; lab ($ i) A B (node .A .i x l r)
(cons .(Stream B #) .i (cons .B .# b bs) bss) =
-- recursive call on left subtree
let pl : Prod (Tree B i) (Stream (Stream B #) i)
= lab i A B l bss
-- recursive call on right subtree, threading the label stream-stream
in let pr : Prod (Tree B i) (Stream (Stream B #) i)
= lab i A B r (snd (Tree B i) (Stream (Stream B #) i) pl)
in pair (Tree B ($ i)) (Stream (Stream B #) ($ i))
(node B i b (fst (Tree B i) (Stream (Stream B #) i) pl)
(fst (Tree B i) (Stream (Stream B #) i) pr))
(cons (Stream B #) i bs
(snd (Tree B i) (Stream (Stream B #) i) pr))
}

Following Danielsson, we represent the cycle in the original program
by a corecursive function label2. The main
function label calls lab with an initial
stream of streams generated by label2.
cofun label2 : [i : Size] -> [A : Set] -> [B : Set] ->
  Tree A i -> Stream B # -> Stream (Stream B #) i
{ label2 ($ i) A B t bs = snd (Tree B ($ i)) (Stream (Stream B #) ($ i))
    (lab ($ i) A B t (cons (Stream B #) i bs (label2 i A B t bs)))
}
fun label : [i : Size] -> [A : Set] -> [B : Set] ->
  Tree A i -> Stream B # -> Tree B i
{ label i A B t bs = fst (Tree B i) (Stream (Stream B #) i)
    (lab i A B t (cons (Stream B #) i bs (label2 i A B t bs)))
}

Here is some code to test the program.
data Unit : Set
{ unit : Unit
}
data Nat : Set
{ Z : Nat
; S : Nat -> Nat
}
cofun nats : [i : Size] -> Nat -> Stream Nat i
{ nats ($ i) n = cons Nat i n (nats i (S n))
}
fun finTree : Nat -> Tree Unit #
{ finTree Z = leaf Unit #
; finTree (S n) = node Unit # unit (finTree n) (finTree n)
}
eval let t0 : Tree Nat # = label # Unit Nat (finTree Z) (nats # Z)
eval let t1 : Tree Nat # = label # Unit Nat (finTree (S Z)) (nats # Z)
eval let t2 : Tree Nat # = label # Unit Nat (finTree (S (S Z))) (nats # Z)
eval let t3 : Tree Nat # = label # Unit Nat (finTree (S (S (S Z)))) (nats # Z)

It was necessary to change the evaluation strategy of MiniAgda from
call-by-value to call-by-name, since eager constructors destroy strong
normalization even if cofuns are only unfolded on demand. To see this, consider the same program in Haskell (to save ourselves from tiring type arguments).
data P a b = P !a !b;
data Stream a = C !a (Stream a);
data Tree a = L
            | N !a (Tree a) (Tree a);
fst (P a b) = a;
snd (P a b) = b;
-- lazy lab
lab :: Tree a -> Stream (Stream b) -> P (Tree b) (Stream (Stream b))
lab L bss = P L bss;
lab (N x l r) (C (C b bs) bss) =
  let pl = lab l bss
  in let pr = lab r (snd pl)
  in P (N b (fst pl) (fst pr))
       (C bs (snd pr));
-- lazy label2
label2 :: Tree a -> Stream b -> Stream (Stream b)
label2 t bs = snd (lab t (C bs (label2 t bs)));
label :: Tree a -> Stream b -> Tree b
label t bs = fst (lab t (C bs (label2 t bs)));

The cofuns lab and label2 are only unfolded if their
result is analyzed by pattern matching. Let t = N x t t be an
infinite tree and bs = C y bs an infinite stream, and
consider computing the weak head normal form of label t bs.
It unfolds to a fst (lab ...), which forces unfolding of
the cofun lab. If the constructors N
and C are strict, this again computes a fst (lab ...), which
again forces lab. This continues forever, never returning
the weak head normal form.
Thus, I need to depart from the strategy of my thesis, which only
required lazy cofun unfolding. Now constructors must also be lazy;
hence, we need to depart from call-by-value.
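For contrast, here is a sketch of the same Haskell program with the strictness annotations removed. With lazy constructors, label terminates. The projections are renamed fstP/sndP to avoid clashing with the Prelude, and nats is an assumed helper for generating labels.

```haskell
-- the same program with lazy (non-strict) constructors
data P a b = P a b
data Stream a = C a (Stream a)
data Tree a = L | N a (Tree a) (Tree a) deriving (Eq, Show)

fstP :: P a b -> a
fstP (P a _) = a

sndP :: P a b -> b
sndP (P _ b) = b

lab :: Tree a -> Stream (Stream b) -> P (Tree b) (Stream (Stream b))
lab L bss = P L bss
lab (N _ l r) (C (C b bs) bss) =
  let pl = lab l bss           -- relabel left subtree
      pr = lab r (sndP pl)     -- thread remaining labels to the right
  in  P (N b (fstP pl) (fstP pr)) (C bs (sndP pr))

label2 :: Tree a -> Stream b -> Stream (Stream b)
label2 t bs = sndP (lab t (C bs (label2 t bs)))

label :: Tree a -> Stream b -> Tree b
label t bs = fstP (lab t (C bs (label2 t bs)))

-- assumed helper: the stream of consecutive integers from n
nats :: Int -> Stream Int
nats n = C n (nats (n + 1))
```

Relabelling a two-level tree breadth-first, label (N () (N () L L) (N () L L)) (nats 0) evaluates to N 0 (N 1 L L) (N 2 L L).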
Large Eliminations and Sized Types
2010-05-08: MiniAgda allows the following sized typing of the binary
maximum function on natural numbers.
sized data Nat : Size -> Set
{ zero : [i : Size] -> Nat ($ i)
; succ : [i : Size] -> Nat i -> Nat ($ i)
}
fun max2 : [i : Size] -> Nat i -> Nat i -> Nat i
{ max2 i (zero (i > j)) n = n
; max2 i m (zero (i > j)) = m
; max2 i (succ (i > j) m) (succ (i > k) n) =
    let [h : Size] = max j k
    in succ h (max2 h m n)
}

In this example, we introduce a new irrelevant size variable h
through a let binding.
An n-ary maximum can also be defined via a recursively defined
sized type Maxs.
fun Maxs : Nat # -> Size -> Set
{ Maxs (zero .#)   i = Nat i
; Maxs (succ .# n) i = Nat i -> Maxs n i
}
fun maxs : (n : Nat #) -> [i : Size] -> Nat i -> Maxs n i
{ maxs (zero .#)   i m = m
; maxs (succ .# n) i m = \ l -> maxs n i (max2 i m l)
}

Coinductive types II
2010-03-12: Getting coinduction right!
The following looks ok, but is flawed.
(Thanks to Nils Anders Danielsson for this example.)
sized codata Stream (+ A : Set) : Size -> Set
{ cons : (i : Size) -> A -> Stream A i -> Stream A ($ i)
}
cofun map2 : (i : Size) -> (Nat -> Nat) -> Stream Nat i -> Stream Nat i
{ map2 .($ ($ i)) f (cons .Nat .($ i) u (cons .Nat i x xl)) =
    cons Nat _ (f u) (cons Nat _ (f x) (map2 _ f xl))
}

The type of map2 promises: if you give me a stream that
can be consumed to at least depth i, I return you a stream
with the same guarantee. Returning a stream of guaranteed depth
0 is never a problem, since that is a contentless promise (like:
I promise to give you 0 EUR). But the code of map2
fails to fulfill the promise for depth 1. Given a stream of depth 1
only, it diverges when trying to match to depth 2, instead of
returning a stream of depth 1.
Consequently, MiniAgda rejects map2. Technically, this is
realized by only accepting dot patterns for the principal size
argument of a coinductive constructor. This makes sense, because in
order to match coinductive data, I need to know beforehand that it
is guarded, meaning that it has non-zero depth. Thus, I
cannot just try to match as for inductive data, because matching
coinductive data triggers evaluation, which might not return if it
fails to produce a guard.
Thus, an attempt to define map2 needs to look like:
cofun map2 : (i : Size) -> (Nat -> Nat) -> Stream Nat i -> Stream Nat i
{ map2 ($ ($ i)) f (cons .Nat .($ i) u (cons .Nat .i x xl)) =
    cons Nat _ (f u) (cons Nat _ (f x) (map2 _ f xl))
}

Now we are matching on the input size using the pattern $ ($ i).
Clearly this is not a complete match: the size could be
just 1, and then no clause matches.
Consequently, this is also rejected by MiniAgda. Only the size pattern
$ i is accepted.
Matching to depth 2 would require a type like
Stream Nat ($$ i) -> Stream Nat ($$ i), but MiniAgda does not support it
(although such an extension is thinkable).
For deep coinductive matching, one can use
case i { ($ j) -> ... }.
Such a case (like the pattern ($ i)) is legal when
constructing a term whose type ends in a codata type with
depth i. This is sound since in the case i = 0 the
promise is empty and trivially fulfilled; the only other case is non-zero
depth, which can be handled by a successor pattern.
cofun fib : (i : Size) -> Stream Nat i
{ fib i = case i
    { ($ j) -> cons Nat j 0 (case j
        { ($ k) -> cons Nat k 1 (zipWith Nat Nat Nat add k
            (fib k)
            (tail Nat k (fib ($ k)))) }) }
}

Here the case on i is ok since we are producing a Stream Nat i.
Inside the first coconstructor cons Nat j 0 ..., we are producing a
Stream Nat j; thus, we can case on j. Under the two constraints
i > j and j > k, the two recursive calls to fib
typecheck!
Usually, the $ i pattern is sufficient and case
can be avoided.
cofun zipWith : (A : Set) -> (B : Set) -> (C : Set) ->
  (A -> B -> C) -> (i : Size) ->
  Stream A i -> Stream B i -> Stream C i
{ zipWith A B C f ($ i) (cons .A .i a as) (cons .B .i b bs) =
    cons C i (f a b) (zipWith A B C f i as bs)
}
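Ignoring the size arguments, fib and zipWith correspond to the classic Haskell definitions on lazy lists. A sketch (zipW is the list analogue of the sized zipWith):

```haskell
-- size-erased analogues of zipWith and fib on Haskell's lazy lists
zipW :: (a -> b -> c) -> [a] -> [b] -> [c]
zipW f (a : as) (b : bs) = f a b : zipW f as bs
zipW _ _        _        = []

-- fib = 0, 1, then the pointwise sum of fib and its tail
fib :: [Integer]
fib = 0 : 1 : zipW (+) fib (tail fib)
```

Here laziness alone guarantees productivity; the sized version makes the same guardedness argument explicit in the types.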

Correct deep matching
2010-03-12: CORIAS meeting in Val d'Ajol (thanks to Cody Roux for
getting me invited and to Gilles Dowek for inviting me). Solved the
treatment of deep matching. When matching against a variable
x : D i of sized type D i with constructor
c : [j : Size] -> D j -> D ($ j), we do not instantiate
i to $ j but add the constraint i > j instead.
data Nat : Set
{ zero : Nat
; succ : Nat -> Nat
}
sized data Ord : Size -> Set
{ ozero : [i : Size] -> Ord ($ i)
; osucc : [i : Size] -> Ord i -> Ord ($ i)
; olim  : [i : Size] -> (Nat -> Ord i) -> Ord ($ i)
}
fun idO : [i : Size] -> Ord i -> Ord i
{ idO i (ozero (i > j))   = ozero j
; idO i (osucc (i > j) p) = osucc j (idO j p)
; idO i (olim  (i > j) f) = olim j (\ n -> idO j (f n))
}

(i > j) is a new size pattern that matches size
variable j against the actual size, recording the
constraint i > j for type checking. The counterexample below
cannot be replayed with these size patterns.
The maximum function can still be precisely typed using a max
on sizes.
sized data Nat : Size -> Set
{ zero : [i : Size] -> Nat ($ i)
; succ : [i : Size] -> Nat i -> Nat ($ i)
}
fun maxN : [i : Size] -> Nat i -> Nat i -> Nat i
{ maxN i (zero (i > j))   (zero (i > k))   = zero j
; maxN i (zero (i > j))   (succ (i > k) m) = succ k m
; maxN i (succ (i > j) n) (zero (i > k))   = succ j n
; maxN i (succ (i > j) n) (succ (i > k) m) = succ (max j k)
                                                  (maxN (max j k) n m)
}

Nontermination with deep matching
2010-02-02: With Cody Roux I found the following counterexample
to termination checking with deep matching.
data Nat : Set
{ zero : Nat
; succ : Nat -> Nat
}
sized data O : Size -> Set
{ Z : [i : Size] -> O ($ i)
; S : [i : Size] -> O i -> O ($ i)
; L : [i : Size] -> (Nat -> O i) -> O ($ i)
; M : [i : Size] -> O i -> O i -> O ($ i)
}
fun emb : Nat -> O #
{ emb zero = Z #
; emb (succ n) = S # (emb n)
}
let pre : [i : Size] -> (Nat -> O ($$ i)) -> Nat -> O ($ i)
  = \ i -> \ f -> \ n -> case (f (succ n))
    { (Z .($ i))     -> Z i
    ; (S .($ i) x)   -> x
    ; (L .($ i) g)   -> g n
    ; (M .($ i) a b) -> a
    }
fun deep : [i : Size] -> O i -> Nat -> Nat
{ deep .($$$$ i) (M .($$$ i)
                    (L .($$ i) f)
                    (S .($$ i) (S .($ i) (S i x)))) n
    = deep _ (M _ (L _ (pre _ f)) (S _ (f n))) (succ (succ (succ n)))
; deep i x n = n
}
let four : Nat
  = succ (succ (succ (succ zero)))
eval let loop : Nat = deep # (M # (L # emb) (emb four)) four

The pattern (L .($$ i) f) in deep is the problem.
The correct pattern is (L .i f). By subtyping, both terms are
of the required type O ($$$ i), but the least size must
be chosen to be on the safe side.
Thus, the implementation of dot patterns has to be changed to account
for subtyping.
Forced matching
2010-01-21: A partial interpreter for untyped lambda calculus, with a
termination proof in the style of Bove/Capretta.
data Nat : Set
{ zero : Nat
; succ : Nat -> Nat
}
data List (+ A : Set) : Set
{ nil  : List A
; cons : A -> List A -> List A
}
-- de Bruijn terms
data Exp : Set
{ var : Nat -> Exp
; abs : Exp -> Exp
; app : Exp -> Exp -> Exp
}
-- set of values
data D : Set
{ clos : Exp -> (List D) -> D
}
-- environment operations
let Env : Set
  = List D
let empty : Env
  = nil D
let update : Env -> D -> Env
  = \ rho -> \ d -> cons D d rho
let dummy : D
  = clos (var zero) empty
fun lookup : Env -> Nat -> D
{ lookup (nil .D)        n        = dummy
; lookup (cons .D d rho) zero     = d
; lookup (cons .D d rho) (succ n) = lookup rho n
}
-- inductive graph of the evaluation function
data Eval : Exp -> Env -> D -> Set
{ evVar : (k : Nat) -> (rho : Env) ->
    Eval (var k) rho (lookup rho k)
; evAbs : (e : Exp) -> (rho : Env) ->
    Eval (abs e) rho (clos e rho)
; evApp : (f : Exp) -> (e : Exp) -> (rho : Env) ->
    (f' : Exp) -> (rho' : Env) -> (d : D) -> (d' : D) ->
    Eval f rho (clos f' rho') ->
    Eval e rho d ->
    Eval f' (update rho' d) d' ->
    Eval (app f e) rho d'
}
-- evaluation as a partial function
{- after erasure, the function will take the form
   evaluate : Exp -> Env -> D
-}
mutual {
  fun evaluate : (e : Exp) -> (rho : Env) ->
    [d : D] -> [Eval e rho d] -> D
  { evaluate (var k) rho .(lookup rho k) (evVar .k .rho) = lookup rho k
  ; evaluate (abs e) rho .(clos e rho)   (evAbs .e .rho) = clos e rho
  ; evaluate (app f e) rho .d' (evApp .f .e .rho f' rho' d d' evF evE evF')
      = apply f' rho' (evaluate f rho (clos f' rho') evF)
          (evaluate e rho d evE) d' evF'
  }
  fun apply : [f' : Exp] -> [rho' : Env] -> (f : D) ->
    (d : D) -> [d' : D] -> [Eval f' (update rho' d) d'] -> D
  { apply .f' .rho' (clos f' rho') d d' p = evaluate f' (update rho' d) d' p
  }
}

Note that in evaluate we match against [Eval e rho d]
even though it is erased. This is possible since there is only one
matching constructor.
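After erasure, evaluate and apply indeed collapse into an ordinary partial interpreter. A Haskell sketch of the erased program (the names lookupD and the dummy default are assumptions mirroring the MiniAgda code above; the evaluator diverges on non-normalizing terms, as a partial function must):

```haskell
-- de Bruijn terms and closures, mirroring Exp and D above
data Exp = Var Int | Abs Exp | App Exp Exp deriving (Eq, Show)
data D = Clos Exp [D] deriving (Eq, Show)

-- total lookup with a dummy value, as in the MiniAgda lookup
lookupD :: [D] -> Int -> D
lookupD []        _ = Clos (Var 0) []   -- dummy
lookupD (d : _)   0 = d
lookupD (_ : rho) n = lookupD rho (n - 1)

-- the erased evaluator: evaluate and apply fused into one function
evaluate :: Exp -> [D] -> D
evaluate (Var k)   rho = lookupD rho k
evaluate (Abs e)   rho = Clos e rho
evaluate (App f e) rho =
  case evaluate f rho of
    Clos f' rho' -> evaluate f' (evaluate e rho : rho')
```

For instance, evaluating (\x. x) (\x. x) in the empty environment yields the closure Clos (Var 0) [].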
Counting constructors in termination checking
2010-01-14: Now the following code is accepted.
data Nat : Set
{ zero : Nat
; succ : Nat -> Nat
}
mutual {
  fun f : Nat -> Nat
  { f zero = zero
  ; f (succ zero) = zero
  ; f (succ (succ n)) = g n
  }
  fun g : Nat -> Nat
  { g zero = zero
  ; g (succ n) = f (succ (succ n))
  }
}

Since f takes off two constructors before calling g, g may put one
back before calling f without jeopardizing termination. This is
especially useful in combination with sized types. It makes the
following implementation of merge termination-check.
data Bool : Set
{ true  : Bool
; false : Bool
}
data Nat : Set
{ zero : Nat
; suc  : Nat -> Nat
}
sized data List : Size -> Set
{ nil  : (i : Size) -> List ($ i)
; cons : (i : Size) -> Nat -> List i -> List ($ i)
}
fun leq : Nat -> Nat -> Bool {}
-- merge as it would be represented with "with" in Agda
mutual {
  fun merge : (i : Size) -> List i -> (j : Size) -> List j -> List #
  { merge .($ i) (nil i) j l = l
  ; merge i l .($ j) (nil j) = l
  ; merge .($ i) (cons i x xs) .($ j) (cons j y ys) =
      merge_aux i x xs j y ys (leq x y)
  }
  fun merge_aux : (i : Size) -> Nat -> List i ->
                  (j : Size) -> Nat -> List j -> Bool -> List #
  { merge_aux i x xs j y ys true  = cons # x (merge i xs ($ j) (cons j y ys))
  ; merge_aux i x xs j y ys false = cons # y (merge ($ i) (cons i x xs) j ys)
  }
}
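Erasing the sizes, merge and merge_aux correspond to the following Haskell functions on lists; the auxiliary function plays the role of the with-abstraction, and the mutual recursion exhibits the same "take two, put one back" constructor count:

```haskell
-- size-erased merge of two ordered lists
merge :: [Int] -> [Int] -> [Int]
merge []       l        = l
merge l        []       = l
merge (x : xs) (y : ys) = mergeAux x xs y ys (x <= y)

-- mergeAux rebuilds one cons before calling back into merge
mergeAux :: Int -> [Int] -> Int -> [Int] -> Bool -> [Int]
mergeAux x xs y ys True  = x : merge xs (y : ys)
mergeAux x xs y ys False = y : merge (x : xs) ys
```

In Haskell termination is not checked, of course; the point is that the sized MiniAgda version certifies exactly this recursion pattern.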

Inconsistency from injective data type constructors
2010-01-12: This is an implementation of Chung-Kil Hur's and Hugo Herbelin's
derivation of absurdity. It makes use of fancy case.
data Empty : Set {}
data Unit : Set
{ unit : Unit
}
data Or (A : Set) (B : Set) : Set
{ inl : A -> Or A B
; inr : B -> Or A B
}
data Eq (A : Set) (a : A) : A -> Set
{ refl : Eq A a a
}
fun cong : (F : Set -> Set) -> (G : Set -> Set) -> (A : Set) ->
  Eq (Set -> Set) F G -> Eq Set (F A) (G A)
{ cong F .F A (refl .(Set -> Set) .F) = refl Set (F A)
}
fun castLR : (A : Set) -> (B : Set) -> Eq Set A B -> A -> B
{ castLR A .A (refl .Set .A) a = a
}
fun castRL : (A : Set) -> (B : Set) -> Eq Set A B -> B -> A
{ castRL A .A (refl .Set .A) a = a
}
data I (F : Set -> Set) : Set {}
fun injI : (F : Set -> Set) -> (G : Set -> Set) ->
  Eq Set (I F) (I G) -> Eq (Set -> Set) F G
{ injI F .F (refl .Set .(I F)) = refl (Set -> Set) F
}
data InvI (A : Set) : Set
{ inv : (X : Set -> Set) -> Eq Set (I X) A -> InvI A
}
let EM : Set
  = (A : Set) -> Or A (A -> Empty)
fun em : EM {}  -- postulate excluded middle
fun cantor : Set -> Set
{ cantor A = case (em (InvI A))
    { (inl .(InvI A) .(InvI A -> Empty) (inv .A X p)) -> X A -> Empty
    ; (inr .(InvI A) .(InvI A -> Empty) bla) -> Unit
    }
}
{- now define (modulo casts and irrelevant cases)
   no  = \ f -> f f
   yes = \ f -> f f
-}
let no : cantor (I cantor) -> Empty
  = \ f -> case (em (InvI (I cantor)))
    { (inl .(InvI (I cantor)) .(InvI (I cantor) -> Empty)
         (inv .(I cantor) X p)) ->
        f (castRL (X (I cantor))
                  (cantor (I cantor))
                  (cong X cantor (I cantor) (injI X cantor p))
                  f)
    ; (inr .(InvI (I cantor)) .(InvI (I cantor) -> Empty) g) ->
        g (inv (I cantor) cantor (refl Set (I cantor)))
    }
let yes : cantor (I cantor)
  = case (em (InvI (I cantor)))
    { (inl .(InvI (I cantor)) .(InvI (I cantor) -> Empty)
         (inv .(I cantor) X p)) ->
        \ f -> (castLR (X (I cantor))
                       (cantor (I cantor))
                       (cong X cantor (I cantor) (injI X cantor p))
                       f) f
    ; (inr .(InvI (I cantor)) .(InvI (I cantor) -> Empty) g) -> unit
    }
eval let omega : Empty
  = no yes

"Resurrection" of Irrelevant Erased Parts
data Sigma (A : Set) (B : A -> Set) : Set
{ pair : (fst : A) -> (snd : B fst) -> Sigma A B
}
data Nat : Set
{ zero : Nat
; succ : Nat -> Nat
}
data Empty : Set
{
}
-- eliminator for Empty: zero cases
fun abort : (A : Set) -> Empty -> A
{
}
-- magic is ok, since all inhabitants of Empty are equal
let magic : (A : Set) -> [Empty] -> A
  = \ A -> \ p -> abort A p
data Unit : Set
{ unit : Unit
}
fun Vec : (A : Set) -> (n : Nat) -> Set
{ Vec A zero     = Empty
; Vec A (succ n) = Sigma A (\ z -> Vec A n)
}
fun Leq : (n : Nat) -> (m : Nat) -> Set
{ Leq zero     m        = Unit
; Leq (succ n) zero     = Empty
; Leq (succ n) (succ m) = Leq n m
}
-- lookup A n m p v does not depend on the proof p : m < n
fun lookup : (A : Set) -> (n : Nat) -> (m : Nat) ->
  [Leq (succ m) n] -> Vec A n -> A
{ lookup A zero     m        p v = abort A p
; lookup A (succ n) zero     p v = fst A (\ z -> Vec A n) v
; lookup A (succ n) (succ m) p v = lookup A n m p (snd A (\ z -> Vec A n) v)
}

Eta-expansion for Propositional Equality
data Id (A : Set) (a : A) : A -> Set
{ refl : Id A a a
}
fun subst : (A : Set) -> (a : A) -> (b : A) -> Id A a b ->
  (P : A -> Set) -> P a -> P b
{ subst A a .a (refl .A .a) P x = x
}
-- this demonstrates eta expansion at the identity type
let bla : (A : Set) -> (a : A) -> (p : Id A a a) ->
  (P : A -> Set) -> (x : P a) ->
  Id (P a) x (subst A a a p P x)
  = \ A -> \ a -> \ p -> \ P -> \ x -> refl (P a) x

Case over Bool
data Nat : Set
{ zero : Nat
; suc  : Nat -> Nat
}
data Bool : Set
{ true  : Bool
; false : Bool
}
fun leq : Nat -> Nat -> Bool
{ leq zero    n       = true
; leq (suc m) zero    = false
; leq (suc m) (suc n) = leq m n
}
data Id (A : Set) (a : A) : A -> Set
{ refl : Id A a a
}
fun True : Bool -> Set
{ True b = Id Bool b true
}
let triv : True true
  = refl Bool true
fun False : Bool -> Set
{ False b = Id Bool b false
}
let triv' : False false
  = refl Bool false
fun leFalse : (n : Nat) -> (m : Nat) -> False (leq n m) -> True (leq m n)
{ leFalse n       zero    p = triv
; leFalse (suc n) (suc m) p = leFalse n m p
-- ; leFalse zero (suc m) ()   IMPOSSIBLE
}
data SList : Nat -> Set
{ snil  : SList zero
; scons : (m : Nat) -> (n : Nat) -> True (leq n m) -> SList n -> SList m
}
fun max : Nat -> Nat -> Nat
{ max n m = case leq n m
    { true  -> m
    ; false -> n
    }
}
fun maxLemma : (n : Nat) -> (m : Nat) -> (k : Nat) ->
  True (leq n k) -> True (leq m k) -> True (leq (max n m) k)
{ maxLemma n m k p q = case leq n m
    { true  -> q
    ; false -> p
    }
}
fun insert : (m : Nat) -> (n : Nat) -> SList n -> SList (max n m)
{ insert m .zero snil = scons m zero triv snil
; insert m .n (scons n k p l) = case leq n m
    { true  -> scons m n triv (scons n k p l)
    ; false -> scons n (max k m) (maxLemma k m n p (leFalse n m triv'))
                 (insert m k l)
    }
}
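With the proofs and indices erased, insert is ordinary insertion into a descending sorted list. A Haskell sketch (insertD is a hypothetical name for the erased insert):

```haskell
-- proof- and index-erased insertion into a descending sorted list
insertD :: Int -> [Int] -> [Int]
insertD m []      = [m]
insertD m (n : l)
  | n <= m        = m : n : l          -- leq n m true: m goes in front
  | otherwise     = n : insertD m l    -- keep n, insert further down
```

The dependent version additionally certifies, via SList and maxLemma, that the result is sorted with the correct bound max n m.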

Sized coinductive types
This is a MiniAgda implementation of the Hamming stream, which is
productive by definition.
-- Nat --
data Nat : Set
{ zero : Nat
; succ : Nat -> Nat
}
fun add : Nat -> Nat -> Nat
{ add zero     = \ y -> y
; add (succ x) = \ y -> succ (add x y)
}
let double : Nat -> Nat
  = \ n -> add n n
let triple : Nat -> Nat
  = \ n -> add n (double n)
fun leq : Nat -> Nat -> (C : Set) -> C -> C -> C
{ leq zero     y        C tt ff = tt
; leq (succ x) zero     C tt ff = ff
; leq (succ x) (succ y) C tt ff = leq x y C tt ff
}
-- Stream --
sized codata Stream (+ A : Set) : Size -> Set
{ cons : (i : Size) -> A -> Stream A i -> Stream A ($ i)
}
cofun map : (A : Set) -> (B : Set) -> (i : Size) ->
  (A -> B) -> Stream A i -> Stream B i
{ map .A B ($ i) f (cons A .i x xl) = cons B _ (f x) (map A B _ f xl)
}
cofun merge : (i : Size) -> Stream Nat i -> Stream Nat i -> Stream Nat i
{ merge ($ i) (cons .Nat .i x xs) (cons .Nat .i y ys) =
    leq x y (Stream Nat _)
      (cons Nat _ x (merge _ xs (cons Nat _ y ys)))
      (cons Nat _ y (merge _ (cons Nat _ x xs) ys))
}
-- Hamming function --
cofun ham : (i : Size) -> Stream Nat i
{ ham ($ i) = cons Nat _ (succ zero)
    (merge i (map Nat Nat i double (ham i))
             (map Nat Nat i triple (ham i)))
}
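Size-erased, this is the well-known Haskell definition on lazy lists. A sketch that mirrors the MiniAgda code, including its behaviour of keeping duplicates when a number is reached both by doubling and by tripling:

```haskell
-- merge of two ordered streams, as in the sized version above:
-- on a tie, the left element is emitted and the right one is kept
merge :: [Integer] -> [Integer] -> [Integer]
merge (x : xs) (y : ys)
  | x <= y    = x : merge xs (y : ys)
  | otherwise = y : merge (x : xs) ys
merge xs ys = xs ++ ys

-- the Hamming stream: 1, then the merge of its doubles and triples
ham :: [Integer]
ham = 1 : merge (map (2 *) ham) (map (3 *) ham)
```

In Haskell, productivity is again left to laziness; the sized MiniAgda definition proves it by typing.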
