Abstracts for TYPES 98


Martín Abadi

A Type System for the Java Virtual Machine

  Java is typically compiled into an intermediate language, JVML, 
  that is interpreted by the Java Virtual Machine. Because mobile 
  JVML code is not always trusted, a bytecode verifier enforces static 
  constraints that prevent various dynamic errors. Given the importance 
  of the bytecode verifier for security, however, its current descriptions 
  are inadequate. We consider the use of typing rules to describe 
  the bytecode verifier because they are more precise than prose, 
  clearer than code, and easier to reason about than either. We explore 
  the viability of this approach by developing a sound type system 
  for a subset of JVML. Although this subset is small, it includes 
  two of the main sources of complexity for the bytecode verifier: 
  subroutines and object initialization. 
 
  (This talk describes joint work with Raymie Stata, and significant 
  extensions of this work due to Stephen Freund and John Mitchell.
  Part of this material has just been presented at POPL 98.)
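
To convey the flavor of such typing rules, here is one illustrative possibility (a sketch only, not the paper's exact formulation): a judgment assigns to each program point i a typing F_i of the local variables and a typing S_i of the operand stack, and each instruction constrains the types at the following point, e.g. for an instruction pushing the integer constant 0:

```latex
% Illustrative constraint, not the paper's exact rule: pushing an
% integer leaves the locals unchanged and puts int on the stack type.
P[i] = \mathit{push}\ 0
  \;\Longrightarrow\;
  F_{i+1} = F_i \;\wedge\; S_{i+1} = \mathsf{int} \cdot S_i
```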

Peter Aczel

On relating extensions of the calculus of constructions with extensions of CZF

There is an obvious interpretation of the calculus of
constructions in classical set theory ZF, with types being modelled as
sets and the type Prop being modelled as a two element set.  The same
idea for an interpretation carries over to give an interpretation in the
fully impredicative intuitionistic set theory IZF when the type Prop is
modelled as the powerset of the singleton set 1={0}.  In order to carry
over the interpretation to an interpretation in the generalised
predicative constructive set theory CZF it is necessary to add some
strongly impredicative axioms.  We consider the following axiom scheme
(P-scheme) to be added to CZF, using a new individual constant P.

P is a set of subsets of 1 that is closed under intersections of
arbitrary subclasses; i.e. for each formula A(x)

exists a in P [ 0 in a <--> forall x in P [ A(x) --> 0 in x ] ]

Now we can use P to model Prop to interpret the calculus of
constructions in the set theory CZF + (P-scheme).  Using the
construction of my type theoretic interpretation of CZF in Martin-Löf
Type Theory there is a converse interpretation that models
CZF+(P-scheme) in a suitable extension of the Calculus of Constructions.  
We will explore how to get an extension of each kind of theory so that
there are interpretations both ways.

Henk Barendregt

Capturing aspects of mathematical intuition

Formalizing proofs in order to automatically verify them is a
non-trivial task. Proof assistants help us with it interactively.
The performance of these tools will be enhanced if we know how to
capture parts of our mathematical intuition. For a modest but often
occurring class of reasoning patterns our intuition can indeed be
captured: we recognize in a mathematical context that a pattern belongs
to a formal language and have a proof that the semantics of the
linguistic expression has a certain property.

Gilles Barthe

The relevance of proof-irrelevance

The Calculus of Constructions (CC) is a powerful typed
lambda-calculus. Yet CC is minimal in the sense that the generalised
function space is its only type constructor. The minimality of CC
imposes strict limits on its applicability and has given rise to a
spate of proposals to include new term/type constructors. While most
of these calculi, which we call Generalised Calculi of Constructions
(GCCs), are known to enjoy meta-theoretical properties similar to CC
itself, there are no general techniques to guide the meta-theoretical
study of GCCs.


We develop a general technique, inspired by proof-irrelevance,
for the meta-theoretical study---especially as regards strong
normalisation and consistency results---of GCCs. In a nutshell, the
technique proceeds in three steps:

1.  we define a proof-irrelevant Calculus of Constructions
(PICC), in which all objects are identified in the conversion rule;

2. we prove that PICC enjoys essentially the same properties as CC,
including beta-strong normalisation and consistency;

3.  we define a class of sound GCCs that may be viewed as subsystems
of PICC.

By combining those steps, we conclude that sound GCCs are strongly
normalising and, under suitable conditions, consistent. We can then
show that several GCCs of interest, including the proposals mentioned
above, are sound.

Marc Bezem

Using Otter in Coq

The proof generation capabilities of proof construction systems such
as Coq, based on type theory, could still be improved.
Resolution-based theorem provers such as Otter are more powerful in this
respect, but have the drawback that they work with normal forms of
formulas, so-called clausal forms. Clauses are prenex-Skolem-conjunctive
normal forms, which are not exactly logically equivalent to the
original formulas. This makes resolution proofs hard to read and
understand, and makes navigating the theorem prover through
the search space very difficult. Type theory is much better in this
respect, but its proof generation capabilities suffer from the small
granularity of the inference steps and the correspondingly astronomical
size of the search space. Typically, one hyperresolution step requires
a few dozen inference steps in type theory. The idea of this project
is to combine the best of both worlds.
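
For instance (a standard example, not drawn from the talk itself), Skolemization replaces an existential quantifier by a fresh function symbol; the result is equisatisfiable with, but not logically equivalent to, the original formula:

```latex
\forall x\, \exists y\, P(x,y)
  \quad\rightsquigarrow\quad
  P(x, f(x))
% a clause with implicitly universal variable x and Skolem function f
```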

Adriana Compagnoni

Decidability of Higher-Order Subtyping via Logical Relations

This is joint work with Healfdene Goguen.

  This paper uses logical relations for the first time to study the
  decidability of type-checking and subtyping.  The result is proved
  for F-Omega-Sub, a language with higher-order subtyping and bounded
  operator abstraction not previously known to be decidable.  The
  proof is via an intermediate system called a typed operational
  semantics, leading to a powerful and uniform technique for showing
  metatheoretic results of type systems with subtyping, such as strong
  normalization, subject reduction and decidability of subtyping.

  This work shows how techniques originally developed to study
  dependent type theories, namely Typed Operational Semantics, can be
  adapted to study a higher order calculus with subtyping of interest 
  to the programming languages community.

David Delahaye

Information retrieval in a Coq proofs library using types modulo isomorphisms

We propose a method to search for an identifier in a Coq proofs library using
its type (here, its full specification) as the search pattern. The method is
based on the notion of type isomorphism developed within the framework of
functional programming. We introduce a theory characterizing a certain class
of Coq isomorphisms; it generalizes the axiomatization of isomorphisms for
the typed lambda-calculus associated with Cartesian Closed Categories, taking
polymorphism and dependent types into account. We then show some basic
properties, such as soundness, non-completeness (with respect to the calculus
of inductive constructions), and decidability, and underline the necessity of
propagating the conversion terms through the isomorphism proofs (up to
closure under contexts). Thus, to be highly efficient, our implementation
characterizes the congruent subset of this theory, which makes it possible to
dispense with the converters.
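
For illustration, here are classic isomorphisms from the Cartesian Closed Category axiomatization that such a search must identify (standard examples, not the paper's full theory):

```latex
A \times B \;\cong\; B \times A
\qquad
(A \times B) \to C \;\cong\; A \to (B \to C)
\qquad
A \to (B \times C) \;\cong\; (A \to B) \times (A \to C)
```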

Jean Duprat

Proof of correctness of Mazoyer's solution to the firing squad synchronization problem

Given a finite - but arbitrarily long - line of identical finite-state
machines, all but one in the quiescent state at time 0, the firing squad
synchronization problem consists in defining the states and transitions of
these machines so that, at some time t, all the machines reach the same state
for the first time. J. Mazoyer gave a minimal-time solution with only 6 states
in 1987, with a long paper proof of correctness based upon discrete
geometrical considerations.
This proof has been fully translated into Coq and checked. This work
invites reflection on the benefits that proof assistants bring to discrete
geometry research.

Andrzej Filinski

Logical Relations in a Logical Framework

The Edinburgh Logical Framework does not directly admit definition
or reasoning by induction on terms over a signature.  A simple
modularity extension with explicit realizations of signatures does
allow a limited form of such induction, but still falls short for
reasoning about signatures representing languages with non-trivial
binding structure.  With a further generalization of realizations,
however, it becomes possible to represent a wide variety of standard
logical-relations arguments (e.g., normalization proofs or
equivalence of semantics) as formal objects within the framework.
The construction has both a simple syntactic characterization and a
natural interpretation in terms of Kripke models.

Daniel Fridlender

A Lambda Calculus Model of Martin-Löf's Theory of Types with Explicit Substitution

The aim is to define a proof-irrelevant model of Martin-Löf's theory
of types with explicit substitution, that is, a model in the style of
[Smi88] in which types are interpreted as truth values and objects (or
proofs) are irrelevant.

The main difference here is the need to cope with a formal system
which in addition to types has sets and substitutions.  This
difference leads to a reformulation of the model which consists in
defining an interpretation in terms of the untyped lambda calculus.
From this interpretation a proof-irrelevant model and a realizability
model are obtained as particular cases.


reference:

[Smi88] Jan Smith, "The Independence of Peano's Fourth Axiom from
Martin-Löf's Type Theory without Universes". Journal of Symbolic
Logic, Vol. 53, 1988.

Jean-Christophe Filliatre

Proof of Imperative Programs in Type Theory

  Proofs of correctness of imperative programs are traditionally done
  in first order frameworks derived from Hoare logic~\cite{Hoare69}.
  On the other hand, correctness proofs of purely functional programs
  are almost always done in higher order logics. In particular,
  realizability~\cite{Kleene52} allows the extraction of correct functional
  programs from constructive proofs of existential formulae.  In this
  paper, we establish a relation between these two approaches and show
  how proofs in Hoare logic can be interpreted in type theory,
  yielding a translation of imperative programs into functional ones.
  Starting from this idea, we propose an interpretation of correctness
  formulae in type theory for a programming language mixing imperative
  and functional features. One consequence is a good and natural
  solution to the problems of procedures and side-effects in
  expressions.
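
One common way to sketch the interpretation (an illustration in our own notation, not necessarily the paper's exact definition) is to read a Hoare triple as a dependent function type from states satisfying the precondition to a value and final state together with a proof of the postcondition:

```latex
\{P\}\; e\; \{Q\}
  \;\;\leadsto\;\;
  \Pi s : \mathit{state}.\; P(s) \to
    \Sigma\, (v, s') : \mathit{val} \times \mathit{state}.\; Q(s, v, s')
```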


Healfdene Goguen

An abstract formulation of memory management

We present an abstract model of computer memory, in the form of
directed, labelled graphs with simple transitions corresponding to
valid memory operations. We also introduce a refinement of this basic
model of memory that incorporates features of memory management. After
giving some examples of how this refined model captures various
garbage collection algorithms, we discuss a proof that the two
representations are behaviorally equivalent.

  This will be an informal presentation of proofs that have been
carried out in the theorem prover Coq.

  The talk is an overview of work on memory management by the LEGO
project, including John Longley, Rod Burstall, Paul Jackson and me,
together with Richard Brooksby and David Jones at Harlequin Ltd.

Jean Goubault-Larrecq

On the Simplicial Structure of S4 Proofs

It is not surprising (and is now well-known) that
minimal S4 logic, just like
intuitionistic logic, has Curry-Howard interpretations.  More
surprising is the fact that proof-terms for S4, modulo
convertibility, form a simplicial set.  Contrary to most
simplicial sets arising from algebraic topology, it does not
satisfy the Kan extension condition.  However, homotopy has a
natural interpretation: while conversion allows us to equate
any S4 proof with a cut-free one, homotopy provides additional
conversion rules between proofs, which arise quite naturally
in the study of cut-elimination but are hard to code as
reduction rules.  This is conjectured to extend (easily) to
linear logic proof nets.

Samin Ishtiaq

A formal semantics of the $\lambda\Lambda$-calculus type theory.

We describe a formal semantics of the $\lambda\Lambda$-calculus, a system of
first-order linear dependent function types which uses a function
$\critical$ to describe the degree of sharing of variables between
functions and their arguments. The semantics is motivated by the
functorial Kripke semantics of the intuitionistic $\lambda\Pi$-calculus.

Florian Kammueller

Modular Structures as Dependent Types in Isabelle

This paper describes the construction of dependent types in the theorem prover Isabelle.
We use Isabelle's higher order logic augmented with set theory. The dependent types
are used to represent modular structures, as a semantic embedding of a module concept
designed for abstract algebraic reasoning.
Abstract algebraic structures are represented in a one-to-one manner by dependent types.
They remain first-class citizens of the logic.

Application examples drawn from abstract algebra and lattice theory
(the full version of Tarski's fixpoint theorem) validate the concept.

Pierre Leleu

A Modal Lambda Calculus with Iteration and Case Constructs

An extension of the simply-typed lambda calculus allowing
iteration and case reasoning over terms defined by means of higher order
abstract syntax has recently been introduced by Joelle Despeyroux, Frank
Pfenning and Carsten Schuermann.  This delicate combination is achieved
with the help of the operator `Box' of modal logic IS4.  Here we give a
new presentation of their system, with reduction rules, instead of
evaluation judgments, that compute the canonical forms of terms.  Our
presentation is based on a modal lambda-calculus that is better from the
user's point of view and more concise, and we do not impose a particular
reduction strategy during the computation. Our system enjoys
decidability of typability, soundness of typed reduction with respect to
the typing rules, and the Church-Rosser and strong normalization
properties. Finally, it is a conservative extension of the simply-typed
lambda calculus.

Luigi Liquori

On Object Extension and Subtyping

The last few years have seen the development of statically
typed object-based (also called prototype-based) programming
languages. Two proposals, namely Abadi & Cardelli's and Fisher,
Honsell & Mitchell's, have focused the attention of the scientific
community on object calculi, as a foundation for more traditional
class-based calculi and as an original and safe style of programming.
In this talk, we apply four type systems to a functional object-based
calculus, which is very close to the Lambda Calculus of Objects of
[FHM-94]: 
(a) the original type system of [FHM-94]; 
(b) the type system of Fisher's PhD thesis [Fish96]; 
(c) Bruce's {\em Matching-based} type system of [BB-97]; 
(d) the {\em Bounded Polymorphic-based} type system of [Liquori]; 
and we then compare these type systems with respect to
the following points: 
- Curry style versus Church style; 
- Static Type Checking versus Run Time Type Checking; 
- Explicit versus Implicit polymorphism; 
- Covariance versus contravariance of {\tt mytype}; 
- Big-step versus Small-step Semantics; 
- Binary Methods; 
- Subtyping.



Zhaohui Luo

Coercive Subtyping

We shall give an introduction to Coercive Subtyping, as a general
approach to subtyping and inheritance for dependent type theories.  We
shall consider coercive subtyping in comparison with other approaches
to subtyping and inheritance, its implications with respect to
proof-theoretic treatment of subtyping, and its possible applications.
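
The basic idea can be sketched by a coercive application rule (a simplified form, for illustration): when A is a subtype of B via a coercion c, an object of A may be supplied where B is expected, the coercion being inserted automatically:

```latex
\frac{\Gamma \vdash f : (x{:}B)C \qquad
      \Gamma \vdash a : A \qquad
      \Gamma \vdash A \leq_c B}
     {\Gamma \vdash f(a) = f(c(a)) : [c(a)/x]C}
```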

Ralph Matthes

Full primitive recursion on monotone inductive types

We study lambda calculi with monotone inductive types. Beta-reduction
is extended by primitive recursion and iteration. Monotonicity is
expressed by a term of a type stating monotonicity. There are two
essentially different approaches to defining the term systems. Both
systems are strongly normalizing, which may be shown by embeddings into
a variant of Mendler's LICS 1987 system and into a "dual" of this
variant. In the presence of impredicative quantification, an embedding
of monotone inductive types into non-interleaved (strictly) positive
and into non-interleaved generally positive inductive types,
respectively, is possible.
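
The difference between the two recursion schemes mentioned above can be sketched as follows (in Python, for illustration only; the talk's setting is typed lambda calculi, where these schemes come with strong normalization guarantees that Python cannot express):

```python
# Iteration vs. primitive recursion on natural numbers, represented
# as nested tuples ("zero",) / ("succ", n).

def iteration(z, s, n):
    """Iteration: the step s sees only the result of the recursive call."""
    if n[0] == "zero":
        return z
    return s(iteration(z, s, n[1]))

def primrec(z, s, n):
    """Primitive recursion: the step also sees the predecessor itself."""
    if n[0] == "zero":
        return z
    return s(n[1], primrec(z, s, n[1]))

def num(k):
    """Build the numeral for k."""
    n = ("zero",)
    for _ in range(k):
        n = ("succ", n)
    return n

# Predecessor is immediate with primitive recursion ...
def pred(n):
    return primrec(("zero",), lambda m, _rec: m, n)

# ... while with pure iteration it needs the well-known pairing trick:
# compute (n, n-1) by iteration, then project the second component.
def pred_iter(n):
    pair = iteration((("zero",), ("zero",)),
                     lambda p: (("succ", p[0]), p[0]), n)
    return pair[1]
```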

Conor McBride

Towards Dependent Pattern Matching in LEGO

Pattern matching in ALF provides a concise and intuitive formalism for
programs and proofs. I shall demonstrate work in progress to implement
this style of development for LEGO, justified by the existing
elimination rules for inductive families and the additional assumption
that identity proofs are unique.

I shall exhibit a general tactic for applying these rules to
subfamilies by introducing equational constraints.  Uniqueness of
identity proofs is then sufficient to permit the solution of these
constraints for coverings of first-order constructor patterns, even in
the presence of dependent types.
    

Marino Miculan

Higher-order encoding of the $\pi$-calculus in (Co)Inductive Type Theories

We investigate the applicability of HOAS in dealing both with the
theory and the metatheory of process calculi.
In particular we discuss the representation of the $\pi$-calculus in the
inductive and coinductive higher-order dependent type theories of
CC-(Co)Ind.
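
The essence of HOAS can be illustrated with the untyped lambda calculus (our sketch in Python, not the talk's pi-calculus encoding): object-level binders are represented by meta-level functions, so alpha-conversion and substitution are inherited from the metalanguage.

```python
class Lam:
    """A lambda abstraction, wrapping a Python function Term -> Term."""
    def __init__(self, body):
        self.body = body

class App:
    """An application node."""
    def __init__(self, fun, arg):
        self.fun, self.arg = fun, arg

def whnf(t):
    """Weak head normalization: beta-reduction is meta-level application."""
    while isinstance(t, App):
        f = whnf(t.fun)
        if isinstance(f, Lam):
            t = f.body(t.arg)   # substitution comes for free
        else:
            return App(f, t.arg)
    return t
```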

Martijn Oostdijk

Proof Automation in Type Theory by Internalization

In this talk we introduce a method to automatically prove statements
within a type theoretical proof assistant. The method lifts a statement
(A: Prop) to the syntactical level (p_A: form), from where it can be translated
back to the original statement and to a boolean expression. (That is, we have 
mappings E: form->Prop and C: form->bool such that (E p_A) is convertible to
A.)
By using inductive types to implement the syntactical statements (the
type form), a correctness proof can be constructed by induction, stating that
the boolean expression equals true, if and only if the original statement
holds.
(That is, we have a proof of \forall x: form.(E x)<->((C x)=true).) For
a concrete expression a: form, the boolean expression (C a) will just
reduce to true, so a proof of the equality (C a)=true is trivial by the conversion rule.
Combining this with the correctness proof yields a proof of the original
statement.  Two examples are given in the system Coq: a tautology prover for
"classical first order propositional logic" and a method to prove statements
from "primitive recursive arithmetic".
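
The computational half of the scheme can be sketched as follows (in Python rather than Coq, with hypothetical names; in the actual method the checker C is written inside type theory and accompanied by the correctness proof \forall x: form.(E x)<->((C x)=true)):

```python
# A syntactic type of formulas ("form") and a boolean evaluator playing
# the role of C : form -> bool. Formulas are nested tuples:
# ("var", i), ("neg", p), ("and", p, q), ("or", p, q), ("imp", p, q).
from itertools import product

def eval_form(v, f):
    """Evaluate formula f under a truth assignment v (dict int -> bool)."""
    tag = f[0]
    if tag == "var":
        return v[f[1]]
    if tag == "neg":
        return not eval_form(v, f[1])
    if tag == "and":
        return eval_form(v, f[1]) and eval_form(v, f[2])
    if tag == "or":
        return eval_form(v, f[1]) or eval_form(v, f[2])
    if tag == "imp":
        return (not eval_form(v, f[1])) or eval_form(v, f[2])
    raise ValueError(f)

def variables(f):
    """The set of variables occurring in f."""
    if f[0] == "var":
        return {f[1]}
    return set().union(*(variables(g) for g in f[1:]))

def taut(f):
    """The decision procedure: true iff f holds under every assignment."""
    vs = sorted(variables(f))
    return all(eval_form(dict(zip(vs, bits)), f)
               for bits in product([False, True], repeat=len(vs)))
```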

Henrik Persson

Groebner Bases in Type Theory

We describe how the theory of Gröbner bases, an important part of
computational algebra, can be developed inside Martin-Löf's type
theory. In particular, we aim for an {\em integrated} development of
the algorithms for computing Gröbner bases, i.e. we want to prove
constructively inside type theory the existence of Gröbner bases.
One novelty is that we use short constructive proofs of Dickson's
lemma and Hilbert's basis theorem which are extracted from classical
proofs using the techniques of [Coquand 92] based on open induction
[Raoult 88].

Frank Pfenning

Automated Deduction in a Simple Meta-Logic for LF

Higher-order representation techniques allow elegant encodings of
logics and programming languages in the logical framework LF, but
unfortunately they are fundamentally incompatible with induction
principles needed to reason about them.  In this talk we sketch a
simple meta-logic M2 which allows inductive reasoning over such LF
encodings, and describe its implementation in Twelf, a special-purpose
automated theorem prover for properties of logics and programming
languages.  We have used Twelf to automatically prove a number of
non-trivial theorems, including type preservation for Mini-ML and the
deduction theorem for intuitionistic propositional logic.


Olivier Pons

Dependencies study in the Coq System

In this paper we work in the area of theory management. We study
the dependencies between theorems and definitions in a theory and how
they can be exploited in the development and maintenance of theories.
We present some problems that we have met in our implementation of theory
management tools for the Coq system, and a new algorithm for interactive
propagation of modifications.

Anton Setzer

A closed formalization for inductive recursive definitions

Peter Dybjer has introduced the concept of inductive-recursive definitions in
order to formalize the general pattern behind all constructions for introducing
new sets in Martin-Löf's Type Theory. Apart from some definitional
extensions, which do not increase the strength of the theory, it seems that
this concept covers everything that is at the moment regarded as predicatively
justified by the majority of researchers in intuitionistic type theory, and
even without definitional extensions one directly obtains a large portion
of the sets which are actually needed.
We will give a complete closed formalization of inductive-recursive
definitions, which makes it possible to implement and to analyze them.
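
The paradigmatic example of an inductive-recursive definition is Dybjer's universe construction (sketched here in our own notation): a set U of codes is generated inductively while a decoding family T is defined by recursion on U, each referring to the other:

```latex
\hat{N} : U
  \qquad T(\hat{N}) = N
\\[0.5em]
\frac{a : U \qquad b : T(a) \to U}
     {\hat{\Pi}(a, b) : U}
  \qquad T(\hat{\Pi}(a, b)) = \Pi x{:}T(a).\; T(b(x))
```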


Gunnar Stålmarck

Industrial applications of propositional logic

During the last five years a methodology has been developed to
verify railway interlocking software in a joint project between
Logikkonsult NP AB and Adtranz Signal (formerly ABB Signal). The methodology
includes the modelling of generic requirements on rail yards and interlocking
software in propositional logic, and automated theorem proving. Two tools
have been developed that are used in the regular development and
verification process at Adtranz Signal.

The methodology will be presented at the user level, together with its formal
basis, including the principles underlying the theorem proving procedure.

Martin Strecker

Proof search in fragments of Type Theory

The starting point of this talk is a calculus with metavariables which are
'secure' in the sense that they cannot be instantiated with type-incorrect
terms. This calculus (in a traditional natural-deduction style presentation)
is transformed into a sequent calculus which is appropriate for proof
search. It is shown how the mechanisms that ensure type-correctness of
metavariable instantiations make it possible to deal with the eigenvariable
conditions of the sequent calculus.

Thomas Streicher

A Topos for Computable Analysis

Recently D. Scott has advocated in various lectures the use of the category
${\mathsf PER}({\cal P}\omega)$ as a category into which most classical spaces
embed in a full and faithful way. It appears as the category of modest sets
in the realisability topos over the lambda calculus model ${\cal P}\omega$.\\
Accordingly, all functions from the unit interval $[0,1]$ to the reals are
(uniformly) continuous. This fails in the effective version of the model where
${\cal P}\omega$ is replaced by $RE$. This failure can be traced back to the 
existence of the so-called {\em Kleene tree\/}, an unbounded binary tree which
is well-founded w.r.t.\ recursive paths.\\
We define a new topos ${\cal N}$ which is constructed in analogy to 
$RT({\cal P}\omega)$, the realisability topos over ${\cal P}\omega$ but 
``where all realisers have to be effective'', i.e.\ have to live in $RE$, 
together with a logical morphism from ${\cal N}$ to $RT({\cal P}\omega)$.
Moreover, $RT(RE)$ appears as a {\em quotient\/} of ${\cal N}$, 
i.e. as a topos of coalgebras on ${\cal N}$ 
for some modality operator $\#$ on ${\cal N}$ which
expresses ``restriction to computable elements''.\\
Thus, in ${\cal N}$ all morphisms are computable but the types contain 
nonstandard, i.e.\ non-effective, elements. The existence of the latter avoids
pathologies such as non-continuous maps from $[0,1]$ to the reals. Nevertheless,
all functions constructed by the internal logic of ${\cal N}$ are
effective. 

Alastair Telford

Guarded Definitions in ESFP

It is our aim to produce an elementary strong functional programming
(ESFP) system. In such a system we need to ensure that function
definitions are well-defined; that recursive definitions (over
finite structures, which we call data) are strongly normalising
and co-recursive definitions (over infinite structures, codata) are
productive (i.e. the projection of any data element from the codata
structure is strongly normalising). Moreover, since the language has to
be elementary, these checks should be performed automatically and not
presented as a proof obligation upon the programmer.

We have extended, using abstract interpretation techniques, Coquand's
check for guardedness that has been used with definitions over
co-inductive types in Martin-Löf's type theory, in order to establish
the productivity of functions in ESFP. The dual analysis can
then be used to check whether recursive function definitions are
strongly normalising. This extends work by Gimenez in the calculus of
constructions. We believe that this work thus places fewer restrictions
on programmers in constructing definitions within a strongly normalizing
functional language.
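
The guardedness idea can be illustrated in Python (our sketch; ESFP is a different language, and its abstract-interpretation analysis is more general than a syntactic check): a productive corecursive definition emits a constructor before each corecursive call, so projecting any finite prefix of the infinite structure terminates.

```python
from itertools import islice

def nats_from(n):
    """Guarded corecursion: a constructor (yield) precedes the call."""
    yield n                        # the "guard": an element is produced ...
    yield from nats_from(n + 1)    # ... before the next corecursive step

# An unguarded definition would recurse without ever yielding;
# projecting even its first element would not terminate.
```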

Silvio Valentini

The formal topology induced on closed subsets

After the introduction of formal topologies, which are
a constructive, i.e. intuitionistic and predicative,
approach to topology, fully developed inside Martin-Löf's
intuitionistic type theory, the problem of dealing with
closed subsets remained open for a long time. In fact, while
it was immediately obvious that the straightforward identification
of closed subsets with the complements of the open sets cannot
work in an intuitionistic framework, it was only the recent
introduction of the notion of basic pair by Sambin and
Gebellato which made it possible to deal with
closed subsets in a formal way, and hence within
intuitionistic type theory.
The problem which was not solved in the Sambin-Gebellato approach
was that some conditions, which are naturally valid in any basic pair,
could not be proved. Moreover, while they were able to find the
correct definitions of the formal counterpart of the notion
of closed subset, they had no general topological explanation
of their meaning.
In my talk I want to recall the main definitions concerning
basic pairs and then show that a natural generalization
of the notion of cover relation is all that is needed in
order to give them a very natural explanation in
terms of the topology induced on a closed subset. After these
definitions I will show that most of the known results
for formal topologies can easily be adapted to this new
framework.


Bernhard Reus (19.03.98)