THEMES


- Coordinator: D. Remy
- Description:
Real-world software projects are hampered by the growing complexity of the code involved, which increases both development and maintenance costs. Software engineers typically employ some form of program structuring to tackle this problem: large systems are built from smaller code components, which are parameterized, either explicitly or implicitly, so that they can be specialized or refined while being reused. To ensure safety, every unit must specify precisely its accessible features and their properties through an interface. Units can then be implemented and modified independently of one another while preserving their interfaces; they can also be safely assembled by checking their interfaces alone.
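As a minimal illustration of this discipline (the names COUNTER, Counter and Twice are ours, chosen only for the example), here is an OCaml sketch of a unit with an explicit interface and a parameterized client that is checked against that interface alone:

    (* The interface: the only facts a client may rely on. *)
    module type COUNTER = sig
      type t
      val zero : t
      val step : t -> t
      val read : t -> int
    end

    (* One possible implementation; it may change freely as long as it
       still matches COUNTER. *)
    module Counter : COUNTER = struct
      type t = int
      let zero = 0
      let step n = n + 1
      let read n = n
    end

    (* A parameterized unit (functor): it is type-checked against the
       interface COUNTER alone, so any conforming implementation can be
       plugged in later. *)
    module Twice (C : COUNTER) = struct
      let two = C.read (C.step (C.step C.zero))
    end

    module T = Twice (Counter)
    let () = assert (T.two = 2)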

However, this general approach still lacks powerful mechanisms both for building separate units by combining several forms of parameterization (abstraction, subtyping, recursion, object identity), which so far are well understood only in isolation, and for assembling such units. Moreover, beyond well-typedness, which has been the focus of most previous work, reasoning principles, mechanical proofs, and analyses for object-oriented programs are still to be settled. Our aim is to tackle these shortcomings and develop a better understanding of program structuring. We hope to bring together researchers in various areas (modules, objects, types, analyses), ultimately aiming to transfer our results to real-world programming languages through our industrial partners. We have identified the following three main topics for research.
Bringing modules and objects closer will be tackled by studying, in the first instance, several extensions of up-to-date module and object-oriented systems: (1) polymorphic extensions of structures, (2) mixing of modules, (3) packaging, and (4) overloading. We then plan a comparative study of the resulting approaches, with the aim of developing a unified framework.

Mechanical proofs and analyses for object-oriented programs will include the verification and inference of units' interfaces. We shall also develop lightweight core object calculi (containing e.g. assignments, block structure, and object identity) into which we can compile higher-level object-oriented languages.

Reasoning principles for object-oriented languages will be developed. This includes adapting reasoning techniques, such as operational characterizations of behavioral equivalences, now well-understood for functional calculi, to our core object calculi. In particular, we shall explore semantic methods using recently-developed resource models. One important application is to use these principles to reason about compilation and linking.
- Coordinator: T. Coquand
- Description:
The unifying theme of this research strand is the Curry-Howard analogy between proofs and programs, and its refinement due to Martin-Löf (Stockholm), Nordström, and Smith (Chalmers), which reveals a close correspondence between key concepts in functional programming (constructors, evaluation to weak-head normal form, program, ...) and key notions in proof theory (proof in introduction form, reduction to introduction form, admissible rule, ...). This correspondence enables a number of applications to programming and computer-aided theorem proving.
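To make the analogy concrete, here is a small OCaml sketch (the identifiers are purely illustrative): reading function types as implications and pairs as conjunctions, a well-typed term is a proof of the proposition spelled out by its type.

    (* "A implies A": the identity function is the proof. *)
    let refl : 'a -> 'a = fun x -> x

    (* "If A implies B and B implies C, then A implies C":
       function composition is the proof. *)
    let trans : ('a -> 'b) -> ('b -> 'c) -> ('a -> 'c) =
      fun f g x -> g (f x)

    (* Conjunction as a pair: "A and B implies B and A". *)
    let comm : 'a * 'b -> 'b * 'a = fun (a, b) -> (b, a)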

The work to be carried out within APPSEM-II under this theme is structured around the following topics:
  • Termination of functional programs and of proofs
  • Subtyping
  • Record types
  • Constructor subtyping
  • Typing of object-oriented languages
  • Polytypic programming
  • Normalisation of terms with variables
  • Continuation-passing style
  • Dependent types in programming
- Coordinator: N. Jones
- Description:
APPSEM-I already gathered substantial expertise in program analysis and transformation. The following topics include areas in which previous accomplishments will be further advanced under APPSEM-II, plus several promising new directions that lie squarely within this area:
  • Aspect-oriented programming
  • A calculus for meta-computations
  • Compiling
  • Continuations (a small continuation-passing-style sketch follows this list)
  • Efficient programs from proofs
  • Normalisation by evaluation
  • Partial evaluation and program generation
  • Programming the temporal dimension of computation
  • Resource boundedness and non-interference
  • Taxonomy and forestation within the world of programs
  • Termination analysis
  • Type specialisation
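As a small point of reference for the continuations topic above (the factorial example is ours, chosen only for illustration), here is a direct-style function next to its continuation-passing-style version in OCaml:

    (* Direct style. *)
    let rec fact n = if n = 0 then 1 else n * fact (n - 1)

    (* CPS: the extra argument k is the continuation, i.e. "the rest of
       the computation"; every intermediate result is handed to it. *)
    let rec fact_cps n k =
      if n = 0 then k 1
      else fact_cps (n - 1) (fun r -> k (n * r))

    let () = assert (fact 5 = fact_cps 5 (fun r -> r))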
- Coordinator: U. Reddy
- Description:
An important application of programming language semantics is the development and validation of reasoning principles for programs. A close study of semantics suggests novel methods for specifying the behaviour of software systems and components, which in turn give rise to new programming logics and program verification methods. The following topics will be further investigated under APPSEM-II:
  • Reasoning principles based on the logic of Bunched Implications (Pym, O'Hearn); a standard frame-rule sketch follows this list
  • Hoare-style logics with implicit representations of store
  • Reasoning about information hiding and data refinement
  • Specification and verification using dependent types
  • Modelling languages and model-checking algorithms for probabilistic systems
  • Abstraction of liveness properties in the temporal logic CTL
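As a point of reference for the first two items, the following is the standard frame rule of separation logic, the Hoare-style logic built over the logic of Bunched Implications; it is textbook material rather than a result of this project, and the concrete triple is only an illustrative instance:

    % A local specification of C extends to any disjoint extra heap R.
    \[
      \frac{\{P\}\; C\; \{Q\}}{\{P * R\}\; C\; \{Q * R\}}
      \qquad \text{($C$ modifies no variable free in $R$)}
    \]
    % Instance: updating the cell at x leaves a disjoint cell at y untouched.
    \[
      \frac{\{x \mapsto 0\}\ [x] := 1\ \{x \mapsto 1\}}
           {\{x \mapsto 0 * y \mapsto 3\}\ [x] := 1\ \{x \mapsto 1 * y \mapsto 3\}}
    \]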
- Coordinator: F. Henglein
- Description:
Types in programming languages have, over the last 40 years, become such an integral part of programming that it is hard to isolate them from other aspects of programming and programming languages. Types are a conceptual and technical substrate in programming languages with multiple facets. (Static) type systems provide early detection, localization and prevention of errors, which nowadays is considered a condition sine qua non in complex software system construction. They provide enforced abstraction with explicit and well-defined interfaces, which is key to disciplined and predictable composition and reuse of system components. They enable reasoning --- both informally and through automated tools --- about software at a higher level, using models other than bits and bytes. Also, because they capture not only how software is constructed but also how it may be used, they provide the substrate for reasoning about systems in a modular fashion that takes the intended or possible uses into account.

Powerful type inference techniques have turned out to be an important technology that minimizes the programmer's burden of providing explicit type annotations in programs and provides useful, nontrivial feedback to programmers about the programs they or others have written. Types are also used increasingly in program analysis, since they are a natural vehicle for modular analysis of programs with higher-order features. The use of languages with solid type systems can increase --- or rather provide --- basic security guarantees on the Internet; e.g., the majority of reported security breaches could have been avoided if strongly typed programming languages had been used.

Type-related research will go on in several APPSEM themes, as well as outside APPSEM, notably in collaboration with the EU Working Group TYPES. In this theme we will focus on important aspects of core issues regarding type systems and type inference in programming:
  • How to design new type systems.
  • How to increase the scope of type systems.
  • How to increase the (logical) expressiveness of type systems and extend type inference.
Specifically, we propose to pursue the following activities:
  • Semantics-based design of type systems
  • Type and effect systems
  • Parametricity and type inference
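A minimal OCaml sketch of the last activity, parametricity and type inference (the function names are illustrative): no annotation is written, yet the most general type is inferred, and parametricity then yields a "free theorem" about every inhabitant of that type.

    (* Inference: OCaml infers ('a -> 'b) -> 'a list -> 'b list, the most
       general type, without any annotation from the programmer. *)
    let rec map f = function
      | [] -> []
      | x :: xs -> f x :: map f xs

    (* Parametricity: any r : 'a list -> 'a list can only rearrange, drop
       or duplicate elements, never invent new ones, so for all f and xs
       it satisfies  map f (r xs) = r (map f xs). *)
    let r : 'a list -> 'a list = List.rev
    let () = assert (map succ (r [1; 2; 3]) = r (map succ [1; 2; 3]))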
- Coordinator: P.L. Curien
- Description:
The semantic understanding of sequentiality has been considerably further developed during the period of the APPSEM-I project. The main tools for this purpose were various notions of games, the corresponding abstract machines, and, to some extent, realizability models. During the APPSEM-II period, we intend to develop this research further, both at a foundational level and towards applications to program verification, in the following directions:
  • Positional games played on graphs to obtain finer-grained analysis of strategies
  • Game semantics and continuations; relationships to Girard's ludics
  • Game-theoretic analysis of probabilistic computation
  • Enlarging the scope of model-checking techniques via game semantics
- Coordinator: G. Winskel
- Description:
In this theme we will develop mathematical tools for the formalization and analysis of distributed computation. We plan to meet the challenge posed by the lack of a global mathematical setting for distributed computation through the development of a domain theory for concurrency that can handle the intricate structure (e.g. event structures and name generation) and equivalences (e.g. bisimulation) of distributed computation. By moving to new semantic frameworks we can accommodate the richer structure of distributed computation, characterise its operations abstractly, and systematise operational semantics through the study of rule formats for structured operational semantics. Our aim will be the development and investigation of a process language with the following features: higher-order processes, static scoping, and dynamic creation of names. The design of the language will be based on that of existing languages but crucially guided by a denotational model, the importance of which will be established by studying its relationship to operational and logical semantics.
The models we develop are bound to suggest new type disciplines for distributed computation. Types for distributed computation will almost certainly help in developing logics for distribution (cf. themes 4 and 5). (One type-based approach could proceed by extracting logics in the manner of logic for domains.)
The challenge of security in distributed computation has recently led to event-based techniques in the verification of security protocols. These event-based proof techniques are rather low-level, however, so we will face the problem of how to support higher-level reasoning through appropriate equational theories, logics and typing judgements to ensure security properties. (In particular, symbolic model-checking approaches, including minimization algorithms for bisimilarity in the presence of name generation, will be explored.) We will work to identify and prove the relations between analyses of security protocols based on cryptography as a "black box" and those based on true cryptography and probability, connecting to Topic 9.
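To make the equivalences mentioned above concrete, here is a small OCaml sketch (the transition system and all names are illustrative) that decides strong bisimilarity on a finite labelled transition system by refining the full relation down to its greatest fixed point:

    type state = int
    type label = string
    type lts = (state * label * state) list

    (* Successors of state s under label a. *)
    let succ (t : lts) s a =
      List.filter_map (fun (p, l, q) -> if p = s && l = a then Some q else None) t

    (* All labels occurring in the system. *)
    let labels (t : lts) =
      List.sort_uniq compare (List.map (fun (_, l, _) -> l) t)

    (* (p, q) survives a refinement step iff every a-move of p is matched
       by an a-move of q into the current relation, and vice versa. *)
    let bisimilar (t : lts) (states : state list) p0 q0 =
      let forth rel p q a =
        List.for_all
          (fun p' -> List.exists (fun q' -> List.mem (p', q') rel) (succ t q a))
          (succ t p a)
      in
      let back rel p q a =
        List.for_all
          (fun q' -> List.exists (fun p' -> List.mem (p', q') rel) (succ t p a))
          (succ t q a)
      in
      let step rel =
        List.filter
          (fun (p, q) ->
             List.for_all (fun a -> forth rel p q a && back rel p q a) (labels t))
          rel
      in
      let rec fix rel =
        let rel' = step rel in
        if rel' = rel then rel else fix rel'
      in
      let top = List.concat_map (fun p -> List.map (fun q -> (p, q)) states) states in
      List.mem (p0, q0) (fix top)

    (* Two copies of the process a.b.0 are bisimilar. *)
    let () =
      let t = [ (0, "a", 1); (1, "b", 2); (10, "a", 11); (11, "b", 12) ] in
      assert (bisimilar t [0; 1; 2; 10; 11; 12] 0 10)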
- Coordinator: P. O'Hearn, P. Gardner
- Description:
This theme centres on techniques for managing resources both locally and across widely distributed systems such as the Web. We focus on three contrasting but related strands, all of which have emerged during APPSEM-I: spatial logics, implicit complexity, and resource access and security policies. We will pursue this work further in the following directions:
  • Fundamental theory behind spatial logics
  • Theory and applications of pointer logic
  • Creating query languages and programming languages for the Web
  • Developing type checking, theorem proving and model checking technologies that can be used in applications to both resource modelling and the Web
  • Applications of spatial logic to distributed and mobile code
  • Developing the resource-based approach to implicit computational complexity, and investigating its connection to the account of resource allocation and deallocation provided by spatial logics
  • Static estimation of the consumption of specific resources such as heap space, stack size, cache use, system calls
  • Static analysis of security policies
  • Relationships with aspect-oriented programming
- Coordinator: A. Jung
- Description:
The work in this topic is situated at the boundary between Computing and Physics. Traditionally, the former deals with discrete structures, while physical systems are described with real and complex numbers. The research will contribute to this interface by studying the problem of computing with exact real numbers and probabilistic system evaluation.
Regarding the former, we will build on substantial insight gained during the lifetime of APPSEM-I. This work has led to a number of semantic models which allow a closer analysis of the inherent efficiency problems of exact real computation. In APPSEM-II, we will use this knowledge to define a refined model which allows a controlled amount of intensional information to be used during computations. We thus hope to provide substantial help to groups attempting to integrate exact computation into computer algebra packages. Links already exist with St Andrews to this end, and we intend to use this connection to also reach their industrial partners specialising in computer algebra.
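As a very small illustration of the flavour of this work (the representation and all names are ours, and real implementations are considerably more refined), a real number can be represented in OCaml as a function that, for each requested precision n, returns the integer numerator of a dyadic approximation accurate to 2^-n; operations then query their arguments at slightly higher precision than they must deliver:

    (* x is represented by f with |x - f n / 2^n| <= 2^-n for every n. *)
    type real = int -> int

    let of_int (k : int) : real = fun n -> k * (1 lsl n)

    (* Addition: query both arguments at precision n+2, add, round back.
       (The rounding below is written for the non-negative case only.) *)
    let add (x : real) (y : real) : real =
      fun n -> (x (n + 2) + y (n + 2) + 2) / 4

    (* Only at the very end do we collapse to a float, at a chosen precision. *)
    let to_float (x : real) n = float_of_int (x n) /. float_of_int (1 lsl n)

    let () =
      let three = add (of_int 1) (of_int 2) in
      assert (to_float three 20 = 3.0)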
Within probabilistic system modelling and evaluation, we propose to provide a semantic foundation for the work of Kwiatkowska and her group in Birmingham on probabilistic model checking. Their model checker PRISM is being used to evaluate industrial designs, and this will give us an opportunity to reach out and transfer theoretical results to the marketplace.
