Teaching and Research Unit (LFE) for Theoretical Computer Science,
Institute for Informatics, Ludwig-Maximilians-Universität München

Oberseminar of the LFE Theoretische Informatik
A look back at past talks

Summer semester 2009

The Oberseminar "Theoretische Informatik" takes place during the semester:

Fridays, 14:15 to 16:00,
in room Z1.09, Oettingenstr. 67.

A second series of talks is introduced for the first time in the summer semester 2009. It takes place on Tuesdays, 10-12, also in room Z1.09.

For questions: lennart . beringer { at } ifi DOT lmu DOT de

Here you can subscribe to the Oberseminar announcements by e-mail.

24.04. Vivek Nigam (Ecole Polytechnique):
       Algorithmic specifications in linear logic with subexponentials

15.05. Hans-Wolfgang Loidl: EmBounded Project Summary 

22.05. No talk

26.05. Data protection
29.05. Troels Bjerre Sørensen: Computing Equilibrium Refinements

2.6. No talk (Whit Tuesday)
05.06. Vladimiro Sassone: Permission-based separation logic for message-passing concurrency

9.6. cancelled
12.06. Felix Fischer

16.6. Dulma Rodriguez: New Results on the Implementation of Resource Aware JAva (RAJA)
      Georg Klein (FoPra): ML-GLR: a parser generator for context-free grammars
19.06. No talk (Mobius annual meeting)

23.6. No talk
26.06. No talk (block seminar on program analysis)

30.6. Felix Fischer: practice talk for the Rigorosum
03.07. cancelled (Rigorosum Felix Fischer)

7.7. Edith Hemaspaandra (Rochester Institute of Technology): 
     The Shield that Never Was: Societies with Single-Peaked
     Preferences are More Open to Manipulation and Control
10.07. Robert Grabowski: Improving String Security with Sound Points-to Analyses

14.7. Lane A. Hemaspaandra: Llull's Thirteenth-Century Election System Computationally Resists Bribery and Control
17.07. Markus Brill: Computational Aspects of Shapley's Saddles

21.7. NN
24.07. No talk

Fri, 17.07., 14:15

Speaker: Markus Brill
         LFE Theoretische Informatik, LMU

Title: Computational Aspects of Shapley's Saddles

Abstract: Game-theoretic solution concepts, such as Nash equilibrium,
are playing an ever-increasing role in the study of systems of
autonomous computational agents. A common criticism of Nash
equilibrium is that its existence relies on the possibility of
randomizing over actions, which in many cases is deemed unsuitable,
impractical, or even infeasible. In work dating back to the early
1950s, Lloyd Shapley proposed ordinal set-valued solution concepts for
zero-sum games that he referred to as strict and weak saddles. These
concepts are intuitively appealing, always exist, and are unique
in important subclasses of games.

We initiate the study of computational aspects of Shapley's saddles
and provide polynomial-time algorithms for computing strict saddles in
normal-form games and weak saddles in a subclass of symmetric zero-sum
games. On the other hand, we show that a number of problems associated
with weak saddles in bimatrix games are hard. In particular, we show
that finding weak saddles is NP-hard, under polynomial-time Turing
reductions. We moreover prove that recognizing weak saddles is
coNP-complete, and that deciding whether a given action is contained
in some weak saddle is hard for parallel access to NP and thus not
even in NP unless the polynomial hierarchy collapses.

This talk is based upon the following two papers:

F. Brandt, M. Brill, F. Fischer, and P. Harrenstein. Computational
aspects of Shapley's saddles. In Proceedings of the 8th International
Joint Conference on Autonomous Agents and Multi-Agent Systems (AAMAS),
pages 209-216, 2009.

F. Brandt, M. Brill, F. Fischer, and J. Hoffmann. The computational
complexity of weak saddles. In M. Mavronicolas, editor, Proceedings of
the 2nd International Symposium on Algorithmic Game Theory (SAGT),
2009. To appear.

Tue, 14.7., 10:15

Speaker: Lane A. Hemaspaandra,
         Department of Computer Science,
         University of Rochester

Title: Llull's Thirteenth-Century Election System Computationally
Resists Bribery and Control


Abstract: Can computational complexity theory be used as a shield to
protect elections from bribery and control?  We show that an election
system developed by the 13th-century Catalan mystic Ramon Llull and
the closely related Copeland election system are both resistant to
(loosely, NP-hard with respect to) the "standard" types of
(constructive) electoral control.  This is the most comprehensive
resistance to control yet achieved by any natural,
preference-order-based election system whose winner problem is in
polynomial time.  In addition, we show that Llull and Copeland voting
are broadly resistant to bribery attacks, we show how network flows
can be used to find bribery attacks in certain settings, and we
integrate the potential irrationality of voter preferences into many
of our results.
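Copeland's rule, to which the talk relates Llull's system, is simple to state operationally: each alternative scores a point for every pairwise majority win. A minimal sketch, with hypothetical names and pairwise ties scored as half a point (one common Copeland variant; the talk's exact tie-handling may differ):

```python
from itertools import combinations

def copeland_scores(profile, alternatives):
    """profile: list of strict rankings, most preferred first."""
    scores = {a: 0.0 for a in alternatives}
    for a, b in combinations(alternatives, 2):
        # Count voters preferring a to b in the pairwise contest.
        pref_a = sum(1 for r in profile if r.index(a) < r.index(b))
        pref_b = len(profile) - pref_a
        if pref_a > pref_b:
            scores[a] += 1.0
        elif pref_b > pref_a:
            scores[b] += 1.0
        else:                       # pairwise tie: half a point each
            scores[a] += 0.5
            scores[b] += 0.5
    return scores

profile = [["a", "b", "c"], ["a", "c", "b"], ["b", "c", "a"]]
print(copeland_scores(profile, ["a", "b", "c"]))  # a wins with score 2.0
```

Computing the scores is easy; the resistance results above concern the complexity of bribing or controlling such an election, not of finding its winner.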

Joint work with Piotr Faliszewski, Edith Hemaspaandra, Joerg Rothe,
and (transitively ;-) ) Henning Schnoor.

Fri, 10.7., 14:15

Speaker: Robert Grabowski,
         LFE Theoretische Informatik, LMU

Title: Improving String Security with Sound Points-to Analyses

Many security exploits, such as cross-site scripting attacks
and code injection, are caused by the incorrect processing of
strings in programs.  An approach to tackle the problem is to
statically compute all possible values that may occur as results
of string expressions.

Existing analyses for string objects in Java usually depend on
the result of a points-to (or alias) analysis.  Existing
work in this field focuses on the algorithmic
implementation and the trade-off between precision and efficiency,
while a formal soundness result is often omitted.

In this talk, we present a type system for a Java-like language
with strings.  We show how the results of a variety of existing
points-to and string analyses can be embedded into the type system,
and how typability can be decided by an algorithm.  The type system
is proven sound, hence it can be used to verify that the result
of a points-to or string analysis is indeed correct.

Tue, 7.7., 10:15

Speaker: Edith Hemaspaandra, Rochester Institute of Technology

Title: The Shield that Never Was: Societies with Single-Peaked
Preferences are More Open to Manipulation and Control

Abstract: Much work has been devoted, during the past twenty years, to
using complexity to protect elections from manipulation and control.
Many results have been obtained showing NP-hardness shields, and
recently there has been much focus on whether such worst-case hardness
protections can be bypassed by frequently correct heuristics or by
approximations.  Our work takes a very different approach: We argue
that when electorates follow the canonical political science model of
societal preferences the complexity shield never existed in the first
place.  In particular, we show that for electorates having
single-peaked preferences, many existing NP-hardness results on
manipulation and control evaporate.

Joint work with Piotr Faliszewski, Lane A. Hemaspaandra, and Joerg Rothe.

Tue, 16.6., 10:15

Speakers: Dulma Rodriguez
          LFE Theoretische Informatik, LMU

on: New Results on the Implementation of Resource Aware JAva (RAJA)

and Georg Klein (FoPra)
    Advisors: Dr. Hans Leiß, CIS; Prof. Hofmann, TCS

on: ML-GLR, a parser generator for general context-free grammars


Title: New Results on the Implementation of Resource Aware JAva (RAJA)

Abstract: RAJA is a type system for a compile-time analysis of
heap-space requirements for Java-style object-oriented programs with
explicit deallocation. It is based on an amortised complexity
analysis: the data is arbitrarily assigned a potential related to its
size and layout; allocations must be "paid for" from this
potential. The potential of each input then furnishes an upper bound
on the heap-space usage for the computation on this input.

Since last year we have been developing a prototype implementation of
this system in OCaml, consisting of a typechecker and an interpreter
for RAJA programs.  In this talk we present new results on this
implementation, such as algorithmic typing rules that have been proved
correct, a subtyping algorithm, and the integration of more practical
features, such as the separation of programs and input.  In
particular, the output messages of the typechecker after the static
analysis now clearly show that the system predicts upper bounds as a
function of the program input.


Title: ML-GLR, a parser generator for general context-free grammars

Abstract: Available parser generators for ML cannot handle general
context-free grammars, but for natural language processing this is
necessary.  The parser generator ML-Yacc was therefore reworked as
part of an advanced lab project (Fortgeschrittenenpraktikum).  The
implementation is based on the paper "Efficient Tabular LR Parsing"
by Nederhof and Satta; the resulting parser generator is based on a
variant of the CYK algorithm.

Fri 5.6.2009, 14:15

Speaker: Vladimiro Sassone
         University of Southampton

Title: Permission-based separation logic for message-passing concurrency

Abstract: The talk presents new local reasoning techniques for message
passing concurrent programs based on ideas from separation logics and
resource usage analyses.  We propose a two-step analysis in which
concurrent processes are first analysed for confluent behaviour using
a type and effect system which provides a foundation for establishing
correctness through logical, separation-based reasoning.

Fri 22.5.2009, 14:15

Speaker: Troels Bjerre Sørensen
         LFE Theoretische Informatik, LMU

Title: Computing Equilibrium Refinements

Abstract: It is well known that Nash equilibria of two-player extensive
form games can be found using linear optimization techniques.  Despite 
the widespread use of these techniques, the strategies computed suffer 
from certain deficiencies as they sometimes fail to prescribe sensible 
play in all parts of the game.  This issue is addressed by the concept 
of equilibrium refinements.

In this talk, I will show how to compute several different equilibrium 
refinements of given two-player extensive form games with imperfect 
information but perfect recall.  If the given game is zero-sum, the 
presented algorithms run in polynomial time, relying on linear 
programming (LP).  One of the presented algorithms also works for
non-zero-sum games, using the linear complementarity problem (LCP),
for which no polynomial-time algorithms are currently known.  For the case
of a perfect information game, I will show how to find a normal form 
proper equilibrium in linear time by a simple backwards induction procedure.
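Plain backwards induction on a perfect-information game tree can be sketched as below (hypothetical names; note that this computes an arbitrary subgame perfect equilibrium, whereas the talk's linear-time procedure additionally achieves normal form properness by careful tie-breaking):

```python
# Internal nodes name the player to move; leaves carry a payoff
# vector, one entry per player.

def backward_induction(node):
    """Return (payoff_vector, strategy) for the subtree at node."""
    if "payoff" in node:
        return node["payoff"], {}
    best_action, best_payoff, strategy = None, None, {}
    for action, child in node["children"].items():
        payoff, sub = backward_induction(child)
        strategy.update(sub)  # keep prescriptions for all subtrees
        if best_payoff is None or payoff[node["player"]] > best_payoff[node["player"]]:
            best_action, best_payoff = action, payoff
    strategy[node["name"]] = best_action
    return best_payoff, strategy

# Player 0 moves first, then player 1; payoffs are (p0, p1).
tree = {"name": "root", "player": 0, "children": {
    "L": {"name": "n1", "player": 1, "children": {
        "l": {"payoff": (2, 1)}, "r": {"payoff": (0, 0)}}},
    "R": {"payoff": (1, 2)}}}
payoff, strategy = backward_induction(tree)
print(payoff, strategy)  # (2, 1) {'n1': 'l', 'root': 'L'}
```

The strategy records a move at every decision node, including subtrees off the chosen path, which is exactly where refinements demand "sensible play".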

This talk is based on the following papers:

  * Fast algorithms for finding proper strategies in game trees (SODA'08)
  * Computing sequential equilibria for two-player games (SODA'06)

both of which are joint work with Peter Bro Miltersen.

Fri 15.5.2009, 14:15

Speaker: Hans-Wolfgang Loidl
         LFE Theoretische Informatik, LMU

Title: EmBounded Project Summary

Abstract: In this talk I will give a summary of the very recently
finished EmBounded project. The goal of the project was to identify,
to quantify and to certify resource bounded code in a domain specific
high-level language (Hume) for embedded systems. I will report on
advances made in the amortised-cost based resource analysis and in the
formalisations of program logics and resource logics. Finally, I will
discuss the development of some Hume applications, driven by the
information provided by the resource analysis, and executed on real
embedded hardware.

Fri 24.4.2009, 14:15

Speaker: Vivek Nigam
         Ecole Polytechnique

Title: Algorithmic specifications in linear logic with subexponentials

Abstract: The linear logic exponentials ! and ? are not canonical:
one can add to linear logic other such modals, say !^r and ?^r, which
may or may not allow contraction and weakening, and where r is drawn
from some partially ordered set of labels.  We shall call these
additional operators subexponentials and use them to assign locations
to multisets of formulas within a linear logic programming setting.
Treating locations as subexponentials greatly increases the
algorithmic expressiveness of logic.  To illustrate this new
expressiveness, we show that focused proof search can be precisely
linked to a simple algorithmic specification language that contains
while-loops, conditionals, and insertion into and deletion from
multisets.  Finally, we propose a new logical connective that allows
for the creation of new subexponentials.
This is joint work with Dale Miller.  You can find a draft with more
details at my homepage:

Besides the Oberseminar, in earlier semesters the
Lambda-Calculus-and-Types Club also met; it has now been merged into
the Oberseminar.

Winter semester 2008

17.10. Igor Razgon: Fixed-Parameter Tractability of the 2-CNF Deletion Problem

24.10. NN

31.10. No talk (farewell colloquium for Prof. Hegering)

05.11. Extra talk: Mathijs de Weerdt

07.11. No talk (medical informatics workshop)

12.11. Extra talk: Colin Riba

14.11. No talk (PUMA workshop)

19.11. Extra talk: Davide Grossi: Correspondences in the Theory of Aggregation

21.11. Ulrich Schöpp: A Formalised Lower Bound on Undirected Graph Reachability

28.11. Philip Wassenberg (DA): 
       Implementation and Evaluation of Cryptographic Auction Protocols, 

       Sarah Bluhme (FoPra): 
       Implementation of algorithms for computing tournament solutions, with a user interface

05.12. Dulma Rodriguez: Verification of an algorithm for computing greatest fixpoints in Coq

12.12. Cancelled

19.12. Andreas Abel: Normalization by Evaluation for System F

09.01. Jan Hoffmann: Automatic Amortized Analysis for Polynomially Bounded Resources

16.01. Paul Harrenstein: On the Complexity of Iterated Weak Dominance in Constant-Sum Games

23.01. Jan Johannsen: An exponential lower bound for width-restricted clause learning

30.01. Robert Grabowski: Information-flow security with dynamic domains

05.02. Markus Jehle: Bounded Model Checking for all Regular Properties (cancelled)

23.02. Stefan Schimanski: Polynomial Time Calculi (Rigorosum)

27.02. Sonja Mielchen: The Core of Cooperative 3-Person Games

Fri 27.2.2009, 14:15

Speaker: Sonja Mielchen
         Mathematisches Institut, LMU

Title: The Core of Cooperative 3-Person Games

Abstract: In this talk I consider the core solution concept for
3-person games with regard to its additivity.  Besides a complete
classification of 3-person games into classes according to the shape
of their core, this also yields intuitive geometric interpretations.

Mon, 23.2.2009, 15:00

Speaker: Stefan Schimanski
         LFE Theoretische Informatik, LMU

Title: Polynomial Time Calculi

This dissertation deals with type systems which guarantee polynomial
time complexity of typed programs. Such algorithms are commonly
regarded as being feasible for practical applications, because their
runtime grows reasonably fast for bigger inputs. The implicit
complexity community has proposed several type systems for polynomial
time in the recent years, each with strong, but different structural
restrictions on the permissible algorithms which are necessary to
control complexity. Comparisons between the various approaches are
hard and this has led to a landscape of islands in the literature of
expressible algorithms in each calculus, without many known links
between them.

This work chooses Light Affine Logic (LAL) and Hofmann's LFPL, both
linearly typed, and studies the connections between them. It is shown
that the light iteration in µLAL, the fixed point variant of LAL, is
expressive enough to allow a (non-trivial) compositional embedding of
LFPL. The pull-out trick of LAL is identified as a technique to type
certain non-size-increasing algorithms in such a way that they can be
iterated. The System T sibling LLT! of LAL is developed which
seamlessly integrates this technique as a central feature of the
iteration scheme and which is proved again correct and complete for
polynomial time. Because LLT!-iterations of the same level cannot be
nested, LLT! is further generalised to LLFPL!, which surprisingly can
express the impredicative iteration of LFPL and the light iteration of
LLT! at the same time. Therefore, it subsumes both systems in one,
while still being polynomial time normalisable. Hence, this result
gives the first bridge between these two islands of implicit
computational complexity.

Thu 5.2.2009, 10:15

Speaker: Markus Jehle
         LFE Theoretische Informatik, LMU

Title: Bounded Model Checking for all Regular Properties

Abstract: Prioritized equation systems are a specification language for
specifying omega-regular properties. We present an encoding of the
bounded model checking problem for this logic into SAT formulas. This
encoding is based on a game-theoretic characterization of the BMC problem.

Fri 30.1.2009, 14:15

Speaker: Robert Grabowski
         LFE Theoretische Informatik, LMU

Title: Information-flow security with dynamic domains

Abstract: Language-based information flow analysis is used to verify
that data flows between objects of different security domains within a
program follow a given policy. When the assignment of domains to
objects is given dynamically by the execution environment, maintaining
information flow security requires the ability to check at runtime
which flows between accessed objects are actually allowed before
operations inducing these flows are performed.

In this talk, it is shown how dynamic security domains can be
integrated as first-class members into a standard object-oriented
language. A type system is used to statically prove that the flow
tests included in a program are sufficient, so that a noninterference
property for objects is ensured regardless of their domains.

Moreover, these programs can be compiled to a JVM-like bytecode
language. A type system for bytecode with dynamic domains is presented
which transfers the concepts from the high-level program analysis to
unstructured code.

Fri 23.1.2009, 14:15

Speaker: Jan Johannsen
         LFE Theoretische Informatik, LMU

Title: An exponential lower bound for width-restricted clause learning


The propositional formulas Ord_n in CNF express the principle
that every finite linear ordering has a maximal element.  They have
short regular resolution proofs, and can thus be solved quickly by DLL
algorithms with clause learning.  In contrast, it is shown that they
require exponential time when the algorithm only learns short clauses
of length less than n/4.
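For illustration, one common CNF encoding of the ordering principle can be generated and checked unsatisfiable by brute force for tiny n (a hypothetical sketch; the exact clause set used in the talk may differ):

```python
from itertools import product

def ord_clauses(n):
    """CNF denying a maximal element in a linear order on n points."""
    var = {}
    def v(i, j):  # positive literal asserting "i < j"
        return var.setdefault((i, j), len(var) + 1)
    clauses = []
    for i in range(n):
        for j in range(n):
            if i != j:
                clauses.append([-v(i, j), -v(j, i)])   # antisymmetry
                clauses.append([v(i, j), v(j, i)])     # totality
                for k in range(n):
                    if k not in (i, j):                # transitivity
                        clauses.append([-v(i, j), -v(j, k), v(i, k)])
        clauses.append([v(i, j) for j in range(n) if j != i])  # i not maximal
    return clauses, len(var)

def satisfiable(clauses, nvars):
    """Brute force; only for tiny instances."""
    for bits in product([False, True], repeat=nvars):
        if all(any(bits[l - 1] if l > 0 else not bits[-l - 1] for l in c)
               for c in clauses):
            return True
    return False

clauses, nvars = ord_clauses(3)
print(satisfiable(clauses, nvars))  # False: Ord_3 is unsatisfiable
```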

Fri 16.1.2009, 14:15

Speaker: Paul Harrenstein
         LFE Theoretische Informatik, LMU

Title: On the Complexity of Iterated Weak Dominance in Constant-Sum Games

Abstract: In game theory, a player's action is said to be weakly
dominated if there exists another action that, with respect to what
the other players do, is never worse and sometimes strictly better. We
investigate the computational complexity of the process of iteratively
eliminating weakly dominated actions (IWD) in two-player constant-sum
games, i.e. games in which the interests of both players are
diametrically opposed. It turns out that deciding whether an action is
eliminable via IWD is feasible in polynomial time whereas deciding
whether a given subgame is reachable via IWD is NP-complete. The
latter result is quite surprising as we are not aware of other natural
computational problems that are intractable in constant-sum games.
Furthermore, we improve a result by Conitzer and Sandholm by showing
that typical problems associated with IWD in win-lose games with at
most one winner are NP-complete.
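The elimination process itself is straightforward to sketch; the hardness results concern the freedom in choosing which dominated action to remove next. A minimal sketch (hypothetical names) that fixes one arbitrary elimination order:

```python
def iwd(R, C):
    """Iteratively eliminate weakly dominated actions, one per round.
    R[i][j] / C[i][j]: payoffs of the row / column player."""
    rows, cols = set(range(len(R))), set(range(len(R[0])))
    changed = True
    while changed:
        changed = False
        for a in list(rows):                      # row actions
            for b in rows - {a}:
                # b weakly dominates a: never worse, sometimes better
                diffs = [R[b][j] - R[a][j] for j in cols]
                if all(d >= 0 for d in diffs) and any(d > 0 for d in diffs):
                    rows.discard(a)
                    changed = True
                    break
            if changed:
                break
        if changed:
            continue
        for a in list(cols):                      # column actions
            for b in cols - {a}:
                diffs = [C[i][b] - C[i][a] for i in rows]
                if all(d >= 0 for d in diffs) and any(d > 0 for d in diffs):
                    cols.discard(a)
                    changed = True
                    break
            if changed:
                break
    return rows, cols

R = [[1, 1], [1, 0]]   # row player's payoffs
C = [[0, 1], [1, 0]]   # column player's payoffs
print(iwd(R, C))  # ({0}, {1})
```

Whether a *particular* action or subgame can survive under *some* elimination order is where the NP-completeness arises.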

Fri 9.1.2009, 14:15

Speaker: Jan Hoffmann
         LFE Theoretische Informatik, LMU

Title: Automatic Amortized Analysis for Polynomially Bounded Resources

Abstract: In 2003, Hofmann and Jost introduced an automatic amortized
analysis to obtain bounds on the resource consumption of functional
programs.  They showed that type systems and linear programming
can be used to compute these bounds fully automatically, without
requiring any annotations in the program.

Such a static automatic amortized analysis has been applied to many
standard algorithms such as quicksort and is able to obtain bounds on
the run time, the heap space and the stack space needed by the
algorithm.  It is efficiently computable and does not affect the run
time behavior of the program.  As yet, the drawback of this method was
its limitation to resource bounds that are linear in the input.

We show how to overcome this drawback by presenting an extension of
the previous systems that is able to compute resource bounds for
functional programs whose resource consumption is polynomial in the input.
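As a quick empirical illustration of the kind of polynomial bound in question (not the paper's analysis): naive quicksort performs at most n*(n-1)/2 comparisons on inputs of size n.

```python
def quicksort(xs, counter):
    """Naive quicksort; counter[0] accumulates element comparisons."""
    if len(xs) <= 1:
        return list(xs)
    pivot, rest = xs[0], xs[1:]
    counter[0] += len(rest)          # each remaining element meets the pivot
    left = [x for x in rest if x < pivot]
    right = [x for x in rest if x >= pivot]
    return quicksort(left, counter) + [pivot] + quicksort(right, counter)

for xs in ([3, 1, 4, 1, 5, 9, 2, 6], list(range(10, 0, -1))):
    n, counter = len(xs), [0]
    assert quicksort(xs, counter) == sorted(xs)
    assert counter[0] <= n * (n - 1) // 2  # quadratic bound on comparisons
    print(n, counter[0])
```

The extended analysis aims to infer such quadratic (and higher-degree) bounds statically, rather than measuring them.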

Fri 19.12., 14:15

Speaker: Andreas Abel
         LFE Theoretische Informatik, LMU

Title: Normalization by Evaluation for System F

Abstract: Normalization by evaluation is a technique to compute the full
beta-normal form of lambda-terms.  In the first step, terms are
interpreted in some value domain, which corresponds to computing a
semantic weak head normal form.  In the second step, values are
reified, i.e., converted back to terms which are actually normal forms.
During reification, evaluation under binders takes place.
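The two steps can be sketched for the untyped lambda-calculus as follows (a hypothetical minimal sketch with de Bruijn indices; the talk treats the simply-typed case with type-directed, eta-long reification):

```python
# Terms: ("var", i), ("lam", body), ("app", f, a), de Bruijn indices.
# Values are Python closures; reification reads them back as terms.

def evaluate(term, env):
    tag = term[0]
    if tag == "var":
        return env[term[1]]
    if tag == "lam":
        return ("fun", lambda v: evaluate(term[1], [v] + env))
    f, a = evaluate(term[1], env), evaluate(term[2], env)
    return f[1](a) if f[0] == "fun" else ("neapp", f, a)  # neutral app

def reify(value, depth):
    if value[0] == "fun":
        # evaluation under the binder: apply to a fresh variable level
        return ("lam", reify(value[1](("nevar", depth)), depth + 1))
    if value[0] == "nevar":
        return ("var", depth - value[1] - 1)  # level -> de Bruijn index
    return ("app", reify(value[1], depth), reify(value[2], depth))

def normalize(term):
    return reify(evaluate(term, []), 0)

ident = ("lam", ("var", 0))
term = ("lam", ("app", ident, ("var", 0)))   # \y. (\x. x) y
print(normalize(term))  # ('lam', ('var', 0)), i.e. \y. y
```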

In this talk, I will first give a tutorial on normalization by
evaluation for the simply-typed lambda-calculus.  I review
interpretation of lambda-terms into an applicative structure and then
define a simple type-directed reification which returns eta-long
beta-normal forms.  I sketch the proof of soundness and completeness,
which involves constructions of Kripke logical relations.  The proof
is generic enough to account for related results, like weak normalization.

In the last part of my talk, I outline how to extend normalization by
evaluation to System F, the polymorphic lambda-calculus, using
Girard's idea.

Fri 5.12.2008, 14:15

Speaker: Dulma Rodriguez
         LFE Theoretische Informatik, LMU

Title: Verification of an algorithm for computing greatest fixpoints in Coq

We present an algorithm for computing membership in the greatest
fixpoint of a monotone operator on the powerset of some given set.
Rather than computing the entire fixpoint by Knaster-Tarski iteration,
we start from a given goal.  This can be advantageous if the size of
the underlying set or of the greatest fixpoint is large compared to
the portion that is relevant for determining membership of a
particular element.  We prove this algorithm sound and complete and
present a verification of the proof in the theorem prover Coq.
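For contrast, the plain Knaster-Tarski iteration on a finite powerset can be sketched as follows (hypothetical toy example: the states of a graph that admit an infinite path):

```python
def gfp(F, universe):
    """Iterate a monotone operator F downward from the full set."""
    current = frozenset(universe)
    while True:
        nxt = frozenset(F(current))
        if nxt == current:
            return current
        current = nxt

# Toy operator: a node is kept iff it has a successor that is kept,
# so the greatest fixpoint is the set of nodes with an infinite path.
edges = {0: [1], 1: [0], 2: [3], 3: [], 4: [4]}
F = lambda s: {x for x in edges if any(y in s for y in edges[x])}
fix = gfp(F, edges)
print(sorted(fix))  # [0, 1, 4]
```

Note that this computes the *entire* fixpoint; the talk's goal-directed algorithm avoids exactly that when only membership of one element is asked.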

Fri 28.11.2008, 14:15

Speakers: Philip Wassenberg
          LFE Theoretische Informatik, LMU

on: Implementation and Evaluation of Cryptographic Auction Protocols (Diplomarbeit)

and Sarah Bluhme
    LFE Theoretische Informatik, LMU

on: Implementation of algorithms for computing tournament solutions, with a user interface (FoPra)

Fri 21.11.2008, 14:15

Speaker: Ulrich Schöpp
         LFE Theoretische Informatik, LMU

Title: A Formalised Lower Bound on Undirected Graph Reachability

Abstract: We consider the expressivity of Jumping Automata on Graphs (JAGs), an
idealised model of computation with logarithmic space. JAGs operate on
graphs by using finite control and a constant number of pebbles. In this
talk we revisit the proof of Cook & Rackoff that JAGs cannot decide
s-t-reachability in undirected graphs. Cook & Rackoff prove this result
by constructing, for any given JAG, a finite graph that cannot be
traversed exhaustively by the JAG. We generalise this result from the
graphs constructed by Cook & Rackoff to a general class of group action
graphs. We establish a bound on the number of nodes that a JAG can visit
on such action graphs. This generalisation allows us to strengthen the
result of Cook & Rackoff to the existence of a graph of small degree
whose diameter (rather than its number of nodes) is larger than the
number of nodes the JAG can visit. In this talk I will report about a
formalisation of the main result in the theorem prover Coq, using
Gonthier's tactic language SSReflect.

Wed 19.11., 14:15

Speaker: Davide Grossi
         Individual and Collective Reasoning Group,
         Computer Science and Communications,
         University of Luxembourg

Title: Correspondences in the Theory of Aggregation

Abstract: In this talk I will investigate some interrelationships between the
social-theoretic problems of preference and judgment aggregation from  
the perspective of formal logic.  On the one hand, preference  
aggregation will be shown to be equivalent to the aggregation of some  
special types of judgments (logical formulae). On the other hand, the  
judgment aggregation problem will be proven equivalent to the  
aggregation problem of specific types of preferences. As a result, a  
detailed map of aggregation problems is obtained which better  
highlights differences and similarities between the two research  
fields of preference and judgment aggregation, and which makes a  
unified systematization of the two fields possible.

Wed 12.11.2008, 14:15

Speaker: Colin Riba, INRIA Sophia Antipolis

Title: Type-based termination with sized products

Abstract: Type-based termination is a semantically intuitive method
that ensures termination of recursive definitions by tracking the size
of datatype elements, and by checking that recursive calls operate on
smaller arguments. However, many systems using type-based termination
rely on a semantical anomaly to guarantee strong normalization;
namely, they impose that non-recursive elements of a datatype, e.g. the
empty list, have size 1 instead of 0. This semantical anomaly also
prevents functions such as quicksort from being given a precise typing.
The main contribution of this paper is a type system that remedies this
anomaly and still ensures termination. In addition, our type system
features prenex stage polymorphism, a weakening of existential
quantification over stages, and is precise enough to type quicksort as a
non-size-increasing function. Moreover, our system accommodates stage
addition with all positive inductive types.

Wed, 5.11.2008, 14:15

Speaker: Mathijs de Weerdt

Title: A strategy-proof auction mechanism without money

Abstract: I'll start this talk by briefly discussing Vickrey's
well-known sealed-bid second-price auction, and illustrate this type
of auction by noting the similarity to eBay's bidding proxy. I'll show
that (under ideal circumstances) bidding your private value is a
dominant strategy in this auction.
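The second-price rule itself is a one-liner to sketch (hypothetical names):

```python
def vickrey(bids):
    """bids: dict bidder -> bid; returns (winner, price paid).
    The highest bidder wins but pays only the second-highest bid,
    which is what makes truthful bidding a dominant strategy."""
    ranked = sorted(bids, key=bids.get, reverse=True)
    winner, runner_up = ranked[0], ranked[1]
    return winner, bids[runner_up]

print(vickrey({"alice": 10, "bob": 7, "carol": 4}))  # ('alice', 7)
```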

After this introduction I'll propose a new similar mechanism but
without money. In this mechanism a bid can be any alternative, rather
than just a monetary offer. Such an auction is applicable to
situations where no numeraire is available, when there is a fixed
budget, or when money is no issue. Finally I'll discuss some of the
properties and variants of this mechanism.  

Fri 17.10.2008, 14:15

Speaker: Igor Razgon

Title: Fixed-Parameter Tractability of the 2-CNF Deletion Problem

Abstract: Fixed-parameter algorithms provide an alternative
methodology for coping with NP-hardness that makes it possible to
solve hard optimization problems exactly and in low-polynomial time.
The necessary condition for designing a fixed-parameter algorithm is
the presence of a parameter associated with the given problem.  The
time complexity of a fixed-parameter algorithm can be written in the
form $O(f(k)*n^c)$, where $k$ is the parameter, $f(k)$ is an
exponential function of the parameter, and $c$ is a constant, usually
not greater than 3.  Thus a fixed-parameter algorithm is exponential
in the parameter but polynomial in the input size.  The low-polynomial
running time is achieved for small values of $k$.

The design of fixed-parameter algorithms was inspired by the
observation that in areas like bioinformatics and network design, many
hard problems are associated with natural parameters that are very
small in practice.

Problems that admit fixed-parameter algorithms are called
fixed-parameter tractable (FPT), and the area of Theoretical Computer
Science studying various aspects of fixed-parameter tractability and
intractability is called Parameterized Complexity.  One of the key
questions investigated in the area is the classification of problems,
i.e. telling whether a particular problem is FPT.  The community has
identified a number of most challenging open questions related to this
classification.  One such question is whether the following problem is
FPT: given a 2-CNF formula, is it possible to remove at most $k$
clauses so that the resulting formula becomes satisfiable?  This
problem is called 2-CNF deletion.  It has attracted the attention of
researchers due to the large number of potential applications of this
problem.  Despite a number of attempts to solve it, the status of its
fixed-parameter tractability remained unresolved for more than 10
years.  The fixed-parameter tractability of the problem has now been
confirmed in Razgon, O'Sullivan, "Almost 2-SAT is fixed-parameter
tractable", ICALP 2008.
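The problem statement can be illustrated by a brute-force sketch (hypothetical, exponential in both the number of clauses and variables, for tiny instances only; the FPT algorithm in the paper is of course far more sophisticated):

```python
from itertools import combinations, product

def satisfiable(clauses):
    """Brute-force SAT check; literals are nonzero ints, -x negates x."""
    vars_ = sorted({abs(l) for c in clauses for l in c})
    for bits in product([False, True], repeat=len(vars_)):
        val = dict(zip(vars_, bits))
        if all(any(val[abs(l)] == (l > 0) for l in c) for c in clauses):
            return True
    return False

def two_cnf_deletion(clauses, k):
    """Can deleting at most k clauses make the formula satisfiable?"""
    for d in range(min(k, len(clauses)) + 1):
        for removed in combinations(range(len(clauses)), d):
            rest = [c for i, c in enumerate(clauses) if i not in removed]
            if satisfiable(rest):
                return True
    return False

clauses = [[1, 1], [-1, -1]]            # x1 and not-x1: unsatisfiable
print(two_cnf_deletion(clauses, 0), two_cnf_deletion(clauses, 1))  # False True
```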

In this talk I will briefly introduce the area of parameterized
complexity and give an overview of the main ideas of the
fixed-parameter algorithm for the 2-CNF deletion problem.  The talk is
designed for an audience completely unfamiliar with the area of
Parameterized Complexity.

Summer semester 2008

18.04. Hermann Gruber

25.04. Riko Jacob (TUM): Optimal Sparse Matrix Dense Vector Multiplication in the I/O-Model

02.05. Ulrich Schoepp: Pointer programs with iteration and
      reachability in undirected graphs

09.05. -- no talk --

16.05. Lennart Beringer: Relational proof systems for non-interference
and code transformations

23.05. Nick Benton (MS Research)

30.05. Roland Axelsson, Model checking PDL with indexed grammars

05.06. Extra-Vortrag: Olivier Danvy (BRICS): From reduction-based to reduction-free normalization

06.06. Markus Latte: Towards lower bounds for intuitionistic propositional proofs

13.06. Dulma Rodriguez (LMU), Markus Brill (TUM)

20.06. Extra-Vortrag: Tarmo Uustalu (IoC Tallinn)

20.06. Hans-Wolfgang Loidl:  A Reasoning Infrastructure for the Embedded Systems Language Hume

27.06. Hermann Gruber: Language operations with regular expressions of polynomial size

04.07. Felix Brandt

10.07. Andreas Abel:
Abstract equality checking algorithm for dependent types

18.07. Jan Hoffmann:
Two NP-complete decision problems for resolution proofs

22.07. Christian Dax: Alternation Elimination by Complementation

25.07. Dongyuan Yu: Factoring with a SAT solver

5.9. Steffen Jost: Introduction to Amortised Program Analysis

Fri 5.9., 11:00

Speaker: Steffen Jost
         University of St Andrews, Scotland

Title: Introduction to Amortised Program Analysis

We will explore a compile-time program analysis to determine the
worst-case heap-space usage of a given program.

The analysis applies a basic principle from amortised complexity
analysis (which will be properly introduced) to construct a linear
programming problem. Any existing solution to these constraints obtained
by a standard LP-solver gives rise to a simple arithmetic expression,
which linearly depends on the program's input sizes (e.g. "ax+by+c" for
a program that takes two lists of length x and y as its input). We have
proved, for both a standard functional language and a subset of Java
(including inheritance, downcast, update and aliasing), that these
expressions are a strict upper bound on a program's heap-space usage.
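As a toy illustration of such a bound (hypothetical example, not the analysis itself): functional list append allocates exactly one cons cell per element of its first argument, so the inferred expression would be 1*x + 0*y + 0.

```python
def append_counting(xs, ys):
    """Functional append; returns (result, cons cells allocated)."""
    if not xs:
        return ys, 0
    rest, cells = append_counting(xs[1:], ys)
    return [xs[0]] + rest, cells + 1   # one fresh cons cell per element of xs

for x, y in ((3, 5), (0, 4), (7, 0)):
    result, cells = append_counting(list(range(x)), list(range(y)))
    assert cells <= 1 * x + 0 * y + 0  # the linear bound a*x + b*y + c
    print(x, y, cells)
```

The point of the analysis is that the coefficients a, b, c are found by an LP solver from typing constraints, without running the program.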

We have implemented the analysis for a functional language and, as part
of the ongoing research project EmBounded, extended it to bound
execution time and stack-space usage as well. We will conclude the talk
with a short demonstration.
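
The constraint-to-bound pipeline can be sketched in a few lines (a toy model with one size variable and unit allocation costs; hypothetical code, not the EmBounded analyser):

```python
# Toy amortised heap analysis (illustrative only). Each allocation site
# contributes a linear constraint on the per-element potential p and the
# constant potential c; any feasible solution yields the sound bound
# p*n + c on heap cells allocated for an input of length n.

def analyse(constraints):
    """constraints: list of (per_elem_cost, fixed_cost) pairs.
    A real analysis ships these to an LP solver; with a single size
    variable the least solution can simply be read off."""
    p = max((pe for pe, _ in constraints), default=0)
    c = max((fc for _, fc in constraints), default=0)
    return p, c

def copy_list(xs, counter):
    """Allocates one fresh cons cell per element (appends model allocation)."""
    out = []
    for x in xs:
        out.append(x)
        counter[0] += 1
    return out

p, c = analyse([(1, 0)])          # 'copy' allocates 1 cell per element
for n in range(6):
    counter = [0]
    copy_list(list(range(n)), counter)
    assert counter[0] <= p * n + c   # the inferred bound is sound
```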

Fri 25.07.2008, 14:15

Speaker:    Dongyuan Yu
            LFE Theoretische Informatik, LMU

Title: Factoring with a SAT solver

Abstract: The problem of factoring a natural number is reduced to the
satisfiability problem for propositional logic, and it is shown how SAT
solvers can thereby be used to split numbers into two factors. The talk
presents the reduction and reports on experiments with an
implementation.
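
The shape of the reduction can be illustrated as follows (a deliberately naive sketch: exhaustive search over the factor bits stands in for the SAT solver, whereas a real encoding turns a binary multiplier circuit into CNF clauses):

```python
# Naive sketch of the factoring-to-SAT idea (illustrative only; a real
# reduction encodes a multiplier circuit as CNF and calls a SAT solver).
from itertools import product

def factor_via_bits(n, bits=4):
    """Search assignments to the Boolean bit variables of p and q with
    p*q == n, excluding the trivial factorisations with p == 1 or q == 1."""
    for assignment in product([0, 1], repeat=2 * bits):
        p = sum(b << i for i, b in enumerate(assignment[:bits]))
        q = sum(b << i for i, b in enumerate(assignment[bits:]))
        if p > 1 and q > 1 and p * q == n:
            return p, q
    return None   # no nontrivial factorisation within the bit width
```

For example, `factor_via_bits(15)` returns a nontrivial factor pair of 15, while a prime such as 13 yields `None`.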

Tue 22.7.2008, 10:00

Speaker:    Christian Dax
            ETH Zürich

Title: Alternation Elimination by Complementation

In the automata-theoretic approach for finite-state model checking, the 
question 'does a system satisfy a given specification' is reduced to the
emptiness problem for an automaton. Since (i) formulas of widely used 
linear-time temporal logics with future and past operators can directly 
be translated into alternating automata, and (ii) efficient algorithms 
exist for checking emptiness of nondeterministic automata, we just need 
efficient translations from alternating to nondeterministic automata.

In this talk, I show that the problem of translating an alternating 
automaton to a nondeterministic automaton can be reduced to the problem 
of complementing a nondeterministic automaton. Moreover, I will present 
a generalization of Miyano/Hayashi's construction for complementing 
loop-free 2-way nondeterministic Büchi automata. Finally, I will discuss
ongoing work, namely, how to combine the reduction with this 
complementation construction to obtain an efficient translation for 
formulas of fragments of the standardized property specification 
language PSL with past operators to nondeterministic Büchi automata.

Fri 18.7.2008, 14:15

Speaker:    Jan Hoffmann
            LFE Theoretische Informatik, LMU

Title:  Two NP-complete decision problems for resolution proofs

The talk first defines the notion of a proof system for propositional
formulas due to Cook and Reckhow and explains the connection to open
problems in complexity theory.  Examples of such proof systems are
tree-like resolution and regular resolution, which are usually defined
as sequences of clauses.

Two results are presented which show that these customary definitions
do not meet the criteria for proof systems if P and NP are distinct:
it is NP-complete to decide whether a sequence of clauses is a
tree-like (respectively regular) resolution proof.

The proof idea for the tree-like case is illustrated with a similar
NP-complete problem that admits less technical proofs.
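
A small sanity check illustrates the objects involved (a hypothetical sketch: verifying that a clause sequence is a dag-like resolution derivation is easy; the talk's NP-completeness results concern the tree-like and regular restrictions):

```python
# Checker sketch: is a sequence of clauses a (dag-like) resolution
# derivation from the input clauses? Every clause must be an input
# clause or a resolvent of two earlier clauses. Literals are nonzero
# integers; -x is the negation of x.

def resolvent(c1, c2, lit):
    """Resolve c1 and c2 on lit, if lit occurs positively in c1
    and negatively in c2; otherwise return None."""
    if lit in c1 and -lit in c2:
        return frozenset((c1 - {lit}) | (c2 - {-lit}))
    return None

def is_resolution_derivation(inputs, sequence):
    seen = [frozenset(c) for c in inputs]
    for clause in map(frozenset, sequence):
        ok = clause in seen or any(
            resolvent(a, b, lit) == clause
            for a in seen for b in seen for lit in a)
        if not ok:
            return False
        seen.append(clause)
    return True

# From {x}, {-x, y}, {-y} one can derive {y} and then the empty clause.
assert is_resolution_derivation([{1}, {-1, 2}, {-2}], [{2}, set()])
```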

Thu 10. July 2008, 13:00

Speaker:    Andreas Abel
            LFE Theoretische Informatik, LMU

Title: Verifying a Semantic beta-eta-Conversion Test
for Dependent Type Theory

Abstract: Types in programming languages are a tool to automatically check
program properties.  Such properties include, for instance,

(1) absence of certain run-time errors (like adding integers to strings), 
(2) compliance with resource or access restrictions, 
(3) termination, and
(4) correctness with regard to a specification.

The last item, since undecidable in general, requires a language in
which one also can code up proofs.  Dependent type theory is a
candidate for that language; it allows programs and proofs to be
written with the same syntax and features types which can contain
whole specifications of programs.

In dependent type theory, programs can appear in types, which means
that the equality of types depends on the equality of programs.
Checking whether a program is type correct relies on checking type
equality.  Finding a powerful yet decidable type equality is an
ongoing quest in type theory research:  Decidability of equality is a
precondition for the decidability of type checking; and the more
powerful the equality, the fewer good programs are rejected by the
type checker.

In this work, we present a beta-eta-equality check for a dependently typed
language which is the core of the languages Agda and Epigram.  The
check is verified using normalization-by-evaluation techniques.
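
The flavour of normalization by evaluation can be conveyed with a minimal sketch for the untyped lambda calculus (an illustration of the NbE idea only; the verified algorithm of the talk handles dependent types and beta-eta):

```python
# Minimal normalization-by-evaluation sketch for the *untyped* lambda
# calculus. Terms use de Bruijn indices: ('var', i), ('lam', body),
# ('app', f, a). Values are closures or neutral application spines.

def evaluate(term, env):
    tag = term[0]
    if tag == 'var':
        return env[term[1]]
    if tag == 'lam':
        return ('clo', lambda v: evaluate(term[1], [v] + env))
    f = evaluate(term[1], env)
    a = evaluate(term[2], env)
    # beta-reduce closures; stack applications onto neutral terms
    return f[1](a) if f[0] == 'clo' else ('ne', ('app', f[1], a))

def reify(value, depth):
    """Read a semantic value back into a normal-form term."""
    if value[0] == 'clo':
        fresh = ('ne', ('var', depth))       # fresh variable as a level
        return ('lam', reify(value[1](fresh), depth + 1))
    return reify_neutral(value[1], depth)

def reify_neutral(ne, depth):
    if ne[0] == 'var':
        return ('var', depth - 1 - ne[1])    # convert level to index
    return ('app', reify_neutral(ne[1], depth), reify(ne[2], depth))

def normalize(term):
    return reify(evaluate(term, []), 0)

# (\x. (\y. y) x) normalizes to the identity \x. x
assert normalize(('lam', ('app', ('lam', ('var', 0)), ('var', 0)))) \
    == ('lam', ('var', 0))
```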

Fri 4. July 2008, 14:15

Speaker:    Felix Brandt
            LFE Theoretische Informatik, LMU

Title:  Minimal Stable Sets in Tournaments

Abstract: We propose a systematic methodology for defining tournament solutions
as extensions of maximality. A central concept of this methodology is  
a generalization of Dutta's notion of a minimal covering set.  We  
obtain an infinite hierarchy of tournament solutions that encompasses  
the top cycle, the uncovered set, the Banks set, the minimal covering  
set, the tournament equilibrium set, the Copeland set, and the  
bipartisan set. Moreover, the hierarchy includes a new tournament  
solution, the minimal extending set (ME), which is conjectured to  
refine both the minimal covering set and the Banks set.
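
For illustration, one member of this family, the uncovered set, is easy to compute directly (a sketch of the standard two-step characterisation, not code from the talk):

```python
# Illustrative sketch: the uncovered set of a tournament consists of the
# alternatives that reach every other alternative in at most two steps.
# beats[x] is the set of alternatives that x dominates.

def uncovered_set(alternatives, beats):
    def reaches_in_two(x, y):
        return y in beats[x] or any(y in beats[z] for z in beats[x])
    return {x for x in alternatives
            if all(reaches_in_two(x, y) for y in alternatives if y != x)}

# In a 3-cycle a>b>c>a no alternative is covered, so all are uncovered.
beats = {'a': {'b'}, 'b': {'c'}, 'c': {'a'}}
assert uncovered_set(set(beats), beats) == {'a', 'b', 'c'}
```

With a Condorcet winner added on top of the cycle, the uncovered set collapses to that single alternative.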

Fri 27. June 2008, 14:15

Speaker:    Hermann Gruber
            LFE Theoretische Informatik, LMU

Title: Language operations with regular expressions of polynomial size

Abstract: It is well known that the regular languages are closed under
very many language operations. This talk is concerned with the question
of what effect certain language operations have on the descriptional
complexity of regular expressions. Several efficiently realisable
operations are identified; efficient here means that the result of the
operation can be described by an expression whose size is polynomial in
that of the operand. We show that quotients, in particular prefix and
suffix closure, can be realised on regular expressions with quadratic
growth in size. For the cyclic shift of a language we obtain a cubic
upper and a quadratic lower bound.
(Joint work with Markus Holzer, TU München.)
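
The prefix-closure construction with quadratic growth can be sketched recursively (my own rendering of the standard recursion, checked here against Python's re engine; the 'opt' node is an auxiliary introduced for this sketch):

```python
# Prefix closure of a regular expression, sketched on small ASTs:
# ('sym', c), ('cat', r, s), ('alt', r, s), ('star', r).
# The 'cat' case duplicates its left operand, hence quadratic growth.
import re

def pref(r):
    """Expression for all prefixes of words matched by r."""
    tag = r[0]
    if tag == 'sym':
        return ('opt', r)                          # a  ->  a?
    if tag == 'alt':
        return ('alt', pref(r[1]), pref(r[2]))
    if tag == 'cat':                               # rs ->  Pref(r) | r Pref(s)
        return ('alt', pref(r[1]), ('cat', r[1], pref(r[2])))
    if tag == 'star':                              # r* ->  r* Pref(r)
        return ('cat', r, pref(r[1]))
    raise ValueError(tag)

def to_pattern(r):
    tag = r[0]
    if tag == 'sym':
        return r[1]
    if tag == 'opt':
        return '(?:%s)?' % to_pattern(r[1])
    if tag == 'star':
        return '(?:%s)*' % to_pattern(r[1])
    if tag == 'cat':
        return to_pattern(r[1]) + to_pattern(r[2])
    return '(?:%s|%s)' % (to_pattern(r[1]), to_pattern(r[2]))

ab = ('cat', ('sym', 'a'), ('sym', 'b'))           # the expression ab
p = to_pattern(pref(ab))
assert all(re.fullmatch(p, w) for w in ['', 'a', 'ab'])
assert not re.fullmatch(p, 'b')
```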

Fri 20. June 2008, 14:15

Speaker:    Hans-Wolfgang Loidl
            LFE Theoretische Informatik, LMU

Title: A Reasoning Infrastructure for the Embedded Systems Language Hume

Abstract: One of the main design principles of the embedded systems
language Hume is predictability of resource consumption. To achieve
this, Hume is structured into two language layers: a box-based system
layer and a functional language as expression layer inside the
boxes. We are primarily interested in inferring and certifying
resource bounds for the expression layer. In order to provide
resource-bound guarantees for library code, we take a
proof-carrying-code approach, as elaborated in the MRG project, and
develop a reasoning infrastructure consisting of several layers of
logics.  In this talk, we present a general VDM-style program logic
for a core of expression layer Hume.  On top of this general logic, we
present a specialised logic that facilitates proofs of resource
consumption.  We will discuss the design, the Isabelle/HOL
formalisation and the meta-theoretic properties of these logics.

Fri 20. June 2008, 11:00

Speaker:    Tarmo Uustalu
            Institute of Cybernetics, Tallinn, Estonia

Title: Proof optimization for partial redundancy elimination

Abstract: Partial redundancy elimination is a subtle optimization which
performs common subexpression elimination and expression motion at the
same time. In this talk, we use it as an example to promote and
demonstrate the scalability of the technology of \emph{proof
optimization}.  By this we mean automatic transformation of a given
program's Hoare logic proof of functional correctness or resource
usage into one of the optimized program, guided by a type-derivation
representation of the result of the underlying dataflow analyses. A
proof optimizer is a useful tool for the producer's side in a natural
proof-carrying code scenario where programs are proved correct prior
to optimizing compilation before transmission to the consumer.

We present a type-systematic description of the underlying analyses
and of the optimization, demonstrate that the optimization is
semantically sound and improving in a formulation using type-indexed
relations, and then show that these arguments can be transferred to
mechanical transformations of functional correctness/resource usage
proofs in Hoare logics. For the improvement part, we instrument the
standard semantics and Hoare logic so that evaluations of expressions
become a resource.

(Joint work with Ando Saabas.)


Fri 13. June 2008, 14:15

Talk 1:       Markus Brill
              Technische Universität München

Title:        Computational Aspects of Shapley's Saddles

Abstract: We study the computational complexity of minimal
generalized saddle points, a family of game theoretic solution  
concepts that were devised by Lloyd Shapley as early as 1953.  
Furthermore, we point out connections to a problem from social  
choice theory.

Talk 2:       Dulma Rodriguez
              LFE Theoretische Informatik, LMU

Title:        Implementation of Resource Aware JAva (RAJA)

Abstract: We present a type system for a compile-time analysis of heap-space
requirements for Java style object-oriented programs with explicit
deallocation. The system is called "Resource Aware JAva (RAJA)".
It is based on an amortised complexity analysis: the data is
arbitrarily assigned a potential related to its size and layout;
allocations must be "paid for" from this potential. The potential of
each input then furnishes an upper bound on the heap space usage for
the computation on this input.

This system has been developed by Martin Hofmann and Steffen Jost in
the article "Type-Based Amortised Heap-Space Analysis" in ESOP, 2006.
We now present an implementation in OCaml consisting of
a typechecker for RAJA programs and an interpreter that computes the
real maximal heap space usage of the program. In this talk we introduce the
system and show some implementation features and examples.

Fri 6. June 2008, 14:15

Speaker:    Markus Latte
            LFE Theoretische Informatik, LMU

Title:    Towards lower bounds for intuitionistic propositional logic

An important problem in complexity theory is to establish superpolynomial
lower bounds for propositional intuitionistic logic.  In order to do so,
it suffices to exhibit a sequence $(\Gamma_n \vdash \gamma_n)_{n \in \mathbb{N}}$
such that every derivation of each sequent is superpolynomial in the size
of the respective sequent.  Normalization of derivations and a
superpolynomial lower bound for the size of circuits which decide the
clique problem are the main ingredients to obtain a partial result.

For every natural number $n$, a provable sequent $\Gamma_n \vdash \gamma_n$
is constructed such that any normal derivation encapsulates an enumeration 
of all cliques within a graph over $n$ nodes.  A monotone circuit for the 
clique problem can be extracted from any normal derivation.  The size of 
the circuit is polynomially bounded by the size of the derivation.  It is 
well known that the size of any such circuit is exponentially large.  
Therefore any normal derivation is exponential as well.

To enable an extraction out of any derivation of $\Gamma_n \vdash \gamma_n$,
the normalization process is considered in reverse order.  Essential 
information about the normal derivation is encapsulated in a set of 
so-called threads.  Each thread outlines a certain clique.  By backtracking, 
any derivation of the sequent receives a similar set of threads.  To this set
two formal languages are assigned.  If a context-free grammar can separate 
these two languages, an extraction of the required circuit is possible.  
The extracted monotone circuit decides the clique problem and is polynomially 
bounded in the size of the derivation.  It implements a dynamic programming 
paradigm to simulate both the Cocke-Younger-Kasami parsing algorithm and
the sampling of the backtracked threads.  An extraction is possible for
cut rank two.  However, a counterexample for cut rank four is available.

(This talk reports on a part of the author's Diplomarbeit.)
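
The Cocke-Younger-Kasami algorithm mentioned in the abstract, sketched for a grammar in Chomsky normal form (an illustration of the parsing side only, not of the circuit extraction):

```python
# CYK parsing sketch for a grammar in Chomsky normal form (illustrative).
# Rules are pairs: ('A', 'a') for terminals, ('A', ('B', 'C')) otherwise.

def cyk(word, rules, start='S'):
    n = len(word)
    if n == 0:
        return False  # the empty word is not handled by this sketch
    # table[i][l]: nonterminals deriving word[i:i+l]
    table = [[set() for _ in range(n + 1)] for _ in range(n)]
    for i, ch in enumerate(word):
        table[i][1] = {a for a, rhs in rules if rhs == ch}
    for length in range(2, n + 1):
        for i in range(n - length + 1):
            for split in range(1, length):
                for a, rhs in rules:
                    if (isinstance(rhs, tuple)
                            and rhs[0] in table[i][split]
                            and rhs[1] in table[i + split][length - split]):
                        table[i][length].add(a)
    return start in table[0][n]

rules = [('S', ('A', 'B')), ('A', 'a'), ('B', 'b')]
assert cyk('ab', rules) and not cyk('ba', rules)
```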

Extra talk: Thu, 5. June 2008, 16:15

Speaker:    Olivier Danvy
            University of Aarhus

Title:    From reduction-based to reduction-free normalization

Abstract: We document a method to construct reduction-free normalization
   functions.  Starting from a reduction-based normalization function,
   i.e., the iteration of a one-step reduction function, we
   successively subject it to refocusing (i.e., deforestation of the
   intermediate reduced terms in the reduction sequence), fusion of
   auxiliary functions, refunctionalization (i.e., the converse of
   defunctionalization), and direct-style transformation (i.e., the
   converse of the CPS transformation).  We consider in turn several
   simple examples and treat them in detail: for the first one,
   arithmetic expressions, we construct an evaluation function; for the
   second one, terms in the free monoid, we construct an
   accumulator-based flatten function; for the third one, applicative
   order lambda-terms with explicit substitutions and call/cc, we
   construct a call-by-value evaluator.  The results are traditional
   reduction-free normalization functions.
   The overall method builds on previous work on a syntactic
   correspondence between reduction semantics and abstract machines and
   on a functional correspondence between evaluators and abstract
   machines.
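
The second example of the talk, flattening terms of the free monoid, shows the reduction-free endpoint of the derivation (a sketch of the accumulator-based function one arrives at; the reduction-based starting point would iterate a one-step rewrite instead):

```python
# Sketch of the free-monoid example (my own rendering). Terms:
# 'eps', ('lit', x), ('cat', t1, t2). Instead of repeatedly rewriting
# with the associativity/unit laws, a single accumulator-based pass
# produces the flat normal form directly.

def flatten_accumulator(t, acc=None):
    """Reduction-free normalization: right-to-left accumulation."""
    acc = [] if acc is None else acc
    if t == 'eps':
        return acc
    if t[0] == 'lit':
        return [t[1]] + acc
    return flatten_accumulator(t[1], flatten_accumulator(t[2], acc))

t = ('cat', ('cat', ('lit', 1), 'eps'), ('lit', 2))
assert flatten_accumulator(t) == [1, 2]
```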

Fri, 30. May 2008, 14:15

Speaker:    Roland Axelsson

Title: Model Checking Propositional Dynamic Logic with Indexed Grammars

PDL in its original definition by Fischer and Ladner is a relatively
simple modal logic which besides propositional components only consists
of universal and existential quantification over paths defined by
regular languages. Harel et al. have investigated the properties of PDL
when extended to context-free grammars and stated that the borderline of
undecidability for any extension of PDL runs close to the regular
languages, in the sense that only a few simple additions lead to
undecidability. The model checking problem for PDL[CFG], however, is in
PTIME (Lange).  A natural question to ask is for which classes of
languages model checking remains decidable. We give an easy polynomial
reduction of the model checking problem for PDL enriched with a language
to the intersection problem of this language with a regular language.
An interesting class of languages for which this problem is decidable
and which is strictly more expressive than the context-free languages is
the class of indexed languages. We give a model checking algorithm for
PDL with indexed grammars and establish PSPACE completeness (which as a
side product also transfers to the emptiness and word problem for
indexed grammars).
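
The regular-regular base case of the intersection problem targeted by the reduction can be sketched via the product construction (an illustration only; the talk concerns intersection with indexed languages):

```python
# Emptiness of the intersection of two DFAs, via reachability in the
# product automaton. Each DFA is a tuple
# (start, accepting_states, transitions[(state, symbol)] -> state, alphabet).
from collections import deque

def intersection_nonempty(d1, d2):
    (s1, f1, t1, sigma), (s2, f2, t2, _) = d1, d2
    seen, queue = {(s1, s2)}, deque([(s1, s2)])
    while queue:
        q1, q2 = queue.popleft()
        if q1 in f1 and q2 in f2:
            return True          # a jointly accepted word exists
        for a in sigma:
            nxt = (t1[(q1, a)], t2[(q2, a)])
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

# L1 = words ending in 'a';  L2 = words of even length over {a,b}.
d1 = (0, {1}, {(0,'a'):1, (0,'b'):0, (1,'a'):1, (1,'b'):0}, 'ab')
d2 = (0, {0}, {(0,'a'):1, (0,'b'):1, (1,'a'):0, (1,'b'):0}, 'ab')
assert intersection_nonempty(d1, d2)   # e.g. "ba" lies in both
```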

Fri, 23. May 2008, 14:15

Speaker:    Nick Benton
            Microsoft Research, Cambridge

Title:    Embedded interpreters

This is a tutorial on using type-indexed embedding/projection pairs when
writing interpreters in statically-typed functional languages. The
method allows (higher-order) values in the interpreting language to be
embedded in the interpreted language and values from the interpreted
language may be projected back into the interpreting one. This is
particularly useful when adding command-line interfaces or scripting
languages to applications written in functional languages.

We first describe the basic idea and show how it may be extended to
languages with recursive types and applied to elementary
meta-programming. We then show how the method combines with Filinski's
continuation-based monadic reflection operations to define an
`extensional' version of the call-by-value monadic translation and
hence to allow values to be mapped bidirectionally between the levels
of an interpreter for a functional language parameterized by an
arbitrary monad. Finally, we show how SML functions may be embedded
into, and projected from, an interpreter for an asynchronous
\picalc\ via an `extensional' variant of a standard translation
from $\lambda$ into $\pi$.
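
The basic idea of type-indexed embedding/projection pairs can be rendered even in an untyped setting (a hypothetical Python sketch; the talk works with ML-style static types):

```python
# Sketch of embedding/projection pairs. Interpreted values are tagged:
# ('num', n) for numbers, ('fun', f) for functions on interpreted values.
# An EP pair for a type is (embed, project); pairs for function types
# are built from the pairs for their argument and result types.

ep_int = (lambda n: ('num', n),            # embed a host int
          lambda v: v[1])                  # project it back out

def ep_fun(ep_a, ep_b):
    """EP pair for functions from a to b, given EP pairs for a and b."""
    embed_a, project_a = ep_a
    embed_b, project_b = ep_b
    embed = lambda f: ('fun', lambda v: embed_b(f(project_a(v))))
    project = lambda v: lambda x: project_b(v[1](embed_a(x)))
    return embed, project

# Embed a host function into the interpreted language and project it back.
embed, project = ep_fun(ep_int, ep_int)
interpreted_succ = embed(lambda n: n + 1)
assert project(interpreted_succ)(41) == 42
```

Composing `ep_fun` at higher-order types lets higher-order host values cross the boundary in both directions, which is the point of the technique.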

Fri, 16. May 2008, 14:15

Speaker:    Lennart Beringer
            LFE Theoretische Informatik, LMU

Title: Relational proof systems for non-interference and code transformations

Non-interference, determinism, code optimisation, and program
translation are special cases of two-execution properties,
i.e. properties which relate executions of similar (but possibly
different) programs starting from similar (but possibly different)
input states. Aiming towards a uniform approach to certifying such
correlation properties in the sense of PCC, I will present a static
relational proof system for sequential bytecode which unifies
non-interference with local transformation rules. In contrast to
existing type systems for non-interference, no restrictions are
imposed on the control flow structure. Instead of tracking formal
variable dependencies, we approximate the preservation of value
indistinguishability relations, integrating copy propagation and
aspects from separation logic in order to obtain a modular proof
system. Time permitting, I will briefly discuss ongoing work on
extending the approach to higher language levels, and to the
preservation of arbitrary value relations, thus introducing a limited
form of logical reasoning.

Fri, 2. May 2008, 14:15

Speaker:    Ulrich Schöpp
            LFE Theoretische Informatik, LMU

Title: Pointer programs with iteration and
       reachability in undirected graphs

Many algorithms with logarithmic space usage can be described as pure
pointer algorithms. Such algorithms treat their input as a structured
data type (e.g. a graph) and access it only through a constant number
of abstract pointers (for graphs, pointers to its nodes). Abstract
pointers have no internal structure beyond equality and can be
manipulated only through the operations of the input structure. This
talk considers a class of pure pointer programs in which, in addition
to manipulating pointers, the programs may iterate over the entire
input structure. We show that while many typical logarithmic-space
algorithms can be written as such pointer programs, this is not the
case for reachability in undirected graphs. A pointer program for
undirected reachability can be given by means of Reingold's algorithm;
this algorithm, however, still requires counting registers of
logarithmic size.

Fri 25.4.2008, 14:15

Speaker:    Riko Jacob
            Technische Universität München

Title:  Optimal Sparse Matrix Dense Vector Multiplication in the I/O-Model

Abstract: We analyze the problem of sparse-matrix dense-vector multiplication
(\spmv) in the I/O-model. The task of \spmv\ is to compute
$y:=Ax$, where $A$ is a sparse $N\times N$ matrix and $x$ and $y$ are
vectors. Here, sparsity is expressed by the
parameter~$k$ that states that~$A$ has a total of at most~$kN$
nonzeros, i.e., an average number of~$k$ nonzeros per column.
The extreme choices for parameter~$k$ are well studied special cases,
namely $k=1$ (permuting) and $k=N$ (dense matrix-vector multiplication).

In the first part, we study the worst-case complexity of this
computational task, i.e., what is the best possible upper bound on the
number of I/Os depending on $k$ and~$N$ only.
We determine this complexity up to a constant factor for large ranges
of the parameters.
By our arguments, we find that most matrices
with~$kN$ nonzeros require this number of I/Os, even if the program
may depend on the structure of the matrix.
The model of computation for the lower bound is a combination of the
I/O-models of Aggarwal and Vitter, and of Hong and Kung.

We study two variants of the problem, depending on the memory layout of~$A$.
If~$A$ is stored in column major layout, \spmv\ has I/O complexity
$\Theta\!\left(\min\left\{\frac{kN}{B}\left(1+\log_{M/B}\frac{N}{k}\right),\,kN\right\}\right)$
for $k\leq N^{1-\varepsilon}$ and any constant $1>\varepsilon > 0$.
If the algorithm can choose the memory layout, we determine the I/O
complexity of \spmv\ for $k\leq\sqrt[3]{N}$ as well.

In the cache oblivious setting with tall
cache assumption $M\geq B^{1+\varepsilon}$, the
I/O complexity is $\bigO{\frac{kN}{B} \left(1+\log_{M/B}
   \frac{N}{k} \right)}$ for $A$ in column major layout.

This first part is joint work with Michael A. Bender, Gerth
St{\o}lting Brodal, Rolf Fagerberg, and Elias Vicari,
published at SPAA 2007, San Diego.

In the second part, we consider extensions of the above problem to
skew matrices, to the situation where several vectors are to be
multiplied with the same matrix, and the case where the access to the
matrix does not cost any I/O.

Wed 05.03., 15:15

Speaker:    Frédéric Blanqui
            LORIA, Nancy, France

Title:      Building Decision Procedures in the
            Calculus of Inductive Constructions

Joint work with J.-P. Jouannaud and P.-Y. Strub.

It is commonly agreed that the success of future proof assistants will
rely on their ability to incorporate computations within deductions in
order to mimic the mathematician when replacing the proof of a
proposition P by the proof of an equivalent proposition P' obtained
from P thanks to possibly complex calculations.

I will present our investigations of a new version of the calculus of
constructions which incorporates arbitrary decision procedures into
deductions via the conversion rule of the calculus. Besides the
novelty of the problem itself in the context of the calculus of
constructions, a major technical innovation of this work lies in the
fact that the computation mechanism varies along proof-checking: goals
are sent to the decision procedure together with the set of user
hypotheses available from the current context.

Our main result shows that this extension of the calculus of
constructions does not compromise its main properties: confluence,
strong normalization and decidability of proof-checking are all
preserved. We also show in detail how a goal to be proved in the
calculus of constructions is actually transformed into a goal in a
decidable first-order theory. Based on this transformation, we are
currently developing a new version of Coq implementing this calculus,
taking linear arithmetic and the theory of lists as targets combined
via Shostak's algorithm.

Mon 03.03., 15:15

Speaker:    Cody Roux
            LORIA, Nancy, France

Title:      On the relation between size-based termination
            and semantic labelling

Joint work with F. Blanqui.

Size annotation is a field devoted to proving termination by typing of
higher order functional programs or rewrite systems. This is achieved
by adding size information to types. This paper explores the links
between this technique and the technique of semantic labelling
introduced by Hans Zantema for first order rewrite systems.

In particular, we provide a first order version of size based
termination and show that it provides an interesting semantic
labelling. Then, we provide a new presentation of size based
termination for simply typed higher order rewrite systems and prove
its correctness using Hamana's extension to the higher order case of
semantic labelling.

Fri 29.02., 16:15

Speaker:    Colin Riba
            INRIA, Sophia-Antipolis, France

Title:      Unions of Type Interpretations
            for Lambda-Calculus and Rewriting

The most flexible strong normalization proof methods for various
extensions of typed lambda-calculi use type interpretations. Among
them we distinguish Girard's reducibility candidates, Tait's saturated
sets and interpretations based on biorthogonality.  In this talk we
compare these different interpretations with respect to their ability
to handle rewriting and their stability by union.

We first propose a generalization of the notion of neutral term in
Girard's candidates that allows them to be defined in a generic
way. Concerning stability by union, we recall that there are confluent
typed rewrite systems that do not admit type interpretations that are
stable by union. We present a necessary and sufficient condition for
Girard's candidates to be stable by union, and a necessary and
sufficient condition for the closure under union of biorthogonals to be
reducibility candidates. The second condition is strictly more general
than the first one and allows Tait's saturated sets for rewriting to be
defined in a uniform way. Moreover, we show that for orthogonal
constructor rewriting, Girard's candidates are stable by union.

Winter semester 2007/08

Lennart Beringer,  A type and effect system for quantitative authorizations

Uli Schoepp

Felix Brandt, Computational Aspects of Covering in Dominance Graphs

Paul Harrenstein, 
On the Computational Complexity of the Tournament Equilibrium Set

Jan Johannsen

Oliver Friedmann, diploma thesis: An effective proof system for CTL*

Thomas Bleher, Secure Information Flow for Interacting State Machines
Felix Fischer, Incentive Compatible Regression Learning

Jan Hoffmann, diploma thesis

Dulma Rodriguez, diploma thesis

11.01. no Oberseminar

Freiric Barral

Martin Lange

Robert Grabowski

Hermann Gruber

Fri 08.02.2008, 14:15

Speaker:    Hermann Gruber
            LFE Theoretische Informatik, LMU

Title:      Finite Automata, Digraph Connectivity,
            and Regular Expression Size

We study some descriptional complexity aspects of regular expressions.
In particular, we give lower bounds on the minimum required size for the
conversion of deterministic finite automata into regular expressions and
on the required size of regular expressions resulting from applying some
basic language operations on them, namely intersection, shuffle and
complement. Some of the lower bounds obtained are asymptotically tight,
and, notably, the examples we found are over an alphabet of size two.

To this end, we develop a new lower bound technique that is based on the
star height of regular languages. It is known that for a restricted
class of regular languages, the star height can be determined from the
digraph underlying the transition structure of the minimal finite
automaton accepting that language. In this way, star height is tied to
cycle rank, a structural complexity measure for digraphs proposed by
Eggan and Büchi, which measures the degree of connectivity of directed
graphs. This measure, although a robust graph theoretic notion, appears
to have remained widely unnoticed outside formal language theory. We
establish some basic theory for cycle rank and put it into context with
other digraph connectivity measures proposed recently. (joint work with
M. Holzer)
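
Cycle rank itself admits a compact, if exponential-time, recursive definition (a small sketch for tiny digraphs, following Eggan's definition; not intended as an efficient algorithm):

```python
# Cycle rank of a digraph (illustrative sketch, tiny graphs only).
# rank = 0 for acyclic graphs; 1 + min over vertex deletions for a
# strongly connected graph; otherwise the max over nontrivial SCCs.
# Digraph representation: dict node -> set of successor nodes.

def sccs(g):
    # Reflexive-transitive closure by iteration (Tarjan would be the
    # standard choice; this suffices for tiny graphs).
    reach = {u: {u} for u in g}
    changed = True
    while changed:
        changed = False
        for u in g:
            for v in g[u]:
                new = {v} | reach[v]
                if not new <= reach[u]:
                    reach[u] |= new
                    changed = True
    comps, seen = [], set()
    for u in g:
        if u in seen:
            continue
        comp = {v for v in g if u in reach[v] and v in reach[u]}
        seen |= comp
        comps.append(comp)
    return comps

def remove(g, v):
    return {u: g[u] - {v} for u in g if u != v}

def restrict(g, c):
    return {u: g[u] & c for u in c}

def cycle_rank(g):
    comps = [c for c in sccs(g)
             if len(c) > 1 or any(u in g[u] for u in c)]
    if not comps:
        return 0                                   # acyclic
    if len(comps) == 1 and len(comps[0]) == len(g):
        return 1 + min(cycle_rank(remove(g, v)) for v in g)
    return max(cycle_rank(restrict(g, c)) for c in comps)

# A single directed 2-cycle has cycle rank 1.
assert cycle_rank({1: {2}, 2: {1}}) == 1
```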

Fri 01.02.2008, 14:15

Speaker:    Karl Mehltretter
            LFE Theoretische Informatik, LMU

Title:      Mugda – Dependent Types and Termination

Dependent types are used in systems such as Coq and Agda, which can
serve both as proof systems and as functional programming languages.

In Agda, recursive functions are declared by clauses with pattern
matching. It is important that only terminating functions are
admitted. One syntactic criterion is that some argument becomes
structurally smaller in every recursive call; an improvement on this
can be achieved with the size-change principle. With sized types, by
contrast, the type system is responsible for termination and tracks
size information in the types.

In the talk I present the language Mugda, developed in my diploma
thesis and modelled on Agda. Mugda combines sized types with the
size-change principle. In addition, Mugda supports coinductive types
such as infinite lists, and admits only productive definitions for
them.

Fri 01.02.2008, 14:15

Speaker:    Robert Grabowski
            LFE Theoretische Informatik, LMU

Title:      Information flow analysis with dynamic security domains

Language-based information flow analysis is used to statically
examine a program for information flows between objects of different
security domains, and to verify that these flows follow a given policy.
In certain situations, the objects accessed by a program are
not known until runtime, e.g. when a file to be processed is
chosen by the user. To maintain information flow security,
runtime tests are required for determining which flows between
these objects are actually allowed.

We present an imperative language with dynamic object choice and
a conditional for information flow tests. We use a type system
to prove that the flow tests included in a program are sufficient,
such that the program is secure regardless of the choice of objects.

Fri 25.01.2008, 14:15

Speaker:    Martin Lange
            LFE Theoretische Informatik, LMU

Title:      A simple proof of the exponential succinctness gap
            between CTL+ and CTL

CTL+ and CTL are two equi-expressive logics. In fact, CTL is a
fragment of CTL+.  It has been known for quite a while that CTL+ can
be translated into CTL at an O(n)! blow-up. The question whether this
is optimal has been answered, too: Wilke has shown that the
succinctness gap is of size 2^Omega(n) using automata theory. Adler
and Immerman have improved this to an Omega(n)! succinctness gap using
Ehrenfeucht-Fraisse games. Both proofs are combinatorially quite
involved.  Here we show that the exponential lower bound can be proved
in a simple and elementary way using no external machinery.

Fri 18.01.2008, 14:15

Speaker:    Freiric Barral
            LFE Theoretische Informatik, LMU

Title:      Exceptional NbE for Sum Types

We present a normalization by evaluation algorithm to decide equality
of terms in a typed lambda calculus with sum types and extensional
conversions.  While the decision problem for sum types is notoriously
difficult, we have used a conceptually simple mechanism of exceptions
and were able to express the algorithm in a purely functional setting.

Fri 21.12.2007, 14:15

Speaker:    Dulma Rodriguez
            LFE Theoretische Informatik, LMU

Title:      Algorithmic Subtyping for Higher-Order
            Bounded Quantification Revisited

My diploma thesis investigates algorithmic subtyping for the system
Fω<: of higher-order bounded polymorphism. This topic has been
investigated extensively in the past, yielding positive results.  The
reason for revisiting it is the development of new proof
techniques in recent years and the question whether the
completeness of algorithmic subtyping for this system can be proved
with these proof techniques, i.e., syntactically, without construction
of models.  We provide a revision of the algorithm and prove directly,
using lexicographical induction on kinds and derivations, its
soundness, completeness and termination. 

Fri 14.12.2007, 14:15

Speaker:    Jan Hoffmann
            LFE Theoretische Informatik, LMU

Title:      Resolution Proofs and DLL-Algorithms with Clause Learning

My diploma thesis analyzes the connections between resolution proofs
and satisfiability search procedures.  It is well known that DLL
search algorithms that do not use learning are equivalent to tree-like
resolution in terms of proof complexity.  To generalize this result to
DLL algorithms that use learning, two natural generalizations of
regular resolution that are based on resolution trees with lemmas
(RTL) are introduced.  It is shown that dag-like resolution is
equivalent to these resolution refinements when there is no regularity
condition.  On the other hand an exponential separation between the
regular versions (regular weak resolution trees with input lemmas and
regular weak resolution trees with lemmas) and regular dag-like
resolution is given.

It is proved that executions of DLL algorithms that use learning based
on the conflict graph and unit propagation, like most of the current
state of the art SAT-solvers, can be simulated by regular WRTI.
Inspired by this simulation, a new generalization of learning in DLL
algorithms, which is polynomially equivalent to regular WRTL, is
presented.  This algorithm can simulate general resolution without
doing restarts.
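For reference, the baseline in this comparison, a plain DLL procedure with unit propagation but without clause learning (the variant equivalent to tree-like resolution), fits in a few lines; the clause representation as lists of signed integers is my own choice, not the thesis's.

```python
def simplify(clauses, lit):
    """Assign literal `lit` True: drop satisfied clauses, shorten the
    rest; return None if some clause becomes empty (a conflict)."""
    result = []
    for clause in clauses:
        if lit in clause:
            continue                      # clause satisfied
        rest = [l for l in clause if l != -lit]
        if not rest:
            return None                   # conflict
        result.append(rest)
    return result

def dll(clauses):
    """Decide satisfiability of a CNF given as a list of int lists."""
    while True:                           # unit propagation
        units = [c[0] for c in clauses if len(c) == 1]
        if not units:
            break
        clauses = simplify(clauses, units[0])
        if clauses is None:
            return False
    if not clauses:
        return True                       # all clauses satisfied
    lit = clauses[0][0]                   # branch on some literal
    for choice in (lit, -lit):
        reduced = simplify(clauses, choice)
        if reduced is not None and dll(reduced):
            return True
    return False
```

A run of this procedure on an unsatisfiable formula traces out exactly a tree-like resolution refutation; clause learning, as analyzed in the thesis, would additionally record conflict clauses and reuse them across branches.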

Fr 07.12.2007, 14:15 Uhr

Es spricht:    Thomas Bleher
               LFE Theoretische Informatik, LMU

über:          Informationsflussanalyse mit Noninterference 
               und Interacting State Machines

We give a brief overview of the history of information flow analysis
and present the noninterference model, a very popular model in this
field. We extend this model in several respects, among them the
ability to work with non-total automata.

We show by example how noninterference can be combined with the
automaton model of "Interacting State Machines" in order to verify
information-flow properties of automata with the help of a theorem
prover (Isabelle).


Desweiteren:   Felix Fischer
               LFE Theoretische Informatik, LMU

über:          Incentive Compatible Regression Learning

(joint work with Ofer Dekel and Ariel D. Procaccia)

We initiate the study of incentives in machine learning.  We focus on
a game-theoretic regression learning setting where private information
is elicited from multiple agents with different, possibly conflicting,
views on how to label the points of an input space. This conflict
potentially gives rise to untruthfulness on the part of the agents.
In the restricted but important case when every agent cares about a
single point, and under mild assumptions, we show that agents are
motivated to tell the truth. In a more general setting, we study the
power and limitations of mechanisms without payments. We finally
establish that, in the general setting, the VCG mechanism goes a long
way in guaranteeing truthfulness and economic efficiency.

Fr 30.11.2007, 14:15 Uhr

Es spricht:    Oliver Friedmann
               LFE Theoretische Informatik, LMU

über:          An effective proof system for CTL*

We present a Gentzen-style proof system for validity in the full
branching time logic CTL*. We prove the system sound and complete and
show how to obtain counterexamples for non-valid formulas from failing
proof attempts. Finally, a decision procedure that runs in double
exponential time based on our system is outlined.

Fr 23.11.2007, 14:15 Uhr

Es spricht:    Jan Johannsen
               LFE Theoretische Informatik, LMU

über:          Optimal Lower Bounds on Regular Expression Size

A technique for proving lower bounds on the size of regular
expressions is developed based on the notion of communication
complexity of search problems. This technique yields a new lower bound
for a family of finite languages L_n over the two letter alphabet,
which can be accepted by deterministic finite automata of size n. This
lower bound matches the best known algorithm for converting automata
to regular expressions in this case.  

Fr 16.11.2007, 14:15 Uhr

Es spricht:    Paul Harrenstein
               LFE Theoretische Informatik, LMU

über:          On the Hardness of the Tournament Equilibrium Set

A recurring theme in the mathematical social sciences is how to select
the "most desirable" elements given some binary dominance relation on
a set of alternatives. Schwartz's tournament equilibrium set (TEQ)
ranks among the most intriguing of tournament solutions that have been
proposed so far in this context, but also among the most enigmatic
ones. Due to its unwieldy recursive definition, little is known about
TEQ. In particular, its monotonicity remains an open problem to
date. Yet, if TEQ were to satisfy monotonicity, it would be a very
attractive tournament solution concept refining both the Banks set and
Dutta's minimal covering set. We show that the problem of deciding
whether a given alternative is contained in the tournament equilibrium
set is NP-hard.

Fr 09.11.2007, 14:15 Uhr

Es spricht:    Felix Brandt
               LFE Theoretische Informatik, LMU

über:          Computational Aspects of Covering in Dominance Graphs

(joint work with Felix Fischer)

Various problems in AI and multiagent systems can be tackled by
finding the ``most desirable'' elements of a set given some binary
relation.  Examples can be found in areas as diverse as voting theory,
game theory, and argumentation theory.  Some particularly attractive
solution sets are defined in terms of a covering relation---a
transitive subrelation of the original relation.  We consider three
different types of covering (upward, downward, and bidirectional) and
the corresponding solution concepts known as the uncovered set
and the minimal covering set.  We present the first
polynomial-time algorithm for finding the minimal bidirectional
covering set (an acknowledged open problem) and prove that deciding
whether an alternative is in a minimal upward or downward covering set
is NP-hard.  Furthermore, we obtain various set-theoretical
inclusions, which reveal a strong connection between von
Neumann-Morgenstern stable sets and upward covering on the one hand,
and the Banks set and downward covering on the other hand.  In
particular, we show that every stable set is also a minimal upward
covering set.
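For intuition, the covering relation and the uncovered set are easy to compute directly in the special case of tournaments (complete dominance relations); the brute-force sketch below is purely illustrative and is not one of the paper's algorithms, which concern the general and the minimal-set variants.

```python
def uncovered_set(alternatives, beats):
    """Uncovered set of a tournament. `beats` is a set of ordered
    pairs (x, y) meaning 'x beats y'; for each pair of distinct
    alternatives exactly one direction is present."""
    def dominion(x):
        return {y for y in alternatives if (x, y) in beats}

    def covers(y, x):
        # y covers x: y beats x and beats everything that x beats
        return (y, x) in beats and dominion(x) <= dominion(y)

    return {x for x in alternatives
            if not any(covers(y, x) for y in alternatives if y != x)}
```

On a three-cycle nothing is covered, so the uncovered set is everything; on a transitive tournament only the maximal element survives.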

Fr 02.11.2007, 14:15 Uhr

Es spricht:    Ulrich Schoepp
               LFE Theoretische Informatik, LMU

über:          Reine Zeigeralgorithmen und 
               Erreichbarkeit in ungerichteten Graphen

Many algorithms with logarithmic space usage can be described as pure
pointer algorithms. Such algorithms treat their input as a structured
data type (e.g. a graph) and access it only through a constant number
of abstract pointers (for graphs, pointers to its nodes). Abstract
pointers have no internal structure beyond equality. They can be
manipulated with the operations of the input structure, and it is
possible to iterate over the entire input structure with them. Even
though many programs with logarithmic space usage can be written as
such pointer algorithms, this is apparently not the case for all
programs. Reingold's algorithm for reachability in undirected graphs,
for example, additionally uses counter registers of logarithmic size.

This talk addresses the question whether reachability in undirected
graphs is decidable by a pure pointer algorithm. Since this question
currently appears out of reach in general, the talk considers only a
restricted class of pointer algorithms, in which programs exploring
the graph may use, besides its local structure, only a single
iteration over all of its nodes.

Fr 26.10.2007, 14:15 Uhr

Es spricht:    Lennart Beringer
               LFE Theoretische Informatik, LMU

über:          Ein Typ- und Effektsystem für quantitative Authorisierungen

We consider an application scenario from mobile communications in
which every communication with other network participants must be
explicitly authorised by the user. For a corresponding extension of a
programming language with higher-order functions, we present a type
and effect system which guarantees that no unauthorised communication
events occur, independently of the termination behaviour of the
application. In the second part, a transfer of the formalism to
bytecode is introduced, together with an interpretation of typing
judgements in a suitable program logic.

Fr 12.10.2007, 14:15 Uhr

Es spricht:    Roland Axelsson
               LFE Theoretische Informatik, LMU

über:          Model Checking the First Order Fragment 
               of Higher-Order Fixpoint Logic

We present a model checking algorithm for HFL1, the first-order fragment
of Higher-Order Fixpoint Logic. This logic is capable of expressing many
interesting properties which are not regular and, hence, not expressible
in the modal μ-calculus. The algorithm avoids best-case exponential
behaviour by localising the computation of functions and can be
implemented symbolically using BDDs. We show how insight into the
behaviour of this procedure, when run on a fixed formula, can be used to
obtain specialised algorithms for particular problems. This yields, for
example, the competitive antichain algorithm for NFA universality but
also a new algorithm for a string matching problem.
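As a point of reference for the NFA universality problem mentioned above, here is the naive determinisation-based check that the antichain algorithm improves upon; this sketch (my own encoding of the NFA as a transition dictionary) is the baseline, not the competitive algorithm itself.

```python
from collections import deque

def nfa_universal(alphabet, delta, initial, accepting):
    """Naive universality test for an NFA via on-the-fly subset
    construction: L = Sigma* iff every reachable macro-state
    contains an accepting state. `delta` maps (state, letter) to a
    set of successor states."""
    start = frozenset(initial)
    seen, queue = {start}, deque([start])
    while queue:
        macro = queue.popleft()
        if not (macro & accepting):       # some word is rejected
            return False
        for a in alphabet:
            succ = frozenset(q for s in macro
                             for q in delta.get((s, a), ()))
            if succ not in seen:
                seen.add(succ)
                queue.append(succ)
    return True
```

The antichain refinement avoids the exponential blow-up in many cases by discarding any macro-state that is a superset of one already explored, since rejecting states reachable from the superset are witnessed by the smaller set as well.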

Fr 05.10.2007, 14:15 Uhr

Es spricht:    Robert Grabowski
               LFE Theoretische Informatik, LMU

über:          Information Flow Security for Imperative Languages

Information flow analysis is a way to ensure that a computer system
does not release confidential data to the public.  We present a
language-based notion of information flow security known as
non-interference, and show how to prove the property with a type system.
Furthermore, practical challenges arising in flow-secure
software development are outlined.
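A toy instance of such a type system can be stated in a few lines: the sketch below checks only straight-line assignments over a two-point lattice LOW < HIGH (no branches, hence no implicit flows), and its expression encoding is an assumption of mine, not the talk's formal system.

```python
LOW, HIGH = 0, 1

def expr_level(expr, env):
    """Security level of an expression = join (max) of the levels of
    its variables. Expressions are nested tuples:
    ('const', n) | ('var', x) | ('op', e1, e2)."""
    tag = expr[0]
    if tag == 'const':
        return LOW
    if tag == 'var':
        return env[expr[1]]
    return max(expr_level(expr[1], env), expr_level(expr[2], env))

def check_assignment(var, expr, env):
    """The assignment var := expr is typable iff level(expr) <=
    level(var): confidential (HIGH) data never flows into a public
    (LOW) variable."""
    return expr_level(expr, env) <= env[var]
```

Typability of every assignment in a program then implies non-interference for this fragment: changing HIGH inputs cannot change LOW outputs.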

Mo 01.10.2007, 14:15 Uhr

Es spricht:    Dulma Rodriguez
               LFE Theoretische Informatik, LMU

über:          Algorithmic subtyping for higher-order
               bounded quantification

System Fω<: is an extension of Girard's higher-order
polymorphic lambda-calculus by subtyping. We develop the metatheory of
the system, by giving an algorithm for subtyping without the
transitivity rule, and give direct proofs of soundness, completeness,
and termination of the algorithm, without constructing
a model.

Sommersemester 2007

04.05. Martin Hofmann

11.05. Philip Wassenberg (FoPra Felix)       

18.05. Uli Schoepp, Categories with Bindable Names

25.05. Steffen Jost

01.06. Hans-Wolfgang Loidl

08.06. Paul Harrenstein

15.06. Herman Gruber, TBA

22.06. Robert Grabowski, Computational aspects of non-projective 
       dependency grammars (Diplomarbeit)
(ab 15.15 Uhr Festkolloquium Prof. Schwichtenberg im Math.Inst.)

29.06. Lennart Beringer

06.07. Andreas Abel, Nbe MLTT

13.07. Mariela Pavlova,
       A framework for formal verification of  Java bytecode
       programs against functional and security policies

20.07. Habilitation Martin Lange
       13:00 Uhr (Theresienstr. 39)

30.07. Nicolas Rachinsky
       The Complexity of Resolution Refinements and
       Satisfiability Algorithms.

24.08. Roland Axelsson
        Linear Time Logics around PSL: 
        Complexity, Expressiveness, and a little bit of Succinctness

Fr 24.08.2007, 14:15 Uhr

Es spricht:    Roland Axelsson
               LFE Theoretische Informatik, LMU

über:          Linear Time Logics around PSL: Complexity, Expressiveness,
               and a little bit of Succinctness

We consider linear-time temporal logic enriched with semi-extended
regular expressions through various operators that have been proposed in
the literature, in particular in Accellera's Property Specification
Language. We obtain results about the expressive power of fragments of
this logic when restricted to certain operators only: basically, each
operator alone suffices for expressive completeness w.r.t. omega-regular
expressions; only the closure operator is too weak. We also obtain
complexity results. Again, almost every operator alone suffices for
EXPSPACE-completeness; only the closure operator needs some help.

This is a report on Martin Lange's work.

Mo 30. Juli 2007, 14:15 Uhr

Es spricht:    Nicolas Rachinsky
               LFE Theoretische Informatik, LMU

über:          The Complexity of Resolution Refinements and
               Satisfiability Algorithms.

Fr 13. Juli 2007, 14:15 Uhr

Es spricht:    Mariela Pavlova
               LFE Theoretische Informatik, LMU

über:          A framework for formal verification of 
               Java bytecode programs against functional and security policies

Program verification techniques based on programming logics and
verification condition generators provide a powerful means to reason
about programs. Whereas these techniques have very often been employed
in the context of high-level languages in order to benefit from their
structural nature, it is often required, especially in the context of
mobile code, to prove the correctness of compiled programs. Thus it is
highly desirable to have a means of bringing the benefits of source
code verification to code consumers.

We propose a mechanism that makes it possible to transfer evidence from source
programs to compiled programs. It builds upon a specification language
for bytecode, a verification condition generator that operates on
annotated programs, and a compiler that transforms source annotations
into bytecode annotations. We show that the verification condition
generator is sound, and that the proof obligations at source and
bytecode level nearly coincide. We illustrate the benefits of our
framework with two applications. 

Fr 06.07.2007, 13:45 Uhr (!)

Es spricht:    Andreas Abel
               LFE Theoretische Informatik, LMU

über:          Normalization by Evaluation for Martin-Löf Type Theory

The decidability of equality is proved for Martin-Löf type theory with
a universe à la Russell and typed βη-equality judgements. A
corollary of this result is that the constructor for dependent function
types is injective, a property which is crucial for establishing the
correctness of the type-checking algorithm. The decision procedure uses
normalization by evaluation, an algorithm which first interprets terms
in a domain with untyped semantic elements and then extracts normal
forms. The correctness of this algorithm is established using a
PER-model and a logical relation between syntax and semantics.
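The core mechanism of normalization by evaluation can be illustrated on the untyped lambda-calculus, a far simpler setting than Martin-Löf type theory with none of its typing or universe issues: terms are evaluated into host-language functions, and normal forms are read back from the semantics. The de Bruijn encoding below is my own illustrative choice.

```python
# Terms (de Bruijn indices): ('var', i) | ('lam', body) | ('app', f, a)
# Values: host closures ('vlam', f), and neutrals ('vvar', level)
# applied via ('vapp', f, a).

def evaluate(term, env):
    tag = term[0]
    if tag == 'var':
        return env[term[1]]
    if tag == 'lam':
        return ('vlam', lambda v: evaluate(term[1], [v] + env))
    return apply_val(evaluate(term[1], env), evaluate(term[2], env))

def apply_val(f, a):
    if f[0] == 'vlam':
        return f[1](a)            # beta-reduce in the host language
    return ('vapp', f, a)         # stuck application: build a neutral

def reify(value, depth):
    """Read a semantic value back into a normal-form term."""
    tag = value[0]
    if tag == 'vlam':
        fresh = ('vvar', depth)   # fresh variable as a de Bruijn *level*
        return ('lam', reify(value[1](fresh), depth + 1))
    if tag == 'vvar':
        return ('var', depth - 1 - value[1])   # level -> index
    return ('app', reify(value[1], depth), reify(value[2], depth))

def normalize(term):
    return reify(evaluate(term, []), 0)
```

For example, Church-numeral addition of 1 and 1 normalizes to the numeral 2. What the talk's result adds on top of this skeleton is the typed, dependently-sorted setting and the PER-model correctness proof.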

Fr 22.06.2007, 14:00 Uhr (!)

Es spricht:    Robert Grabowski
               LFE Theoretische Informatik, LMU

über:          Computational Aspects of
               Non-Projective Dependency Grammars

Context-free grammars have been widely used for modelling grammars
of natural languages. While the word problem for context-free grammars
is efficiently decidable, it has been shown that the syntactic structures
they induce (ordered trees) are in general too restricted to account for
certain linguistic phenomena, such as flexible word positions within
a sentence. More expressive formalisms have been proposed; however,
they often lack a thorough complexity analysis.

In my diploma thesis, I explored a declarative framework for describing
non-projective tree structures, where the total order of the nodes is,
to a certain degree, independent of the tree relation. The framework is
powerful enough to encode a variety of existing grammar formalisms.
Different factors that contribute to the complexity of the word problem
can be isolated, and efficiency can be improved by borrowing well-known
techniques from context-free parsers.

Fr 15. Juni 2007, 14:15 Uhr

Es spricht:    Hermann Gruber
               LFE Theoretische Informatik, LMU

über:          The Size of Higman-Haines Sets: Recursive and Non-Recursive Trade-offs

A not too well-known result in formal language theory is that the
Higman-Haines sets for *any* language are regular, but it is easily seen
that (finite automata representations of) these sets cannot be
effectively computed in general.
Here the Higman-Haines sets are the languages of all scattered subwords
of a given language (down-sets) and the sets of all words that contain
some word of a given language as a scattered subword (up-sets).
The borderline of effectiveness and the sizes of these sets, when
represented as finite automata, are investigated for given devices
representing well-known language classes, e.g. context-free grammars
and Turing machines.
In case of effective constructibility, upper and lower bounds on the
finite automaton size in terms of the size of the given grammar are derived.

Joint work with Markus Holzer (TU Muenchen, Germany) and Martin Kutrib
(Uni Giessen, Germany).
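For a *finite* language both Higman-Haines sets are trivially computable by brute force, which makes the definitions concrete; the interesting results above concern languages given by grammars or machines, where such enumeration is impossible. A small sketch:

```python
from itertools import combinations

def down_set(language):
    """Higman-Haines down-set of a finite language: all scattered
    subwords of its words."""
    return {''.join(sub)
            for w in language
            for k in range(len(w) + 1)
            for sub in combinations(w, k)}

def is_scattered_subword(v, w):
    """Does v occur in w as a scattered subword (subsequence)?"""
    it = iter(w)
    return all(c in it for c in v)   # 'in' consumes the iterator

def up_set(language, universe):
    """Higman-Haines up-set of a finite language, restricted to a
    finite universe of candidate words (the true up-set is infinite)."""
    return {w for w in universe
            if any(is_scattered_subword(v, w) for v in language)}
```

By Higman's lemma both sets are regular for *any* language; the non-effectiveness results say that, in general, no algorithm can produce the corresponding finite automata.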

Fr 08. Juni 2007, 14:15 Uhr

Es spricht:    Paul Harrenstein
               LFE Theoretische Informatik, LMU

über:          Dominance in Social Choice and Coalitional Game Theory

We consider dominance relations for social choice as based on the 
pairwise majority rule on the one hand and cooperative games with 
non-transferable utility (coalitional NTU games) on the other. As these 
dominance relations may fail to be transitive and even contain cycles, 
the notion of maximality becomes untenable as an analytical tool. Both 
in social choice theory and cooperative game theory, a number of 
concepts have been proposed to take over the role of maximality in the 
absence of transitivity. In 1953 McGarvey showed that any irreflexive 
and anti-symmetric relation can be obtained by the majority rule. In 
this paper, we address the analogous issue for finite NTU games. We find 
that *any* irreflexive relation over a finite set can be obtained as the 
dominance relation of some ordinary, monotonic, and simple NTU 
coalitional game. We also show that any dominance relation can be 
induced by a *non-cooperative* game via beta-effectivity. Furthermore, 
we obtain a partial result for the case in which alpha-effectivity is 
used and consider the formal interrelationships between Smith sets, 
Schwartz sets, stable sets and the core in finite NTU games.
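McGarvey's construction is concrete enough to code up: for each dominance pair two voters are added whose rankings agree only on that pair, so all other majority margins cancel. The encoding of rankings as lists below is my own illustrative choice.

```python
def mcgarvey_profile(alternatives, relation):
    """Build a preference profile (list of rankings, best first)
    whose strict pairwise majority relation is exactly `relation`,
    a set of ordered pairs (a, b) meaning 'a beats b'."""
    alternatives = sorted(alternatives)
    profile = []
    for a, b in relation:
        rest = [x for x in alternatives if x not in (a, b)]
        profile.append([a, b] + rest)        # voter 1: a > b > rest
        profile.append(rest[::-1] + [a, b])  # voter 2: reversed rest > a > b
    return profile

def majority_relation(alternatives, profile):
    """Recover the strict pairwise majority relation of a profile."""
    def margin(a, b):
        return sum(+1 if r.index(a) < r.index(b) else -1
                   for r in profile)
    return {(a, b) for a in alternatives for b in alternatives
            if a != b and margin(a, b) > 0}
```

Each pair (a, b) in the relation gets a net margin of +2 from its own two voters and 0 from all others, so the construction realizes the target relation exactly (pairs outside the relation end in a majority tie).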

Fr 25. Mai 2007, 14:15 Uhr

Es spricht:    Steffen Jost
               University of St. Andrews, Scotland

über:          Amortised Resource Analysis for HUME

  Hume (www.hume-lang.org) is a strongly typed, functional programming
language aimed at software design for embedded systems. Embedded systems
must often meet strong correctness and performance requirements, hence we
have applied the static amortised analysis approach originally presented
by Hofmann and Jost to Hume, within the framework of the EmBounded
research project (www.embounded.org).
 I will present an overview of the amortised program analysis
approach, which yields upper bounds on time, stack and heap space usage
in terms of linear expressions in the size of a program's input, and
discuss the insights gained by extending our previous approach to Hume.

Fr 18. Mai 2007, 14:15 Uhr

Es spricht:    Ulrich Schöpp
               LFE Theoretische Informatik, LMU

über:          Categories with Binding Structure

Categories with binding structure are a simple categorical
axiomatisation of name binding. The definition of binding structure can
be seen as a direct formalisation of the statement: `Working with an
alpha-equivalence class is the same as working with a freshly named
instance.' This statement, which expresses that two modes of working
are equivalent, is being formalised directly as an equivalence of
categories. Categories with binding structure have rich structure that
includes and generalises well-known semantic constructions for name
binding. Among the instances of categories with binding structure are
the category of nominal sets and the category of species of structures.
In the case of nominal sets, binding structure contains a direct
universal characterisation of well-known constructions, such as the
new-quantifier, abstraction sets and the unique choice of new names. In
the case of species of structures, binding structure can be used to
model the nabla-quantifier of Miller and Tiu. The purpose of this talk
is to introduce categories with binding structure, to explain their
structure and how they relate to existing models of name binding.

Fr 11. Mai 2007, 14:15 Uhr

Es spricht:    Philip Wassenberg (FoPra Brandt)
               LFE Theoretische Informatik, LMU

über:          Implementierung eines kryptographischen Vergleichsprotokolls

The "Millionaire's Problem" asks how two millionaires can find out
which of them is richer without revealing their respective fortunes.
One possible solution works, in short, by expressing [a < b] as a
Boolean formula over Elgamal-encrypted bit representations of a and b.
For the student project, this solution has been implemented in Java.
The talk will cover the Elgamal cryptosystem, the evaluation of
Elgamal-encrypted Boolean values, and the Millionaire's protocol that
solves the Millionaire's problem.
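For illustration, here is textbook Elgamal over a toy group, together with the multiplicative homomorphism that makes evaluation of encrypted values possible in the first place; this sketch uses deliberately insecure parameters and is not the project's Java implementation.

```python
import random

# Toy parameters (insecure sizes!): p prime, g a primitive root mod p.
P, G = 23, 5

def keygen(rng):
    x = rng.randrange(1, P - 1)               # secret key
    return x, pow(G, x, P)                    # (secret x, public h = g^x)

def encrypt(h, m, rng):
    k = rng.randrange(1, P - 1)               # fresh ephemeral randomness
    return pow(G, k, P), (m * pow(h, k, P)) % P

def decrypt(x, ciphertext):
    c1, c2 = ciphertext
    # m = c2 / c1^x; the inverse of c1^x is c1^(p-1-x) by Fermat
    return (c2 * pow(c1, P - 1 - x, P)) % P

def multiply(ct1, ct2):
    """Homomorphism: Enc(m1) * Enc(m2) decrypts to m1 * m2 mod p."""
    return (ct1[0] * ct2[0]) % P, (ct1[1] * ct2[1]) % P
```

The comparison protocol builds on exactly this homomorphic property to evaluate the Boolean formula for [a < b] under encryption.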

Wintersemester 2006/07

Fr 2. März 2007, 14:15 Uhr

Es spricht:    Marie-Fleur Revel
	       LMU München

Über:          Übersetzung von PSL in den linearen mu-Kalkül

Abstract:      Both the Property Specification Language (PSL) and the
               linear-time mu-calculus (muTL) describe omega-regular
               languages. While muTL contains fixpoint operators, PSL
               works with a combination of regular expressions and
               temporal-logic operators. This diploma thesis presents
               a translation from PSL into muTL, motivated by the goal
               of providing an intuitive, easily understandable input
               language for the muSabre project. The focus lies on
               translating the regular expressions embedded in PSL
               into muTL formulas interpreted over infinite words. The
               talk presents the main results of the thesis and
               explains them with examples.

Fr 16. Feb 2007, 14:15 Uhr

Es spricht:    Ulrich Schoepp
               LMU München

Ueber:         Stratified Bounded Affine Logic for Logarithmic Space

Abstract:      A number of complexity classes, most notably PTIME, have
               been characterised by sub-systems of linear logic. In
               this talk I will show that the functions computable in
               logarithmic space can also be characterised by a
               restricted version of linear logic. I will introduce
               Stratified Bounded Affine Logic (SBAL), a restricted
               version of Bounded Linear Logic, in which not only the
               modality `!' but also the universal quantifier is
               bounded by a resource polynomial. Proofs of certain
               sequents in SBAL represent the functions computable in
               logarithmic space. The translation of SBAL-proofs to
               LOGSPACE-functions is based on modelling computation by
               interaction dialogues in a version of the Geometry of
               Interaction. The purpose of this talk is to introduce
               SBAL, give examples and to outline how SBAL-proofs can
               be compiled to space-efficient programs.

Fr 9. Feb 2007, 14:15 Uhr

Es spricht:    Paul Harrenstein
               LMU München

Ueber:         Commitment and Extortion

Abstract:      Making commitments, e.g., through promises and threats,
               enables a player to exploit the strengths of his own
               strategic position as well as the weaknesses of that of
               his opponents.  Which commitments a player can make
               with credibility depends on the circumstances. In some,
               a player can only commit to the performance of an
               action, in others, he can commit himself conditionally
               on the actions of the other players. Some situations
               even allow for commitments on commitments or for
               commitments to randomized actions. We explore the
               formal properties of these types of (conditional)
               commitment and their interrelationships. So as to
               preclude inconsistencies among conditional commitments,
               we assume there is some order in which the players make
               their commitments. Central to our analyses is the
               notion of an extortion, which we define, for a given
               order of the players, as a profile that contains, for
               each player, an optimal commitment given the
               commitments of the players that committed earlier. On
               this basis, we investigate for different commitment
               types whether it is advantageous to commit earlier
               rather than later, and how the outcomes obtained
               through extortions relate to backward induction and
               Pareto efficiency.

Fr 2. Feb 2007, 14:00-14:45 Uhr

Es spricht:    Helmut Schwichtenberg
               LMU München

Ueber:         Goedels Dialectica Interpretation

Abstract:      We give a presentation of Goedels Dialectica
               Interpretation based on natural deduction with
               assumption variables.  In this setting the proof of the
               soundness theorem is particularly perspicuous.  We then
               discuss the usability of this approach for machine
               extraction of computational content from classical
               existence proofs.

Fr 26. Jan 2007, 14:15 Uhr

Es spricht:    Hermann Gruber
	       LMU Muenchen

Ueber:         Finding Lower Bounds on Nondeterministic State Complexity is Hard
               (joint work with Markus Holzer)

Abstract:      We investigate the following lower bound methods for
               regular languages: The fooling set technique, the
               extended fooling set technique, and the biclique edge
               cover technique. It is shown that the maximal
               attainable lower bound for each of the above mentioned
               techniques can be algorithmically deduced from a
               canonical finite graph, the so called dependency graph
               of a regular language. This graph is very helpful when
               comparing the techniques with each other and with
               nondeterministic state complexity. In most cases it is
               shown that for any two techniques the gap between the
               best bounds can be arbitrarily large. The only
               exception is the biclique edge cover technique which is
               always as good as the logarithm of the deterministic or
               nondeterministic state complexity.  Moreover, we show
               that deciding whether a certain lower bound w.r.t. one
               of the investigated techniques can be achieved is in
               most cases computationally hard, i.e., PSPACE-complete,
               and hence as hard as minimizing nondeterministic
               finite automata.

Fr 19. Jan 2007, 14:15 Uhr

Es spricht:    Lennart Beringer
	       LMU Muenchen

Ueber:         Secure information flow in VDM
               (Joint work with Martin Hofmann)

Abstract:      Suppose you receive some program from an untrusted
               source. Then, a crucial property that you want the
               program to satisfy is that it does not communicate some
               private data of yours to other participants.  In
               particular, the behaviour of the program as seen by an
               external observer should remain invariant under any
               modification applied to the private data.  This
               property is called "information flow security", and
               various techniques have been proposed for stating and
               proving that a system or program enjoys the
               property. In the talk, I will report on work carried
               out in connection with the projects Mobius and Infozert
               that aims to generate automatically checkable proofs of
               information flow security.  In particular, I will
               demonstrate that an ordinary program logic may be used
               to formulate when a system is secure, and how type
               systems that have been proposed in the literature for
               statically approximating information flow security may
               be used to automatically generate such proofs.

Fr 22. Dec 2006, 14:15 Uhr

Es spricht:    Max Meier
	       LMU Muenchen, Fopra

Ueber:         Computing Nash equilibria of normal-form 
               games using support enumeration

Abstract:      Nash equilibria belong to the most important solution concepts
               in game theory. A Nash equilibrium is a stable state of
               player strategies in which no player has an incentive
               to change his strategy, because no single player can
               obtain a higher payoff by changing only his own
               actions. This concept is named
               after John F. Nash, who showed that every finite game has
               at least one equilibrium.  The goal of this project
               thesis was to develop a framework for representing games
               in normal form and to compute up to one Nash equilibrium
               for each support profile.
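The idea of support enumeration can be sketched for the smallest interesting case; the toy function below handles only 2x2 bimatrix games, checking the four pure support pairs by best response and the full mixed support via the indifference conditions (larger supports require solving general linear systems). The representation is my own, not the project's framework.

```python
def nash_2x2(A, B):
    """Support enumeration for a 2x2 bimatrix game with row payoffs A
    and column payoffs B. Returns equilibria as ((p, 1-p), (q, 1-q)),
    the probabilities of the first row/column action."""
    eqs = []
    # Supports of size 1: pure-strategy best-response check.
    for i in range(2):
        for j in range(2):
            if A[i][j] >= A[1 - i][j] and B[i][j] >= B[i][1 - j]:
                p, q = 1.0 - i, 1.0 - j
                eqs.append(((p, 1 - p), (q, 1 - q)))
    # Full 2x2 support: each player mixes so the other is indifferent.
    da = A[0][0] - A[0][1] - A[1][0] + A[1][1]
    db = B[0][0] - B[0][1] - B[1][0] + B[1][1]
    if da != 0 and db != 0:
        q = (A[1][1] - A[0][1]) / da   # column mix: row is indifferent
        p = (B[1][1] - B[1][0]) / db   # row mix: column is indifferent
        if 0 < p < 1 and 0 < q < 1:
            eqs.append(((p, 1 - p), (q, 1 - q)))
    return eqs
```

On Matching Pennies this finds the unique mixed equilibrium (both players mix 50/50); on the Prisoner's Dilemma it finds only the pure mutual-defection equilibrium.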

Es spricht:    Oliver Friedmann
               LMU Muenchen, Fopra bei Martin Lange
Ueber:         TBA
Abstract:      TBA

Fr 8. Dec 2006, 14:15 Uhr

Es spricht:    Felix Fischer
	       LMU München

über:          A Game-Theoretic Analysis of 
               Strictly Competitive Multiagent Scenarios

Abstract:      We conduct a comparative study of game-theoretic solution
               concepts in strictly competitive multi-agent scenarios,
               as commonly encountered in the context of parlor games,
               competitive economic situations, and some social choice
               settings. We model these scenarios as ranking games in
               which every outcome is a ranking of the players, with
               higher ranks being preferred over lower ones. Rather than
               confining our attention to one particular solution
               concept, we give matching upper and lower bounds for
               various comparative ratios of solution concepts within
               ranking games. The solution concepts we consider in this
               context are security level strategies (maximin), Nash
               equilibrium, and correlated equilibrium. Additionally, we
               also examine quasi-strict equilibrium, an equilibrium
               refinement proposed by Harsanyi, which remedies some
               apparent shortcomings of Nash equilibrium when applied to
               ranking games. In particular, we compute the price of
               cautiousness, i.e. the worst-possible loss an agent may
               incur by playing maximin instead of the worst
               (quasi-strict) Nash equilibrium, the mediation value,
               i.e. the ratio between the social welfare obtained in the
               best correlated equilibrium and the best Nash
               equilibrium, and the enforcement value, i.e. the ratio
               between the highest obtainable social welfare and that of
               the best correlated equilibrium.

Fr 8. Dec 2006, 14:15 Uhr

Es sprechen:   Martin Hofmann
               LMU München

               Andreas Hoesl
               Christoph Settgast
               Jugend Forscht

Abstract:      The two school graduates (now computer science
               students) Andreas Hoesl and Christoph Settgast spent a
               one-week internship with us in the context of Jugend
               Forscht and will present their results: three
               applications of SAT solvers, namely Sudoku,
               timetabling, and protein folding.
               In the second half, Martin Hofmann will report on his
               research project with Nick Benton and Andrew Kennedy
               (Cambridge) and Lennart Beringer. It concerns
               correctness proofs for effect-based program
               equivalences in the presence of dynamic allocation.
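
               The "constraints as SAT" idea behind those applications
               can be sketched in miniature: encode a 2x2 Latin square
               (a mini Sudoku) in CNF and solve it. This is a
               hypothetical toy encoding using brute-force search in
               place of a real SAT solver.

```python
from itertools import product

# Toy SAT encoding: a 2x2 Latin square. Variable var(r, c, k) means
# "cell (r, c) holds value k" for k in {0, 1}; clauses are lists of
# signed 1-based variable numbers, as in DIMACS CNF.

def var(r, c, k):
    return 4 * r + 2 * c + k + 1

clauses = []
for r in range(2):
    for c in range(2):
        clauses.append([var(r, c, 0), var(r, c, 1)])    # some value
        clauses.append([-var(r, c, 0), -var(r, c, 1)])  # not both
for k in range(2):
    for r in range(2):
        clauses.append([-var(r, 0, k), -var(r, 1, k)])  # row distinct
    for c in range(2):
        clauses.append([-var(0, c, k), -var(1, c, k)])  # column distinct

def solve(num_vars, clauses):
    """Naive SAT search: try every assignment (fine for 8 variables)."""
    for bits in product([False, True], repeat=num_vars):
        if all(any((lit > 0) == bits[abs(lit) - 1] for lit in clause)
               for clause in clauses):
            return bits
    return None

model = solve(8, clauses)
print(model is not None)  # True: 2x2 Latin squares exist
```

A real SAT solver replaces the exhaustive loop; the encoding step is
the same in spirit for Sudoku or timetabling, just with far more
variables.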

Fr 1. Dec 2006, 14:15 Uhr

Es spricht:    Paul Harrenstein
               LMU München

über:          The Computational Complexity of Choice Sets

Abstract:      Social choice rules are often evaluated and compared by
               inquiring whether they fulfill certain desirable
               criteria such as the Condorcet criterion, which states
               that an alternative should always be chosen when more
               than half of the voters prefer it over any other
               alternative.  Many of these criteria can be formulated
               in terms of choice sets that single out reasonable
               alternatives based on the preferences of the voters.
               In this paper, we consider choice sets whose definition
               merely relies on the pairwise majority relation.  These
               sets include the Copeland set, the Smith set, the
               Schwartz set, and Von Neumann-Morgenstern stable sets
               (a concept originally introduced in the context of
               cooperative game theory).  We investigate the
               relationships between these sets and completely
               characterize their computational complexity.  This
               allows us to obtain hardness results for entire classes
               of social choice functions.
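
               The choice sets above are all defined from the pairwise
               majority relation. As a small illustration, the sketch
               below computes the Copeland set (the alternatives with
               the maximal number of pairwise majority wins) for a
               made-up tournament; the example relation is not taken
               from the paper.

```python
# Copeland set of a tournament: alternatives whose number of pairwise
# majority wins is maximal. The majority relation below is a
# hypothetical 4-alternative example.

# beats[x] = set of alternatives that x beats in a pairwise majority vote
beats = {
    'a': {'b', 'c'},
    'b': {'c'},
    'c': {'d'},
    'd': {'a', 'b'},
}

def copeland_set(beats):
    scores = {x: len(wins) for x, wins in beats.items()}
    best = max(scores.values())
    return {x for x, s in scores.items() if s == best}

print(copeland_set(beats))  # {'a', 'd'}: both win two pairwise contests
```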

Fr 24. Nov 2006, 13:15 Uhr

Es spricht:    Felix Brandt
               LMU München

über:          On Strictly Competitive Multi-Player Games

Abstract:      We embark on an initial study of a new class of strategic
               (normal-form) games, so-called ranking games, in which
               the payoff to each agent solely depends on his position
               in a ranking of the agents induced by their actions. This
               definition is motivated by the observation that in many
               strategic situations such as parlor games, competitive
               economic scenarios, and some social choice settings,
               players are merely interested in performing optimally
               relative to their opponents rather than in
               absolute measures.  A simple but important subclass of
               ranking games are single-winner games where in any
               outcome one agent wins and all others lose. We
               investigate the computational complexity of a variety of
               common game-theoretic solution concepts in ranking games
               and deliver hardness results for iterated weak dominance
               and mixed Nash equilibria when there are more than two
               players and pure Nash equilibria when the number of
               players is unbounded. This dashes hope that multi-player
               ranking games can be solved efficiently, despite the
               structural restrictions of these games.

Fr 17. Nov 2006, 14:15 Uhr

Es spricht:    Andreas Abel

über:          Normalization by Evaluation for 
               Martin-Löf Type Theory with One Universe

Abstract:      We present an algorithm for computing normal terms and
               types in Martin-Löf type theory with one universe and
               eta-conversion. We prove that two terms or types are
               equal in the theory iff the normal forms are identical
               (up to alpha-congruence). It thus follows that our
               algorithm can be used for deciding equality in
               Martin-Löf type theory. The algorithm uses the
               technique of normalization by evaluation; normal forms
               are computed by first evaluating terms and types in a
               suitable model. The normal forms are then extracted
               from the semantic elements.  We prove its completeness
               by a PER model and its soundness by a Kripke logical
               relation.
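
               The evaluate-then-extract technique can be shown in
               miniature for the untyped lambda calculus, a much
               simpler setting than the type theory of the talk. The
               term and value representations below are illustrative
               choices, not the paper's.

```python
# Minimal normalization by evaluation for the untyped lambda calculus.
# Terms use de Bruijn indices: ('var', i), ('lam', body), ('app', f, a).
# Semantic values are closures ('clo', env, body) or neutral terms
# ('ne', level, args).

def evaluate(term, env):
    tag = term[0]
    if tag == 'var':
        return env[term[1]]
    if tag == 'lam':
        return ('clo', env, term[1])          # closure over the environment
    f = evaluate(term[1], env)
    return apply(f, evaluate(term[2], env))

def apply(f, arg):
    if f[0] == 'clo':
        _, env, body = f
        return evaluate(body, (arg,) + env)   # index 0 = newest binding
    _, lvl, args = f                          # stuck on a free variable
    return ('ne', lvl, args + (arg,))

def reify(value, depth):
    """Extract a normal form from a semantic value."""
    if value[0] == 'clo':
        fresh = ('ne', depth, ())             # fresh variable, de Bruijn level
        return ('lam', reify(apply(value, fresh), depth + 1))
    _, lvl, args = value
    head = ('var', depth - 1 - lvl)           # convert level back to index
    for a in args:
        head = ('app', head, reify(a, depth))
    return head

def normalize(term):
    return reify(evaluate(term, ()), 0)

identity = ('lam', ('var', 0))
const = ('lam', ('lam', ('var', 1)))
print(normalize(('app', const, identity)))  # ('lam', ('lam', ('var', 0)))
```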

Do 17. Aug 2006, 14:15 Uhr

Es spricht:    Jan Remy
               ETH Zürich

über:          Approximation Schemes for Geometric Optimization Problems

Abstract:      This talk is devoted to the approximability of NP-hard
               geometric optimization problems. Classical geometric
               optimization problems are to find the shortest tour
               through n points or to cover a point set with as few
               unit disks as possible. In fixed dimension we know
               polynomial time approximation schemes for most such
               problems. However, there are still several important
               and natural problems for which we neither have a PTAS
               nor a proof of APX-hardness. In this talk, we survey
               classical and recent results in this field and sketch
               the algorithms for the geometric TSP and the minimum
               weight triangulation problem.

               Joint work with A. Steger

Fr 28. Jul 2006, 13:15 Uhr

Es spricht:    Ulrich Schoepp
               LMU München

über:          Cyclic Proofs in Sequent Calculus

Abstract:      I will talk about a cyclic sequent calculus for
               (co)inductive reasoning in the modal mu-calculus.  A
               well-known issue in reasoning with induction is how to
               find induction invariants.  Many proof systems with
               induction are based on rules that require the user to
               directly express induction invariants.  The discovery
               of induction invariants is well-known to be problematic
               for proof-search, whether manually or automatically.
               Cyclic proof systems are an approach to aiding the
               discovery of induction invariants. In this approach,
               certain circular proofs are allowed. Circular proofs
               are identified as valid proofs by a global
               combinatorial condition on proofs.  This approach of
               allowing circular proofs has its origin in tableaux
               systems and games for model-checking fixed-points.

               In this talk I will introduce a sequent calculus for
               the modal mu-calculus, explain how cyclic reasoning in
               it works and give an overview of soundness and
               completeness results.  

Fr 21. Jul 2006, 14:15 Uhr

Es spricht:    Hans-Wolfgang Loidl
               LMU München

über:          Resource Modelling and Prediction 
               in the Embedded Systems Language Hume               

Abstract:      Embedded systems represent a particularly challenging
               platform to develop efficient programs on: they are
               extremely resource constrained and the applications
               often have strong real-time requirements. To enforce
               bounded resource consumption of such applications, most
               modern embedded systems languages have very limited
               power, complicating the process of application
               development.

               In this talk I will report on work aiming to identify,
               to quantify and to certify resource bounded code in a
               domain-specific, high-level language Hume. I will
               present the main language concepts behind Hume, discuss
               the cost models we have developed for Hume and its
               underlying abstract machine, and I will give an outlook
               on the static analyses currently being developed for
               inferring bounds on time and space
               consumption. Finally, I will talk about the
               applications we are targeting in our work, revolving
               around computer vision algorithms to be applied in
               software for controlling autonomous vehicles.  This
               work is part of the EmBounded project.

Fr 7. Jul 2006, 14:15 Uhr

Es spricht:    Francisco José Cháves
               ENS de Lyon

über:          A library of Taylor models for PVS

Abstract:      It is now beyond doubt that programs and libraries contain
               bugs, no matter how precisely they have been specified
               and how thoroughly they have been tested.  Interval
               arithmetic has been used to certify bounds on
               arithmetic expressions, but it suffers from
               decorrelation of the variables, also known as the
               dependency problem: X*X is computed as { x*y | x in X
               and y in X }, and the correlation between x and y is
               lost. Taylor models have recently emerged as a nice
               and convenient way to reduce decorrelation in interval
               arithmetic.
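
               The dependency problem described above can be seen in a
               few lines; here is a sketch with a hypothetical interval
               type (pairs of floats, not the PVS library from the
               talk).

```python
# Naive interval arithmetic loses the correlation between occurrences
# of the same variable: with X = [-1, 1], evaluating X*X as a product
# of two independent intervals gives [-1, 1], while the true range of
# x*x for x in [-1, 1] is [0, 1].

def interval_mul(a, b):
    """Product of two intervals, each a (lo, hi) pair."""
    products = [a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1]]
    return (min(products), max(products))

X = (-1.0, 1.0)
print(interval_mul(X, X))            # (-1.0, 1.0): decorrelation overestimates
print((0.0, max(X[0]**2, X[1]**2)))  # (0.0, 1.0): the exact range of x*x
```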

               We present a library of Taylor models in PVS. Working
               with an automatic proof checker, we had to manage two
               tasks. The first task was to create a data type and
               operations on this new type to allow users to define and
               evaluate expressions using Taylor models. The second task
               was to provide proofs that each operator is correct and a
               strategy to recursively analyze compound expressions. The
               library that we have developed can be used to derive
               quickly more or less accurate bounds.

Fr 30. Jun 2006, 14:15 Uhr

Es spricht:    Andreas Abel
               LMU München

über:          Two applications of Sized Types

Abstract:      Sized types are recursive datatypes which carry static
               (compile-time) information about the maximal size of
               the data structures they contain.  A type-system with
               sized types can certify liveness properties like
               termination of recursive functions and productivity of
               infinite data structures.  In this talk, I will present
               F^_w ("f-omega-hat") which integrates sized types of
               higher kind and polymorphism, and show two larger
               applications:

               1. A structurally recursive normalizer for simply-typed
                  lambda-terms: This normalizer can be implemented in
                  F^_w such that it is guaranteed to terminate on all
                  lambda-terms and return a normal form for the
                  well-typed ones.

               2. Generic programming with sized types: Generic
                  programs are usually structurally recursive in some
                  sense.  We try to make this observation formal and
                  demonstrate that with sized types we capture even
                  generic programs whose termination cannot be shown
                  by criteria disregarding their type.

Fr 23. Jun 2006, 14:15 Uhr

Es spricht:    Paul Harrenstein
               LMU München

über:          Effectivity and Noncooperative Solution Concepts

Abstract:      Game theory analyzes the strategic aspects of situations of
               social interaction by representing them as mathematical
               structures. Any such structure should at least account
               for the participants' powers and their preferences. The
               way powers and preferences are modelled makes a
               difference as to which strategic concepts are available
               for mathematical scrutiny. In strategic games the
               players' powers are modelled as possible courses of
               action. This provides sufficient structure to analyze
               interactive situations by means of the noncooperative
               solution concepts, in particular Nash equilibrium.

               There are, however, also different ways of representing
               the players' powers. One of them is by means of
               effectivity sets, i.e., sets of outcomes in which a
               player can guarantee a game to end by choosing a
               particular course of action. One could favor this way
               of representing interactive situations, as indeed has
               been fashionable in recent formalisms, such as ATL,
               ATEL and Coalition Logic.

               We are concerned with the extent to which the notions
               available in strategic games are still applicable in
               case the players' powers are represented in terms of
               the players' effectivity. We address this issue in
               particular for Nash equilibrium and a related solution
               concept, which we call outcome equilibrium.

Fr 16. Jun 2006, 14:15 Uhr

Es spricht:    Martin Hofmann
               LMU München

über:          Termination of functional programs, Büchi automata,
               and the linear-time mu-calculus

Abstract:      Jones et al. have proposed a decision procedure for
               termination of functional programs that is based on
               Büchi automata. Finding the usual algorithms for
               Büchi automata too inefficient, they devise an
               alternative algorithm for the same problem that works
               without Büchi automata and instead computes a certain
               closure. We (Lange, Dax, MH) have now observed that
               this algorithm can also be applied to other questions
               that are usually solved with Büchi automata, in
               particular the correctness of infinite proofs in the
               linear-time mu-calculus. I will first present the
               method of Jones et al., then our application, which
               also extends Jones' method considerably, and finally
               describe an open question about termination that I am
               currently working on.

Fr 2. Jun 2006, 14:15 Uhr

Es spricht:    Felix Fischer
               LMU München

über:          Symmetries and the complexity of pure Nash equilibrium

Abstract:      Strategic games may exhibit symmetry in a variety of ways. A common aspect,
               enabling the succinct representation of games when the number of players is
               growing, is that players cannot or need not distinguish between the other
               players. In this talk, we investigate four classes of symmetric games obtained
               by considering two additional properties: identical payoff functions for all
               players and the ability to distinguish oneself from the other players. We show
               that pure Nash equilibria can be found efficiently in all four classes if only
               a constant number of actions is available to each player, a problem that is
               intractable for other succinct representations of multi-player games. We
               further show that identical payoff functions simplify the search for
               equilibria, while a growing number of actions renders it intractable.
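
               As a toy illustration of the search problem, the sketch
               below brute-forces pure Nash equilibria in a small
               symmetric game with identical payoff functions; the
               2-player, 2-action coordination payoffs are a made-up
               example, and the efficient algorithms from the talk are
               of course cleverer than exhaustive search.

```python
from itertools import product

# Brute-force search for pure Nash equilibria in a 2-player symmetric
# game with identical payoff functions: each payoff depends only on
# the player's own action and the opponent's action.

ACTIONS = (0, 1)

def payoff(own, other):
    """Both players use the same payoff function (identical payoffs)."""
    return 1 if own == other else 0   # coordinating pays 1

def pure_nash_equilibria():
    equilibria = []
    for a, b in product(ACTIONS, repeat=2):
        # No player may gain by unilaterally deviating.
        stable_a = all(payoff(a, b) >= payoff(d, b) for d in ACTIONS)
        stable_b = all(payoff(b, a) >= payoff(d, a) for d in ACTIONS)
        if stable_a and stable_b:
            equilibria.append((a, b))
    return equilibria

print(pure_nash_equilibria())  # [(0, 0), (1, 1)]
```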

Fr 19. Mai 2006, 14:15 Uhr

Es spricht:   Jan Hoffmann
              LMU München

über:         Weak Parity Games and Language Containment 
              of Weak Alternating Parity Automata

Abstract:     Optimisations of the state space of weak alternating parity
              automata (WAPA) are needed for example to speed up mu-TL
              model-checking algorithms which involve WAPA. It is
              assumed that deciding language containment for WAPA is
              helpful to perform such optimisations. In my FoPra, the
              problem of language containment is reduced to the
              problem of computing winning sets in weak parity
              games. For the latter, a linear-time algorithm is
              presented.

Fr 12. Mai 2006, 14:15 Uhr

Es spricht:   Karl Mehltretter
              LMU München

über:         Translation between the linear-time mu-calculus and
              weak alternating parity automata (advanced practical
              project)

Abstract:     As part of the muSABRE project, the translation between
              (1) the linear-time mu-calculus and
              (2) weak alternating parity automata
              was implemented.

              These are equivalent formalisms for describing regular
              properties of infinite words. Combining the translation
              from (1) to (2) with the one from (2) back to (1)
              yields a procedure for the complete elimination of

Fr 5. Mai 2006, 13:30 Uhr

Es sprechen:  Anja Feichtner, Susanne Sojer
              LMU München

über:         Parity to SAT

Abstract:     The goal of this advanced practical project is to
              implement the reduction of parity to SAT and to have
              the result solved by a SAT solver, i.e. among other
              things to implement the formula.

Fr 5. Mai 2006, 14:30 Uhr

Es spricht:   Phil Trinder
	      Dependable Systems Group
	      Heriot-Watt University

über: 	      Towards Mobility Skeletons

Abstract:     In mobile languages programmers control the placement of code or
	      computations in open networks, e.g. a program can migrate between
	      locations.  Mobile computations are typically stateful and interact
	      with the state at each location in the network.  We propose mobility
	      skeletons, higher-order polymorphic functions that encapsulate common
	      patterns of mobile computation.  Mobility skeletons are analogous to,
	      but very different from, algorithmic skeletons - higher-order
	      polymorphic functions encapsulating common patterns of
	      parallel computation.

	      We have identified three common patterns of mobile computation,
	      and implemented them as a library of higher-order functions in
	      mobile Haskell.  Each mobility skeleton is defined and
	      illustrated with an example. We show how mobility skeletons can be
	      composed and nested, and illustrate their use in a non-trivial case
	      study: a distributed meeting planner. Mobility skeletons are extensible:
	      there is a small set of mobility primitives, and medium-level
	      abstractions such as remote evaluation can be defined using them. New
	      mobility skeletons can be defined using the medium- and
	      low-level abstractions.

Di 14. März 2006, 13:00 Uhr

Es spricht:   Christian Dax
              LMU München

über:         Games for the linear-time mu-calculus
              (diploma thesis, Martin Lange)

Abstract:     The linear-time mu-calculus is a temporal logic in
              which omega-regular properties can be specified. The
              diploma thesis develops a game-theoretic
              characterisation of the model-checking, validity and
              satisfiability problems. The intermediate talk already
              covered Stirling's games for solving the
              model-checking problem for word models and presented
              the extension of these games for solving the
              model-checking problem for tree models. Building on
              this, the talk will now explain how the "tree games"
              relate to the games for solving the validity and
              satisfiability problems. In addition, a method will be
              presented that uses automata theory to encode the
              winning conditions of the games and thereby decides
              the problems above in PSPACE.

3. Februar 2006, 14:00 Uhr

Es spricht:   Ulrich Schöpp
              LMU München

über:         Space-efficient programs and the "Geometry of Interaction"

Abstract:     Møller-Neergaard and Mairson have recently introduced
              the function algebra BC^-_epsilon and shown that it
              contains exactly the functions computable in
              logarithmic space.  In this talk we consider the
              Geometry of Interaction as an approach to extending
              the result of Møller-Neergaard and Mairson to more
              realistic programming languages and other complexity
              classes.  The Geometry of Interaction makes it
              possible to describe computations by passing questions
              and answers around in a static network.  Since
              questions and answers often require only little
              memory, this description can help to evaluate programs
              space-efficiently.  The aim of the talk is to explain
              these ideas using the example of BC^-_epsilon and to
              discuss the resulting extensions of BC^-_epsilon.
16. Dezember 2005, 14:15 Uhr

Es spricht:   Felix Fischer
              LMU München

über:         On the complexity of finding pure Nash equilibria in
	      strategic games 

Abstract:     In this talk, we investigate the influence of
              neighbourhood on the complexity of deciding whether a
              strategic game has a pure strategy Nash equilibrium. We
              start by introducing strategic games and the concept of
              Nash equilibrium, as well as a particular class of games
              with bounded neighbourhood. We then show that deciding
              whether a strategic game has a pure strategy Nash
              equilibrium is NP-complete even if each player has at
              most 2 different actions, her payoff depends on the
              strategies of at most 2 other players, and payoffs are
              two-valued. This is the best possible result, since we
              also show the problem to be NL-complete for games with
              1-bounded neighbourhood.  With the additional
              restriction to a constant number of actions for each
              player, the problem under consideration becomes
              L-complete, and thus nicely resembles the
              deterministic-nondeterministic problem for logspace
              bounded Turing machines.

2. Dezember 2005, 14:15 Uhr

Es spricht:   Martin Hofmann
              LMU München

über:         The Integrated Project MOBIUS and our current contribution

Abstract:     The EU IST-FET Integrated Project MOBIUS --- Mobility,
              Ubiquity and Security --- has started on 1st September
               with LMU (Lennart Beringer & myself) being one of 16
               partners.

              The goal of the project is to "...develop the technology
              for establishing trust and security for the next
              generation of global computers, using the Proof Carrying
              Code (PCC) paradigm."

              Currently, Lennart and I are thinking about a formalised
              program logic for Java Bytecode with Exceptions and
              Operand Stacks which would form the basis of the MOBIUS
              architecture. The idea is that proofs in this program
              logic can be checked by recipients of code, perhaps even
              on a small device, and that on the other hand, such
              proofs can be automatically generated by type inference
              or program analysis.

              After saying a little bit more about the project in
              general, I will explain the design decisions for the
              program logic that were laid down in a recent meeting
              and how we intend to meet them.

25. November 2005, 14:15 Uhr

Es spricht:   Jan Johannsen
              LMU München

über:         The Complexity of Search Problems

Abstract:     Daskalakis, Goldberg and Papadimitriou have recently
              shown that the problem of computing Nash equilibria is
              complete for the complexity class PPAD. This is a
              class of search problems for which the associated
              decision problem is trivial, since (as in this case by
              Nash's theorem) a solution always exists.  The talk
              will give an overview of the complexity of such search
              problems, with the aim of understanding the above
              result and putting it into context.

18. November 2005, 14:15 Uhr

Es spricht:   Felix Brandt
              LMU München

über:         How to aggregate preferences without revealing them

Abstract:     Whenever a group of autonomous entities, such as humans
              or computational agents, strive for a global decision,
              they need to aggregate their possibly conflicting
              preferences. Well-known examples of traditional
              preference aggregation among humans are auctions and
              voting. Besides achieving a socially "desirable"
              outcome, agents are interested in maintaining the
              privacy of their individual preferences.  We investigate
              whether privacy that does not rely on trusted third
              parties can be achieved in preference aggregation. More
              precisely, we study the existence of distributed
              protocols that allow agents to jointly determine the
              outcome of an aggregation mechanism simply by exchanging
              messages and without revealing any unnecessary
              information. This is called "private emulation" of a
              mechanism. We provide results for both the computational
              (or cryptographic) and the unconditional (or
              information-theoretic) setting. It turns out that, in
              the unconditional setting, some mechanisms cannot be
              emulated privately at all and that the construction of
              efficient protocols for the computational setting is a
              very challenging task.  
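
               A toy instance of aggregation without a trusted third
               party is additive secret sharing: agents can jointly
               compute the sum of their private scores so that no
               single message reveals an individual input. This is a
               standard textbook construction, sketched here for
               illustration; it is not one of the protocols from the
               talk.

```python
import random

# Additive secret sharing over Z_p: each agent splits its private
# value into random shares, one per agent. Any single share is a
# uniformly random number; only the grand total of all shares carries
# information, namely the sum of the inputs.

P = 2_147_483_647  # a prime modulus

def share(value, n):
    """Split value into n additive shares modulo P."""
    shares = [random.randrange(P) for _ in range(n - 1)]
    shares.append((value - sum(shares)) % P)
    return shares

def private_sum(private_values):
    n = len(private_values)
    all_shares = [share(v, n) for v in private_values]
    # Agent j only ever sees column j: one share from each agent.
    column_totals = [sum(col) % P for col in zip(*all_shares)]
    return sum(column_totals) % P

print(private_sum([3, 5, 9]))  # 17, computed without pooling the inputs
```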

Dienstag, 26. Juli 2005, 14:15 Uhr

Es spricht:   Murdoch Gabbay
              King's College London

über:         a-logic, the lambda-calculus, and using names and
	      binding to internalise the meta-level

Abstract:     a-logic extends first-order logic (FOL) with a unary
              predicate *atom* such that atom(s) is true when s is a
              variable symbol.  This internalises what is normally a
              meta-level notion.

              For example, atom(x) is true, and atom(app(x,y)) is
              false, where "app" is a binary term-former.  It may
              sound surprising that such a logic should exist, but it
              does.

              We finitely axiomatise the lambda-calculus and, using
              Interpolation, show that a model of the lambda-calculus exists
              for which no poset order exists such that application is
              continuous on both arguments.  (Consequently, a model of
              lambda-calculus computation based on posets, even ones
              without a bottom element, is incomplete.)

              This should take about 30 minutes.  The proof is quite
              involved (and a word of warning: it may yet break
              embarrassingly).  So we shall only sketch it.

              We then discuss other work, and in so doing describe our
              broader research goal, which is (just) to discover a
              powerful mathematics which we shall argue underlies much
              of what makes several great trends in computer science
              so powerful: higher-order logic, object-oriented
              programming, and component-based architectures.

Dienstag, 19. Juli 2005, 14:15 Uhr

Es spricht:   Henrik Bjoerklund
              RWTH Aachen

über:         A Combinatorial Approach to Infinite Graph Games

Abstract:     Graph games have become an indispensable tool in modern
              computer science. The infinite games we study, such as
              parity, mean payoff, and simple stochastic games, have
              their primary applications in computer-aided
              verification, but also play an important role in
              complexity theory, logic, and automata theory.

              In this talk we focus on a combinatorial approach to
              solving these games. We investigate ways of assigning
              values to game strategies, and efficient ways of
              searching the strategy space, guided by these values. In
              particular, we show how this can be done for mean payoff
              games, and then generalize to a problem we call
              controlled linear programming, to which all the
              mentioned games reduce. The algorithms we present are
              adaptations of subexponential randomized algorithms
              originally invented for linear programming.

              We also consider future prospects of this line of
              research. Among possible continuations we mention
              reduction to linear complementarity problems,
              fixed-parameter analysis, and implications of graph
              structure, such as tree-width and path-width.

Dienstag, 19. Juli 2005, 16:15 Uhr

Es spricht:   Andreas Abel
              Chalmers University of Technology

über:         Type-Based Termination and Productivity Checking

Abstract:     Termination is a liveness property of programs, which
              has received continuing interest due to its importance
              for verification, program generation, and theorem
              proving.  Another liveness property, which is important
              in embedded and reactive programming, is productivity:
              It is dual to termination in the sense that productive
              systems, like, for instance, operating systems, should
              not terminate---but they should continuously turn input
              into output.

              Most methods to check termination and productivity are
              solely based on the operational behavior of a program.
              For higher-order programs, however, often their type
              provides the extra information that is needed to certify
              termination and productivity.  This insight has led to
              the development of type-based termination: The size of
              datastructures is statically encoded in their type
              through ordinal expressions, and recursive programs can
              be defined by transfinite induction.

              In this talk, I will present the type system F_omega^hat
              with sized inductive and coinductive constructors and
              higher-order subtyping, in which all well-typed
              recursive programs are terminating or productive.  It
              supports higher-order functions, impredicative
              polymorphism, polymorphic recursion, and positive
              heterogeneous data types, also called nested data types.
              The type system recognizes termination in situations
              where untyped approaches seem hopeless, e.g., when, in a
              recursive call, a function is passed as an argument to
              itself or even to one of its arguments.

              In a second part of the talk I will explain how my work
              contributes to a bigger research program: the
              development of interactive tools for proving theorems
              about languages with binding and for reasoning about
              properties of concrete programs.

4. Juli 2005, 16:15 Uhr

Es spricht:   Mark Braverman
              University of Toronto

über:         Computability and Complexity of Julia sets

Überblick:    In the first part of the talk I will give an overview of
              the computability and complexity for subsets of R^n.  In
              the second part, we will consider Julia sets as a
              special example.  Julia sets give rise to some of the
              most fascinating quasifractal images arising from
              repetition of a simple polynomial map on the complex
              plane C. We will discuss some computability and
              complexity results for these sets.
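
              The quasifractal images mentioned above are usually
              rendered with the escape-time iteration; here is a
              minimal sketch. The polynomial z^2 + c and the chosen
              parameters are illustrative, and the talk's
              computability results concern much subtler questions
              than such finite approximations.

```python
# Escape-time approximation of a filled Julia set: iterate z -> z^2 + c
# and record how quickly each starting point leaves the disk |z| <= 2.
# Points that do not escape within the iteration budget approximate the
# filled Julia set. The parameter c below is an arbitrary choice.

def escape_time(z, c, max_iter=50):
    for n in range(max_iter):
        if abs(z) > 2.0:
            return n            # escaped after n iterations
        z = z * z + c
    return max_iter             # treated as "did not escape"

def render(c, width=40, height=20):
    rows = []
    for j in range(height):
        y = 1.2 - 2.4 * j / (height - 1)
        row = ''
        for i in range(width):
            x = -1.6 + 3.2 * i / (width - 1)
            row += '#' if escape_time(complex(x, y), c) == 50 else '.'
        rows.append(row)
    return '\n'.join(rows)

print(render(complex(-0.8, 0.156)))
```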

20. Mai 2005, 14:15 Uhr

Es spricht:   Harald Zauner
              LMU München

über:         A practical course on the use of computing systems, from a
              subject-didactics perspective

Überblick:    This presentation will briefly report on the results of the
              above-mentioned practical course, which I completed at the
              Gymnasium Erding II.

              The main topics will be:

              - an account of my teaching experience in the 6th and 9th grades

              - a presentation of the administrative network, focusing on the
                possibilities offered by so-called GPOs (Group Policy Objects)
                and WinSV

              - a presentation of the student network, which is based on
                DX-Union combined with PC-Wächter cards and NetOp.
                Particular attention will be paid to automatic software
                distribution, since this was a main focus of my work.

              Depending on the participants' interests, the weighting of the
              individual topic areas can be varied.

6. Mai 2005, 14:15 Uhr

Es spricht:   Ulrich Schöpp
              University of Edinburgh

über:         A type theory with names and binding operations

Überblick:    We consider the problem of formalising informal
              arguments about abstract syntax with variable binders.
              It is well known that formalising bound variables often
              requires considerable effort. As Gabbay and Pitts have
              shown, Fraenkel-Mostowski (FM) set theory admits an
              abstract notion of names with which such informal
              arguments can be formalised conveniently. In this talk
              we consider the question of how such names can be
              integrated into Martin-Löf type theory.  Our starting
              point is a categorical axiomatisation of names, in
              which familiar FM constructs, such as the new-name
              quantifier, name binders and the unique choice of fresh
              names, can be described by universal constructions.
              Based on this categorical structure, we define a type
              theory with names. This type theory is substructural
              and generalises the \alpha\lambda-calculus of O'Hearn
              and Pym.  For working with names, the type theory
              contains multiplicative products (Pi*), multiplicative
              sums (Sigma*), and a formulas-as-types version of the
              new-name quantifier of Gabbay and Pitts.

Es spricht:   Jean-Yves Moyen
              LORIA Nancy

über:         Quasi-Interpretations

Überblick:    Quasi-interpretations are a way to control the resource
              usage of programs.  Inspired by polynomial
              interpretations and predicative analysis, they give a
              bound on the size of computed values.

              By combining quasi-interpretations with termination
              orderings or linearity, we obtain syntactical
              characterisations of complexity classes such as Ptime,
              Pspace or LinSpace.

              This analyses the implicit complexity of programs, in
              the sense that it bounds the complexity of the function
              computed rather than of the algorithm used. It may
              be necessary to transform programs in order to achieve
              the bound.

18. Februar 2005, 14:30 Uhr

Es spricht:   Andreas Bauer
              TU München

über:         Consistency-based diagnosis via SAT solving

Überblick:    Since the early 1980s there has been a variety of work
              on consistency-based (logic-based) diagnosis, whose goal
              is to draw conclusions about the state of an observed
              system from causal and structural system descriptions.
              Well-known approaches include those of Reiter, McCarthy
              and Williams, all of which rest on expensive procedures
              or unwieldy logical calculi such as default logic or
              abduction.  Not least for this reason, these approaches
              have (with few exceptions) so far found little practical
              uptake.

              The aim of this talk is, after a short introduction to
              diagnosis, to present a solution approach based on a
              modified variant of the DPLL algorithm for the SAT
              problem.  The talk describes this modification and shows
              how it is applied to the construction of diagnoses.
              Under certain preconditions, this should make it
              possible to solve the diagnosis problem efficiently in
              practice as well.
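To illustrate the general idea only (the two-inverter circuit, the variable numbering and the naive minimality search below are mine, not from the talk), consistency-based diagnosis can be sketched on top of a minimal DPLL solver: a diagnosis is a minimal set of "abnormal" assumptions that restores consistency between system description and observations.

```python
from itertools import combinations

def dpll(clauses, assignment=None):
    """Minimal DPLL: unit propagation plus branching.  Clauses are sets of
    nonzero ints (negative = negated variable); returns a set of true
    literals satisfying all clauses, or None if unsatisfiable."""
    assignment = set() if assignment is None else set(assignment)
    changed = True
    while changed:                       # unit propagation to a fixpoint
        changed, pending = False, []
        for c in clauses:
            if any(l in assignment for l in c):
                continue                 # clause already satisfied
            reduced = {l for l in c if -l not in assignment}
            if not reduced:
                return None              # clause falsified: conflict
            if len(reduced) == 1:
                assignment.add(next(iter(reduced)))
                changed = True
            else:
                pending.append(reduced)
        clauses = pending
    if not clauses:
        return assignment
    lit = next(iter(clauses[0]))         # branch on some remaining literal
    for choice in (lit, -lit):
        result = dpll(clauses, assignment | {choice})
        if result is not None:
            return result
    return None

# Toy circuit: two inverters in series, input a (var 1), wires b (2), c (3),
# fault variables ab1 (4) and ab2 (5); "not abnormal" implies correct behaviour.
SD = [{4, -1, -2}, {4, 1, 2},    # ab1  or  (b <-> not a)
      {5, -2, -3}, {5, 2, 3}]    # ab2  or  (c <-> not b)
OBS = [{1}, {-3}]                # observed: a = true but c = false

def diagnoses(sd, obs, ab_vars):
    """Consistency-based diagnosis: subset-minimal sets D of fault variables
    such that SD + OBS + (ab true for D, ab false otherwise) is satisfiable."""
    found = []
    for k in range(len(ab_vars) + 1):
        for subset in combinations(ab_vars, k):
            if any(set(d) <= set(subset) for d in found):
                continue                 # keep only subset-minimal diagnoses
            assumption = [{v} if v in subset else {-v} for v in ab_vars]
            if dpll(sd + obs + assumption) is not None:
                found.append(subset)
    return found

print(diagnoses(SD, OBS, [4, 5]))        # one of the two inverters is faulty
```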

11. Februar 2005, 14:30 Uhr

Es spricht:   Martin Lange
              LMU München

über:         Die Alternierungshierarchie im modalen mu-Kalkül

Überblick:    1996 haben Bradfield und Lenzi unabhängig voneinander
	      gezeigt, dass man im modalen mu-Kalkül (multi-modaler
	      Logik + kleinsten und grössten Fixpunktoperatoren) mit
	      mehr syntaktischer Alternierung auch mehr ausdrücken
	      kann.  1999 hat Arnold einen weiteren und wesentlich
	      einfacheren Beweis für die Existenz solch einer
	      Hierarchie nachgelegt. Dieser benutzt lediglich die
	      Model Checking Spiele für den modalen mu-Kalkül
	      (Paritätsspiele) und Banach's Fixpunktsatz.  [Dieser
	      Vortrag sollte ursprünglich im Hauptseminar "Fixpunkte"
	      gehalten werden, musste jedoch krankheitsbedingt ins
	      Oberseminar verschoben werden.]  

4. Februar 2005, 14:30 Uhr

Es spricht:   Martin Hofmann
              LMU München

über:         Proof-theoretic approach to description-logic

Überblick:    In recent work Baader has shown that a certain
              description logic with conjunction, existential
              quantification and with circular definitions has a
              polynomial time subsumption problem both under an
              interpretation of circular definitions as greatest
              fixpoints and under an interpretation as arbitrary
              fixpoints (introduced by Nebel). This was shown by
              translating definitions in the description logic
              (``TBoxes'') into a labelled transition system and by
              reducing subsumption to a question of the existence of
              certain simulations.  In the case of subsumption under
              the descriptive semantics a new kind of simulation,
              called synchronised simulation, had to be introduced. In
              this paper, we also give polynomial-time decision
              procedures for these logics; this time by devising sound
              and complete proof systems for them and demonstrating
              that proof search is polynomial for these systems. We
              then use the proof-theoretic method to study the
              hitherto unknown complexity of description logic with
              universal quantification, conjunction, and GCI
              axioms. Finally, we extend the proof-theoretic method to
              negation and thus obtain a decision procedure for the
              description logic ALC with fixpoints. This last section
              is only sketched.

19. November 2004, 14:30 Uhr

Es spricht:   Ugo Dal Lago
              Università  di Bologna
              z.Zt. LMU München

über:         On Light Logics, Uniform Encoding and Polynomial Time

Überblick:    We investigate the extensional expressive power of
              Light Affine Logic, analysing the class of uniformly
              representable functions for various fragments of the
              logic. We give evidence of the incompleteness (for
              polynomial time) of the propositional fragment. 
              Following previous work, we show that second order
              leads to polytime unsoundness. We then introduce
              simple constraints on second order quantification and
              least fixpoints, proving the obtained fragment to be
              polytime sound and complete.

12. November 2004, 14:30 Uhr

Es spricht:   Hans-Wolfgang Loidl
              LMU München

über:         A Termination Logic for Grail

Überblick:    Grail is a low-level, JVM-like language with an emphasis
              on exact cost modelling. In the past we have developed a
              VDM-style program logic for partial correctness of Grail
              programs. In this talk we will address the issue of
              proving termination on top of this existing logic, by
              defining a separate layer of a termination logic. Thereby
              we manage to decouple the issues of resource-awareness, a
              main novel feature of the program logic for Grail, from the
              issue of termination of Grail programs.  Finally, we give
              an outlook on integrating these techniques into our
              existing proof-carrying-code infrastructure for resource
              properties of mobile programs.

5. November 2004, 14:30 Uhr

Es spricht:   Martin Lange
              LMU München

über:         Weak Alternating Parity Automata - The Theory Behind mu-SABRE

Überblick:    This talk extends last week's presentation of the mu-SaBRe project,
              this time looking at the theory of omega-regular languages. We will
              introduce weak alternating parity automata and present their
              connection to the linear time mu-calculus. This shows that (1) weak
              alternating parity automata suffice to describe all omega-regular
              languages, and (2) every formula of the linear time mu-calculus is
              equivalent to an alternation-free formula.

29. Oktober 2004, 14:30 Uhr

Es sprechen:  Jan Johannsen, Martin Lange, Markus Jehle, Nicolas Rachinsky, Robert Reitmeier 
              LMU München

über:         Project presentation:
              muSABRE - SAT-Based Verification of Regular Properties

Überblick:    In this project we are developing a tool that uses the
              technique of bounded model checking to verify
              omega-regular properties of hardware and software
              systems.

              In this talk, after an introduction to the method of
              bounded model checking and an overview of the system
              architecture, the students involved in the project will
              present the topics of their practical projects and
              diploma theses.
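As a sketch of the underlying idea (the toy counter model is mine, and a real tool encodes the unrolling as a propositional formula for a SAT solver rather than enumerating paths), bounded model checking unrolls the transition relation up to a depth k and searches for a counterexample of at most that length:

```python
def bmc(init, trans, bad, states, k):
    """Bounded model checking sketch: look for a path of at most k steps
    from an initial state to a 'bad' state, i.e. a counterexample to the
    safety property "bad is never reached".  For a tiny model, explicit
    path enumeration is an equivalent way to show the unrolling idea."""
    frontier = [(s,) for s in states if init(s)]
    for _ in range(k + 1):
        for path in frontier:
            if bad(path[-1]):
                return path                    # counterexample of length <= k
        frontier = [p + (t,) for p in frontier
                    for t in states if trans(p[-1], t)]
    return None

# Toy model: a 2-bit counter 0 -> 1 -> 2 -> 3 -> 0 -> ...
states = range(4)
init = lambda s: s == 0
trans = lambda s, t: t == (s + 1) % 4
bad = lambda s: s == 3                         # claim: "never reaches 3"

print(bmc(init, trans, bad, states, 3))        # found at bound 3: (0, 1, 2, 3)
print(bmc(init, trans, bad, states, 2))        # no counterexample up to bound 2
```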

16. Juli 2004, 14:30 Uhr

Es spricht:   Olha Shkaravska
              LMU München

über:         Arrays with Garbage and Hacks


We study a category of (Loc, V)-arrays subject to a computationally
natural axiomatics of "global state". Arrays turn out to be a convenient
tool for modeling the computational effects of the operations "update"
and "new". We define a canonical array via a map of the form F -> P(R),
with F a filtered product over a Frechet filter and R a set, and
introduce a category of such maps. We show that the category of
(Loc, V)-arrays is equivalent to this category.  Our construction of a
canonical array is based on the observation that an array is a "labeled
transition system".  This allows us to view an array as a graph with
vertices from V^{Loc} and connected components as copies of classes from F.
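For orientation (this is my own gloss, not the categorical construction of the talk), the classical read/write laws that such axiomatizations of global state refine can be checked on a naive dictionary model of arrays as maps Loc -> V:

```python
def update(arr, loc, v):
    """'update' on a functional model of arrays: an array is a map
    Loc -> V, and updating returns a NEW map, so the old array state
    remains observable (a simple global-state reading of the axioms)."""
    new = dict(arr)
    new[loc] = v
    return new

def lookup(arr, loc):
    """'lookup' is just function application."""
    return arr[loc]

a = {0: 'x', 1: 'y'}
b = update(a, 0, 'z')

# McCarthy-style read/write laws:
assert lookup(b, 0) == 'z'              # reading the written cell
assert lookup(b, 1) == lookup(a, 1)     # other cells are untouched
assert lookup(a, 0) == 'x'              # the old state is still intact
```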

2. Juli 2004, 14:30 Uhr

Es spricht:   Hans Dietmar Jäger
              LMU München

über:         eLearning

I will examine the notion of eLearning, taking three contexts into
account:
a) ED-MEDIA (World Conference on Educational Multimedia, Hypermedia &
b) my own research, and
c) university-level didactics.
Each of the three contexts will be considered over the periods past,
today and future.

25. Juni 2004, 14:30 Uhr

Es spricht:   N.S. Narayanaswamy
              IIT Madras, Chennai, India

über:         Algorithms for Satisfiability using Independent
              Sets of Variables

An independent set of variables is one in which no two variables occur
in the same clause in a given instance of k-SAT.  Instances of k-SAT
with an independent set of size i can be solved in time within a
polynomial factor of O(2^(n-i)).  In this paper, we present an
algorithm for k-SAT based on a modification of the Satisfiability
Coding Lemma.  Our algorithm runs within a polynomial factor of
2^((n-i)(1-1/(2k-2))), where i is the size of an independent set.  We
also present a variant of Schöning's randomized local-search algorithm
for k-SAT that runs in time within a polynomial factor of
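The basic O(2^(n-i)) observation can be sketched as follows (the example instance and the encoding are mine): once all variables outside an independent set I are fixed, every surviving clause constrains at most one variable of I (assuming non-tautological clauses), so completing the assignment is a cheap per-variable consistency check.

```python
from itertools import product

CLAUSES = [[1, 2], [-1, 4], [-2, -3], [3, 4]]   # vars 2 and 4 never share a clause

def solve_with_independent_set(clauses, n, indep):
    """Enumerate the 2^(n-i) assignments to the variables OUTSIDE the
    independent set; each remaining clause then mentions at most one
    independent variable, so it either forces that variable's polarity
    or is already decided.  Literals are +/-v for variables 1..n."""
    outside = [v for v in range(1, n + 1) if v not in indep]
    for bits in product([False, True], repeat=len(outside)):
        val = dict(zip(outside, bits))
        forced, ok = {}, True
        for clause in clauses:
            if any((lit > 0) == val[abs(lit)] for lit in clause
                   if abs(lit) in val):
                continue                            # satisfied by outside vars
            rest = [l for l in clause if abs(l) not in val]
            if not rest:                            # clause falsified
                ok = False
                break
            v, want = abs(rest[0]), rest[0] > 0     # single independent var
            if forced.setdefault(v, want) != want:  # conflicting demands
                ok = False
                break
        if ok:
            val.update(forced)
            for v in indep:
                val.setdefault(v, True)             # unconstrained: free choice
            return val
    return None

print(solve_with_independent_set(CLAUSES, 4, {2, 4}))
```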

4. Juni 2004, 14:30 Uhr

Es spricht:   David von Oheimb
              Siemens AG

über:         Information flow control revisited:
              Noninfluence = Noninterference + Nonleakage

We revisit the classical notion of noninterference for state-based
systems, as presented by Rushby in 1992. We strengthen his results in
several ways, in particular clarifying the impact of transitive vs.
intransitive policies on unwinding. Inspired partially by Mantel's
observations on unwinding for event systems, we remove the restriction
on the unwinding relation to be an equivalence and obtain new insights
in the connection between unwinding relations and observational preorders.

Moreover, we make two major extensions. Firstly, we introduce the new
notion of nonleakage, which complements noninterference by focusing not
on the observability of actions but the information flow during system
runs, and then combine it with noninterference, calling the result
noninfluence. Secondly, we generalize all the results to (possibilistic)
nondeterminism, introducing the notions of uniform step consistency and
uniform local respect. Finally, we share our experience using nonleakage
to analyze the confidentiality properties of the Infineon SLE66 chip.

Like Rushby's, our theory has been developed and checked using a theorem
prover, so there is maximal confidence in its rigor and correctness.

28. Mai 2004, 14:30 Uhr

Es spricht:   Christian Dax
              LMU München

über:         Implementation eines Erfüllbarkeitsspiel für Linear Time
              Logic (FoPra Martin Lange)

Mittels Spielen ist es möglich auf intuitive Weise Erfüllbarkeitsprobleme für 
LTL Formeln zu lösen.

Martin Lange hat in seiner PhD Arbeit soche logischen Spiele beschrieben. Im 
Zuge der Projektarbeit wurde ein Erfüllbarkeitsspiel für LTL implementiert 
und dabei in einigen Punkten erweitert, um die Interaktion für den Benutzer 
zu erleichtern.

4. Juli 2003, 14. Uhr c.t.

Es spricht:   Prof. Andreas Goerdt
              Fakultät für Informatik
              TU Chemnitz

über:         Efficient certification of random unsatisfiable 
              2k-SAT instances

We show that random 4-SAT instances with Cn^2 many clauses, C a
sufficiently large constant, can be efficiently certified as
unsatisfiable. The algorithm uses a combination of eigenvalue
calculations and classical approximation algorithms for MAX-CUT.
Our result improves a previous (log n)^C \cdot n^2 bound.

27. Juni 2003, 14. Uhr c.t.

Es spricht:   Olha Shkaravska
              Institut für Informatik
              LMU München 

über:         Polynomial Inference. The Case of Arrays.

We observe conditions under which a given set of inference rules
defines a polynomial-time decidable inference relation. One may
distinguish two approaches to polynomial-time decidability.

The first one is algebraic and based on the works of Skolem (1920),
Evans (1951) and Burris (1995). It refers to certain embeddability and
axiomatizability properties.

The second approach is syntactical and inspired by the works of
McAllester (1992, 1993) and Ganzinger (2001).  It is based on the
notion of a local Horn theory.  For a local theory, if a ground Horn
formula C is provable, then there is a proof such that every term
mentioned in this proof is mentioned in the axiomatics or in
C. Therefore, the ``search space'' is bounded by the size of the
DAG of C.

Ganzinger has shown that both approaches are closely connected.
Locality of a theory implies the Burris condition, which in turn
implies the more general property of stable locality.  The worst-case
time for deciding a stably local theory can be bounded by O(n^{3k}),
where n is the size of the DAG of a given clause C and k is the
maximal number of variables in any axiom (Ganzinger).

We apply the methods above to a special axiomatics for arrays.
Theories of arrays are widely applicable, for instance in
verification procedures, and appear in such provers as PVS and HOL.

We show that our axiomatics satisfies the Burris property and thereby
prove polynomial-time decidability of the theory.  One can apply a
``stably local'' version of McAllester's Tractability Lemma to obtain
a corresponding decision procedure. We consider possibilities for
optimization and compare our approach with other methods.

16. Mai 2003, 14. Uhr c.t.

Es spricht:   Martin Hofmann
              Institut für Informatik
              LMU München 

über:         A new deductive system for Kozen's mu-calculus

A new deductive system for Kozen's propositional mu-calculus is
given and shown to be sound and complete. Proof search in the system
can be done in EXPTIME, thus matching the known complexity of the logic.

The proof of completeness involves traditional methods from
mathematical logic in particular cut elimination and ultrafilters.

A syntactic translation from proofs in the new system into Kozen's
original axiomatisation is announced but not described in detail.

11. April 2003, 14. Uhr c.t.

Es spricht:   Martin Lange 
              Institut für Informatik
              LMU München 

über:         Modal logic with inflationary fixpoints

A surprising result of finite model theory is the fact that
first-order logic with least fixpoints (FO+LFP) is exactly as
expressive as FO+IFP, first-order logic with inflationary
fixpoints.

Dawar, Grädel and Kreutzer have shown that this is not the case in
the modal world.  The logic MIC (Modal Iteration Calculus) is more
expressive than, and has different properties from, the modal
mu-calculus.  Among other things, they showed that its model
checking problem is PSPACE-complete, based on naive fixpoint
iteration.  In this talk - quite informally, at the blackboard - I
want to present an idea for a game-based characterisation of the
model checking problem (initially for a fragment of MIC).  These
games work, but are far from optimal.  Perhaps someone in the
audience will see how to do better.

4. April 2003, 14. Uhr c.t.

Es spricht:   Kevin Hammond
              University of St Andrews
              Scotland, UK

über:         A system for automatically deriving costs for
              higher-order, recursive functions

This talk introduces a type-and-effect system for
a simple, polymorphic, higher-order and (primitive) recursive
functional language, L.  The system expresses evaluation costs
for L-expressions in terms of the size of input data.  We show
how an algorithm can be derived for L-expressions that can be
used to statically infer size and time information in conjunction
with suitable constraint solving technology.  We have used our
approach to infer costs for a number of standard higher-order
functions taken from the Haskell prelude.

Our intention is that this cost model should be applied in the
domain of embedded real-time systems.

24. Januar 2003, 14. Uhr c.t.

Es spricht:   Volker Strehl
              Institut für Informatik
              FAU Erlangen-Nürnberg

über:         From the Fibonacci numbers to Catalan's constant

  Formal languages can be regarded as generalisations of generating
  functions.  Since the latter are an established tool for handling
  combinatorial counting problems, it is not surprising that problems
  involving rational or algebraic generating functions can often also
  be viewed and studied in a formal-language setting (regular or
  context-free languages, respectively).  The DSV method
  (Dyck-Schützenberger-Viennot) makes this viewpoint precise.  Under
  this aspect, the talk will mainly be concerned with a classical
  tiling problem.
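A minimal instance of this correspondence (my example, not necessarily the talk's): tilings of a 1 x n strip by squares and dominoes can be read as words of a formal language, and their counting sequence is Fibonacci, with rational generating function 1/(1 - x - x^2).

```python
def tilings(n):
    """Enumerate tilings of a 1 x n strip by squares ('s') and dominoes
    ('d') as words; their number is a Fibonacci number."""
    if n == 0:
        return ['']
    words = [w + 's' for w in tilings(n - 1)]
    if n >= 2:
        words += [w + 'd' for w in tilings(n - 2)]
    return words

def fib_count(n):
    """The same count via the recurrence encoded in the generating
    function T(x) = 1 + x*T(x) + x^2*T(x), i.e. t(n) = t(n-1) + t(n-2)."""
    a, b = 1, 1          # t(0), t(1)
    for _ in range(n):
        a, b = b, a + b
    return a

# The language-theoretic count agrees with the generating-function count:
for n in range(7):
    assert len(tilings(n)) == fib_count(n)
print([fib_count(n) for n in range(7)])   # [1, 1, 2, 3, 5, 8, 13]
```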

10. Januar 2003, 14. Uhr c.t.

Es spricht:   Hans-Wolfgang Loidl 
              Institut für Informatik 
              LMU München                   

über:         GpH: A Parallel Functional Programming Language

In this talk I will first give an overview of GpH (Glasgow parallel
Haskell), a parallel extension of the purely functional, non-strict
programming language Haskell.  I will discuss programming techniques used
in order to obtain a clean separation between declarative and behavioural
code.  The latter enables the programmer to control the degree of
parallelism and the order of evaluation, without explicitly generating
threads.  The implicit management of the generated parallelism requires a
sophisticated runtime system.  I will discuss its key components and
techniques that are crucial to obtain architecture-independent
performance.  Finally, I will discuss a static analysis that extracts upper
bounds on the computation costs of a simplified language.  This granularity
information can be used by the runtime system to improve the performance of
a parallel program.

20. Dezember 2002, 14. Uhr c.t.

(Gemeinsame Veranstaltung mit CIS)

Es spricht:   Georg Gottlob
              TU Wien

über:         Monadic Queries over Tree-Structured Data

Monadic query languages over trees currently receive considerable
interest in the database community, as the problem of selecting nodes
from a tree is the most basic and widespread database query problem in
the context of XML. We give a survey of recent work done by the author
and his group on logical query languages for this problem and their
expressiveness, providing a number of new results related to
the complexity of such languages over so-called axis relations (such
as "child" or "descendant") which are motivated by their presence in
the XPath standard or by their utility for data extraction (wrapping).

On the morning of the same day (Fri 20.12., 10-12, Oettingenstr. 67,
room 0.43), Prof. Gottlob will give a further talk in the Kolloquium
Computerlinguistik, on

     Web information extraction with LIXTO (talk with demo)

which presents an application of these theoretical results.

22. November 2002, 14. Uhr c.t.

Es spricht:   Martin Lange
              Institut für Informatik 
              LMU München                   

über:         CTL+ is 2-EXPTIME-complete

Like the far better known logics CTL and CTL*, CTL+ is a temporal
logic interpreted over the states of transition systems.  Syntactically,
CTL is a fragment of CTL+, which in turn is a fragment of CTL*.  Hence
the satisfiability problem for CTL+ is at least EXPTIME-hard and at
most in 2-EXPTIME.  A matching completeness result, however, was not
previously known.

In this talk we show that the problem is 2-EXPTIME-hard.  The proof
builds on Vardi and Stockmeyer's proof of the 2-EXPTIME-hardness of
CTL*.  The result is somewhat surprising, given that

- in terms of expressive power, CTL and CTL+ are equally powerful but
differ in succinctness, while CTL* is strictly more powerful, and

- the model checking problems of the three logics are complete for
different complexity classes (PTIME, Delta_2^p and PSPACE,
respectively).

The talk is based on joint work with Jan Johannsen.

22. November 2002, 14. Uhr c.t.

Es spricht:   Phil Trinder
              School of Mathematical and Computer Sciences
              Heriot-Watt University, Edinburgh               

über:         Explaining Polymorphic Types

Hindley-Milner type inference forms a strong basis for polymorphic
type checking but is less well suited to explaining polymorphic types,
as it introduces intermediate constructs that relate poorly to a
programmer's understanding of the program. We present a type
explanation system motivated by explanations given by expert humans
rather than the classical W and M type inference algorithms. The core
of the system is a new inference algorithm, the H algorithm, that
sacrifices some efficiency to provide succinct explanations with
minimal reference to inference artifacts.

The talk summarises results reported in:
Explaining Polymorphic Types. The Computer Journal 45(4) (2002)
pp. 436-452.

15. November 2002, 14. Uhr c.t.

Es spricht:   Ian Stark
              University of Edinburgh               

über:         Strong normalisation for the lambda-calculus
              with computational monads

Moggi's "computational metalanguage" lambda-ML extends the
simply-typed lambda-calculus with type and term constructors for a
generic notion of computation.  This metalanguage is used, for
example, in Haskell's monads for state and I/O, and as the monadic
intermediate language of the MLj and SML.NET compilers.

We have shown that the calculus lambda-ML is strongly normalising; the
result itself is not surprising, but its proof suggests general ways
to handle computation.  In this talk I shall present two different
approaches.  The first relates lambda-ML to the lambda-calculus with
an extra associativity rewrite, which by some careful counting we can
show to be strongly normalising.  In a second, more substantive,
method we provide a reducibility predicate for computations.  This
uses a novel inductive definition via continuations, with a two-stage
lifting that we propose as a general method for extending existing
lambda-calculus techniques to handle computation types.

This is joint work with Samuel Lindley.

8. November 2002, 14. Uhr c.t.

Es spricht:   Andreas Abel
              Institut für Informatik 
              LMU München                   

über:         A report from ICFP'02

The most recent ICFP (International Conference on Functional
Programming) took place from 4-6 October in Pittsburgh, USA (together
with GPCE and PPDP, as well as 7 workshops).  I identified the
following focal topics of the conference:

- generics (run-time type information, tagless interpreters)
- meta-programming (type-safe macros)
- program optimisation by fusion
- monads (composition, recursion)

Beyond these there were many individual topics.  In my report I will
briefly sketch the talks on the four focal topics (including some
from the Haskell workshop).

The list of all talks is available at:


25. Oktober 2002, 14. Uhr c.t.

Es spricht:   Favio Miranda
              Institut für Informatik 
              LMU München                   

über:         On the Inheritance of Strong Normalization and 
              Subject Reduction from Church to Curry Systems 
              of Monotone Inductive Types

We define a Curry-style version of Matthes' system of monotone inductive
types IMIT and extend the method presented in [1] to show that strong
normalization and subject reduction for this system are inherited from
the original Church-style system IMIT.
The method, based on type-erasing functions, formalizes the fact that
Church-terms are notations for Curry-derivations and is easily extensible
to other typing derivations in Curry-style systems of inductive and
coinductive types.
For subject reduction, this gives an alternative to the methods of
Barendregt and Krivine.

[1] F. Miranda-Perea. A note on going from system F to system F a la
    Curry and back. 
    (available via http://www.tcs.informatik.uni-muenchen.de/~miranda)

18. Oktober 2002, 14. Uhr c.t.

Es spricht:   Prof. Roberto Amadio
              Universite de Provence, Marseille
              z.Z. Gast am Institut fuer Informatik der LMU

über:         On decidability of the control reachability problem
              in the asynchronous pi-calculus


We explore the expressive power of name generation in the framework of
the asynchronous pi-calculus.

In particular we consider the basic control reachability problem and
show that the combination of name generation with either name mobility
or unbounded control leads to an undecidable fragment.

On the other hand, we prove that name generation with unique receiver
and bounded input is decidable by reduction to the coverability
problem for Petri Nets with transfer.

We take this opportunity to review some basic but quite useful facts
on transition systems on well-orders.

12. Juli 2002, 14. Uhr c.t.

Es spricht:   Dr. J. Wolfgang Degen
              Institut für Informatik 
              FAU Erlangen-Nürnberg

über:         Some arithmetical and algebraic choice principles


We consider choice principles of an arithmetical nature; they are
systematically connected and concern
  (1) the existence of pairing functions,
  (2) cancellability, and
  (3) the existence of cylindrifications.

These choice principles also have an algebraic nature. We close with
a purely algebraic choice principle concerning the existence of
certain groupoid structures.

21. Juni 2002, 14. Uhr c.t.

Es spricht:   Olha Shkaravska
              Institut für Informatik 
              LMU München                   

über:         Real functions defined by transducers


We consider transducers over infinite symbolic representations
of real numbers. An R-transducer reads, symbol by symbol, an
infinite binary representation of an input number and outputs
the symbols of a representation of the given function's value.

We will also use the notion of an R_m^n-transducer with m
input and n output tapes, and the notion of an R_{(s, t)}-
transducer, which has s-ary input expansions and t-ary
outputs. We do not demand computability of the input numbers.

R-transducers can be finite, or they can have additional
pushdown memory, a stack, etc., so real functions can be
classified according to the type of the corresponding transducer.
From a user's point of view, transducers are algorithms
that can compute the values of functions at given points
with arbitrary precision. Transducers can also be applied
in fractal geometry, since they are well suited to defining
functions on recursive fractal sets.

In particular, we are going to consider the following results:

- the class of functions that are differentiable on some interval
and defined by push-down transducers is contained in the
class of functions that are linear on this interval;

- the Mostovsky-Uspensky theorem on notation translation for
transducers: a strict R_{(s, t)}-transducer (symbols "s" are
not allowed on an output tape) defining the identity function
exists if and only if the set of prime divisors of the number
s contains the set of prime divisors of t; the condition
for finite transducers is also considered;

- there exists an  R_1^2-transducer defining a Peano Curve
with overlapping coefficient 3.

14. Juni 2002, 14 Uhr c.t.

Es spricht:   Jan Johannsen
              Institut für Informatik 
              LMU München                   

über:         An Optimal Lower Bound for Resolution with Width-2 Conjunctions


A lower bound is proved for refutations of certain clause sets in a
generalization of Resolution that allows cuts on conjunctions of width
2. The hard clauses are the Tseitin graph formulas for a class of
logarithmic degree expander graphs. The bound is optimal in the sense
that it is truly exponential in the number of variables.

This talk continues the talk of N.S. Narayanaswamy (19.4.02) on our
joint work; it will present more details, but also be self-contained. 
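For concreteness (a toy triangle rather than an expander graph, and a brute-force satisfiability check in place of a proof-size argument), Tseitin graph formulas can be generated and tested as follows; over a connected graph they are unsatisfiable exactly when the total charge is odd:

```python
from itertools import product

def tseitin_clauses(edges, charge):
    """Tseitin formula: one parity constraint per vertex v, requiring the
    XOR of its incident edge variables to equal charge[v].  A constraint
    over d edges expands into the clauses that forbid each wrong-parity
    assignment of those edges (2^(d-1) clauses per vertex)."""
    clauses = []
    for v, c in charge.items():
        incident = [i + 1 for i, e in enumerate(edges) if v in e]
        for bits in product([0, 1], repeat=len(incident)):
            if sum(bits) % 2 != c:       # forbidden pattern: add its negation
                clauses.append([x if b == 0 else -x
                                for x, b in zip(incident, bits)])
    return clauses

def satisfiable(clauses, n):
    """Brute-force SAT check over n variables (fine for toy instances)."""
    for bits in product([False, True], repeat=n):
        if all(any(bits[abs(l) - 1] == (l > 0) for l in c) for c in clauses):
            return True
    return False

# Triangle; total charge 1 is odd, so the formula is unsatisfiable.
edges = [(0, 1), (1, 2), (0, 2)]
odd = tseitin_clauses(edges, {0: 1, 1: 0, 2: 0})
even = tseitin_clauses(edges, {0: 0, 1: 0, 2: 0})
print(satisfiable(odd, 3), satisfiable(even, 3))   # False True
```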

7. Juni 2002, 14 Uhr c.t.

Es spricht:   Prof. Martin Hofmann
              Institut für Informatik 
              LMU München                   

über:         Completeness results for (traced) monoidal categories 

In joint work with Gordon Plotkin and M. Hasegawa we have obtained the
following two results:

Two maps in the free symmetric monoidal closed category are equal iff they
are equal in an infinite-dimensional vector space over the field of rational
functions. (This is a known result conjectured by MacLane and proved by
Solovev; however, our proof is different and in our opinion a lot simpler
than Solovev's.)

Two maps in the free traced monoidal category are equal iff they are equal
in the category of finite dimensional vetor spaces over Q. (We do not know
whether this result is known.)

19. April 2002, 14 Uhr c.t.

Es spricht:   N.S. Narayanaswamy
              Institut für Informatik 
              LMU München                   

über:         An Optimal Lower Bound for Resolution with Two Conjunctions


A lower bound is proved on the size of refutations of the Tseitin
graph clauses for a logarithmic degree expander graph, in the proof
system Res(2) that extends Resolution by allowing disjunctions of
conjunctions of two literals instead of just clauses. It is
exponential in the number of variables, and thus as
large as possible up to a constant factor in the exponent.

(Joint work with Jan Johannsen)

18. Mai 2001, 16 Uhr c.t.

Es spricht:   Rohit Khandekar
              Dept. of Computer Science and Engineering
              Indian Institute of Technology, Delhi               

über:         Local Search Heuristics for k-median and
              Facility Location Problems      


In this talk, we will consider some local search heuristics for the
k-median and facility location problems. We define the locality gap
of a local search procedure as the maximum ratio of the cost of a
locally optimal solution (obtained using this procedure) to that of
the global optimum. For k-median, we show that local search with
swaps has a locality gap of exactly 5. When we permit p facilities
to be swapped simultaneously, the locality gap is exactly 3+2/p.
This also improves the previously known 4-approximation for this problem.
For uncapacitated facility location, we show that local search, which
permits adding, dropping and swapping a facility, has a locality gap
of exactly 3. This improves the bound of 5 due to Korupolu et al. We
also consider a capacitated facility location problem where each facility
has a capacity and we are allowed to open multiple copies of a facility.
For this problem we introduce a new operation which opens one or more
copies of a facility and drops zero or more facilities. We prove that
local search which permits this new operation has a locality gap between
3 and 4.
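As an illustration of the swap-based local search discussed above (a minimal sketch, not the speakers' implementation), here is a Python version for a hypothetical one-dimensional instance with single swaps:

```python
import itertools

def cost(medians, points):
    # total connection cost: each point pays the distance to its nearest median
    return sum(min(abs(p - m) for m in medians) for p in points)

def local_search_kmedian(points, k):
    # start from an arbitrary feasible solution: the first k points
    medians = set(points[:k])
    improved = True
    while improved:
        improved = False
        for m, p in itertools.product(list(medians), points):
            if p in medians:
                continue
            candidate = (medians - {m}) | {p}
            # accept the swap only if it strictly decreases the cost
            if cost(candidate, points) < cost(medians, points):
                medians = candidate
                improved = True
                break
    # loop ends when no single swap improves: a locally optimal solution
    return medians

points = [0, 1, 2, 10, 11, 12, 20, 21, 22]
solution = local_search_kmedian(points, 3)
```

By the result above, the cost of such a locally optimal solution is at most 5 times the global optimum; allowing p simultaneous swaps tightens this to 3+2/p.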

11. Mai 2001, 16 Uhr c.t.

Es spricht:   N.S. Narayanaswamy
              Institut für Informatik 
              LMU München                   

über:         On Assigning Prefix Free Codes to the Vertices of a Graph


We consider the problem of assigning binary strings of specified lengths
to the vertices of an undirected graph such that the strings assigned to
the endpoints of an edge are not prefixes of each other. We present the
motivation for this problem and discuss the existence of the above-mentioned
assignment as a function of the specified lengths.
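The validity condition on an assignment can be stated compactly; a minimal Python sketch, with a hypothetical instance (a path on three vertices with prescribed lengths 1, 2, 1):

```python
def is_valid_assignment(edges, code):
    # valid iff for every edge {u, v} neither endpoint's string
    # is a prefix of the other's
    return all(
        not code[u].startswith(code[v]) and not code[v].startswith(code[u])
        for u, v in edges
    )

edges = [("a", "b"), ("b", "c")]
good = is_valid_assignment(edges, {"a": "0", "b": "10", "c": "0"})  # True
bad = is_valid_assignment(edges, {"a": "0", "b": "01", "c": "0"})   # False: "0" is a prefix of "01"
```

Note that non-adjacent vertices (here a and c) may share the same string; only edges impose the prefix-freeness constraint.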

16. Juni 2000, 16 Uhr c.t.

Es spricht:   N.S. Narayanaswamy
              LFE Theoretische Informatik
              Institut für Informatik 
              LMU München                   

über:         Space Efficient Algorithms through Randomized Algorithms 

Abstract: It is known that a random walk on a connected graph will
visit every vertex of the graph, and that the expected time to visit
every vertex is polynomial in the number of vertices. This yields the
most space-efficient known algorithm for deciding whether a given
undirected graph is connected. The drawback is that this
algorithm is randomized. This talk will present state-of-the-art
techniques that reduce the randomness used by random walks and
yet do not lose out drastically on the amount of space used.
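A minimal simulation of the random-walk idea in Python (a sketch, with a hypothetical 6-cycle; the step budget is generous compared to the polynomial cover time):

```python
import random

def walk_covers(graph, start, steps):
    # simulate a random walk and report whether it visited every vertex;
    # only the current vertex and the set of visited vertices are kept
    visited = {start}
    v = start
    for _ in range(steps):
        v = random.choice(graph[v])
        visited.add(v)
    return len(visited) == len(graph)

random.seed(0)
cycle = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
covered = walk_covers(cycle, 0, 2000)
```

The derandomization techniques of the talk replace the random choices by pseudorandom ones while keeping the space usage close to that of the plain walk.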

18. Februar 2000, 16 Uhr c.t.

Es spricht:   PD Dr. Martin Grohe
              Institut für Mathematische Logik und 
                  Grundlagen der Mathematik
              Albert-Ludwigs-Universität Freiburg

über:        A logical approach to the graph isomorphism problem

Abstract: The graph isomorphism problem, i.e. the question whether
there is a polynomial-time algorithm that decides whether two given
graphs are isomorphic, is of great importance both in theory and in
practice. In 1992, Cai, Fürer and Immerman observed a close connection
between definability in logics with finitely many variables and a
well-known approach to the graph isomorphism problem, the so-called
Weisfeiler-Lehman algorithm. They used this connection to establish
the limits of the Weisfeiler-Lehman algorithm with techniques from
logic.

We take up this logical approach to show, conversely, that for
important classes of graphs, such as planar graphs or graphs of
bounded treewidth, the Weisfeiler-Lehman algorithm does solve the
isomorphism problem. Among other things, we obtain in this way the
first polynomial-time isomorphism test for graphs embeddable into a
fixed non-orientable surface.

19. November 1999, 16 Uhr c.t.

Es spricht:    Dr. Jan Johannsen
               Institut fuer Informatik
               LMU Muenchen

ueber:         Resolution and the polynomial calculus

The polynomial calculus (PC) is a proof system for propositional
logic, introduced by R. Impagliazzo, in which one computes with
polynomials over a (finite) field.

The essential property that distinguishes PC from other proof
systems is its automatizability, i.e. that proof search can be
made deterministic without substantial losses.

The talk will present PC and its relation to the well-known
resolution calculus, in particular the new result that the usual
simulation of resolution in PC is optimal.

30. Juli 1999, 14 Uhr c.t.

 Am  Freitag, 30. Juli um 14 Uhr c.t.  
 im  Raum Z1.09, Oettingenstr. 67
Es spricht:	  Dipl. Inf. Walter Koch
-----------       TEMIC

ueber:      Speech recognition for large and very large vocabularies


Speech recognition systems are becoming relevant for more and more
applications. The market no longer offers only speaker-dependent
dictation systems, but also speaker-independent systems with large and
very large vocabularies (e.g. information and navigation systems). The
trend is towards ever larger vocabularies and ever more
natural-language input.
For such systems the recognition rate is of course important, but
nearly as important are the recognition speed and the hardware
requirements (price) of the system.
This talk gives an introduction to state-of-the-art
speaker-independent speech recognition, with particular attention to
approaches and algorithms for large vocabularies.

18. Juni 1999, 14 Uhr c.t.

Am  Freitag, 18. Juni um 14 Uhr c.t.  
im  Raum Z1.09, Oettingenstr. 67
Es spricht:	  Dr. Sebastiaan Terwijn

ueber:  PAC-learning and the Vapnik-Chervonenkis dimension


We give a short introduction to Valiant's model of
probably approximately correct (PAC) learning. This model was
introduced in 1984 and has become one of the most studied
models of learning in computer science, with many connections
to other subjects such as Kolmogorov complexity, cryptography,
structural complexity theory, and formal language theory.
We will focus the discussion on a notion of dimension from
combinatorics called the Vapnik-Chervonenkis dimension, and
its central role in characterizing PAC learnability.

4. Juni 1999, 14 Uhr c.t.

 Am  Freitag, 4. Juni um 14 Uhr c.t.
 im  Raum Z1.09, Oettingenstr. 67
Es spricht:	  Uli Ruehrmair, M.Sc.            
-----------       Lehrstuhl f. Theoretische Informatik  
                  Universitaet Muenchen

ueber:  The complexity class NPI = NP/NPC/P.


The complexity class NPI is an interesting class of problems. The
fundamental result is Ladner's theorem, which states that NPI is
non-empty if and only if NP is different from P. This result and the
diagonalization technique used in its proof serve as the starting
point for a closer structural investigation of the class NPI.
At the beginning of the talk, a variant of Schöning's Uniform
Diagonalisation Theorem, the Complement Diagonalisation Theorem, is
presented and proved, and then used to prove some closure properties
of NPI.
After that, the low and the high hierarchy, also introduced by
Schöning, are discussed. They are an extremely useful instrument for
structuring NPI and for structurally classifying problems within NPI.
The low and the high hierarchy made it possible, for example, to
motivate and prove statements of the form "If Graph Isomorphism is
NP-complete, then the polynomial hierarchy collapses to the second
level".
Finally, we introduce the low reducibilities, an infinite family of
reducibilities, and investigate the structure of NPI with respect to
these reducibilities. Under the assumption that the polynomial
hierarchy does not collapse, this yields a strong generalization of
the theorem that NPI contains infinitely many incomparable Turing
degrees. This generalization is discussed and proved.

21. Mai 1999, 14 Uhr c.t.

 Am  Freitag, 21. Mai um 14 Uhr c.t.  (NEW STARTING TIME)
 im  Raum Z1.09, Oettingenstr. 67
Es spricht:	  Dr. Jan Johannsen          

ueber:      Exponential separations between restricted resolution and
            cutting planes proof systems


We prove an exponential lower bound on the size of tree-like cutting
planes proofs of a family of tautologies that have resolution proofs
of polynomial size. From this we obtain an exponential separation
between tree-like and dag-like proofs, both for the resolution
calculus and for cutting planes; in both cases only a superpolynomial
separation was known before.
The proof rests on a generalization of the lower bounds on the depth
of monotone circuits of Raz and McKenzie (FOCS'97) to the case of
monotone real circuits.
For the resolution calculus, the result can be strengthened to an
exponential separation of tree-like from regular proofs.
Finally, an exponential separation of Davis-Putnam resolution from
unrestricted resolution is given, improving the previously known
superpolynomial separation.
These results were obtained jointly with M.L. Bonet, J.L. Esteban and
N. Galesi (Barcelona).

12. Februar 1999, 16 Uhr c.t.

Am  Freitag, 12. Februar um 16 Uhr c.t.
im  Raum Z1.09, Oettingenstr. 67
Es spricht:	Dipl.-Inf. Uwe Ohler           
-----------     Lehrstuhl f. Mustererkennung   
                Universitaet Erlangen-Nuernberg
ueber:          Interpolated and discriminative Markov chains for the
                functional annotation of DNA sequences

In large sequencing projects it is by now common to compare, in a
first step and with the help of the computer, the newly obtained
sequence data with already known gene sequences, and to annotate the
data functionally. The hope is to gain first insights into the
structure and function of the sequences. The automatic recognition of
genes and their structure is called "gene parsing" and has made
enormous progress through the use of statistical models; error-free
annotation, however, is still hindered by the complexity of
eukaryotic gene structure.

A widespread component of gene parsers are decision functions that
assign a given sequence to one of several possible classes based on
its content -- that is, as a whole and not by means of specific
patterns. This problem arises, for example, when distinguishing
coding from non-coding DNA sequences, and in the search for promoter
regions, which play an essential part in the mechanism of gene
regulation.

The talk explains the foundations of a content-based classifier,
Markov chains. Building on this, it is shown how the classification
performance can be increased both by interpolating several Markov
chains and by discriminative learning methods. The search for
eukaryotic promoters and the recognition of coding regions serve as
examples of how these methods can be used for the annotation of
sequence data.
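To make content-based classification with Markov chains concrete, here is a small Python sketch (an illustration only, with made-up toy sequences, not the talk's models): a fixed-order Markov chain is trained per class, with add-one smoothing, and a query is classified by log-likelihood:

```python
import math
from collections import defaultdict

def train_markov(seqs, k=1):
    # order-k Markov chain: count the next symbol for each k-symbol context
    counts = defaultdict(lambda: defaultdict(int))
    for s in seqs:
        for i in range(k, len(s)):
            counts[s[i - k:i]][s[i]] += 1
    return counts

def log_likelihood(counts, s, k=1, alpha=1.0):
    # log-probability of s under the chain, with add-alpha smoothing
    # over the 4-letter DNA alphabet
    ll = 0.0
    for i in range(k, len(s)):
        ctx = counts[s[i - k:i]]
        total = sum(ctx.values()) + 4 * alpha
        ll += math.log((ctx[s[i]] + alpha) / total)
    return ll

coding = train_markov(["ACGACGACG", "ACGACG"])       # toy training data
noncoding = train_markov(["AATTAATT", "TTAATTAA"])
query = "ACGACG"
is_coding = log_likelihood(coding, query) > log_likelihood(noncoding, query)
```

Interpolated Markov chains combine such models of several orders k; discriminative training adjusts the parameters to maximize class separation rather than per-class likelihood.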

5. Februar 1999, 16 Uhr c.t.

Am  Freitag, 5. Februar um 16 Uhr c.t.
im  Raum Z1.09, Oettingenstr. 67
Es spricht:	Rolf Backofen, Peter Clote und Sebastian Will
-----------     Institut für Informatik
                LMU Muenchen
ueber:          Algorithmic approach to quantifying the hydrophobic force 
                contribution in protein folding

We will report on our new results on quantifying the contribution of
hydrophobicity to the energy function in protein folding. Here is the
abstract of the corresponding paper:

Though the electrostatic, ionic,
van der Waals, 
Lennard-Jones, hydrogen bonding,
and other forces play an important
role in the energy function minimized at a protein's
native state, it 
is widely believed that the {\em hydrophobic force}
is the dominant term in protein folding.
In this paper, we attempt to
quantify the extent to which the
hydrophobic force determines the positions of the
backbone $\alpha$-carbon atoms in PDB data, by applying
Monte-Carlo and genetic algorithms to determine the
predicted conformation with minimum energy, where
only the hydrophobic force is considered
(i.e. Dill's HP-model,
and refinements using Woese's polar requirement).
This is done by computing the root mean square deviation
between the normalized distance matrix $D = (d_{i,j})$
($d_{i,j}$ is 
normalized Euclidean distance between residues
$r_i$ and $r_j$) for
PDB data with that obtained from the output of our 
algorithms. Our program was run on 
the database of ancient conserved 
regions drawn from GenBank 101 
generously supplied by W. Gilbert's lab
\cite{gilbertData,desouzaGilbert}, as well
as medium-sized proteins (E. coli RecA, {\tt 2reb},
Erythrocruorin, {\tt 1eca}, and Actinidin, {\tt 2act}). The root mean square
deviation (RMSD) between distance matrices derived from
the PDB data and from our program output
is quite small, and by comparison with the
RMSD between PDB data and random coils, allows
a quantification of the hydrophobic force contribution.
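The distance-matrix comparison used above can be sketched in Python (a minimal illustration with made-up 2-D coordinates, not the authors' program); note that the RMSD of pairwise distance matrices is invariant under translation and rotation of the conformations:

```python
import math

def distance_matrix(coords):
    # pairwise Euclidean distances between residue positions
    n = len(coords)
    return [[math.dist(coords[i], coords[j]) for j in range(n)]
            for i in range(n)]

def rmsd(D1, D2):
    # root mean square deviation between two n x n distance matrices
    n = len(D1)
    sq = sum((D1[i][j] - D2[i][j]) ** 2 for i in range(n) for j in range(n))
    return math.sqrt(sq / (n * n))

native = [(0.0, 0.0), (3.0, 0.0), (0.0, 4.0)]      # made-up conformation
shifted = [(1.0, 1.0), (4.0, 1.0), (1.0, 5.0)]     # same shape, translated
stretched = [(0.0, 0.0), (6.0, 0.0), (0.0, 8.0)]   # a different shape
```

A small RMSD between predicted and PDB-derived matrices then indicates that the hydrophobic term alone already pins down much of the backbone geometry.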

18. Dezember 1998, 16 Uhr c.t.

Am  Freitag, 18. Dezember um 16 Uhr c.t.
im  Raum Z1.09, Oettingenstr. 67
Es spricht:	Prof. Dr. Peter Clote, Ph.D.
-----------     LFE Theoretische Informatik
                LMU Muenchen

ueber:		Proofs for Sinclair's 'injective mapping' method.

11. Dezember 1998, 16 Uhr c.t.

Am  Freitag, 11. Dezember um 16 Uhr c.t.
im  Raum Z1.09, Oettingenstr. 67

Es spricht:	Prof. Dr. Peter Clote, Ph.D.
-----------     LFE Theoretische Informatik
                LMU Muenchen

ueber:		Protein folding, the Levinthal paradox and
                rapidly mixing Markov chains (Fortsetzung)

Note:        This is the continuation of the seminar of 27 November.
-----        The talk will be self-contained, i.e. one can attend
             without having attended the seminar of 27 November.

In 1994 A.~\v{S}ali, E.~Shakhnovich and M.~Karplus 
modeled protein folding using a 27-bead heteropolymer
on a cubic lattice with normally distributed contact energies.
Using a Monte-Carlo folding algorithm with a local
move set between conformations, \v{S}ali et al. attempted
to answer the Levinthal paradox of how a protein can fold
rapidly, i.e. within milliseconds to seconds, despite
the magnitude of the conformation space
(e.g. approximately $5^{26} \approx 10^{18}$ for the
27-mer).  Letting $t_0(P)$ denote the folding time 
(i.e. {\em first passage time}) and $\Delta(P)$
denote the energy {\em gap} between the
lowest energy $E_{i_0}$(native state) and
second lowest energy $E_{i_1}$ of protein $P$
with normally distributed contact energy,
\v{S}ali, Shakhnovich and Karplus observed that
$\Delta(P)$ is large exactly when $t_0(P)$ is small.
Using Sinclair's notion of rapid mixing 
and his modification of the Diaconis-Stroock 
bound on relative pointwise distance in terms of
the subdominant eigenvalue, we provide the first
theoretical basis for the principal observation of
\v{S}ali, Shakhnovich and Karplus.
Specifically, we show that the mean first passage time
is bounded above by $c_1 \pi_{i_0} \pi_{i_1}+c_2$, where
$\pi_{i_0}$ [resp. $\pi_{i_1}$]
is the Boltzmann probability of the system being in 
the native minimum energy state [resp. second minimum].
It follows that this upper bound {\em decreases} iff the
energy gap $E_{i_1}-E_{i_0}$ {\em increases}.
Our result is actually
proved for pivot moves (rotations) with multiple
occupancy, rather than local moves.
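The interplay between the energy gap and the bound $c_1 \pi_{i_0} \pi_{i_1} + c_2$ can be illustrated numerically (a sketch with hypothetical energies, temperature and constants, not the talk's model):

```python
import math

def boltzmann(energies, T=1.0):
    # Boltzmann probability of each state: pi_i = exp(-E_i / T) / Z
    weights = [math.exp(-e / T) for e in energies]
    Z = sum(weights)
    return [w / Z for w in weights]

def passage_time_bound(energies, c1=1.0, c2=0.0, T=1.0):
    # the abstract's upper bound c1 * pi_{i0} * pi_{i1} + c2, where i0, i1
    # index the lowest and second-lowest energy states
    pi = boltzmann(energies, T)
    i0, i1 = sorted(range(len(energies)), key=lambda i: energies[i])[:2]
    return c1 * pi[i0] * pi[i1] + c2

small_gap = passage_time_bound([0.0, 0.5, 3.0, 3.0])   # gap 0.5
large_gap = passage_time_bound([0.0, 2.0, 3.0, 3.0])   # gap 2.0
```

Widening the gap shrinks $\pi_{i_1}$ and hence the bound, matching the observation that a large energy gap goes with fast folding.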

4. Dezember 1998, 16 Uhr c.t.

Es spricht:	Prof. Dr. Michel de Rougement
-----------     Universite Paris II

ueber:		Interactive proofs on general structures.

The notion of an interactive proof has followed developments in
complexity theory, where computations are made over finite structures.
We investigate its generalization to arbitrary structures, as the
classes P and NP can be generalized (see "Les petits cailloux" by
B. Poizat and the recent book by Blum, Cucker, Shub and Smale on real
computations).

We introduce the classes $IP_{\cal R_+}$ (resp. $IP_{\cal R_{\times}}$)
as the classes of languages that admit an interactive protocol on the
reals when the verifier is a BSS machine with addition (resp. addition
and multiplication). Let $BIP_{\cal R_+}$ (resp. $BIP_{\cal R_{\times}}$)
be their restrictions when only boolean messages can be exchanged
between the prover and the verifier. We prove that the classes
$BIP_{\cal R_+}$ and $PAR_{\cal R_+}$, the class of languages accepted
in parallel polynomial time, coincide. In the case of multiplicative
machines, we show that $BIP_{\cal R_{\times}} \subseteq PAR_{\cal R_{\times}}$.

We also separate $BIP_{\cal R}$ from $IP_{\cal R}$ in both models by
exhibiting a language $L$ which is not in $PAR_{\cal R_{\times}}$ but
is in $IP_{\cal R_{+}}$. As a consequence, we show that additive
quantifier elimination cannot be solved in $PAR_{\cal R_{\times}}$ and
that all boolean languages are in $IP_{\cal R_{+}}$.

27. November 1998, 16 Uhr c.t.

Es spricht:	Prof. Dr. Peter Clote, Ph.D.
-----------     LFE Theoretische Informatik
                LMU Muenchen

ueber:		Protein folding, the Levinthal paradox and
                rapidly mixing Markov chains


20. November 1998, 16 Uhr c.t.

Kein Oberseminar 

13. November 1998, 16 Uhr c.t.

Es spricht:	Dipl.-Math.  Ralph Matthes
----------      LFE Theoretische Informatik
                LMU Muenchen

über:		Short Proofs of Normalization for the simply-typed
                lambda-calculus, permutative conversions and Gödel's T


This is the title of a paper, written jointly with Felix Joachimski,
which we recently submitted.

The abstract of the paper reads:

We study inductive characterizations of the sets of terms, the subset
of strongly normalizing terms and normal forms in order to give short
proofs of weak and strong normalization for the simply-typed
lambda-calculus and for an extension by sum types with permutative
conversions. In contrast to the strong computability/candidate style a
la Tait and Girard this proof can be formalized in primitive recursive
arithmetic. By introducing an infinitely branching inductive rule the
method extends to Gödel's T.

The talk will present the content of the paper. Particular attention
is paid to the conceptual decisions described in the long
introduction.

The text is available at


        Freitag, 25. 7. 97    14 Uhr c.t.

Es spricht:	Steve Bellantoni

über:		Recursion, Ramification, and the Grzegorczyk Hierarchy


Bellantoni and Cook gave a recursion-theoretic characterization of 
the polynomial-time computable functions that does not refer directly 
to polynomials or to time-bounded computation. Much earlier results
of Schwichtenberg and of Mueller showed that the Grzegorczyk classes
at and above the elementary level can be characterized by simply 
counting the nesting depth of recursion (i.e. using the Heinermann 
classes). Now, new work by Bellantoni and Niggl shows how to 
integrate these two different approaches. 

The talk proceeds in three sections. In the first section I shall 
describe the Bellantoni-Cook result and explain its connection to
ideas of predicativity and ramification. The mechanism used for the
characterization is distinguished from a type-theoretic approach. 
In the second part of the talk, I will describe the Bellantoni-Niggl
ranking of recursions and explain how it nicely categorizes all 
primitive recursive functions with respect to their computational
complexity.  I also introduce the 'Grzegorczyk polynomials' which 
are used to prove the result. In the third section I will describe 
a ramified model of arithmetic and show how this model, unlike the 
standard model of arithmetic, enables one to semantically distinguish 
between two different algorithms for the same natural-number function. 
In the ramified model, the computational complexity of an algorithm 
(as specified by its position in the Bellantoni-Niggl ranking) is 
directly related to the extension of the computed function.

        Mittwoch, 23. 7. 97    13 Uhr c.t.
Es spricht:
          Tom Santner
          Statistician, Ohio State University

Titel: Gibbs Samplers in the discrete case

	This talk will introduce successive substitution sampling algorithms,
	and in particular the Gibbs algorithm, as methods for sampling from a
	given high-dimensional joint probability distribution. In statistical
	applications, the joint distribution of interest may consist entirely
	of discrete random variables, entirely of continuous random variables,
	or of a mixture of discrete and continuous random variables. The
	dimension of the joint distribution can be tens of thousands. The
	strategy of these methods will be explained, and conditions given
	under which the algorithms are guaranteed to converge.
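A minimal discrete example of the Gibbs algorithm in Python (a sketch with a made-up two-variable joint distribution, not from the talk): each step resamples one variable from its conditional given the other.

```python
import random

# made-up unnormalized joint distribution over two binary variables
joint = {(0, 0): 4.0, (0, 1): 1.0, (1, 0): 1.0, (1, 1): 4.0}

def p_one(var, other):
    # conditional probability that `var` equals 1 given the other variable,
    # obtained by renormalizing one slice of the joint
    if var == "x":
        w0, w1 = joint[(0, other)], joint[(1, other)]
    else:
        w0, w1 = joint[(other, 0)], joint[(other, 1)]
    return w1 / (w0 + w1)

def gibbs(n_samples, burn_in=500):
    x, y = 0, 0
    samples = []
    for t in range(burn_in + n_samples):
        x = 1 if random.random() < p_one("x", y) else 0   # resample x | y
        y = 1 if random.random() < p_one("y", x) else 0   # resample y | x
        if t >= burn_in:
            samples.append((x, y))
    return samples

random.seed(1)
samples = gibbs(20000)
freq_x1 = sum(x for x, _ in samples) / len(samples)   # true marginal is 0.5
```

The empirical marginal of x approaches the true marginal 0.5 as the chain converges; the convergence conditions mentioned in the abstract ensure this for well-behaved joints.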

        Freitag, 11. 7. 97    14 Uhr c.t.

Es spricht:
          Thorsten Altenkirch

Report from LICS 97

I will report on a selection of the talks at LICS 97 (29 June to
2 July in Warsaw) (programme of LICS 97)

        Freitag, 4. 7. 97    14 Uhr c.t.
Es spricht:
              Uwe Ohlers
              TU Berlin

Statistical models for DNA sequence analysis ---
in search of promoters and other functional elements

For several years now, the genomes of the most diverse organisms have
been sequenced at an exponentially growing rate. However, if only
because of the effort involved, these DNA sequences cannot be
deciphered by experimental investigation alone. It is therefore
urgently necessary to analyze the resulting flood of information with
the help of the computer, in order to obtain hints at interesting
sites.

Such sites are, for example, functional elements such as the beginning
and the end of a gene, coding and non-coding regions, or promoters,
which play an essential role in gene regulation. However, the
variability of these elements causes considerable difficulties: as a
rule, the elements of a class follow only a rough blueprint and do not
agree with each other completely. Statistical classification methods,
as used for similar problems in speech recognition, for example,
suggest themselves here. They offer the possibility of training models
of functional elements automatically and of analyzing DNA sequences
with them.

Starting from simple string search methods, an overview of some
approaches to modelling promoters is given: hidden Markov models
(HMMs), language models ($N$-grams), and generalized HMMs.

		   Freitag, 13. 12. 96, 14 Uhr c.t.

			     Es spricht:

			   Sergei Vorobyov
		 Max-Planck-Institut fuer Informatik

TITLE: New complexity and undecidability results in typed lambda calculi

ABSTRACT. We settle the new lower bounds for several problems in the
simply typed lambda calculus with a single constant base type and
prove the undecidability of the higher-order matching problem in the
simple theories of type assignment. The talk will consist of two
parts, on complexity and on undecidability.

1. We show that a rudimentary fragment of Leon Henkin's theory of
hereditarily finite sets (1963) has an exponentially growing tower of
twos as a lower bound. This is the strongest currently known lower
bound for a "natural" decidable theory. The best previously known
lower bound for the theory was just a logarithmically growing tower of
twos (Fisher, Meyer, 74). It was also conjectured that any decidable
theory can be decided within time bounded by a linearly growing tower
of twos. The proof is based on the Compton-Henson uniform lower bound
method.

As applications we immediately get: 1) a new lower bound for the
beta-eta equality in the simply typed lambda calculus, which matches
Tait's upper bound (linearly growing tower of twos) 2)
Schwichtenberg's lower bound on normalization in the simply typed
lambda calculus, 3) a new lower bound for the higher-order matching
problem (almost linearly growing tower of twos), 4) a stronger version
of the Rice-Statman theorem.

2. Undecidability of the higher-order matching in the Curry-style
   simply typed lambda calculus

Simple theories of type assignment (essentially due to Hindley) are
Curry-style variants of the simply typed lambda calculus that assign
types to untyped lambda-terms by means of the well-known arrow
introduction and elimination rules (Hindley, Barendregt,
Mitchell). Simple theories of type assignment form a core of the ML
type system. We prove the following surprising

THEOREM. Given closed untyped normal lambda terms lambda x1...xk.s and
t, which can be assigned some types S1->...->Sk->T and T respectively
in the simple theories of type assignment, it is undecidable whether
there exist closed terms s1,...,sk of types S1,...,Sk such that
s[s1/x1,...,sk/xk]=t, even if the orders of Si's do not exceed 3.

This undecidability result should be contrasted to the decidability of
the third-order matching in the Church-style simply typed lambda
calculus with a single constant base type (Dowek, 1992).
The proof is given by reduction from the recursively inseparable sets
of unsatisfiable and finitely satisfiable sentences of the first-order
theory of a binary relation (Trakhtenbrot, 1953, Vaught, 1960). The
central idea is that every finitely satisfiable sentence about a
binary relation has a lambda definable realization by means of
third-order functionals over projection functions, whereas an
unsatisfiable one does not have any.

        Freitag, 14. 3. 97    14 Uhr c.t.

Es spricht:

        Dr. Anton Setzer
        Mathematisches Institut, 
        Uni München 


        Proof theory of Martin-Löf type theory (IV)

        Mittwoch, 12. 3. 97    16 Uhr c.t. <--- **********!!!

Es spricht:

        Dr. Jan Johannsen
        Universität Erlangen-Nürnberg
	  Model-theoretic independence proofs for weak
	  fragments of bounded arithmetic

	A property of substructures of models of arithmetic, called
        length-initial, is introduced, such that sharply bounded
        formulas are absolute between a model and its length-initial
        substructures.
	This is used to prove independence results for very weak
	fragments of arithmetic, namely by constructing length-initial
	submodels of a given model.

        Freitag, 7. 3. 97    14 Uhr c.t.

Es spricht:

        Dr. Anton Setzer
        Mathematisches Institut, 
        Uni München 


        Proof theory of Martin-Löf type theory (III)

        Freitag, 28. 2. 97    16 Uhr c.t. <- **** new time ****

Es spricht:

        Dr. Anton Setzer
        Mathematisches Institut, 
        Uni München 


        Proof theory of Martin-Löf type theory (II)

        Freitag, 21. 2. 97    14 Uhr c.t.

Es spricht:

        Dr. Anton Setzer
        Mathematisches Institut, 
        Uni München 


        Proof theory of Martin-Löf type theory - Introduction

        Freitag, 14. 2. 97    14 Uhr c.t.

Es spricht:

        Dr. Karl-Heinz Niggl,
        Mathematisches Institut, 
        Uni München 


        Subrecursive functions on partial sequences (II)

        Freitag, 7. 2. 97    14 Uhr c.t.

Es spricht:

        Dr. Karl-Heinz Niggl,
        Mathematisches Institut, 
        Uni München 


        Subrecursive functions on partial sequences


Da die partiellen stetigen Funktionen des Typs 1 nicht unter 
Wertverlaufsrekursion abgeschlossen sind, ist der Wunsch nach einer 
Erweiterung der partiellen primitiv rekursiven Funktionen des Typs 1 
(ppr^1) auf subrekursive Funktionen mit ``partiellen Sequenzen'' 
als zus"atzlichen Argumenten sehr nat"urlich. 

Ohne die zugrundeliegende Bereichsstruktur ver"andern zu m"ussen, werden 
partielle Sequenzen durch Typ 1 Objekte f interpretiert, deren Werte 
h"ochstens im Argumentbereich 0 bis f(0) definiert sind, d.h. die L"ange 
solcher partieller Sequenzen f wird durch f(0) kontrolliert. Subrekursive
Funktionen auf partiellen Sequenzen werden folglich als Typ 2 Objekte 

Extending the ppr^1 functions, a domain-theoretic notion of
``partial primitive recursive functional of type 2'' (ppr^2)
is presented, which relies on a non-monotone coding of type-1 objects
interpreted as partial sequences.

The ppr^2 functionals can also be presented, without reference to codings
of partial sequences, by means of a type-2 function algebra generated
from continuous initial functionals by continuity-preserving schemata.
One of these schemata is a generalisation of the scheme for
``parallel bounded parallel search'' to type 2. The
function algebra is closed under a canonical notion
of course-of-values recursion.

Stratifying the ppr^2 functionals into complexity classes E^{*2}_n,
the so-called n-th ``partial Grzegorczyk classes of type 2'',
and the function algebra into recursion classes R^{*2}_n in the style
of Heinermann, the following theorem can be shown, generalising the
results of Schwichtenberg, Müller, and Niggl:

             For n >= 2:  E^{*2}_{n+1} = R^{*2}_n.

This lays the groundwork for a simple programming language PPR^2
in the style of Plotkin's PCF, in which exactly the ppr^2 functionals
can be computed. This shows that the presented ``external'',
restricted notion of Scott computability, ppr^2, has an
``internal'' characterisation via PPR^2.

        Friday, 13. 12. 96, 2 p.m. c.t.

Speaker:

	Christa Mauer

        Boolean complexity and st-connectivity
        (Beame, Impagliazzo, Pitassi)

        Monday, 25. 11. 96, 2:30 p.m.

Speaker:

	Rolf Backofen

        Constraint languages and protein folding

        The talk addresses the problem of folding proteins on a
        simple lattice model, as introduced in the literature.
        In this talk I will present ideas for applying
        constraint programming techniques.
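
A standard simple lattice model of the kind the abstract alludes to is the HP model, in which a fold is a self-avoiding walk on a square lattice and its quality is the number of contacts between non-consecutive hydrophobic residues. The following hypothetical Python sketch scores such a fold (the HP model is an assumption on my part; the talk's actual model may differ, and all names are illustrative):

```python
# Hedged sketch: scoring a protein fold in the 2D HP lattice model.
# sequence is a string over {'H', 'P'}; fold is a list of (x, y) lattice
# points, one per residue, forming a self-avoiding walk.

def hh_contacts(sequence, fold):
    """Count contacts between non-consecutive H residues on the lattice."""
    assert len(sequence) == len(fold)
    assert len(set(fold)) == len(fold), "fold must be self-avoiding"
    pos = {p: i for i, p in enumerate(fold)}   # lattice point -> residue index
    contacts = 0
    for (x, y), i in pos.items():
        if sequence[i] != 'H':
            continue
        # Only look right and up, so each neighbouring pair is counted once.
        for q in ((x + 1, y), (x, y + 1)):
            j = pos.get(q)
            if j is not None and sequence[j] == 'H' and abs(i - j) > 1:
                contacts += 1
    return contacts

# An L-shaped fold of HPHH: the first and last H are lattice neighbours.
print(hh_contacts("HPHH", [(0, 0), (1, 0), (1, 1), (0, 1)]))  # prints 1
```

A constraint-programming approach would search over such self-avoiding walks while propagating bounds on the achievable number of H-H contacts.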

        Monday, 11. 11. 96, 4 p.m. c.t.

Speaker:

	Peter Clote

        (von Haeseler, Waterman)


Thorsten Altenkirch