SYNASC03

Invited lecturers

Adina Magda Florea
University Politehnica of Bucharest, Romania
Intelligent Agents on the Web (slides)

Helmut Veith
Technical University of Munich, Germany
Model Checking: from Hardware to Software

Algorithm Invention and Verification by Lazy Thinking

Some time ago, we introduced the paradigm of computer-supported theory exploration as an alternative to the isolated theorem proving paradigm that prevailed in formal, computer-supported mathematics over the past decades, and we proposed various approaches to systematic, computer-supported mathematical theory exploration; see, for example, our paper at SYNASC 2001. In this paper, we study algorithm invention and verification as a specific variant of systematic theory exploration and propose the "lazy thinking paradigm" for inventing and verifying algorithms automatically; i.e., for a given predicate logic specification of the problem in terms of a set of operations (functions and predicates), the method produces an algorithm that solves the problem, together with a correctness proof for the algorithm. In the ideal case, the only information to be provided by the user consists of the formal problem specification and a complete knowledge base for the operations that occur in the problem specification.
Roughly, the method proceeds as follows:

- The method tries out, one after the other,  various algorithm types that are stored in a library of algorithm types for the given domain. An algorithm type is a predicate logic formula that describes an algorithm (recursively) in terms of unspecified subalgorithms together with a proof method appropriate for (recursive) proofs of properties of the algorithm.

- For a chosen algorithm type, the proof method is called for the correctness theorem. Typically, this proof will fail because nothing is known about the unspecified subalgorithms.

- From the failing proof situation, a conjecture-generating algorithm produces lemmata that would enable the prover to complete the proof successfully. These lemmata describe certain requirements on the subalgorithms; they are added to the knowledge base, and the proof of the correctness theorem is attempted again. The proof will now get past the previously failing situation and will either succeed or fail at some later point.

- This procedure is iterated until the proof of the correctness theorem goes through (or some time limit is exhausted).

After termination, the following will be true: under the assumption that all ingredient subalgorithms satisfy the requirements described in the generated lemmata, the main algorithm satisfies the problem specification.
At this stage, there are two possibilities:

- Either algorithms that satisfy the requirements on the subalgorithms described in the lemmata are already available in the initial knowledge base, and we are done: a correct algorithm has been synthesized for the initial problem and its correctness proof has been generated.

- Or such subalgorithms can themselves be synthesized by another application of the method in the next round of the procedure.
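
As a rough illustration of this loop, here is a hypothetical Python sketch; it is not the actual Theorema implementation, and all names in it are invented:

    from dataclasses import dataclass

    @dataclass
    class Proof:
        succeeded: bool
        failure: object = None          # the failing proof situation, if any

    def lazy_thinking(spec, kb, algorithm_types, prove, conjecture, rounds=10):
        """Try each algorithm type; grow the knowledge base from failing proofs."""
        for algo_type in algorithm_types:   # library of algorithm types
            lemmata = []                    # requirements on the subalgorithms
            for _ in range(rounds):         # stands in for the time limit
                proof = prove(algo_type, spec, kb + lemmata)
                if proof.succeeded:
                    # correct relative to the requirements in `lemmata`, which
                    # are discharged by known or newly synthesized subalgorithms
                    return algo_type, lemmata
                lemmata += conjecture(proof.failure)
        return None                         # no algorithm type worked

Here `prove` and `conjecture` stand for the type-specific proof method and the conjecture-generating algorithm described above.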

We illustrate the method and its implementation by case studies carried out in our Theorema system.

The distinctive features of our algorithm synthesis method, as compared to other methods, are:

- the use of algorithm type libraries

-  the crucial role of failing proofs and conjecture generation from failing proofs

-  the decomposition of theory exploration and, in particular, algorithm invention and verification into layers

- the naturalness of the approach, which makes it attractive both for complete or partial automation in systems for formal mathematics and for use as a strategy for human invention and teaching


 
Towards Reusable and Shareable Courseware: Topic Maps-based Digital Libraries

The effectiveness of instructors and the efficiency of learners engaged in open learning and teaching tasks in Web-based course support environments depend crucially on the support they receive. Two main groups of related problems concern the findability of learning resources and the reusability and shareability of digital educational repositories.
A solution to these problems is viable only if the digital content of the repositories is standards-based. We address the standardization of e-learning repositories not only from a technological perspective but also from a knowledge perspective; the latter entails using specialized domain ontologies for the description and classification of the digital library content.

The focus of my talk is on a general framework that facilitates the building of ontology-aware digital course libraries intended to provide trusted reference information coupled with support for resource location and domain comprehension. The framework utilizes the advantages of concept-based and standards-based content organization. The new ISO standard, Topic Maps, will be used as a means to organize and retrieve information in the digital course libraries, which will guarantee their shareability, reusability, mergeability, and interoperability.

The research related to my talk is aimed at developing concept-based, ontology-aware courseware in support of learning tasks in Web-assisted or independent learning courses. Such courseware implies organizing learning content around a well-structured network of domain concepts and relationships (with sound ontological commitment), which permits using that network for the classification, description, and indexing of learning resources.
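
As a rough illustration of the Topic Maps building blocks such a framework rests on (topics, associations, and occurrences), here is a toy Python sketch; the concepts, relation name, and resource URL are invented for illustration:

    from dataclasses import dataclass, field

    @dataclass
    class Topic:
        name: str
        occurrences: list = field(default_factory=list)   # resource URLs

    # topics: domain concepts of the course ontology (invented examples)
    topics = {t: Topic(t) for t in ("recursion", "induction", "lists")}

    # associations: typed relations forming the concept network
    associations = [("induction", "proves-properties-of", "recursion")]

    # occurrences: learning resources classified by domain concept
    topics["recursion"].occurrences.append("http://example.org/lecture3.pdf")

    def resources_for(name):
        """Resource location: material for a concept and its neighbours."""
        related = [b for a, _, b in associations if a == name]
        related += [a for a, _, b in associations if b == name]
        return {t: topics[t].occurrences for t in [name] + related}

    print(resources_for("induction"))   # finds the resource via the network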


 

Intelligent Agents on the Web

Intelligent agents and multi-agent systems represent a new paradigm of distributed and interconnected computation of autonomous entities. Nowadays, an increasing number of software systems are being viewed in terms of autonomous agents in which the software components act more like "individuals" or agents, rather than just "parts". Intelligent agents offer powerful mechanisms for quickly modeling, creating and deploying complex applications that readily adapt to real world needs. 

The objective of this tutorial is to give an overview of existing agent and multi-agent system applications on the Web. Three broad categories of agents are covered: agents for e-commerce, information agents, and agents for e-learning. The tutorial surveys several noteworthy existing systems and applications and, for some of them, gives insight into their technical details.

The tutorial is intended for researchers and PhD students who are interested in finding out more about specific application domains of this new computational paradigm, but also for software developers and managers who are interested in the application potential of an emerging technology.


 
Constructing Origami on the Internet

Providing a service on the Internet is a next step towards a knowledge-based society.

We would like to have services of large-scale numeric and symbolic computing, theorem proving, data mining, etc. offered by "knowledge providers" on the Internet. In this talk I will report on our continued work on distributed constraint solving, with the goal of contributing to the realization of such a knowledge-based society. We are developing, as an application of our constraint functional logic programming system, a web-based computer Origami site, where interested mathematicians, computer scientists, and Origami artists can visit and perform computer Origami constructions. Computer Origami poses interesting research and development problems in the algorithms of symbolic algebraic constraint solving and computational geometry, and in tools for server-side systems programming. I will discuss our findings from this research and development, and I also plan to give a demonstration of computer Origami.
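
To give a flavour of the computational geometry involved, here is an illustrative Python sketch (not the constraint solver described above): one of the basic fold operations places a point p onto a point q, the crease is the perpendicular bisector of pq, and folding reflects points across that crease.

    def fold_line(p, q):
        """Fold that places p onto q: the perpendicular bisector of pq,
        returned as (a, b, c) for the line a*x + b*y = c."""
        (px, py), (qx, qy) = p, q
        a, b = qx - px, qy - py                     # normal vector along pq
        c = (qx**2 - px**2 + qy**2 - py**2) / 2.0   # passes through the midpoint
        return a, b, c

    def reflect(point, line):
        """Image of a point after folding the paper along the line."""
        (x, y), (a, b, c) = point, line
        t = 2.0 * (a * x + b * y - c) / (a * a + b * b)
        return (x - a * t, y - b * t)

    crease = fold_line((0.0, 0.0), (2.0, 0.0))   # fold (0,0) onto (2,0): x = 1
    print(reflect((0.0, 0.0), crease))           # -> (2.0, 0.0)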

 

Model Checking: from Hardware to Software

In contrast to hardware, which is finite-state and built from relatively few simple principles, software systems generally give rise to infinite state spaces and are described in terms of programming languages involving rich semantic concepts. The challenges of software verification can be addressed only by a combined effort of different communities including, most notably, model checking, theorem proving, symbolic computation, static analysis, compilers, and abstract interpretation. We focus on a recent family of tools which use predicate abstraction and theorem proving to extract a finite-state system amenable to model checking.
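
The following toy Python sketch illustrates the predicate abstraction idea in general terms; it does not model any particular tool, and the program and predicates are invented. Concrete states are mapped to the truth values of finitely many predicates, so the resulting abstract system is finite-state and can be model checked.

    from itertools import product

    # predicates over the program state (invented for illustration)
    predicates = [lambda s: s["x"] > 0, lambda s: s["x"] == s["y"]]

    def abstract(state):
        """Map a concrete state to an abstract state: one bit per predicate."""
        return tuple(p(state) for p in predicates)

    def step(state):
        """Toy concrete program: the single statement  x := x - 1."""
        return {"x": state["x"] - 1, "y": state["y"]}

    # Enumerate abstract transitions from concrete witnesses. A real tool
    # would instead ask a theorem prover whether each abstract transition
    # has a concrete counterpart, rather than sampling states.
    transitions = set()
    for x, y in product(range(-2, 3), repeat=2):
        s = {"x": x, "y": y}
        transitions.add((abstract(s), abstract(step(s))))

    for a, b in sorted(transitions):    # the finite abstract state system
        print(a, "->", b)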


A brief history of the Krylov methods

The talk covers two main areas. The first is the need to solve systems of linear simultaneous equations; we give two simplified examples of the sort of sparse systems that the methods are designed to solve. The second is a short description of the principal differences between typical iterative methods and Krylov methods, and a characterisation of the Krylov methods. The talk is aimed at the non-mathematician; not even knowledge of matrix algebra is assumed.
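
For readers who nevertheless want to see one, here is a minimal Python sketch of conjugate gradients, the best-known Krylov method (illustrative only, not material from the talk); the k-th iterate lies in the Krylov subspace spanned by r0, A r0, ..., A^(k-1) r0.

    import numpy as np

    def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
        """Solve A x = b for symmetric positive definite A."""
        x = np.zeros_like(b)
        r = b - A @ x                    # residual r0 spans the Krylov subspace
        p = r.copy()                     # first search direction
        rs = r @ r
        for _ in range(max_iter):
            Ap = A @ p
            alpha = rs / (p @ Ap)        # exact step length along p
            x += alpha * p
            r -= alpha * Ap
            rs_new = r @ r
            if np.sqrt(rs_new) < tol:
                break
            p = r + (rs_new / rs) * p    # next A-conjugate direction
            rs = rs_new
        return x

    A = np.array([[4.0, 1.0], [1.0, 3.0]])   # small SPD example
    b = np.array([1.0, 2.0])
    print(conjugate_gradient(A, b))          # approx. [0.0909, 0.6364]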

 
