Abstract: In this talk we study algorithms for computing normal forms for matrices of Ore polynomials while controlling coefficient growth. By formulating row reduction as a linear algebra problem, we obtain a fraction-free algorithm for row reduction for matrices of Ore polynomials. The algorithm allows us to compute the rank and a basis of the left nullspace of the input matrix. When the input is restricted to matrices of shift polynomials and ordinary polynomials, we obtain fraction-free algorithms for computing row-reduced forms and weak Popov forms. These algorithms can be used to compute a greatest common right divisor and a least common left multiple of such matrices. Our fraction-free row reduction algorithm can be viewed as a generalization of subresultant algorithms. The linear algebra formulation allows us to obtain bounds on the size of the intermediate results and to analyze the complexity of our algorithms. We then make use of the fraction-free algorithm as a basis to formulate modular algorithms for computing a row-reduced form, a weak Popov form, and the Popov form of a polynomial matrix. By examining the linear algebra formulation, we develop criteria for detecting unlucky homomorphisms and determining the number of homomorphic images required.
Parts of this work have been done jointly with George Labahn (University of Waterloo) and Bernhard Beckermann (Université des Sciences et Technologies de Lille).
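The Ore setting is beyond a short sketch, but the classical commutative special case of fraction-free row reduction, Bareiss's algorithm over the integers, illustrates the coefficient-growth control the abstract refers to: every division is exact, so no fractions ever appear. A minimal sketch (not the authors' algorithm; pivoting is omitted):

```python
def bareiss(M):
    """Fraction-free (Bareiss) row reduction of an integer matrix.

    All intermediate entries stay integral because each division by the
    previous pivot is exact; for a square matrix the last pivot equals
    the determinant.
    """
    M = [row[:] for row in M]          # work on a copy
    n, m = len(M), len(M[0])
    prev = 1                           # pivot of the previous step
    for k in range(min(n, m) - 1):
        if M[k][k] == 0:               # naive sketch: no pivoting
            raise ValueError("zero pivot; pivoting not implemented")
        for i in range(k + 1, n):
            for j in range(k + 1, m):
                # cross-multiplication followed by an exact division
                M[i][j] = (M[k][k] * M[i][j] - M[i][k] * M[k][j]) // prev
            M[i][k] = 0
        prev = M[k][k]
    return M

A = [[2, 3, 1],
     [4, 1, -3],
     [-2, 5, 2]]
R = bareiss(A)
print(R[2][2])   # last pivot = det(A)
```

The exact divisions are what make the subresultant connection visible: each entry of the reduced matrix is a minor of the input, which is also the source of the size bounds mentioned above.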
Abstract: Let A be a matrix over a polynomial ring R[x]. It is known that we can compute various normal forms of A, for example the Smith form, the Jordan form and the Hermite form. But in many cases the degrees of the entries in these normal forms are much bigger than those in A. Popov forms avoid this disadvantage and play an important role in control theory. We define and discuss Popov forms of matrices whose entries are skew polynomials and report some progress in their computation.
Abstract: The transposition principle, sometimes referred to as Tellegen's theorem, is a set of transformation rules for linear programs, originating in linear circuit design and analysis in the 1950s. In computer algebra, despite a recurrent need for transposed algorithms, Tellegen's principle is not used systematically; specific algorithms are often developed to circumvent its use, and few practical implementations rely on it. The primary goal of this talk is to demonstrate that the transposition principle can, and should, be strictly complied with in practice.
We will describe explicit transposed versions of basic polynomial algorithms and report on their successful implementation in the NTL C++ library. This is joint work with G. Lecerf and É. Schost. We will also discuss a range of applications of Tellegen's principle in designing fast algorithms for computing with algebraic numbers and for solving polynomial systems. This is joint work with Ph. Flajolet, B. Salvy and É. Schost.
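As a toy illustration of the principle (not one of the NTL routines discussed in the talk): the linear map "multiply by a fixed polynomial a" can be transposed mechanically by reversing its data flow, and the result checked against the explicit transposed matrix:

```python
def mul(a, x):
    """y = a * x: polynomial product, i.e. the linear map x -> conv(a, x)."""
    y = [0] * (len(a) + len(x) - 1)
    for i, ai in enumerate(a):
        for j, xj in enumerate(x):
            y[i + j] += ai * xj
    return y

def mul_t(a, n, y):
    """Transpose of mul(a, .) acting on length-n inputs.

    Obtained mechanically by reversing the data flow of mul: the (i+j)-th
    output fed by x[j] becomes the j-th output fed by y[i+j].
    """
    x = [0] * n
    for i, ai in enumerate(a):
        for j in range(n):
            x[j] += ai * y[i + j]
    return x

a = [1, 2, 3]            # fixed polynomial, coefficients low to high
n = 4                    # input length of the linear map
rows = len(a) + n - 1
# Explicit matrix of x -> mul(a, x), built column by column, for verification
M = [[0] * n for _ in range(rows)]
for j in range(n):
    col = mul(a, [1 if k == j else 0 for k in range(n)])
    for i in range(rows):
        M[i][j] = col[i]
y = [5, -1, 0, 2, 7, 3]  # arbitrary vector of length rows
direct = [sum(M[i][j] * y[i] for i in range(rows)) for j in range(n)]
assert mul_t(a, n, y) == direct
```

The transposed loop runs in the same time as the original, which is the content of the principle: a transposed algorithm is obtained for free, with the same complexity.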
Abstract: We consider the property of a radical differential ideal of being characterizable and give a criterion for characterizability. This criterion is based on the known algorithms for decomposing radical differential ideals into characterizable components. The Ritt problem and its restriction to our particular case are discussed. Furthermore, we discuss differential algebraic properties of elements of radical ideals that are reducible to zero, and present some sufficient conditions for an autoreduced set to be a characteristic set of a radical differential ideal in Kolchin's sense. We discuss the method for computing characteristic sets in the ordinary case developed by Brahim Sadik. We also introduce and investigate a special class of definable radical differential ideals.
Abstract: Design problems in system and control theory require determining feasible values of certain parameters satisfying the given design specifications. Ideally, one wants to obtain the feasible regions of the parameters which satisfy the given specifications in a parameter space. A method that obtains such feasible regions in a parameter space is referred to as a parameter space approach. In the last decade it has been shown that a parameter space approach accomplished by quantifier elimination (QE) is effective for problems such as robust control synthesis, multi-objective design, and hybrid dynamical system design, which are difficult to solve using numerical methods. In this talk, we show how various control system design problems are solved by a parameter space approach based on QE, and how we achieve efficiency of the QE-based method, with concrete control design examples. The examples show that combining the reduction of control problems to particular formulas with the use of QE algorithms specialized to such formulas works efficiently for practical control design problems. We also report the present state of the development of our Maple package for real algebraic constraints, called "SyNRAC". SyNRAC includes implementations of the specialized QE algorithms used to solve the control problems.
Parts of this work have been done jointly with Prof. Shinji Hara (University of Tokyo) and Prof. Volker Weispfenning (University of Passau).
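SyNRAC itself is a Maple package; as a rough stand-in, the following Python/SymPy fragment shows the kind of output a parameter space approach delivers on a toy example. The closed-loop characteristic polynomial is invented for illustration, and since a degree-2 real polynomial is Hurwitz-stable exactly when all coefficients are positive, the specification is already quantifier-free, so `reduce_inequalities` only plays the final simplification role of a QE engine here:

```python
import sympy as sp

k = sp.symbols('k', real=True)
# Hypothetical closed-loop characteristic polynomial:
#   s^2 + (2 + k) s + (1 + k)
# Stability specification (all coefficients positive) as constraints on the
# design parameter k; the result is the feasible region in the k-space.
feasible = sp.reduce_inequalities([2 + k > 0, 1 + k > 0])
print(feasible)
```

For higher-degree polynomials or robustness specifications the constraints contain quantified variables, and that is where genuine QE algorithms, such as those specialized in SyNRAC, are needed.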
Abstract: We study the structure of Gröbner bases for zero-dimensional radical ideals. These ideals are the vanishing ideals of finite sets of points in affine n-dimensional space. We are interested in monomial orders that "eliminate" one variable, say z, which corresponds to the projection map that sends points in n-space to (n-1)-space by projecting out the z-coordinate. We show that any minimal Gröbner basis of an ideal under such an elimination order exhibits the geometric structure of the variety defined by the ideal. In particular, the degrees in z of the polynomials in the Gröbner basis match the fibre sizes of the projection map, and the leading coefficients with respect to z of the polynomials of given z-degree give Gröbner bases for the projected points with the corresponding fibre sizes.
Joint work with Shuhong Gao and Jeff Stroomer.
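A small hypothetical example (not taken from the talk) makes the statement concrete: three points in the (x, z)-plane whose projection onto the x-line has fibres of sizes 2 and 1, computed with SymPy:

```python
import sympy as sp

x, z = sp.symbols('x z')
# Points (0,0), (0,1), (1,0): the projection forgetting z has a fibre of
# size 2 over x = 0 and of size 1 over x = 1.  These generators cut out
# exactly those three points:
gens = [x**2 - x, x*z, z**2 - z]
G = sp.groebner(gens, z, x, order='lex')   # lex with z > x eliminates z
for g in G.exprs:
    print(sp.degree(g, z), g)
```

The z-degrees occurring in the basis reflect the fibre structure: the degree-0 element x**2 - x is a Gröbner basis for the two projected points, and the elements of positive z-degree carry the fibre sizes in their degrees and leading coefficients.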
Abstract: Automatic reasoning is applied to building design problems with a large number of highly standardized conditions, such as non-singular housing.
PRELIMINARIES: During the 20th century, several studies and experiences regarding mass-produced houses, industrialization and prefabricated elements have been carried out. These antecedents were studied prior to designing this system, in order to determine a large (but finite) number of house layout schemes that could form the set of possible proposals. A modular structure was adopted. It uses modules that are all based on a 3.60-meter span bay. It considers the inclusion of a set of predefined service rooms (bathrooms, kitchen, storage spaces, boiler room...) and the adoption of compatible wall panels and closing systems. The schemes were developed for the (isolated) house typology, for houses of one or more floors.
DESCRIPTION OF THE SYSTEM: The schemes have been grouped into basic types, the relative position of the rooms (aligned, around a court, around the service rooms...) being the basic criterion for the classification. Each scheme implies a set of characteristics: area occupied by the building, number of usable square meters (discounting the width of walls), number of floors, and total built area. For each basic type, a matrix of data has been prepared, in which the data are organized by rows and columns. Each row represents a certain house layout scheme; each column contains the values of a certain characteristic for the different kinds of houses considered. A matrix-based approach has been chosen because there are many possible schemes and a large amount of data involved, and because there is no really deep extraction of knowledge or interference among types (which could provoke inconsistency problems). A similar system that takes into account the town planning laws of Madrid was previously developed using a Gröbner-basis-based inference engine; it followed the same kind of approach, and the inference engine was underused.
Maple was chosen because it is a user-friendly system that provides comfortable and complete programming and matrix-handling features. The first data the user has to introduce are those related to the climate and the specific plot (dimensions, hours of direct sunlight, views, noise sources...). Based on these data, the system selects a "best" basic type (and, consequently, one of the prepared data matrices). Afterwards, the user has to introduce information about the number of members and special needs of the family group. A few other technical data are required (which can be provided by a knowledgeable user or an architect), such as the building regulations concerning the plot. All these data are processed in order to select the specific house layout scheme, i.e., the row of the matrix that fits best. The process can be iterated in order to obtain other appropriate schemes. If the user does not like the basic type, or no scheme can be found in the "best" basic type, a "second option" basic type is used instead.
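The row-selection step can be sketched in a few lines; the actual system is implemented in Maple, and all column names, constraints and thresholds below are invented for illustration:

```python
# Hypothetical sketch of the row-selection step: each row of a basic type's
# data matrix is one house layout scheme, each column one characteristic.
COLUMNS = ["plot_area", "usable_m2", "floors", "built_area"]

schemes = [            # one row per layout scheme of the chosen basic type
    [180.0, 95.0, 1, 120.0],
    [140.0, 110.0, 2, 130.0],
    [200.0, 125.0, 2, 155.0],
]

def feasible(row, max_plot_area, min_usable_m2):
    """Hard constraints coming from the plot and the family's needs."""
    return row[0] <= max_plot_area and row[1] >= min_usable_m2

def rank_schemes(max_plot_area, min_usable_m2):
    """Return the feasible schemes, best first (here: most usable m2).

    Iterating over the returned list yields the alternative schemes
    proposed when the process is repeated.
    """
    rows = [r for r in schemes if feasible(r, max_plot_area, min_usable_m2)]
    return sorted(rows, key=lambda r: -r[1])

best = rank_schemes(max_plot_area=190.0, min_usable_m2=100.0)
print(best[0])   # the scheme proposed first
```

An empty result at this point would correspond to falling back to the "second option" basic type, i.e. to another prepared data matrix.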
CONCLUSIONS: Expert systems for choosing mass-produced house models allow the buyer to be involved in the design process (for instance, to easily compare different alternatives) in order to obtain a tailor-made house at a reasonable price. As far as we know, no similar systems exist. Although the system is simple from the computational point of view, a very important part of this work consisted in the prior organization of the architectural knowledge.
ACKNOWLEDGMENTS: Partially supported by research project TIC-2000-1368-C03-03, Ministry of Science and Technology (MCYT), Spain.