Abstract: We consider non-commutative algebras consisting of non-linear invariantized differential equations with invariant differential operators acting on them. The structure differs from standard differential algebras in that the application of the invariant differential operators introduces error terms. By close inspection of the error terms we are able to prove that an extended non-commutative version of the differential Gr\"obner basis algorithm terminates in algebras of differential invariants. Applications include classical invariant theory, geometric integration and invariant numerical schemes.
Abstract: Any univariate polynomial whose coefficients lie in a differential field of characteristic zero possesses an associated nonzero linear ordinary differential equation called an alpha-resolvent of the polynomial. In the case that the distinct roots of the polynomial are differentially independent over constants, we will determine the minimal number of powers of alpha that appear with nonzero coefficient in a resolvent of smallest possible order. We will then give an upper bound on the weight of a resolvent of smallest possible weight. The difficult part of calculating this upper bound was generalizing the known case of simple roots to multiple roots. I will present the various ways I eliminated the roots with the help of computer algebra before figuring out the proper symbolic formula.
Abstract: The computation of the resultant of a polynomial system is one of the fundamental problems in computer algebra. The significance of resultants grows rapidly with advances in technology as well as in the efficiency of the methods used to compute them. Since most of the efficient methods reduce resultant computation to computing the determinant of a symbolic matrix, the matrix size is a major bottleneck in the computation of the resultant of a polynomial system.
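For orientation, here is a minimal sketch (a hypothetical toy example in SymPy, not the construction of this talk) of how resultant computation reduces to the determinant of a symbolic matrix, using the classical univariate Sylvester matrix:

    # A hypothetical toy example: the classical univariate Sylvester matrix,
    # whose determinant is the resultant of two polynomials.  The talk's
    # Dixon/Bezout-based matrices are smaller; this is only the baseline.
    import sympy as sp

    x, a, b = sp.symbols('x a b')

    def sylvester_matrix(f, g, var):
        """Build the (m+n) x (m+n) Sylvester matrix of f and g in var."""
        pf, pg = sp.Poly(f, var), sp.Poly(g, var)
        m, n = pf.degree(), pg.degree()
        cf, cg = pf.all_coeffs(), pg.all_coeffs()    # highest degree first
        S = sp.zeros(m + n, m + n)
        for i in range(n):                           # n shifted rows of f's coefficients
            for j, c in enumerate(cf):
                S[i, i + j] = c
        for i in range(m):                           # m shifted rows of g's coefficients
            for j, c in enumerate(cg):
                S[n + i, i + j] = c
        return S

    f = x**2 + a*x + 1
    g = x**2 + b*x + 2
    S = sylvester_matrix(f, g, x)
    print(S.det().expand())                  # the resultant, as a symbolic determinant
    print(sp.resultant(f, g, x).expand())    # agrees with SymPy's built-in resultant

Already for two symbolic quadratics the matrix is 4 x 4; for multivariate systems, Sylvester-type matrices grow much faster, which is why constructions yielding smaller matrices matter.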
In this talk we outline a new construction of Sylvester-type resultant matrices derived from the Dixon/Bezout formulation. We show that in practice these matrices are usually much smaller than Sylvester-type resultant matrices based on other constructions. Consequently, a resultant method based on such matrices has lower overall complexity. Moreover, it is possible to select certain construction parameters, based on the support structure of an input polynomial system, so as to minimize the size of these matrices. The method has been implemented, tested on a variety of examples, and compared with other resultant methods.
Abstract: We give a survey on toric (sparse) resultants of composed polynomials. By the toric (sparse) resultant, we mean an irreducible polynomial in the coefficients of a set of given polynomials that vanishes whenever the given polynomials have a common zero. By a composed polynomial we mean the polynomial obtained from a given polynomial by replacing each variable with a polynomial. We give an overview of previous results as well as of a recent one, all of which provide the irreducible factors of resultants of various composed polynomials. Furthermore, we discuss open questions. This line of research is motivated by the need to efficiently eliminate variables from composed polynomials.
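As a toy check of this phenomenon (univariate and dense, in SymPy, not the toric setting of the talk), the resultant of composed polynomials already factors as a power of the resultant of the original polynomials:

    # A toy univariate check of the phenomenon (hypothetical example):
    # the resultant of composed polynomials factors into a power of the
    # resultant of the original polynomials.
    import sympy as sp

    x, a, b = sp.symbols('x a b')

    f = x**2 + a                 # two quadratics with symbolic coefficients ...
    g = x**2 + b
    h = x**2 + x                 # ... composed with a monic quadratic

    r_base = sp.resultant(f, g, x)
    r_composed = sp.resultant(f.subs(x, h), g.subs(x, h), x)

    print(sp.factor(r_base))        # (a - b)**2
    print(sp.factor(r_composed))    # (a - b)**4, a power of the base resultant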
Abstract: The implicitization problem asks for an implicit equation of a rational hypersurface given by a parameterization map from P^(n-1) to P^n, with n>=2. This problem has recently received particular interest, especially in the cases n=2 and n=3, corresponding respectively to curve and surface implicitization, because it is a key point in computer-aided geometric design and modeling. In this talk I will report on joint works with Marc Chardin (University of Paris IV) and Jean-Pierre Jouanolou (University of Strasbourg) using approximation complexes to solve this problem when the base points of the parameterization map are isolated and locally a complete intersection. I will also try to show the close link with the so-called method of moving surfaces for curve and surface implicitization.
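For comparison only, here is a toy sketch of classical implicitization by elimination (not the approximation-complex method of the talk): eliminating the parameter of the standard rational parameterization of the circle with a resultant recovers its implicit equation.

    # A classical toy illustration (not the method of the talk): implicitize
    # the rational parameterization x = (1 - t^2)/(1 + t^2), y = 2t/(1 + t^2)
    # of the circle by eliminating t with a resultant.
    import sympy as sp

    t, x, y = sp.symbols('t x y')

    p1 = x*(1 + t**2) - (1 - t**2)    # numerators after clearing denominators
    p2 = y*(1 + t**2) - 2*t

    print(sp.factor(sp.resultant(p1, p2, t)))    # 4*(x**2 + y**2 - 1)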
Abstract: Like Groebner bases of polynomial ideals, triangular decompositions of algebraic varieties are computed by means of rewriting techniques and offer important features such as canonicity. The drawback is their theoretical complexity, which is higher than that of probabilistic methods for solving polynomial systems.
In recent years, most of the effort in the area of triangular decompositions has been devoted to the design of algorithms for processing arbitrary systems of algebraic or differential equations. However, many systems of equations arising in practice have structural properties.
We will present new developments based on this observation (and others), leading to algorithms for triangular decompositions whose practical complexity is far improved.
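As a minimal illustration (not drawn from these talks) of what a triangular decomposition is: under the variable order x < y, the system {x^2 + y^2 - 1, x*y} splits into the triangular sets T1 = {x, y^2 - 1} and T2 = {x^2 - 1, y}, whose zero sets together give the original variety. A small SymPy check of this toy example:

    # A toy example (not from the talks): the variety of {x^2 + y^2 - 1, x*y}
    # is the union of the zero sets of two triangular sets T1 and T2, each
    # with one polynomial in x only and one polynomial introducing y.
    import sympy as sp

    x, y = sp.symbols('x y')

    system = [x**2 + y**2 - 1, x*y]
    T1 = [x, y**2 - 1]
    T2 = [x**2 - 1, y]

    sols = set(sp.solve(system, [x, y]))
    sols_T = set(sp.solve(T1, [x, y])) | set(sp.solve(T2, [x, y]))
    print(sols == sols_T)    # True: V(system) = V(T1) union V(T2)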
Abstract: Polynomial systems of equations arise ubiquitously in Mathematics (Differential Equations, Graph Theory, Topology, etc.), Physics, Chemistry and many other branches of Science. Often the polynomial systems arising naturally in these areas exhibit some sort of symmetry that can be accounted for rigorously using group-theoretic ideas such as representations and invariant theory of finite groups. I will discuss some aspects of a joint work with R. Corless (UWO, Canada) and K. Gatermann (ZIB-Berlin, Germany) on efficiently solving polynomial systems with symmetries. To illustrate the method I will discuss an application from Chaos Theory, namely the determination of bifurcation points of the logistic map. The logistic map is the prototype of one-dimensional mappings of the interval and, although it is the simplest example of a nonlinear dynamical system, it captures remarkably many features of more general nonlinear systems. The global structure of the bifurcation diagram, for instance, is a universal property.
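As a small illustration of the kind of polynomial system involved (without the symmetry machinery discussed in the talk), the first period-doubling bifurcation of the logistic map can be located by asking for a fixed point whose multiplier equals -1; a toy SymPy sketch:

    # A toy polynomial system (not the symmetry-based method of the talk):
    # locate the first period-doubling bifurcation of the logistic map
    # f(x) = r*x*(1 - x) by requiring a fixed point with multiplier -1.
    import sympy as sp

    x, r = sp.symbols('x r')
    f = r*x*(1 - x)

    system = [f - x,                  # x is a fixed point of f
              sp.diff(f, x) + 1]      # whose multiplier f'(x) equals -1

    print(sp.solve(system, [x, r]))   # solutions include (x, r) = (2/3, 3)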
Abstract: Searching for molecular conformations subject to geometric and energy constraints is a frequent task in computational biology and chemistry. The conformational search can often be reduced to solving systems of polynomial equations. These equations, however, have numbers of unknowns and degrees beyond the scope of the available algebraic systems.
Since solving for the exact solutions is too difficult a task, we instead seek approximations. I will present two methods that we have investigated for finding the real solutions of the polynomial equations.
The first method computes the number of solutions in regions of the parameter space. By subdividing the regions into smaller and smaller pieces, we obtain closer and closer approximations to the solutions. This method uses bilinear mappings induced by the polynomials defining the boundaries of the regions, and obtains the solution counts from the positive and negative eigenvalues of the bilinear mappings. The mappings, however, are computed using Groebner bases or resultant matrices, and in our experience this computation is prohibitively expensive. This method thus remains theoretical for now.
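For intuition only, the counting principle can be sketched in a univariate toy version (not the authors' multivariate implementation): by Hermite's classical theorem, the signature of the trace form built from the companion matrix of a polynomial equals its number of distinct real roots; in the multivariate case the analogous multiplication maps are what require Groebner bases or resultant matrices.

    # A univariate toy sketch (assuming Hermite's classical theorem; not the
    # authors' implementation): the number of distinct real roots of p equals
    # the signature of the bilinear form H[i][j] = trace(C^(i+j)), where C is
    # the companion matrix of p.
    import numpy as np

    def count_real_roots(coeffs):
        """coeffs: coefficients of p, highest degree first, leading one nonzero."""
        c = np.array(coeffs, dtype=float)
        c = c / c[0]                              # make p monic
        n = len(c) - 1
        C = np.zeros((n, n))                      # companion matrix of p
        C[1:, :-1] = np.eye(n - 1)
        C[:, -1] = -c[:0:-1]
        powers = [np.eye(n)]                      # C^0, C^1, ..., C^(2n-2)
        for _ in range(2*n - 2):
            powers.append(powers[-1] @ C)
        s = [np.trace(P) for P in powers]         # power sums of the roots
        H = np.array([[s[i + j] for j in range(n)] for i in range(n)])
        eig = np.linalg.eigvalsh(H)               # H is real symmetric
        return int(np.sum(eig > 1e-9) - np.sum(eig < -1e-9))

    # p(x) = (x - 1)(x - 2)(x^2 + 1) has exactly two distinct real roots
    print(count_real_roots([1, -3, 3, -3, 2]))    # 2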
The second method focuses on eliminating regions of the parameter space that contain no solutions, instead of counting the solutions within the regions. This method also relies on a subdivision process. With enough steps, the regions become so small that the values of the original polynomials can be approximated by evaluating them at any point inside the regions. The final small regions after the subdivision process are reported as the approximations of the solutions. This method, with an initial implementation, has "solved" systems of six polynomial equations (of degree 10) arising from docking problems in computer-assisted drug design. To our knowledge, no other deterministic algebraic methods have solved these equations.
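A one-dimensional toy sketch of the exclusion idea (a hypothetical illustration, not the authors' implementation): discard intervals on which a simple Taylor bound certifies that the polynomial cannot vanish, subdivide the rest, and report the surviving small intervals as approximate roots.

    # A one-dimensional toy sketch of exclusion by subdivision (hypothetical
    # illustration, not the authors' code): a box [a, b] is discarded when the
    # Taylor bound |p(c)| > sum_{k>=1} |p^(k)(c)| r^k / k! holds at its center c,
    # with r the half-width of the box.
    import sympy as sp
    from math import factorial

    x = sp.symbols('x')

    def exclusion_solve(p, lo, hi, tol=1e-4):
        n = sp.degree(p, x)
        derivs = [p]
        for _ in range(n):
            derivs.append(sp.diff(derivs[-1], x))
        boxes, approx = [(lo, hi)], []
        while boxes:
            a, b = boxes.pop()
            c, r = (a + b) / 2.0, (b - a) / 2.0
            vals = [abs(float(d.subs(x, c))) / factorial(k)
                    for k, d in enumerate(derivs)]
            if vals[0] > sum(v * r**k for k, v in enumerate(vals) if k > 0):
                continue                       # certified: no root in [a, b]
            if b - a < tol:
                approx.append(c)               # surviving small box ~ a root
            else:
                boxes += [(a, c), (c, b)]      # subdivide and keep looking
        return approx

    # approximate real roots of (x - 1)(x + 2)(2x - 1) in [-5, 5]; nearby
    # surviving boxes may report several values close to each root
    print(exclusion_solve(sp.expand((x - 1)*(x + 2)*(2*x - 1)), -5, 5))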