Full Papers
The program committee has accepted the 43 papers below.
List of Accepted Papers

Fredrik Johansson.
Calcium: computing in exact real and complex fields.
Abstract:
Calcium is a C library for real and complex numbers in a form suitable for exact algebraic and symbolic computation. Numbers are represented as elements of fields $\mathbb{Q}(a_1,\ldots,a_n)$ where the extension numbers $a_k$ may be algebraic or transcendental. The system combines efficient field operations with automatic discovery and certification of algebraic relations, resulting in a practical computational model of $\mathbb{R}$ and $\mathbb{C}$ in which equality is rigorously decidable for a large class of numbers.
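In miniature, this computational model can be illustrated with a toy Python class (purely hypothetical, not Calcium's API) for the single field $\mathbb{Q}(\sqrt{2})$: because elements are represented exactly, equality testing is a rigorous decision rather than a floating-point comparison.

```python
from fractions import Fraction

class Qsqrt2:
    """Element a + b*sqrt(2) of the field Q(sqrt(2)), with exact rational a, b.

    A toy stand-in for Calcium's fields Q(a_1, ..., a_n): the representation
    is exact, so equality is decided, not approximated.
    """
    def __init__(self, a, b=0):
        self.a, self.b = Fraction(a), Fraction(b)

    def __add__(self, other):
        return Qsqrt2(self.a + other.a, self.b + other.b)

    def __mul__(self, other):
        # (a + b*sqrt(2)) * (c + d*sqrt(2)) = (ac + 2bd) + (ad + bc)*sqrt(2)
        return Qsqrt2(self.a * other.a + 2 * self.b * other.b,
                      self.a * other.b + self.b * other.a)

    def __eq__(self, other):
        return self.a == other.a and self.b == other.b

# (1 + sqrt(2)) * (-1 + sqrt(2)) == 1, decided exactly
x = Qsqrt2(1, 1) * Qsqrt2(-1, 1)
print(x == Qsqrt2(1))   # True
```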

Shaoshi Chen, Ruyong Feng, Pingchuan Ma and Michael F. Singer.
Separability Problems in Creative Telescoping.
Abstract:
For given multivariate functions specified by algebraic, differential or difference equations, the separability problem is to decide whether they satisfy linear differential or difference equations in one variable. In this paper, we will explain how separability problems arise naturally in creative telescoping and present some criteria for testing the separability for several classes of special functions, including rational functions, hyperexponential functions, hypergeometric terms, and algebraic functions.

Mikhail Starchak.
Positive Existential Definability with Unit, Addition and Coprimeness.
Abstract:
We consider positively existentially definable sets in the structure $\left\langle\mathbb{Z};1,+,\perp\right\rangle$. It is well known that the elementary theory of this structure is undecidable, while the existential theory is decidable. We show that after the extension of the signature with the unary `$$' functional symbol and binary symbols for disequality $\neq$ and $\text{GCD}(\cdot,\cdot)=d$ for every fixed positive integer $d$, every positive existential formula in this extended language is equivalent in $\mathbb{Z}$ to some positive quantifier-free formula.
We then derive some corollaries from the main result. The binary order $\leq$ and discoprimeness $\not\perp$ relations are not positively existentially definable in the structure $\left\langle\mathbb{Z};1,+,\perp\right\rangle$. Every positively existentially definable set in the structure $\left\langle\mathbb{N};S,\perp\right\rangle$ is quantifier-free definable in $\left\langle\mathbb{N};S,\neq0,\perp\right\rangle$. We also obtain a decidable fragment of the undecidable $\forall\exists$-theory of the structure $\left\langle\mathbb{Z};1,+,\leq,\mid\right\rangle$, where $\mid$ is a binary predicate symbol for the integer divisibility relation.
Ngoc Hoang Anh Mai, Abhishek Bhardwaj and Victor Magron.
The Constant Trace Property in Noncommutative Optimization.
Abstract:
In this article, we show that each semidefinite relaxation of a ball-constrained noncommutative polynomial optimization problem can be cast as a semidefinite program with a constant-trace matrix variable.
We then demonstrate how this constant trace property can be exploited via first-order numerical methods to efficiently solve the semidefinite relaxations of the noncommutative problem.
Jianwei Li.
On the Smallest Ratio Problem of Lattice Bases.
Abstract:
Let $(\mathbf{b}_1, \ldots, \mathbf{b}_{n})$ be a lattice basis with Gram-Schmidt orthogonalization $(\mathbf{b}_1^{\ast}, \ldots, \mathbf{b}_{n}^{\ast})$. The ratios $\|\mathbf{b}_{1}\|/\|\mathbf{b}_{i}^{\ast}\|$ for $i = 1, \ldots, n$ arise in the analysis of many lattice algorithms and are related to their performance. In this paper, we study the problem of minimizing the ratio $\|\mathbf{b}_{1}\|/\|\mathbf{b}_{n}^{\ast}\|$ over all bases $(\mathbf{b}_{1}, \ldots, \mathbf{b}_{n})$ of a given $n$-rank lattice. We first prove that there exists a basis $(\mathbf{b}_{1}, \ldots, \mathbf{b}_{n})$ for any $n$-rank lattice $L$ such that $\|\mathbf{b}_1\| = \min_{\mathbf{v} \in L\backslash\{\mathbf{0}\}} \|\mathbf{v}\|$, $\|\mathbf{b}_{1}\|/\|\mathbf{b}_{i}^{\ast}\| \leq i$ and $\|\mathbf{b}_{i}\|/\|\mathbf{b}_{i}^{\ast}\| \leq i^{1.5}$ for $1\leq i \leq n$. This leads us to introduce a new NP-hard computational problem, namely the {\em smallest ratio problem} (SRP): given an $n$-rank lattice $L$, find a basis $(\mathbf{b}_{1}, \ldots, \mathbf{b}_{n})$ of $L$ such that $\|\mathbf{b}_{1}\|/\|\mathbf{b}_{n}^{\ast}\|$ is minimal. The problem inspires a new lattice invariant $\mu_{n}(L) = \min\{\|\mathbf{b}_1\|/\|\mathbf{b}_n^{\ast}\| : (\mathbf{b}_1, \ldots, \mathbf{b}_n) \textrm{ is a basis of } L\}$ and a new lattice constant $\mu_{n} = \max \mu_{n}(L)$ over all $n$-rank lattices $L$; both the minimum and maximum are justified. Some properties of $\mu_{n}(L)$ and $\mu_{n}$ are investigated. We also present an exact algorithm and an approximation algorithm for SRP.
This is the first sound study of SRP. Our work is a tiny step towards solving an open problem proposed by Dadush-Regev-Stephens-Davidowitz (CCC '14) for tackling the closest vector problem with preprocessing, that is, whether there exists a basis $(\mathbf{b}_{1}, \ldots, \mathbf{b}_{n})$ for any $n$-rank lattice such that $\max_{1 \le i \le j \le n} \|\mathbf{b}_{i}^{\ast}\|/\|\mathbf{b}_{j}^{\ast}\| \le \textrm{poly}(n)$.
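The ratios in question are easy to compute exactly for a small instance. The following sketch (with a made-up rank-2 basis, unrelated to the paper's algorithms) computes the Gram-Schmidt vectors over the rationals and the squared ratios $\|\mathbf{b}_1\|^2/\|\mathbf{b}_i^{\ast}\|^2$:

```python
from fractions import Fraction

def gram_schmidt(basis):
    """Gram-Schmidt orthogonalization over the rationals (no normalization)."""
    ortho = []
    for b in basis:
        v = [Fraction(x) for x in b]
        for u in ortho:
            mu = sum(a * c for a, c in zip(v, u)) / sum(c * c for c in u)
            v = [a - mu * c for a, c in zip(v, u)]
        ortho.append(v)
    return ortho

def norm_sq(v):
    return sum(x * x for x in v)

# A small rank-2 lattice basis (arbitrary example, not from the paper)
basis = [(1, 0), (3, 2)]
ortho = gram_schmidt(basis)

# Squared ratios ||b_1||^2 / ||b_i*||^2, computed exactly
ratios_sq = [norm_sq(basis[0]) / Fraction(norm_sq(u)) for u in ortho]
print(ratios_sq)   # [Fraction(1, 1), Fraction(1, 4)]
```

Working with squared norms keeps everything in exact rational arithmetic, avoiding square roots.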
Mawunyo Kofi DarkeyMensah, Przemysław Koprowski and Beata Rothkegel.
The anisotropic part of a quadratic form over a global function field.
Abstract:
We present two new algorithms for computing an anisotropic part of a nondegenerate diagonal quadratic form over a global function field.

Ryoya Fukasaku.
Criteria for Hopf Bifurcations with Fixed Multiplicities.
Abstract:
The Hopf bifurcation theorem describes a sufficient condition for the existence of a Poincaré-Andronov-Hopf bifurcation, under a prior assumption on special coordinates. In 2020, Kruff and Walcher introduced a useful method to compute a sufficient condition for the existence of a simple Poincaré-Andronov-Hopf bifurcation without such a prior assumption. In the present paper, we generalize their method to multiple Hopf bifurcations. The author has implemented the generalized method in the computer algebra system SageMath. The usefulness of the generalized method is illustrated with the implementation and some examples.

Haokun Li, Bican Xia, Huiying Zhang and Tao Zheng.
Choosing the Variable Ordering for Cylindrical Algebraic Decomposition via Exploiting Chordal Structure.
Abstract:
Cylindrical algebraic decomposition (CAD) plays an important role in the field of real algebraic geometry and many other areas. As is well known, the choice of variable ordering while computing CAD has a great effect on the time and memory use of the computation, as well as on the number of sample points computed. In this paper, we indicate that typical CAD algorithms, if executed with respect to a special kind of variable orderings (called ``perfect elimination orderings''), naturally preserve chordality, which is compatible with an important (variable) sparsity pattern called ``correlative sparsity''. Experimentation suggests that if the associated graph of the polynomial system in question is chordal (resp., nearly chordal), then a perfect elimination ordering of the associated graph (resp., of a minimal chordal completion of the associated graph) can be a good variable ordering for the CAD computation. That is, by using perfect elimination orderings, the CAD computation may produce a much smaller full set of projection polynomials than with other naive variable orderings. More importantly, for the complexity analysis of the CAD computation via a perfect elimination ordering, an $(m,d)$-property of the full set of projection polynomials obtained via such an ordering is given, through which the ``size'' of this set is characterized. This property indicates that when the corresponding perfect elimination tree has lower height, the full set of projection polynomials also tends to have smaller ``size''. This is consistent with the experimental results, hence perfect elimination orderings with lower elimination tree height are further recommended for use in the CAD projection.

Xavier Caruso, Tristan Vaccon and Thibaut Verron.
On FGLM Algorithms with Tate Algebras.
Abstract:
In his seminal paper, Tate introduced the notion of Tate algebras to serve, in the context of analytic geometry over the $p$-adics, as a counterpart of polynomial algebras in classical algebraic geometry. In two previous papers, we introduced the formalism of Gröbner bases over Tate algebras and proposed advanced signature-based algorithms.
In the present article, we extend the FGLM algorithm to Tate algebras. Beyond allowing for fast change of ordering, this strategy has two other important benefits. Firstly, it provides an efficient algorithm for changing the radii of convergence which, in particular, makes effective the bridge between the polynomial setting and the Tate setting and may help in speeding up the computation of Gröbner bases over Tate algebras. Secondly, it sets up the foundations for designing a fast algorithm for interreduction, which could serve as a basic primitive in our previous algorithms and accelerate them significantly.
Maria Francis and Thibaut Verron.
On two signature variants of Buchberger's algorithm over principal ideal domains.
Abstract:
Signature-based algorithms have brought large improvements in the performance of Gröbner bases algorithms for polynomial systems over fields. Furthermore, they yield additional data which can be used, for example, to compute the module of syzygies of an ideal or to compute coefficients in terms of the input generators.
In this paper, we examine two variants of Buchberger's algorithm to compute Gröbner bases over principal ideal domains, with the addition of signatures. The first one is directly adapted from Kandri-Rody and Kapur's algorithm, whereas the second one uses the ideas developed in the algorithms by L. Pan (1989) and D. Lichtblau (2012). The differences in constructions between the algorithms entail differences in the operations which are compatible with the signatures, and in the criteria which can be used to discard elements.
We prove that both algorithms are correct and discuss their relative performance in a prototype implementation in Magma.
Elie Eid.
Fast computation of hyperelliptic curve isogenies in odd characteristic.
Abstract:
Let $p$ be an odd prime number. We present an algorithm for computing explicit rational representations of isogenies between Jacobians of hyperelliptic curves of small genus $g$ over an extension $K$ of the field of $p$-adic numbers $\mathbb{Q}_p$. The algorithm has a quasilinear complexity in the degree of the rational representation. It relies on an efficient resolution, with a logarithmic loss of $p$-adic precision, of a first-order system of differential equations.

Raphaël Pagès.
Computing characteristic polynomials of p-curvatures in average polynomial time.
Abstract:
We design a fast algorithm that computes, for a given linear differential operator with coefficients in Z[x], all the characteristic polynomials of its p-curvatures, for all primes p ⩽ N, in asymptotically quasilinear bit complexity in N. We discuss implementations and applications of the new algorithm. We shall see in particular that the good performance of our algorithm is quickly visible.
Erich Kaltofen, Clément Pernet and ZhiHong Yang.
Hermite Interpolation With Error Correction: Fields of Zero or Large Characteristic and Large Error Rate.
Abstract:
Multiplicity code decoders are based on Hermite polynomial interpolation with error correction. In order to have a unique Hermite interpolant one assumes that the field of scalars has characteristic 0 or >= k+1, where k is the maximum order of the derivatives in the list of values of the polynomial and its derivatives which are interpolated. For scalar fields of characteristic >= k+1, the minimum number of values for interpolating a polynomial of degree <= D is D+1+2E(k+1) when <= E of the values are erroneous. Here we give an error-correcting Hermite interpolation algorithm that can tolerate more errors, assuming that the characteristic of the scalar field is either 0 or >= D+1. Our algorithm requires (k+1)D + 1 - (k+1)k/2 + 2E values.
As an example, we consider k = 2. If the error ratio (number of errors)/(number of evaluations) is <= 0.16, our new algorithm requires ceiling((4+7/17)D - (1+8/17)) values, while multiplicity decoding requires 25D+25 values. If the error ratio is <= 0.2, our algorithm requires 5D-2 evaluations over characteristic 0 or >= D+1, while multiplicity decoding for an error ratio 0.2 over fields of characteristic 3 is not possible for D >= 3.
Our algorithm is based on Reed-Solomon interpolation without multiplicities, which becomes possible for Hermite interpolation because of the high redundancy necessary for error correction.
Shinichi Tajima and Katsusuke Nabeshima.
Computing Grothendieck point residues via solving holonomic systems of first order partial differential equations.
Abstract:
Grothendieck point residue is considered in the context of symbolic computation. Based on the theory of holonomic $D$-modules associated to a local cohomology class, a new effective method is given for computing Grothendieck point residue mappings. A basic strategy of our approach is the use of holonomic systems of first-order linear partial differential equations. The resulting algorithm is easy to implement and can also be used to compute Grothendieck point residues in an effective manner.

Mark Giesbrecht, QiaoLong Huang and Eric Schost.
Sparse Multiplication of Multivariate Linear Differential Operators.
Abstract:
We propose a new randomized algorithm for multiplication in the ring of noncommutative polynomials $K[x_1,...,x_n]\langle \delta_1,..., \delta_n \rangle$, where $\delta_i = x_i(\partial/\partial x_i)$, dedicated to sparse inputs. The complexity of our algorithm is polynomial in the input size and in an a priori sparsity bound for the output.

Shaoshi Chen, Lixin Du and Manuel Kauers.
Lazy Hermite Reduction and Creative Telescoping for Algebraic Functions.
Abstract:
Bronstein's lazy Hermite reduction is a symbolic integration technique that reduces algebraic functions to integrands with only simple poles, without the precomputation of an integral basis. We sharpen the lazy Hermite reduction by combining it with polynomial reduction to solve the decomposition problem of algebraic functions. The sharpened reduction is then used to design a reduction-based telescoping algorithm for algebraic functions in two variables.

Antonio JiménezPastor, Philipp Nuspl and Veronika Pillwein.
On $C^2$-finite sequences.
Abstract:
Holonomic sequences are widely studied, as many objects interesting to mathematicians and computer scientists are in this class. In the univariate case, these are the sequences satisfying linear recurrences with polynomial coefficients, also referred to as $D$-finite sequences. A subclass are the $C$-finite sequences, satisfying a linear recurrence with constant coefficients.
We investigate the set of sequences which satisfy linear recurrence equations with coefficients that are $C$-finite sequences. These sequences are a natural generalization of holonomic sequences. In this paper, we show that $C^2$-finite sequences form a difference ring and provide methods to compute in this ring.
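A toy instance of such a sequence can be computed directly. In the sketch below (an illustrative example, not taken from the paper), the coefficients of the recurrence are Fibonacci numbers, which are themselves $C$-finite since they satisfy a linear recurrence with constant coefficients:

```python
# A C^2-finite sequence: a(n+2) = F(n+1)*a(n+1) + F(n)*a(n), where the
# coefficient sequence F is the Fibonacci sequence -- itself C-finite,
# since F(n+2) = F(n+1) + F(n) has constant coefficients.
# (Toy example chosen for illustration; not taken from the paper.)

def fib(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def c2_finite_terms(count):
    a = [1, 1]                       # initial values a(0), a(1)
    while len(a) < count:
        n = len(a) - 2
        a.append(fib(n + 1) * a[n + 1] + fib(n) * a[n])
    return a[:count]

print(c2_finite_terms(7))   # [1, 1, 1, 2, 5, 19, 110]
```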
Nikhil Balaji, Sylvain Perifel, Mahsa Shirmohammadi and James Worrell.
Cyclotomic Identity Testing and Applications.
Abstract:
We consider the cyclotomic identity testing problem: given a polynomial $f(x_1,\ldots,x_k)$, decide whether $f(\zeta_n^{e_1},\ldots ,\zeta_n^{e_k})$ is zero, where $\zeta_n = e^{2\pi i/n}$ is a primitive complex $n$th root of unity and $e_1,\ldots,e_k$ are integers, represented in binary. In case $f$ is given by an algebraic circuit, we give a randomized polynomial-time algorithm. When $f$ is given by a circuit of polynomially bounded syntactic degree, we give a randomized NC algorithm. In case $f$ is given in sparse representation, we show that the problem lies in NC. Assuming the generalised Riemann hypothesis, we extend this approach to obtain a polynomial-time algorithm in case $f$ has the form $f=\sum_{i=1}^m a_ig^i$, where $g$ is in sparse representation and $a_1,\ldots,a_m$ is a list of integers. To complement these upper bounds, we show that the existence of a polynomial-time algorithm for checking zeroness of a sparse polynomial evaluated at integer translations of roots of unity would entail a subexponential-time algorithm for polynomial identity testing. Finally, we use our results to give a new proof that equality of compressed strings, i.e., strings presented using context-free grammars, can be decided in randomized NC.
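For a first feel of the problem, zero-testing of a univariate $f$ at $\zeta_p$ for a prime $p$ reduces to divisibility by the cyclotomic polynomial $\Phi_p(x)=1+x+\cdots+x^{p-1}$. The sketch below (a naive illustration of the problem statement only, far from the paper's circuit and NC algorithms) decides this exactly for a sparse input:

```python
# Exact zero-testing of f(zeta_p) for prime p, by reducing f(x) modulo the
# cyclotomic polynomial Phi_p(x) = 1 + x + ... + x^{p-1}.  Only a tiny
# illustration of the problem statement, not the paper's algorithms.

def vanishes_at_zeta_p(coeffs, p):
    """coeffs maps exponent -> integer coefficient of f(x); p must be prime."""
    # Reduce exponents mod p first, since zeta_p^e = zeta_p^(e mod p);
    # this handles binary-size exponents cheaply.
    residue = [0] * p
    for e, c in coeffs.items():
        residue[e % p] += c
    # Modulo Phi_p one may subtract any constant multiple of 1 + x + ... +
    # x^{p-1}, so f(zeta_p) = 0 iff all residue coefficients are equal.
    return len(set(residue)) == 1

# 1 + zeta_3 + zeta_3^2 = 0, while 1 + zeta_3 != 0
print(vanishes_at_zeta_p({0: 1, 1: 1, 2: 1}, 3))   # True
print(vanishes_at_zeta_p({0: 1, 1: 1}, 3))         # False
```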

Joris van der Hoeven and Grégoire Lecerf.
Amortized bivariate multipoint evaluation.
Abstract:
The evaluation of a polynomial at several points is called the problem of multipoint evaluation. Sometimes, the set of evaluation points is fixed and several polynomials need to be evaluated at this set of points. Efficient algorithms for this kind of “amortized” multipoint evaluation were recently developed for the special case when the set of evaluation points is sufficiently generic. In this paper, we design a new algorithm for arbitrary sets of points, while restricting ourselves to bivariate polynomials.

Kai Hormann, Lucas Kania and Chee Yap.
Novel Range Functions via Taylor Expansions and Recursive Lagrange Interpolation with Application to Real Root Isolation.
Abstract:
Range functions are an important tool for interval computations, and they can be employed for the problem of root isolation. In this paper, we first introduce two new classes of range functions for real functions. They are based on the remainder form by Cornelius and Lohner (1984) and provide different improvements for the remainder part of this form. On the one hand, we use centered Taylor expansions to derive a generalization of the classical Taylor form with higher than quadratic convergence. On the other hand, we propose a recursive interpolation procedure, in particular based on quadratic Lagrange interpolation, leading to recursive Lagrange forms with cubic and quartic convergence. We then use these forms for isolating the real roots of squarefree polynomials with the algorithm EVAL, a relatively recent algorithm that has been shown to be effective and practical. Finally, we compare the performance of our new range functions against the standard Taylor form. Range functions are often compared in isolation; in contrast, our holistic comparison is based on their performance in an application. Specifically, EVAL can exploit features of our recursive Lagrange forms which are not found in range functions based on Taylor expansion. Experimentally, this yields at least a twofold speedup in EVAL.

Seung Gyu Hyun, Vincent Neiger and Eric Schost.
Algorithms for Linearly Recurrent Sequences of Truncated Polynomials.
Abstract:
Linear recurrent sequences are those whose elements are defined as linear combinations of preceding elements, and finding relations of recurrences is a fundamental problem in computer algebra. In this paper, we focus on sequences whose elements are vectors over the ring $\mathbb{A} = \mathbb{K}[x]/(x^d)$ of truncated polynomials. We present three methods for finding the ideal of canceling polynomials: a Berlekamp-Massey-like approach due to Kurakin, one which computes the kernel of some block-Hankel matrix over $\mathbb{A}$ via a minimal approximant basis, and one based on bivariate Padé approximation. We propose complexity improvements for the first two methods, respectively by avoiding the computation of redundant sequence generators and by exploiting the Hankel structure to compress the approximation instance. We then observe these improvements in our empirical results through a C++ implementation. Finally we discuss applications to the computation of minimal polynomials and determinants of sparse matrices over $\mathbb{A}$.

Antonio JiménezPastor.
Simple differentially definable functions.
Abstract:
Holonomic functions satisfy linear differential equations with polynomial coefficients. The solutions to this type of equations may have singularities determined by the zeros of their leading coefficient. There are algorithms to desingularize equations, i.e., remove the singularities from the equation that do not appear in its solutions. However, classical computations of closure properties (such as addition, multiplication, etc.) with holonomic functions return equations with extra zeros in the leading coefficient. In this paper we present theory and algorithms, based on linear algebra, to control the leading coefficients when computing these closure properties and also extend this theory to the more general class of differentially definable functions.

Jakob Ablinger and Carsten Schneider.
Solving linear difference equations with coefficients in rings with idempotent representations.
Abstract:
We introduce a general reduction strategy that enables one to search for solutions of parameterized linear difference equations in difference rings. Here we assume that the ring itself can be decomposed as a direct sum of integral domains (using idempotent elements) enjoying certain technical features, and that the coefficients of the difference equation are not degenerate (e.g., not all coefficients contain a common zero divisor as factor). Using this mechanism we can reduce the problem of finding solutions in a ring (with zero divisors) to searching for solutions in several copies of integral domains. Utilizing existing solvers in this integral domain setting, we obtain a general solver where the components of the linear difference equations and the solutions can be taken from difference rings that are built, e.g., by $R\Pi\Sigma$-extensions over $\Pi\Sigma$-fields. This class of difference rings contains, e.g., nested sums and products, products over roots of unity, and nested sums defined over such objects.

Rina Dong, Dong Lu, Chenqi Mou and Dongming Wang.
Comprehensive Characteristic Decomposition of Parametric Polynomial Systems.
Abstract:
This paper presents an algorithm that decomposes an arbitrary set $F$ of multivariate polynomials involving parameters into finitely many sets $\Gamma_i$ of (lexicographical) Gröbner bases $G_{ij}$ such that associated with each $\Gamma_i$ there is a system $A_i$ of parametric constraints, all the $A_i$'s partition the parameter space, all the $G_{ij}$'s in each $\Gamma_i$ remain Gröbner bases under specialization of the parameters satisfying the constraints in $A_i$, and for each $i$ the Gröbner bases $G_{ij}$ together with their corresponding W-characteristic sets form a normal characteristic decomposition of $F$. The sets of Gröbner bases computed by the algorithm provide a comprehensive characteristic decomposition of $F$ that is structure-invariant under the associated constraints and possesses many algebraic and geometric properties, including most of the notable properties of comprehensive Gröbner systems and comprehensive triangular decomposition. Some of these properties are highlighted in the paper, and advantages of the proposed algorithm are discussed briefly from the aspects of methodological simplicity and computational performance, using illustrative examples and preliminary experiments.

Alexandre Sedoglavic and Alexey V. Smirnov.
The tensor rank of $5 \times 5$ matrices multiplication is bounded by 98 and its border rank by 89.
Abstract:
We present a noncommutative algorithm for the product of a $3 \times 5$ matrix by a $5 \times 5$ matrix using 58 multiplications.
This algorithm allows us to construct a noncommutative algorithm for multiplying $5 \times 5$ (resp. $10 \times 10$, $15 \times 15$) matrices using 98 (resp. 686, 2088) multiplications.
Furthermore, we describe an approximate algorithm that requires 89 multiplications and computes this product with an arbitrarily small error.
Jasper Nalbach, Erika Ábrahám and Gereon Kremer.
Extending the fundamental theorem of linear programming for strict inequalities.
Abstract:
Usual formulations of the fundamental theorem of linear programming only consider weak inequalities as side conditions.
While this suffices for solving linear programs with the Simplex algorithm, when we want to check the satisfiability of general quantifierfree linear real arithmetic formulas via Satisfiability Modulo Theories (SMT) solving, we need to extend the Simplex method to be able to handle strict inequalities, too.
In this paper we formalize such an extension, which had been successfully used in SMT solving even before a correctness proof was given by King in his 2014 dissertation. Our contribution is an alternative correctness formalization which is analogous to the original theorem, and a corresponding proof that better highlights the inherent nature of the problem.
Pascal Giorgi, Bruno Grenet and Armelle Perret du Cray.
On exact division and divisibility testing for sparse polynomials.
Abstract:
No polynomial-time algorithm is known to test whether a sparse polynomial $G$ divides another sparse polynomial $F$. While computing the quotient $Q=F$ quo $G$ can be done in polynomial time with respect to the sparsities of $F$, $G$ and $Q$, this is not yet sufficient to get a polynomial-time divisibility test in general. Indeed, the sparsity of the quotient $Q$ can be exponentially larger than the ones of $F$ and $G$. In the favorable case where the sparsity $\#Q$ of the quotient is polynomial, the best known algorithm to compute $Q$ has a nonlinear factor $\#G\,\#Q$ in the complexity, which is not optimal.
In this work, we are interested in two aspects of this problem. First, we propose a new randomized algorithm that computes the quotient of two sparse polynomials when the division is exact. Its complexity is quasi-linear in the sparsities of $F$, $G$ and $Q$. Our approach relies on sparse interpolation and it works over any finite field or the ring of integers. Then, as a step toward faster divisibility testing, we provide a new polynomial-time algorithm when the divisor has a specific shape. More precisely, we reduce the problem to finding a polynomial $S$ such that $QS$ is sparse and testing divisibility by $S$ can be done in polynomial time. We identify some structure patterns in the divisor $G$ for which we can efficiently compute such a polynomial $S$.
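The sparsity blow-up at the heart of the problem is easy to witness: for $F=x^{64}-1$ and $G=x-1$, both with two terms, the quotient has 64 terms. A naive synthetic-division sketch (dense in the quotient, hence exponential-time in the worst case, unlike the paper's algorithms) makes this concrete:

```python
# The sparsity gap: F and G below have 2 terms each, but Q = F / G has
# 64 terms.  Simple synthetic-division sketch for illustration only.

def divide(F, G):
    """Exact division of polynomials given as dicts {exponent: coefficient}.

    Assumes G divides F exactly (here G is monic); dense in deg(Q).
    """
    F = dict(F)
    Q = {}
    dG = max(G)
    lead = G[dG]
    while F:
        dF = max(F)
        if dF < dG:
            raise ValueError("not divisible")
        e, c = dF - dG, F[dF] // lead
        Q[e] = c
        for g_e, g_c in G.items():          # subtract c * x^e * G from F
            F[e + g_e] = F.get(e + g_e, 0) - c * g_c
            if F[e + g_e] == 0:
                del F[e + g_e]
    return Q

n = 64
F = {n: 1, 0: -1}          # x^64 - 1, two terms
G = {1: 1, 0: -1}          # x - 1, two terms
Q = divide(F, G)
print(len(F), len(G), len(Q))   # 2 2 64
```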
Thierry Combot.
Elementary Integration of Superelliptic Integrals.
Abstract:
Consider a superelliptic integral $I=\int P/(Q S^{1/k})\, dx$ with $\mathbb{K}=\mathbb{Q}(\xi)$, $\xi$ a primitive $k$th root of unity, $P,Q,S\in\mathbb{K}[x]$, and assume $S$ has degree and root multiplicities coprime with $k$. Denote by $d$ the maximum of the degrees of $P,Q,S$, by $h$ the logarithmic height of the coefficients, and by $g$ the genus of the curve $y^k-S(x)=0$. We present an algorithm which solves the elementary integration problem of $I$ generically in $O((kd)^{\omega+2g+1} h^{g+1})$ operations.

Lorenzo Baldi and Bernard Mourrain.
Computing real radicals by moment optimization.
Abstract:
We present a new algorithm for computing the real radical of an ideal $I$ and, more generally, the $S$-radical of $I$, which is based on convex moment optimization. A truncated positive generic linear functional $\sigma$ vanishing on the generators of $I$ is computed by solving a Moment Optimization Problem (MOP). We show that, for a large enough degree of truncation, the annihilator of $\sigma$ generates the real radical of $I$. We give an effective, general stopping criterion on the degree to detect when the prime ideals lying over the annihilator are real and to compute the real radical as the intersection of real prime ideals lying over $I$.
The method involves several ingredients that exploit the properties of generic positive moment sequences. A new efficient algorithm is proposed to compute a graded basis of the annihilator of a truncated positive linear functional. We propose a new algorithm to check that an irreducible decomposition of an algebraic variety is real, using a generic real projection to reduce to the hypersurface case. There we apply the Sign Changing Criterion, effectively performed with an exact MOP. Finally we illustrate our approach on some examples.
Pierre Karpman, Clément Pernet, Hippolyte Signargout and Gilles Villard.
Computing the Characteristic Polynomial of Generic Toeplitz-like and Hankel-like Matrices.
Abstract:
New algorithms are presented for computing annihilating polynomials of Toeplitz, Hankel, and more generally Toeplitz+Hankel-like matrices over a field. Our approach follows works on Coppersmith's block Wiedemann method with structured projections, which have recently been successfully applied to computing the bivariate resultant. A first baby-step/giant-step approach, directly derived using known techniques on structured matrices, gives a randomized Monte Carlo algorithm for the minimal polynomial using O~(n^{w-c(w)} \alpha^{c(w)}) arithmetic operations for an n x n Toeplitz-like matrix of displacement rank \alpha, where w is the exponent of matrix multiplication and c(2.373) \approx 0.523 for the best known value of w. For generic Toeplitz+Hankel-like matrices a second algorithm computes the characteristic polynomial; in particular, when the displacement rank is considered constant, its cost is O~(n^{2-1/w}) operations. Previous algorithms required O(n^2) operations, while the exponents presented here are respectively less than 1.86 and 1.58 with the best known estimate for w.
Erich Kaltofen.
Computing Higher Polynomial Discriminants.
Abstract:
In https://arxiv.org/abs/1609.00840, D. Wang and J. Yang in 2016 have posed the problem of how to compute the ``third'' discriminant of a polynomial f(X) = (X - A_1) ... (X - A_n),
delta_3(f) = product_{1 <= i < j < k < l <= n}
( (A_i + A_j - A_k - A_l)
(A_i - A_j + A_k - A_l)
(A_i - A_j - A_k + A_l)
),
from the coefficients of f; note that delta_3 is a symmetric polynomial in the A_i. For complex roots, delta_3(f) = 0 if the midpoint (average) of 2 roots is equal to the midpoint of another 2 roots. Iterated resultant computations yield the square of the third discriminant. We apply a symbolic homotopy by Kaltofen and Trager [JSC, vol. 9, nr. 3, pp. 301-320 (1990)] to compute its square root. Our algorithm uses polynomially many coefficient field operations.
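The definition is straightforward to evaluate from the roots for small n. The sketch below (illustrating the definition only — the paper's point is to compute delta_3 from the coefficients, without access to the roots) confirms the midpoint criterion on two hand-picked root sets:

```python
from itertools import combinations

# Direct evaluation of delta_3 from the roots A_i, for small hand-picked
# root sets.  Illustration of the definition only; not the paper's method.

def delta_3(roots):
    d = 1
    for i, j, k, l in combinations(range(len(roots)), 4):
        a, b, c, e = roots[i], roots[j], roots[k], roots[l]
        d *= (a + b - c - e) * (a - b + c - e) * (a - b - c + e)
    return d

# Roots 0, 1, 2, 3: midpoint of {0, 3} equals midpoint of {1, 2}, so delta_3 = 0
print(delta_3([0, 1, 2, 3]))   # 0
# Roots 0, 1, 2, 4: all six pairwise midpoints are distinct, so delta_3 != 0
print(delta_3([0, 1, 2, 4]))   # 15
```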
Alexander Levin.
Generalized Gröbner Bases and New Properties of Multivariate Difference Dimension Polynomials.
Abstract:
We present a method of Gröbner bases with respect to several term orderings and use it to obtain new results on multivariate dimension polynomials of inversive difference modules. Then we use the difference structure of the module of Kähler differentials associated with a finitely generated inversive difference field extension of a given difference transcendence degree to describe the form of a multivariate difference dimension polynomial of the extension.

Jérémy Berthomieu, Christian Eder and Mohab Safey El Din.
msolve: A Library for Solving Polynomial Systems.
Abstract:
We present msolve, a new open source C library dedicated to solving multivariate polynomial systems of dimension zero through computer algebra methods. The core algorithmic framework of msolve relies on Gröbner bases and linear algebra based algorithms for polynomial system solving: Gröbner basis computation w.r.t. the degree reverse lexicographical order, conversion to a lexicographical Gröbner basis, and real solving of univariate polynomials. We explain in detail how these three main steps of the solving process are implemented, how we exploit processors with AVX2 instructions, and the more general implementation ideas we put into practice to better exploit the computational capabilities of this algorithmic framework. We compare the practical performance of msolve with leading computer algebra systems such as Magma, Maple, and Singular on a wide range of systems with finitely many complex solutions, showing that msolve can tackle systems which were out of reach of state-of-the-art computer algebra software.

[+]
Huu Phuoc Le and Mohab Safey El Din.
Faster one block quantifier elimination for regular polynomial systems of equations.
Abstract:
Quantifier elimination over the reals is a central problem in computational real algebraic geometry, polynomial system solving and symbolic computation. Given a semi-algebraic formula (whose atoms are polynomial constraints) with quantifiers on some variables, it consists in computing a logically equivalent formula involving only unquantified variables. When there is no alternation of quantifiers, one has a one-block quantifier elimination problem.
We design a new practically efficient algorithm for solving one-block quantifier elimination problems when the input semi-algebraic formula is a system of polynomial equations satisfying some mild assumptions such as transversality. When the input is generic, involves $s$ polynomials of degree bounded by $D$ with $n$ quantified variables and $t$ unquantified ones, we prove that this algorithm outputs semi-algebraic formulas of degree bounded by $\mathcal{D}$ using $\widetilde{O}\left ((n-s+1)\ 8^{t}\ \mathcal{D}^{3t+2} \binom{t+\mathcal{D}}{t} \right )$ arithmetic operations in the ground field, where $\mathcal{D} := 2(n+s)\ D^s(D-1)^{n-s+1}\ \binom{n}{s}$. In practice, it allows us to solve quantifier elimination problems which are out of reach of the state of the art (up to $8$ variables). 
[+]
Robert Corless, Leili Rafiee Sevyeri and Dave Saunders.
Equivalences for Linearizations of Matrix Polynomials.
Abstract:
One useful standard method to compute eigenvalues of matrix polynomials $P(z) \in \mathbb{C}^{n\times n}[z]$ of degree at most $\ell$ in $z$ (denoted of grade $\ell$, for short) is to first transform $P(z)$ to an equivalent linear matrix polynomial $L(z) = zB - A$, called a companion pencil, where $A$ and $B$ are usually of larger dimension than $P(z)$ but $L(z)$ is now only of grade $1$ in $z$. The eigenvalues and eigenvectors of $L(z)$ can be computed numerically by, for instance, the QZ algorithm.
The eigenvectors of $P(z)$, including those for infinite eigenvalues, can also be recovered from eigenvectors of $L(z)$ if $L(z)$ is what is called a ``strong linearization'' of $P(z)$.
In this paper we show how to use algorithms for computing the Hermite Normal Form of a companion matrix for a scalar polynomial to direct the discovery of unimodular matrix polynomial cofactors $E(z)$ and $F(z)$ which, via the equation $E(z) L(z) F(z) = \text{diag}( P(z), I_n, \ldots, I_n)$, explicitly show the equivalence of $P(z)$ and $L(z)$.
By this method we give new explicit constructions for several linearizations using different polynomial bases. We contrast these new unimodular pairs with those constructed by strict equivalence, some of which are also new to this paper. We discuss the limitations of this experimental, computational discovery method of finding unimodular cofactors. 
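A minimal numerical sketch of the companion-pencil idea, assuming NumPy and a grade-2 matrix polynomial whose leading coefficient is the identity (so the pencil eigenvalue problem reduces to an ordinary one; the QZ algorithm would handle the general case). The diagonal example matrices are chosen so the answer is easy to verify.

```python
import numpy as np

# Grade-2 matrix polynomial P(z) = A2*z^2 + A1*z + A0, taken diagonal so
# det P(z) = (z^2 - 3z + 2)(z^2 - 5z + 6), with roots 1, 2, 2, 3.
A2 = np.eye(2)
A1 = np.diag([-3.0, -5.0])
A0 = np.diag([2.0, 6.0])

# First companion pencil L(z) = zB - A (a standard construction):
n = 2
A = np.block([[-A1, -A0],
              [np.eye(n), np.zeros((n, n))]])
B = np.block([[A2, np.zeros((n, n))],
              [np.zeros((n, n)), np.eye(n)]])

# With A2 = I the pencil eigenvalues are the eigenvalues of B^{-1} A.
eigs = np.sort(np.linalg.eigvals(np.linalg.solve(B, A)).real)
print(eigs)  # [1. 2. 2. 3.]
```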
[+]
Eleonora Guerrini, Romain Lebreton and Ilaria Zappatore.
Polynomial Linear System Solving with Random Errors: new bounds and early termination technique.
Abstract:
This paper deals with the polynomial linear system solving with errors (PLSwE) problem. More specifically, we solve linear systems with univariate polynomial coefficients via an evaluation–interpolation technique, assuming that errors can occur before the interpolation step. In this framework, the number of evaluations needed to recover the solution depends on parameters of the linear system (degrees, size) and on the number of errors.
Our work is part of a series of papers about PLSwE aiming to reduce this number of evaluations, which is crucial since it affects the complexity. We proved in [Guerrini et al., Proc. ISIT’19] that if errors are randomly distributed, the bound on the number of evaluations can be lowered with respect to the error rate.
In this paper, following the approach of [Kaltofen et al., Proc. ISSAC’17], we improve the results of [Guerrini et al., Proc. ISIT’19] in two directions. First, we propose a new bound on the number of evaluations, lowering the dependency on the parameters of the linear system, based on work of [Cabay, Proc. SYMSAC’71]. Second, we introduce an early termination strategy in order to handle the unnecessary increase of the number of evaluations due to overestimation of the output degrees and of the number of errors. 
[+]
Joris van der Hoeven and Gleb Pogudin.
A zero test for σ-algebraic power series.
Abstract:
One fundamental problem in symbolic computation is zero testing of expressions that involve special functions. Several such zero tests have been designed for the case when such special functions satisfy algebraic differential equations or linear difference equations. In this paper, we present an algorithm for the case of power series solutions to certain nonlinear difference equations.

[+]
David Braun, Nicolas Magaud and Pascal Schreck.
Two New Ways to Formally Prove Dandelin-Gallucci's Theorem.
Abstract:
Mechanizing proofs of geometric theorems in 3D is significantly more challenging than in 2D. As a first noteworthy case study, we consider an iconic theorem of 3D geometry: the Dandelin-Gallucci theorem. We work in the very simple but powerful framework of projective incidence geometry, where only incidence relationships are considered. We study and compare three very different approaches to prove this theorem. The first one is based on classical synthetic geometry. We then propose a new proof based on the well-known Wu's method. Finally, we use an original method based on matroid theory to generate a proof script which is then checked by the Coq proof assistant. For each method, we point out which parts of the proof we manage to carry out automatically and which parts are more difficult to automate and require human interaction. We hope these first developments will lead to formally proving more 3D theorems automatically and that they will be used to formally verify some key properties of computational geometry algorithms in 3D.

[+]
Pierre Lairez and Mohab Safey El Din.
Computing the dimension of real algebraic sets.
Abstract:
The dimension of a real algebraic set is the maximum number of degrees of freedom available when moving inside it. We propose an algorithm to compute the dimension of real algebraic sets. It is able to tackle previously intractable instances.

[+]
Philippe Malbos and Isaac Ren.
Completion in operads via essential syzygies.
Abstract:
We introduce an improved Gröbner basis completion algorithm for operads. To this end, we define operadic rewriting systems as a machinery to rewrite in operads, whose rewriting rules do not necessarily depend on an ambient monomial order. A Gröbner basis of an operadic ideal can be seen as a confluent and terminating operadic rewriting system; thus, the completion of a Gröbner basis is equivalent to the completion of a rewriting system. We improve the completion algorithm by filtering out redundant S-polynomials and testing only essential ones. Finally, we show how the notion of essential S-polynomials can be used to compute Gröbner bases for syzygy bimodules. This work is motivated by the computation of minimal models of associative algebras and symmetric operads. In this direction, we show how our completion algorithm extends to the case of shuffle operads.

[+]
Taihei Oki.
Computing Valuations of the Dieudonné Determinants.
Abstract:
This paper addresses the problem of computing valuations of the Dieudonné determinants of matrices over discrete valuation skew fields (DVSFs). This problem is an extension of computing degrees of determinants of polynomial matrices and has an application to the analysis of degrees of freedom of linear time-varying systems. Under a reasonable computational model, we propose two algorithms for a class of DVSFs, called split.
Our algorithms are extensions of the combinatorial relaxation of Murota (1995) and the matrix expansion by Moriyama–Murota (2013), both of which are based on combinatorial optimization. While our algorithms require an upper bound on the output, we show that the bound is easily obtained if the input is a skew polynomial matrix. 
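In the commutative special case mentioned above, the degree of the determinant of a polynomial matrix, the combinatorial relaxation of Murota starts from a purely combinatorial upper bound: the maximum weight of a permutation, with entry degrees as weights. A minimal sketch of that bound (the example matrix is a hypothetical choice):

```python
from itertools import permutations

# Degrees of the entries of a 2x2 polynomial matrix, e.g.
# A(z) = [[z, 1], [1, z]], for which det A(z) = z^2 - 1 has degree 2.
deg = [[1, 0],
       [0, 1]]

# Upper bound: deg det A(z) <= max over permutations s of sum_i deg[i][s[i]].
# Combinatorial relaxation corrects this bound when leading terms cancel.
m = len(deg)
bound = max(sum(deg[i][s[i]] for i in range(m))
            for s in permutations(range(m)))
print(bound)  # 2, tight for this A(z)
```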
[+]
J. Maurice Rojas and Yuyu Zhu.
A Complexity Chasm for Solving Univariate Sparse Polynomial Equations Over $p$adic Fields.
Abstract:
We reveal a complexity chasm, separating the trinomial and tetranomial cases, for solving univariate sparse polynomial equations over certain local fields. First, for any fixed field K in {Q_2,Q_3,Q_5,...}, we prove that any polynomial f, with degree d, integer coefficients of absolute value at most H, and exactly 3 monomial terms, can be solved over K in deterministic time O((log(dH))^9) in the classical Turing model. (The best previous algorithms were of complexity exponential in log d, even for just counting roots in Q_p.) In particular, our algorithm generates approximations in Q, with bit-length O((log(dH))^8), to all the roots of f in K, and these approximations converge quadratically under Newton iteration. On the other hand, we give a unified family of tetranomials requiring Omega(d log H) bits to distinguish the base-p expansions of their roots in K.
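The quadratic convergence under Newton iteration mentioned above can be illustrated with plain Hensel/Newton lifting in the p-adic integers. This is a generic sketch, not the paper's algorithm; the trinomial, the prime p = 11, and the starting residue are hypothetical choices for illustration.

```python
def newton_padic(f, df, x0, p, prec):
    """Lift a root of f from Z/p to Z/p^prec by Newton iteration.
    Assumes f(x0) ≡ 0 (mod p) and f'(x0) not ≡ 0 (mod p); each step
    doubles the p-adic precision (quadratic convergence)."""
    x, k = x0, 1
    while k < prec:
        k = min(2 * k, prec)
        m = p ** k
        x = (x - f(x) * pow(df(x), -1, m)) % m  # Newton step mod p^k
    return x

p, prec = 11, 64
f = lambda x: x * x + x - 1       # a trinomial with a root in Z_11
df = lambda x: 2 * x + 1
root = newton_padic(f, df, 3, p, prec)   # f(3) = 11 ≡ 0 (mod 11)
print(root % p**4)  # the root truncated mod 11^4
```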

[+]
Andrey Cherkasov and Dmitri Piontkovski.
Wilf classes of nonsymmetric operads.
Abstract:
Two operads are said to belong to the same Wilf class if they have the same generating series. We discuss possible Wilf classifications of nonsymmetric operads with monomial relations. As a corollary, this would give the same classification for the operads with a finite Gröbner basis.
Generally, there is no algorithm to decide whether two finitely presented operads belong to the same Wilf class. Still, we show that if an operad has a finite Gröbner basis, then the monomial basis of the operad forms an unambiguous context-free language. Moreover, we discuss a deterministic grammar which defines the language. The generating series of the operad can be obtained as a result of an algorithmic elimination of variables from the algebraic system of equations defined by the Chomsky–Schützenberger enumeration theorem. We then focus on the case of binary operads with a single relation. The approach is based on the results by Rowland on pattern avoidance in binary trees. We improve and refine Rowland's calculations and empirically confirm his conjecture. Here we use both the algebraic elimination and the direct calculation of formal power series from algebraic systems of equations. Finally, we discuss the connection of Wilf classes with algorithms for calculation of the Quillen homology of operads.
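A toy instance of the Chomsky–Schützenberger-style computation: the algebraic equation T = 1 + zT² counting binary trees can be solved as a truncated power series by fixed-point iteration, recovering the Catalan generating series. This is an illustration only; the paper eliminates variables from larger algebraic systems arising from operad grammars.

```python
def series_fixpoint(n):
    """First n coefficients of the power-series solution of T = 1 + z*T^2,
    obtained by iterating the equation on truncated series; each pass
    stabilizes at least one more coefficient."""
    T = [0] * n
    for _ in range(n):
        # Coefficients of T^2, truncated to order n (Cauchy product):
        sq = [sum(T[i] * T[k - i] for i in range(k + 1)) for k in range(n)]
        # T = 1 + z*T^2 shifts T^2 by one degree and adds the constant 1:
        T = [1] + [sq[k - 1] for k in range(1, n)]
    return T

print(series_fixpoint(6))  # [1, 1, 2, 5, 14, 42] -- the Catalan numbers
```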