# Jürgen Gerhard: Asymptotically fast algorithms for modern computer algebra

Mathematical software has long advanced beyond its academic nursery and has become a standard tool in industrial design and analysis practice. The size of industrial problems has made it necessary not only for numerical computation packages but also for computer algebra systems to increase their performance. More and more symbolic computation systems and packages, such as GMP, Magma, Maple, and NTL, now feature implementations of asymptotically fast algorithms that scale almost linearly with the input size. The goal of this tutorial is to fascinate the audience with the beauty and elegance of modern computer algebra.

At the heart of asymptotically fast polynomial arithmetic lies fast multiplication, e.g., via the fast Fourier transform. An abundance of higher-level computational problems for polynomials, including but not limited to interpolation, greatest common divisors, factorization, and symbolic integration, can be reduced to polynomial multiplication, and any speedup in the underlying basic polynomial arithmetic immediately translates into a speedup of about the same order of magnitude for these advanced algorithms.
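To make the core primitive concrete, here is a minimal Python sketch of FFT-based polynomial multiplication (radix-2 Cooley-Tukey over the complex numbers). It is an illustration only, not the tutorial's implementation: a production system would use a number-theoretic transform or careful floating-point error bounds, and the names `fft` and `poly_mul` are ours.

```python
import cmath

def fft(a, invert=False):
    """Recursive radix-2 FFT; len(a) must be a power of two."""
    n = len(a)
    if n == 1:
        return list(a)
    even = fft(a[0::2], invert)
    odd = fft(a[1::2], invert)
    out = [0] * n
    sign = -1 if invert else 1
    for k in range(n // 2):
        w = cmath.exp(sign * 2j * cmath.pi * k / n)
        out[k] = even[k] + w * odd[k]
        out[k + n // 2] = even[k] - w * odd[k]
    return out

def poly_mul(f, g):
    """Multiply integer polynomials given as coefficient lists
    (lowest degree first) via pointwise products of their FFTs."""
    n = 1
    while n < len(f) + len(g) - 1:
        n *= 2
    fa = fft([complex(c) for c in f] + [0] * (n - len(f)))
    ga = fft([complex(c) for c in g] + [0] * (n - len(g)))
    prod = fft([x * y for x, y in zip(fa, ga)], invert=True)
    # divide by n to complete the inverse transform; round back to integers
    return [round((z / n).real) for z in prod[: len(f) + len(g) - 1]]
```

For example, `poly_mul([1, 2, 3], [4, 5])` multiplies (1 + 2x + 3x^2)(4 + 5x). The total cost is O(n log n) coefficient operations, versus O(n^2) for the schoolbook method.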

The techniques for fast symbolic computation have some of their roots in numerical analysis (fast Fourier transform, Newton iteration) and computer science (e.g., divide-and-conquer, work balancing). A somewhat unique ingredient in computer algebra, however, is the omnipresent scheme of modular algorithms.
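The modular scheme can be sketched as: reduce the input modulo several small primes, solve the problem in each prime field, and reconstruct the true result via the Chinese remainder theorem. The toy Python example below does this for a 2x2 integer determinant (a problem chosen only for illustration; the function names and prime choices are ours).

```python
from math import prod

def crt(residues, moduli):
    """Chinese remainder reconstruction for pairwise coprime moduli."""
    M = prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)  # pow(.., -1, m): modular inverse
    return x % M

def det2_modular(a, b, c, d, primes):
    """Determinant a*d - b*c computed mod each prime, lifted by CRT.
    The product of the primes must exceed twice the true |determinant|
    for the symmetric lift below to be correct."""
    residues = [((a % p) * (d % p) - (b % p) * (c % p)) % p for p in primes]
    M = prod(primes)
    x = crt(residues, primes)
    return x - M if x > M // 2 else x  # symmetric representative
```

The point of the detour is that all intermediate arithmetic happens on word-sized numbers, avoiding the coefficient growth that plagues direct computation over the integers.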

The tutorial will start with basic dense univariate polynomial arithmetic via fast Fourier transforms, Newton iteration, and divide-and-conquer methods for evaluation and interpolation. We will then discuss the powerful algorithmic paradigm of modular algorithms. Subsequently, we will advance into the realm of greatest common divisors and factorization. We will conclude the algorithmic journey with higher-level applications from symbolic integration and symbolic summation.
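As a taste of the Newton iteration step, here is a hedged Python sketch that inverts a power series f modulo x^n by the quadratically convergent iteration g <- g(2 - fg), the key subroutine behind fast polynomial division. For brevity the truncated products use schoolbook multiplication; a fast variant would use FFT-based multiplication. The names are ours, and f[0] = 1 is assumed so no field inverses are needed.

```python
def poly_mul_trunc(f, g, n):
    """Schoolbook product of coefficient lists, truncated mod x^n."""
    out = [0] * min(n, len(f) + len(g) - 1)
    for i, a in enumerate(f):
        if i >= n:
            break
        for j, b in enumerate(g):
            if i + j >= n:
                break
            out[i + j] += a * b
    return out

def series_inverse(f, n):
    """Inverse of f mod x^n by Newton iteration g <- g*(2 - f*g),
    doubling the precision in each step; assumes f[0] == 1."""
    assert f[0] == 1
    g = [1]
    prec = 1
    while prec < n:
        prec = min(2 * prec, n)
        fg = poly_mul_trunc(f[:prec], g, prec)
        t = [-c for c in fg]  # 2 - f*g mod x^prec
        t[0] += 2
        g = poly_mul_trunc(g, t, prec)
    return g
```

For instance, inverting f = 1 - x to precision 4 recovers the geometric series 1 + x + x^2 + x^3. With fast multiplication, the whole iteration costs only a constant factor more than a single multiplication at the target precision.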

The techniques presented in the tutorial work well not only for polynomial arithmetic: they can be extended (with some notable exceptions) to integer arithmetic, and the same or similar techniques often apply in linear algebra as well.
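A classical instance of a divide-and-conquer technique that carries over from polynomials to integers is Karatsuba multiplication, which replaces four half-size products by three. A minimal Python sketch for non-negative integers (the threshold and names are arbitrary choices of ours):

```python
def karatsuba(x, y):
    """Divide-and-conquer integer multiplication: three recursive
    half-size products instead of four, giving cost O(n^1.585)."""
    if x < 2**32 or y < 2**32:
        return x * y  # base case: machine-sized operand
    m = max(x.bit_length(), y.bit_length()) // 2
    xh, xl = x >> m, x & ((1 << m) - 1)  # split x = xh*2^m + xl
    yh, yl = y >> m, y & ((1 << m) - 1)
    hh = karatsuba(xh, yh)
    ll = karatsuba(xl, yl)
    mid = karatsuba(xh + xl, yh + yl) - hh - ll  # the Karatsuba trick
    return (hh << (2 * m)) + (mid << m) + ll
```

Unlike the polynomial case, integer arithmetic must handle carries, which is one reason some polynomial techniques do not transfer verbatim.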

Optimizations make asymptotically fast algorithms practical and powerful. Determining the break-even points between classical and fast algorithms for hybrid schemes can be challenging and platform dependent. The Golden Rules of Schönhage et al. for the development of fast algorithms apply, notably:

• Do care about the size of O(1)!
• Do not waste a factor of two!
• Don't forget the algorithms in object design!
• Clean results by approximate methods are sometimes much faster!