Algorithms and Barriers for Fast Matrix Multiplication

Speaker

Josh Alman, November 19, 2021.

Abstract

Matrix multiplication is one of the most basic algebraic operations. Since Strassen’s surprising breakthrough algorithm from 1969, which showed that matrices can be multiplied faster than the straightforward cubic-time algorithm, algorithmic problems from nearly every area of computer science have been sped up by clever reductions to matrix multiplication. It is popularly conjectured that two n × n matrices can be multiplied in roughly O(n^2) time, but we don’t yet have the algorithmic techniques needed to achieve this.
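
To make Strassen's idea concrete, here is a minimal illustrative sketch (not part of the talk) of his 2 × 2 block recursion in Python/NumPy: each level replaces 8 block multiplications with 7, which gives running time O(n^log2 7) ≈ O(n^2.81). The power-of-two size assumption and the `cutoff` parameter are simplifications chosen here for illustration.

```python
import numpy as np

def strassen(A, B, cutoff=64):
    """Multiply two n x n matrices (n a power of two) using Strassen's
    7-multiplication recursion; falls back to ordinary multiplication
    below the cutoff size."""
    n = A.shape[0]
    if n <= cutoff:
        return A @ B

    h = n // 2
    A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]

    # Seven recursive products instead of the naive eight.
    M1 = strassen(A11 + A22, B11 + B22, cutoff)
    M2 = strassen(A21 + A22, B11, cutoff)
    M3 = strassen(A11, B12 - B22, cutoff)
    M4 = strassen(A22, B21 - B11, cutoff)
    M5 = strassen(A11 + A12, B22, cutoff)
    M6 = strassen(A21 - A11, B11 + B12, cutoff)
    M7 = strassen(A12 - A22, B21 + B22, cutoff)

    # Recombine the seven products into the four blocks of A @ B.
    C11 = M1 + M4 - M5 + M7
    C12 = M3 + M5
    C21 = M2 + M4
    C22 = M1 - M2 + M3 + M6
    return np.block([[C11, C12], [C21, C22]])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((256, 256))
    B = rng.standard_normal((256, 256))
    assert np.allclose(strassen(A, B), A @ B)
```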

In this talk, I’ll give an overview of what’s known about matrix multiplication, including some exciting recent progress. I’ll talk about the techniques used to design these algorithms, leading up to the fastest known matrix multiplication algorithm, which I recently designed with Virginia Vassilevska Williams and which runs in time O(n^2.37286). I’ll then describe new barrier results which help to explain why we’ve been stuck for so long on this problem, and describe what kinds of insights are needed for further improvements.



