A 2023 Special Semester organized by Geordie Williamson
The Mathematical challenges in AI seminar is the successor to the Machine Learning for the Working Mathematician special semester.
The main focus of the seminars this year will be to explore the mathematical problems that arise in modern machine learning. For example, we aim to cover:
1) Mathematical problems (e.g. in linear algebra and probability theory) whose resolution would assist the design, implementation and understanding of current AI models.
2) Mathematical problems or results arising from the interpretability of ML models.
3) Mathematical questions posing challenges for AI systems.
Our aim is to attract interested mathematicians to what we see as a fascinating and important source of new research directions.
The seminar is an initiative of the Sydney Mathematical Research Institute (SMRI).
Speaker list and schedule
‘The unreasonable effectiveness of mathematics in large scale deep learning’
Greg Yang (xAI) September 13 2023
Abstract: Recently, the theory of infinite-width neural networks led to the first technology, muTransfer, for tuning enormous neural networks that are too expensive to train more than once. For example, this allowed us to tune the 6.7-billion-parameter version of GPT-3 using only 7% of its pretraining compute budget, and, with some asterisks, to obtain performance comparable to the original GPT-3 model with twice the parameter count. In this talk, I will explain the core insight behind this theory. In fact, this is an instance of what I call the Optimal Scaling Thesis, which connects infinite-size limits for general notions of “size” to the optimal design of large models in practice. I’ll end with several concrete key mathematical research questions whose resolutions will have an incredible impact on the future of AI. Watch the recording.
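To give a flavour of the muTransfer idea mentioned in the abstract, the sketch below shows how hyperparameters tuned once on a narrow proxy model might be rescaled with width. This is a minimal illustrative sketch, not code from the talk: the function name, the specific widths and learning rate, and the restriction to Adam-style 1/width rules for hidden and output weights are simplifying assumptions.

```python
# A minimal, illustrative sketch (not code from the talk) of the muTransfer idea:
# tune hyperparameters on a narrow "base" model, then rescale per-layer learning
# rates with width so the tuned values remain near-optimal for a much wider model.
# The 1/width factors below follow the Adam rules of the muP parameterization.

def mu_transfer_lrs(base_lr: float, base_width: int, target_width: int) -> dict:
    """Per-layer Adam learning rates for a wide model, transferred from a base model."""
    m = target_width / base_width  # width multiplier
    return {
        "input_weights":  base_lr,      # input layer: LR independent of width
        "hidden_weights": base_lr / m,  # hidden layers: LR shrinks like 1/width
        "output_weights": base_lr / m,  # output layer: LR shrinks like 1/width
    }

if __name__ == "__main__":
    # Hyperparameters tuned once on a width-256 proxy, reused at width 8192.
    print(mu_transfer_lrs(base_lr=1e-3, base_width=256, target_width=8192))
```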
‘Mathematical views on Modern Deep Learning Optimization’
Sadhika Malladi (Princeton University) September 28 2023
Abstract: This talk focuses on how rigorous mathematical tools can be used to describe the optimization of large, highly non-convex neural networks. We start by covering how stochastic differential equations (SDEs) provide a rigorous yet flexible model of how deep networks change over the course of training. We then cover how these SDEs yield practical insights into scaling training to highly distributed settings while preserving generalization performance. In the second half of the talk, we will explore the new deep learning paradigm of pre-training and fine-tuning large language models. We show that fine-tuning can be described by a very simple mathematical model, and these insights allow us to develop a highly efficient and performant optimizer to fine-tune LLMs at scale. The talk will focus on various mathematical tools and the extent to which they can describe modern-day deep learning. Watch the recording.
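To make the SDE picture concrete, here is a minimal sketch (not from the talk) comparing SGD with artificial Gaussian gradient noise to an Euler-Maruyama discretization of the corresponding SDE on a one-dimensional quadratic loss. The loss, noise variance, and step counts are illustrative assumptions chosen only to show the correspondence.

```python
# A minimal sketch of the SDE view of SGD: small-step SGD on a loss L behaves like the
# Euler-Maruyama discretization of
#   d theta = -grad L(theta) dt + sqrt(eta * Sigma(theta)) dW,
# where eta is the learning rate and Sigma is the minibatch gradient-noise covariance.
# Here L(theta) = 0.5 * theta^2 and the gradient noise is Gaussian with fixed variance.

import numpy as np

rng = np.random.default_rng(0)
eta, sigma2, steps = 0.01, 0.5, 5000  # learning rate, gradient-noise variance, iterations

def grad(theta):                      # gradient of the quadratic loss
    return theta

# Plain SGD with noisy gradients.
theta_sgd = 2.0
for _ in range(steps):
    g = grad(theta_sgd) + rng.normal(scale=np.sqrt(sigma2))
    theta_sgd -= eta * g

# Euler-Maruyama simulation of the corresponding SDE with time step dt = eta.
theta_sde, dt = 2.0, eta
for _ in range(steps):
    noise = rng.normal(scale=np.sqrt(eta * sigma2 * dt))
    theta_sde += -grad(theta_sde) * dt + noise

print(f"SGD iterate: {theta_sgd:+.3f}")
print(f"SDE iterate: {theta_sde:+.3f}")  # both fluctuate around the minimum at 0
```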
‘Mechanistic Interpretability & Mathematics’
Neel Nanda (Google DeepMind) October 12 2023
Abstract: Mechanistic Interpretability is a branch of machine learning that takes a trained neural network and tries to reverse-engineer the algorithms it’s learned. First, I’ll discuss what we’ve learned by reverse-engineering tiny models trained to do mathematical operations, e.g. the algorithm learned for modular addition. I’ll then discuss the phenomenon of superposition, where models spontaneously learn to exploit the geometry of high-dimensional spaces as a compression scheme, representing and computing more features than they have dimensions. Superposition is a major open problem in mechanistic interpretability, and I’ll discuss some of the weird mathematical phenomena that arise from superposition, some recent work exploring it, and open problems in the field. Watch the recording.
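As a concrete illustration of the kind of reverse-engineered modular-addition algorithm mentioned above, the sketch below implements the "Fourier / trig identity" computation directly in NumPy. It is not the talk's code; the modulus and the particular key frequencies are illustrative assumptions, not the values any specific trained model uses.

```python
# A minimal sketch of the trig-identity algorithm small transformers have been found to
# learn for modular addition (a + b) mod p: embed a and b as cos/sin waves at a few
# frequencies, combine them with cos(w(a+b)) = cos(wa)cos(wb) - sin(wa)sin(wb), and score
# each candidate answer c by cos(w(a+b-c)), which peaks exactly when c = (a+b) mod p.

import numpy as np

p = 113                     # prime modulus, as in the grokking experiments
key_freqs = [3, 7, 19]      # hypothetical "key frequencies"; real models learn a handful

def modadd_via_fourier(a: int, b: int) -> int:
    w = 2 * np.pi * np.array(key_freqs) / p
    # cos(w(a+b)) and sin(w(a+b)) assembled from per-input waves via trig identities
    cos_ab = np.cos(w * a) * np.cos(w * b) - np.sin(w * a) * np.sin(w * b)
    sin_ab = np.sin(w * a) * np.cos(w * b) + np.cos(w * a) * np.sin(w * b)
    cs = np.arange(p)
    # logit for candidate c: sum over frequencies of cos(w(a+b-c))
    logits = (cos_ab[:, None] * np.cos(w[:, None] * cs)
              + sin_ab[:, None] * np.sin(w[:, None] * cs)).sum(axis=0)
    return int(np.argmax(logits))

assert all(modadd_via_fourier(a, b) == (a + b) % p
           for a, b in [(5, 9), (100, 50), (112, 1)])
print("Fourier algorithm reproduces (a + b) mod", p)
```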
‘Formalizing Explanations of Neural Network Behaviors’
Paul Christiano (Alignment Research Center) October 26 2023
Abstract: Existing research on mechanistic interpretability usually tries to develop an informal human understanding of “how a model works,” making it hard to evaluate research results and raising concerns about scalability. Meanwhile, formal proofs of model properties seem far out of reach, both in theory and in practice. In this talk I’ll discuss an alternative strategy for “explaining” a particular behavior of a given neural network. This notion is much weaker than proving that the network exhibits the behavior, but may still provide similar safety benefits. This talk will primarily motivate a research direction and a set of theoretical questions rather than presenting results. Watch the recording.
‘Transformers for maths, and maths for transformers’
Francois Charton (Meta AI) November 23 2023
Abstract: Transformers can be trained to solve problems of mathematics. I present two recent applications, in mathematics and physics: predicting integer sequences, and discovering the properties of scattering amplitudes in a close relative of Quantum Chromodynamics. Problems of mathematics can also help us understand transformers. Using two examples from linear algebra and integer arithmetic, I show that model predictions can be explained, that trained models do not confabulate, and that carefully choosing the training distributions can help achieve better, and more robust, performance. Watch the recording.
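As an illustration of how arithmetic problems can be posed to a transformer at all, the sketch below serializes integers and an addition task as token sequences, turning the problem into ordinary sequence-to-sequence data. The base-1000 encoding, the token names, and the helper functions are illustrative assumptions, not the talk's exact setup.

```python
# A minimal sketch of encoding arithmetic for a sequence model: each integer becomes a
# sign token followed by its base-1000 digits, and a problem instance becomes a
# (source tokens, target tokens) pair that a standard seq2seq transformer can be trained on.

def encode_int(n: int, base: int = 1000) -> list[str]:
    """Serialize an integer as [sign, digit_k, ..., digit_0] tokens in the given base."""
    sign = "+" if n >= 0 else "-"
    n = abs(n)
    digits = []
    while True:
        digits.append(str(n % base))
        n //= base
        if n == 0:
            break
    return [sign] + digits[::-1]

def encode_addition_example(a: int, b: int) -> tuple[list[str], list[str]]:
    """One supervised example: source tokens for 'a + b', target tokens for the sum."""
    src = encode_int(a) + ["<op:+>"] + encode_int(b)
    tgt = encode_int(a + b)
    return src, tgt

print(encode_addition_example(1234567, -89))
# (['+', '1', '234', '567', '<op:+>', '-', '89'], ['+', '1', '234', '478'])
```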