See the calendar below for future seminars and events.
Following every Thursday seminar, attendees are welcome to join one of our SMRI Afternoon Teas, held at 2 pm on the Quadrangle Terrace, accessed through the entry in Quadrangle Lobby P and via the SMRI Common Room on Level 4.
Upcoming and current events: seminars, workshops and courses
Workshop: Statistical Learning Theory
Speaker: Yunwen Lei, University of Hong Kong
Dates & times: Friday December 5, Monday December 8, Tuesday December 9, Tuesday December 16, Wednesday December 17, 2025, 2 – 4 pm AEDT
Location: The Quad Seminar Room S418 (McRae) A14.04.S418
Abstract: This workshop introduces fundamental tools in statistical learning theory to analyze the prediction behavior of machine learning algorithms. Topics include generalization analysis—such as population risk decomposition, uniform convergence, and Rademacher complexity—as well as the convergence analysis of optimization algorithms, including stochastic gradient descent and its early-stopping strategies. Applications to neural networks are also discussed.
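As a taster for one of the listed topics (this sketch is not part of the workshop materials), the Python snippet below estimates by Monte Carlo the empirical Rademacher complexity of norm-bounded linear predictors and compares it with the classical B·max‖x‖/√n bound; the sample, dimension and norm bound B are arbitrary illustrative choices.

```python
# Illustrative sketch only: empirical Rademacher complexity of the class
# F = { x -> <w, x> : ||w||_2 <= B }. For this class the supremum is available
# in closed form:  R_hat(F) = (B/n) * E_sigma || sum_i sigma_i x_i ||_2,
# which we estimate by averaging over random sign vectors sigma.
import numpy as np

rng = np.random.default_rng(0)
n, d, B = 200, 10, 1.0               # sample size, dimension, norm bound (arbitrary)
X = rng.normal(size=(n, d))          # a toy sample

trials = 2000
sigma = rng.choice([-1.0, 1.0], size=(trials, n))   # Rademacher signs
# sup_{||w|| <= B} (1/n) sum_i sigma_i <w, x_i>  =  (B/n) * || sum_i sigma_i x_i ||_2
estimates = (B / n) * np.linalg.norm(sigma @ X, axis=1)

print(f"Monte Carlo estimate of R_hat(F): {estimates.mean():.4f}")
print(f"bound B * max_i ||x_i|| / sqrt(n): {B * np.linalg.norm(X, axis=1).max() / np.sqrt(n):.4f}")
```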
Special Seminar, ‘Gradient optimization methods: the benefits of instability’ (Part of Mathematical Science of AI Safety Focus Period)
Speaker: Peter Bartlett, UC Berkeley
Date & time: Wednesday December 10, 2025, lecture from 5:30 – 6:30 pm AEDT, with the opportunity for post-lecture discussion from 6:30 – 7:30 pm (with light refreshments)
Location: Sydney Nanoscience Hub Lecture Theatre 4002 (Messel) ***Note: Updated location***
Abstract: Deep learning, the technology underlying the recent progress in AI, has revealed some major surprises from the perspective of theory. These methods seem to achieve their outstanding performance through different mechanisms from those of classical learning theory, mathematical statistics, and optimization theory. Optimization in deep learning relies on simple gradient descent algorithms that are traditionally viewed as a time discretization of gradient flow. However, in practice, large step sizes – large enough to cause oscillation of the loss – exhibit performance advantages. This talk will review recent results on gradient descent on the logistic loss with a step size large enough that the optimization trajectory is at the “edge of stability.” We show the benefits of this initial oscillatory phase for linear functions and for multi-layer networks.
Based on joint work with Pierre Marion, Matus Telgarsky, Jingfeng Wu, and Bin Yu.
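As a toy illustration of the phenomenon the abstract describes (this is an editorial sketch, not the speaker’s code), the Python snippet below runs full-batch gradient descent on logistic loss for a separable synthetic problem and contrasts a conservative step size with a large one: with the large step the loss is non-monotone early on, yet still ends up small, because the local curvature of the logistic loss shrinks as the margins grow. All data and hyperparameters are made up for the demo.

```python
# Toy "edge of stability" demo (illustrative only): full-batch gradient descent
# on logistic loss over linearly separable data, small vs large step size.
import numpy as np
from scipy.special import expit   # numerically stable sigmoid

rng = np.random.default_rng(1)
n, d = 100, 5
X = rng.normal(size=(n, d))
y = np.sign(X @ rng.normal(size=d))          # separable labels

def loss_and_grad(w):
    margins = y * (X @ w)
    loss = np.mean(np.logaddexp(0.0, -margins))   # logistic loss, stable
    grad = -(X.T @ (y * expit(-margins))) / n     # its gradient
    return loss, grad

for eta in (0.1, 20.0):                      # conservative vs "unstable" step size
    w = np.zeros(d)
    losses = []
    for _ in range(300):
        loss, grad = loss_and_grad(w)
        losses.append(loss)
        w -= eta * grad
    ups = sum(b > a for a, b in zip(losses, losses[1:]))  # non-monotone steps
    print(f"eta={eta:5.1f}: loss increased on {ups} steps, final loss {losses[-1]:.4f}")
```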
About the speaker: Peter Bartlett is Professor of the Graduate School in Statistics and Computer Science at UC Berkeley and Principal Scientist at Google DeepMind. At Berkeley, he is the Machine Learning Research Director at the Simons Institute for the Theory of Computing, Director of the Foundations of Data Science Institute, and Director of the Collaboration on the Theoretical Foundations of Deep Learning, and he has served as Associate Director of the Simons Institute. He is President of the Association for Computational Learning and co-author with Martin Anthony of the book Neural Network Learning: Theoretical Foundations. He was awarded the Malcolm McIntosh Prize for Physical Scientist of the Year, and has been an Institute of Mathematical Statistics Medallion Lecturer, an IMS Fellow and Australian Laureate Fellow, a Fellow of the ACM, a recipient of the UC Berkeley Chancellor’s Distinguished Service Award, and a Fellow of the Australian Academy of Science.

Focus Period, ‘Mathematical Science of AI Safety’
Some aspects of intelligence are becoming a commodity. They are bought and sold by the token and piped from large datacenters hosting artificial neural networks to our phones, laptops, cars and, perhaps soon, domestic robots. However, our understanding of what neural networks do, and how they “learn”, is limited. This makes it difficult to assess the downside risks of rapid adoption of AI across the economy and in our personal lives. The goal of this focus period will be to come to grips with these questions from a mathematical perspective. Many mathematicians want to contribute, but lack a clear entry point into the subject. A primary aim will be to articulate guiding questions, in consultation with experts at the forefront of AI development. We also aim to bring together some of the most interesting thinkers in this nascent field.
Organisers: Daniel Murfet, Susan Wei & Geordie Williamson
November 3rd – December 12th, 2025, The University of Sydney
More information on the website.
Workshop: Algebraic Geometry
A workshop organised by Paolo Cascini (Imperial), Ivan Cheltsov (Edinburgh), Svetlana Makarova (ANU), Evgeny Shinder (Sheffield), & Behrouz Taji (UNSW).
February 9th – 13th, 2026, The University of Sydney
More information on the website.
