At the 10th International Summer School on AI and Big Data, Prof. Andrea Agazzi will give a talk titled "Correcting SGD for Scalable Bayesian Inference".
Besides its use as an optimization algorithm for training neural network models, Stochastic Gradient Descent (SGD) has been widely applied as a scalable algorithm for Bayesian inference and uncertainty quantification. This approach is particularly appealing in the large-dataset regime, where most traditional Markov chain Monte Carlo (MCMC) samplers become computationally intractable.
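To make the idea concrete, here is a minimal sketch of an SGD-style sampler: stochastic gradient Langevin dynamics (SGLD) on a toy conjugate Gaussian model whose exact posterior is known. This is an illustration only, not the speaker's method; the model, parameter values, and the deliberately coarse step size are all assumptions chosen to make the accuracy limitation visible.

```python
# Illustrative sketch: stochastic gradient Langevin dynamics (SGLD), an
# SGD-style sampler, on a conjugate Gaussian model with a known posterior.
# All model choices and parameter values are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

# Data x_i ~ N(theta_true, sigma^2) with a N(0, tau^2) prior on theta.
N, sigma, tau, theta_true = 10_000, 1.0, 10.0, 2.0
x = rng.normal(theta_true, sigma, size=N)

# Exact conjugate posterior, for reference.
var_post = 1.0 / (1.0 / tau**2 + N / sigma**2)
mu_post = var_post * x.sum() / sigma**2

def grad_log_post(theta, batch):
    """Unbiased minibatch estimate of the log-posterior gradient."""
    return -theta / tau**2 + (N / len(batch)) * np.sum(batch - theta) / sigma**2

# SGLD: an SGD step on the log posterior plus Gaussian noise of variance eps.
eps, n_batch, n_iter, burn = 4e-6, 100, 50_000, 2_000
theta, samples = 0.0, []
for t in range(n_iter):
    batch = x[rng.integers(0, N, size=n_batch)]
    theta += 0.5 * eps * grad_log_post(theta, batch) + rng.normal(0.0, np.sqrt(eps))
    if t >= burn:
        samples.append(theta)
samples = np.asarray(samples)

print(f"exact posterior: mean={mu_post:.3f}, var={var_post:.2e}")
print(f"SGLD iterates  : mean={samples.mean():.3f}, var={samples.var():.2e}")
# The mean matches, but the variance overshoots (here roughly by 2x): the
# minibatch gradient noise is not accounted for by the plain update.
```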
In this talk, I will first introduce the use of SGD for Bayesian inference and discuss some limitations of these MCMC methods in terms of accuracy; I will then show how such limitations can be overcome, asymptotically, through a post-hoc correction of the algorithm's output. Finally, time permitting, I will discuss how this correction method can be naturally extended to further sources of bias in covariance estimation, such as the (random) approximation of intractable integrals in the likelihoods of Generalized Linear Mixed Models (GLMMs). The talk is based on joint work with Sayan Mukherjee and Samuel Berchuck.
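For a flavor of what a post-hoc correction can look like, the snippet below continues the SGLD sketch above. It is a generic illustration under the sketch's toy-model assumptions, not necessarily the correction developed in the talk: for a quadratic log-posterior with curvature H and minibatch gradient-noise variance S, the stationary variance of the iterates exceeds the posterior variance 1/H by roughly eps * S / (4 * H), a term that a plug-in estimate can subtract after the run.

```python
# Generic post-hoc variance correction, continuing the SGLD sketch above
# (reuses N, sigma, tau, x, eps, n_batch, samples, grad_log_post, var_post).
H_hat = N / sigma**2 + 1.0 / tau**2   # log-posterior curvature (known here)
grad_draws = np.array([
    grad_log_post(samples.mean(), x[rng.integers(0, N, size=n_batch)])
    for _ in range(500)
])
S_hat = grad_draws.var()              # minibatch gradient-noise variance
var_corrected = samples.var() - eps * S_hat / (4.0 * H_hat)
print(f"corrected var  : {var_corrected:.2e}  (exact: {var_post:.2e})")
```

In this toy example the correction recovers the exact posterior variance up to discretization and Monte Carlo error; the talk concerns corrections that are asymptotically exact in more general settings.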
Andrea Agazzi is an Assistant Professor in the Mathematics Department at the University of Pisa. He received his PhD in Theoretical Physics from the University of Geneva and subsequently joined Duke University as a Griffiths Assistant Research Professor. Before that, he obtained his BSc in Physics at ETH Zurich and his MSc in Theoretical Physics at Imperial College London. His main research focus is applied probability theory, using techniques from statistical mechanics and stochastic analysis to gain insight into the (stochastic) behavior of complex dynamical models emerging in real-world applications. For example, he has worked on scaling limits of machine learning models viewed as interacting particle systems, on the behavior of large networks of chemical reactions, focusing on the relations between their stochastic dynamics and their structure, and on stochastic approximations of complex fluid models.