
Fall 2022 Data Science Seminar Series: Preetum Nakkiran: The Deep Bootstrap Framework

November 21, 2022

2:00 pm

Location: Blocker 220


Also online via Zoom:
Meeting ID: 998 4499 3279
Password: 724615

Speaker: Preetum Nakkiran, Ph.D., Research Scientist at Apple

Faculty Host: Venkatesh Shankar, MKTG

Abstract: We propose a new framework for reasoning about generalization in deep learning. The core idea is to couple the Real World, where optimizers take stochastic gradient steps on the empirical loss, to an Ideal World, where optimizers take steps on the population loss. This leads to an alternate decomposition of test error into: (1) the Ideal World test error plus (2) the gap between the two worlds. If the gap (2) is universally small, this reduces the problem of generalization in offline learning to the problem of optimization in online learning. We then give empirical evidence that this gap between worlds can be small in realistic deep learning settings, in particular supervised image classification. For example, CNNs generalize better than MLPs on image distributions in the Real World, but this is “because” they optimize faster on the population loss in the Ideal World. This suggests our framework is a useful tool for understanding generalization in deep learning, and lays a foundation for future research. This talk is based on this paper.
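The Real World / Ideal World coupling described above can be illustrated with a minimal numerical sketch. The toy problem below (linear regression with Gaussian data, hypothetical parameters chosen for illustration, not from the talk) runs the same SGD procedure twice: once on minibatches resampled from a fixed empirical training set (Real World) and once on fresh samples drawn from the population at every step (Ideal World), then reports the gap between the two population losses.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy population: y = x @ w_true + noise
d = 10
w_true = rng.normal(size=d)

def sample(n):
    """Draw n fresh (x, y) pairs from the population distribution."""
    X = rng.normal(size=(n, d))
    y = X @ w_true + 0.1 * rng.normal(size=n)
    return X, y

def pop_loss(w, n_eval=5000):
    """Estimate the population (test) loss of parameters w."""
    X, y = sample(n_eval)
    return np.mean((X @ w - y) ** 2)

def sgd_step(w, X, y, lr=0.01):
    """One stochastic gradient step on the squared loss."""
    grad = 2 * X.T @ (X @ w - y) / len(y)
    return w - lr * grad

steps, batch = 500, 32

# Real World: a fixed finite training set; minibatches are resampled from it,
# so the optimizer sees the empirical loss.
X_tr, y_tr = sample(1000)
w_real = np.zeros(d)
for _ in range(steps):
    idx = rng.integers(0, len(y_tr), size=batch)
    w_real = sgd_step(w_real, X_tr[idx], y_tr[idx])

# Ideal World: every step sees a fresh batch from the population,
# so the optimizer effectively steps on the population loss.
w_ideal = np.zeros(d)
for _ in range(steps):
    Xb, yb = sample(batch)
    w_ideal = sgd_step(w_ideal, Xb, yb)

real_err = pop_loss(w_real)
ideal_err = pop_loss(w_ideal)
print(f"Real World test loss:  {real_err:.4f}")
print(f"Ideal World test loss: {ideal_err:.4f}")
print(f"bootstrap gap:         {real_err - ideal_err:.4f}")
```

In this easy setting the gap is tiny, so the Real World test error is essentially explained by how fast SGD optimizes the population loss — the reduction from offline generalization to online optimization that the abstract describes. The talk's claim is that a similar phenomenon holds empirically in realistic deep learning settings.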

Biography: Dr. Preetum Nakkiran is a Research Scientist at Apple, in the Machine Learning Research Group. His research builds conceptual tools for understanding learning systems, including deep learning—using both theory and experiment as appropriate. His past works include Deep Double Descent, the Deep Bootstrap Framework, and Distributional Generalization. Preetum obtained his PhD in Computer Science at Harvard, advised by Boaz Barak and Madhu Sudan. During his PhD, he co-founded the Harvard ML Foundations Group, and co-ran the corresponding seminar series. He did his postdoc at UCSD with Misha Belkin, and was part of the NSF/Simons Collaboration on the Theoretical Foundations of Deep Learning. He has also worked with Google Research and OpenAI, and is the prior recipient of the Google PhD Fellowship and the NSF GRFP. Dr. Nakkiran did his undergraduate work in EECS at UC Berkeley.


For more information about the TAMIDS Seminar Series, please contact Ms. Jennifer South.