Hosted by the DSI Foundations of Data Science Center


Speaker

Andrew Gordon Wilson, Professor, Courant Institute of Mathematical Sciences and Center for Data Science, NYU

Seminar Chairs:

  • Micah Goldblum, Assistant Professor, Department of Electrical Engineering, Columbia University
  • Daniel Hsu, Co-Chair, DSI Foundations Center; Associate Professor of Computer Science, Columbia Engineering

Event Details

Wednesday, April 9, 2025 (1:30 PM – 2:30 PM ET)

Location: Armen Avanessians Data Science Institute Conference Room

Address: Northwest Corner Building, 14th Floor – 550 W 120th St, New York, NY 10027

REGISTRATION DEADLINE: The Columbia Morningside campus is open to the Columbia community. If you do not have an active CUID, you must register by 12:00 PM the day before the event.


Talk Information

Deep Learning is Not So Mysterious or Different

Abstract: Deep neural networks are often seen as different from other model classes by defying conventional notions of generalization. Popular examples of anomalous generalization behaviour include benign overfitting, double descent, and the success of overparametrization. We argue that these phenomena are not distinct to neural networks, or particularly mysterious. Moreover, this generalization behaviour can be intuitively understood, and rigorously characterized using long-standing generalization frameworks such as PAC-Bayes and countable hypothesis bounds. We present soft inductive biases as a key unifying principle in explaining these phenomena: rather than restricting the hypothesis space to avoid overfitting, embrace a flexible hypothesis space, with a soft preference for simpler solutions that are consistent with the data. This principle can be encoded in many model classes, and thus deep learning is not as mysterious or different from other model classes as it might seem. However, we also highlight how deep learning is relatively distinct in other ways, such as its ability for representation learning, phenomena such as mode connectivity, and its relative universality.
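Background note (general context, not a statement of the specific results presented in the talk): one standard Occam-style countable hypothesis bound of the kind the abstract refers to takes the following form. For a countable hypothesis class H, a prior p(h) with \sum_{h} p(h) \le 1, a loss bounded in [0, 1], and n i.i.d. training examples, with probability at least 1 - \delta over the sample, every h \in H satisfies

R(h) \le \hat{R}(h) + \sqrt{\frac{\ln(1/p(h)) + \ln(1/\delta)}{2n}},

where R(h) is the population risk and \hat{R}(h) the empirical risk. The bound remains informative for arbitrarily large hypothesis spaces provided the learned hypothesis carries substantial prior mass p(h), which is one way to formalize a soft preference for simpler solutions without restricting the hypothesis space.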

Bio: Andrew Gordon Wilson is a Professor at the Courant Institute of Mathematical Sciences and Center for Data Science at New York University. He is interested in developing a prescriptive foundation for building intelligent systems. His work includes the discovery of mode connectivity, the SWA optimization procedure, the GPyTorch library for scalable Gaussian processes, informative generalization bounds for billion-parameter neural networks, Bayesian optimization for biological sequence design, the first LLM for time-series forecasting, and work on generalization under distribution shift, numerical linear algebra, equivariances, and Bayesian model selection, as well as many contributions to Bayesian deep learning. His website is https://cims.nyu.edu/~andrewgw.