About

This seminar series invites experts from across the country to Columbia to present cutting-edge research in Machine Learning and Artificial Intelligence. Running the gamut from theory to empirics, the seminar provides a single, unified space that brings together the ML/AI community at Columbia. Topics of interest include, but are not limited to, Language Models, Optimization for Deep Learning, Reinforcement and Imitation Learning, Learning Theory, Interpretability and AI Alignment, AI for Science, Probabilistic ML, and Bayesian Methods.

Hosts & Co-Sponsors: DSI Foundations of Data Science Center; Department of Statistics, Arts and Sciences, Columbia Engineering

Registration

Registration is preferred for all CUID holders. If you do not have an active CUID, registration is required and must be completed by 12:00 PM the day before the seminar. Unfortunately, we cannot guarantee entrance to Columbia’s Morningside campus if you register after that deadline. Thank you for understanding!

Please contact Erin Elliott, DSI Events and Marketing Coordinator, at ee2548@columbia.edu with any questions.

Register

Next Seminar

Date: Friday, December 12, 2025 (11:00 AM – 12:00 PM)

Location: Columbia School of Social Work, Room 311/312


Jason Weston, Research Scientist at Facebook, NY, and Visiting Research Professor at NYU

Title: Self-Improvement of LLMs

Abstract: Classically, learning algorithms were designed to improve their performance by updating their parameters (weights), while keeping other components, such as the training data, loss function, and algorithm, fixed. We argue that fully intelligent systems will be able to self-improve across all aspects of their makeup. We describe recent methods that enable large language models (LLMs) to self-improve in various ways, increasing their performance on tasks relevant to human users. In particular, we describe methods whereby models are able to create their own training data (self-challenging), train on this data using themselves as their own reward model (self-rewarding), and train themselves to better provide their own rewards (meta-rewarding). We then discuss the future of self-improvement for AI and key challenges that remain unresolved.
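
For readers unfamiliar with the self-rewarding idea mentioned in the abstract, the following is a minimal, runnable toy sketch (not the speaker's implementation): the same model is assumed to both generate candidate responses and judge them, producing preference pairs that could then be used for further training. The generate_responses and judge_score functions are illustrative placeholders, not real LLM calls.

import random
from dataclasses import dataclass

@dataclass
class PreferencePair:
    prompt: str
    chosen: str
    rejected: str

def generate_responses(model, prompt, n=4):
    # Placeholder: a real system would sample n completions from the LLM.
    return [f"{prompt} -> draft {i} (model={model})" for i in range(n)]

def judge_score(model, prompt, response):
    # Placeholder: the same model scores its own output ("LLM-as-a-judge").
    return random.random()

def self_rewarding_iteration(model, prompts):
    """Build preference pairs by scoring the model's own generations;
    a real pipeline would then fine-tune the model on these pairs and repeat."""
    pairs = []
    for prompt in prompts:
        candidates = generate_responses(model, prompt)
        ranked = sorted(candidates, key=lambda r: judge_score(model, prompt, r))
        pairs.append(PreferencePair(prompt=prompt, chosen=ranked[-1], rejected=ranked[0]))
    return pairs

if __name__ == "__main__":
    for pair in self_rewarding_iteration("toy-llm", ["Summarize the seminar series."]):
        print(pair)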


Upcoming Seminar Schedule (Spring 2026)

Please save the dates and times below if you plan to attend the seminar series.

Friday, February 6 (11:00 AM – 12:00 PM) 

  • Location: School of Social Work, Room C03
  • Speaker: Lerrel Pinto, Assistant Professor of Computer Science at NYU Courant 

Friday, February 20 (11:00 AM – 12:00 PM)

Friday, March 13 (11:00 AM – 12:00 PM)

  • Location: School of Social Work, Room C03
  • Speaker: Danqi Chen, Associate Professor of Computer Science, Co-Leader of Princeton NLP Group, Associate Director of Princeton Language and Intelligence, Princeton University

Friday, March 27 (11:00 AM – 12:00 PM)

Friday, April 10 (11:00 AM – 12:00 PM) 

  • Location: School of Social Work, Room C03
  • Speaker: He He, Associate Professor of Computer Science and Data Science, NYU