The Columbia University Data Science Institute’s Foundations of Data Science Center is hosting a workshop designed to foster collaboration and knowledge sharing among researchers. Through talks and posters, Columbia scholars will showcase their work in the diverse realms of data science methods and applications.


Event Details

Friday, April 26, 2024 (9:30 AM – 1:00 PM ET) – In-Person Only

Location: School of Social Work Building (Room 311-312)
Address: 1255 Amsterdam Ave, New York, NY 10027

REGISTER HERE


Program

9:30 AM: Keynote: Assaf Zeevi, Columbia Business School (60 min)

Title: Robustness and Adaptivity in Bandit Algorithms

Abstract: Multi-armed bandits are widely studied abstractions of sequential decision-making problems that allow, among other things, a straightforward study of the so-called exploration-exploitation tradeoff in online learning. Various families of algorithms have been developed over the years, and many are now deployed at scale at various technology companies.

In this talk we will present a few vignettes concerning the robustness and adaptivity properties of common multi-armed bandit learning algorithms. In particular, we will examine cases in which a “breakdown” phenomenon is observed, and elucidate distinctions among common algorithms in the manner in which they “break down” or exhibit “robustness.”
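For readers unfamiliar with the exploration-exploitation tradeoff the abstract refers to, here is a minimal epsilon-greedy bandit sketch (purely illustrative; it is not drawn from the talk, and the arm probabilities and parameters are assumptions):

```python
import random

def epsilon_greedy_bandit(arm_means, n_rounds=5000, epsilon=0.1, seed=0):
    """Simulate an epsilon-greedy agent on a Bernoulli multi-armed bandit.

    arm_means: per-arm success probabilities (unknown to the agent).
    With probability epsilon the agent explores a random arm;
    otherwise it exploits the arm with the highest estimated mean.
    """
    rng = random.Random(seed)
    n_arms = len(arm_means)
    counts = [0] * n_arms    # number of pulls per arm
    values = [0.0] * n_arms  # running mean reward per arm
    total_reward = 0.0
    for _ in range(n_rounds):
        if rng.random() < epsilon:
            arm = rng.randrange(n_arms)              # explore
        else:
            arm = max(range(n_arms), key=lambda a: values[a])  # exploit
        reward = 1.0 if rng.random() < arm_means[arm] else 0.0
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]  # incremental mean
        total_reward += reward
    return counts, values, total_reward
```

Under this simple scheme the agent quickly concentrates its pulls on the better arm, while the epsilon fraction of random pulls guards against locking onto a poor early estimate; the "breakdown" and robustness questions in the talk concern how such algorithms behave when their modeling assumptions fail.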

10:30 AM: Coffee Break (15 min)

10:45 AM: Short Talks (20 min total – 10 min each)

  • Shubhangi Ghosh, Statistics – Minimax Risk of Sparse Linear Regression and Higher-Order Asymptotics
  • Tianyu Wang, IEOR – On the Need of a Modeling Language for Distribution Shifts: Illustrations on Tabular Dataset

11:05 AM: Posters and Lunch (1 hour, 55 min)

  • Lunch served at ~11:30-11:45 AM

1:00 PM: End of Event


List of Posters

P01: Fair algorithms with unfair predictions
P02: The Effect of Model Capacity on the Emergence of In-Context Learning
P03: Optimizing Adaptive Experiments: A Unified Approach to Regret Minimization and Best-Arm Identification
P04: Efficient model evaluation on out-of-support distribution shifts
P05: Fast Hyperboloid Decision Tree Algorithms
P06: Bayesian Priors for Efficient Multi-task Representation Learning
P07: Robust Auction Design with Support Information
P08: Analyzing the Impact of Power on Emotion Through Computer Vision and Natural Language Processing
P09: Model Assessment and Selection under Temporal Distribution Shift
P10: Advancing Synthetic Control: Incorporating Donor Pool and Feature Selection
P11: Constrained Learning for Causal Inference and Semiparametric Statistics
P12: Fourier-Based Bounds for Wasserstein Distances and Their Applications in Data-Driven Problems
P13: Lower Bounds on Block-Diagonal SDP Relaxations for the Clique Number of the Paley Graphs
P14: Leveraging Offline Data for Online Decision-Making in Bayesian Multi-Armed Bandits
P15: Attend in the Lab
P16: Replay can provably increase forgetting
P17: Neyman-Pearson Multi-class Classification via Cost-sensitive Learning
P18: Transformers Learn State-Action Values from Sequence Predictions
P19: Inference of Chromosomal Instability in Cancer from DNA-sequencing Data
P20: On the Limited Representational Power of Value Functions and its Links to Statistical (In)Efficiency
P21: Posterior Sampling via Autoregressive Generation
P22: Minimax Risk of Sparse Linear Regression and Higher-Order Asymptotics
P23: On the Need of a Modeling Language for Distribution Shifts: Illustrations on Tabular Dataset