Tuesday, April 29, 2025, 9:30 am – 1:00 pm
The Columbia University Data Science Institute’s Foundations of Data Science Center is hosting a workshop designed to foster collaboration and knowledge sharing. Through talks and posters, Columbia researchers will showcase their work in the diverse realms of data science methods and applications.
Location: Columbia Engineering Innovation Hub
Address: 2276 12th Ave, New York, NY 10027 – Manhattanville
Registration Deadline: If you do not have an active CUID, the deadline to register is 12:00 PM the day before the event.
Christopher Harshaw, Assistant Professor of Statistics, Graduate School of Arts and Sciences, Columbia University
Talk Title: The Conflict Graph Design: Estimating Causal Effects Under Network Interference
Abstract: From political science and economics to public health and corporate strategy, the randomized experiment is a widely used methodological tool for estimating causal effects. In the past 15 years or so, there has been growing interest in network experiments, where subjects are presumed to interact during the experiment and their interactions are of substantive interest. While the literature on interference has focused primarily on unbiased and consistent estimation, designing randomized network experiments to ensure tight rates of convergence is relatively under-explored. Not only are the optimal rates of estimation for different causal effects under interference an open question, but previously proposed designs have been created in an ad hoc fashion. In this talk, I will present a new experimental design for network experiments called the “Conflict Graph Design” which, given a pre-specified causal effect of interest and the underlying network, produces a randomization over treatment assignments with the goal of increasing the precision of effect estimation. Not only does this design attain improved rates of consistency for several causal effects of interest, but it also provides a unifying approach to designing network experiments. We also provide consistent variance estimators and asymptotically valid confidence intervals, which facilitate inference on the causal effect under investigation. Joint work with Vardis Kandiros, Charis Pipis, and Costis Daskalakis at MIT.
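To make the setting concrete, the sketch below illustrates a *baseline* design for network experiments, graph cluster randomization, not the Conflict Graph Design presented in the talk. Units that interact are grouped into clusters, treatment is randomized at the cluster level so most neighbors share an arm, and a difference-in-means estimate is computed. The network, the greedy clustering, and the outcome model with a spillover term are all hypothetical choices made for this toy example.

```python
import random

random.seed(0)

# A small hypothetical interaction network as an adjacency list.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (6, 7), (7, 8)]
n = 9
adj = {i: set() for i in range(n)}
for u, v in edges:
    adj[u].add(v)
    adj[v].add(u)

# Greedy clustering: each unclustered node absorbs its unclustered neighbors.
cluster_of, clusters = {}, []
for node in range(n):
    if node in cluster_of:
        continue
    cluster = [node] + [v for v in adj[node] if v not in cluster_of]
    for v in cluster:
        cluster_of[v] = len(clusters)
    clusters.append(cluster)

# Randomize treatment at the cluster level, so most neighbors share arms.
treated_clusters = {c for c in range(len(clusters)) if random.random() < 0.5}
z = {i: int(cluster_of[i] in treated_clusters) for i in range(n)}

# Hypothetical outcome model with interference: a direct effect of one's own
# treatment plus a spillover from the treated fraction of one's neighbors.
def outcome(i):
    spill = sum(z[j] for j in adj[i]) / max(len(adj[i]), 1)
    return 1.0 + 2.0 * z[i] + 0.5 * spill

y = {i: outcome(i) for i in range(n)}
treated = [y[i] for i in range(n) if z[i]]
control = [y[i] for i in range(n) if not z[i]]
dim = sum(treated) / len(treated) - sum(control) / len(control)
print(round(dim, 3))
```

Because neighbors tend to share treatment status under cluster randomization, the difference-in-means estimate absorbs part of the spillover; designs like the one in the talk aim to choose the randomization so that a pre-specified effect is estimated with higher precision.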
See team member information in the list of exhibitors below.
P01: Learning Interpretable Optimal Treatment Regimes Using Kolmogorov-Arnold Networks
P02: Geometric Causal Models
P03: Fast, Accurate Manifold Denoising by Tunneling Riemannian Optimization
P04: Scalable Computation of Causal Bounds
P05: Probing adaptive decision-making under uncertainty using extended Hidden Markov Models
P06: Low regret Bayesian learning for Q-functions
P07: ClusterSC: Advancing Synthetic Control with Donor Selection
P08: Experiment Design for Assortment Optimization
P09: Adaptive and Efficient Learning with Blockwise Missing and Semi-Supervised Data
P10: A real-time EEG neurofeedback platform to predict Attend level via Muse-S
P11: Synthetic Blip Effects: Generalizing Synthetic Controls for the Dynamic Treatment Regime
P12 & Short Talk: Testing Causal Models with Hidden Variables in Polynomial Delay via Conditional Independencies
P13 & Short Talk: Uncertainty Quantification for LLM-Based Survey Simulations
P14 & Short Talk: Randomized Quasi-Monte Carlo Features for Kernel Approximation
P15 & Short Talk: Distributional Matrix Completion via Nearest Neighbors in the Wasserstein Space