As AI systems become embedded in everything from health care to national infrastructure, trust isn’t a feature—it’s a foundation. That was the central theme of the second panel at Data Science Day 2025, the flagship event of the Columbia University Data Science Institute. 

The session, “Secure Data Science: Detect, Defend, Deter,” brought together researchers and policy experts who are not only building AI systems but also shaping how those systems perform under pressure, explain their actions, and recover when things go wrong. Moderator Daniel Richman, Paul J. Kellner Professor of Law at Columbia, framed the stakes clearly: “These aren’t just technical systems. They’re decision-making systems. And when they fail, the consequences aren’t abstract.”

Important Takeaways

1. AI Detecting AI: Making the Invisible Visible

Junfeng Yang, Professor of Computer Science at Columbia Engineering, introduced Raidar, a system that uses one AI model to detect content generated by another. Instead of scanning for watermarks or relying on model provenance (schemes that attackers behind AI-generated phishing, spam, and deepfakes are unlikely to opt into), Raidar takes a more intuitive approach: it prompts a second AI to rewrite the input.

If the model barely changes the content, it’s likely that the original came from a machine.

“We’re measuring how machines see other machines,” Yang said. “It’s not about blocking content—it’s about making the source legible, so that individuals or organizations can apply their own policies on AI-generated content.”

The goal is not control or restriction—but clarity. Raidar offers a way to understand what we’re looking at, especially as synthetic content becomes harder for humans to detect on their own.
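
That rewrite-and-compare idea can be sketched in a few lines of Python. The snippet below is illustrative only and is not the Raidar implementation; rewrite_with_llm is a hypothetical stand-in for a call to whatever rewriting model is available, and the 0.15 threshold is an arbitrary placeholder.

```python
from difflib import SequenceMatcher

def rewrite_with_llm(text: str) -> str:
    """Hypothetical stand-in for prompting a second model to rewrite `text`
    (e.g. "Rewrite this passage, keeping its meaning")."""
    raise NotImplementedError("plug in a call to your rewriting model here")

def modification_ratio(original: str, rewritten: str) -> float:
    # Fraction of the text the rewriting model chose to change.
    return 1.0 - SequenceMatcher(None, original, rewritten).ratio()

def looks_machine_generated(text: str, threshold: float = 0.15) -> bool:
    # Core heuristic: rewriting models tend to leave machine-generated
    # prose largely untouched, so a low modification ratio is a signal
    # that the input may not have come from a human.
    return modification_ratio(text, rewrite_with_llm(text)) < threshold
```

In practice, the choice of rewriting model, the prompt, and the comparison metric all matter, which is what makes the research problem harder than this sketch suggests.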

2. Accountability Starts with Design

Rebecca Wright, Druckenmiller Professor and Chair of Computer Science at Barnard College and Director of the Vagelos Computational Science Center, emphasized that accountability is most effective when it’s a core part of a system’s architecture—not an afterthought.

“It’s not just about identifying a single point of failure,” she said. “It’s about building systems that help us understand what happened—and what to do next.”

That includes designing infrastructure for recourse, even in open, distributed, or anonymous environments—where clear responsibility can be harder to trace, but no less important to uphold.

3. Measuring Progress Means Rethinking Metrics

Jason Healey, Senior Research Scholar at Columbia’s School of International and Public Affairs, questioned how we measure cybersecurity success. “We’re flooded with metrics,” he said, “but very few tell us whether defenders are gaining ground.”

His team is developing new frameworks to identify whether adversaries are being forced into harder, less effective tactics—a stronger signal of long-term resilience.

4. Error Is Human. Protection Should Be Systemic.

The panelists were united in rejecting the idea that users are to blame when systems fail.

“Calling it user error when someone clicks a phishing link is a cop-out,” said Healey. “That’s a system failure.”

Wright agreed: “People will make mistakes. The system should still protect them.”

Final Word

This panel was about more than cybersecurity. It was about resilience—and the structures that support it.

At the Columbia University Data Science Institute, designing for trust means asking harder questions—and doing the structural work to answer them.

More News From Data Science Day 2025

Five Insights from Data Science Day 2025

Panel 1 Recap: Where AI Is Headed — And Who’s Steering It

Panel 3 Recap: When the System Is the Patient: AI in Health Care

Keynote Highlights: Building Systems that Scale