At DSI’s inaugural Data Science and AI Exchange (DAX) event, leaders from academia, industry, and government shared insights about the realities of using advanced AI systems in both research and business practice. Three core themes emerged from the day’s conversations.
AI systems are being rapidly integrated into workflows and commercial products, but integrating AI technologies without sufficient attention to core consumer interests – like security and privacy – could transform those quick AI gains into business liabilities just as fast. Part of the challenge is that while AI systems that incorporate external (and potentially untrusted) inputs can dramatically increase efficiency, they also undermine the traditional “perimeter” security models that companies and technologies have relied on for decades. As a result, said Barnard computer science professor Rebecca Wright, it’s essential for business leaders to “make sure you know how [agentic AI] is implemented and…that anything you give it access to can be compromised or used in a way you didn’t expect.”
At the same time, “whether we have success with AI or not depends on whether consumers engage and trust it,” emphasized Bob Hedges, retired Global Chief Data Officer at Visa, in his opening keynote. Right now, though, Americans’ outlook on AI is grim, as Hedges illustrated by citing a recent NBC News poll that found that just 26% of respondents had a positive view of AI. To change that, Hedges said, companies “should choose to innovate on behalf of consumers.”
One simple way to ensure that innovations truly serve consumers, Hedges said, is to ask them what they want. Referencing Steve Jobs, Hedges said: “People are smart. Ask them. Ask them every time.”
This type of approach – one that puts ethics and responsibility at the center of AI development – is exactly what Francesca Rossi, Global Leader for Responsible AI & AI Governance at IBM, has been targeting in her work. Precisely because so many different teams are involved in the development of AI systems, she said, “Everybody has to be aware of [ethical AI] issues and what they mean.” From a practical standpoint, this means building a shared culture around ethical AI through design-thinking workshops and by ensuring the company is evaluating – and rewarding – people who embrace best practices. It also means taking a values-based approach to risk, rather than just a compliance-based one. By shifting the focus to “the risk for people and society…the consequence is that you are reducing the risks for the company,” said Rossi. “A values-based approach is more sustainable in the long term.”
Many of today’s most powerful AI systems are effectively “black boxes”: even the people who built them cannot fully account for the outputs they produce. But things become even more uncertain once these systems are released into the world. “It’s very hard to know how your end users are going to use your agent,” said Columbia computer science professor Zhou Yu. As a result, “It’s very hard to enumerate possible things to test your agents for.”
That level of uncertainty poses a major challenge for developing meaningful governance of these systems, according to Buka Gurgenidze-Steinau, Head of Centralized Intelligence at Memorial Sloan Kettering. “Governance is no longer just looking at the model performance,” she said. Instead, organizations need both a complete picture of which AI systems are in operation and answers to fundamental questions about them, said Gurgenidze-Steinau: “What are they doing? What do they have access to? And what actions can they take?”
At the same time, said Columbia law professor Talia Gillis, many of the challenges we currently face around AI governance are not unique to these technologies. “It’s humbling to think that this is not a new problem,” said Gillis. And while terms like explainability often end up “carrying a lot of weight,” she noted, that doesn’t mean there aren’t meaningful ways to think about AI systems. When evaluating them, she suggested asking three key questions: “What exactly do we want to know, who exactly are we telling this information to, and what do we expect to happen?”
At the end of the day, it’s also important to ask whether governance methods designed for humans make sense to apply to AI. “Language models have the ability to explain why they did what they did; those explanations are often lies,” said electrical engineering professor Micah Goldblum. “But humans are the same way.” On the other hand, while AI systems can cause real harm to people, they can’t truly be held accountable in the same way that humans can. As a result, said Gurgenidze-Steinau, “As fast as we’d like to go…bad actors out there are also using these tools. With governance, hopefully, we are a step ahead.”
Though the rapid pace of AI development has created challenges for traditional models of expertise and training, it also underscores how essential human judgment remains. As engineering and operations research professor Rachel Cummings explained, educators and practitioners alike are now learning alongside their students and colleagues. Although many educators initially focused on students using large language models (LLMs) to “cheat” on assignments, she said, “[the] conversation quickly became: this is a skill they’re going to need in the workforce; how do we train them to use them effectively and responsibly?” Similar questions are emerging in the research space: “What does research integrity look like? How do you list AI as a co-author? How do I effectively use it?”
In organizational settings, the need to use AI tools to complement, rather than replace, human expertise becomes even clearer, as businesses must balance core responsibilities with innovation. When it comes to AI, “If you just want to try things where something is better than nothing, this is an incredible time,” said computer science professor Eugene Wu. “If you’re doing something where there are very high quality standards, we are not there yet.”
This creates a structural need for human decision-making about where and how AI should be applied, and managers, in particular, should take note. “While all of this is changing, everyone is trying to impress their boss,” said Omar Santos, Distinguished Engineer at Cisco. “This can create risks.” As such, integrating human expertise around identity management, system safeguards, and risk assessment remains an essential business practice. AI may process information at scale, but only humans can meaningfully define the boundaries within which it can operate safely and effectively.
The inherently boundary-breaking nature of AI systems also means that they require interdisciplinary human expertise to generate genuine intellectual and business value. AI systems cannot truly be designed by engineers alone, as Rossi noted, because “There are many different actors who play a part in the life cycle of the AI system…That creates very distributed responsibility.” This means that researchers and practitioners must not only become more interdisciplinary, but also learn to communicate and collaborate across disciplines.
“People have to get good at speaking beyond their expertise,” said Cummings. “If I’m a software engineer, I can’t just communicate with other software engineers. I have to figure out how to talk to a lawyer or to an executive.” In an AI-forward environment, then, human expertise serves as the vital connective tissue for sustainable business development by aligning technical capability with ethical responsibility, organizational goals, and societal expectations. As Rossi put it: “For the success of the company, they need a responsible tech approach.”