Human Choices, AI Consequences: A Panel on Democracy’s Future
In a multidisciplinary panel on AI and elections, panelists emphasized the power—and responsibility—of human decision-making.
On October 22, students and faculty gathered for AI in the Ballot Box, a panel discussion examining AI’s impact on democratic systems, especially its potential to both support and challenge U.S. elections. The event, part of the Awakening Our Democracy series, was hosted by University Life in collaboration with the Data Science Institute, the Office of the Provost for Faculty Advancement, and Columbia Journalism School.
Melanie Bernitz, Interim Executive Vice President for University Life, opened the event by emphasizing how important it is for everyone to understand AI's role in democracy. Chris Wiggins, Chief Data Scientist at The New York Times and a member of the Data Science Institute's Executive Committee, highlighted the need for collaboration. "Addressing issues like election integrity is going to take insights and partnerships from across all disciplines," he said, also noting the transdisciplinary nature of the Data Science Institute, with which two panelists as well as the moderator are affiliated.
The Human Factor in AI: Misconceptions and Realities
Moderator Dhrumil Mehta, Associate Professor of Data Journalism, asked panelists to address common misconceptions about AI. Brian Smith, Assistant Professor of Computer Science, explained that models like ChatGPT don't understand content. "These algorithms do not have original thought; they're just replicating patterns they've seen," he said. Smith stressed that the real risk lies in how people use AI, which can produce convincing but misleading content.
He described his own experience of seeing an AI-generated image of a floating ship on social media and thinking it was real. "The danger of AI is really the danger of people understanding what leads others to buy into social media content," he said. The image of the ship was harmless, but the same skills used to create a convincing fake can just as easily be turned toward harmful content.
Questioning AI’s Power Over Voter Beliefs
Yamil Velez, Assistant Professor of Political Science, expressed caution about AI’s influence on voters. “There is a lot of concern that these are super persuasive technologies,” he noted, but research shows limited effects. Even sophisticated campaigns rarely change minds significantly. “Persuasion is incredibly difficult,” Velez emphasized. He also warned that even though AI isn’t all-powerful, its ability to amplify misinformation at a very low cost still poses risks.
Technology Reflects Human Values, Not Destiny
Alma Steingart, Assistant Professor of History, provided context by comparing AI fears to past technological anxieties. "AI is not the first technology that created public fear," she said, referencing reactions to the telegraph. She also noted that election polling was a major cause for concern when it was first adopted for U.S. presidential elections in the 1930s. Steingart warned against technological determinism. "What it does is obscure the fact that behind these machines there are people with specific incentives," she explained, underscoring that AI is shaped by human motives and social contexts.
Systemic Safeguards: The Real Work
Eugene Wu, Associate Professor of Computer Science, emphasized designing systems to manage AI safely. He compared AI’s risks to nuclear reactors, which require human-designed safeguards. “We need to think about the entire system, not just the models themselves,” Wu argued. The crucial work involves building protections to ensure reliability. “Important and interesting work lies ahead in designing these systemic protections,” he added, highlighting human responsibility in creating robust defenses.
AI Literacy: A Crucial Public Skill
The panelists called for increased public understanding of AI. Brian Smith urged people to interact with AI tools firsthand. “Try to generate content yourself, and see how convincing it can be,” he suggested. Understanding AI’s capabilities, he stressed, is key to recognizing and resisting manipulation.
Looking Ahead: A Human-Centric Vision
During the Q&A, speakers discussed AI’s potential to reshape work. Dhrumil Mehta shared insights from the journalism field, acknowledging both fears and possibilities. “There’s some trepidation around what this is going to do to our discipline,” he said, but also noted a more hopeful aspect of an AI future in journalism. “What are all the ways that we can leverage the time freed up from laborious, menial tasks?” He suggested that journalists might find more time to spend on reporting, and all fields could find similar new opportunities.
Humans Drive AI’s Future
The event emphasized a key theme: AI's impact depends on the people who create, regulate, and use it. Technology is not autonomous; it is shaped by human decisions and values. While AI brings both opportunities and risks, proactive human action will be essential to ensure it supports democratic principles rather than undermines them. AI literacy will remain an important skill after the election, as we navigate a changing presidential administration and a shifting political landscape.