From examining AI’s role in language preservation to building ethical chatbots, Myles Ingram (’22) focuses on AI designed with people in mind.

While some see AI as a threat to cultural heritage and linguistic diversity, Myles Ingram (’22), an alum and founder of the AI firm Vyrtices, sees the potential for a different future—one where technology, used deliberately, strengthens cultural connection.

In his chapter in the new anthology, “AI for Community: Preserving Culture and Tradition,” Ingram explores how AI could support languages at risk. His work draws on interviews with speakers of endangered or under-resourced languages, including Basque, Garifuna, and Yoruba, about how AI might help sustain their linguistic traditions.

Ingram and co-authors Reza Moradinezhad and Lucretia Williams visited the Data Science Institute (DSI) in November for the anthology’s launch, discussing how community-driven approaches to AI could help protect culture and language.

Balancing Promise and Risk

“Everyone I spoke to was excited about AI’s potential,” he said. “They wanted their languages to survive, to be heard. But they were also wary. They wanted the work to be done with them, not for them.”

Ingram’s chapter examines the tensions and possibilities raised in these conversations. “AI can automate resource generation and make language learning more accessible,” he notes. “But without community oversight, it risks flattening the richness and authenticity of cultural expression.”

The project reinforced a central theme of Ingram’s work as an AI software consultant: that the power and potential of AI must remain accountable to the people it serves. “I think of AI as a tool,” he says. “And like any tool, it requires human judgment to be effective. You need people to look over what it’s doing, to make sure it’s representing them and their goals accurately.”

When it comes to language preservation, that means focusing on how a model is trained, the data it uses, and the extent to which native speakers are involved. “It is essential that AI-driven initiatives prioritize transparency, community consent, and cultural integrity,” he writes.

From Researcher to Founder

Ingram, a Harvard-trained biophysicist, first came to Columbia as research staff at the Columbia University Irving Medical Center, where he applied machine learning to cancer and public health research. That experience led him to pursue an MS in Data Science part time. “Everything I learned in the classroom I could immediately apply to my lab work. DSI helped me bridge the gap between medical research and the world of AI, and it gave me the confidence and technical range to work across industries.”

Building Accountable Systems

After earning his degree in 2022, he founded Vyrtices (formerly MylesAI Consulting), a consulting firm that designs conversational systems—from customer-facing chatbots to internal knowledge tools—for clients across healthcare, e-commerce, finance, and nonprofits.

Like the people he interviewed for his book chapter, Ingram’s clients are excited about AI’s potential but want to ensure those systems are built responsibly. “Guardrails are central to every project,” he says. “You’re responsible for what your chatbot says, so we make sure it’s focused, safe, and ethical.”

Doing so, Ingram has found, involves understanding the technology’s strengths and weaknesses, with a focus on producing outcomes with concrete societal benefit. “I think the best use of AI is in specific, structured scenarios that help people in their day-to-day lives.”

Bringing AI Closer to People

Looking ahead, Ingram’s experiences as both researcher and entrepreneur have inspired him to expand into teaching and public engagement, helping people understand how AI can be used responsibly. 

“Many of the people I talk to are really interested in AI, but are intimidated by it,” he says. “I’d like to share my experience to help make the technology more accessible and inviting.”

At the anthology’s launch, Ingram did just that. He fielded questions from students, alumni, and community members—including one retired attendee who asked how to keep up with AI’s rapid changes. He acknowledged the challenge, and the need for more educational resources to help older adults stay current with the technology. In discussions of cultural preservation, he returned to accessibility and scale, suggesting smaller, community-trained models that live closer to the people who use them.

His remarks, and those of his co-authors, drove home a hopeful message: the future of AI is not just the work of major tech firms, but can be a collective effort—built through local, responsible models and people committed to keeping human benefit at the center of progress.