Glen Weyl, researcher and economist at Microsoft and co-founder of the Plurality Institute, does not think superintelligence is something we are waiting to build. He argues that it already exists—in the collective systems humanity has created, from markets and democracies to corporations and religious communities.

This view runs counter to the dominant narrative of artificial intelligence as a race between humans and machines, a story told through benchmarks from chess to professional exams. Rather than imagining AI as a rival set on overtaking us, Weyl locates superintelligence in the institutions we already inhabit.

Common Knowledge and Its Erosion

Weyl’s starting point is the idea that collective intelligence depends not just on what people believe, but on what they understand others to believe. This shared awareness underpins coordination, trust, and democratic action. Without it, groups lose the ability to act together, even when individuals privately agree. He points to political science research on China’s so-called “50 Cent Army,” which floods social platforms with distraction and division. The aim is not to persuade but to break the public’s ability to form shared understandings.

Social psychologists studying Western platforms find a similar pattern. People dramatically misperceive how tolerant their political opponents are of violence, and those errors grow as higher-order knowledge, the sense of “who knows what about whom,” breaks down. The result is a social environment where individuals may be well informed, yet society becomes less able to act collectively.

If AI systems are trained on fractured environments like these, Weyl warns, they inherit the same instability. 

Rebuilding the Social Web

Weyl sees a counterexample in Taiwan. Under sustained disinformation pressure, Taiwanese technologists and civic groups built tools that aim not simply to correct facts but to restore shared context. One platform clusters users by viewpoint and surfaces posts that earn agreement across ideological divides. It exposes where a community aligns, where it fractures, and how parallel groups perceive an issue.

Glen Weyl speaks to a full auditorium.

This logic inspired systems like Community Notes, which attach annotations to content only when people from different perspectives rate them as helpful. Weyl’s own proposals extend the idea by labeling all posts with the communities that endorse or reject them—a way of making the social meaning of content visible again.
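The bridging criterion behind such systems can be illustrated with a minimal sketch. This is a hypothetical simplification, not the production Community Notes algorithm (which uses matrix factorization over the full rating matrix); the cluster labels, threshold, and sample data here are invented for illustration:

```python
from collections import defaultdict

def bridging_score(ratings):
    """Score an annotation by its *weakest* cluster's approval.

    ratings: list of (cluster_id, helpful: bool) pairs, where
    cluster_id identifies a viewpoint community (hypothetical labels).
    Returns the minimum per-cluster helpfulness rate, so an
    annotation scores well only if every cluster rates it helpful.
    """
    totals = defaultdict(lambda: [0, 0])  # cluster -> [helpful, total]
    for cluster, helpful in ratings:
        totals[cluster][1] += 1
        if helpful:
            totals[cluster][0] += 1
    rates = [h / t for h, t in totals.values()]
    return min(rates) if rates else 0.0

def should_display(ratings, threshold=0.5):
    """Attach the annotation only when all clusters clear the bar."""
    return bridging_score(ratings) >= threshold

# An annotation praised by only one side is suppressed,
# while one endorsed across the divide is surfaced.
partisan = [("left", True), ("left", True), ("right", False)]
bridging = [("left", True), ("right", True), ("right", True)]
```

The key design choice is taking the *minimum* across clusters rather than the average: a note overwhelmingly liked by one community but rejected by another still fails, which is what makes agreement across divides, not raw popularity, the currency.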

He believes these systems could support new business models. Communities could pay to elevate material that strengthens internal cohesion; advertisers could aim for shared contexts instead of isolated individuals.

Where Researchers Can Intervene

The shift from fact-checking to context-building, from authoritative answers to situated perspectives, marks out a research agenda that is still largely open. Each of these proposals—bridging algorithms, community-labeled interfaces, pro-social ranking systems, pluralistic model alignment—raises empirical, technical, and institutional questions that have barely been mapped. Weyl pointed the researchers in the audience toward several unexplored frontiers: studying Japan’s emerging forms of digital participation, stress-testing pro-social algorithms across ideological environments, and designing interfaces that make algorithmic governance feel participatory rather than imposed. These are mission-critical tasks that sit at the center of how collective intelligence functions in a world where digital systems increasingly mediate what communities know about themselves and each other.

The Democratic Design Challenge Ahead

The challenge ahead, Weyl said, is to treat AI not as an autonomous rival but as part of a larger democratic design problem: how to build institutions where human and machine intelligence circulate together. The goal is not a machine that surpasses us, but a system that helps people, organizations, and societies think together at a higher level.

Glen Weyl (left) pictured with Eugene Wu (right).

He urged the audience to embrace the idea that “we are the superintelligence,” and that the real work is not building a rival to humanity but “learning how to be the superintelligence you want to see in the world.”


“How to Be the Superintelligence You’ve Been Waiting For” was delivered by Glen Weyl at the Data Science Institute on December 1, 2025. The lecture was hosted by the DSI Data, Media, and Society Center.