Four ways AI can upgrade our digital public square
20 researchers reveal how LLMs can build trust and collaboration online
Online conversations often feel more combative than collaborative. The loudest dominate, making it hard to deliberate, weigh trade-offs, and reach decisions we can defend. Imagine instead a space where technology helps people listen, find common ground, and build bridges. That’s the promise of large language models (LLMs): tools that hold real potential to make online spaces more collaborative and inclusive.
This optimism stems from an April 2024 convening led by Google’s Jigsaw unit, which brought together experts from nearly 20 organizations spanning academia, government, civil society, and tech. Building on the input from an additional 50 civil society experts and technologists, ‘AI and the Future of Digital Public Squares’ explores how LLMs can enhance digital public squares, highlights risks to address, and proposes directions for future research.
The concept of a digital public square draws inspiration from traditional offline public squares, which have long served as spaces for open dialogue, debate, and community building—key pillars of a healthy democracy where conflicts can be addressed constructively. Today, private social media platforms have taken on much of this role, but they were not designed to foster environments that are truly welcoming, inclusive, or safe.
For the digital public square to live up to its potential, it must be reimagined with intentional design—one that prioritizes plurality, encourages civil discourse, and creates the conditions for meaningful engagement and collective problem-solving.
Here are four ways LLMs could help create healthier digital communities.
1. Collective dialogue made easier
The Challenge: Online discussions involving thousands of participants often feel chaotic. How can we ensure everyone feels heard and finds value in participating?
Collective dialogue systems like Pol.is, Remesh, Make.org, All Our Ideas, CrowdSmart and CitizenLab/Go Vocal blend the nuance of focus groups with the scale of polling. They help communities identify shared priorities and uncover common ground. AI capabilities enable real-time multilingual discussions and can synthesize and visualize complex conversations, empowering participants and facilitators to navigate deliberations and ensure inclusion, engagement, and transparent decision-making.
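For intuition, here is a minimal sketch in Python of the general clustering approach such systems use to surface common ground: participants are grouped by their voting patterns, and statements endorsed by every group are flagged as candidate points of consensus. The vote matrix and thresholds are hypothetical, and this is not any particular platform’s implementation.

```python
import numpy as np
from sklearn.cluster import KMeans

# Toy vote matrix: rows are participants, columns are statements.
# +1 = agree, -1 = disagree, 0 = pass/unseen (hypothetical data).
votes = np.array([
    [ 1,  1, -1,  1,  0],
    [ 1,  1, -1,  0,  1],
    [-1, -1,  1,  1,  1],
    [-1,  0,  1,  1,  1],
    [ 1, -1, -1,  1,  1],
])

# Group participants into opinion clusters based on their voting patterns.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(votes)

# "Common ground" statements are those with high agreement in every cluster.
for statement in range(votes.shape[1]):
    agree_rates = [
        (votes[clusters == c, statement] == 1).mean()
        for c in np.unique(clusters)
    ]
    if min(agree_rates) >= 0.5:  # endorsed by a majority of every opinion group
        print(f"Statement {statement} is a candidate point of consensus: {agree_rates}")
```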
Evidence shows that using these tools in processes like participatory budgeting or deliberative polling leads to increased democratic engagement and satisfaction. This bolsters trust in political institutions and reduces misperceptions, helping shield the public from misinformation and divisive content that fuel polarization.
The paper notes that LLMs should complement—not replace—human-led deliberations. Trust is built not by the technology itself, but by the way it supports one or more components of a process.
2. Bridging divides with empathy and curiosity
The Challenge: How do we help people connect across divides without erasing important disagreements?
Online platforms that rely on engagement-based recommender systems often amplify the most divisive content. Bridge-based algorithms take a different approach, upranking content that resonates across ideological boundaries; they power X’s Community Notes and were recently adopted by YouTube in the US. Jigsaw’s Perspective API offers another way to rank comments, prioritizing attributes like curiosity, nuance, and respect that users value in constructive conversations.
When such tools are used, the top comments are more likely to read “I see where you’re coming from, but…” than a reactive insult. Research suggests such constructive contributions can reduce animosity and increase understanding. However, platforms must ensure they don’t over-prioritize shallow or generic content at the expense of meaningful debate. The paper points out that bridging methods would benefit from combining signals of user diversity and content quality for optimal outcomes.
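To make the bridging idea concrete, here is a minimal sketch that ranks comments by their lowest approval across two opinion groups, so only content endorsed on both sides rises to the top. The comments, approval figures, and scoring rule are illustrative assumptions; production systems such as Community Notes use richer statistical models.

```python
from dataclasses import dataclass

@dataclass
class Comment:
    text: str
    # Fraction of positive reactions from each of two (hypothetical) opinion groups.
    approval_a: float
    approval_b: float

def bridging_score(c: Comment) -> float:
    """Reward content endorsed by *both* groups; engagement-only ranking
    would instead reward whichever group reacts most strongly."""
    return min(c.approval_a, c.approval_b)

comments = [
    Comment("You people never get it.", approval_a=0.9, approval_b=0.05),
    Comment("I see where you're coming from, but here's a trade-off to consider.",
            approval_a=0.6, approval_b=0.55),
]

# The constructive comment outranks the one-sided insult despite lower raw engagement.
for c in sorted(comments, key=bridging_score, reverse=True):
    print(f"{bridging_score(c):.2f}  {c.text}")
```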
Future innovations could gamify incentives for fostering empathy-driven dialogue. Tools could also highlight shared identities or interests, such as showing users they’re both parents or part of the same local community. In addition to fostering empathy and understanding by surfacing common ground, adopting such ranking systems could benefit business goals: consumers have indicated that toxic content pushes them off platforms, while higher-quality content leads to long-term usage gains.
3. Supporting community moderators
The Challenge: Moderating online spaces is exhausting, and volunteer moderators face high rates of burnout. How can we make this work more sustainable?
LLMs could act as co-pilots for moderators, helping them enforce tailored rules or providing real-time feedback to users before they post. For instance, moderators on Reddit often juggle hundreds of reports a day, using a mix of automated tools and manual labor. LLMs could alleviate this burden by flagging harmful content based on nuanced community norms or automating complaint and appeal processes, as seen with tools like AppealMod.
In addition to reactive tools, LLMs could support proactive efforts to maintain community norms. They could generate summaries of ongoing discussions, helping moderators understand the sentiment of their community and identify emerging issues. AI bots can handle repetitive tasks like answering common questions or reminding users of community guidelines. For new moderators, LLMs can simulate comment threads, preparing them for the kinds of activity they may encounter and letting them rehearse different intervention scenarios.
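As a sketch of what a moderator co-pilot could look like, the snippet below asks a language model to check a comment against a community’s own rules and return a structured suggestion for human review. The rules, the `call_llm` callable, and the stubbed response are hypothetical placeholders, not a real platform integration.

```python
import json

COMMUNITY_RULES = """\
1. No personal attacks.
2. Stay on topic for r/example_city (hypothetical subreddit).
3. No self-promotion without mod approval.
"""

def build_triage_prompt(comment: str) -> str:
    """Compose a prompt asking the model to check a comment against local norms
    and return structured JSON a human moderator can review."""
    return (
        "You are assisting a volunteer moderator. Check the comment below against "
        "these community rules and respond with JSON of the form "
        '{"violates": bool, "rule": int | null, "explanation": str}.\n\n'
        f"Rules:\n{COMMUNITY_RULES}\nComment:\n{comment}\n"
    )

def triage(comment: str, call_llm) -> dict:
    """call_llm is any text-in/text-out client (a stand-in for a real API).
    The output is a suggestion only; the moderator makes the final call."""
    return json.loads(call_llm(build_triage_prompt(comment)))

# Example with a stubbed model so the sketch runs end to end.
fake_llm = lambda prompt: '{"violates": true, "rule": 1, "explanation": "Contains a personal attack."}'
print(triage("You're an idiot and your opinion is worthless.", fake_llm))
```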
4. Separating the bots from the humans
The Challenge: How do we distinguish real people from bots in a way that protects anonymity and ensures accessibility?
As bots become more sophisticated, tools like proof-of-humanity systems are gaining traction. Traditional CAPTCHAs are no longer effective, but newer cryptographic methods, such as zero-knowledge proofs, could allow people to verify their humanity without revealing sensitive data.
These systems safeguard digital spaces from manipulation while protecting user privacy. However, risks like coercion, data breaches, and exclusion of marginalized communities must be addressed to maintain trust. For proof-of-humanity systems to succeed, they must adhere to principles of self-sovereignty and inclusivity.
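For a feel of the underlying cryptography, here is a toy, single-round Schnorr identification protocol: the prover convinces the verifier that they know a secret credential without ever revealing it. The tiny group parameters are for illustration only; real proof-of-humanity schemes would use large standardized groups or elliptic curves and more elaborate credential issuance.

```python
import secrets

# Toy Schnorr identification over a tiny group (illustration only; real systems
# use large, standardized parameters).
p, q, g = 23, 11, 2          # g generates the subgroup of prime order q mod p

# Prover's long-term credential: secret x, public key y. The secret could stand in
# for "I hold a credential issued to a verified human" without revealing which one.
x = secrets.randbelow(q - 1) + 1
y = pow(g, x, p)

# One round of the protocol.
r = secrets.randbelow(q - 1) + 1      # prover's ephemeral nonce
t = pow(g, r, p)                      # commitment sent to the verifier
c = secrets.randbelow(q)              # verifier's random challenge
s = (r + c * x) % q                   # prover's response (reveals nothing about x by itself)

# Verifier learns only that the prover knows x, never x itself.
assert pow(g, s, p) == (t * pow(y, c, p)) % p
print("proof accepted")
```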
More experiments needed
The report offers extensive avenues for future experimentation and research with these tools, such as:
In the short term (1-2 years)
Gamify bridging: Explore rewarding users for creating bridging content (e.g., badges, points, or content amplification).
Open-source tools for deliberation: Create user-friendly, adaptable tools for collective dialogue that integrate privacy-preserving proof-of-humanity systems while ensuring accessibility.
In the medium to long-term (3+ years)
Richer collective dialogue formats: Test how adding video, audio, or synchronous interaction could improve collective dialogue systems.
Privacy-preserving proof-of-humanity systems: Develop scalable, universally accessible methods like zero-knowledge proofs to ensure authentic participation without compromising privacy.
Measuring impact: Develop metrics to assess the social cohesion and mental toll of AI-based moderation systems, including the cost of inaction and lack of participation.
Remember the risks
While LLMs offer exciting possibilities, they also bring risks that could undermine their impact:
Bias and representation: LLMs may perpetuate existing inequalities and underrepresent marginalized voices.
Privacy and surveillance: Proof-of-humanity tools and AI moderation could expose sensitive data or normalize invasive practices.
Polarization: Algorithms risk amplifying divisive content or promoting superficial consensus.
Erosion of trust: Users may distrust systems that lack transparency, especially if AI-driven tools misrepresent their inputs.
Exploitation by bad actors: LLMs can be weaponized for misinformation or spam, overwhelming public discourse.
LLMs aren’t magic wands. The report emphasizes that deploying LLMs responsibly requires collaboration among technologists, policymakers, and civil society. They have real limitations, and their deployment must prioritize fairness, transparency, and accountability.
With thoughtful design and investment, we can create spaces where people feel heard, communities thrive, and meaningful conversations flourish—bringing us closer to the ideals of an inclusive digital public square.
Lena Slachmuijlder is Executive Director of Digital Peacebuilding at Search for Common Ground and Co-Chair of the Council on Tech and Social Cohesion.