How AI can make us as smart as fish (or bees)
Deliberation and decision-making can be boosted when AI is designed to help, writes Lena Slachmuijlder
Fish and bees are constantly making really important decisions. Whether it’s avoiding being eaten by a predator or having to move en masse to a new home, these creatures manage to make decisions as a unified entity, seamlessly aligning their movements and responses.
This natural phenomenon, known as swarm intelligence, has inspired tech designers to build AI agents and chatbots to improve the way we humans deliberate and decide. With increasing polarization and widening divides between citizens and their leaders (elected or not), AI could be just what we need to generate more peaceful and effective deliberation and decision-making at scale.
Conversational swarm intelligence
Conversational Swarm Intelligence (CSI) leverages AI to enable large groups to deliberate in real time, much like swarms of fish or bees. By deploying AI agents to pass signals between multiple small groups in real time, researchers have shown that both the quality of participation and the outcomes improve compared with standard chat rooms.
Unanimous AI's research Conversational Swarm Intelligence: Enhancing groupwise deliberation illustrates how CSI works by dividing large groups into smaller, manageable subgroups for optimal deliberation and discussion. These subgroups are then interconnected through AI Observer Agents, which monitor and distill salient points from each subgroup, facilitating cross-pollination of ideas and advancing collective thinking across the entire network.
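To make that architecture concrete, here is a minimal Python sketch of one CSI round, assuming a simple turn-based flow; the Subgroup class, the summarize() stub and the observer relay are illustrative stand-ins for this article, not Thinkscape’s actual implementation.

```python
import random
from dataclasses import dataclass, field

@dataclass
class Subgroup:
    """A small deliberation room with its own message history."""
    name: str
    members: list
    messages: list = field(default_factory=list)

def summarize(messages):
    """Stub for an LLM observer call that distills the salient points
    of a subgroup's recent discussion (hypothetical helper)."""
    return f"summary of {len(messages)} recent messages"

def run_round(subgroups, turns_per_round=5):
    """One simplified CSI round: each room deliberates locally, then an
    observer agent distills its discussion and relays that summary into
    every other room, so ideas cross-pollinate across the network."""
    # 1. Local deliberation (stubbed with placeholder member turns).
    for room in subgroups:
        for _ in range(turns_per_round):
            speaker = random.choice(room.members)
            room.messages.append(f"{speaker}: ...")

    # 2. Observer agents distill each room's latest discussion.
    summaries = {room.name: summarize(room.messages[-turns_per_round:])
                 for room in subgroups}

    # 3. Relay every other room's summary back into each room.
    for room in subgroups:
        for name, summary in summaries.items():
            if name != room.name:
                room.messages.append(f"[observer from {name}] {summary}")

# Example: split 30 participants into six rooms of five and run three rounds.
participants = [f"p{i}" for i in range(30)]
rooms = [Subgroup(f"room{i}", participants[i * 5:(i + 1) * 5]) for i in range(6)]
for _ in range(3):
    run_round(rooms)
```

The key design choice is that no participant ever sees the whole crowd: each person deliberates in a small room, and only the distilled summaries travel between rooms.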
The experiments showed positive results on engagement, balanced participation and the quality of decisions. For example, participants using CSI systems like Thinkscape™ contributed 51% more content than those in standard chat rooms. This engagement was also 37% more balanced, reducing the dominance of certain participants and ensuring more equitable contribution across all members. Lastly, groups using CSI reached accurate answers at more than twice the rate of individuals working alone, which corresponds to an effective IQ increase of 28 points.
AI helps generate consensus
AI can also be an ally in working across deep divisions, where it can be challenging for us humans to uncover consensus statements that both reflect the nuance AND are yes-able across divides.
In the experiment by Bakker et al. (2022), Fine-tuning language models to generate consensus statements that maximize agreement among groups with diverse opinions, participants first wrote their individual opinions on various political and moral issues, which were used as training data for the AI model. The researchers fine-tuned a 70-billion-parameter language model to generate candidate consensus statements and trained a reward model to predict how much each participant would agree with them, allowing the candidates to be ranked by their appeal to the group. The fine-tuned model generated consensus statements that participants subsequently evaluated, with 65% preferring AI-generated statements over human-generated ones. This capability of AI to synthesize diverse viewpoints into broadly acceptable statements could be a game-changer for achieving consensus on contentious issues.
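A rough sketch of that generate-then-rank loop might look like the following; generate_candidates() and predict_agreement() are hypothetical stubs standing in for the fine-tuned language model and the reward model, and the ranking criterion shown (maximizing the lowest predicted agreement across participants) is one plausible welfare choice rather than the paper’s exact objective.

```python
def generate_candidates(opinions, n=8):
    """Stub for the fine-tuned LLM: draft n candidate consensus
    statements conditioned on all participants' written opinions."""
    return [f"Candidate statement {i} synthesizing {len(opinions)} opinions"
            for i in range(n)]

def predict_agreement(statement, opinion):
    """Stub for the reward model: predicted agreement (0..1) of the
    participant who wrote `opinion` with `statement`."""
    return 0.5  # placeholder score

def best_consensus_statement(opinions):
    """Rank candidates by the *lowest* predicted agreement across
    participants, so the chosen statement is acceptable even to the
    least-convinced group member."""
    candidates = generate_candidates(opinions)
    return max(candidates,
               key=lambda s: min(predict_agreement(s, o) for o in opinions))

opinions = ["Opinion from participant A", "Opinion from participant B"]
print(best_consensus_statement(opinions))
```

Ranking by the worst-off participant’s predicted agreement is what keeps the chosen statement yes-able across divides, rather than merely popular on average.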
Improving the quality of dialogue
AI can also play a crucial role in moderating online political conversations to foster more respectful and productive dialogues. As explained in Argyle et al. (2023), Leveraging AI for democratic discourse: Chat interventions can improve online political conversations at scale, an AI chat assistant acted as an at-scale, real-time moderator in divisive political conversations. The assistant made tailored suggestions on how to rephrase specific texts in the course of a live, online conversation, without fundamentally affecting the policy content or position taken in the messages.
The suggestions were based on three key insights known to enhance deliberation and conflict transformation (a minimal sketch of such an assistant follows the list):
restatement, simply repeating back a person’s main point to demonstrate understanding;
validation, affirming the legitimacy of the others’ views, while not requiring agreement (e.g., “I can see you care a lot about this issue”); and
politeness, modifying the statement to use more respectful language.
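Here is a minimal sketch of how such an assistant might be wired up; the call_llm() function and the prompt wording are illustrative assumptions rather than the study’s actual implementation.

```python
# Prompt template encoding the three techniques: restatement, validation, politeness.
REPHRASE_PROMPT = """You are assisting a tense political conversation.
Rewrite the draft message below so that it:
1. restates the other person's main point to show understanding,
2. validates the legitimacy of their view without requiring agreement,
3. uses polite, respectful language,
while preserving the sender's policy position and substance.

Other person's last message: {partner_message}
Sender's draft: {draft}
Suggested rewrite:"""

def call_llm(prompt: str) -> str:
    """Hypothetical LLM call; replace with a real model client."""
    return "Suggested rewrite goes here"

def suggest_rephrasing(partner_message: str, draft: str) -> str:
    """Return a suggestion the sender can accept, edit, or ignore;
    the original draft is never altered without their consent."""
    prompt = REPHRASE_PROMPT.format(partner_message=partner_message, draft=draft)
    return call_llm(prompt)

suggestion = suggest_rephrasing(
    partner_message="Gun ownership is a basic right, full stop.",
    draft="That's ridiculous, you clearly don't care about victims.",
)
print(suggestion)
```

Crucially, the suggestion is only ever offered: the sender decides whether to accept, edit or ignore it, which is what keeps the intervention from altering the policy content of the message.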
How did the participants respond? In an experiment involving discussions about gun regulation, participants who received AI-suggested rephrasings found their conversations to be more positive and were more willing to acknowledge differing perspectives. The AI suggestions were accepted two-thirds of the time, demonstrating their effectiveness in fostering respectful dialogue. Importantly, the intervention significantly increased the perceived quality of the conversation and the willingness to consider others' viewpoints, all without manipulating participants' core opinions on the policy issue. This study highlights the potential of AI to enhance democratic discourse by promoting respect and understanding in polarized discussions.
Enhancing citizens' assemblies with AI
As electoral democracy’s popularity slides in many parts of the world, there’s growing interest in citizens' assemblies. These bring together a cross-section of society to deliberate on important issues, following a specific, well-tuned methodology in which each ingredient matters for a successful outcome. AI could therefore lend a hand at various stages of the process. This was explored in Deliberative Citizens’ Assemblies: Harnessing AI to Boost the Quality of Deliberations and Decision-Making, by Sammy McKinney.
McKinney found that AI can significantly enhance the effectiveness of these assemblies by:
Improving Participant Selection: AI can analyze demographic data to ensure a diverse and representative selection of participants, enhancing the legitimacy and inclusivity of the assembly (see the sketch after this list).
Providing Real-Time Information: During the learning phase, AI can provide participants with real-time access to relevant information and expert opinions, ensuring that everyone has the necessary knowledge to engage in informed deliberation.
Facilitating Discussions: AI tools can assist facilitators in managing discussions, ensuring that all voices are heard and that the conversation remains focused and productive. This can help prevent dominance by a few individuals and promote balanced participation.
Synthesizing Deliberations: AI can analyze the discussions and generate summaries that highlight key points of agreement and disagreement. This can help participants and organizers understand the deliberative process's outcomes and identify areas where further discussion may be needed.
Predicting Consensus: By analyzing participants' inputs, AI can predict potential areas of consensus, helping the group to focus on these areas and work towards mutually acceptable solutions.
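As an illustration of the first point, here is a minimal sketch of stratified random selection (sortition) on a single demographic attribute; the stratified_sortition() function and the data shape are assumptions made for illustration, and real assembly-selection tools balance many attributes jointly.

```python
import random
from collections import defaultdict

def stratified_sortition(pool, strata_key, assembly_size, rng=random):
    """Draw an assembly whose composition mirrors the pool on one
    demographic attribute (a simplified, single-attribute sketch)."""
    by_stratum = defaultdict(list)
    for person in pool:
        by_stratum[person[strata_key]].append(person)

    selected = []
    for stratum, members in by_stratum.items():
        # Seats allocated proportionally to the stratum's share of the pool.
        seats = round(assembly_size * len(members) / len(pool))
        selected.extend(rng.sample(members, min(seats, len(members))))
    return selected

# Example: a pool of 1,000 candidates, balanced by region into 50 seats.
pool = [{"id": i, "region": random.choice(["north", "south", "east", "west"])}
        for i in range(1000)]
assembly = stratified_sortition(pool, "region", assembly_size=50)
print(len(assembly), "participants selected")
```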
A dynamic emergent field
Each month, I am learning of new entrants to this dynamic field of enhancing our ability to dialogue across divides, deliberate on hard, high-stakes issues, and reach agreements that we will all defend.
A great example of this is Common Good AI, dedicated to enhancing collective intelligence through AI integration. Their platform aims to empower groups to collaboratively ideate, debate diverse ideas, identify shared goals, and find a path forward together. By leveraging AI, Common Good AI helps communities discover shared values and create collaborative conversations to solve pressing problems such as healthcare access, gun violence, and climate adaptation. This approach fosters inclusive civic engagement, transforming how communities find common ground and solve problems collectively. This video gives a great summary of their approach.
Another example is Harmonica AI, which calls its approach ‘Multiplayer AI’ as it builds engagement and breaks down silos while delivering more efficient decision-making. It’s undertaking experiments in a wide variety of use cases, looking closely at the existing research, and drawing inspiration from the Collective Intelligence Project’s insights about what brings about higher-quality deliberation.
Yes, there are risks
It would be naïve to embrace all the ideas shared here without acknowledging the risks. And as we who are facilitators know, if AI breaks the TRUST of the very participants whose mutual respect you’re trying to build, the result would be disastrous. Here are some key risks to keep in mind:
Bias and Misinformation: AI systems can inadvertently perpetuate biases present in their foundation model training data, leading to skewed outcomes. Ensuring transparency and continuous monitoring of AI algorithms is essential to mitigate this risk.
Manipulation: AI could be designed to intentionally manipulate deliberative processes, subtly steering discussions towards predetermined outcomes. Or it could do so accidentally. Maintaining human oversight and implementing safeguards against manipulation are crucial.
Erosion of Trust: Over-reliance on AI could erode trust in human judgment and the deliberative process itself. The lack of transparency inherent in the AI ‘black box’ could grow the trust deficit in deliberative processes, especially in conflict-affected contexts.
For me, what’s key is that we keep experimenting. Let’s be conscious of the risks but also ensure we DESIGN these AI agents or chatbots to do more of what we know works in real world dialogue across divides. As this field evolves, I believe we should prioritize more experiments in highly polarized contexts within the Global Majority where the social contract is increasingly fragile. This dynamic and evolving space holds great promise, but careful, conscientious development and deployment are key.
Lena Slachmuijlder is Co-Chair of the Council on Tech and Social Cohesion and Executive Director of Digital Peacebuilding at Search for Common Ground