Here's how AI is enhancing digital participation
From dealing with toxicity to scaling up citizens' assemblies with deliberative tech, AI holds enormous potential to widen digital participation, but more testing is needed
Originally published by Laura Giesen on Democracy Technologies, an online magazine for digital democracy.
Almost two years since the launch of ChatGPT, how much has AI been strengthening digital participation tools? Which AI features are well-established, which are under development, and what visions exist for the future? Although we found few AI functionalities being widely used, there are many tools in development and testing.
What AI does to enhance participation processes
The most common AI-based functionalities in participation tools are toxicity screening, analysis of inputs and translation. The first two aim to lighten the workload of people who administer digital participation processes. They tend to be most relevant for large-scale projects with high participation rates, where the sheer number of inputs makes it challenging to keep up with and organise the information. These are the functionalities in relatively widespread use.
Toxicity Screening
Toxicity screening is used on many popular digital participation platforms, including make.org, Your Priorities, Go Vocal, Assembl and others, to flag hateful or inappropriate inputs. Flagged entries are sent to a human administrator, who decides whether to remove them or react otherwise. Participation platforms often rely on outside tools developed for this purpose. For example, Your Priorities uses the popular tool Perspective API, which is also used to manage the comments sections of several major news organisations, including The New York Times.
Your Priorities uses a similar tool to scan images and videos uploaded to the platform. When content is flagged, it is blocked immediately and sent to a human administrator, who can decide whether to unblock it. Citizens Foundation CEO Robert Bjarnason, who built the platform, explains: “While text can be harmful, it is rarely illegal, unlike images and videos, which can more easily cross legal lines.”
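To make this concrete, here is a minimal sketch of how a platform might call Perspective API to score a comment before routing it to a human moderator. The 0.8 threshold and the choice of attribute are illustrative assumptions, not the settings of any platform mentioned here.

```python
# Minimal sketch: scoring a comment with Google's Perspective API.
# The TOXICITY attribute and 0.8 threshold are illustrative choices,
# not the configuration of any platform mentioned in this article.
import requests

API_KEY = "your-api-key"  # obtained from Google Cloud
URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
       f"comments:analyze?key={API_KEY}")

def toxicity_score(text: str) -> float:
    payload = {
        "comment": {"text": text},
        "requestedAttributes": {"TOXICITY": {}},
    }
    response = requests.post(URL, json=payload, timeout=10)
    response.raise_for_status()
    return response.json()["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

def screen(text: str) -> str:
    # Flagged entries go to a human administrator, not straight to deletion.
    return "flag_for_review" if toxicity_score(text) > 0.8 else "publish"
```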
Analysis / Organisation of inputs
Most of the platforms offering toxicity screening also offer some form of AI-based analysis of participants’ inputs. In their simplest form, these tools automatically sort inputs into different categories. In some cases, inputs within each category are then grouped with similar ideas.
Toxicity screening and the automated organisation of inputs are not new. In most cases, they apply established Natural Language Processing (NLP) techniques rather than the more complex and recent large language models (LLMs). However, developments in the field of LLMs have introduced new ways of analysing inputs. Besides clustering inputs into categories, some of these tools, such as Your Priorities, now also provide written summaries of the inputs or allow users to interact with them via chatbots.
While LLM-based tools are better at producing coherent-sounding text, they are less reliable than NLP-based tools, where the same input consistently produces the same output.
The popular open-source participation platforms do not yet offer these LLM-based features. However, projects such as the Helsinki youth budget are already using external AI tools to analyse citizens’ contributions.
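To give a flavour of the classic NLP approach described above, the sketch below groups free-text inputs using TF-IDF vectors and k-means. With a fixed random seed, the same inputs always land in the same clusters, which is exactly the reproducibility that LLM-based summarisation cannot guarantee. The number of categories is an assumption a real tool would tune.

```python
# Illustrative sketch: grouping participant inputs with classic NLP
# (TF-IDF + k-means). With a fixed random_state the grouping is
# deterministic: the same inputs always produce the same clusters.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

inputs = [
    "More bike lanes on the main street",
    "Protected cycling paths to school",
    "Longer opening hours for the library",
    "A new reading room in the library",
]

vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(inputs)

kmeans = KMeans(n_clusters=2, random_state=0, n_init=10)
labels = kmeans.fit_predict(X)

for label, text in sorted(zip(labels, inputs)):
    print(label, text)
```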
AI translation
Through the inclusion of AI translation, these platforms can also be used for multilingual processes. Participants can write in their own language, and read automated translations of comments written in other languages. Once again, this is not typically a built-in feature in the larger open-source platforms. However, they allow for existing machine translation services to be integrated. For example, users of Decidim work with DeepL Pro or, in the case of EU institutions, with the EU’s E-Translation.
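As an example of how such an integration works in practice, here is a minimal sketch using DeepL’s official Python client. The language codes and comment are illustrative; a platform would pick them from user settings.

```python
# Minimal sketch: translating a participant's comment with DeepL's
# official Python client (pip install deepl). Language codes are
# illustrative; a platform would take them from user settings.
import deepl

translator = deepl.Translator("your-deepl-api-key")

comment = "Wir brauchen mehr Grünflächen im Stadtzentrum."
result = translator.translate_text(comment, target_lang="EN-GB")
print(result.text)  # e.g. "We need more green spaces in the city centre."
```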
Anomaly detection
On the make.org platform, users typically do not have to log in to participate, which helps keep barriers low. Instead, the platform uses AI-based anomaly detection to identify potential trolls, whose votes can then be removed from the platform. It also uses an algorithm to ensure that all proposals are shown to the same number of participants, avoiding a situation where early proposals have a better chance of gathering large numbers of votes.
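Make.org has not published the details of this algorithm, but the underlying idea of balancing exposure can be sketched simply: always serve the proposals that have so far been shown to the fewest participants. The following is an illustrative sketch, not make.org’s implementation.

```python
# Illustrative sketch only: one simple way to balance exposure so that
# early proposals gain no advantage. This is not make.org's actual
# algorithm, whose details are not public.
import heapq
from collections import defaultdict

impressions = defaultdict(int)  # proposal id -> times shown so far

def next_batch(proposal_ids, batch_size=3):
    # Serve the proposals with the fewest impressions so far.
    batch = heapq.nsmallest(batch_size, proposal_ids,
                            key=lambda pid: impressions[pid])
    for pid in batch:
        impressions[pid] += 1
    return batch

proposals = ["p1", "p2", "p3", "p4", "p5"]
for _ in range(4):
    print(next_batch(proposals))
```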
What’s being tested, and visions for the future
With the widespread availability of generative AI, the providers of participation tools have become creative in coming up with new applications for the technology. While some are already in a beta testing phase, none of them are in widespread use.
Image generation
The makers of the map-based participation tool Senf.app are developing a tool called Urban Utopia. It uses AI image generation to visualise ideas for urban design. In the beta version, users can upload a picture or Google Street View screenshot of a place they would like to re-design and then use simple prompts to create visualisations of how it might be transformed. In an interview with Democracy Technologies, people from Senf.app and Urbanista told us about their vision of adding virtual reality to such a tool. It would allow participants and city planners to take a walk through alternative visualisations together.
Combined AI and human policy-making
The Citizens Foundation, together with The GovLab, has developed the tool Policy Synth. Its declared purpose is to create policy-making processes in which humans collaborate with various AI agents. It allows the user to combine different AI agents to scale up processes such as the one they call Smarter Crowdsourcing, which involves tasks such as identifying problems and their root causes, then generating and ranking solutions (using pairwise voting) based on selected human inputs.
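Pairwise voting, in which participants repeatedly choose between two options, can be turned into a ranking in several ways. A simple win-rate tally is sketched below; this illustrates the general technique only, not Policy Synth’s own implementation.

```python
# Illustrative sketch: ranking options from pairwise votes by win rate.
# This shows the general technique only, not Policy Synth's own code.
from collections import Counter

votes = [  # (winner, loser) pairs collected from participants
    ("shade trees", "more benches"),
    ("shade trees", "water fountains"),
    ("water fountains", "more benches"),
    ("shade trees", "more benches"),
]

wins, appearances = Counter(), Counter()
for winner, loser in votes:
    wins[winner] += 1
    appearances[winner] += 1
    appearances[loser] += 1

ranking = sorted(appearances, key=lambda o: wins[o] / appearances[o],
                 reverse=True)
print(ranking)  # ['shade trees', 'water fountains', 'more benches']
```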
“The problem today with LLMs is they are not always correct. In the context of democracy and citizen engagement that is a very serious issue,” says Robert Bjarnason. He is convinced, however, that agentic workflows, in which several AI agents check and correct each other, will minimise the risk of errors.
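The shape of such an agentic workflow can be sketched generically: one agent drafts an answer, a second critiques it, and the draft is revised until the critic passes it. The call_llm function below is a stubbed placeholder standing in for whatever model API the workflow runs on; this is a schematic sketch, not Policy Synth’s code.

```python
# Schematic sketch of an agentic "draft, critique, revise" loop.
# call_llm is a placeholder for any chat-model API; everything here
# is illustrative, not taken from Policy Synth.
def call_llm(prompt: str) -> str:
    """Placeholder: replace with a real chat-model API call."""
    return "OK"  # stubbed so the sketch runs end to end

def answer_with_cross_check(question: str, max_rounds: int = 3) -> str:
    draft = call_llm(f"Answer carefully: {question}")
    for _ in range(max_rounds):
        critique = call_llm(
            f"Check this answer for factual errors. Reply OK if none.\n"
            f"Question: {question}\nAnswer: {draft}"
        )
        if critique.strip() == "OK":
            break  # the critic agent found nothing to correct
        draft = call_llm(
            f"Revise the answer to fix these problems.\n"
            f"Question: {question}\nAnswer: {draft}\nProblems: {critique}"
        )
    return draft
```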
AI improving formulations
Make.org and a few others are developing AI assistants to help participants in deliberative tech processes write better proposals or formulations. As make.org’s Chief AI Officer David Mas told us, the assistant would “propose reformulation, ask the citizen for context or more details about their proposal.”
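A minimal version of such an assistant is easy to sketch on top of any chat-completion API. The prompt wording and model below (here OpenAI’s client, as one example) are assumptions for illustration, not make.org’s implementation.

```python
# Illustrative sketch of a proposal-writing assistant built on a
# chat-completion API. The prompt wording and model are assumptions,
# not make.org's actual implementation.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def suggest_reformulation(proposal: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": (
                "You help citizens improve participation proposals. "
                "Suggest a clearer reformulation, and ask one question "
                "if important context or detail is missing."
            )},
            {"role": "user", "content": proposal},
        ],
    )
    return response.choices[0].message.content

print(suggest_reformulation("more trees downtown pls"))
```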
Participation process design supported by AI
AI is also being used to improve process design, according to Wietse van Ransbeeck, founder of Go Vocal. Building on the company’s large database of previous participatory processes, the AI would guide project managers through the steps of process design.
Scaling up deliberative processes with AI
How LLMs can help scale up deliberative processes such as citizens’ assemblies is getting attention from groups like make.org, deliberAIde and others. The scaling would be enabled by AI facilitators running numerous small-group discussions simultaneously, at a much lower cost than with human facilitators.
One example of a large-scale AI-supported deliberative process was conducted by Stanford’s Deliberative Democracy Lab, whose researchers ran a deliberation with over 11,000 participants across the world. In this case, the AI moderator was considerably less active than a human facilitator would be, merely monitoring discussions for toxicity and sentiment, and ensuring that everyone got a chance to speak. It did not, for example, invite people to clarify a point they were making, or adjudicate a discussion.
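The turn-taking part of such light-touch moderation is straightforward to sketch: track how long each participant has spoken and nudge the quietest when the imbalance grows. The names, times and 3:1 ratio below are illustrative assumptions, not the Stanford platform’s code.

```python
# Schematic sketch of the turn-taking part of a light-touch AI
# moderator: track speaking time, invite the quietest participant
# when the gap grows. Not the Stanford platform's actual code.
speaking_seconds = {"Ana": 240.0, "Ben": 35.0, "Chen": 180.0}

def who_to_invite(times: dict[str, float], ratio: float = 3.0) -> str | None:
    quietest = min(times, key=times.get)
    loudest = max(times, key=times.get)
    # Nudge only when the most active speaker has spoken far more.
    if times[loudest] > ratio * max(times[quietest], 1.0):
        return quietest
    return None

nudge = who_to_invite(speaking_seconds)
if nudge:
    print(f"{nudge}, would you like to share your view?")
```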
Spreading the results of citizens’ assemblies is another opportunity to harness AI. Make.org created Panoramic, a chatbot-like tool that answers questions about the proceedings of a citizens’ assembly and offers links directly to relevant sections of video recordings and written documentation. The tool can already be tested on the French Citizens’ Assembly on the End of Life.
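Functionally, such a tool resembles retrieval over timestamped assembly records: find the transcript segments most similar to the question and return them with their links. The sketch below shows that retrieval step with TF-IDF similarity; the data and method are a generic illustration, not Panoramic’s implementation.

```python
# Generic sketch of the retrieval step behind a tool like Panoramic:
# match a question against timestamped transcript segments. The data
# and method are illustrative, not make.org's implementation.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

segments = [  # (timestamp link, transcript text), illustrative data
    ("video#t=01:02:15", "The assembly discussed palliative care access."),
    ("video#t=02:41:00", "Members debated safeguards for assisted dying."),
    ("video#t=03:10:30", "A vote was held on funding for hospices."),
]

vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform(text for _, text in segments)

def ask(question: str) -> str:
    scores = cosine_similarity(vectorizer.transform([question]), matrix)[0]
    best = max(range(len(segments)), key=lambda i: scores[i])
    link, text = segments[best]
    return f"{text} (see {link})"

print(ask("What did the assembly say about funding hospices?"))
```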
Why open-source adoption is cautious
AI functionalities are less common in most of the established open-source participation platforms. For many of them, the reluctance appears to stem from wanting to stay consistent with the open-source model and independent of third-party models. The open-source Your Priorities is a notable exception, with AI functionalities that rely on several proprietary and open-source models.
AI tools being developed for DIPAS
DIPAS, a map-based open-source participation tool developed by the City of Hamburg, will be running internal tests on its own analytics tool next year: a custom pipeline consisting of both NLP and LLM tasks. According to product owner Mateusz Lendzinski, the tool would provide precise clustering into categories and subcategories, as well as their numeric distributions.
For Lendzinski, it is worth going at the right pace rather than rushing towards quick adoption. He highlighted the priority of having full control over the architecture, data and fine-tuning, as well as ensuring, via an external ethics audit, that the tool complies with legal and ethical standards.
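The category-and-subcategory structure Lendzinski describes can be approximated with two nested clustering passes, as in the sketch below. The cluster counts and example comments are arbitrary assumptions; the actual DIPAS pipeline, mixing NLP and LLM tasks, is not public.

```python
# Rough sketch of two-level clustering with counts per (sub)category.
# Cluster numbers are arbitrary; the real DIPAS pipeline is not public.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

def cluster(texts, k):
    X = TfidfVectorizer().fit_transform(texts)
    return KMeans(n_clusters=k, random_state=0, n_init=10).fit_predict(X)

comments = ["bike lane on Elbe bridge", "cycle path lighting",
            "new playground swings", "playground near the river",
            "safer bike crossing", "benches at the playground"]

top = cluster(comments, k=2)
for cat in sorted(set(top)):
    members = [c for c, lab in zip(comments, top) if lab == cat]
    sub = cluster(members, k=min(2, len(members)))
    print(f"category {cat}: {len(members)} inputs")
    for s in sorted(set(sub)):
        print(f"  subcategory {s}: {sum(1 for x in sub if x == s)} inputs")
```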
Citizen OS
Similar considerations have surfaced for other open-source providers as they have experimented. For Sara Sinha from Citizen OS, consultation with AI experts prompted the concern that delegating decision-making to algorithms “could negatively impact people’s critical thinking skills. At the moment, AI applications are too unstable in terms of regulatory and ethical frameworks,” she added. Citizen OS continues to explore using AI applications to organise citizens’ inputs in large-scale participation projects, but Sinha adds, “this is contingent upon the population having a very high level of trust in AI, as well as a general consensus that all efforts have been made to ensure that AI models are bias-free.”
Liquid Democracy
Liquid Democracy, the organisation behind the open-source platform adhocracy+, developed the moderation tool KOSMO, based on its own language model created in cooperation with the German Institute for Participatory Design. In addition to flagging harmful content, it was trained to highlight constructive contributions. However, development was halted because it proved too difficult to obtain training data that met their requirements. While they are committed to the open-source approach, they did not want to resort to commercial models.
All in all, the hype around developments in LLMs has certainly sparked new development in the field of participation tools. However, with the exception of translation, hardly any of the LLM-based functionalities are past the beta-testing phase, let alone in widespread use.
For further exploration of how AI is enhancing deliberative processes, see How AI can make us as smart as fish (or bees). Are you experimenting with AI in deliberative tech? We’d love to hear from you! Comment here, or reach out to lenas@sfcg.org