Here's How AI Makes Comment Sections More Compassionate
New AI classifiers from Google Jigsaw's Perspective API score social media posts on how likely they are to foster compassion, curiosity, reasoning, and respect
There's a prevailing concern that AI might make social media more toxic, but what if AI tools could also transform these platforms into kinder, more understanding spaces? That’s exactly what Google's Jigsaw is aiming to achieve with its latest iteration of Perspective API, which not only filters out negativity but actively promotes positive interaction.
Since its inception, Perspective API has been integral in moderating online toxicity, utilized by over a thousand partners globally to process comments in 18 languages nearly two billion times a day. However, simply removing harmful content isn't enough to roll back the toxic polarization plaguing online discourse. The latest tools from Jigsaw evaluate comments for qualities like reasoning, personal storytelling, and curiosity—traits that contribute to more thoughtful and empathetic exchanges.
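As an illustration of how partners query these classifiers, the sketch below assembles a scoring request of the kind Perspective's AnalyzeComment endpoint accepts. This is a minimal sketch, not Jigsaw's own code: the endpoint URL follows the public Perspective documentation, but the experimental attribute names are assumptions and may differ from the production API.

```python
import json

# Hedged sketch: build the JSON body for a Perspective AnalyzeComment request.
# The experimental "bridging" attribute names below are assumptions.
ANALYZE_URL = "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze"

def build_request(comment_text: str, attributes: list[str]) -> dict:
    """Assemble a request body asking Perspective to score several attributes."""
    return {
        "comment": {"text": comment_text},
        "languages": ["en"],
        "requestedAttributes": {name: {} for name in attributes},
    }

body = build_request(
    "I changed my mind after hearing your story - thanks for sharing.",
    ["TOXICITY", "CURIOSITY_EXPERIMENTAL", "PERSONAL_STORY_EXPERIMENTAL"],
)
print(json.dumps(body, indent=2))
```

In practice this body would be POSTed to the endpoint with an API key, and the response would carry a probability-style score between 0 and 1 for each requested attribute.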
In a press release, Yasmin Green, the CEO of Jigsaw, emphasizes this shift towards nurturing better interactions: "We're moving beyond the traditional approach of moderating bad content to proactively encouraging good interactions."
“It's starting to become feasible to talk about something like building a classifier for compassion, or curiosity, or nuance,” says Jonathan Stray, a senior scientist at the Berkeley Center for Human-Compatible AI, speaking to TIME in a story about the classifiers. These AI tools leverage large language models (LLMs) that understand and process complex human qualities, fostering a more nurturing online environment.
In the Perspective API team's study, "Re-Ranking News Comments by Constructiveness and Curiosity Significantly Increases Perceived Respect, Trustworthiness, and Interest," re-ranked comment sections were tested with 460 English-speaking, US-based news readers. Based on tests with a political op-ed and a dining article, the findings support the potential of incorporating prosocial qualities of speech into ranking to promote healthier, less polarized dialogue in online comment sections.
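The re-ranking idea in the study can be sketched as sorting comments by a blend of classifier scores rather than by engagement. In this minimal sketch, the comments, their scores, and the equal weighting of the two attributes are all invented for illustration; the study's actual ranking formula is not public in this detail.

```python
# Illustrative re-ranking of comments by prosocial classifier scores.
# Comments, scores, and equal weighting are invented for this example.
comments = [
    {"text": "You're all idiots.", "constructiveness": 0.05, "curiosity": 0.02},
    {"text": "Has anyone found data on this?", "constructiveness": 0.55, "curiosity": 0.90},
    {"text": "Here's my experience with the policy...", "constructiveness": 0.80, "curiosity": 0.40},
]

def prosocial_score(comment: dict) -> float:
    """Average the bridging-attribute scores (equal weights, an assumption)."""
    return (comment["constructiveness"] + comment["curiosity"]) / 2

# Highest-scoring comments surface to the top of the section.
ranked = sorted(comments, key=prosocial_score, reverse=True)
for comment in ranked:
    print(f"{prosocial_score(comment):.2f}  {comment['text']}")
```

The design choice matters: the toxic comment is not deleted, it simply loses its prime placement, which is the shift from moderating bad content to elevating good content that Green describes.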
A key component of these AI classifiers is their use of "bridging attributes," designed to increase mutual understanding and trust across divides. The concept builds on research by Aviv Ovadya and Luke Thorburn on bridging systems: "systems which increase mutual understanding and trust across divides, creating space for productive conflict." Related work by academics such as Jonathan Stray, Lisa Schirch, and Natalie Stroud is demonstrating how such systems help people engage with and learn from differing viewpoints. This evidence reinforces what peacebuilders have long known: sharing personal stories and fostering qualities like curiosity and compassion helps weave trust across divides.
Initiatives such as this, particularly when they emerge from cross-fertilization and transparent experimentation, are encouraging to the Council on Tech and Social Cohesion. In advocating for design approaches that foster more prosocial technology, collaboration between technologists, academics, peacebuilders, and policy influencers is essential. An upcoming workshop co-hosted by the Council and the Plurality Institute in May will bring together technologists and academics to explore research on the use of prompts, coaches, nudges, tags, buttons, and AI mediators in comment sections.
Council Co-Chair Ravi Iyer, Managing Director of the Neely Center at USC's Marshall School, points to the potential of such initiatives to incentivize more constructive speech. Speaking to TIME, he said, "Elevating more desirable forms of online speech could create new incentives for more positive online—and possibly offline—social norms. If a platform amplifies toxic comments, then people get the signal they should do terrible things. If the top comments are informative and useful, then people follow the norm and create more informative and useful comments."
Although these advancements were developed within Google, there are no current plans to implement them for YouTube comments. Instead, Jigsaw is making the classifiers available to smaller and independent platforms. This strategic choice aims to build an evidence base demonstrating that these features are popular with users, encouraging broader adoption.
While there may be skepticism about why companies would shift away from engagement-driven models, there is a growing recognition that long-term engagement is often linked to higher quality content, not just content that garners the most clicks. This offers hope for a future where platforms can be commercially successful while fostering a more positive and constructive environment.
By focusing on the design of our technology, we can transform the internet into a space that not only communicates more but communicates better. This isn't just about combating negativity; it's about cultivating an online world where positive interactions are the norm, shaping a healthier, safer, and more compassionate internet.
Lena Slachmuijlder is Executive Director of Digital Peacebuilding at Search for Common Ground, and Co-Chairs the Council on Tech and Social Cohesion