Could we stop toxic content before it's ever posted?
AI keyboard prompts that signal potential toxicity to social media and messaging users BEFORE they post show promise, says new research.
Have you ever paused right before hitting "send" on a message, second-guessing if what you typed might be a tad too harsh? Imagine if your keyboard could give you a heads-up, suggesting, "Hey, that might come off stronger than you intend." This is precisely the experiment studied in ‘Key to Kindness: Reducing Toxicity In Online Discourse Through Proactive Content Moderation in a Mobile Keyboard,’ undertaken by six UK-based researchers: Mark Warner, Angelika Strohmayer, Matthew Higgs, Husnain Rafiq, Liying Yang, and Lynne Coventry.
This approach introduces a proactive twist to battling online toxicity: instead of cleaning up the mess after it's spilled, why not prevent the spill in the first place? The study tested a mobile keyboard embedded with AI that actively scans for toxicity in messages as they're being typed. When potentially harmful language is detected, the system intervenes with prompts, alerting users to the potential impact of their words on the quality of conversations and relationships. Grounded in an understanding of how conflicts escalate, the prompts test the theory that pausing, reflecting, and slowing down can stop that fiery reply before it leaps off our screens and into someone else's day.
Participants in the study encountered different design variations of the moderation prompts, including variations in timing (during typing vs. before sending), friction (the effort required to bypass the prompt), and the presentation of the AI model's feedback. Using the Perspective API to score the toxicity of a draft comment, the researchers tested the feature in both a social media setting and a private messaging setting.
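For the technically curious, here is a minimal sketch of what such a toxicity check might look like in Python. The Perspective API endpoint and response fields below are as Google documents them; the 0.8 threshold, the maybe_prompt helper, and the console nudge are illustrative assumptions, not the paper's actual keyboard implementation.

```python
import requests

# Google's Perspective API endpoint for scoring comment toxicity.
PERSPECTIVE_URL = "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze"

def toxicity_score(text: str, api_key: str) -> float:
    """Return the Perspective API's TOXICITY probability (0.0-1.0) for `text`."""
    payload = {
        "comment": {"text": text},
        "requestedAttributes": {"TOXICITY": {}},
        "doNotStore": True,  # don't retain unsent drafts server-side
    }
    resp = requests.post(PERSPECTIVE_URL, params={"key": api_key}, json=payload)
    resp.raise_for_status()
    return resp.json()["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

def maybe_prompt(draft: str, api_key: str, threshold: float = 0.8) -> bool:
    """Illustrative nudge: return True if the draft should trigger a prompt.

    The threshold and wording here are hypothetical, not from the study.
    """
    if toxicity_score(draft, api_key) >= threshold:
        print("This message might come across more strongly than you intend.")
        return True
    return False
```

In the study itself, the analogous check ran inside the keyboard, with the prompt's timing and friction varied across experimental conditions.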
Users liked it, but it was a bit creepy
Users generally appreciated the heads-up, especially when it occurred during typing and with minimal friction. One participant noted, "It made me think twice before sending something I might regret."
Not surprisingly, several users were uncomfortable with their unfinished messages being analyzed, citing privacy concerns. Others complained that the prompts disrupted the spontaneity of conversation. A few pointed out that the AI wrongly flagged a message as ‘toxic’ when the user was ‘just kidding’. Clearly, there’s a delicate balance between guiding constructive dialogue and maintaining the fluid, dynamic nature of human interaction.
The research paper is rich not only in its presentation of the experiment, but also in laying out the landscape of documented research on a wide range of similar efforts to prevent online toxicity. The citations in this research complement other ongoing initiatives to test prosocial design features, notably those of the ProSocial Design Network, which curates and researches evidence-based design solutions to bring out the best in human nature online.
Easing the load on content moderation
It’s crucial to pay attention to experiments with prosocial features such as this, as we consistently see that reactive approaches to content moderation (e.g., flagging, demotion, and takedowns) simply can't keep up with the deluge of digital discourse. The researchers highlight a critical shift in thinking: "Preventative measures can not only alleviate the burden on content moderation systems but also transform the very landscape of online interaction." It's a compelling argument for embedding empathy into technology, transforming our keyboards into tools for social cohesion.
What stands out in this innovative approach is its potential as an educational tool, subtly coaching us to communicate more mindfully. As one participant reflected, "It's like it's teaching me to be a better communicator without forcing it on me." This encapsulates the promise of AI in enhancing our digital lives—not as an overlord policing our every word but as a guide nudging us towards more harmonious interactions.
The research suggests a future where technology doesn't just connect us more efficiently but more kindly. The AI keyboard prompts aren't just about avoiding conflict; they're about elevating the quality of our conversations, ensuring that our digital spaces are conduits for understanding and respect.
Exploring prosocial design affordances doesn't just offer a solution to online toxicity; it reimagines the role of technology in our lives, pointing towards a digital ecosystem where empathy is coded into the very fabric of our interactions.
Lena Slachmuijlder is Executive Director, Digital Peacebuilding at Search for Common Ground and Co-Chair of the Council on Tech and Social Cohesion.