Forget 'Like'. Here's what happens when we can click 'Respect'
Experiments on the design of comment sections reveal how small changes can increase civility while retaining engagement
We've all seen toxic comment sections. They are full of hatred and vitriol, and can make us lose hope in humanity. But what if some simple changes, like replacing the Like button with a Respect button, could transform that toxicity?
This is one of several experiments led by Talia Stroud, Director of the Center for Media Engagement at the University of Texas at Austin, that tweak the design of online comment spaces and measure the resulting shifts in behavior. She presented the results earlier this year at a Symposium on Comment Section Research and Design, co-hosted by the Plurality Institute and the Council on Tech and Social Cohesion.
Hostile comment sections polarize us
Hostile comment sections affect us on multiple levels. There is the anguish and pain of hostile exchanges and hate speech, which in turn deter more thoughtful voices from participating. That leads to a third effect: a growing public perception of our own polarization, the idea that we are intractably doomed to division, sharing no common ground with the 'other'.
This isn’t just a social issue; it’s a reputational one. When newsrooms or other organizations host comment sections filled with incivility, it reflects poorly on them, potentially driving users away. Stroud’s work is focused on identifying solutions that can reverse these trends and create healthier, more welcoming digital spaces.
Experiment #1: The power of a 'Respect' button
Stroud tested a simple yet powerful idea: replacing the Like button with a Respect button in comment sections. The results were intriguing. With the Respect button in place, users were less likely to react in a partisan manner, and in several instances they were more willing to click on comments from a differing political perspective than they were with the Recommend or Like buttons. This led Stroud's team to conclude that news organizations interested in promoting less partisan behavior should consider using a Respect button rather than a Like or Recommend button in comment sections. This small change in the interface nudged users toward more thoughtful and less polarized interactions, a compelling example of how even minor tweaks in design can have outsized effects on user behavior.
Experiment #2: Remove everything vs reduce toxicity
One of Stroud's key studies involved a partnership with 24 Gannett newsrooms to explore how different commenting systems could affect user behavior. The newsrooms relied on Coral, a platform that gives moderators a suite of tools, supported by AI, for identifying disruptive comments and surfacing the best submissions. Here's more about why Coral was built and how it works.
The experiment tested four scenarios:
keeping Facebook comments,
switching to comments with the Coral platform,
limiting the Coral comment experience to subscribers, and
turning off comments entirely.
The results were telling. When newsrooms turned off comments, user engagement dropped—people spent less time on the site. Yet, most users didn’t even notice the absence of comments, and those who did often felt that the site experience had worsened. However, the most significant findings came from the newsrooms that switched to Coral. Here, the toxicity in comments decreased, engagement improved, and journalists found that they were better able to connect with their readers. The design of the commenting system, it turns out, has a substantial impact on the quality of discourse.
Experiment #3: When big ain’t better
Stroud also explored the challenge of balancing depth and scale in online communities. As digital spaces grow larger, they often lose the personal connections that make discussions meaningful. Stroud partnered with the online community of "The Weeds," a Vox podcast, to test this dynamic. With 21,000 members, the group struggled to maintain civil discourse. To address this, half of the members were placed in smaller Facebook Messenger groups, while the others remained in the larger group.
The findings were clear: smaller groups fostered more respectful and civil discussions. This suggests that large-scale online communities might benefit from being broken down into smaller, more manageable groups to encourage deeper, more meaningful interactions.
Experiment #4: The power of humility
In this experiment, Stroud examined the type of language used in comments expressing opposing views. She found that when these comments were framed with humility, they were received more positively. Humility here means showing openness to different perspectives, acknowledging the possibility of being wrong, and engaging constructively.
What’s particularly striking is that this effect held true regardless of the commenter’s political orientation. Whether the audience leaned left or right, humble comments were more likely to be viewed favorably. This insight is crucial for those looking to bridge divides in polarized online spaces.
Stroud’s research also highlighted the importance of respectful engagement from authoritative figures in online spaces. In a study involving a local US politics reporter, respectful interactions by the reporter led to a 15% reduction in incivility and a 15% increase in the use of evidence in comments. This effect was not observed when the interactions came from a generic station logo, emphasizing the power of personal engagement in shaping online discourse.
Emergent products to healthify our discourse
A growing number of products such as Coral are emerging to detoxify, or healthify, our comment sections, many of them relying on AI classifiers. They are aimed at companies as well as prominent public figures, celebrities, and anyone else who wants to improve the quality of online discourse.
TrollWall is one of these. Described as an app that "hides toxic comments on your social media 24/7," it uses human-guided AI classifiers to hide comments that meet toxicity criteria agreed upon with the client. The company is well aware that content moderation often rubs up against understandable concerns about censorship, and it offers several distinctions between the two, including: "Moderation seeks to safeguard a healthy online community. Censorship aims to control and often limit information for wider political or cultural dominance. Platforms practicing moderation typically outline clear content guidelines and inform users about the reasoning behind content removal. Censorship tends to operate without transparency, implemented without publicly stated rules."
Jigsaw's Perspective API is another initiative that helps mitigate toxicity and support healthy dialogue online. Jigsaw has also created Harassment Manager, an open-source codebase for a web application that lets users document and manage abuse targeted at them on social media, starting with Twitter.
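For readers curious about how such classifiers are applied in practice, here is a minimal sketch of scoring a comment's toxicity with the Perspective API and applying a publication threshold. The endpoint and response fields follow Perspective's public documentation, but the API key placeholder, the 0.8 threshold, and the publish-or-hold rule are illustrative assumptions, not the actual configuration of TrollWall, Coral, or any newsroom mentioned above.

```python
# Minimal sketch: threshold-based comment moderation using Jigsaw's
# Perspective API. The API key, threshold, and moderation rule below
# are hypothetical and for illustration only.
import requests

API_KEY = "YOUR_PERSPECTIVE_API_KEY"  # issued via the Google Cloud console
ANALYZE_URL = (
    "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze"
    f"?key={API_KEY}"
)
HIDE_THRESHOLD = 0.8  # hypothetical cut-off agreed with a client


def toxicity_score(text: str) -> float:
    """Return Perspective's TOXICITY probability (0.0-1.0) for a comment."""
    payload = {
        "comment": {"text": text},
        "requestedAttributes": {"TOXICITY": {}},
    }
    response = requests.post(ANALYZE_URL, json=payload, timeout=10)
    response.raise_for_status()
    scores = response.json()["attributeScores"]
    return scores["TOXICITY"]["summaryScore"]["value"]


def moderate(comments: list[str]) -> None:
    """Print a publish-or-hold decision for each comment."""
    for text in comments:
        score = toxicity_score(text)
        decision = "held for review" if score >= HIDE_THRESHOLD else "published"
        print(f"{score:.2f}  {decision}  {text}")


if __name__ == "__main__":
    moderate([
        "Thanks for laying out the evidence so clearly.",
        "Only an idiot would believe this garbage.",
    ])
```

As the article's descriptions of TrollWall and Coral suggest, real services layer human guidance and client-specific criteria on top of scores like these rather than hiding comments on a raw classifier output alone.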
Disqus offered a similar Toxicity Mod Filter to its clients several years ago.

Stroud's work offers a blueprint for improving the quality of online interactions through thoughtful design choices. Whether it's introducing a Respect button, fostering smaller, more intimate discussion groups, or encouraging humility in language, these interventions show that platform architecture can be a powerful tool for promoting prosocial behavior.
As evidence of pervasive online abuse of women and girls surfaces from across the globe, it's increasingly clear that the resulting 'chilling effect' is pushing them offline or into self-muted mode. By prioritizing respect and civility in design, tech platforms can reduce self-censorship and disengagement, particularly among women, girls, and others who find the toxicity of comment sections alienating.
We know that no tech design is neutral. As Lisa Schirch of the University of Notre Dame highlights, it's crucial to ask three questions: What does the design allow you to do, prevent you from doing, and persuade you to do? Many of the experiments presented here may not directly challenge the profit-driven incentives of social media companies, but prior research has pointed to long-term engagement growth when the quality of content rises. Maybe we as users need to advocate for these features more often, and more forcefully?
Lena Slachmuijlder is Co-Chair of the Council on Tech and Social Cohesion and Executive Director of Digital Peacebuilding at Search for Common Ground