Bluesky lets you choose your algorithm
But will user choices lead to a more prosocial social media experience?
Bluesky’s user base skyrocketed this month, topping 21 million. Many of these new users are fleeing X, frustrated that their "For You" feeds are flooded with Elon Musk posts even when they don't follow him. Bluesky's promise of customizable feeds has been a major draw.
Bluesky's approach to algorithmic control is rooted in decentralization, giving users more autonomy over their content. As stated in a 2023 Bluesky blog post, "Algorithmic choice empowers individuals, fosters transparency, and breaks away from the one-size-fits-all model of social media."
Bluesky offers various feed options, including:
A "friends" feed, showcasing popular content among friends
Algorithmically curated feeds for specific interests, such as science or Black voices
A "quiet posters" feed, highlighting infrequent posters who might otherwise be overlooked
But do these options make Bluesky a more prosocial experience?
Prosocial design is a “set of design patterns, features and processes which foster healthy interactions between individuals and which create the conditions for those interactions to thrive by ensuring individuals' safety, wellbeing and dignity,” according to the Prosocial Design Network.
Giving users control over their feeds is a step in this direction, but it's not a new concept. The Panoptykon Foundation's Safe by Default briefing advocates for human-centric recommender systems that prioritize conscious user choice and empowerment. They propose features like the following (a rough code sketch appears after the list):
Sliders for content preferences (e.g., informative vs. entertaining content),
A "hard stop" button to suppress unwanted content, and
Prompts for users to define their interests or preferences.
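To illustrate how these proposals could fit together, here is a minimal sketch of a preference slider plus a hard stop shaping a ranking score. The field names, the informativeness score, and the topic labels are illustrative assumptions, not Panoptykon's specification.

```python
# Illustrative sketch only -- not Panoptykon's specification.
from dataclasses import dataclass, field

@dataclass
class Preferences:
    # Slider in [0, 1]: 0 = purely entertaining, 1 = purely informative.
    informative_weight: float = 0.5
    # Topics the user has "hard-stopped"; matching posts are dropped outright.
    blocked_topics: set[str] = field(default_factory=set)

def rank_score(informativeness: float, topics: set[str],
               prefs: Preferences) -> float | None:
    """Return a ranking score, or None when a hard stop applies."""
    if topics & prefs.blocked_topics:
        return None  # hard stop: suppress entirely rather than down-rank
    w = prefs.informative_weight
    # Blend the two slider poles: informative vs. entertaining.
    return w * informativeness + (1 - w) * (1 - informativeness)
```

The key design choice here is that a hard stop removes content outright, while the slider merely re-weights it; Panoptykon's critique of Facebook (below) is precisely that its feedback tools behaved like weak sliders when users wanted a hard stop.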
Bluesky's harm reports spike
As Bluesky's user base grew, so did reports of harm. On November 15th, Bluesky Safety reported an all-time high of 42,000 reports in 24 hours. “We’re receiving about 3,000 reports/hour. To put that into context, in all of 2023, we received 360k reports,” the team posted.
The spike in reports is also a reminder that, even with self-curated feeds, Bluesky's negative-feedback signals need to work better than those of its competitors. The Panoptykon Foundation’s Algorithms of Trauma study found that Facebook’s explicit feedback tools, such as "See fewer posts like this," failed to significantly change the types of content shown to users. Despite one user flagging unwanted material over 100 times, the frequency of anxiety-inducing posts actually increased.
Sparkable: prosocial by design
For people ready to leave X but not sure Bluesky’s offer goes far enough, there’s Sparkable, a social media network deliberately designed to foster positive engagement across diverse groups, encouraging empathy and dialogue rather than polarization.
In a recent interview on the Technically Human podcast, founder Vardon Hamdiu explained that the design of Sparkable, which is not-for-profit, was prompted by a distinct question: “If we have this vision of a platform that brings people together, that exposes you to other viewpoints, but always in a respectful, civil way, how would we design that?”
Answering this question led him to anchor the user experience in bridging-based ranking, which he says shifts the focus from individual preferences to collective well-being. “Bridging-based ranking works because it highlights content that resonates with people who normally disagree.”
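As a minimal sketch of the intuition (not Sparkable's actual algorithm; the pre-computed perspective clusters and the min-based scoring are assumptions for illustration), a post can be ranked by its weakest approval across clusters of users who typically disagree, so only cross-divide resonance scores high:

```python
# Illustrative sketch only -- not Sparkable's actual algorithm.
def bridging_score(approvals_by_cluster: dict[str, int]) -> float:
    """Score a post by its weakest approval across perspective clusters.

    Assumes users were already grouped into clusters that tend to
    disagree (e.g., from past reaction patterns). Taking the minimum
    rewards cross-divide resonance: a post loved by one cluster but
    ignored by the others scores zero.
    """
    counts = list(approvals_by_cluster.values())
    if len(counts) < 2:
        return 0.0  # approval inside a single cluster doesn't "bridge"
    return float(min(counts))

# A post applauded on both sides outranks a one-sided hit:
bridging_score({"cluster_a": 120, "cluster_b": 85})  # -> 85.0
bridging_score({"cluster_a": 500})                   # -> 0.0
```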
Bluesky’s algorithmic marketplace and Sparkable’s bridging-based ranking could enable users to more easily see content that reflects common ground across divides.
It could also empower users to more confidently curate their feeds away from content they know is harming them online and fueling violence offline. For example, young people from Birmingham, England recently lamented how social media was aggravating real-world violence, calling for more controls and guardrails from the companies. They described how the humiliation of personal videos being shared was driving a desire for revenge, undermining offline efforts to break cycles of violence through mediation and dialogue.
What should we be asking for if we want our social media spaces to be more prosocial, incentivizing more empathic and collaborative relationships?
Do algorithms foster empathy or exacerbate division? While Bluesky decentralizes algorithmic power, Sparkable’s bridging-based ranking directly addresses how content influences interpersonal understanding. Algorithmic plurality may mitigate some harms, but it is not inherently prosocial. With more power to personalize feeds, users may self-select into insular bubbles, further fracturing the online public sphere.
How are marginalized voices protected? Without deliberate attention to those most at risk, designs may perpetuate harm, argues Afsaneh Rigot of Article 19 in her Design From the Margins approach. Designing with the margins in mind could benefit all users, Rigot argues.
What are the platform’s incentives? Many have decried profit-over-people models, pointing to how engagement-based ranking incentivizes harmful content at the expense of safety. Bluesky may not be running ads now, but some have criticized its terms of service for giving the platform a "perpetual" and "irrevocable" license to all user content.
Would it comply with the DSA?
Policy frameworks like the European Union’s Digital Services Act (DSA) are beginning to nudge platforms toward greater transparency and user empowerment. The DSA mandates risk assessments for recommendation systems and encourages user agency, setting the stage for broader adoption of features like algorithmic choice.
Bluesky’s marketplace of algorithms could plausibly be construed as part of the risk mitigation measures required under Article 35 of the DSA, offering a glimpse of what user empowerment could look like. It could also trigger a snowball effect among other platforms, notably Threads, which announced that users would be able to choose a ‘followers’ feed rather than the platform-curated ‘For You’ feed.
While algorithmic choice empowers users, it does not inherently make a platform prosocial. The broader critique of engagement-based systems highlights the need for deliberate, value-driven design. Transparency and the sharing of platform experiments with diverse algorithms can surface key insights. As the Prosocial Design Network writes, “even the subtlest design changes can influence human behavior. It’s not a leap to believe that, through better design, we could change the Internet to be a better place.”
What’s your take on the promise of algorithmic plurality? Have you tried out Bluesky’s marketplace? We’d love to hear from you in the comments!
Lena Slachmuijlder is Executive Director of Digital Peacebuilding at Search for Common Ground and Co-Chair of the Council on Tech and Social Cohesion.
"Bluesky lets you choose your algorithm: Will user choices lead to a more prosocial social media experience?"
The short answer is no. At least not a significantly more prosocial one. The fundamental problem with Twitter/X and Bluesky (and Threads/Mastodon/etc.) is not the algorithm or moderation. The core problem is the basic structure of these services: the combination of short public posts and replies, endless threading, anonymity, low barriers to entry and exit, and the ability to post instantly and continuously will inevitably lead to snark, tribalism, and incivility when discussing controversial issues.
And while all threaded discussion platforms and forums (including Facebook, Reddit, and even these Substack comments) will have this problem to some degree, when it comes to controversial issues, Twitter and its clones are almost perfectly designed to drive incivility. Which doesn't mean that they can't be guilty pleasures, but they will ever and always be hellscapes when it comes to politics.