What would you tell your algorithm (if it listened)?
With experiments underway and regulatory mandates emerging, users may get more say in their feeds.
Platforms have always watched what you click — but what if they asked what you wanted?
You might now have that chance.
Meta is testing a feature on Threads called Dear Algo, where users can post plain-language requests like “show me more about climate” or “less celebrity gossip.”
For years, recommender systems have optimized for engagement over user well-being. But that’s beginning to change. Platforms are experimenting with giving users more say in what they see. And regulators are starting to insist on it.
From settings to signals
YouTube has its own experiment underway: a “Your Custom Feed” prompt that lets users type in what they want more or less of. TikTok offers a Manage Topics section where users can dial content categories up or down with an interest slider, or reset their For You page entirely.
Elon Musk has said that by the end of the year, X users will be able to customize their timelines by asking X’s AI chatbot Grok for adjustments. Instagram, for its part, just announced that users can add and remove interest topics to influence what the algorithm serves in Reels, with plans to expand the feature to Explore.
Bluesky has gone furthest: it allows users to switch between feed algorithms entirely, choosing among a growing marketplace of independently developed “feed generators.”
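To make that concrete: a Bluesky feed generator is just a web service that returns a ranked “skeleton” of post URIs, which Bluesky’s AppView then hydrates into full posts for the client. Here is a minimal Python sketch of the core endpoint; the post URIs are placeholders, and a real generator would also verify the requester’s JWT and implement app.bsky.feed.describeFeedGenerator.

```python
# Minimal feed generator sketch: serve a ranked skeleton of post URIs.
# The URIs below are placeholders, and a production service would also
# authenticate requests and publish its feed record to the network.
from flask import Flask, jsonify, request

app = Flask(__name__)

# Hypothetical output of your own ranking logic: post AT-URIs, best first.
RANKED_POSTS = [
    "at://did:plc:alice/app.bsky.feed.post/abc123",  # placeholder URI
    "at://did:plc:bob/app.bsky.feed.post/def456",    # placeholder URI
]

@app.get("/xrpc/app.bsky.feed.getFeedSkeleton")
def get_feed_skeleton():
    limit = int(request.args.get("limit", 50))
    # Return references only; the AppView fills in authors, text, and media.
    return jsonify({"feed": [{"post": uri} for uri in RANKED_POSTS[:limit]]})
```

Because the interface is this small, anyone can stand up an alternative ranking, which is what makes a marketplace of feeds possible.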
That’s especially important because we’ve seen what happens when feeds don’t respond, or only pretend to. In the Panoptykon Foundation’s 2023 Algorithms of Trauma report, a user struggling with health anxiety was repeatedly shown distressing medical content on Facebook despite explicitly requesting “less of this” more than 100 times.
Studies have shown that algorithms optimized purely for engagement often promote content that is emotionally charged or polarizing, because outrage and controversy drive clicks — even if they undermine well-being, agency, or social cohesion.
Regulators are asking for it
Lawmakers around the world are beginning to see recommender systems for what they are: high-impact systems that influence public discourse, mental health, and democratic cohesion.
Under the EU’s Digital Services Act, Article 38 requires large platforms to offer users “at least one option” for a feed not based on profiling. Articles 25 and 27 further require clear explanations of how feed algorithms work — and accessible ways for users to modify them. A recent Dutch court ruling reinforced these obligations, requiring Meta to make non-profiling feed options easy to access and persistent — and calling the company’s practice of switching users back a “dark pattern” under the DSA.
In the United States, Minnesota has passed the Prohibiting Social Media Manipulation Act, which would require social platforms to provide accessible controls for users to signal content preferences and would limit algorithmic optimization that ignores those preferences.
The Knight-Georgetown Institute’s Better Feeds Model Bill goes even further. It proposes that:
“A covered online platform shall provide an accessible user interface that enables users to expressly and unambiguously communicate their preferences… and shall take all reasonable steps to ensure that the output… is consistent with those preferences.”
In other words: platforms shouldn’t just offer controls — they should be required to respect them. The KGI model law also recommends:
Good default design, optimizing for long-term user value,
Public holdout testing, so algorithm changes can be independently evaluated (sketched just after this list), and
Special protections for minors, including default non-profiling feeds and restrictions on exploitative design
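On the second of those points, a public holdout works like a permanent control group: a small, stable slice of users stays on a frozen baseline feed, so shifts in long-term metrics can be attributed to algorithm changes and checked by outside evaluators. A toy Python sketch of the core logic, with the 2% share and the retention metric purely illustrative:

```python
# Toy holdout sketch: deterministic bucketing keeps the same users in the
# frozen-baseline arm over time, so long-term comparisons stay valid.
# The 2% share and the retention metric are invented for illustration.
import hashlib
from statistics import mean

def in_holdout(user_id: str, holdout_share: float = 0.02) -> bool:
    """Assign a fixed ~2% of users to the baseline feed, stably across time."""
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 10_000
    return bucket < holdout_share * 10_000

def lift(treatment: list[float], holdout: list[float]) -> float:
    """Gap in a long-term metric (say, 90-day retention) between the arms."""
    return mean(treatment) - mean(holdout)
```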
The Council on Tech and Social Cohesion’s Prosocial Tech Design Regulation: a Practical Guide recommends policymakers treat recommender systems not as neutral infrastructure but as powerful shapers of civic life. It calls for user agency tools, transparent defaults and a shift toward long-term user and societal value — not just short-term engagement.
But will users bother?
Some sceptics argue that users won’t bother to adjust feed settings even if given the chance. But recent research challenges that assumption:
In CTRL-Rec, a 2025 study on natural-language control, users who typed prompts like “more local news” reported greater satisfaction and control than those given only menus or sliders. (Carroll et al., 2025)
A 2021 study found that combining explainability and user control measurably increased users’ feelings of competence, autonomy, and trust in feed systems, reinforcing the idea that agency builds credibility and satisfaction. (Tsai and Brusilovsky, 2021)
These studies collectively undercut the idea that users are apathetic. People may not dive into settings daily, but when given tools that are intuitive, they use them — and feel better about the platform when they do.
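How might a typed request like “more local news” actually steer a ranking? One plausible mechanism, sketched below in Python, is to embed the request and each candidate post and blend semantic similarity into the engagement score. This is a toy illustration, not CTRL-Rec’s published method: the embed function is a random stand-in for a real sentence-embedding model, and the 0.5 weight is invented.

```python
# Toy sketch: steer an engagement-ranked feed with a plain-language request.
# `embed` is a random stand-in for a real sentence-embedding model.
import numpy as np

def embed(text: str) -> np.ndarray:
    """Placeholder for a real embedding model (e.g. sentence-transformers)."""
    rng = np.random.default_rng(abs(hash(text)) % 2**32)
    vec = rng.normal(size=64)
    return vec / np.linalg.norm(vec)

def rerank(posts: list[str], base_scores: list[float],
           request_text: str, weight: float = 0.5) -> list[str]:
    """Blend engagement scores with similarity to the user's request."""
    req = embed(request_text)
    scored = [(base + weight * float(embed(post) @ req), post)
              for post, base in zip(posts, base_scores)]
    return [post for _, post in sorted(scored, reverse=True)]

feed = rerank(
    posts=["City council budget vote tonight", "Celebrity couple at gala"],
    base_scores=[0.4, 0.9],  # what engagement-only ranking would prefer
    request_text="more local news, less celebrity gossip",
)
```

The blend weight is the real design decision: too low and requests feel ignored, too high and the feed collapses into a single topic.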
Pinterest is asking users
Pinterest is one of the few platforms making its recommender experiments public — and building them around direct user feedback. Its new ‘Pinner Surveys’ ask users what they actually find helpful, beautiful, or inspiring. That data is then used to train recommendation models across its Home, Search, and Related Pins features.
“Surveys are a way for Pinners to tell us exactly what they think and for us to build that intentionality directly into our platform,” the Pinterest engineering team wrote. “Surveys provide an excellent avenue for de-biasing the system.”
This approach doesn’t assume that engagement equals satisfaction. It tries to test it — and document what people want in ways that are legible, not just measurable. As a founding signatory of the Inspired Internet Pledge, Pinterest has also committed to sharing what it learns — a transparency step that few platforms have matched.
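In training terms, the general idea looks something like the sketch below: treat survey answers as a second label alongside logged engagement, so the model stops equating “clicked” with “valued”. The field names and the 30% survey weight are invented for illustration; this is not Pinterest’s actual pipeline.

```python
# Toy sketch: blend explicit survey feedback into a training target so
# that engagement alone no longer defines success. All names invented.

def training_target(example: dict, survey_weight: float = 0.3) -> float:
    """Blend logged engagement with explicit survey feedback when present."""
    engaged = 1.0 if example["clicked"] or example["saved"] else 0.0
    rating = example.get("survey_rating")  # 1-5 answer to "was this inspiring?"
    if rating is None:
        return engaged                     # no survey answer: behavior only
    stated = (rating - 1) / 4              # map the 1-5 scale onto [0, 1]
    return (1 - survey_weight) * engaged + survey_weight * stated

# A post the user clicked but rated 1 out of 5 no longer looks like a pure win:
print(training_target({"clicked": True, "saved": False, "survey_rating": 1}))  # 0.7
```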
Friction as a feature, not a failure
Not everyone needs to become a feed customizer. That’s why defaults matter. Just as phones auto-update for safety, and many cars now won’t start without a seatbelt fastened, we may be approaching a moment where recommender systems are treated as civic infrastructure — not just code.
It’s a reminder that friction isn’t failure. It’s intention. A platform that checks in with users — even once a month — can build trust and deliver better outcomes. It doesn’t require everyone to opt in manually. It requires design that listens by default.
The KGI Better Feeds report puts it clearly:
“Users should not have to navigate a maze of obscure menus to access the most important decisions about their feeds. Good defaults, visible controls, and meaningful choices must be the new standard.”
When people feel that they’re being heard, they trust the system more. When trust grows, so does the sense that the platform is something worth shaping.
The shift underway isn’t just about personalization; it’s about participation. Not just a better feed, but a healthier digital experience.
Lena Slachmuijlder is Senior Advisor for digital peacebuilding at Search for Common Ground, a practitioner fellow at the USC Neely Center, and co-chair of the Council on Tech and Social Cohesion.

