Can AI break the engagement trap?
Integrity Institute Co-Founder Jeff Allen reflects on TrustCon 2025, the evolution of trust and safety, and why regulatory pressure—and transparency—still matter most.
“There are people inside these AI companies saying: ‘Let’s not measure success by time spent.’ That’s different. That’s hopeful.”
That’s how Jeff Allen, Co-Founder and Chief Research Officer of the Integrity Institute and a long-time leader in trust and safety, described one of his biggest takeaways from this year’s TrustCon, the central gathering for the trust and safety community, now in its fourth year. Themes this year included AI systems, child safety, and platform governance.
I sat down with Allen to unpack those themes, explore whether upstream, design-focused mitigation strategies are gaining traction, and hear what the Integrity Institute is focusing on next.
Rethinking AI success
Allen is cautiously optimistic about shifts in how AI teams are thinking about success.
“There was a real openness,” he said. “Especially in the way they were talking about measurement.” Rather than optimizing for engagement, long the default proxy for platform success, some AI teams are asking deeper questions: What do users want in the long run? How do we measure value, not just stickiness?
Allen noted that many in the space are admitting they don’t yet know how to measure AI success—but they recognize it shouldn’t be based on time spent or usage alone. That awareness, he said, marks progress.
He also pointed to growing interest in understanding first-order vs. second-order preferences: recognizing that what users want in the moment may not align with their long-term well-being.
“That’s a sophistication we didn’t really have early in the social media era,” Allen noted.
From moderation to product design
Much of today’s regulatory momentum centers on child safety, not only because the issue is urgent but because it commands political consensus.
“It’s one of the few places where companies and regulators align,” Allen said.
What’s new is growing interest in upstream product design, not just removing bad content. “Things like infinite scroll, autoplay, and engagement loops are finally being questioned,” he said.
This proactive approach marks a promising evolution, though Allen noted it is mostly limited to youth protections. “We haven’t seen much movement yet in applying those same insights to adults,” he said.
The regulatory push
“If you want to know what’s driving change,” Allen said, “it’s mostly regulation.”
He also acknowledged the role of consumer pushback—like the rise of Bluesky as an alternative to X—but said regulation has more systemic impact.
Still, he cautioned that most regulatory focus remains on transparency and reporting, not deeper interventions like changes to ranking systems.
“No one’s yet saying: ‘You have to change your recommender engine because it’s creating systemic risk.’ We’re not there,” Allen said.
A key barrier? A knowledge gap.
“There’s still a huge asymmetry. Platforms understand how these systems work. Regulators and auditors are trying to catch up.”
Despite challenges, Allen sees promise in the variety of regulatory models taking shape:
The EU’s Digital Services Act (DSA) mandates risk assessments but allows flexibility in implementation.
The UK’s Ofcom is setting clearer expectations for how those assessments should be done.
Australia’s eSafety Commissioner is using a Q&A model, asking companies direct questions and expecting public answers.
“I think it’s actually good that regulators are trying different things,” Allen said. “We don’t know yet what works best.”
DTSP Framework: promising
One framework that’s starting to shape the conversation is the DTSP Safe Framework, created by the Digital Trust and Safety Partnership and recently adopted as an international standard (ISO/IEC 25389).
Allen sees it as a useful tool.
“It gives structure. It gives language. It lets people say, ‘Here’s what we’re doing on trust and safety, and here’s how we assess it,’” he explained.
Importantly, it encourages proportional assessment—allowing companies to tailor evaluations based on risk, product scope, and user base.
But Allen also warns: “It’s only meaningful if it’s taken seriously.” Without transparency, such frameworks risk becoming internal paperwork instead of public accountability tools.
Transparency at the core
“We’ve entered a new era of partial transparency. The platforms and regulators now have more data than ever,” Allen said. “But civil society and researchers? We’re still on the outside.”
This partial transparency risks stalling progress. If only a few actors can audit systems or measure harm, accountability remains limited.
To push things forward, the Integrity Institute is working on a guide for researchers—designed to help academics and civil society actors make stronger, more precise Article 40 data access requests under the DSA.
“Right now, a lot of researchers don’t know how to ask for the data they need in the language of internal datasets,” Allen said. “So platforms can say, ‘That data doesn’t exist.’ We’re trying to bridge that gap.”
The guide will outline common datasets, connect them to social risk questions, and help researchers frame their requests in actionable terms.
Slow but steady progress
As we wrapped up, Allen reflected on the broader landscape.
“I think we’re in a phase of slow but steady movement,” he said. “More people are thinking about upstream design. More regulators are getting involved. The frameworks are better.”
But we’re still far from where we need to be.
“We don’t yet know what a responsibly designed ranking system looks like. Or how to prove harm. Or what structural changes are needed.”
Until then, Allen said, the work is to keep pushing: for clearer metrics, greater transparency, and stronger accountability.
“We may not have all the answers,” he concluded. “But at least now, more people are asking the right questions.”
Lena Slachmuijlder is Senior Advisor at Search for Common Ground and co-chairs the Council on Tech and Social Cohesion.

