When Polarization Pays
An investigation by Fundación Maldita.es reveals how creators are incentivized to inflame divisions for profit
Polarizing content pays.
Even without an ideological stake, creators are discovering that inflaming civic tensions is a fast route to platform monetization. Enforcement — both by platforms and under Europe’s Digital Services Act — is not slowing that dynamic.
A recent investigation by Fundación Maldita.es documents 550 TikTok accounts that posted more than 5,800 AI-generated protest videos across 18 countries, generating more than 89 million views. These were not coordinated geopolitical actors. They were individual creators simply optimizing for growth.
Here’s how one creator explained their strategy to the Maldita investigators:
“We get an idea of the trends from the news. We will create videos and march content focused on that country so that more people will see them.”
The formula was simple: produce emotionally charged protest content tied to trending political events; grow followers rapidly; surpass 10,000 followers to qualify for TikTok’s Creator Rewards Program; or sell the account once it becomes valuable.
The ideology of the content hardly mattered. Maldita found accounts posting content supporting opposing candidates, sometimes within hours. The investigation states that “the political cause or ideology is not decisive” and that accounts “jump between different topics depending on the political news cycle.”
These actors are “not necessarily ideologically motivated, they’re financially motivated,” according to Maldita Associate Director for Public Policy Carlos Hernández-Echevarría, speaking on the Tech Policy Press podcast about the investigation.
Manufacturing virality
Maldita describes the AI-generated videos as exploiting “emotional, polarizing, topical, and false content.” Here are two examples of captions on the posts:
“People of Iran raise their voices for freedom, justice, and human rights”
“A multitude of Venezuelans celebrate with tears and shouts of freedom”
These AI-generated protest scenes, presented as real events, amplified political tensions with imagery optimized for engagement.
Maldita reports that protest-related videos significantly outperformed non-political content on the same accounts. More than 60 of the identified accounts surpassed the 10,000-follower eligibility threshold associated with monetization.
The Maldita report also highlights a deeper risk for targeted influence, including during elections. “Accounts that only publish videos about a specific cause or political figure can, even unintentionally, segment their audience by political affiliation and then sell those accounts to those seeking that type of audience.”
Capturing TikTok’s rewards
TikTok’s Creator Rewards Program is available only in a limited set of countries: the United States, the United Kingdom, Germany, Japan, South Korea, France, Mexico, and Brazil.
According to Maldita, some creators described using VPNs or account configuration strategies so that TikTok believed they were located in one of these eligible regions in order to access monetization.
TikTok’s Community Guidelines (Integrity and Authenticity) prohibit “synthetic or manipulated media that misleads users by distorting the truth of real-world events and causes harm.” Its terms of service also make clear that users are not permitted to transfer or sell accounts or otherwise transfer their contractual rights.
“The policies are not the problem… the policies are good, it’s just that there is this massive hole in enforcement,” said Hernández-Echevarría.
The DSA’s monetization blind spot
Under the EU’s Digital Services Act, Very Large Online Platforms must assess and mitigate systemic risks to civic discourse.
The European Commission has already imposed significant penalties on X for transparency failures in advertising repositories. But advertising transparency addresses paid political ads. The Maldita case concerns a different layer of financial incentives: creator monetization.
The DSA requires platforms to prevent risks to “civic discourse and electoral processes.” Allowing profiles to profit from fake protest videos runs counter to that obligation.
This is precisely the concern raised by What to Fix. In its analysis of DSA risk assessments, What to Fix argues that platforms have largely failed to address risks stemming from “the design, functioning and use of their monetization services.” Revenue sharing, creator rewards, and amplification-linked payouts remain underexamined in systemic risk reporting.
What to Fix recommends adding dedicated monetization sections to platform transparency centers, establishing a monetization publisher archive, and reporting enforcement data specifically for accounts that platforms themselves monetize.
Polarization footprint
The recent polarization footprint implemented in Kenya by the digital peacebuilding organization Build Up quantifies exposure to affective, identity-based hostility in social feeds. It does so by scoring three signals: attitudes (identity attacks or “othering” language), norms (whether such language is challenged or normalized), and interactions (how fragmented or siloed content communities are), producing a composite polarization score.
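The composite score described above could be sketched as a weighted combination of the three signals. Everything in the snippet below, the signal scales, the weights, and the 0–100 output range, is an illustrative assumption for clarity, not Build Up's actual methodology:

```python
# Hypothetical sketch of a composite polarization score in the spirit of
# Build Up's footprint. Signal encodings, weights, and the 0-100 scale are
# illustrative assumptions, not the organization's published formula.

from dataclasses import dataclass


@dataclass
class FeedSample:
    attitudes: float     # 0-1: prevalence of identity attacks / "othering" language
    norms: float         # 0-1: degree to which hostile language goes unchallenged
    interactions: float  # 0-1: how fragmented or siloed content communities are


def polarization_score(sample: FeedSample,
                       weights: tuple[float, float, float] = (0.4, 0.3, 0.3)) -> float:
    """Weighted average of the three signals, scaled to 0-100."""
    signals = (sample.attitudes, sample.norms, sample.interactions)
    return 100 * sum(w * s for w, s in zip(weights, signals))


# Example: a feed with frequent othering language that is rarely challenged
sample = FeedSample(attitudes=0.8, norms=0.6, interactions=0.5)
print(round(polarization_score(sample), 1))  # -> 65.0
```

A single composite number like this is what lets the footprint compare exposure across feeds or over time, whatever the exact weighting scheme used in practice.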
Whereas the footprint measures what users experience in their feeds, Maldita’s investigation points to the incentives that produce such polarizing content in the first place. This does not imply that all polarization is engineered for profit. It does show a repeatable model in which creators discover that divisive civic content performs — and pays.
When polarization pays, efforts to build trust and collaboration must extend to the incentive structures that determine who gets rewarded for producing it.
Lena Slachmuijlder is Senior Advisor for digital peacebuilding at Search for Common Ground, a practitioner fellow at the USC Neely Center, and co-chair of the Council on Tech and Social Cohesion.

