AI can reduce anti-immigrant prejudice
Researchers show how chatting with an LLM reduced prejudice, but raise questions about AI's manipulative power
Originally posted on the Better Conflict Bulletin Substack, by Eve Sneider and Jonathan Stray.
One of the more effective strategies for reducing prejudice is “deep canvassing,” or fostering thoughtful, empathetic conversations that encourage participants to share something of themselves and to consider the personal stories of their interlocutors. But this approach is time-intensive and doesn’t scale well. So a group of researchers recently tested whether AI could be employed to perform deep canvassing, and found that it can.
The researchers asked over a thousand diverse participants to have a dialogue with an LLM prompted to conduct deep-canvassing-style, narrative-driven conversations about unauthorized immigrants and support for immigration policy reform. For example, the model prompted participants to articulate their stances on immigration and to share the observations and experiences that led them to hold their beliefs. It also shared relevant stories of its own to drive the conversation.
You can browse the hundreds of conversations here, including exchanges where the AI shares a personal story based on the narratives it has been prompted to use.
Researchers found that right after the conversations, anti-immigrant prejudice fell and pro-immigrant policy support rose. Those attitude shifts held in a follow-up assessment five weeks later, which happened to be during the final month of the 2024 United States election cycle.
This demonstrates the persuasive potential of AI. Employing humans to conduct deep canvassing is comparatively time- and energy-intensive, so AI could offer a way to scale these prejudice-reduction techniques.
Nonetheless, we should expect a fight over the use of this kind of technology.
While treating individual immigrants as people rather than stereotypes is hard to oppose, the observed effects on immigration policy may raise concerns about political influence.
The AI was provided with “three exemplar narratives illustrating immigrant hardships.” If it were instead provided with negative personal stories about people’s experiences with immigrants (draining local government resources, housing shortages, crime, etc.), would the persuasive effects be similarly large in the other direction?
The study may also be scrutinized for how people’s views were classified as “pro-immigrant.” Many people do not agree with the policy positions counted as “pro-immigrant” in this study (bear in mind that items 2 and 3 are reverse-coded, so disagreement with them counts as pro-immigrant):
1. “The government should provide legal aid to undocumented immigrants who cannot afford an attorney for deportation proceedings.”
2. “Local police should automatically turn undocumented immigrants over to federal immigration officers.”
3. “The federal government should work to identify and deport all undocumented immigrants, including in the workplace.”
4. “The federal government should grant legal status to people brought to the U.S. illegally as children…”
5. “Undocumented immigrants should be able to become citizens after five years of work and tax-paying.”
So in the end, this will probably be a fight over which kinds of personal stories are considered representative (e.g. crime rates among undocumented immigrants are lower than those of citizens, but not zero) and whose rights and suffering are considered important. This sort of persuasive technology will never be limited to one side, and each side will say the other’s use is misleading, unethical, or unfair.
And yet… increasing empathy and understanding of the other hardly seems like a bad thing. Can we reliably draw a line between deeper mutual understanding and machine-driven political persuasion?