AI can change belief in conspiracy theories, study finds

Research challenges conventional wisdom that evidence and arguments rarely help to change believers’ minds
Whether it is the mistaken idea that the moon landings never happened or the false claim that Covid jabs contain microchips, conspiracy theories abound, sometimes with dangerous consequences.
Now researchers have found that such beliefs can be altered by a chat with artificial intelligence (AI).
“Conventional wisdom will tell you that people who believe in conspiracy theories rarely, if ever, change their mind, especially according to evidence,” said Dr Thomas Costello, a co-author of the study from American University.
That, he added, is thought to be down to people adopting such beliefs to meet various needs – such as a desire for control. However, the new study offers a different take.

“Our findings fundamentally challenge the view that evidence and arguments are of little use once someone has ‘gone down the rabbit hole’ and come to believe a conspiracy theory,” the team wrote.
Crucially, the researchers said, the approach relies on an AI system that can draw on a vast array of information to produce conversations that encourage critical thinking and provide bespoke, fact-based counterarguments.
“The AI knew in advance what the person believed and, because of that, it was able to tailor its persuasion to their precise belief system,” said Costello.
Writing in the journal Science, Costello and colleagues reported how they carried out a series of experiments involving 2,190 participants with a belief in conspiracy theories.
While the experiments varied slightly, all participants were asked to describe a particular conspiracy theory they believed and the evidence they thought supported it. This was then fed into an AI system called “DebunkBot”.
Participants were also asked to rate on a 100-point scale how true they thought the conspiracy theory was.
They then held a three-round back-and-forth conversation with the AI system – which they knew to be an AI – about either their conspiracy theory or a non-conspiracy topic. Afterwards, participants once more rated how true they thought their conspiracy theory was.
The results revealed that those who discussed non-conspiracy topics only slightly lowered their “truth” rating afterwards. However, those who discussed their conspiracy theory with the AI showed, on average, a 20% drop in their belief that it was true.
The team said the effects appeared to hold for at least two months, while the approach worked for almost all types of conspiracy theory – although not those that were true.
The researchers added that the size of the effect depended on factors including how important the belief was to the participant and their trust in AI.
“About one in four people who began the experiment believing a conspiracy theory came out the other end without that belief,” said Costello.
“In most cases, the AI can only chip away – making people a bit more sceptical and uncertain – but a select few were disabused of their conspiracy entirely.”
The researchers added that reducing belief in one conspiracy theory appeared to reduce participants’ belief in other such ideas, at least to a small degree. They also suggested the approach could have real-world applications – for example, AI could reply to posts relating to conspiracy theories on social media.
Prof Sander van der Linden of the University of Cambridge, who was not involved in the work, questioned whether people would engage with such AI voluntarily in the real world.
He also said it was unclear whether similar results would be found if participants had chatted with an anonymous human, and noted open questions about how the AI was convincing conspiracy believers, given that the system also uses strategies such as empathy and affirmation.
But, he added: “Overall, it’s a really novel and potentially important finding and a nice illustration of how AI can be leveraged to fight misinformation.”
Source: The Guardian
Sep 15, 2024 11:20