Fact-Checkers Are Scrambling to Fight Disinformation With AI

Regional elections in Spain are almost four months away, but Irene Larraz and her team at Newtral are already bracing for impact. Each morning, half of Larraz’s team at the Madrid-based media outlet schedules political speeches and debates in preparation for fact-checking politicians’ statements. The other half debunks disinformation, scouring the internet for viral untruths and working to infiltrate groups that spread lies. Once the May elections wrap up, national elections will need to be called before the end of the year, which will likely bring a fresh spate of online untruths. “It’s going to be pretty tough,” says Larraz. “We’re already preparing.”
The proliferation of misinformation and propaganda online means an uphill battle for fact-checkers worldwide who need to sift through and verify massive amounts of information in complex or fast-paced situations such as the Russian invasion of Ukraine, the Covid-19 pandemic, or election campaigns. This task has become even more difficult with the advent of chatbots that use large language models like OpenAI’s ChatGPT, which can produce natural-sounding text at the click of a button, essentially automating the production of misinformation.
Faced with this asymmetry, fact-checking organizations must develop their own AI-driven tools to automate and accelerate their work. It’s far from a complete solution, but fact-checkers are hoping these new tools will at least prevent the rift between them and their opponents from widening too quickly at a moment when social media companies are scaling back their own moderation activities.
“The race between fact-checkers and those they check is unequal,” says Tim Gordon, co-founder of Best Practice AI, an artificial intelligence strategy and governance consultancy, and trustee of a UK fact-checking charity.
“Fact-checkers are often tiny organizations compared to those who produce disinformation,” says Gordon. “And the scale of what Generative AI can produce and the pace at which it can do it means this race is only going to get tougher.”
Newtral began developing its multilingual AI language model, ClaimHunter, in 2020, funded by profits from its TV division, which produces a politician fact-checking show and documentaries for HBO and Netflix.
Using Google’s BERT language model, ClaimHunter’s developers trained the system on 10,000 statements to recognize sentences that appear to contain assertions of fact, such as dates, numbers, or comparisons. “We taught the machine to play the role of fact-checker,” says Rubén Míguez, Newtral’s chief technology officer.
Identifying claims made by political figures and social media accounts that need to be verified is an arduous task. ClaimHunter automatically detects political claims on Twitter, while another application transcribes politicians’ video and audio appearances into text. Both identify and highlight statements that contain a publicly relevant claim that can be proven or disproved – excluding ambiguous statements, questions, and opinions – and flag them to Newtral’s fact-checkers for review.
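The core of this pipeline is a binary text classifier: given a sentence, decide whether it contains a checkable factual claim or not. Newtral’s actual system is a fine-tuned BERT model trained on its 10,000 labeled statements; as a minimal illustration of the same task, the sketch below uses a simple TF-IDF plus logistic-regression stand-in on invented toy data (the sentences and labels are hypothetical, not from Newtral’s dataset).

```python
# Simplified sketch of a "check-worthiness" classifier in the spirit of
# ClaimHunter. The real system fine-tunes BERT on ~10,000 labeled
# statements; this stand-in uses TF-IDF + logistic regression on toy
# data to illustrate the binary task: factual claim vs. opinion/question.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data (invented): 1 = contains a checkable claim, 0 = not.
sentences = [
    "Unemployment fell to 12.9 percent in 2022.",            # numbers/dates
    "The budget grew by 4 billion euros compared to 2021.",  # comparison
    "Spain has 47 million inhabitants.",
    "Exports doubled between 2015 and 2020.",
    "I think the government is doing a great job.",          # opinion
    "What will happen after the elections?",                 # question
    "We must work together for a better future.",            # rhetoric
    "This is the best country in the world.",                # opinion
]
labels = [1, 1, 1, 1, 0, 0, 0, 0]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(sentences, labels)

# New statements are scored and the likely claims flagged for human review.
candidates = [
    "Inflation reached 8.3 percent last March.",
    "Honestly, I feel very optimistic about the future.",
]
for text, label in zip(candidates, model.predict(candidates)):
    print(f"{'FLAG FOR REVIEW' if label else 'skip'}: {text}")
```

In production, the classifier only triages: every flagged sentence still goes to a human fact-checker, which is also how the model’s mistakes get caught and corrected.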
The system isn’t perfect and occasionally marks opinion as fact, but the mistakes its reviewers flag are fed back to continually retrain the algorithm. It has cut the time it takes to identify statements worth checking by 70 to 80 percent, Míguez says.
https://www.wired.com/story/fact-checkers-ai-chatgpt-misinformation/