A new report highlights the impact of AI-generated content on election narratives.
- The Alan Turing Institute conducted a year-long study on AI’s effect on elections globally.
- Deceptive AI content spread conspiracy theories, but there is no evidence it altered election outcomes.
- The report calls for improved measures to combat AI-driven disinformation.
- Researchers emphasise the importance of accessing social media data for better threat monitoring.
A comprehensive report by the Alan Turing Institute details the influence of AI-generated content during a year marked by numerous elections worldwide. The study, conducted over twelve months, aimed to understand how generative AI tools have affected democratic processes globally.
The Centre for Emerging Technology and Security (CETaS) reported that while AI-driven content did not conclusively alter the outcomes of significant elections, including the recent US presidential election, concerns persist. The growing presence of AI threats, and the hype surrounding them, is perceived to be undermining trust in the information landscape and perpetuating harmful narratives.
The researchers observed various examples of viral AI disinformation, highlighting issues such as AI bot farms simulating voter behaviour and propagating false endorsements from celebrities. These instances underscore the potential risks that AI technologies pose in political contexts.
As a response, the report suggests several measures. These include enhancing barriers to the creation of disinformation, advancing technologies to better detect deepfake content, providing clearer guidance for media reporting on major incidents, and empowering society’s ability to uncover and combat disinformation effectively.
Sam Stockwell, the lead author, stated, “More than 2 billion people participated in elections this year, offering insight into the AI-enabled threats we encounter and presenting a crucial chance to protect upcoming elections.” Stockwell also suggested that while there is no evidence AI directly changed election outcomes, vigilance remains essential, and access to social media data is crucial for researchers to properly evaluate and mitigate the most severe threats to voters.
Moreover, the report notes the difficulty of accurately assessing AI’s impact on recent elections. Although there is no evidence that AI swayed voters in the UK general election, apprehensions about AI-powered misinformation linger.
Some AI labs are taking preventative steps by incorporating safeguards into their tools to avoid non-consensual mimicry of public figures. However, concerns have been raised about startups such as Haiper, a London-based generative AI company whose safeguards reportedly may not be stringent enough, potentially increasing the risk of harmful content creation.
Ongoing vigilance and proactive measures are essential to mitigate AI-related threats and ensure electoral integrity.