Artificial intelligence (AI) has become an increasingly prominent tool in political campaigning, presenting both perils and potential. Computer engineers and tech-inclined political scientists have long warned that AI could be misused to create and spread misleading content, manipulate voters, and even sway elections. As AI capabilities advance, including the creation of synthetic media or ‘deepfakes’, concerns are growing about the dissemination of fake news, the impersonation of candidates, and the erosion of trust in democratic processes.
The Rapid Evolution of AI in Political Campaigns
The rapid advancement of generative AI tools now enables the creation of cloned human voices and hyper-realistic images, videos, and audio. These tools can be used for seemingly legitimate purposes, such as creating campaign images or videos; however, even such uses have recently raised alarm because of the risks they pose to the political process.
Of even greater concern, when combined with social media algorithms, AI technology now has the potential to spread fake, digitally created content rapidly and to highly specific audiences. This raises concerns about the manipulation of campaign messaging and the potential for campaigns to employ deceptive tactics on an unprecedented scale.
Alarming Scenarios and Implications
AI experts envision alarming scenarios in which generative AI is used to confuse voters, slander candidates, and incite violence. These scenarios include automated robocall messages instructing voters to cast ballots on the wrong date, audio recordings of candidates confessing to crimes or expressing racist views, and video footage of candidates delivering speeches they never gave. Furthermore, the creation of fake images resembling local news reports can spread misinformation, falsely claiming a candidate dropped out of the race. These scenarios highlight the potential dangers associated with AI in political campaigns.
Perspectives on AI in Political Campaigning
Conservative and progressive politicians generally appear to hold different views on AI's role in political campaigning. Some conservatives have argued for the protection of individual liberties and the preservation of freedom of speech, expressing concerns about potential restrictions on campaign tactics. Progressives, on the other hand, have so far prioritized the prevention of misinformation and the safeguarding of democratic processes, calling for stricter regulations to combat the misuse of AI. Even so, considerable common ground likely exists across the political spectrum, particularly on the most concerning issues: the deceptive, fake, and misleading use of AI by actors outside the political system.
Current Examples
The use of AI-generated content in political campaigning is already evident. The Republican National Committee released an AI-generated dystopian campaign ad in April. The online ad opens with a strange, slightly warped image of Biden and the text “What if the weakest president we’ve ever had was re-elected?” A series of AI-generated images then follows, depicting apocalyptic scenes of boarded-up storefronts, soldiers patrolling US streets, and mass immigration.
Former President Donald Trump also allegedly shared AI-generated content on his Truth Social platform in the form of a manipulated video of his CNN town hall. The video, created using an AI voice-cloning tool, distorted host Anderson Cooper's reaction.
Democratic candidates have also used AI in the past to create campaign advertisements. In 2020, Pete Buttigieg's campaign used AI to create a personalized video for each voter in Iowa. The video used data about the voter's interests and demographics to create a message that was tailored to them.
In 2018, Beto O'Rourke's campaign used AI to create a chatbot that could answer questions from voters. The chatbot was able to access a vast amount of information about O'Rourke's policies and positions, and it could provide voters with answers to their questions in a matter of seconds.
Concerns for the Future
The above examples have sparked greater concerns over the future of political campaigning and the current lack of regulation in the area. The following examples highlight some potential ways in which AI could foreseeably be weaponized in political campaigns:
Automated robocall messages:
AI-generated voice cloning technology could be used to create robocall messages in a candidate's voice, instructing voters to cast their ballots on the wrong date. This could lead to confusion and potential voter disenfranchisement.
Fake confessions or expressions of views:
AI could be used to create fake audio recordings of a candidate supposedly confessing to a crime or expressing racist views. This could be highly damaging to a candidate's reputation and influence public opinion based on false information.
Manipulated video footage:
AI-generated video footage, or ‘deepfakes’, could show a candidate giving a speech or interview they never actually gave. Such footage can be used to spread false information, misrepresent a candidate's positions, and manipulate public perception.
Fake news reports:
AI can be utilized to create fake images resembling local news reports, falsely claiming that a candidate has dropped out of the race. This can lead to confusion among voters and impact the perception of a candidate's viability.
Impersonation of influential individuals:
The potential for political parties, corporations, or cybercriminals to impersonate influential individuals using AI raises concerns about the authenticity of messages delivered by these figures. This could result in the dissemination of false information and the erosion of trust in public figures.
All of these examples would create misinformation, sow confusion, and manipulate public opinion. The concern arises from AI's ability to create convincing content of this nature that can be disseminated rapidly, at large scale, and targeted at specific audiences. This challenges the integrity of democratic processes, as voters may be exposed to false information that affects their decision-making and their trust in the electoral system.
International Manipulation
Even deeper concerns have been expressed over the scenario of foreign adversaries using AI and synthetic media to erode trust in the US democratic process. The potential for international entities, terrorist organizations, or even nation-states to impersonate political figures and sow misinformation further complicates the issue, and could evolve into an entirely new form of cyber warfare.
Addressing the Challenges and Solutions
In response to the RNC's AI ad, Rep. Yvette Clarke (D-NY) introduced a bill that would mandate disclosure of AI-generated content in political ads. The bill, which has been referred to the House Committee on Energy and Commerce, is still in its early stages. The episode highlights that addressing the challenges posed by AI in political campaigning requires proactive measures. Some states have also offered their own proposals for addressing concerns about ‘deepfakes’.
In general, more safeguards and clarity will be needed to prevent the worst-case scenarios from coming true. Keeping pace with technological advances will be necessary to stay ahead of nefarious actors in what amounts to an AI arms race. Proposals such as watermarking synthetic images are being considered, and other measures, such as setting guardrails around AI technology and increasing public awareness, are crucial steps to mitigate the disruptive effects of AI-driven misinformation and manipulation.
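Disclosure-style labeling is the simplest form such a watermarking proposal could take. The sketch below is purely illustrative, assuming Python with the Pillow imaging library: it stamps a visible "AI-generated" banner onto an image. The function name and file paths are hypothetical, and most serious watermarking proposals rely on invisible, machine-readable markers embedded in the file rather than a visible overlay like this one.

```python
# Minimal illustrative sketch: overlay a visible "AI-generated" disclosure
# banner on an image using Pillow. This is NOT any official watermarking
# standard; real proposals generally embed invisible, verifiable metadata.
from PIL import Image, ImageDraw

def label_ai_image(input_path: str, output_path: str) -> None:
    """Stamp a semi-transparent 'AI-GENERATED IMAGE' banner onto an image."""
    image = Image.open(input_path).convert("RGBA")
    overlay = Image.new("RGBA", image.size, (0, 0, 0, 0))
    draw = ImageDraw.Draw(overlay)

    # Draw a semi-transparent black banner along the bottom edge.
    banner_height = max(24, image.height // 12)
    draw.rectangle(
        [(0, image.height - banner_height), (image.width, image.height)],
        fill=(0, 0, 0, 160),
    )

    # Write the disclosure text inside the banner (default Pillow font).
    draw.text(
        (10, image.height - banner_height + banner_height // 4),
        "AI-GENERATED IMAGE",
        fill=(255, 255, 255, 255),
    )

    # Blend the banner onto the original image and save the result.
    Image.alpha_composite(image, overlay).convert("RGB").save(output_path)

# Hypothetical usage:
# label_ai_image("campaign_ad.png", "campaign_ad_labeled.png")
```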
Conclusion
The use of AI in political campaigning presents both perils and potential. As AI capabilities evolve, the potential for the creation and dissemination of fake and misleading content increases. It is essential to acknowledge the concerns surrounding AI-generated ‘deepfakes’, synthetic media, and their impact on voter trust and democratic processes. By considering the perspectives of a diverse range of Americans, implementing effective regulations, and fostering public awareness of the dangers, we can better navigate the challenges posed by AI and work towards ensuring the integrity of political campaigns and elections.
Have Your Say, Secure Your Future
The Voices of America Scholarship is now inviting submissions on 10 prompts. Just answer 3 prompts to have a chance of winning a share of the scholarship funds. See our social media channels below or this blog for more information on the questions and how to win.
(Remember to quote the question at the start so people know the context, for example: "This is my BattlePACs submission for the prompt of …")
Social Platforms Accepted: Instagram or TikTok
Hashtags: #BattlePACs #BattlePACsScholarship
Tag: IG - @battlepacsofficial or TikTok - @battlepacs
Follow: BattlePACs socials below