It’s getting harder and harder to separate truth from imagery generated by artificial intelligence. That may prove to be a big challenge in the coming election cycles.
The Federal Election Commission is taking up the matter of the use of artificial intelligence in political campaigns at its regular meeting on June 22.
The commission received a petition for rulemaking from Public Citizen, a non-profit consumer advocacy organization, asking the FEC to apply its existing rules on fraudulent misrepresentation to deceptive artificial intelligence-generated campaign advertisements.
“The extraordinary advances in ‘Artificial Intelligence’ (AI) now provide political operatives with the means to produce campaign ads with computer-generated fake images of candidates that appear real-life to portray fraudulent misrepresentation of those candidates. Public Citizen requests that the Federal Election Commission clarify when and how 52 U.S.C. §30124 (‘Fraudulent misrepresentation of campaign authority’) applies to deliberately deceptive AI campaign ads,” Public Citizen wrote.
The FEC has drafted a public comment period, seeking input on whether the commission should initiate a full rulemaking on the proposal. The notice will be published in the Federal Register, allowing the public to weigh in and inform the process. It is almost certain that some of that input will itself be computer-generated, as artificial intelligence is starting to show up in public comment processes all over the country.
The petitioners argue that while current technology allows some viewers to spot deepfakes created with AI, advancements may make it increasingly difficult for the average person to distinguish deepfake media from authentic content.
“Deepfake technology, driven by generative artificial intelligence (AI), has seen remarkable advancements, raising concerns about its potential for political deception. With each passing day, new and more convincing deepfake audio and video clips are being disseminated, blurring the lines between reality and fiction. Recent examples include a fake audio recording of President Biden, a video featuring the likeness of actor Morgan Freeman, and an audio clip of actress Emma Watson reading Mein Kampf,” Public Citizen said.
While careful examination can sometimes reveal flaws in deepfakes, the quality of these fabricated media pieces is increasingly impressive, capable of fooling even discerning listeners and viewers. This raises the question of whether even digital technology experts can reliably detect and expose falsified creations, the group said.
Imagine, for instance, a high-quality deepfake video that goes viral just before an election, without voters having the time or bandwidth to know if it is real.
“The implications of this technological evolution are far-reaching, particularly in the realm of politics. Deepfake videos and audio clips could be exploited by political actors to deceive voters, transcending the boundaries of First Amendment protections that safeguard political expression, opinion, and satire. Political opponents may employ AI technology to craft videos purportedly showing their rivals making offensive statements or engaging in corrupt activities. These manipulated media pieces would not merely characterize opponents but deceitfully convey that they genuinely uttered or performed the depicted actions, despite the falsehood,” the group’s president Robert Weismann said.
“In view of the novelty of deepfake technology and the speed with which it is improving, Public Citizen encourages the Commission to specify in regulation or guidance that if candidates or their agents fraudulently misrepresent other candidates or political parties through deliberately false AI-generated content in campaign ads, that the restrictions and penalties of 52 U.S.C. §30124 are applicable,” Weismann said.
