The Biden administration has no firm plans to alert the public about deepfakes and other disinformation during the 2024 election unless it is clearly coming from a foreign actor and poses a sufficiently serious threat, current and former officials said.
Cyber experts inside and outside the government expect an onslaught of disinformation and deepfakes during this year’s election campaign, while FBI and Department of Homeland Security officials worry that intervening would invite accusations that they are trying to rig the election in President Joe Biden’s favor.
Lawmakers from both parties have called on the Biden administration to take a tougher stance.
“I’m worried that we’re going to be so concerned about appearing partisan that we won’t take the action that’s necessary,” Sen. Angus King, I-Maine, an independent who caucuses with the Democrats, told cybersecurity and intelligence officials at a hearing last month.
Sen. Marco Rubio, R-Fla., asked how the government would respond to deepfake videos. “If this happens, who responds? Have we thought about the process for what to do when these scenarios happen?” he asked. “‘I want you to know that the video isn’t real.’ Who’s going to be responsible for that?”
A senior U.S. official familiar with the government’s deliberations said federal law enforcement agencies, particularly the FBI, have been reluctant to publicly call out domestically generated disinformation.
The FBI will investigate possible election law violations but is not ready to make public statements about disinformation and deepfakes created by Americans, officials said.
“The FBI is not in the business of finding the truth,” the official said.
The official said it became clear during an interagency meeting on the issue that the Biden administration has no concrete plan for combating disinformation about domestic elections, whether deepfakes impersonating candidates or false reports of violence or polling place closures that could discourage people from going to the polls.
In a statement to NBC News, the FBI acknowledged that even when investigating possible criminal activity involving false information, it is unlikely to immediately identify it as false.
“The FBI can and does investigate allegations that people are spreading false information with the intent to deny or undermine Americans’ right to vote,” the statement said. “The FBI takes these allegations seriously and must follow logical investigative procedures to determine whether federal law has been violated. These investigative procedures are not completed ‘on the spot.'”
The FBI added that it “works closely with state and local election officials to share information in real time; however, because elections are administered at the state level, the FBI defers to state-level election officials regarding their current plans to combat disinformation.”
A senior official at the Cybersecurity and Infrastructure Security Agency, the federal agency charged with protecting election infrastructure, said state and local election officials are in the best position to inform the public about false information spread by other Americans, but did not rule out the agency issuing public warnings if necessary.
“I wouldn’t say we won’t talk publicly about anything. I wouldn’t say definitively. I think it depends on the situation,” the official said.
“Is this specific to one state or jurisdiction? Is this happening across multiple states? Is this actually impacting elections infrastructure?” the official said.
CISA is focused on educating the public and training state and local election officials about tactics used in disinformation campaigns, the official said.
“CISA will continue to prioritize this as a threat vector that we take very seriously during this election cycle,” the official said.
Late-breaking deepfakes
Robert Weissman, president of the pro-democracy group Public Citizen, which has urged states to criminalize political deepfakes, said the federal government’s current approach will only lead to confusion.
He said his biggest fear is that deepfakes could surface late in a campaign to discredit candidates and sway the outcome of elections, a scenario that no government agency, from county election boards to federal authorities, currently has plans to address.
“If political activists have a tool at their disposal and it’s legal, they’re likely to use it, even if it’s unethical,” Weissman said. “It would be foolish to expect anything other than a tsunami of deepfakes.”
While creating false information to discourage people from voting is illegal, deepfakes that misrepresent the actions of candidates are not prohibited under federal law or under the laws of 30 states.
The Department of Homeland Security has warned election officials across the US that generative artificial intelligence could allow bad actors, both domestic and foreign, to impersonate election officials and spread misinformation, as has happened in other countries around the world in recent months.
In recent meetings with tech executives and nonpartisan watchdog groups, top federal cybersecurity officials acknowledged that fake AI-generated videos and audio clips could pose a potential risk in an election year, but they said CISA would not step in to warn the public because of the polarized political climate.
Intelligence agencies say they closely track disinformation spread by foreign adversaries, and officials have said recently that they are prepared to make public statements about specific disinformation when necessary, but only if the author is clearly a foreign actor and the threat is “serious” enough to jeopardize election results, though officials have not clearly defined what “serious” means.
At a Senate Intelligence Committee hearing last month on the threat of disinformation, senators said the government needs to develop a more coherent plan for tackling deepfakes that could cause damage during election campaigns.
Sen. Mark Warner (D-Va.), the committee’s chairman, told NBC News that the threat posed by generative AI is “serious and pervasive” and the federal government needs to be prepared to respond.
“While I continue to call on technology companies to do more to curb malicious AI content of all kinds, I believe it is appropriate for the federal government to have a plan in place to alert the public if there are serious threats posed by foreign adversaries,” he said. “In a domestic context, state and federal law enforcement may be in the position to determine whether election-related disinformation constitutes criminal activity, such as voter suppression.”
Reactions from other countries
Unlike the U.S. government, Canada has published an explanation of its decision-making protocols for how the Canadian government will respond to incidents that could jeopardize elections. The government’s website promises that “if an incident or series of incidents occurs that threatens the integrity of our elections, we will communicate clearly, transparently and fairly to Canadians during the election period.”
Other democracies, including Taiwan, France and Sweden, have taken a more proactive approach to tackling disinformation, flagging false reports and working closely with nonpartisan groups that fact-check and educate the public, experts say.
Prompted by Russia’s information war, Sweden, for example, established a special government agency to combat disinformation in 2022 and has sought to educate the public on what to watch out for and how to recognize attempts to spread lies.
France has set up a similar body, the Service for Vigilance and Protection against Foreign Digital Interference (known as Viginum), which regularly publishes detailed public reports identifying fake government websites, news sites and social media accounts spreading Russian-backed propaganda and false reports.
The European Union has followed France and other member states in setting up a center for sharing information and research among government agencies and private nonprofit organizations to track the issue.
But those countries are not as plagued by political divisions as the United States is, according to David Salvo, a former U.S. diplomat who is now managing director of the Alliance for Securing Democracy at the German Marshall Fund, a think tank.
“That’s a tough one to come by because best practices tend to be found in places where trust in government is much higher than it is here,” he said.
Disunity hinders U.S. efforts
After Russia spread disinformation through social media during the 2016 election, U.S. government agencies began working with social media companies and researchers to help identify potentially violent or destabilizing content. But a 2023 federal court ruling ordered federal agencies to refrain from even communicating with social media platforms about content.
The Supreme Court is scheduled to hear the case as soon as this week, and if it rejects the lower court’s ruling, more regular communication between federal agencies and tech companies could resume.
Early in President Joe Biden’s term, his administration tried to address the dangers posed by disinformation spread on social media, with the Department of Homeland Security setting up a disinformation board led by an expert from a bipartisan think tank in Washington. But Republicans in Congress criticized the Disinformation Governance Board as vaguely defined and a threat to free speech, and threatened to defund it.
Following the political pressure, the Department of Homeland Security shut down the board in August 2022. Nina Jankowicz, the expert who ran it, said she and her family received numerous death threats during her short tenure.
Experts say the polarized U.S. climate makes even informal cooperation between the federal government and private nonprofits politically difficult.
Nonpartisan organizations can be accused of partisan bias if they collaborate or share information with federal or state agencies, and many face accusations of stifling free speech merely for tracking online misinformation.
In recent years, threats of lawsuits and intense political attacks from pro-Trump Republicans have led many organizations and universities to move away from research on disinformation. Stanford University’s Internet Observatory, which published influential research during the election about how disinformation spreads through social media platforms, recently laid off most of its staff following a flurry of lawsuits and political criticism.
The university denied on Monday that it was closing the center because of outside political pressure, but said in a statement that the center was “facing funding challenges as its founding grant will soon dry up.”
Given the federal government’s reluctance to speak publicly about disinformation, state and local election officials will have to make quick decisions about whether to raise public alarms during election season, and some are already turning to a coalition of nonprofits that have hired tech experts to detect AI-generated deepfakes and provide accurate information about voting.
Two days before the New Hampshire presidential primary in January, the state attorney general’s office issued a statement warning people about AI-generated robocalls that used fake audio of Biden telling voters not to go to the polls. The New Hampshire secretary of state then called on news organizations to provide accurate information about voting.
