Tech companies sign accord to combat AI-generated election trickery

FILE - Meta's president of global affairs Nick Clegg speaks at the World Economic Forum in Davos, Switzerland, Jan. 18, 2024. Adobe, Google, Meta, Microsoft, OpenAI, TikTok and other companies are gathering at the Munich Security Conference on Friday to announce a new voluntary framework for how they will respond to AI-generated deepfakes that deliberately trick voters. (AP Photo/Markus Schreiber, File)

Major technology companies signed a pact Friday to voluntarily adopt "reasonable precautions" to prevent artificial intelligence tools from being used to disrupt democratic elections around the world.

Executives from Adobe, Amazon, Google, IBM, Meta, Microsoft, OpenAI and TikTok gathered at the Munich Security Conference to announce a new framework for how they respond to AI-generated deepfakes that deliberately trick voters. Twelve other companies, including Elon Musk's X, are also signing on to the accord.

"Everybody recognizes that no one tech company, no one government, no one civil society organization is able to deal with the advent of this technology and its possible nefarious use on their own," said Nick Clegg, president of global affairs for Meta, the parent company of Facebook and Instagram, in an interview ahead of the summit.

The accord is largely symbolic, but targets increasingly realistic AI-generated images, audio and video "that deceptively fake or alter the appearance, voice, or actions of political candidates, election officials, and other key stakeholders in a democratic election, or that provide false information to voters about when, where, and how they can lawfully vote."

The companies aren't committing to ban or remove deepfakes. Instead, the accord outlines methods they will use to try to detect and label deceptive AI content when it is created or distributed on their platforms. It notes the companies will share best practices with each other and provide "swift and proportionate responses" when that content starts to spread.

The vagueness of the commitments and the lack of any binding requirements likely helped win over a diverse swath of companies, but disappointed advocates who were looking for stronger assurances.

"The language isn't quite as strong as one might have expected," said Rachel Orey, senior associate director of the Elections Project at the Bipartisan Policy Center. "I think we should give credit where credit is due, and acknowledge that the companies do have a vested interest in their tools not being used to undermine free and fair elections. That said, it is voluntary, and we'll be keeping an eye on whether they follow through."

Clegg said each company "quite rightly has its own set of content policies."

"This is not attempting to try to impose a straitjacket on everybody," he said. "And in any event, no one in the industry thinks that you can deal with a whole new technological paradigm by sweeping things under the rug and trying to play whack-a-mole and finding everything that you think may mislead someone."

Several political leaders from Europe and the U.S. also joined Friday's announcement. European Commission Vice President Vera Jourova said while such an agreement can't be comprehensive, "it contains very impactful and positive elements." She also urged fellow politicians to take responsibility to not use AI tools deceptively and warned that AI-fueled disinformation could bring about "the end of democracy, not only in the EU member states."

The agreement at the German city's annual security meeting comes as more than 50 countries are due to hold national elections in 2024. Several, most recently Indonesia, have already done so.

Attempts at AI-generated election interference have already begun, such as when AI robocalls mimicking U.S. President Joe Biden's voice tried to discourage people from voting in New Hampshire's primary election last month.

Just days before Slovakia's elections, AI-generated audio recordings impersonated a candidate discussing plans to raise beer prices and rig the election. Fact-checkers scrambled to identify them as false as they spread across social media.

Politicians also have experimented with the technology, from using AI chatbots to communicate with voters to adding AI-generated images to ads.

The accord calls on platforms to "pay attention to context and in particular to safeguarding educational, documentary, artistic, satirical, and political expression."

It said the companies will focus on transparency to users about their policies and work to educate the public about how they can avoid falling for AI fakes.

Most companies have previously said they're putting safeguards on their own generative AI tools that can manipulate images and sound, while also working to identify and label AI-generated content so that social media users know if what they're seeing is real. But most of those proposed solutions haven't yet rolled out, and the companies have faced pressure to do more.

That pressure is heightened in the U.S., where Congress has yet to pass laws regulating AI in politics, leaving companies to largely govern themselves.

The Federal Communications Commission recently confirmed that AI-generated audio clips in robocalls are against the law, but that doesn't cover audio deepfakes when they circulate on social media or in campaign advertisements.

Many social media companies already have policies in place to deter deceptive posts about electoral processes, AI-generated or not. Meta, for example, says it removes misinformation about "the dates, locations, times, and methods for voting, voter registration, or census participation" as well as other false posts meant to interfere with someone's civic participation.

Jeff Allen, co-founder of the Integrity Institute and a former Facebook data scientist, said the accord seems like a "positive step" but he'd still like to see social media companies taking other actions to combat misinformation, such as building content recommendation systems that don't prioritize engagement above all else.

Lisa Gilbert, executive vice president of the advocacy group Public Citizen, argued Friday that the accord is "not enough" and AI companies should "hold back technology" such as hyper-realistic text-to-video generators "until there are substantial and adequate safeguards in place to help us avert many potential problems."

In addition to the companies that helped broker Friday's agreement, other signatories include chatbot developers Anthropic and Inflection AI; voice-clone startup ElevenLabs; chip designer Arm Holdings; security companies McAfee and TrendMicro; and Stability AI, known for making the image-generator Stable Diffusion.

Notably absent is another popular AI image-generator, Midjourney. The San Francisco-based startup didn't immediately respond to a request for comment Friday.

The inclusion of X, which was not mentioned in an earlier announcement about the pending accord, was one of the surprises of Friday's agreement. Musk sharply curtailed content-moderation teams after taking over the former Twitter and has described himself as a "free speech absolutist."

In a statement Friday, X CEO Linda Yaccarino said "every citizen and company has a responsibility to safeguard free and fair elections."

"X is dedicated to playing its part, collaborating with peers to combat AI threats while also protecting free speech and maximizing transparency," she said.

__

The Associated Press receives support from several private foundations to enhance its explanatory coverage of elections and democracy. See more about AP's democracy initiative. The AP is solely responsible for all content.

The Associated Press. All rights reserved.
