As the US gears up for the 2024 presidential election next month, Donald Trump and his outspoken supporter Elon Musk are the most frequently deepfaked public figures, according to new research by video content platform Kapwing.

The study tracked deepfake video requests made through a popular text-to-video AI tool. It found that 64% of the deepfake videos featuring the ten most-targeted figures depicted politicians and business leaders.

Donald Trump, the former president and current Republican candidate, topped the list with 12,384 deepfake videos. He was followed closely by Elon Musk, CEO of Tesla and X (formerly Twitter), with more than 9,500 deepfakes.

Current US President Joe Biden and Meta CEO Mark Zuckerberg also made the top ten.

What are deepfake videos?

Deepfake videos use AI to superimpose one person’s appearance onto another’s, producing fake content that gives the illusion of people saying or doing things they never actually did.

These videos, enhanced by GenAI, are becoming increasingly convincing and difficult to identify, posing significant risks to public trust.

The threat of deepfakes to democracy


The prominence of Musk and Trump as deepfake targets underscores the growing risk this technology poses to business leaders and politicians alike, particularly with the 2024 US election just around the corner.

Eric Lu, co-founder of Kapwing, who conducted the study, says: “Our goal with this study is to bring hard data to the conversation about the potential dangers surrounding deepfake technology.”

According to Lu, deepfakes could be weaponised to spread misinformation, influence public opinion, or even deceive voters.

“The findings of our study clearly show that video deepfakes have already gone mainstream, and so have the tools that can be used to make them. These tools need to be made available in a safe way.”


Last year, London Mayor Sadiq Khan called for action against disinformation after deepfake audio of his voice making inflammatory remarks was leaked.

Similarly, US voters in New Hampshire received a deepfake robocall earlier this year purporting to be from President Joe Biden, leading the New Hampshire attorney general’s office to release a statement debunking the hoax.


Social media’s role


Social media platforms are often the main distribution channels for deepfakes, amplifying their reach and popularity.

Kapwing’s study urges platforms to take responsibility for the dissemination of deepfaked media.

“Social media platforms like YouTube, Instagram, Facebook and X have an important responsibility to add checks and labels for AI-generated content before it can do damage,” Lu noted.

“Preventing fake news or financial scams early on, before the posts go viral, will be an important problem.” He adds that platforms could establish expert review teams to vet questionable content, reducing the likelihood of widespread damage.


The growing risk to businesses


While politicians are no strangers to public scrutiny, business leaders are not immune to the consequences of AI-driven digital manipulation either.

Mark Zuckerberg and Bill Gates were also among the most deepfaked figures in 2024, with 1,738 and 526 deepfake video requests, respectively.

Gates, co-founder of Microsoft, was the subject of a widely circulated deepfake video that appeared to show him abruptly ending an interview after being questioned about his role in COVID-19 vaccine distribution.

Misusing deepfake technology in corporate settings can lead to reputational damage, stock manipulation, and fraudulent announcements that could have a significant impact.

Kapwing’s report highlights how accessible GenAI tools have become: because they require minimal technical expertise, even individuals with limited skills can create highly convincing fake content.


The deepfake regulatory challenge


Efforts to regulate deepfakes are already facing hurdles. A California law allowing individuals to sue over election-related deepfakes was blocked by a federal judge this week on the grounds of First Amendment concerns.

This legal challenge illustrates the difficulty of crafting effective regulations that address the threats posed by deepfake technology without infringing on free speech.

With Generative AI becoming more sophisticated and readily available, the onus is on both tech platforms and regulators to strike a balance between innovation and security.

Lu's call for watermarking AI-generated content and for social media platforms to add clear labels to deepfake videos is a step in the right direction, but a comprehensive solution remains elusive.

To spot deepfakes, Lu says: “My top three tips are looking for a blurry mouth area or inconsistent movement of the teeth, watching out for unnatural blinking or lack of blinking, and listening for monotone voices and unnatural breathing patterns.”
