How synthetic media can upend reality
How can we tell a real image from a fake one? Before the digital age, doctored photos made it all the way to the front pages of newspapers before they were exposed as forgeries; with today’s smarter technology, telling real from fake is like playing an unwinnable game of Spot the Difference.
Deepfakes, digital forgeries that manipulate audio and video to misrepresent individuals and hijack reality, are becoming a common phenomenon that threatens to further destabilise society’s already tumultuous information environment.
As the technology evolves rapidly, it opens a Pandora’s box of perilous implications that could upend public trust, democratic integrity, financial systems, creative industries, and beyond.
Experts predict deepfakes will become more sophisticated and widespread in the coming years, necessitating informed awareness and proactive countermeasures to safeguard society from deception.
Seeing is believing
As deepfake technology becomes more advanced and accessible, security experts caution that criminals will increasingly weaponise it for malicious activities like financial fraud and cybercrime.
Ivana Bartoletti, a privacy and data protection expert at Wipro, says incidents of hackers utilising deepfakes for theft and deceit are becoming more common.
She describes an emerging tactic in which fraudsters create fake audio or video of executives to dupe employees into transferring funds. Bartoletti advises companies to implement robust cybersecurity defences such as multi-factor authentication for financial transactions, provide deepfake awareness training for personnel, and invest in AI detection tools to combat this novel crime.
“This is a very serious concern,” says Bartoletti. “Generative AI is evolving quickly and becoming more accessible. And so the ease of manipulating content and spreading untruths on a large scale has increased, regardless of whether it’s by malicious actors, covert beginners, or others.”
Drew Rose, a cybersecurity expert and CSO at Living Security, affirms that while deepfake cybercrime has yet to become mainstream, incidents are rising as the technology becomes more consumerised and automated.
Rose points to an example in which fraudsters could exploit deepfakes to impersonate a close relative of a vulnerable individual and extract money, making the scam far more convincing with falsified audio and video.
Rose emphasises that beyond typical cyber protections, organisations must educate personnel that deepfakes represent an evolution in cyber threats, one that will become increasingly deceptive and difficult to detect.
Beyond cybercrime, experts gravely caution that deepfakes could be weaponised to manipulate democratic processes, sway elections, and erode public trust.
Bartoletti warns that generative AI can manufacture and spread fake videos and audio of political figures with alarming authenticity.
One example she cites is a deepfake video depicting Barack Obama verbally abusing Donald Trump. Bartoletti warns that customised AI-generated misinformation targeting voter groups could intensify polarisation, stigma, and erosion of trust in institutions.
She notes studies showing that people struggle to distinguish AI-generated text and media from human-created content, with AI fabrications often perceived as equally or more credible.
Similarly, Rose says deepfakes represent an “enhanced level of deceit” beyond ordinary fake-news hoaxes, and that the technology will only become more accessible to everyday individuals, enabling wide dissemination of deceptive media aimed at political manipulation.
“Picture this: amidst a heated election campaign, a deepfake surfaces, falsely portraying a candidate in a compromising situation. The implications? They could be monumental,” says Rose.
“Public education is crucial. People need to know the capabilities of deepfakes and be skeptical of surprising videos, especially if they’re intended to incite strong emotional reactions or are not corroborated by reputable sources.”
Bartoletti explains that while AI models for detecting misinformation are advancing, human-centred solutions like public fact-checking and education remain imperative.
She advocates increased government investment in education, calling it vital to counter the risks of AI disinformation. Rose agrees that even as detection technology improves, public awareness remains the critical defence.
He says raising awareness among non-technical audiences remains an immense challenge, requiring coordinated efforts across institutions.
Grappling with synthetic media
Beyond cybercrime and disinformation campaigns, experts say deepfakes raise thorny issues regarding intellectual property and the viability of creative professions.
Technology ethicist Arturo Perez-Reyes (also a senior vice president and cyber lead strategist at cyber brokerage firm Newfront) explains that while re-appropriating and remixing content has occurred for centuries, AI synthesis represents a far more significant challenge.
Perez-Reyes cautions that synthetic media could cause economic and intellectual harm, as AI systems trained on human-made content could eventually displace the human creators themselves.
He draws a comparison with previous innovations such as self-publishing and digital distribution, which initially empowered creators but led to market saturation. He warns that AI-generated content will cannibalise and crowd out authentic human creations over time, a process he likens to “Gresham’s law”, under which counterfeits drive down the value of original work.
Perez-Reyes says deepfakes enable anyone to digitally puppeteer public figures, illustrating the technology’s potential for infringing personality rights. He discusses ongoing legal disputes surrounding AI-generated content and deepfakes, noting courts are conflicted on issues of copyright protections.
While deepfakes offer entertainment opportunities, Perez-Reyes predicts a stratified industry will emerge, with a few winning creators and many others failing to find an audience in an oversaturated market.
“Deepfakes are a subset of the synthetic-media problem, so there are real issues with misappropriation,” says Perez-Reyes. “There will not be a simple solution.”
Experts uniformly express deep concern over the duplicitous dangers of deepfakes. As technology becomes more sophisticated and accessible, they urge society to prioritise awareness, education, regulation, and oversight to prevent manipulation.