Apple co-founder warns of rise in AI-authored scams
Apple co-founder Steve Wozniak has warned that artificial intelligence (AI) could lead to an increase in cyber security scams because it emulates human-written text so convincingly that scams become harder to spot.
As most cyber security experts will tell you, if you are ever unsure about the source of an email and suspect it is a phishing scam, the tell-tale signs are typically poor spelling, bad grammar or odd-looking email addresses.
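Those tell-tale signs amount to a few crude heuristics, which can be sketched as a toy filter (every name, domain and word list below is hypothetical, purely for illustration, and not any vendor's actual method). The sketch also shows exactly why Wozniak's warning matters: an LLM that writes fluent, well-spelled text defeats the spelling check entirely.

```python
import re

# Hypothetical examples of domains/misspellings a naive filter might flag.
SUSPICIOUS_TLDS = {".xyz", ".top", ".click"}
COMMON_SCAM_MISSPELLINGS = {"verfy", "acount", "recieve", "securty"}

def looks_suspicious(sender: str, body: str) -> bool:
    """Apply the crude tell-tale signs: odd-looking address, bad spelling."""
    # Odd-looking address: sending domain ends in a dubious TLD.
    match = re.search(r"@([\w.-]+)$", sender)
    domain = match.group(1).lower() if match else ""
    odd_address = any(domain.endswith(tld) for tld in SUSPICIOUS_TLDS)

    # Crude proxy for "poor spelling": known scam misspellings in the body.
    words = set(re.findall(r"[a-z]+", body.lower()))
    bad_spelling = bool(words & COMMON_SCAM_MISSPELLINGS)

    return odd_address or bad_spelling

print(looks_suspicious("support@paypa1-secure.xyz",
                       "Please verfy your acount now"))  # → True
print(looks_suspicious("friend@example.com",
                       "See you at lunch on Tuesday"))   # → False
```

An LLM-drafted phishing email would pass the spelling check cleanly, leaving only weaker signals such as the sending address, which is Wozniak's point.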
The issue with large language models like ChatGPT is that they can create convincing text that, as Wozniak says, "sounds intelligent".
Expressing his concerns on the BBC's Today programme (7.20am), Wozniak told technology editor Zoe Kleinman:

"The trouble is that AI is so intelligent it is open to the bad players. The ones that want to trick you about who they are. Are they trying to sell you something you don't want? Are they trying to trick you to get your account information?…"
His comments are backed up by cyber experts at BlackBerry, which has predicted that within a year there will be a successful cyber attack credited to ChatGPT.

The firm also found that 53% of decision makers fear that ChatGPT will help hackers craft more believable and legitimate-sounding phishing emails.
Last month we reported on CISOs' fears that AI tools such as Bing and OpenAI's ChatGPT are lowering the bar for attackers globally – for instance, composing a phishing attack in Japanese no longer requires any localisation or translation technology.
In his interview with the BBC, Wozniak said that malicious intrusions of this nature were only going to get worse the more AI progressed.
For this reason, the computer pioneer warned the tech industry that it needed to be careful about how it progressed with the technology and to try to avoid the mistakes that were made following the birth of the internet.

He wondered, for instance, what would have happened if the industry had designed the internet with protocols that could have prevented spam.
Given this foresight, Kleinman asked if Wozniak thought that regulators were going to get it right this time – but of this he was doubtful.
“The forces that drive for money compared to the forces that drive for caring about us, for love – forces that drive for money usually win out. It’s sort of sad,” he said.
He added that anything published that was created by AI should carry a label identifying it as such, and that global media had a responsibility to ensure this happened.
Ultimately, Wozniak implied that it was the big tech companies, rather than regulators or individual countries, that have the real power and responsibility to regulate the use of AI.
Wozniak was one of the signatories, alongside Tesla founder and Twitter owner Elon Musk, of an open letter published in March, which called for a pause in the development of powerful AI models.
Last week another big tech pioneer, Geoffrey Hinton, quit his job at Google to warn of the dangers of AI and the care that must go into advancing it.
Yet AI is also advancing industries. Glasgow-based health data tech company Talking Medicines has developed a new tool, Drug-GPT, which claims to provide insights into doctor and patient views on various medications.
The insights gathered by the tool, claimed Talking Medicines, “helps agencies inform new business pitches, develop winning strategies, create creative content and monitor the impact of critical market events on customer behaviour”.