UK watchdog warns of Snapchat AI’s risk to children
The UK’s data watchdog has pressed instant messaging app Snapchat to assess its AI chatbot, stating that the artificial intelligence could pose a privacy risk to children using it.
The Information Commissioner’s Office (ICO) warned Snapchat that it will ban the bot, named “My AI”, if the US firm fails to address the regulator’s concerns.
The bot, launched in April, has drawn the ICO’s concern chiefly over its processing of Snapchat users’ personal data; the app hosts roughly 21 million UK accounts, including children aged 13-17.
“The provisional findings of our investigation suggest a worrying failure by Snap to adequately identify and assess the privacy risks to children and other users before launching ‘My AI’”, said Information Commissioner John Edwards.
The regulator clarified that the warning does not necessarily mean that Snapchat has breached British data protection law, or that the ICO will ultimately issue an enforcement notice.
“My AI went through a robust legal and privacy review process before being made publicly available,” a Snap spokesperson commented.
“We will continue to work constructively with the ICO to ensure they’re comfortable with our risk assessment procedures.”
The chatbot is powered by OpenAI’s ChatGPT, which is already facing investigations globally over privacy and safety concerns.
Cyber security experts, for example, are advising businesses using ChatGPT to avoid inputting any sensitive information.
If private information were fed to the chatbot, experts believe those with bad intentions could “social engineer” the AI into revealing that information back to them.
Edtech platform Degreed said: “Because of privacy concerns around ChatGPT, and to avoid the risk of sharing proprietary information in the public domain, we mostly use ChatGPT to ask for information and suggestions, and to summarise content.”