Employees warned over AI chatbot data threat
New research claims that employees are sharing sensitive company information with AI chatbots such as ChatGPT without realising the risk of leaks and breaches.
The findings, from cyber security firm Netskope, show that enterprise organisations experience approximately 183 incidents of sensitive data being shared with ChatGPT per 10,000 users each month.
Source code is the type of sensitive data most frequently shared by employees, at a rate of 158 incidents per 10,000 users per month.
Other sensitive data being shared with ChatGPT includes regulated data, such as financial and healthcare records, personally identifiable information, and passwords and keys, which are usually embedded in source code.
With generative AI app usage growing (up 22.5% over the past two months), Netskope predicts that the amount of sensitive data being shared will only increase.
“It is inevitable that some users will upload proprietary source code or text containing sensitive data to AI tools that promise to help with programming or writing,” says Ray Canzanese, threat research director at Netskope Threat Labs.
“Therefore, it is imperative for organisations to place controls around AI to prevent sensitive data leaks. Controls that empower users to reap the benefits of AI, streamlining operations and improving efficiency, while mitigating the risks are the ultimate goal,” he adds.
According to Netskope, the most effective controls against sensitive data leaks are interactive user coaching and Data Loss Prevention (DLP) technologies, cyber security tools that detect and prevent data breaches.
The cyber security firm also advises blocking access to apps that do not serve any legitimate business purpose and pose a risk.
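To illustrate the kind of check a DLP control performs, here is a minimal sketch in Python that scans outbound text for common secret patterns before it reaches a chatbot. The pattern set and function names are illustrative assumptions, not Netskope's product or rule set; real DLP tooling applies far broader detection and policy logic.

    import re

    # Illustrative patterns only; commercial DLP products ship far broader rule sets.
    SENSITIVE_PATTERNS = {
        "AWS access key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
        "private key header": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
        "password assignment": re.compile(r"(?i)\bpassword\s*[:=]\s*\S+"),
    }

    def scan_outbound_text(text: str) -> list[str]:
        """Return the names of any sensitive patterns found in the text."""
        return [name for name, pattern in SENSITIVE_PATTERNS.items()
                if pattern.search(text)]

    prompt = 'Debug this: password = "hunter2" and key AKIAABCDEFGHIJKLMNOP'
    findings = scan_outbound_text(prompt)
    if findings:
        # A real DLP gateway would block or redact the prompt and coach the user.
        print("Blocked: prompt contains " + ", ".join(findings))

In practice this check would sit at a network gateway or browser plugin, paired with the user coaching Netskope describes, so the employee learns why the prompt was blocked rather than simply being refused.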
Asking ChatGPT
When TI asked ChatGPT why employees shouldn’t share sensitive data with it, the chatbot listed several reasons.
Firstly, chatbots operate on external servers, which means sensitive data may be stored or logged on those servers. This poses a risk to data security, especially if the data falls into the wrong hands or is accessed without proper authorisation.
As a result, companies lose control over how the data is handled, and while “external chatbot providers might have robust security measures, no system is completely immune to data breaches”.
There is also the risk that chatbots like GPT-3 will reshare the sensitive information in conversations with parties outside the company.
“Overall, while chatbots like GPT-3 can be incredibly useful tools for various tasks, they are not designed to handle sensitive company data,” ChatGPT wrote.
Instead, it advises that organisations prioritise internal, secure systems for handling sensitive information and ensure employees are aware of the risks of sharing such data.
When it comes to data breaches, companies can also use ChatGPT to work out more efficiently what an attacker’s next move may be.
Etay Maor, senior director at cyber security firm Cato Networks, told TI that he asked the chatbot what he should do in the event of an attack involving credential theft, and what the attackers’ next steps were likely to be.
The AI listed ten similar attacks and predicted command scripting as the attackers’ most likely next move, analysis that would otherwise take an analyst a day.
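That kind of triage query can also be scripted. Below is a minimal sketch using the OpenAI Python SDK; the model name, prompt wording, and incident details are assumptions for illustration, not Maor’s actual workflow, and any real incident data should be sanitised before it is sent, for the reasons discussed above.

    # Minimal sketch of an incident-triage query via the OpenAI Python SDK (v1).
    # Model name and prompt are assumptions; sanitise real incident details first.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    incident = ("Phishing email led to credential theft; the attacker logged in "
                "from an unfamiliar IP address and enumerated file shares.")

    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "You assist a SOC analyst with incident triage."},
            {"role": "user",
             "content": f"Given this incident: {incident} "
                        "List similar known attacks and the attacker's most "
                        "likely next steps."},
        ],
    )

    print(response.choices[0].message.content)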