This website uses cookies so that we can provide you with the best user experience possible. Cookie information is stored in your browser and performs functions such as recognising you when you return to our website and helping our team to understand which sections of the website you find most interesting and useful.
Ethical hackers could be the answer to mitigating cyber-attacks, study finds
Ethical hackers may be the missing piece of the puzzle for security as they help 70% of organisations avoid significant cyber incidents in 2023.
According to a report from Hacker One — a security firm that specialises in attack resistance — a big focus for security leaders is trying to understand how to leverage generative AI while ensuring protection from inherent security issues and threats. Ethical hackers can help here.
GenAI has become a “significant tool” for 14% of ehtical hackers, and 53% of hackers are using it in some way — 62% of hackers plan to specialise in the OWASP Top 10 for large language models.
Over half of hackers also said that they do or will use genAI to write better reports, write code, and 33% will use it to reduce language barriers.
In terms of the risks genAI pose, almost a third of ethical hackers were most concerned about criminal exploitation, disinformation, and an increase in insecure code.
And while 38% of hackers said the technology will reduce the number of vulnerabilities in code, just under half said it will lead to an increase in vulnerabilities.
“There are now suddenly a whole host of attack vectors for AI powered applications that weren’t possible before,” said Joseph Thacker, a hacker specialising in AI.
At the simplest level, Thacker said it’s about tricking the AI into doing or revealing something it shouldn’t.
“For example, chatbots often have access to company documentation and frequently asked questions. You want the responses to be able to answer users’ questions intelligently but not completely siphon off internal documentation or leak data.”
He said that one of the reasons for AI security issues is that these applications are new and developing really fast, and they don’t necessarily take security seriously.
“The big companies that are creating their own versions of AI tools are more likely to bake security in because they already have a culture that prioritises security. Hackers are also more likely to be given alpha or beta access to these tools so they can see it first and reveal those vulnerabilities before it goes out the door.”
Some hackers are now experimenting with AI to write tools that will help hack better, but the AI models are getting harder to convince. To compete with the malicious actors, hackers said they need access to the same tools and techniques.
As things stand, 95% of hackers specialise in web application testing, 47% specialise in network application testing, 20% have experience with social engineering, and 63% do vulnerability research.
When it comes to the different types of technologies and applications that hackers specialise in, websites are still a key target (98%).
“If you want to remain proactive about new threats, you need to learn from the experts in the trenches: hackers,” said Chris Evans, HackerOne CISO and chief hacking officer.
“The Hacker-Powered Security Report makes clear that hackers are actively growing their skillsets to meet emerging threats. The versatility of hackers and the impact of the vulnerabilities they surface make them instrumental to how our customers anticipate and address risk.”
To read more stories on cyber security click here
#BeInformed
Subscribe to our Editor's weekly newsletter