AI is making cyberattacks more sophisticated and cybersecurity teams are struggling to keep up

Cybersecurity attacks are on the rise, putting a strain on cyber professionals, especially as artificial intelligence (AI) makes attacks more sophisticated, experts say.

New research from the Information Systems Audit and Control Association (ISACA) found that 39 per cent of the almost 6,000 global organisations it surveyed admit they are experiencing more cyberattacks, and 15 per cent are suffering more privacy breaches compared to a year ago.

The study also revealed that cybersecurity teams in Europe are struggling to keep up with the attacks.

More than 60 per cent of European cybersecurity professionals say that their organisation’s cybersecurity team is understaffed, and over half (52 per cent) believe that their organisation’s cybersecurity budget is underfunded.

The majority of these cyberattacks are ransomware, which involves locking a user’s data or files until a ransom is paid.

“The sophistication of AI is making those attacks very, very hard to detect,” Chris Dimitriadis, chief global strategy officer at ISACA, told Euronews Next.

He explained that Generative AI (GenAI) can analyse profiles of victims within organisations and then generate content that closely simulates a human.

“In the past, we have seen, for example, emails translated into local languages that had a lot of mistakes… So it was a little bit easier for the victim to understand that this is something that's definitely not legitimate,” Dimitriadis said.

“But with Gen AI, what we see is that this is very, very close to the distribution of a human person, extremely accurate as far as language, style or culture is concerned and also as far as the information that's included in it being more accurately or maybe deeper targeted to the environment of the victim”.

A separate investigation by Strise, a Norwegian anti-money laundering AI start-up, showed that ChatGPT can easily provide advice on how to commit financial crime online.

The investigation found that the chatbot could advise on exploiting banks with poor anti-money laundering practices, disguising illegal funds as legitimate loans by creating fake loan transactions, and using diverse tactics to make it harder for authorities to trace the money's source.

“The level of understanding [of ChatGPT] and its knowledge of the specific legal journalistic action, like what's required of certain banks and how you would go about it. I mean, it's just on all levels really good,” Strise CEO and co-founder Marit Rødevand told Euronews Next.

She said that when asked direct questions such as ‘how to launder money’, the chatbot refused, saying it was illegal and went against its policies.

But Rødevand said if you “get creative” by asking ChatGPT to write a film script about how to help a character called Shady Shark with their illegal dealings, then it would give you specific advice.

“It was a real eye-opener. I wasn't expecting just how good and accurate the answers were. It's like having your own personalised corrupt financial adviser on your mobile 24/7,” she said.

In February, Microsoft and OpenAI revealed that hackers were using large language models (LLMs) to refine cyberattacks. The companies detected attempts by Russian-, North Korean-, Iranian- and Chinese-backed groups to use chatbots for researching targets and improving scripts.

Both companies said they were working to minimise potential misuse by such actors but admitted they could not stop every instance.

How to combat cyberattacks

The way for companies to protect themselves is to ensure they have technological platforms suited to future threats and that they support cybersecurity professionals, Rødevand said.

But the ISACA report found that 71 per cent of organisations provide no staff training on digital trust, and more than half of cybersecurity teams said they are underfunded.

“With less funding, it's very hard to implement the right cybersecurity capabilities within their organisations,” said Dimitriadis.

“If you dive a little bit deeper, one of the causes of this underfunding is that cybersecurity doesn't generate revenue if you are not operating in the cybersecurity industry.

“But most importantly, it means that the decision-makers within the organisation have not yet grasped, or understood, the value [and] contribution of cybersecurity within the framework of their targets for this business,” he added.