AI and the cyber threat: businesses must fight fire with fire
The UK’s National Cyber Security Centre (NCSC) warned last year that “Artificial intelligence (AI) will almost certainly continue to make elements of cyber intrusion operations more effective and efficient, leading to an increase in frequency and intensity of cyber threats.” It was a stark reminder that the cyber threat landscape is being reshaped by the rapid development of AI, as criminals use generative AI to extend the reach and effectiveness of their social engineering attacks, or deploy AI-powered automated malware to find and exploit system vulnerabilities. Conversely, though, it must also be recognised that AI is part of the solution for businesses and organisations seeking to strengthen their cyber defences and build their resilience.
“AI introduces a whole new level of complexity to cyber attacks and cyber defences must be realigned to counter this fast-evolving threat,” says Tim Andrews, Cyber Line Underwriter at Hiscox. “The good news is, while AI can be cast as the villain of the piece, it can just as easily play the sheriff by offering effective new methods of defence against the very same threats it poses, in areas like threat detection and analysis, and predictive security.”
Criminals hit hard with AI-powered attack…
Last year, Anthropic, the business behind the AI-powered chatbot Claude, alleged that state-sponsored cyber criminals instructed the tool to carry out small, automated tasks. When combined, these tasks amounted to a “highly sophisticated espionage campaign” against a range of businesses, from technology companies to financial institutions and chemical manufacturers. In another case, an AI-powered deepfake scam saw a UK engineering group lose US$25 million in 2024 to fraudsters who digitally cloned a senior manager to instruct financial transfers during a video conference. “These examples illustrate the diversity of attack methods that can be deployed using AI,” says Andrews, and it’s clear that businesses are concerned about the potential power of the threat. According to Hiscox’s most recent Cyber Readiness Report, 60% of the businesses surveyed believe that AI-driven social engineering, AI malware and phishing attacks, and AI taking control of their company’s data will be the top three emerging AI threats over the next five years.
…while businesses can hit back with AI-powered defence
As much as AI is an attack threat, however, it is also at the vanguard of development when it comes to strengthening cyber defences. The NCSC shares that view: as well as highlighting the threat from AI, it balanced its judgement by noting that AI will also “aid system owners and software developers in securing systems” – an observation not lost on business. Hiscox’s Cyber Readiness Report, for example, found that almost two-thirds (65%) of those responsible for cyber security in their business consider AI more of an asset than a vulnerability when it comes to security support, and they’re acting on that judgement.
Recent research from cyber security firm Trend Micro revealed that more than eight out of ten businesses have already adopted AI-powered tools as part of their cyber security strategies. “AI-powered is the buzz phrase for seemingly every piece of new software launched, and cyber security software is no exception,” says Andrews, “particularly when it’s used to spot irregularities that could have been missed by human observation alone.” Take a business managing thousands of employee system logins, for example: AI can instantly spot anomalies such as individuals keeping unusual hours, or logging in from a new location they could not plausibly have reached in the time since their previous login. “Having access to this type of intelligence gives a cyber security analyst a heads-up that something could be wrong and allows them to investigate and take action if needed. There might be an innocent explanation, but AI is flagging potential issues before they can become a problem, while also using machine learning to hone its judgement, filter out any ‘noise’ and understand what needs to be escalated to a human analyst,” says Andrews.
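The “impossible travel” check described above can be sketched in a few lines: compare the distance between two consecutive logins with the time elapsed, and flag any pair that implies a faster-than-aircraft journey. This is an illustrative sketch only; the `Login` record, field names and speed threshold below are invented for this example and are not drawn from any particular security product.

```python
from dataclasses import dataclass
from datetime import datetime
from math import radians, sin, cos, asin, sqrt

@dataclass
class Login:
    user: str
    timestamp: datetime
    lat: float
    lon: float

def haversine_km(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Great-circle distance between two points, in kilometres."""
    dlat = radians(lat2 - lat1)
    dlon = radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))  # 6371 km = mean Earth radius

# Illustrative threshold: roughly airliner cruise speed; any implied
# speed above this means the user could not physically have made the trip.
MAX_SPEED_KMH = 900

def impossible_travel(prev: Login, curr: Login) -> bool:
    """Flag a pair of logins whose implied travel speed is implausible."""
    hours = (curr.timestamp - prev.timestamp).total_seconds() / 3600
    if hours <= 0:
        return True  # simultaneous or out-of-order logins: always suspicious
    distance = haversine_km(prev.lat, prev.lon, curr.lat, curr.lon)
    return distance / hours > MAX_SPEED_KMH
```

A login in London at 09:00 followed by one in New York at 10:00 the same morning would be flagged (about 5,570 km in one hour), while London at 09:00 followed by Paris at 13:00 would not. Real products layer machine learning on top of simple rules like this to cut false positives, as Andrews notes.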
Here comes the next wave of AI innovation – agentic AI
AI development continues at pace, however, and the next wave of innovation involves agentic AI, where the technology not only makes the initial decisions but takes the necessary action autonomously. Agentic AI is already being widely considered or adopted, with nearly 90% of security professionals recognising it as a “priority” for their business. “Removing human oversight from the security process could be problematic,” cautions Andrews. “For an attacker using agentic AI, they have nothing to lose – if the technology fails, they’ll just keep trying with few repercussions. The potential downside for a business relying purely on AI for every stage of its defence, however, is much greater if the attacker gets through.”
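One common way to act on the caution above is to let an agent respond autonomously only where the potential damage is contained, and escalate everything else to a human analyst. The sketch below is hypothetical: the risk scores, thresholds and category names are invented for illustration and do not describe any specific agentic AI product.

```python
from enum import Enum

class Action(Enum):
    AUTO_BLOCK = "auto_block"            # agent acts on its own
    ESCALATE = "escalate_to_analyst"     # human decides
    LOG_ONLY = "log_only"                # record, no action

def triage(risk_score: float, blast_radius: str) -> Action:
    """Route an AI-generated alert.

    risk_score: model confidence that the alert is a genuine attack (0.0-1.0).
    blast_radius: "low" or "high" - how much damage an automated
    response could do if the alert turns out to be a false positive.
    """
    if blast_radius == "high":
        return Action.ESCALATE       # never act alone where the downside is large
    if risk_score >= 0.9:
        return Action.AUTO_BLOCK     # confident and contained: agent may act
    if risk_score >= 0.5:
        return Action.ESCALATE       # uncertain: hand to a human
    return Action.LOG_ONLY
```

The asymmetry Andrews describes is built into the rule: an attacker’s agent can fail cheaply and retry, so the defender’s agent is deliberately restricted to the cases where a mistake is recoverable.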
Competing AI
Ultimately, concludes Andrews, AI is an important cyber security tool, but businesses must use it as part of an overall cyber security strategy that is aligned with their risk: “It’s critical that cyber security teams educate themselves in how AI can be both a threat and an asset in the ongoing fight against cyber crime. While in many ways it’s an AI competition between attackers and defenders, there is much to gain from AI when it’s used as part of a well-structured and holistic cyber security approach.”