Hiscox Global Insight

AI in the workplace: maximising the business opportunity, minimising the business threat

Large language models (LLMs) like ChatGPT, Claude, Gemini and Grok are leading an AI-powered revolution. According to one study, more than 50% of adults in the US now use AI for anything from writing emails to creating presentations and even planning trips. Another report suggests that over a quarter (26%) of US employees use AI frequently in their workplace. But while AI is transforming the workplace for many, and introducing significant productivity and efficiency benefits, the use of tools like LLMs is also opening the door to a new range of cyber security and business threats. 

“Many organisations are integrating LLMs into their operations but may be unaware of the potential security risks that can come with releasing sensitive and confidential data onto a third-party platform," says Hiscox London Market’s Tim Andrews - Cyber Line Underwriter. “But it’s not just data security that’s at risk, LLMs can also be vulnerable to issues such as prompt injection as well the dissemination of incorrect information, which can lead to a whole range of negative consequences. It’s why, to fully and safely take advantage of the AI-driven opportunities, there must be an equal focus on good AI governance and human oversight.”  

AI gains

Some estimates suggest that AI has the potential to deliver significant annual labour productivity growth. The London School of Economics, for example, reports that its employees using AI are saving the equivalent of one day a week. While the Organisation for Economic Co-operation and Development has said the UK could see a benefit of up to 1.2% growth in annual labour productivity; a figure that would become significant as it compounds over the years. The International Monetary Fund further finds that up to 70% of UK workers are in jobs where AI could either “perform or enhance” their roles.  

“AI can automate tasks and pull-out key information from large data sets, for example, which allows human workers to spend less time on those tasks and more time in the value-added areas that require human judgement and expertise,” says Monika Delekta-Ebbage – Head of Data Science within Hiscox London Market. “These capabilities offer a huge opportunity for businesses to boost both productivity and innovation. For example, here at Hiscox, we launched the London insurance market’s first augmented lead underwriting model enhanced by generative AI, which helps to remove manual elements of the underwriting process and provide customers with faster insurance quotes.” 

AI risks 

While the potential is exciting, the use of AI is not risk free. According to the UK’s National Cyber Security Centre, the cyber risks in using generative AI and LLMs include the possibility that the technology simply presents the wrong answers as facts rather than admitting it does not know the answer (AI hallucination); the tendency for results to be biased; a vulnerability to ‘prompt injection attacks’ – where attackers can insert hidden instructions into LLM prompts and override the original instructions; and data poisoning, where attackers corrupt the source data used to train the model.  

Good governance

It’s why businesses must ensure that they have effective AI governance in place and take steps to minimise the risks. “If employees are allowed to use their own publicly available versions of LLMs like ChatGPT or Gemini to summarise work documents and emails, commonly referred to as shadow AI, information such as customer data and other confidential details could be accessible to a hacker,” says Delekta-Ebbage. “One way of countering that risk is to make sure these tools are enterprise protected. This means models are not trained with a business’s own data, and any information that goes into an LLM stays within a business’s own ‘four walls’ and cannot be accessed by third parties."  

It’s also important, adds Delekta-Ebbage, that access to these tools is appropriately protected: “Employees should only be able to see information relevant to their job function, and not confidential data that is not appropriate to them, such as the HR records of other employees.”  

Another consideration is the adoption of a standards-based approach to reduce potential risk which means looking at the adoption of standards such as the EU AI Act, NIST AI Risk Management Framework, ISO42001 and OWASP Top 10.  

Increasingly, it’s a combination of measures like these that cyber insurance underwriters will interrogate when looking at a business’s risk profile and their exposure to risk from tools like LLMs, adds Andrews: “It comes down to good AI governance and following best practice in the use of AI tools. We ask questions of our potential customers around their policies and processes, and how they ensure the right level of oversight and control, and we expect them to be aware of the risks and how they protect any sensitive information like client data. It’s also critical they have robust incident response plans in place should AI cause an issue, covering potential problems such as harmful outputs and hallucinations.”  

Strike the right balance

There is a recognition, however, that to fully exploit AI, best practice demands the right blend of freedom to use and push the technology, at the same time as having the guardrails in place to protect against any downside risk. "Cyber security talks about the CIA triad of confidentiality, integrity and availability, and the importance of getting the balance right between those three factors," says Andrews. "I see the approach to AI as similar in that there is a happy medium to strike between allowing a business to leverage the power of AI, while having the appropriate controls to safeguard against problems like data security and other potential fallibilities such as AI hallucinations.” 

It’s why having the right people to oversee AI is so important, adds Delekta-Ebbage: "Organisations should encourage curiosity around AI but ensure that everyone has access to the skills and training needed to experiment safely. Data scientists working with functional experts can translate LLM capabilities into practical, operationally sound solutions. While leadership plays a key role in understanding the risks and establishing the guardrails that empower people to innovate but, at the same time, protecting the business." 

Don’t let security lag

Crucially, those steps that allow the safe use of AI must keep pace with the fast-evolving technology. “Many CEOs are saying, ‘AI is a great opportunity. We can do more with less or more with the same’,” says Andrews. “That’s a big potential productivity driver for businesses, but the critical thing is to match that enthusiasm to push the operational capabilities of the business with equal attention paid to the security side. Historically, we’ve seen cyber security lag behind technological developments. Think back ten years ago and only the most mature businesses had a dedicated cyber security team. That's all changed, and cyber is now recognised as a high-priority risk to business. AI risk should be just as prominent today in terms of ongoing investment in the governance of its use and the need for continued human oversight.”

Find out more about how Hiscox is using AI throughout its business in its Report and Accounts 2025.