Emma Sansom, group head of captives at Zurich Insurance Company, on the evolving digital landscape and why captives are well suited to manage the complex and hard-to-predict threats arising from AI
Digitisation and artificial intelligence, while helping to create business opportunities, are at the same time giving rise to significant new risks.
Despite the challenges around reining in potential threats from an emerging technology, the development of artificial intelligence (AI) is showing no signs of slowing.
On the contrary, new uses for the technology are regularly making headlines. This is of concern to insurers as they seek to provide meaningful solutions while navigating this rapidly evolving digital landscape.
Captive insurers, with a long history as a home for non-traditional risks, working with their parent organisations and insurers alike, are well suited to manage the complex and hard-to-predict threats arising from the rapid development of AI.
Defining the risk
Algorithmic risk and AI risk, terms generally used interchangeably, arise from the use of data analytics and cognitive technology-based software algorithms to make decisions.
The risk has grown as AI has seen widespread and rapid adoption. When such risks materialise and cause harm to companies or individuals, liability issues can soon follow.
A number of studies have found algorithms using skewed data to make decisions. For example, AI used to guide healthcare diagnosis decisions has been shown to produce racial bias.
In another case, the US Equal Employment Opportunity Commission (EEOC) reached a settlement based on its claim that an AI system used for recruitment discriminated on the basis of age by rejecting applications from candidates above a certain age.
This appears to be the first AI-based anti-discrimination settlement of its kind, but more seem likely to follow.
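Detecting such bias is itself a technical exercise. As a minimal illustration (not a description of the methods used in the cases above), the sketch below applies the widely used 'four-fifths' disparate impact check, comparing selection rates between two groups of applicants on invented data.

```python
# Minimal sketch: the "four-fifths" disparate impact check often used to
# screen automated decision systems for group-level bias. All figures are
# hypothetical and purely illustrative.

def selection_rate(decisions):
    """Share of applicants in a group who received a positive decision."""
    return sum(decisions) / len(decisions)

# Hypothetical outcomes (1 = selected, 0 = rejected) for two groups
group_a = [1, 1, 0, 1, 1, 0, 1, 1]   # selection rate 0.75
group_b = [1, 0, 0, 0, 1, 0, 0, 1]   # selection rate 0.375

ratio = selection_rate(group_b) / selection_rate(group_a)
print(f"Disparate impact ratio: {ratio:.2f}")

# A ratio below 0.8 is a common (though not conclusive) red flag that the
# system may be disadvantaging one group.
if ratio < 0.8:
    print("Potential adverse impact - investigate the model and its data.")
```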
Regulators taking action
Misinformation and disinformation, driven in part by AI systems, were identified as the top short-term risk in the WEF Global Risks Report 2024, with adverse outcomes of AI technologies ranked the sixth most concerning risk over the next 10-year period.
Both were new entrants in this year’s report. As concern has grown over the harm that could result from the use of AI, legislators and regulators have begun to address the issue, anticipating both current and future risks.
On 14 December 2023, the EU Parliament and Council came to a provisional agreement on reforms to the nearly 40-year-old Product Liability Directive, with new provisions aimed at modernising the existing framework to consider digitisation and global value chains.
The revised Product Liability Directive introduces definitions that class software, including AI systems, as products.
The revised provisions seek to ease the burden of proof in certain cases, via new presumptions of defectiveness and/or causation, and broaden the scope of liability for EU companies even if the product was not manufactured or bought in the EU.
It also updates the definition of damage to include psychological health and the destruction or corruption of data, and extends the expiry period to 25 years in exceptional cases where symptoms are slow to emerge.
Formal adoption is expected in the coming months, and the new rules will apply 12 months after entry into force.
In the US, several states have enacted legislation protecting citizens from potential harm caused by AI, and on 8 February 2024 the White House announced the formation of the US Artificial Intelligence Safety Institute Consortium (AISIC) to support the safe development and deployment of AI.
Challenges facing insurers
A lack of historical information, combined with a complicated legal and regulatory framework that differs from country to country, makes it challenging to navigate and ensure compliance around not just the use of AI itself, but also the risks that arise from operating within such a complex web of law and regulation.
Despite the current lack of clarity, insurers can be expected to become more comfortable committing capacity as they identify the scope of the risk and as data accumulates.
In the meantime, insurers have several concerns about AI due to the unique challenges and risks it presents. Lack of historical claims data and uncertain loss patterns as new types of risks emerge make it challenging for underwriters to assess risk accurately.
This can then result in uncertainty when pricing policies and setting coverage. AI itself relies heavily on vast amounts of data, raising concerns about data privacy and the potential for data breaches, which could result in significant liabilities.
In many cases, determining liability in AI-related incidents can be complex, especially when multiple parties are involved, such as the technology provider, the insurer and the insured.
On the flip side, there is also the potential for insurers to use AI themselves in their underwriting processes, subject to the aforementioned considerations; captives can leverage data analytics to better assess and predict risks, allowing for more precise underwriting and risk pricing.
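As a minimal sketch of what such analytics might look like, the following simulates annual claim frequency and severity for an AI-related exposure and derives a risk-loaded premium; the distributions, parameters and loading are illustrative assumptions, not any captive's actual pricing model.

```python
# Minimal frequency-severity pricing sketch for an AI-related exposure.
# All distributions and parameters below are hypothetical assumptions.
import numpy as np

rng = np.random.default_rng(seed=42)
n_years = 10_000  # simulated policy years

# Assumed annual claim counts (Poisson) and individual claim sizes (lognormal)
claim_counts = rng.poisson(lam=0.3, size=n_years)
annual_losses = np.array([
    rng.lognormal(mean=12.0, sigma=1.5, size=n).sum() for n in claim_counts
])

expected_loss = annual_losses.mean()
loss_1_in_100 = np.quantile(annual_losses, 0.99)  # tail-year loss

# Simple risk-loaded premium: expected loss plus a loading on tail risk
premium = expected_loss + 0.05 * (loss_1_in_100 - expected_loss)
print(f"Expected annual loss: {expected_loss:,.0f}")
print(f"1-in-100-year loss:   {loss_1_in_100:,.0f}")
print(f"Indicative premium:   {premium:,.0f}")
```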
To address concerns, insurers must adopt robust risk management practices, stay informed about AI developments, and work closely with legal and compliance teams to ensure that their underwriting processes align with regulatory requirements and ethical standards.
Additionally, ongoing monitoring and evaluation of AI systems are essential to identify and mitigate risks as they evolve.
It’s therefore important for insurers to have access to or employ experts with knowledge of AI technologies and the associated risks to underwrite policies effectively.
How a captive can support its parent organisation in managing and financing risk associated with AI
The uncertainties and, in some cases, fears around a technology that many claim could lead to unexpected and perhaps calamitous outcomes, together with the potential for gaps in cover through restricted capacity or unforeseen circumstances, have left risk managers with the task of ensuring their organisations are protected against potential damages resulting from AI.
Before considering insurance, risk managers need to identify, understand and mitigate this emerging risk in order to ensure they understand the residual risks that can then be insured.
The captive can function as a risk management centre of excellence, providing a number of risk management services such as analysing algorithmic systems, identifying potential vulnerabilities and structuring business continuity plans.
The captive may also be able to provide analysis that helps determine liability and causation in the claims process.
This capability could be built in-house but, given the highly technical and specialised nature of the work, it might be preferable to outsource it to third-party suppliers.
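One concrete form that 'analysing algorithmic systems' can take is monitoring a deployed model for data drift, a common precursor to erratic behaviour. The sketch below uses the population stability index (PSI), a standard drift metric; the data and thresholds are illustrative assumptions only.

```python
# Minimal sketch: population stability index (PSI), a common check for data
# drift in a deployed model. All data and thresholds here are hypothetical.
import numpy as np

def psi(baseline, live, bins=10):
    """Population stability index between baseline and live input samples."""
    edges = np.histogram_bin_edges(baseline, bins=bins)
    edges[0], edges[-1] = -np.inf, np.inf  # catch out-of-range live values
    b_pct = np.histogram(baseline, bins=edges)[0] / len(baseline)
    l_pct = np.histogram(live, bins=edges)[0] / len(live)
    b_pct = np.clip(b_pct, 1e-6, None)  # avoid log(0) on empty bins
    l_pct = np.clip(l_pct, 1e-6, None)
    return float(np.sum((l_pct - b_pct) * np.log(l_pct / b_pct)))

rng = np.random.default_rng(0)
training_inputs = rng.normal(0.0, 1.0, 10_000)  # data the model was built on
live_inputs = rng.normal(0.4, 1.2, 10_000)      # shifted production data

drift = psi(training_inputs, live_inputs)
print(f"PSI: {drift:.3f}")  # rule of thumb: > 0.25 signals material drift
```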
In this way, captives can work with their parent companies to implement risk mitigation strategies, such as improving data security measures, enhancing AI algorithms through systematic risk management, and ensuring compliance with relevant laws and regulations.
This approach enables the captive to establish itself as the ‘go-to’ central repository for education and training on AI challenges. The captive can then act as a central risk management authority for the organisation, ensuring that minimum risk management standards are adhered to and potentially providing enhanced coverage to those business units that can demonstrate best-in-class risk management controls.
Once the risks are understood, a captive can help an organisation establish a long-term risk management strategy for AI, helping its parent company stay ahead of emerging risks in the rapidly evolving AI landscape and empowering the business to innovate with confidence, in the knowledge that a robust framework is in place to address the challenges this fast-emerging technology is creating.
The captive can then turn to matters of risk transfer, and insurance can be put in place to address the residual risk.
A big part of the value captives can provide is insurance coverage tailored to an individual organisation’s needs, protecting against financial losses from algorithmic errors, system failures and the unintended consequences of AI decision-making.
Aside from providing primary coverage layers and filling gaps in programmes at levels where capacity is hard to find, a captive can structure coverage to pay costs associated with mitigation, data recovery, system repair and legal liability; access reinsurance or other sources of capital to further spread the risk; and potentially reduce the cost of risk through arbitrage opportunities.
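As a simple illustration of that layering arithmetic, the sketch below allocates a hypothetical loss across a programme in which the captive writes the primary layer and fills a mid-programme gap where market capacity is thin; every attachment point, limit and loss figure is invented for illustration.

```python
# Minimal sketch: allocating a loss across an insurance programme in which
# the captive writes the primary layer and fills a gap in the mid-layers.
# All attachment points, limits and the loss amount are hypothetical.

def layer_recovery(loss, attachment, limit):
    """Amount a layer pays: the loss excess of its attachment, capped at limit."""
    return max(0.0, min(loss - attachment, limit))

# Programme structure: (carrier, attachment point, layer limit), in millions
programme = [
    ("Captive (primary)",     0.0,  5.0),
    ("Market layer",          5.0, 20.0),
    ("Captive (gap fill)",   25.0, 10.0),
    ("Market excess layer",  35.0, 25.0),
]

loss = 32.0  # hypothetical AI-related loss, in millions
for carrier, attach, limit in programme:
    paid = layer_recovery(loss, attach, limit)
    print(f"{carrier:<22} pays {paid:5.1f}m")
# Here the captive pays 5.0m primary plus 7.0m in the gap layer;
# the market layer pays its full 20.0m and the excess layer is not reached.
```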
Once cover has been established, captives must ensure that, in the event of an AI-related incident, they can efficiently manage claims and provide financial support to their parent groups to cover losses and liabilities.
This can be done in-house, via a third-party claim adjuster or through a fronting carrier.
Working with a fronting carrier has the advantage that much of the legal and compliance work of policy issuance, whether on a local basis or via group covers, is also outsourced, and the captive can take advantage of existing infrastructure so as not to incur additional administration and resource costs.
As with any emerging risk, early engagement with any prospective fronter is key to designing a solution that ensures the interests of all parties are considered.
While a captive can play a valuable role in insuring AI risks for its parent company, it should carefully assess its risk exposure, stay updated on the rapidly evolving AI landscape, and continuously adapt its coverage and risk management strategies to address emerging risks effectively.
Seeking input from AI specialists and legal experts is advisable when navigating the complexities of AI insurance.
However, with the proper controls and coverage in place, organisations can be more comfortable enlisting AI to help structure innovative systems and better serve their customers while making their own operations more efficient and resilient.