SAN DIEGO – Emerging risks associated with corporate use of artificial intelligence can be quantified and transferred to insurers through existing policies, new AI policies and captives, a panel of experts said.
Generative AI is increasingly being used by a variety of organizations for tasks such as customer service, said Michael Berger, head of insurance AI at Palo Alto, California-based Munich Reinsurance Co.
A common weakness of the tools is the risk of AI “hallucinations,” where they present false or misleading information as true, he said during a session Wednesday at RiskWorld, the annual conference of the Risk & Insurance Management Society Inc.
To measure the risk, data sets of questions and the answers a model gives could be analyzed with an AI tool to determine hallucination rates, Mr. Berger said.
“If we’re using similar models for similar use cases, the error rates of those models may be correlated,” he said.
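As a rough illustration of the kind of measurement described above, the sketch below estimates a hallucination rate from a labeled set of model answers and attaches a simple confidence interval. The data format, field names and labeling approach are illustrative assumptions for this example, not a description of Munich Re's actual methodology.

```python
import math


def hallucination_rate(labeled_answers):
    """Return the observed hallucination rate and a 95% confidence interval.

    `labeled_answers` is a list of dicts such as
    {"question": ..., "answer": ..., "is_hallucination": True/False},
    where the label comes from human or automated fact-checking.
    """
    n = len(labeled_answers)
    if n == 0:
        raise ValueError("no labeled answers provided")
    k = sum(1 for a in labeled_answers if a["is_hallucination"])
    p = k / n
    # Normal-approximation (Wald) interval; adequate only as a rough estimate.
    margin = 1.96 * math.sqrt(p * (1 - p) / n)
    return p, (max(0.0, p - margin), min(1.0, p + margin))


if __name__ == "__main__":
    # Hypothetical labeled sample for demonstration purposes.
    sample = [
        {"question": "Q1", "answer": "A1", "is_hallucination": False},
        {"question": "Q2", "answer": "A2", "is_hallucination": True},
        {"question": "Q3", "answer": "A3", "is_hallucination": False},
        {"question": "Q4", "answer": "A4", "is_hallucination": False},
    ]
    rate, (lo, hi) = hallucination_rate(sample)
    print(f"Estimated hallucination rate: {rate:.2%} (95% CI {lo:.2%}-{hi:.2%})")
```

In practice, an insurer would run this kind of estimate across many models and use cases; as Mr. Berger noted, error rates of similar models used for similar tasks may be correlated, which matters when aggregating the exposure.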
Once the risks are quantified, they can often be transferred through existing policies, Mr. Berger said. These policies range from property policies, where AI failure could result in property damage, to technology errors and omissions policies.
In addition, special AI insurance policies are being developed by companies including Munich Re to address risks that traditional insurance does not cover.
Joe Rosenberger, chief captive analyst at the North Carolina Insurance Department in Raleigh, said companies may also consider using captives to cover AI risks.
“Since captive insurance is self-insured, you are really able to personalize the policies,” he said.
Captives can also issue policies with varying terms to cover AI risks, Mr. Rosenberger said.