Insurers End Silent Coverage for Artificial Intelligence Risks

The rise of artificial intelligence (AI) is reshaping risk landscapes across industries, prompting insurers to reconsider how they cover AI-related exposures. Traditionally, many AI risks have been included implicitly under broader cyber, liability, or professional indemnity policies—a practice known as “silent AI” coverage. However, as the technology becomes more pervasive and complex, this approach is increasingly viewed as inadequate and potentially hazardous.

A recent WTW insight, titled Insurance in the AI Age and authored by Dr. Anat Lior and Sonal Madhok, notes that implicit coverage mirrors the early days of cyber insurance, when emerging cyber risks were absorbed under traditional policies before dedicated cyber products were developed. Similarly, AI losses that do not clearly fit existing policy definitions can leave significant gaps, creating uncertainty for both insurers and policyholders.

To address this, insurers are moving towards explicitly defined AI coverage. The shift is reflected in the introduction of AI-specific endorsements and exclusions, as well as the emergence of standalone AI insurance products, tailored particularly to small and medium-sized enterprises. Larger technology firms, by contrast, frequently opt for self-insurance given the scale and sophistication of their AI operations.

Despite these developments, many AI risks can still be mapped to traditional insurance lines, though limitations persist. For instance, standard cyber policies may not cover losses arising from a company’s own data, and general liability policies typically exclude purely financial losses. Policy reviews at renewal have therefore become critical, particularly as insurers tighten terms around autonomous decision-making, algorithmic errors, and other AI-specific exposures.

Underwriting practices are evolving in parallel. Insurers now ask more detailed questions regarding AI governance, human oversight, and internal controls. Preference is given to systems with a “human-in-the-loop” for high-impact decisions, reflecting an emphasis on responsible AI deployment. Regulatory developments, including the EU AI Act, are also expected to influence liability exposure and shape future insurance requirements.

Dr. Lior emphasises that clearer policy language, strengthened governance frameworks, and enhanced underwriting data will ultimately reduce uncertainty. Such measures are anticipated to enable the insurance industry to support safer and more responsible AI adoption, helping organisations mitigate emerging risks while fostering innovation across sectors.
