Data security, model reliability and intellectual property are key issues, with insurers among the companies under threat
Artificial intelligence is becoming deeply embedded in the operations of S&P 500 companies, including insurers, streamlining everything from underwriting and claims processing to customer service. However, as adoption accelerates, so do the associated risks, particularly around data security, model reliability and intellectual property.
A new report from Cybernews revealed that 65% of S&P 500 firms, including major players in the insurance sector, have integrated AI tools into their workflows. Researchers flagged 970 potential AI-related security issues across 327 companies, with 158 of those linked to financial services and insurance firms.
For insurers, the most pressing concern is data leakage. Of the 146 instances documented across all sectors, 35 involved financial and insurance companies. These often stemmed from AI systems inadvertently exposing sensitive customer information through poorly secured interfaces or prompt injection attacks.
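A minimal Python sketch of one such safeguard appears below: scrubbing model output for common PII patterns before it leaves a customer-facing interface. The pattern set and the scrub_response function are illustrative assumptions, not tooling described in the Cybernews report.

```python
import re

# Illustrative sketch only: a simple output filter that redacts common PII
# patterns before an AI response is returned through a customer-facing
# interface. The pattern set and function name are hypothetical.

PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def scrub_response(text: str) -> str:
    """Redact anything matching a known PII pattern from model output."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label.upper()}]", text)
    return text

if __name__ == "__main__":
    raw = "Your adjuster is jane.doe@example.com; SSN on file: 123-45-6789."
    print(scrub_response(raw))
    # Your adjuster is [REDACTED EMAIL]; SSN on file: [REDACTED SSN].
```

A filter like this is a last line of defense, not a fix for prompt injection itself; it limits what a compromised or manipulated model can leak through an interface.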
“Insurers manage large volumes of personally identifiable information and financial data. A single AI system error can lead to a breach that triggers regulatory scrutiny, reputational damage, and customer distrust,” said Žilvinas Girėnas, head of product at nexos.ai.
Algorithmic bias is also emerging as a key issue. Cybernews identified 22 bias-related cases in insurance and financial firms, raising concerns about fairness in automated underwriting and pricing models. If models are trained on biased or incomplete data, they can replicate past discriminatory practices, resulting in skewed premium calculations or claim denials.
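To make the concern concrete, the sketch below runs a basic disparate-impact check on automated underwriting decisions. The group labels, sample data, and the 0.8 threshold (the common "four-fifths rule") are assumptions for illustration, not figures from the report.

```python
from collections import defaultdict

# Illustrative sketch only: a basic disparate-impact check on automated
# underwriting decisions. Group labels, the sample data, and the 0.8
# threshold (the common "four-fifths rule") are assumptions.

def approval_rates(decisions: list[tuple[str, bool]]) -> dict[str, float]:
    """Map (group, approved) pairs to an approval rate per group."""
    totals: dict[str, int] = defaultdict(int)
    approved: dict[str, int] = defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += ok
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact(rates: dict[str, float]) -> float:
    """Ratio of the lowest group approval rate to the highest."""
    return min(rates.values()) / max(rates.values())

if __name__ == "__main__":
    sample = ([("A", True)] * 80 + [("A", False)] * 20 +
              [("B", True)] * 55 + [("B", False)] * 45)
    rates = approval_rates(sample)
    ratio = disparate_impact(rates)  # 0.55 / 0.80 = 0.6875
    print(rates, ratio, "flag for review" if ratio < 0.8 else "ok")
```

In this toy data, group B's approval rate is under four-fifths of group A's, the kind of skew that would warrant investigating whether the training data encoded past discriminatory practices.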
Insecure AI output is a broader concern across the sector, accounting for 32 flagged cases. These include systems giving flawed advice to policyholders, misclassifying risk, or generating inaccurate claims recommendations. Without human oversight, such errors can harm both operational performance and customer outcomes.
Beyond customer-facing systems, insurers increasingly use AI to automate internal decision-making, fraud detection, and actuarial analysis. These applications make companies more efficient but also introduce new vulnerabilities. Intellectual property theft - flagged in 119 total cases - poses a risk to firms that have developed proprietary pricing algorithms or predictive risk models.
“The same tools that help insurers assess and manage risk are now introducing their own,” said Martynas Vareikis, a Cybernews security researcher. “If the models are compromised or exposed, it’s not just a technical issue - it becomes a core business risk.”
Experts urge insurers to apply strict access controls to AI models, classify data used in training and inference, and validate outputs before acting on them. Encryption, adversarial testing, and third-party risk assessments are also recommended, particularly when using external AI tools.
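As a rough illustration of the "validate outputs before acting" advice, the Python sketch below gates an AI claims recommendation behind basic checks and a confidence threshold. The field names, threshold, and routing labels are hypothetical, not drawn from any insurer's system.

```python
from dataclasses import dataclass

# Illustrative sketch only: gating an AI claims recommendation behind basic
# validation before any automated action. Field names, the confidence
# threshold, and the routing labels are hypothetical.

VALID_DECISIONS = {"approve", "deny", "refer"}
AUTO_ACTION_CONFIDENCE = 0.95  # assumed threshold; tune to risk appetite

@dataclass
class ClaimRecommendation:
    claim_id: str
    decision: str      # expected: "approve", "deny", or "refer"
    confidence: float  # model-reported confidence in [0, 1]

def validate_before_acting(rec: ClaimRecommendation) -> str:
    """Return 'auto' only when the output passes every check; else escalate."""
    if rec.decision not in VALID_DECISIONS:
        return "escalate"      # malformed output is never acted on
    if not 0.0 <= rec.confidence <= 1.0:
        return "escalate"      # implausible confidence score
    if rec.decision == "deny":
        return "human_review"  # denials always get human oversight
    if rec.confidence < AUTO_ACTION_CONFIDENCE:
        return "human_review"  # low confidence routes to an adjuster
    return "auto"

if __name__ == "__main__":
    print(validate_before_acting(ClaimRecommendation("CLM-001", "approve", 0.97)))  # auto
    print(validate_before_acting(ClaimRecommendation("CLM-002", "deny", 0.99)))     # human_review
```

The point of a gate like this is that no model output, however confident, reaches a customer-affecting action without passing explicit checks.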
As regulatory attention on AI governance grows, especially in sectors like insurance, the cost of inaction could be steep. “Security can’t be bolted on later,” said Girėnas. “For insurers, the challenge is clear: embed safeguards from the start, or risk exposure that goes well beyond a single breach.”