February 3, 2026
AI Governance for Insurers: Navigating Risk, Regulation, and Frameworks
As insurers experiment with and adopt artificial intelligence (AI)-supported processes, they need to ensure a robust governance program is in place. The key question is, “How do you build a governance program that supports innovation and adoption while ensuring appropriate safeguards are implemented and documented to support regulatory compliance?” The goal is a governance program that supports trust in AI systems based on reliable evidence.
Regulatory Environment for AI Usage
The current regulatory environment for insurers’ use of AI in the United States is driven by state insurance regulators and the National Association of Insurance Commissioners (NAIC); there is no overarching federal law. The NAIC adopted the Model Bulletin on the Use of Artificial Intelligence Systems by Insurers (the AI Model Bulletin) in December 2023. Unlike a Model Law, which creates new, binding requirements, a Model Bulletin is essentially a regulatory memorandum advising insurers on how to comply with existing state laws (such as non-discrimination and unfair trade practices laws) when using new technologies or practices like AI, and it sets forth the state regulator’s expectations for responsible conduct. The NAIC took this approach to promote speed and flexibility and to leverage existing laws. The AI Model Bulletin reminds insurers that their AI systems must comply with all applicable state laws, and it focuses on governance, risk management and documentation to demonstrate compliance with existing standards when using AI systems.
As of October 31, 2025, the AI Model Bulletin has been adopted by 24 jurisdictions, and insurance-specific AI regulation or guidance has been adopted in four others. The AI Model Bulletin states that, “Decisions subject to regulatory oversight that are made by Insurers using AI Systems must comply with the legal and regulatory standards that apply to those decisions, including unfair trade practice laws.” It calls for the development, implementation and maintenance of a written Artificial Intelligence Systems (AIS) Program for the responsible use of AI systems that make or support decisions related to regulated insurance practices. The AIS Program should address the full insurance lifecycle, including product development and design, marketing, use, underwriting, rating and pricing, case management, claim administration and payment, and fraud detection.
AI Risks for Insurance Companies
AI-related regulations and guidance are intended to protect consumers from the risks posed by complex AI systems and to ensure fairness, transparency and accountability across the insurance lifecycle, from marketing and underwriting to claims processing and fraud detection:
- Fairness: Model bias is the most prominent driver of AI regulation in insurance. Models are trained on historical data that may reflect past biases, and an AI model may learn and amplify that bias, leading to unfair or illegal discrimination against protected classes in pricing, risk assessment or policy approval/denial (see the disparate impact sketch following this list).
- Transparency: Insurers must be able to explain decisions that adversely affect customers, such as a policy or claim denial. Advanced AI models may operate as black boxes, making it difficult to explain or trace the logic and variables that contributed to a decision.
- Accountability: Insurers are responsible for AI system outcomes and impacts, so roles and responsibilities must be clearly defined. Data governance is critical to ensure data quality and integrity, and Personally Identifiable Information (PII) must be protected from cyberattacks and data breaches. Insurers also retain accountability for the security and bias of tools provided by a third party. Many insurers rely on a “human in the loop” to ensure control in high-risk situations.
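To make the fairness risk concrete, the sketch below shows one common way to monitor model decisions for disparate impact: compare favorable-outcome rates across groups and flag any group whose rate falls well below the best-treated group’s. The column names, sample data and 0.8 threshold (the familiar “four-fifths” rule of thumb) are illustrative assumptions, not a regulatory standard for insurance.

```python
# A minimal disparate impact check on model decisions (illustrative only).
import pandas as pd

def disparate_impact_ratio(df: pd.DataFrame, outcome: str, group: str) -> pd.Series:
    """Favorable-outcome rate of each group divided by the best-treated group's rate."""
    rates = df.groupby(group)[outcome].mean()
    return rates / rates.max()

# Hypothetical decision log: 1 = policy approved, 0 = denied.
decisions = pd.DataFrame({
    "approved": [1, 0, 1, 1, 0, 1, 0, 0],
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
})

ratios = disparate_impact_ratio(decisions, "approved", "group")
flagged = ratios[ratios < 0.8]  # assumed "four-fifths" review threshold
print(ratios)
print("Groups needing review:", list(flagged.index))
```

A check like this does not prove or disprove unlawful discrimination, but it generates the kind of documented, repeatable evidence that a governance program can escalate for actuarial and legal review.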
Additional risks include:
- Data Poisoning: Evaluate whether malicious data could be injected into the data used to train the model, leading to biased or harmful outputs.
- Prompt Injection: Assess the risk of attackers manipulating prompts to elicit unintended responses from the large language model (LLM).
- Insecure Output Handling: Analyze whether LLM output is properly validated and sanitized before being presented to users (see the sanitization sketch following this list).
- Privacy Concerns: Identify potential risks of sensitive information leaking through the LLM’s inputs or outputs.
- Model Drift: Evaluate the model for deterioration in performance as conditions or data patterns shift after deployment, which can make it less effective or accurate over time (see the drift-monitoring sketch following this list).
- Over-reliance on Automation: Assess whether insufficient human oversight could allow significant errors to go uncaught.
- Supply Chain Risks: Verify the security and compliance of systems licensed from third-party providers, since insurers remain responsible for them.
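As a concrete illustration of output handling, the following sketch escapes LLM output before it is rendered in a web page and masks one common PII pattern before text is stored or logged. The function names and the single Social Security number regex are illustrative assumptions; a production control would cover many more patterns and output channels.

```python
# A minimal sketch of validating and sanitizing LLM output (illustrative only).
import html
import re

# Hypothetical PII pattern: U.S. Social Security numbers (e.g., 123-45-6789).
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def redact_pii(text: str) -> str:
    """Mask SSNs before the text is stored, logged or displayed."""
    return SSN_PATTERN.sub("[REDACTED-SSN]", text)

def sanitize_llm_output(raw: str) -> str:
    """Escape markup so model output cannot inject HTML or scripts into a page."""
    return html.escape(raw)

raw_output = 'Claimant SSN 123-45-6789. <script>alert("x")</script>'
safe = sanitize_llm_output(redact_pii(raw_output))
print(safe)
# Claimant SSN [REDACTED-SSN]. &lt;script&gt;alert(&quot;x&quot;)&lt;/script&gt;
```

For model drift, one widely used monitoring technique in insurance and credit modeling is the population stability index (PSI), which compares a model’s current score distribution to its distribution at deployment. The sketch below is a minimal illustration; the synthetic score data and the 0.2 alert threshold are assumptions, not a standard.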
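```python
# A minimal PSI-based drift monitor (illustrative only).
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """Population stability index between a baseline sample and a current sample."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    # Clip so current values outside the baseline range land in the outer bins.
    actual = np.clip(actual, edges[0], edges[-1])
    exp_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    act_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Guard against log(0) for empty bins.
    exp_pct = np.clip(exp_pct, 1e-6, None)
    act_pct = np.clip(act_pct, 1e-6, None)
    return float(np.sum((act_pct - exp_pct) * np.log(act_pct / exp_pct)))

rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, 10_000)  # model scores at deployment
current = rng.normal(0.8, 1.3, 10_000)   # scores observed months later
score = psi(baseline, current)
print(f"PSI = {score:.3f}")
if score > 0.2:  # assumed review threshold; practices vary
    print("Significant drift detected: escalate for model review.")
```

Running a check like this on a schedule, and documenting the results and any escalations, is one way to turn the drift risk above into auditable evidence of ongoing monitoring.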
Building an AI Framework
Recognized frameworks provide a structured, credible, and flexible approach to developing an AI governance program. The National Institute of Standards and Technology (NIST) published the final version of its AI Risk Management Framework (AI RMF 1.0) on January 26, 2023. This voluntary framework provides guidance for organizations to manage risks in developing and deploying trustworthy AI systems. ISO/IEC 42001:2023, published in December 2023, is the first international standard for Artificial Intelligence Management Systems (AIMS), providing a framework for organizations to manage AI responsibly, ethically, and securely.
These two frameworks may be used in tandem:
- The NIST AI RMF provides guidance on overall risk management, lifecycle management and data quality management appropriate to the specific AI use cases within scope. The framework serves as a proven roadmap for building an AI governance program that is structured, comprehensive, trustworthy, and scalable, allowing your organization to innovate with AI while responsibly managing its complex ethical and operational risks.
- ISO 42001 specifies the requirements and provides guidance for establishing, implementing, maintaining and continually improving an AI governance program within the context of an organization. The framework provides control objectives and detailed criteria that enable an organization to generate evidence of its responsibility and accountability regarding its role with respect to AI systems.
For an insurance company, starting with the NIST AI RMF can help rapidly embed risk practices into the underwriting and claims teams. Then, the organization can use ISO 42001 to formalize those practices into an auditable AIMS, demonstrating to regulators (such as the New York State Department of Financial Services (NYDFS)) and global partners that its AI governance program is robust, structured, and committed to continuous improvement.
How Johnson Lambert Helps Insurers
As insurance carriers accelerate the adoption of AI, the line between innovation and risk becomes increasingly fine. Insurers are now expected to demonstrate verifiable evidence that their AI systems are fair, transparent, and secure.
Building an AI governance program that satisfies these evolving standards, while still allowing for the speed necessary to compete, is no small feat. It requires moving beyond ad hoc controls to a systematic framework where risk management is embedded into every stage of the AI lifecycle. By leveraging established standards like the NIST AI RMF and ISO 42001, insurers can transform compliance into a strategic advantage, proving to regulators and policyholders alike that their technology is trustworthy.
Whether you are in the early stages of designing an AI governance program or need to audit an existing program for regulatory alignment, our team brings the specialized knowledge required to manage algorithmic risk without stifling innovation. We assist organizations in:
- Governance Design and Implementation: Building scalable frameworks aligned with the NAIC Model Bulletin, NIST AI RMF, and ISO 42001
- AI Risk Assessments: Identifying vulnerabilities in model fairness, data integrity, and third-party dependencies
- AI Audit: Providing independent compliance attestation of your AI controls to satisfy board and regulatory requirements
Don’t let regulatory uncertainty slow your modernization. Connect with the Johnson Lambert team today to optimize your AI governance strategy. Speak to a specialist.