
NOTICES

INSURANCE DEPARTMENT

Use of Artificial Intelligence Systems by Insurers; Notice 2024-04

[54 Pa.B. 1910]
[Saturday, April 6, 2024]

 The Insurance Department (Department) reminds all insurers that hold certificates of authority or are otherwise authorized to engage in the business of insurance in this Commonwealth that decisions or actions impacting consumers that are made or supported by advanced analytical and computational technologies, including Artificial Intelligence (AI) systems (as defined in Section 2), must comply with all applicable insurance laws and regulations, including those that address unfair trade practices and unfair discrimination. This notice sets forth the Department's expectations as to how insurers will govern the development, acquisition and use of certain AI technologies, including the AI systems described herein. This notice also advises insurers of the type of information and documentation that the Department may request during an investigation or examination of any insurer regarding its use of such technologies and AI systems.

SECTION 1: INTRODUCTION, BACKGROUND AND LEGISLATIVE AUTHORITY

Background

 AI is transforming the insurance industry. AI techniques are deployed across all stages of the insurance life cycle, including product development, marketing, sales and distribution, underwriting and pricing, policy servicing, claim management and fraud detection.

 AI may facilitate the development of innovative products, improve consumer interface and service, simplify and automate processes, and promote efficiency and accuracy. However, AI, including AI systems, can present unique risks to consumers, including the potential for inaccuracy, unfair discrimination, data vulnerability, and lack of transparency and explainability. Insurers should take actions to minimize these risks.

 The Department encourages the development and use of innovation and AI systems that contribute to safe and stable insurance markets. However, the Department expects that decisions made and actions taken by insurers using AI systems will comply with all applicable Federal and State laws and regulations.

 The Department recognizes the Principles of Artificial Intelligence that the National Association of Insurance Commissioners (NAIC) adopted in 2020 as an appropriate source of guidance for insurers as they develop and use AI systems. Those principles emphasize the importance of the fairness and ethical use of AI; accountability; compliance with state laws and regulations; transparency; and safe, secure, fair, and robust systems and processes. These fundamental principles should guide insurers in their development and use of AI systems and underlie the expectations set forth in this notice.

Legislative Authority

 An insurer's use of AI is subject to several existing Commonwealth laws and regulations, including but not limited to:

 • The Unfair Insurance Practices Act (40 P.S. §§ 1171.1—1171.15) and the Unfair Claims Settlement Practices (UCSP) Regulations, Chapter 146 of Title 31, Pennsylvania Code Subchapter A (31 Pa. Code §§ 146.1—146.10). The Unfair Insurance Practices Act (UIPA) regulates trade practices in the business of insurance by defining practices that constitute unfair methods of competition or unfair or deceptive acts and practices and prohibiting the trade practices so defined or determined. The UCSP regulations set forth standards for the investigation and disposition of claims arising under policies or certificates of insurance issued to residents in this Commonwealth. Actions taken by insurers in this Commonwealth must not violate the UIPA or the UCSP regulations, regardless of the methods the insurer used to determine or support its actions. As discussed in Section 3, insurers are expected to adopt practices, including governance frameworks and risk management protocols, that are designed to ensure that the use of AI systems does not result in: 1) unfair trade practices, as defined in the UIPA; or 2) unfair claims settlement practices, as defined in the UCSP regulations.

 • Corporate Governance Annual Disclosure (CGAD) Requirements, Chapter 39 of Title 40 of the Pennsylvania Consolidated Statutes (40 Pa.C.S. §§ 3901—3911). This Chapter requires insurers to report on governance practices and to provide a summary of the insurer's corporate governance structure, policies and practices. The content, form and filing requirements for CGAD information are set forth in Chapter 39. The CGAD requirements are applicable to the elements of the insurer's corporate governance framework that address the insurer's use of AI systems to support actions and decisions that impact consumers.

 • The Casualty and Surety Rate Regulatory Act (40 P.S. §§ 1181—1199), the Fire, Marine and Inland Marine Rate Regulatory Act (40 P.S. §§ 1221—1238), Article V-A of The Insurance Company Law of 1921—the Property and Casualty Filing Reform Act (40 P.S. §§ 710-1—710-19), Article VII of the Workers' Compensation Act (77 P.S. §§ 1035.1—1035.22) and Article VII of The Insurance Company Law of 1921 (regarding Title Insurance Companies; 40 P.S. §§ 910-1—910-55). These acts require that property/casualty (P/C) insurance rates not be excessive, inadequate or unfairly discriminatory. The requirements of these acts apply regardless of the methodology that the insurer used to develop rates, rating rules and rating plans subject to those provisions. That means that an insurer is responsible for assuring that rates, rating rules and rating plans that are developed using AI techniques and predictive models that rely on data and machine learning do not result in excessive, inadequate or unfairly discriminatory insurance rates with respect to all forms of casualty insurance—including fidelity, surety and guaranty bond—and to all forms of property insurance—including fire, marine and inland marine insurance, and any combination of any of the foregoing.

 • Article IX of The Insurance Department Act of 1921—Examinations (40 P.S. §§ 323.1—323.8). Article IX establishes an effective and efficient system for examining the activities, operations, financial condition and affairs of all persons transacting the business of insurance in this Commonwealth. An insurer's conduct in this Commonwealth, including its use of AI systems to make or support actions and decisions that impact consumers, is subject to investigation, examination and market analysis. Section 4 of this notice provides guidance on the kinds of information and documents that the Department may request in the context of an AI-focused investigation, including a market conduct action.

SECTION 2: DEFINITIONS

 The following terms are defined for purposes of this notice:

''AI system.'' A machine-based system or set of processes that can, for a given set of objectives, generate outputs such as predictions, recommendations, content (such as text, images, videos or sounds), or other output influencing decisions made in real or virtual environments. AI systems are designed to operate with varying levels of autonomy.

''Adverse consumer outcome.'' A decision by an insurer that is subject to insurance regulatory standards enforced by the Department that adversely impacts the consumer in a manner that violates those standards.

''Algorithm.'' A clearly specified mathematical process for computation or a set of rules that, if followed, will give a prescribed result.

''Artificial intelligence (AI).'' A branch of computer science that uses data processing systems to perform functions normally associated with human intelligence, such as reasoning, learning and self-improvement; the term also refers to the capability of a device to perform such functions. This definition treats machine learning as a subset of artificial intelligence.

''Degree of potential harm to consumers.'' The severity of adverse economic impact that a consumer might experience as a result of an adverse consumer outcome.

''Generative artificial intelligence (Generative AI).'' A class of AI systems that generate content in the form of data, text, images, sounds or video that is similar to, but not a direct copy of, pre-existing data or content.

''Insurer.'' A risk-bearing entity acting under authority of the Department, including an insurance company, association, exchange, interinsurance exchange, health maintenance organization, preferred provider organization, professional health services plan corporation subject to Chapter 63 of Title 40 of the Pennsylvania Consolidated Statutes (regarding professional health services plan corporations; 40 Pa.C.S. §§ 6301—6335), hospital plan corporation subject to Chapter 61 of Title 40 of the Pennsylvania Consolidated Statutes (regarding hospital plan corporations; 40 Pa.C.S. §§ 6101—6127), fraternal benefit society, beneficial association, Lloyd's insurer or health plan corporation.

''Machine Learning (ML).'' A field within artificial intelligence that focuses on the ability of computers to learn from provided data without being explicitly programmed.

''Model Drift.'' The decay of a model's performance over time, arising from underlying changes between the data used to train the model and the data on which it is deployed, such as changes in data definitions, distributions or statistical properties. (A minimal illustrative sketch of one way to measure drift appears after these definitions.)

''Predictive Model.'' The mining of historic data using algorithms or machine learning, or both, to identify patterns and predict outcomes that can be used to make or support the making of decisions.

''Third Party.'' An organization other than the insurer that provides services, data or other resources related to AI.
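
 To illustrate the ''Model Drift'' definition above, the following minimal sketch compares the distribution of a model input or score at training time with its distribution after deployment, using the population stability index (PSI). The code, data and cut-offs are hypothetical illustrations only; neither the method nor the thresholds are prescribed by the Department.

import numpy as np

def population_stability_index(train_values, live_values, bins=10):
    """Compare the distribution of one model input (or score) at
    training time against its distribution after deployment.

    Common heuristic cut-offs: PSI < 0.1 suggests little shift,
    0.1-0.2 moderate shift, > 0.2 enough shift to warrant review.
    These cut-offs are industry convention, not regulatory standards.
    """
    # Bin edges come from the training data so both samples are
    # measured against the same reference grid; live values outside
    # the grid are ignored in this simple sketch.
    edges = np.histogram_bin_edges(train_values, bins=bins)
    train_counts, _ = np.histogram(train_values, bins=edges)
    live_counts, _ = np.histogram(live_values, bins=edges)
    eps = 1e-6  # avoids division by zero and log(0) in empty bins
    p_train = train_counts / train_counts.sum() + eps
    p_live = live_counts / live_counts.sum() + eps
    return float(np.sum((p_live - p_train) * np.log(p_live / p_train)))

# Hypothetical data: a model score whose distribution has shifted
# between training and production.
rng = np.random.default_rng(0)
train_scores = rng.normal(0.40, 0.10, size=10_000)
live_scores = rng.normal(0.50, 0.10, size=10_000)
print(f"PSI = {population_stability_index(train_scores, live_scores):.3f}")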

SECTION 3: REGULATORY GUIDANCE AND EXPECTATIONS

 Decisions or determinations subject to regulatory oversight that are made by insurers using AI systems must comply with the legal and regulatory standards that apply to those decisions or determinations, including those laws and requirements as outlined in Section 1. These standards require, at a minimum, that decisions made by insurers are not inaccurate, arbitrary, capricious or unfairly discriminatory. Compliance with these standards is required regardless of the tools and methods insurers use to make such decisions. However, because, in the absence of proper controls, AI has the potential to increase the risk of inaccurate, arbitrary, capricious or unfairly discriminatory outcomes for consumers, it is important that insurers adopt and implement controls specifically related to their use of AI that are designed to mitigate the risk of adverse consumer outcomes.

 Consistent therewith, all insurers authorized to do business in this Commonwealth that use AI systems are expected to develop, implement and maintain a written program (an ''AIS program'') for the responsible use of AI systems that make or support decisions related to regulated insurance practices. The AIS program should be designed to mitigate the risk of adverse consumer outcomes, including, at a minimum, to maintain compliance with the statutory and regulatory provisions set forth in Section 1 of this notice.

 The Department recognizes that robust governance, risk management controls and internal audit functions play a core role in mitigating the risk that decisions driven by AI systems will violate unfair trade practice laws and other applicable existing legal standards. The Department also encourages the development and use of verification and testing methods to identify errors and bias in predictive models and AI systems, as well as the potential for unfair discrimination in the decisions and outcomes resulting from the use of predictive models and AI systems.

 The controls and processes that an insurer adopts and implements as part of its AIS program should be reflective of, and commensurate with, the insurer's own assessment of the degree and nature of risk posed to consumers by the AI systems that it uses, considering: (i) the nature of the decisions being made, informed or supported using the AI system; (ii) the type and degree of potential harm to consumers resulting from the use of AI systems; (iii) the extent to which humans are involved in the final decision-making process; (iv) the transparency and explainability of outcomes to the impacted consumer; and (v) the extent and scope of the insurer's use or reliance on data, predictive models and AI systems from third parties. Similarly, controls and processes should be commensurate with both the risk of adverse consumer outcomes and the degree of potential harm to consumers.

 As discussed in Section 4, the decisions made as a result of an insurer's use of AI systems are subject to the Department's examination to determine whether the insurer's reliance on AI systems is compliant with all applicable existing statutory and regulatory standards governing the conduct of the insurer.

AIS Program Guidelines

 The Department suggests the following guidelines and best practices for the development and use of AIS programs. These guidelines are not intended to be binding upon insurers, nor are they intended to, in any way, restrict or limit the Department's discretion to evaluate an insurer's compliance with applicable laws or regulations. Likewise, the following guidelines do not constitute an exhaustive list of items that the Department will consider when determining compliance with the Commonwealth's existing regulatory and statutory requirements.

1.0. General Guidelines.

 1.1. The AIS program should be designed to mitigate the risk that the insurer's use of an AI system will result in adverse consumer outcomes.

 1.2. The AIS program should address governance, risk management controls and internal audit functions.

 1.3. The AIS program should vest responsibility for the development, implementation, monitoring and oversight of the AIS program and for setting the insurer's strategy for AI systems with senior management accountable to the board or an appropriate committee of the board.

 1.4. The AIS program should be tailored to and proportionate with the insurer's use and reliance on AI and AI systems. Controls and procedures should be focused on the mitigation of adverse consumer outcomes and the scope of the controls and procedures applicable to a given AI system use case should reflect and align with the degree of potential harm to consumers with respect to that use case.

 1.5. The AIS program may be independent of or part of the insurer's existing Enterprise Risk Management program. The AIS program may adopt, incorporate or rely upon, in whole or in part, a framework or standards developed by an official third-party standards organization, such as the National Institute of Standards and Technology (NIST) Artificial Intelligence Risk Management Framework, Version 1.0.

 1.6. The AIS program should identify and address the use of all AI systems across the insurance life cycle, including areas such as product development and design, marketing, use, underwriting, rating and pricing, case management, claim administration and payment, and fraud detection. (A minimal illustrative sketch of such an AI system inventory appears after guideline 1.9.)

 1.7. The AIS program should address all phases of an AI system's life cycle, including design, development, validation, implementation (both systems and business), use, on-going monitoring, updating and retirement.

 1.8. The AIS program should address the AI systems used with respect to regulated insurance practices whether developed by the insurer or embedded within an affiliate or third-party vendor process.

 1.9. The AIS program should include processes and procedures providing notice to impacted consumers that AI systems are in use and provide access to appropriate levels of information based on the phase of the insurance life cycle in which the AI systems are being used.
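
 As an illustration of the inventory contemplated by guidelines 1.6 through 1.8, the following minimal sketch shows one way an insurer might record life-cycle metadata for each AI system. The record structure and all field names are hypothetical, not a Department-prescribed schema.

from dataclasses import dataclass, field
from datetime import date
from typing import Optional

@dataclass
class AISystemRecord:
    """One entry in a hypothetical AI system inventory (guideline 1.6).

    Field names are illustrative only, not a prescribed schema.
    """
    name: str
    life_cycle_area: str               # e.g. "underwriting", "claim administration"
    phase: str                         # design, development, validation, use, retirement
    third_party_vendor: Optional[str]  # None if developed in-house (guideline 1.8)
    consumer_impacting: bool           # can outputs lead to adverse consumer outcomes?
    potential_harm: str                # insurer's own rating, e.g. "low"/"medium"/"high"
    last_validated: Optional[date] = None
    notes: list = field(default_factory=list)

inventory = [
    AISystemRecord(
        name="claims-triage-model-v2",
        life_cycle_area="claim administration and payment",
        phase="use",
        third_party_vendor=None,
        consumer_impacting=True,
        potential_harm="high",
        last_validated=date(2024, 1, 15),
    ),
]

# Under guideline 1.4, records flagged as consumer-impacting with high
# potential harm would warrant the most extensive controls and review.
for record in inventory:
    if record.consumer_impacting and record.potential_harm == "high":
        print(f"review priority: {record.name}")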

2.0. Governance.

 The AIS program should include a governance framework for the oversight of AI systems used by the insurer. Governance should prioritize transparency, fairness and accountability in the design and implementation of the AI systems, recognizing that proprietary and trade secret information must be protected. An insurer may consider adopting new internal governance structures or rely on the insurer's existing governance structures; however, in developing its governance framework, the insurer should consider addressing the following items:

 2.1. The policies, processes and procedures, including risk management and internal controls, to be followed at each stage of an AI system life cycle, from proposed development to retirement.

 2.2. The requirements adopted by the insurer to document compliance with the AIS program policies, processes, procedures and standards. Documentation requirements should be developed with Section 4 in mind.

 2.3. The insurer's internal AI system governance accountability structure, such as:

 a) The formation of centralized, federated or otherwise constituted committees comprised of representatives from appropriate disciplines and units within the insurer, such as business units, product specialists, actuarial, data science and analytics, underwriting, claims, compliance and legal.

 b) Scope of responsibility and authority, chains of command and decisional hierarchies.

 c) The independence of decision-makers and lines of defense at successive stages of the AI system life cycle.

 d) Monitoring, auditing, escalation, and reporting protocols and requirements.

 e) Development and implementation of ongoing training and supervision of personnel.

 2.4. Specifically with respect to predictive models: the insurer's processes and procedures for designing, developing, verifying, deploying, using, updating and monitoring predictive models, including a description of methods used to detect and address errors, performance issues, outliers or unfair discrimination in the insurance practices resulting from the use of the predictive model.
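
 To illustrate the kind of outcome testing contemplated by item 2.4, the following minimal sketch screens model-supported decisions for disparities across groups. The data are hypothetical and the ratio test is a screening heuristic borrowed from other domains; a low ratio flags a result for further review and does not, by itself, establish unfair discrimination.

import pandas as pd

def favorable_rate_ratios(df, group_col, outcome_col):
    """Favorable-outcome rate of each group divided by the highest
    group's rate. A ratio well below 1.0 flags a disparity worth
    further review; it does not, by itself, establish unfair
    discrimination under the UIPA or any other standard.
    """
    rates = df.groupby(group_col)[outcome_col].mean()
    return rates / rates.max()

# Hypothetical model-supported underwriting decisions (1 = approved).
decisions = pd.DataFrame({
    "group":    ["A"] * 500 + ["B"] * 500,
    "approved": [1] * 450 + [0] * 50 + [1] * 360 + [0] * 140,
})
print(favorable_rate_ratios(decisions, "group", "approved"))
# A -> 1.00, B -> 0.80: group B's ratio would prompt a closer look at
# the model's inputs and outcomes before any conclusion is drawn.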

3.0. Risk Management and Internal Controls.

 The AIS program should document the insurer's risk identification, mitigation, and management framework and internal controls for AI systems generally and at each stage of the AI system life cycle. Risk management and internal controls should address the following items:

 3.1. The oversight and approval process for the development, adoption or acquisition of AI systems, as well as the identification of constraints and controls on automation and design to align and balance function with risk.

 3.2. Data practices and accountability procedures, including data currency, lineage, quality, integrity, bias analysis and minimization, and suitability.

 3.3. Management and oversight of predictive models (including algorithms used therein), including:

 a) Inventories and descriptions of the predictive models.

 b) Detailed documentation of the development and use of the predictive models.

 c) Assessments such as interpretability, repeatability, robustness, regular tuning, reproducibility, traceability, model drift and the auditability of these measurements where appropriate.

 3.4. Validating, testing and retesting as necessary to assess the generalization of AI system outputs upon implementation, including the suitability of the data used to develop, train, validate and audit the model. Validation can take the form of comparing model performance on unseen data available at the time of model development to the performance observed on data post-implementation, measuring performance against expert review or other methods. (A minimal illustrative sketch of such a comparison appears after item 3.7.)

 3.5. The protection of non-public information, particularly consumer information, including unauthorized access to the predictive models themselves.

 3.6. Data and record retention.

 3.7. Specifically with respect to predictive models: a narrative description of the model's intended goals and objectives and how the model is developed and validated to ensure that the AI systems that rely on such models correctly and efficiently predict or implement those goals and objectives.
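
 As a concrete illustration of the validation approach described in item 3.4, the following minimal sketch compares a model's performance on unseen data at development time with its performance on post-implementation data. The model, features and data are hypothetical stand-ins, not a prescribed methodology.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.normal(size=(5000, 4))  # stand-in rating inputs
y = (X @ [0.8, -0.5, 0.3, 0.0] + rng.normal(size=5000) > 0).astype(int)

X_train, X_holdout, y_train, y_holdout = train_test_split(
    X, y, test_size=0.3, random_state=0)
model = LogisticRegression().fit(X_train, y_train)

# Benchmark at development time: unseen holdout data.
dev_auc = roc_auc_score(y_holdout, model.predict_proba(X_holdout)[:, 1])

# Post-implementation data; a mild distribution shift is simulated here.
X_live = rng.normal(loc=0.2, size=(2000, 4))
y_live = (X_live @ [0.8, -0.5, 0.3, 0.0] + rng.normal(size=2000) > 0).astype(int)
live_auc = roc_auc_score(y_live, model.predict_proba(X_live)[:, 1])

# A sustained gap between the two suggests the model is not
# generalizing as expected and should be reviewed or retrained.
print(f"development AUC = {dev_auc:.3f}, post-implementation AUC = {live_auc:.3f}")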

4.0. Third Party AI Systems and Data.

 Each AIS program should address the insurer's process for acquiring, using or relying on: (i) third-party data to develop AI systems; and (ii) AI systems developed by a third party, which may include, as appropriate, the establishment of standards, policies, procedures and protocols relating to the following considerations:

 4.1. Due diligence and the methods, which may include human oversight, employed by the insurer to assess the third party and its data or AI systems acquired from the third party to ensure that decisions made or supported from such AI systems that could lead to adverse consumer outcomes will meet the legal standards imposed on the insurer itself.

 4.2. Where appropriate and available, the inclusion of terms in contracts with third parties that:

 a) Provide audit rights or entitle the insurer to receive audit reports, or both, such as System and Organization Control 2 reports or other generally accepted reports, performed by qualified auditing entities, which specifically encompass the AI in scope of the review.

 b) Require the third party to cooperate with the insurer with regard to regulatory inquiries and investigations related to the insurer's use of the third party's product or services.

 4.3. The performance of contractual rights regarding audits or other activities, or both, to confirm the third party's compliance with contractual and, where applicable, regulatory requirements.

SECTION 4: REGULATORY OVERSIGHT AND EXAMINATION CONSIDERATIONS

 The Department's regulatory oversight of insurers includes oversight of an insurer's conduct in this Commonwealth, including its use of AI systems to make or support decisions that impact consumers. Regardless of the existence or scope of a written AIS program, in the context of an investigation or market conduct action or at any time determined necessary by the Insurance Commissioner, an insurer can expect to be asked to respond to an inquiry or provide documentation, or both, pertaining to its development, deployment and use of AI systems, or any specific predictive model, AI system or application and its outcomes (including adverse consumer outcomes) from the use of those AI systems, as well as any other information or documentation deemed relevant by the Department.

 Insurers should expect those inquiries to include (but not be limited to) the insurer's governance framework, risk management and internal controls (including the considerations identified in Section 3). In addition to conducting a review of any of the items listed in this notice, the Department may also ask questions regarding any specific model, AI system or its application, including requests for the following types of information or documentation, or both:

1.0. Information and Documentation Relating to AI System Governance, Risk Management and Use Protocols.

 1.1. Information and documentation related to or evidencing the insurer's AIS program, including:

 a) The current written AIS program.

 b) Information and documentation relating to or evidencing the adoption of the AIS program.

 c) The scope of the insurer's AIS program, including any and all AI systems and technologies whether or not included in or addressed by the AIS program.

 d) The structure and mechanisms by which the AIS program is tailored to and proportionate with the insurer's use and reliance on AI systems, the risk of adverse consumer outcomes and the degree of potential harm to consumers.

 e) The policies, procedures, guidance, training materials and other information relating to the adoption, implementation, maintenance, monitoring and oversight of the insurer's AIS program, including:

 i. Processes and procedures for the development, adoption or acquisition of AI systems, such as:

 (1) Identification of constraints and controls on automation and design.

 (2) Data governance and controls, including practices related to data lineage, quality, integrity, bias analysis and minimization, suitability, and data currency.

 ii. Processes and procedures related to the management and oversight of predictive models, including measurements, standards, or thresholds adopted or used by the insurer in the development, validation, and oversight of models and AI systems.

 iii. Protection of nonpublic information, particularly consumer information, including unauthorized access to predictive models themselves.

 1.2. Information and documentation relating to the insurer's pre-acquisition/pre-use diligence, monitoring, oversight and auditing of data or AI systems developed by a third party.

 1.3. Information and documentation relating to or evidencing the insurer's implementation and compliance with its AIS program, including documents relating to the insurer's monitoring and audit activities respecting compliance, including:

 a) Documentation relating to or evidencing the formation and ongoing operation of the insurer's coordinating bodies for the development, use and oversight of AI systems.

 b) Documentation related to data practices and accountability procedures, including data lineage, quality, integrity, bias analysis and minimization, suitability and data currency.

 c) Management and oversight of predictive models and AI systems, including:

 i. The insurer's inventories and descriptions of predictive models, and AI systems used by the insurer to make or support decisions that can result in adverse consumer outcomes.

 ii. As to any specific predictive model or AI system that is the subject of investigation or examination:

 (1) Documentation of compliance with all applicable AIS program policies, protocols and procedures in the development, use and oversight of predictive models and AI systems deployed by the insurer.

 (2) Information about data used in the development and oversight of the specific model or AI system, including the data source, provenance, data lineage, quality, integrity, bias analysis and minimization, suitability and data currency.

 (3) Information related to the techniques, measurements, thresholds and similar controls used by the insurer.

 d) Documentation related to validation, testing and auditing, including evaluation of model drift to assess the reliability of outputs that influence the decisions made based on predictive models. Note that the nature of validation, testing and auditing should be reflective of the underlying components of the AI system, whether based on predictive models or generative AI.

2.0. Third Party AI Systems and Data.

 In addition, if the investigation or examination concerns data, predictive models or AI systems collected or developed in whole or in part by third parties, the insurer should also expect the Department to request the following additional types of information and documentation:

 2.1. Due diligence conducted on third parties and their data, models or AI systems.

 2.2. Contracts with third-party AI system, model or data vendors, including terms relating to representations, warranties, data security and privacy, data sourcing, intellectual property rights, confidentiality and disclosures, and/or cooperation with regulators.

 2.3. Audits or confirmation processes, or both, performed regarding third-party compliance with contractual and, where applicable, regulatory obligations.

 2.4. Documentation pertaining to validation, testing and auditing, including evaluation of model drift.

SECTION 5. CONCLUSION

 The Department recognizes that insurers may demonstrate compliance with the laws and regulations that govern their conduct in this Commonwealth in the use of AI systems through alternative means, including through practices that differ from those described in this notice. The goal of this notice is not to prescribe specific practices or specific documentation requirements. Rather, the goal is to ensure that insurers in this Commonwealth are aware of the Department's expectations regarding how AI systems should be governed and managed, of the kinds of information and documents about an insurer's AI systems that the Department expects an insurer to produce when requested, and of the best practices that should be employed when using AI systems.

 As in all cases, investigations, examinations and market conduct actions may be performed using procedures that vary in nature, extent and timing in accordance with regulatory judgment. Work performed may include inquiry, examination of company documentation or any of the continuum of market actions outlined in the procedures described in an applicable NAIC handbook. These activities may involve the use of contracted specialists with relevant subject matter expertise. Nothing in this notice limits the authority of the Department to conduct any regulatory investigation, examination or enforcement action that the Department is authorized to perform under Commonwealth law relative to any act or omission of any insurer.

MICHAEL HUMPHREYS, 
Insurance Commissioner

[Pa.B. Doc. No. 24-484. Filed for public inspection April 5, 2024, 9:00 a.m.]


