
A Look At Calif.'s New AI Law For Health Insurers

By Danny Tobey, Kathleen Birrane and David Kopans · 2024-10-15 18:59:36 -0400 ·

On Sept. 28, California enacted S.B. 1120, which regulates the use of artificial intelligence, algorithms and other software tools in the performance of utilization review and management functions by healthcare service plans or disability insurers and their contractors.

Referred to by its sponsor as the Physicians Make Decisions Act, S.B. 1120 prohibits AI tools from making medical necessity determinations for healthcare service plans or for disability insurers with respect to healthcare coverage.

While AI tools that meet the requirements of S.B. 1120 may be used in performing UM/UR functions, S.B. 1120 requires that the actual decision as to medical necessity be made by a qualified healthcare professional.

Specifically, S.B. 1120 provides that (1) only a licensed physician or healthcare professional may make a medical necessity determination; (2) AI tools may not deny, delay or modify healthcare services based, in whole or in part, on medical necessity; and (3) utilization review and management decisions, based in whole or in part on medical necessity, must be based on an enrollee or insured's own healthcare and medical record information, and not "solely on a group dataset."

S.B. 1120 also addresses additional standards for AI tools utilized in the performance of utilization review and management functions. S.B. 1120's amendments will become effective Jan. 1, 2025.

We take a closer look at S.B. 1120's requirements and its contextual significance below.

Overview of S.B. 1120's Amendments

S.B. 1120 amends those sections of the California Health and Safety Code and the Insurance Code that govern the performance of prospective, retrospective and concurrent utilization review and management, based in whole or in part on medical necessity, by healthcare service plans and disability insurers and their contractors.

For healthcare service plans, these laws, including the S.B. 1120 amendments, extend to delegation arrangements with medical groups, independent practice associations or other contracting providers.

As amended by S.B. 1120, California law now imposes both substantive and operational standards and restrictions on AI tools used in connection with utilization review and management functions. 

The amendments establish the following.

AI tools cannot be used to deny, delay or modify healthcare services where the decision is based, in whole or in part, on medical necessity. Only a licensed physician or healthcare professional may make medical necessity determinations. That physician or professional must be competent to evaluate the specific clinical issues involved in the requested healthcare services.

AI tools cannot supplant healthcare provider decision-making.

Determinations made by AI tools must be based on the enrollee or insured's specific healthcare information and records, rather than solely on group datasets. The amendments limit the permissible information and records to the following, as applicable:

  • An enrollee or insured's medical, or other, clinical history;

  • Individual clinical circumstances as presented by the requesting provider; and

  • Other relevant clinical information contained in the enrollee or insured's medical, or other, clinical record.

In addition, S.B. 1120 adopts standards for AI tools used in utilization review and management activities, including the following requirements:

  • The criteria and guidelines for using these technologies comply with applicable state and federal law;

  • The technologies do not discriminate, directly or indirectly, against enrollees or insureds in violation of state or federal law;

  • The technologies are "fairly and equitably applied," including in accordance with any applicable federal regulations and guidance;

  • Performance, use and outcomes are periodically reviewed and revised to maximize accuracy and reliability; and

  • The technologies must not directly or indirectly cause harm to the enrollee or insured.

The amendments also impose disclosure obligations and subject the technologies to requirements concerning inspection and review.

Analysis of S.B. 1120

The use of AI by health insurers to make or support coverage decisions has been the subject of recent federal and state regulatory focus, as well as litigation.

A core issue that has often been raised is the extent of the role that AI can or should play with respect to coverage decisions based on medical necessity that are generally required by law to be made by a qualified medical practitioner.

S.B. 1120 seeks to address that question for California. It specifically requires that the actual determination of medical necessity must be made by a qualified healthcare professional. Additionally, it prohibits the use of AI tools to "deny, delay, or modify healthcare services based, in whole or in part, on medical necessity."

The act is consistent with, but more explicit regarding medical necessity determinations than, the approach taken by the Centers for Medicare & Medicaid Services in its Feb. 6 memorandum addressing frequently asked questions regarding the 2024 final Medicare Advantage plan rules.

The FAQ affirms that AI tools "can be used to assist MA plans in making coverage determinations," subject to the requirement that the use of AI tools "complies with all applicable rules for how coverage determinations by MA organizations are made." S.B. 1120's requirements relating to the standards and oversight of the AI tools themselves are more closely aligned with the FAQ.

S.B. 1120 is part of the broader effort at both the federal and state levels to address the use of AI tools both generally and specifically in the insurance sector.

S.B. 1120 was part of a spate of AI legislation enacted by California in the 2024 session.

Seventeen AI-focused laws were enacted in California during this session, such as S.B. 942, imposing AI transparency requirements on certain covered persons; A.B. 3030, requiring disclosures by healthcare providers generating patient communications via generative AI; and A.B. 1008, amending the California Consumer Privacy Act of 2018 to establish that personal information may exist in a variety of formats, including abstract digital formats, such as AI systems capable of outputting personal information.

Nationally, the National Conference of State Legislatures has reported that the number of AI-related bills introduced in state legislatures is increasing rapidly. As of July 29, over 300 AI-related bills had been introduced in state legislatures, compared to 125 for the whole of 2023. Over 120 bills related to AI are pending in Congress.

Few of those bills are directed specifically at the use of AI in the insurance sector. Bills that do address insurance have primarily focused, like S.B. 1120, on imposing restraints on the use of AI tools to support utilization review and management functions; legislation introduced in Georgia, New York, Oklahoma and Pennsylvania takes the same approach.

Legislative restraint in the insurance sector reflects the guidance issued by state insurance regulators through the National Association of Insurance Commissioners, which has confirmed its view that state insurance regulators have the authority to regulate the use of AI in regulated conduct.

Adopted unanimously by NAIC members in December 2023, a model bulletin titled "Use of Algorithms, Predictive Models, and Artificial Intelligence Systems by Insurers" characterizes AI tools as a method that insurers may use to perform a regulated activity, stressing that regulated activity must meet regulatory requirements regardless of the methodology used in the activity, including those laws prohibiting unfair discrimination.

The NAIC model bulletin recognizes, however, that AI tools can present unique risks and thus requires insurers to develop a written program governing their use of AI, focusing on governance, risk assessment and validation processes.

The bulletin has been adopted in 17 states. Its adoption is pending in other states, while some states' insurance regulators have adopted state-specific guidance.[1]

While it is a narrowly focused AI law, S.B. 1120 shares common elements with other legislative and agency efforts to regulate AI that insurers, their vendors and all industries must continue to consider as part of their development and deployment of AI-related technologies.

These elements include establishing AI governance programs that provide risk management and other safeguards to mitigate risks presented by AI systems. In particular, AI laws and guidance often focus on mitigating bias and discrimination as well as ensuring the privacy and security of personal information.

Whether in the insurance industry or other industries subject to these AI laws and guidance, companies will want to ensure that they establish effective governance programs and product designs that meet the requirements of the increasing patchwork of overlapping regulatory frameworks across the country at both the federal and state levels.

This includes a focus on data governance, assessments, and robust data privacy and cybersecurity measures.



Danny Tobey is a partner and the chair of the Americas AI and data analytics practice at DLA Piper.

Kathleen Birrane is a partner and the leader of the U.S. insurance regulatory practice at the firm.

David Kopans is a partner at the firm.

The opinions expressed are those of the author(s) and do not necessarily reflect the views of their employer, its clients, or Portfolio Media Inc., or any of its or their respective affiliates. This article is for general information purposes and is not intended to be and should not be taken as legal advice.


[1] See, e.g., the New York State Department of Financial Services' Insurance Circular Letter No. 7 (July 11, 2024).

For a reprint of this article, please contact reprints@law360.com.