Regulation of AI in Prior Authorization and Claims Review: A Look at Federal and State Consumer Protections

Federal Regulation and Oversight of AI

Few federal standards apply specifically to the use of AI in the prior authorization and claims review process, but coverage decision-making in both public and private coverage is subject to general standards intended to ensure that reviews are fair, substantive, and timely. These standards are fragmented across federal agencies with separate oversight responsibilities for different health coverage markets.

For private employer-sponsored plans, the federal government, through the U.S. Department of Labor (DOL), oversees claims and appeals process requirements under the Employee Retirement Income Security Act (ERISA). ERISA generally exempts self-insured plans established by private employers from most state insurance laws, including claims review protections, and would likely preempt state AI laws that relate to the claims review process. Most workers with employer-sponsored insurance are in a self-funded plan, meaning that many consumers are not guaranteed state protections related to the use of AI in claims review, even where such protections exist.

These ERISA claims and appeals rules were the basis for reforms applied across all private health coverage in the Affordable Care Act. These reforms established a federal floor of protections for the internal claims and appeals process for those with Marketplace and off-Marketplace private insurance and added an option for all consumers with private coverage to appeal denied claims through an “external review” by an entity independent of the plan.

ERISA requires all employer plan sponsors to ensure the “full and fair” review of all health claims. What “full and fair” means in the context of AI tools in the claims process has yet to be interpreted through guidance or updated regulation. ERISA also contains “fiduciary” rules requiring employers and other fiduciaries to act in the best interest of plan enrollees and to monitor vendors’ activities. While these standards might provide some protection to employees related to an employer plan’s use of AI, in practice, fiduciary standards have rarely been applied to employer health plans, and to date, enrollees have not succeeded in litigation challenging employers for breaching their fiduciary duties related to the health plans they sponsor.

Still, one recent DOL case against a large third-party administrator (TPA) alleged a fiduciary violation and a failure to follow ERISA claims rules when the TPA automatically denied claims in bulk without making an individual medical necessity evaluation for each claim under the terms of the plan. While these allegations did not necessarily involve AI, the TPA allegedly used an automated process without human review to issue denials. The case was settled with the establishment of a fund to compensate enrollees for improperly denied claims.

Federal guidance specific to AI use in prior authorization and claims review in Medicare and Medicaid has been limited. Both programs have their own claims and appeal consumer protections under federal requirements (and some state standards also apply to Medicaid).

Medicare. Medicare Advantage regulations issued in 2023, along with additional 2024 guidance, clarify that Medicare Advantage organizations cannot make medical necessity decisions using an algorithm or software that does not consider individual circumstances. Denials based on medical necessity must be reviewed by a health care professional. Regulations proposed in 2024 that addressed bias and discrimination in the use of AI by Medicare Advantage plans were not finalized by the Trump administration. Additionally, the federal government is testing the use of AI to make certain prior authorization decisions for specific services in traditional Medicare through its Wasteful and Inappropriate Services Reduction (WISeR) Model, contracting with AI technology companies to administer this pilot program in six states.

Medicaid. Current Medicaid regulations do not directly address the use of automation in prior authorization. Medicaid managed care regulations require that any managed care organization (MCO) decision to deny services be made by “an individual” with appropriate expertise, but do not explicitly address AI use. Through state managed care contracts (which are reviewed and approved by CMS), states can set requirements for plan performance and reporting, such as requiring plans to disclose the use of AI in prior authorization processes. The Medicaid and CHIP Payment and Access Commission (MACPAC) has recently issued draft recommendations on the use of automation in Medicaid prior authorization.

State AI Consumer Protections in Prior Authorization and Claims Review

In recent years, some states have advanced laws and regulations aimed at protecting consumers from possible harm stemming from algorithmic decision-making systems, such as privacy breaches, inaccuracies, and bias. AI-related legislation continues to be debated in almost every state legislature, with some efforts garnering bipartisan support. Some states have issued regulations and other guidance under existing laws instead of or in addition to new state laws.

State laws specify new and existing AI consumer protections. Some state laws contain wide-ranging protections meant to cut across different sectors of the economy and apply to a broad range of entities, such as developers and those who deploy or use the technology for business purposes. Other state laws are specific to industry sectors (e.g., health care), topics (e.g., employment, civil rights, education), or uses, such as utilization review in health insurance.

Broad state laws include those that prohibit unfair or deceptive acts and practices. All 50 states have broad consumer protection laws that prohibit unfair or deceptive acts or practices. These laws are enforced by state attorneys general, and sometimes also allow a consumer to sue directly for a violation of the law (a “private right of action”) instead of relying on the state alone to enforce it. Colorado and Utah are examples of states that have amended their consumer protection laws to provide for general AI consumer protections.

Depending on the specific state law, these broader consumer protection laws might be used to address consumer harm resulting from the use of AI in prior authorization and claims review. Additionally, a growing number of states have updated longstanding state health insurance standards for managed care related to utilization review to clarify how these rules apply to AI (Figure 1). Almost all of the laws are focused on the decision-making process of utilization review, sometimes defined under state rules as individualized decisions about whether a given service is medically necessary based on the patient’s individual clinical circumstances. These laws do not necessarily include administrative claim review decisions that do not involve a medical necessity determination, such as whether a claim is for care that is excluded under the plan.

Each state law related to the use of AI in prior authorization and/or claim review has its own unique requirements, but major themes include:

  • Human review of claim denials required. Some state laws include a provision that only a licensed health care provider may issue adverse determinations (denials) and that AI cannot be used as the sole decision-maker. For example, Illinois law requires that only a “clinical peer” make an adverse determination based on medical necessity and does not allow the sole use of an “algorithmic automated process” to make these decisions.
  • AI tools must take individual clinical circumstances into account. A few of these states require that any AI tool used for utilization review base its determination on an enrollee’s unique clinical history. Alabama, for instance, mandates that insurers who use artificial intelligence to make prior authorization determinations base these decisions on an enrollee’s clinical history and clinical circumstances.
  • Disclosure of AI use. A few of these states, such as Utah, require entities that use AI to conduct utilization review to disclose its use to the public, the state department of insurance, health care providers in their network, and each enrollee.
  • Review of AI tool outcomes. Some state laws also require entities that perform utilization review to periodically review performance and outcomes of AI tools they use in order to check accuracy and reliability. California law requires that an AI tool be periodically assessed and revised to ensure maximum accuracy and reliability.
  • Limits on the use of patient data to protect privacy. Several of these state laws include language that prohibits those conducting utilization review from using patient data beyond its intended purpose and contrary to HIPAA or state law confidentiality protections. Maryland law is one example.
  • AI tools must be open to inspection, including the underlying algorithms. Some of these laws mandate that AI tools for utilization review be open to audit by regulators. In Texas, for example, the insurance commissioner may audit and inspect a utilization review agent’s use of an automated decision system at any time.
  • AI protections against bias and discrimination. A few state laws, such as Washington’s, require that AI tools be applied “fairly and equitably” and cannot result in discrimination, either directly or indirectly, against an enrollee.

New state guidance aims to exercise state authority to regulate AI use. Some states have issued guidance to make clear how existing state legal protections apply to AI. For example, in 2024, the Massachusetts Attorney General released a public Advisory explaining how the state’s existing consumer protection, civil rights, and data privacy laws apply to developers, suppliers, and users of AI, and how they could impact consumers in Massachusetts.

Insurance regulators in some other states have taken a similar approach, issuing new guidance to clarify how existing state law applies to AI and to provide insurers with more specific information about their obligations. As of early April 2026, at least 25 states have issued guidance based on a model bulletin adopted in 2023 by the National Association of Insurance Commissioners (NAIC). The model bulletin applies to all types of state-regulated insurance (not just health insurance) and addresses the use of AI across the entire insurance life cycle, including claims administration and payment, fraud detection, product development, and rating and pricing. It establishes the expectation that consumer-facing decisions made or supported by AI systems comply with existing insurance laws and regulations, including protections against unfair trade practices and illegal discrimination. It also instructs insurers to adopt policies and procedures specifying how AI is used and to implement controls to mitigate the risk of adverse outcomes. Finally, it specifies that insurance oversight includes the ability of regulators to inquire about the development, deployment, use, and outcomes of any AI system or predictive model used by insurers or their third-party vendors, as well as to request information about system validation, testing, and ongoing audits of AI systems.
