
Can AI Replace Doctors? Arizona Says No, For Now

By Olivia Cristante


As we begin 2026, it is safe to assume that artificial intelligence (AI) will play an ever-larger role in the U.S. economy and may even displace human workers in many industries—including the health care field. Despite public outcry against health systems’ unbounded use of AI, states have been slow to regulate its control over Americans’ access to medically necessary care. But just last year, Arizona instilled a sense of hope in its concerned residents by enacting House Bill 2175 (H.B. 2175), affirming that an AI algorithm will not have the final say on medical decisions.

 

The Growing Demand for AI Regulation in Health Care 

Major U.S. health systems are progressively integrating AI into their workflows in remarkable ways. AI-driven robots are helping surgeons perform minimally invasive procedures; AI-based transcription tools are seamlessly converting patient visits into detailed clinical notes; and AI algorithms are reading radiology scans and detecting subtle abnormalities. But when health systems fully empower AI to make critical medical determinations and consequently reduce patient access to necessary treatment, have they gone too far?

 

AI’s expansion into health care has generated mixed reactions among providers and patients. Patients’ opinions on AI’s presence in health systems vary based on how significant a role an AI system plays in their care. For example, in a 2025 study of seventeen adult patients, participants reviewed five scenarios in which AI might be used in their medical care: in portal messaging, in radiology review, as a digital scribe, as a virtual human, or for digital support. Patients reported that they were far more comfortable with an AI-based digital scribe that merely acts as a provider’s note taker than with a virtual human that entirely replaces a provider.

 

Similarly, while medical professionals generally accept AI within their systems as a tool to streamline administrative tasks and aid in diagnostics, most insist that human clinicians must remain the ultimate decision makers on patients’ care. They stress that transparency with patients about AI’s limited role is essential to a successful provider-patient relationship—the cornerstone of effective care. Hence, if AI is ever to truly propel health systems in a positive direction, patients must be assured that behind every AI-based decision stands a trained medical professional giving her nod of approval.

 

One of the most prominent concerns about AI’s role in health systems lies in AI-based health insurance claim decisions. In early 2025, an American Medical Association (AMA) survey showed that “49% of physicians ranked oversight of payers’ use of AI in medical necessity determinations among the top three priorities for regulatory action.” These concerns are grounded in fact: AI tools are associated with higher insurance claim denial rates—sometimes sixteen times higher than usual. One explanation is that AI tools operate under algorithms that do not align with national and local coverage standards.

 

Until the passage of H.B. 2175, state and federal legislatures had largely failed to keep pace with AI’s increasing presence within the health care industry. The notable exception was California, whose 2024 “Physicians Make Decisions Act” requires human physicians to affirm AI-based medical decisions. In response to this lack of uniform regulation, the AMA published a set of principles to guide health systems’ implementation of AI, repeatedly noting the need for statutory requirements addressing AI in automated insurance claim processing.

 

H.B. 2175: Arizona’s Solution to Unregulated AI Use in Health Care

On May 12, 2025, Arizona Governor Katie Hobbs signed H.B. 2175 into law with nearly unanimous support from the state legislature. The law requires a human clinician to review any insurance claim or prior authorization denial that rests on an AI algorithm’s “medical necessity” determination. Specifically, H.B. 2175 provides that a “medical director” must exercise their own “independent medical judgment and may not rely solely on recommendations from any other source” in reviewing denials before the decision is finalized and communicated to the affected patient. Although H.B. 2175 will not take effect until July 1, 2026, it is expected to ensure that providers—equipped with decades of training and bound by ethical duties—retain their critical role in determining medical necessity by weighing each patient’s individual diagnosis and condition alongside insurance guidelines.

 

H.B. 2175 is, in theory, a critical step toward increasing patient protection against unmonitored AI algorithms. However, its efficacy rests on one major premise: that reviewing clinicians exercise their independent medical judgment rather than rubber-stamping an AI-based decision. Many fear that these clinicians simply lack the time, expertise, or incentives to truly set aside an automated decision and reach the same, or possibly a different, conclusion using their medical training. While time will determine the validity of this concern, it raises the question of whether H.B. 2175 will actually deliver on the vision it has projected to Arizonans.

 

Looking Ahead: Protecting Health Care Consumers Against AI Through Legislative Action

H.B. 2175 was nearly one of a kind at its inception, but today it stands among laws in states including Pennsylvania, Connecticut, Nebraska, and Texas that similarly aim to regulate AI in the medical community while preserving patient trust in the U.S. health care system. Because laws of this kind are drawing support from both Democratic and Republican legislators, their prominence seems poised to grow through 2026 and beyond.

 

Meanwhile, however, the federal government continues to push for increased AI use in insurance claim decisions. On January 1, 2026, the Centers for Medicare & Medicaid Services (CMS) rolled out its newest pilot program—the Wasteful and Inappropriate Service Reduction Model (WISeR)—in six states, including Arizona. Under WISeR, private technology companies contracted with CMS apply their proprietary AI models to claim reviews for certain Medicare Part B services. Unsurprisingly, CMS’s announcement of WISeR was met with harsh criticism from the AMA, the American Hospital Association, and members of Congress. State laws like H.B. 2175 reflect states’ pushback against federal programs promoting unmonitored AI use in health care, but whether they impose sufficient parameters on such programs remains to be determined.

To be sure, advocates of federal and state legislation addressing AI’s control within the health care industry are not necessarily seeking to abolish the technology entirely. The ultimate goal of proposals from entities like the AMA is to regulate how AI operates in health systems and to avoid destroying the deeply personal nature of provider-patient relationships. Many endorse AI’s efficiency-enhancing uses—like transcribing clinical notes and assisting in complex diagnoses and surgical operations—while simultaneously suggesting necessary guardrails on potentially harmful uses—like making insurance claim decisions on medically necessary services. While the Arizona Medical Association, Arizona residents, elected officials, and health care advocates rightly celebrate the passage of H.B. 2175, they should also recognize that it must be only the first of many Arizona regulations geared toward AI in health care. AI might be here to stay, but to preserve effective and ethical medical treatment, the final say must always remain in the hands of human physicians.

"Doctor Patient" by Direct Media is marked with CC0 1.

By Olivia Cristante

J.D. Candidate, 2027

Olivia Cristante is a 2L at ASU Law. Olivia was born and raised in Cave Creek, Arizona. She obtained her undergraduate degree in economics from the University of San Diego. Olivia plans to use her law degree to pursue a career in litigation with a focus on health care regulation and medical malpractice. Olivia’s interest in health care law stems from her experience working at a hospital prior to law school and having many medical professionals in her family. Outside of law school, Olivia enjoys spending time outdoors with her two yellow labs, playing pickleball with friends, and traveling.