Utah has allowed AI to prescribe medication



The case for AI prescription renewal is real. So is the case against relying on a state sandbox to capture risks.

In January, researchers at a security firm called Mindgard sat down with a chatbot recently built by Doctronic, a health technology startup and the first company in American history to obtain state approval to autonomously renew medical prescriptions using artificial intelligence.

Mindgard’s researchers fed the AI a made-up regulatory bulletin and watched what happened. Convinced by a document that does not exist, the system told them it would triple the standard prescribed dose of OxyContin.

Doctronic and Utah’s Office of Artificial Intelligence Policy clarified that the chatbot that responded was Doctronic’s public-facing tool, not the hardened system that drives the actual prescription pilot. That distinction is real and worth taking seriously.

But that distinction doesn’t address the deeper question the exchange raises, which is not whether this particular system is vulnerable, but whether a 12-month government sandbox run by a commerce department with a mandate to promote AI innovation is the right mechanism to answer it.

Start with what is actually true about the problem Utah is trying to solve. For too many Americans, prescription renewal is a bureaucratic hurdle that serves no clinical purpose. About half of people with chronic diseases do not take their medications as prescribed. Making health care accessible and preventive rather than reactive is one of the broader challenges the tech industry has struggled with for years.

A significant portion of that nonadherence is directly attributable to the renewal process: a two-week wait for a primary care appointment becomes a missed call from the doctor’s office, which means starting over. Matt Pavelle, co-founder of Doctronic, puts the figure at about 30% of all nonadherence.

That’s a big number attached to a specific and solvable problem. Medication nonadherence costs the American health care system between $100 billion and $300 billion annually and is associated with approximately 125,000 preventable deaths each year, depending on which set of studies you consult. Those are not numbers supplied by the startup. They come from peer-reviewed literature and the CDC.

The access case for AI prescription renewal, then, is not trivial. It is strongest where the care system is thinnest: rural areas, low-income patients, older Americans who struggle to attend in-person appointments.

Vascular surgeon Adam Oskowitz, co-founder of Doctronic, put it plainly in January: patients wait weeks for an appointment to renew a prescription for a drug they’ve been taking for years, for a condition that hasn’t changed. That wait is not a feature of the system. It is a failure. If AI can fix that failure safely, it should.

The problem is that word: safely. Doctronic’s safety benchmark is that its AI matched human clinicians’ treatment plans 99.2% of the time across 500 emergency cases. The company shared those numbers with Utah regulators, who found them persuasive.

But 500 cases is a small sample for a system that will eventually process prescriptions at scale. At any meaningful volume, the divergent 0.8% represents a significant number of patients receiving something other than what a clinician would have recommended.

More fundamentally, being consistent with what a clinician recommends in a structured assessment is not the same as being robust to the full range of real-world inputs, including adversarial ones.

The Mindgard test was not a stress test of the live system; it was a demonstration that the company’s publicly available AI could be manipulated with a fictitious press release. That the live system is different is reassuring. It is not definitive.

What makes the Utah arrangement particularly worth scrutinizing is the regulatory mechanism behind it. The state’s Office of Artificial Intelligence Policy, established in 2024, can waive the state’s own unprofessional-conduct rules for companies admitted to its regulatory sandbox. It did exactly that for Doctronic.

The three-phase pilot begins with physician review of every renewal, which sounds rigorous. The third, operational phase involves physician review of only five to ten percent of renewals. The rest proceed autonomously. STAT News has raised the question of whether an artificial intelligence system that evaluates clinical information and issues prescriptions should be regulated by the FDA as a medical device.

That question remains unanswered. Utah does not have the authority to answer it, and its agreement with Doctronic does not require FDA clearance before the system scales.

The American Medical Association and the Utah Academy of Family Physicians have both issued formal objections. Dr. John Whyte, chief executive of the AMA, said in a statement that removing doctors from clinical decisions puts patients at risk. The Utah Academy said the program showed a willingness to move forward with artificial intelligence without the necessary guardrails.

These are medical groups, and medical groups are not always disinterested observers when it comes to artificial intelligence, which could reduce demand for their services. But the concern about guardrails can be separated from guild interests. A state commerce department has different incentives than a regulator whose primary mandate is patient safety.

Utah’s OAIP is explicitly charged with promoting the adoption of AI. That is fine as a policy objective. It should not be the primary lens through which the safety of a prescribing system is evaluated. The WHO warned in 2021 that existing policies and regulations are insufficient to protect patients from AI in health care. Four years later, that gap has not closed.

None of this means the Doctronic pilot is wrong. It may well prove both valuable and safe. The phased approach, monthly reporting requirements, exclusion of controlled substances and injectables, and malpractice insurance that holds the AI to a physician standard: these are serious design choices, not window dressing.

If the program runs its 12 months and the data come back clean, that evidence will matter for every state weighing whether to follow.

But evidence is exactly the point. The question is not whether AI can help with prescription renewal. It probably can. The question is who is responsible for producing the evidence that tells us. A state commerce office running a 12-month pilot with a startup founded in 2023 is not obviously that institution.

The FDA exists precisely because the history of American medicine is full of innovations that seemed beneficial until they weren’t.

Thalidomide never made it to the U.S. market not because a startup pilot showed troubling results, but because Frances Kelsey at the FDA demanded evidence of safety before approval: exactly the kind of evidence a sandbox program is not designed to produce.

Patients who wait weeks for a prescription refill deserve a better system. They also deserve to know that the AI renewing their prescriptions has been vetted by someone whose primary mandate is safety, not innovation.


