Radical changes to the healthcare system have been announced. An unprecedented pilot project in medical practice has been launched in the US state of Utah, and the move has already split the medical community into two camps.
According to Milli.Az, shortly before the rollout of a healthcare-adapted version of ChatGPT, Utah officially approved the renewal of prescriptions for 190 types of drugs used for chronic diseases through artificial intelligence. This is considered the first experiment of its kind in the USA.
The pilot project is being implemented in cooperation with the healthcare technology company "Doctronic." Its main goals are to speed up access to healthcare services, reduce medical costs, and ease the workload of already overburdened doctors and other healthcare workers.
Currently, a nominal fee of 4 dollars is charged for using the system to renew a prescription. In the future, the service is planned to be included in insurance packages or offered through an annual subscription model. However, not all drugs are covered by the system: strong painkillers carrying a risk of dependency and drugs for attention deficit hyperactivity disorder (ADHD) have been excluded from the list.
According to the company, prescription decisions made by the artificial intelligence agree with those of real doctors 99.2 percent of the time. Despite this high figure, specialists point out that even the smallest mistake in medicine can lead to serious consequences, which demands a cautious approach.
Risks and ethical questions
Although the use of artificial intelligence in healthcare is not new, granting it the authority to write prescriptions, a vitally important function, has sparked serious debate. According to current statistics, about 46 percent of nurses in the USA already use artificial intelligence tools in their daily or weekly work.
However, experts warn that such systems may sometimes:
- overlook dangerous interactions between drugs;
- fail to properly evaluate critical changes in a patient's condition.
Moreover, several scientific studies show that medical artificial intelligence tools underestimate women's complaints and can produce biased results with respect to race and ethnicity. In particular, the lower risk scores algorithms assign to Black patients can prevent them from receiving timely and adequate medical care.
From a legal standpoint, the situation is more complicated. Although the rules of medical practice in the USA fall mainly under state jurisdiction, AI-based medical systems are regulated at the federal level by the Food and Drug Administration (FDA).
If the pilot project in Utah proves successful, it is expected to be extended soon to other states, including Texas, Arizona, and Missouri.